1. Introduction
Artificial Intelligence (AI) represents a potential revolution in all spheres of society, the effects of which must be addressed scientifically: from computational science [1] to ethics [2], to medicine [3], to work [4], to economics and management [5], and even in the field of education [6]. A clear example of the wide range of risks and opportunities of introducing AI in a complex context of social and environmental sustainability is the management of water: from exploring better risk management for the sustainable use of water [7] or constructing smart cities [8] to the high risk represented by the enormous amount of water used for its production [9].
Digital innovations are a key pillar of our society’s identity, particularly with the introduction of artificial intelligence into everyday life.
The risks and opportunities of AI in all fields of society need to be better understood, particularly in terms of equity and social sustainability. In this sense, social science—and particularly sociology—has the responsibility to explore and minimize the impact of AI on the reproduction of inequalities and disparities in our society [10]. From a sociological perspective, the role of new technologies is of crucial interest in the formation and development of both individual and community identity, focusing on the urban–rural dichotomy [11,12]. These technologies, by enabling new forms of social interaction, communication, and access to information, influence not only how individuals perceive themselves but also how communities are formed and evolve. In this context, digital platforms and AI-driven tools play a crucial role in mediating social relationships and shaping collective narratives, profoundly impacting the dynamics of belonging and social participation [13].
In the field of education, the impact of chatbots has been specifically evaluated [14,15,16,17]. It is important to consider that while applications like AI-based language models may offer significant benefits, it is also crucial to be aware of the differences in the type of social support provided through online chat services, as this can influence how users, in this case students, perceive service quality and well-being [18]. AI tools are transforming education by enabling personalized learning through machine learning algorithms (such as Carnegie Learning and Knewton). However, these systems can also inadvertently reinforce existing societal biases, such as gender biases in tutoring, or reflect historical inequalities in learning outcomes. Similarly, tools like speech-to-text software, smart glasses, and predictive text systems (for example, Google’s Live Transcribe) improve accessibility for students with disabilities, enabling better engagement in learning. In lower-resource settings, services like Squirrel AI and Duolingo help bridge educational gaps by offering personalized learning experiences. Additionally, platforms cater to students from diverse family structures, including non-traditional environments, by providing flexible learning options. AI also plays a role in the college admissions process, where tools may assist minority and low-income students in gaining access to higher education, though concerns about bias remain. Specialized apps, such as Autism Expressed, are designed to support students with Autism Spectrum Disorder (ASD), helping them improve communication and social skills. Moreover, AI-driven grading tools aim to automate and streamline assessment, yet these systems require continuous refinement to ensure fairness and equity in educational evaluation.
Ethical and privacy concerns related to the use of chatbots in educational environments should also be considered in this context [19], and technological limitations, novelty effects, and cognitive load have been pointed out as challenges in education [20], as have issues of reliability [11,12]. All these elements can contribute to the deepening of social inequalities, whether due to difficulties in accessing these technological tools or to their quality, with certain groups relying on less precise or equitable solutions.
Sociology must increasingly prioritize the study of new technologies and their impact, especially as they reshape fundamental aspects of human interaction and identity. While education research has started to address how technology influences learning and access, sociology needs to investigate more deeply how these innovations transform social structures, shape personal identities, and alter community dynamics. Despite growing interest, this area remains largely unexplored, underscoring the need for comprehensive studies that examine the broader social implications of technology [21].
A sociological perspective enables an analysis not only of technologies themselves but also of how they interact with social structures, reinforcing or challenging existing power dynamics. Various theoretical frameworks—such as Critical Theory, Pierre Bourdieu’s concept of cultural capital, digital divide theories, and Actor-Network Theory (ANT)—offer valuable tools for critically assessing the introduction of AI in education and its broader societal implications. These approaches allow for a nuanced understanding of how AI technologies may shape educational experiences, influence social hierarchies, and impact issues of equity and access. From Critical Theory, which invites ethical reflection on the implementation of AI [22,23,24,25,26], to Bourdieu’s framework of cultural capital, which explains disparities in access to and use of technology [27,28], and ANT, which highlights the complex interactions between human and non-human actors [27,28], research in this area continues to evolve.
Critical Theory is particularly useful for analyzing how AI may reproduce existing power dynamics in education [22,24]. The commodification of education through AI, where learning becomes increasingly mechanized, risks alienating marginalized students by disconnecting them from the human aspects of the educational experience [23,25]. Similarly, Bourdieu’s theory of cultural capital explains how students from different social backgrounds interact with AI-based educational tools, which value certain knowledge and skills in ways that could perpetuate generational inequalities [26,29]. Theories on the digital divide are also relevant in this context, as they address both disparities in access to digital technologies (first-level digital divide) and the unequal ability to effectively use these technologies (second-level digital divide) [26].
Meanwhile, Actor-Network Theory (ANT) offers a broader perspective by positioning AI not merely as a tool, but as an active participant within the network of educational actors, including students, teachers, administrators, and policymakers. ANT suggests that AI reshapes relationships, roles, and power dynamics within this network. By examining how different actors interact with AI systems, a deeper understanding can be gained of how educational practices and outcomes are reconfigured, and how inequalities are negotiated or reinforced [27,28].
The evolution of sociology in its approach to technology also reflects a significant shift. While sociological studies from the 1980s and 1990s primarily viewed AI as a field of scientific knowledge, more recent studies tend to focus on its practical applications and how they affect everyday life [25]. This shift implies an expansion in the way AI is understood, now seen not only as a matter of academic research but as a technology with a direct impact on social, cultural, and economic systems. For instance, the increasing commodification of AI in education raises concerns about its role in the reproduction of inequalities [10,30].
Moreover, recent sociological research examines how people interact with AI technologies and how these technologies shape social behaviors [14,31]. An analysis from the perspective of the sociology of technology allows for the observation not only of how scientists develop AI but also of how the applications of this technology can transform or reinforce power structures. AI can act as an agent of both transformation and perpetuation of inequalities, depending on how it is implemented and regulated within society [10,32].
The literature in the social sciences clearly reflects this dichotomy: as these technologies become increasingly integrated into educational environments, innovative ways to enhance learning outcomes coexist with the risk of exacerbating existing inequalities, particularly for vulnerable groups who face barriers such as lack of access to digital tools and low levels of digital literacy.
For this reason, this research explores the sociological perspective on the introduction of AI as an educational tool, the risks associated with digital illiteracy, and the impact on vulnerable populations, through a bibliometric analysis of studies in the field that address this topic. The goal is to answer questions about the type of research that has been conducted in sociology regarding the impact of educational AI on vulnerable populations, the widening of the digital divide, and the technological exclusion of certain social groups.
By analyzing these topics through the Web of Science Core Collection, extracting 1515 relevant studies, we can better understand how AI-driven educational practices contribute both to the mitigation and to the amplification of social exclusion, with the ultimate aim of fostering more equitable learning environments.
2. Methods
This study uses science mapping analysis to explore both the structural and dynamic aspects of scientific research, providing insights into the cognitive framework of the academic field [33]. SciMAT (released under GPLv3) was chosen among bibliometric tools for its capacity to provide a comprehensive analysis of the scientific literature, offering features such as longitudinal mapping and thematic evolution analysis that align with the study’s objectives. These functions enhance the understanding of trends and developments within a specific field over time, allowing for a richer, more nuanced view of how research evolves. By incorporating these features, we can identify and visualize conceptual subdomains within the field and monitor its thematic evolution over time [33,34].
The bibliometric analysis was conducted using the SciMAT software [33], which relies on the H-index [35] and co-word analysis [36]. The methodology follows a four-phase process:
“Detection of research topics”: In this initial phase, a co-word network is created using keywords extracted from the articles. A clustering algorithm [36] is then applied. This phase helps identify and visualize the conceptual subdomains of a research field and track its thematic evolution.
“Visualization of research topics and thematic network”: The second phase involves representing the identified topics through diagrams and thematic networks, with centrality (the relevance of a topic, calculated as c = 10 × Σ e_kh) and density (internal cohesion of the network) as key dimensions.
“Identification of thematic areas”: In the third phase, the analysis focuses on the frequency of occurrences and thematic significance by examining the overlaps in the clusters.
“Performance analysis”: In the final phase, the association between keywords and thematic areas is examined using bibliometric indicators such as the number of publications, citations, and variations of the H-index [35]. The similarity between keywords is calculated using a word co-occurrence matrix and an equivalence index function [36].
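The equivalence index function itself is not spelled out above; the formulation commonly used in co-word analysis (Callon's equivalence index, e_ij = c_ij² / (c_i · c_j)) can be sketched as follows. The function name and data layout are illustrative and do not reflect SciMAT's internal API:

```python
from collections import Counter
from itertools import combinations

def equivalence_index(documents):
    """Compute Callon's equivalence index e_ij = c_ij^2 / (c_i * c_j)
    for every pair of keywords, where c_ij counts the documents in which
    keywords i and j co-occur and c_i, c_j are document frequencies.

    `documents` is a list of per-document keyword lists."""
    freq = Counter()   # c_i: documents containing keyword i
    cooc = Counter()   # c_ij: documents containing both i and j
    for keywords in documents:
        kws = sorted(set(keywords))
        freq.update(kws)
        cooc.update(combinations(kws, 2))  # pairs emitted in sorted order
    return {(i, j): c ** 2 / (freq[i] * freq[j])
            for (i, j), c in cooc.items()}
```

Pairs that co-occur in most of the documents where either keyword appears score close to 1; incidental co-occurrences score close to 0, which is what makes the index suitable for normalizing the co-word network before clustering.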
The following outlines the key steps involved in the bibliometric analysis conducted in this study using the SciMAT tool. After selecting the relevant time periods, the next step is to choose the analysis units (such as periods, documents, references, authors, or terms). Data reduction techniques are applied to manage the massive and heterogeneous datasets by reducing millions of variables to a manageable scale [37]. Co-occurrence analysis follows, filtering the network using a minimum link weight threshold to highlight the most significant links [38]. The process also includes network analysis, reduction, normalization, grouping, document assignment, quality assessment, longitudinal analysis, visualization, and interpretation.
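The minimum-link-weight filtering step can be illustrated with a minimal sketch; the threshold value here is arbitrary, chosen only for illustration:

```python
def filter_network(edges, min_weight=2):
    """Keep only co-occurrence links at or above a minimum weight, so
    that the network retains only the most significant associations.
    `edges` maps (keyword_a, keyword_b) pairs to co-occurrence counts."""
    return {pair: w for pair, w in edges.items() if w >= min_weight}
```

For example, `filter_network({("education", "diversity"): 5, ("education", "risk"): 1})` drops the single-occurrence link and keeps the recurring one.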
Following data export, the focus shifts to data preprocessing. This stage involves cleaning the data to remove duplicates and irrelevant entries, ensuring the dataset is focused and reliable. Normalization is also crucial, as it standardizes author names, journal titles, and keywords to eliminate inconsistencies that could affect the analysis. Researchers may apply filtering criteria to further refine the dataset, such as limiting the scope to publications within a specific date range or quality metrics.
After preprocessing, the cleaned and normalized dataset is imported into SciMAT, which facilitates various analytical processes. One key feature is longitudinal mapping, which allows researchers to create visual representations of the research landscape over time. This involves mapping publications across different years to illustrate trends, such as the growth of specific topics or research areas. Additionally, thematic evolution analysis is conducted to identify key themes within the literature, clustering similar publications and revealing dominant research themes while assessing how these themes have evolved over time.
The next phase involves the interpretation of results, where visual outputs generated by SciMAT are used to present findings clearly, including graphs and thematic maps. This analysis is followed by a thorough discussion of the implications of the findings in relation to the initial research questions, highlighting contributions to the existing body of knowledge and identifying gaps or opportunities for future research.
Finally, it is crucial to document the methodology comprehensively, detailing each step of the process from data collection to interpretation. This not only enhances the credibility of the research but also aids other scholars in replicating the study. In the conclusion, researchers summarize the key findings, emphasizing the significance of methodological rigor and the insights gained from the bibliometric analysis. Additionally, seeking peer reviews can provide valuable feedback, allowing for further refinement of the analysis and reporting to enhance clarity and impact. The methodology section should demonstrate the dynamic nature of the research landscape.
The bibliometric analysis process in SciMAT starts with data collection, focusing on the H-index [35] and co-word analysis [36]. A clear research topic is defined by specific questions and scope within the field. Appropriate bibliographic databases such as Scopus, Web of Science, or Google Scholar are selected to gather relevant publications. This involves executing search queries with carefully chosen keywords, authors, and publication years, utilizing Boolean operators to refine the results. Once the relevant literature is identified, data exportation follows, typically in a format such as CSV or RIS for easy import into SciMAT, including details like titles, abstracts, keywords, authors, publication years, and citation counts [33]. Only works published in the Web of Science Core Collection are selected in this study. This collection offers a comprehensive and curated selection of academic articles that ensures high-quality, peer-reviewed content, allowing us to examine AI-related research trends, developments, and thematic evolution in a robust and reliable database.
This approach is based on a broad, non-systematic review of the relevant literature, which was subsequently validated by an expert panel in the field of AI, education, and social exclusion (ALM, RSC, RGO [14]). The selection of keywords is driven by the main objectives of the research project and is further supported by the literature, which underscores the issue of the introduction of AI as an educational tool in vulnerable settings. Engaging community members is vital for fostering genuine change in vulnerable environments, positively impacting health and well-being in urban areas, and enhancing social cohesion. Moreover, co-designing interventions with community members ensures that actions are tailored to local needs, boosting involvement and ownership of the solutions.
The study focuses on the following keywords and research strategy: “artificial intelligence” OR “educational innovation” OR “educational method” (Topic) AND exclusion OR “social exclusion” OR “digital illiteracy” OR “digital inequalities” OR vulnerability OR diversity OR “digital gap” (Topic) AND education (Topic) AND “social science” OR “social studies” (Topic) within the Web of Science Core Collection. This keyword selection, grounded in the main objectives of the project and supported by the existing literature, underscores the link between AI and inequalities [10,28,30].
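Expressed in Web of Science advanced-search syntax, the strategy above would look roughly like the following; the TS (Topic) field tags and the parenthesization are an assumption about how the query was entered, not a verbatim reproduction:

```text
TS=("artificial intelligence" OR "educational innovation" OR "educational method")
AND TS=(exclusion OR "social exclusion" OR "digital illiteracy"
        OR "digital inequalities" OR vulnerability OR diversity OR "digital gap")
AND TS=(education)
AND TS=("social science" OR "social studies")
```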
To further enhance clarity and interpretation, SciMAT identifies “research topics” by creating a standardized co-word network from selected keywords and applying a clustering algorithm [37]. Topics are analyzed in terms of centrality (level of interaction between networks, calculated as c = 10 × Σ e_kh) and density (internal cohesion and strength of the network). The frequency of occurrences and thematic relevance are then analyzed within clusters, and bibliometric indicators (e.g., number of publications, citations, various h-index types) are used to examine relationships between keywords and thematic areas [35], with dimension reduction applied during data processing [38].
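Beyond the formula c = 10 × Σ e_kh given above, the computation of the strategic-diagram coordinates is not detailed; a sketch following the usual Callon-style definitions is shown below. The density normalization (here, by the number of keywords in the theme) varies between implementations, so treat this as illustrative rather than as SciMAT's exact procedure:

```python
def centrality_and_density(theme, eq_index):
    """Strategic-diagram coordinates for one theme.

    Centrality: 10 * sum of equivalence values over links between the
    theme's keywords and keywords outside it (external interaction).
    Density: 100 * sum of equivalence values over links inside the
    theme, divided by the number of keywords in the theme (internal
    cohesion). `eq_index` maps (kw_a, kw_b) pairs to e values."""
    members = set(theme)
    external = sum(e for (a, b), e in eq_index.items()
                   if (a in members) != (b in members))  # exactly one endpoint inside
    internal = sum(e for (a, b), e in eq_index.items()
                   if a in members and b in members)
    return 10 * external, 100 * internal / len(members)
```

A theme with high centrality and high density plots in the upper-right quadrant of the strategic diagram, which is what the text labels a motor theme.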
Strategic diagrams are employed to examine the evolution of core themes, with centrality on the X-axis and density on the Y-axis (see Figure 1). The performance analysis assesses literature production by analyzing centrality and density, with relationships graphically represented in Cartesian coordinates. High centrality, for instance, may indicate a theme’s strong relevance to various subfields, suggesting it is a point of convergence within the broader research network. Conversely, themes with high density but lower centrality may represent more specialized or isolated areas of inquiry that, while relevant, may have less connectivity with broader research trends.
Thematic evolution highlights research themes that maintain conceptual continuity across different time periods. Solid lines between themes indicate the retention of the same name, while dashed lines represent thematic connections under different names. Thematic networks illustrate the keyword clusters that form around each theme. Although the Web of Science (WoS) indexes a substantial number of journals related to the topics of our analysis (2217), we have selected the ten journals with the most papers on the research topic.
3. Results
The analysis of the data indicated the greatest concentration of studies in “Education Research” (654) and highlighted the lack of a social studies research approach in this field. The most relevant journals publishing papers on this topic are the following (Figure 2):
Sustainability (22);
BMC Public Health (17);
Education Sciences (16);
PLOS One (14);
BMC Medical Education (14);
Bordon-Revista de Pedagogía (11);
Physical Education and Sport Pedagogy (11);
International Journal of Inclusive Education (11);
Teachers College Record (10);
International Journal of Sustainability in Higher Education (10).
The analysis was carried out across four defined periods (Figure 3): Period 1 (1996–2010), Period 2 (2011–2015), Period 3 (2016–2019), and Period 4 (2020–2024). The first period covers a longer time frame due to the lower volume of literature produced, whereas the following three periods span roughly five-year intervals.
A total of 96, 191, 356, and 872 documents were recorded for the four periods, respectively.
The results highlight the increasing interest among researchers in investigating the impact of artificial intelligence (AI) as an educational tool, particularly in relation to its effects on vulnerable populations and the digital divide. Recent years have seen a significant increase in technological innovation and the role of education in leveraging these tools to enhance learning outcomes [21]. However, there is a lack of studies from a sociological perspective that explore how these innovations could exacerbate existing inequalities.
Figure 4 illustrates the significant increase in the number of keywords and shared words, as indicated by the index values (0.33, 0.46, 0.66). In the first period, there are 61 words. In the second period, there are 119 words, with 16 disappearing, 45 in common, and 74 new words. The third period consists of 168 words, with 8 disappearing, 331 new words, and 330 words in common. The last period includes 897 words, with 610 shared from the previous period and 287 new words.
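The overlap index between consecutive periods is reported but not defined; a Jaccard-style definition, shared keywords divided by the size of the union, reproduces the first reported value (45 / (61 + 119 − 45) ≈ 0.33). This definition is an assumption, sketched here for clarity:

```python
def overlap_index(period_a, period_b):
    """Stability index between two periods' keyword sets: the number
    of shared keywords divided by the size of the union."""
    a, b = set(period_a), set(period_b)
    return len(a & b) / len(a | b)
```

With 61 keywords in the first period, 119 in the second, and 45 shared, the index is 45/135 ≈ 0.33.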
As the graphs show, a clear evolution can be observed in both the complexity of the problems being addressed and the clustering of the network across the four periods.
The ten words that appear most often include the following: Education (294); Diversity (133); Students (125); Higher Education (110); Children (82); Inclusion (74); Health (72); Gender (64); and Impact (57).
Between positions 16 and 19 on the list of selected topics are those related to social exclusion (42), equity (42), and social justice (41). Other topics that have also been explored, highlighting the importance of artificial intelligence in community-related elements, include disability (34), mental health (35), exclusion (27), and vulnerability (25).
As shown in Figure 5, each of the four periods selected for analysis includes the primary keywords, showing an increasing diversity of topics over time. A detailed description of each analysis period follows.
3.1. Period 1 (1996–2010)
The motor theme is “impact”, while “immigrants”, “school”, and “students” are highly developed and isolated themes. “Classroom”, “community”, and “culture” are basic themes.
During the first period (1996–2010), the analysis of thematic distribution according to the clusters (Figure 6) reveals that the motor theme was “impact”, suggesting that early research primarily focused on how AI influenced educational outcomes. Themes such as “immigrants”, “school”, and “students” appear as well-developed but isolated topics, indicating a focus on specific groups without broader contextual integration in other educational contexts. The centrality of “school” and “students” highlights their relevance during this period, reflecting the prominent role of AI in structured educational environments.
On the other hand, themes such as “classroom”, “community”, and “culture” are less central but appear as basic themes. These represent the initial discussions about AI’s influence in traditional educational settings and its broader social implications, which could be expanded in future research.
The position of the theme “immigrants”, near the center but with low density, suggests growing interest in this topic. However, it remains underexplored, highlighting its potential for further development as research in this area progresses.
This first period shows the initial concerns of researchers about AI’s direct impact on educational outcomes, suggesting that research on AI was more focused on the structural aspects of formal education than on the broader social dynamics that can affect the overall educational experience.
3.2. Period 2 (2011–2015)
The motor themes are “adolescents”, “students”, “school”, and “programs”, while “experiences”, “vulnerability” and “behavior” are highly developed and isolated themes, and “mobile phone”, “pedagogy” and “risk” are basic themes. Emerging themes are “perspectives”, “educational policy” and “attitudes”.
During the second period, there was a shift in focus toward themes related to “adolescents”, “students”, “school”, and “programs”, which became motor themes (Figure 7). This indicates a broader integration of AI into educational environments, especially those involving younger populations. The emergence of themes like “vulnerability” and “behavior” as isolated but well-developed topics suggests that AI began to play a relevant role in addressing student challenges.
Meanwhile, “mobile phones”, “pedagogy”, and “risk” emerged as basic and transversal themes, highlighting a growing concern with integrating AI into educational practices through technology.
Finally, the emergence of new themes such as “perspectives”, “educational policy”, and “attitudes” suggests an increasingly broad conversation about how AI shapes and is shaped by policies and societal views. These emerging themes reflect growing interest in the social and regulatory impacts of AI in the field of education, which anticipates future debates on its implementation and the ethical and social consequences of its use.
The distribution of the clusters in Period 2 shows a diversification of research, shifting from a more structural and results-focused approach, as seen in the first period, toward a deeper integration of AI into students’ daily lives. Additionally, it reveals growing interest in how technological tools can both mitigate and exacerbate inequalities, also addressing the ethical and political aspects of AI implementation in educational systems.
3.3. Period 3 (2016–2019)
The motor themes are “children”, “adults”, “university”, and “perception”, while “discourse” and “immigrants” are basic themes. Emerging themes are “literacy”, “innovation” and “challenges”. A highly developed and isolated theme is “students”.
During Period 3, a notable shift in research focus is observed, especially with the emergence of new motor themes such as “children”, “adults”, “university”, and “perception”. This diversification reflects a growing interest in the role of artificial intelligence (AI) across all educational stages and age groups, spanning from primary education to higher education (Figure 8). As AI technology matured, discussions related to “literacy”, “innovation”, and the “challenges” posed by AI implementation in educational contexts emerged.
The inclusion of literacy as a central theme highlights the increasing awareness of the need to develop AI competencies among students and educators. This focus suggests that research began to recognize that, to leverage the benefits of AI, it is essential for users—both students and educators—to understand how this technology works and how it can be effectively utilized in educational environments.
Furthermore, the emergence of “innovation” as a prominent theme indicates that researchers were increasingly interested in how AI could transform educational practices, from personalized learning to the automation of administrative processes. However, the “challenges” associated with this transformation, which can range from ethical issues to technological barriers, became an integral part of the academic conversation.
On the other hand, the themes of “discourse” and “immigrants” were classified as basic transversal themes. This finding implies that the discussion surrounding inclusion and equity in access to technology was still developing and could have been explored more thoroughly in future research.
Period 3 reveals an evolution towards a more holistic and nuanced approach to AI in education. There is a growing interest in how technology affects different demographic groups and educational levels, reflecting an increasing awareness of the diversity in educational experiences. Additionally, the focus on literacy and innovation suggests that research is beginning to address not only the impacts of AI but also how to prepare future generations for a world where artificial intelligence will be ubiquitous.
3.4. Period 4 (2020–2024)
The motor themes are “social networks”, “classroom”, “intervention”, “future”, “competence” and “disability”, while “medical school”, “services” and “model” are basic themes. Emerging themes are “environment”, “school”, “mixed methods”, “bias” and “community” (returning themes). Highly developed and isolated themes are “impact” and “experiences”.
During Period 4, a significant shift is observed in the motor themes of research, now including “social networks”, “classroom”, “intervention”, “future”, “competence”, and “disability” (Figure 9). This evolution suggests that research began to explore the impact of artificial intelligence (AI) in more nuanced contexts, such as its role in fostering social networks, enhancing classroom interventions, and supporting students with disabilities.
The inclusion of “social networks” as a motor theme reflects a growing interest in how AI can facilitate communication and collaboration among students, as well as how these platforms can influence learning. On the other hand, the centrality of “classroom” and “intervention” indicates a renewed focus on specific pedagogical strategies that may benefit from AI, suggesting that researchers are considering not only the implementation of AI but also how it can be effectively utilized in learning environments.
The emergence of themes like “environment”, “school”, “mixed methods”, “bias”, and “community” introduces more complex issues, such as the AI-driven educational environment, interdisciplinary research methods, and concerns over bias in AI. This focus highlights the need for more critical and reflective research on AI tools, ensuring they are fair and equitable for all students.
The presence of “impact” and “experiences” as highly developed but isolated themes suggests that, while these areas have been well studied, they still require further integration into broader discussions about AI’s evolving role in education. This indicates that, although there is a solid knowledge base in these areas, it is crucial to continue investigating how the impact of AI can intertwine with the experiences of students and educators in a wider educational context.
In this final period, a more sophisticated and multifaceted approach to AI in education is presented. Researchers are exploring how AI can influence various dimensions of learning and teaching, reflecting a commitment to inclusion, equity, and innovation in education. This also suggests a transition toward a more holistic approach that considers not only the benefits of AI but also its ethical and social implications, which is crucial for developing an inclusive and sustainable educational future.
When analyzing the four periods, it becomes evident that the emergence of new topics and the increasing number of core themes represent a trend in the study of AI from a social sciences perspective. A fundamental element is the role of the community, which stands as one of the main subjects of sociological study (
Figure 10). In this regard, there is a noticeable evolution: from being a marginal topic in the first period, associated with underdeveloped issues, to becoming an emerging theme connected to subtopics that directly address the lived experience of school dropout, communication in social interaction, and digital inequalities, while unrelated subtopics close off and fall away. Future studies should delve deeper into the development of this emerging theme, and greater efforts should be made to ensure that sociological issues surrounding inequality take on a more prominent role in studies on education and technology.
Although the term AI does not appear directly in the clusters or nodes, these have been carefully analyzed to better understand their underlying patterns and how they relate to the overall search, uncovering potential connections that are not immediately visible. It is also important to emphasize that all the findings are closely tied to a sociological perspective. This approach highlights the broader social implications of AI, particularly in terms of power dynamics, inequality, and the role of technology in shaping human interactions. The sociological lens allows for a deeper exploration of how AI systems influence educational practices, access to resources, and the reproduction of social inequalities, making it essential for understanding the full impact of AI on education and society at large. Findings in these areas underscore the importance of addressing the structural inequalities perpetuated through technology.
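The strategic-diagram notions used in this analysis (motor themes, centrality, density) can be made concrete with a small sketch. The following is a minimal illustration, using invented toy records rather than the actual Web of Science data, of how Callon’s centrality and density measures are typically computed from author-keyword co-occurrences:

```python
from itertools import combinations
from collections import Counter

# Toy corpus: each document is a set of author keywords (hypothetical data,
# not the actual records analyzed in this study).
docs = [
    {"community", "social networks", "education"},
    {"community", "intervention", "classroom"},
    {"social networks", "classroom", "education"},
    {"bias", "education", "community"},
    {"bias", "intervention"},
]

occ = Counter(k for d in docs for k in d)  # per-keyword document frequency
co = Counter(frozenset(p) for d in docs for p in combinations(sorted(d), 2))

def equivalence(a, b):
    """Callon's equivalence index: e_ij = c_ij^2 / (c_i * c_j)."""
    c = co.get(frozenset((a, b)), 0)
    return c * c / (occ[a] * occ[b])

def centrality_density(cluster, vocab):
    """Strategic-diagram coordinates for one thematic cluster."""
    external = vocab - cluster
    # Centrality: strength of links to keywords outside the cluster.
    centrality = 10 * sum(equivalence(a, b) for a in cluster for b in external)
    # Density: mean strength of links inside the cluster.
    internal = [equivalence(a, b) for a, b in combinations(sorted(cluster), 2)]
    density = 100 * (sum(internal) / len(internal)) if internal else 0.0
    return centrality, density

vocab = set(occ)
c, d = centrality_density({"community", "social networks"}, vocab)
print(f"centrality={c:.2f}, density={d:.2f}")  # → centrality=18.61, density=16.67
```

In a strategic diagram, clusters with both high centrality and high density are the “motor themes”; emerging or declining themes score low on both axes.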
4. Discussion
The growing integration of AI in the educational realm not only presents opportunities to enhance teaching and learning but also raises critical challenges that must be addressed. As research continues to develop, educators, researchers, and policymakers need to collaborate to ensure that AI is used equitably and accessibly for all students, especially those from disadvantaged socioeconomic backgrounds. This will require a multifaceted approach that considers both access to technology and the development of digital competencies necessary to navigate an increasingly AI-mediated world.
The integration of AI technologies in educational environments raises significant ethical and social concerns, particularly regarding how these systems can reinforce or exacerbate inequities. For instance, Madianou [
24] emphasizes that AI systems, often designed with Eurocentric values, can perpetuate colonial power dynamics, suggesting that the role of AI in education is not neutral but rather reflects inherent biases in its design and implementation. This perspective aligns with Critical Theory’s emphasis on understanding the sociopolitical contexts that shape the production and dissemination of knowledge.
The capacity of AI to transform educational practices is also examined in the context of personalized learning systems. Katiyar [
39] discusses how AI-driven personalized learning can optimize educational outcomes by tailoring instruction to individual needs. However, this personalization can lead to a form of surveillance and control over students, raising questions about privacy and autonomy. The lens of Critical Theory allows for an examination of how these systems can inadvertently reinforce existing hierarchies by privileging certain learning styles or backgrounds over others. Additionally, the integration of AI in educational assessment, as explored by Du [
25], reveals a gap between technological capabilities and practical implementation, particularly in terms of equity and bias.
The ethical challenges associated with AI in education, as discussed by Akgün and Greenhow [
23], underscore the need for educators to critically engage with the technologies they employ. They argue that while AI can enhance learning experiences, it also presents ethical dilemmas that must be addressed to prevent reinforcing existing inequalities. Such critical engagement is essential for educators to navigate the complexities of AI integration and ensure that these technologies empower students rather than marginalize them.
The concept of cultural capital posited by Bourdieu encompasses the knowledge, skills, and cultural competencies that individuals acquire through their education and upbringing, significantly influencing their educational trajectories and interactions with technology. Students from higher socioeconomic backgrounds often possess greater cultural capital, enabling them to navigate educational systems and technological tools more effectively. Educational tracking systems, which classify students based on their perceived ability, can exacerbate inequalities linked to cultural capital [
40].
The evolution of the sociological perspective on artificial intelligence (AI) in education has revealed multiple dimensions and dynamics at play. A total of 37 documents matching the author-keyword search were published from 2012 to 2024, some of which are oriented to digital health and patient care. The narrative of AI research in education has evolved significantly, reflecting growing integration across diverse educational contexts and societal needs. Early studies, like Khan [
41], emphasized expanding access to learning for underserved populations using internet-based tools, laying the groundwork for future AI-driven educational technologies. The introduction of AI into augmentative communication for children with disabilities [
42] further showcased the transformative potential of AI in enhancing learning for marginalized groups. This trend of integrating AI into education aligns with global goals, as highlighted by Galés and Gallon [
43], who tied technological advancements to the Sustainable Development Goals (SDGs), promoting inclusive, equitable education worldwide.
As AI’s role in governance and educational policy became more evident [
44], attention turned to critical discussions on AI’s ethical implications and its influence on educational practices [
45]. Researchers like Kharbat et al. [
46] identified gaps in AI’s support for students with intellectual disabilities, urging a more inclusive design of AI technologies. Digital social innovations, particularly in Europe, started to reshape welfare systems, indicating a broader societal impact [
47]. By 2022, concerns over social implications, including biases and ethical considerations in AI, were prominent in discussions [
48], paralleled by the examination of AI’s acceptance among younger learners [
49].
In recent years, the intersection of AI and education has gained momentum with emerging technologies like the Metaverse [
20] and AI-driven health applications [
50], raising questions about the role of AI in personalized learning and social determinants of health. Chun and Elkins [
51] responded to the “crisis” of AI by proposing a new digital humanities curriculum centered around human values, while Lai et al. [
52] explored AI’s impact on adolescent emotional perception in China. As AI technologies continue to develop, concerns about bias and equity remain central to discussions on best practices [
53], along with growing interest in AI’s role in higher education, particularly with the introduction of tools like ChatGPT [
54].
The study reveals a significant gap in the thematic focus on the effects of AI in the field of education and the reproduction of inequality from a sociological perspective. This gap is evident in the limited number of publications addressing this topic and the analysis of clusters along their evolutionary lines. However, discussions on AI’s impact appear transversally in studies from other fields, particularly within education. The results of this study call for sociology to take a leading role in investigating this area, using its essential tools to better understand the dynamics between new technologies, global society, and education.
The thematic network for “community” expands from two nodes in earlier periods to seven in later ones, indicating a broader engagement in integrating educational AI within community services. Studies associated with community concern services for children attending schools for the emotionally and behaviorally disturbed [
55]. The research by Lai et al. [
52] explores whether interaction with generative artificial intelligence can enhance learning autonomy, comparing the effects of virtual companionship and knowledge acquisition preferences over time.
The only study directly related to the research strategy explores the identity and cognition of English language teachers instructing students with visual impairments in Türkiye, examining how personal, social, cultural, and educational factors shape these teachers’ experiences within the context of special education [
56]. This aligns with the broader trend in AI research towards inclusive approaches that support diverse needs within educational settings, highlighting the importance of sociocultural factors in educational AI applications.
This work has explored various theoretical frameworks to tackle the issue of digital inequalities. From the perspective of Critical Theory, the importance of questioning and transforming power structures that limit human self-realization is emphasized, focusing on emancipation and resistance to social inequalities, which is essential for understanding how cultural capital influences access to and use of technologies like AI [
22,
24]. Bourdieu [
57] defines cultural capital as the knowledge, skills, and competencies individuals possess, which are crucial for navigating educational and technological environments [
58]. Students with high cultural capital tend to benefit more from AI platforms, as they possess the cognitive and social tools necessary to leverage these technologies effectively. In contrast, those with lower cultural capital face greater obstacles, creating a gap in access to and use of AI [
25,
26,
58].
Actor-Network Theory (ANT) broadens the perspective by presenting AI not only as a tool but also as an active participant in the educational ecosystem, reconfiguring relationships, roles, and power dynamics among students, teachers, and institutions. ANT considers both students and AI platforms as actors in a network co-constructed through their interactions [
59]. This theory allows for analyzing how relationships among students, teachers, and AI technologies transform and evolve, highlighting the importance of connections and collaboration in learning [
60]. AI platforms, far from being merely tools, influence relationships and knowledge construction between students and teachers [
61], reflecting a dynamic and changing learning network in which cultural capital can either facilitate or hinder students’ participation [
62]. Thus, the theoretical framework provided by Critical Theory and ANT is invaluable for exploring how AI reconfigures educational practices, with profound implications for inequality and social justice [
28,
63].
A more detailed analysis of the challenges and benefits of AI, based on demographic subgroups, shows evidence of greater access barriers among students from rural areas due to limitations in technological infrastructure, while urban students can more readily benefit from these tools, thus exacerbating digital divides [
64]. It would also be relevant to examine how AI could support students with disabilities or specific needs through adaptive platforms that respond to their particular requirements. This approach provides a more accurate picture of the various ways in which AI affects different populations, emphasizing the need for policies that consider the diversity and particularity of students.
Artificial intelligence (AI) is transforming education by addressing inequities through personalized learning experiences [
30,
65,
66], early detection of learning difficulties [
67], and automating administrative tasks [
17,
19]. By leveraging tools like natural language processing (NLP) and data analysis, AI enables educators to tailor resources to individual needs, improve accessibility, and identify gaps in knowledge. It helps bridge disparities across demographics and regions by optimizing teaching methods and making education more inclusive [
28]. Ethical implementation is essential to ensure that AI remains a powerful tool for fostering equity in education.
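As one illustration of the kind of NLP-assisted gap detection mentioned above, a toy sketch (with invented rubric and answers, not any specific deployed system) can flag student responses that diverge from a model answer for teacher review, using a simple bag-of-words cosine similarity:

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words term counts (lowercased, whitespace-tokenized)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two texts' bag-of-words vectors."""
    va, vb = bow(a), bow(b)
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical model answer and student answers.
model = "photosynthesis converts light energy into chemical energy"
answers = {
    "ana": "photosynthesis converts light energy into chemical energy in plants",
    "ben": "plants drink water",
}

# Flag answers with low similarity as possible knowledge gaps.
flags = {s: cosine(a, model) < 0.5 for s, a in answers.items()}
print(flags)  # → {'ana': False, 'ben': True}
```

This deliberately simple heuristic is only a sketch of the idea; real systems would use far richer language models, and the same caveats about bias and teacher oversight raised in this section apply.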
Simultaneously, AI has the potential to widen global inequality due to three main factors: economic divergence, where wealthier nations have better access to AI technology and its benefits; disparities in AI research resources, with leading countries dominating development; and the reinforcement of existing biases, as AI systems trained on biased data can perpetuate inequalities. Addressing these issues will require international cooperation and ethical frameworks to ensure AI advancements benefit lower-income countries and vulnerable populations [
28,
68].
The combination of Critical Theory and ANT enables a deeper understanding of inequalities in access to education and technology. Critical Theory invites questioning of the structures that perpetuate these inequalities, while ANT provides a framework for observing how these structures manifest in daily interactions [
69]. For instance, students from disadvantaged socioeconomic backgrounds may have less access to technological and educational resources, limiting their ability to benefit from AI platforms [
64]. ANT helps to make these dynamics visible by considering how relationships between different actors (students, teachers, technologies) are configured and affect learning [
70].
Finally, the intersection between Bourdieu’s theory of cultural capital and the concept of the digital divide offers a nuanced understanding of how generational inequalities are perpetuated in the context of AI-based educational tools. The digital divide is generally categorized into two levels: the first-level digital divide, which refers to disparities in access to digital technologies, and the second-level digital divide, which refers to the unequal ability to effectively use these technologies.
The first-level digital divide highlights disparities in access to technology, which can be influenced by socioeconomic status, geographical location, and educational resources. Students from lower socioeconomic backgrounds often lack access to essential digital tools, which can create significant educational disadvantages. According to van Dijk [
63], access to technology is not just about physical availability, but also about the social and economic conditions that facilitate or hinder that access.
The second-level digital divide focuses on disparities in the ability to effectively use digital technologies. Even when students have access to technology, their capacity to use it effectively may vary significantly based on their cultural capital. Hargittai [
71] emphasizes that digital literacy is a crucial component of the second-level digital divide. Students with higher levels of cultural capital tend to be more competent in using technology for educational purposes, as they are typically exposed to digital tools and resources from an early age.
The integration of AI in educational environments transforms the relationships among various actors. For example, educators may see their roles shift from knowledge providers to facilitators of AI-driven learning experiences. This change can create new power dynamics, where AI systems have significant influence over curriculum design and assessment methods. Selwyn [
27] emphasizes that discussions about AI in education should be seen as inherently political, intertwined with issues of power, disadvantage, and marginalization.
To ensure that AI use in education benefits all students, especially those in vulnerable situations, various studies propose a set of practical recommendations for policymakers and educational institutions. First, it is necessary to establish clear privacy standards that regulate the use of AI in educational settings, protecting students’ personal data and preventing misuse of information [
23].
Different authors [
12,
13,
64] highlight the importance of creating funding opportunities that allow under-resourced schools, particularly in rural or vulnerable areas, to access AI technologies. In addition to resources, it is crucial to implement training programs to ensure that teachers are equipped with the skills needed for appropriate and pedagogical use of AI tools, ensuring inclusive and responsible implementation.
Finally, a line of research has focused on promoting the availability of AI platforms that offer personalized learning, allowing students to progress at their own pace and meeting the needs of those requiring additional support [
6,
32]. These practical measures seek not only to improve access to AI but also to ensure that its implementation in the educational sector aligns with principles of equity and social justice. Furthermore, they call for educators, researchers, and policymakers to work together in developing an AI framework in education that is inclusive and accessible.
The evolution of sociology towards a more complex approach to technology, marked by the emergence of digital sociology, also reflects a significant shift. This branch seeks to understand how digital technologies influence human behavior, relationships, and social norms. The integration of AI across various sectors raises ethical considerations regarding accessibility and equity in education, highlighting the need for a sociological lens to navigate these challenges.
Policymakers have a crucial role in harnessing AI to foster equity in education, and several actionable strategies can guide their efforts. First, ensuring equitable access to technology is essential. This includes allocating funding to provide underserved schools with the necessary infrastructure to access AI tools, as well as implementing subsidized programs or partnerships with technology companies to offer AI-based educational tools at reduced costs for low-income families.
Promoting digital literacy and cultural capital is another important strategy. Policymakers should focus on developing curricula that integrate digital literacy and AI education from an early age, teaching students not only how to use AI tools but also how to critically evaluate and engage with these technologies. Additionally, community-based programs that educate families about AI can help build cultural capital and empower students to navigate AI-based platforms effectively.
Supporting teacher training and development is also vital. Policymakers should provide ongoing professional development for educators on effectively integrating AI tools into their teaching practices. This training should emphasize understanding diverse cultural backgrounds and fostering positive student–teacher relationships. Establishing mentorship programs that connect experienced educators with those new to AI technologies can further promote knowledge sharing and collaborative learning.
It is equally important to foster inclusive AI development. Policymakers should engage stakeholders, including educators, students, and community members, in the development and selection of AI tools to ensure these technologies are relevant and effective. Creating ethical guidelines for AI implementation in education that prioritize equity, transparency, and accountability will help address biases in algorithms that could impact student outcomes.
Implementing data-informed policies is crucial for driving meaningful change. Policymakers should fund research that evaluates the impact of AI on various student populations, focusing on how these tools affect equity in educational outcomes. Additionally, establishing feedback mechanisms for collecting insights from students and teachers about their experiences with AI tools can help inform policy decisions and improve the effectiveness of resources.
Encouraging collaboration across sectors will enhance the impact of AI in education. Policymakers should promote partnerships between educational institutions and technology companies to develop tailored AI solutions that meet the specific needs of diverse student populations. Fostering collaboration between schools and community organizations can also create support networks for students and families, leveraging AI tools to enhance learning opportunities beyond the classroom.
Finally, policymakers should address existing barriers through policy revisions and exploring new funding models that support equitable AI integration. By reviewing and revising education policies to remove obstacles to effective AI use and ensuring that financial resources are allocated based on need rather than uniform distribution, policymakers can create an educational landscape where AI tools enhance learning opportunities equitably. Through these strategies, all students can be empowered to thrive in a technology-driven world.
In summary, sociological theory allows for the identification of risks in the implementation of AI, considering various mitigation strategies (
Table 1), as has been outlined throughout the work.
This study has several limitations that should be acknowledged to contextualize the findings and guide future research. Firstly, the bibliometric analysis was confined to the Web of Science Core Collection, which, while comprehensive, may exclude relevant studies published in other databases, resulting in a potentially incomplete representation of the existing research landscape on AI in education and digital inequalities. The so-called “grey literature”, including reports or innovation projects shared on blogs or other platforms, was likewise excluded in this phase; such applied work is usually integrated into the academic literature with some delay and is expected to become available in indexed bibliographic databases soon. Secondly, the reliance on bibliometric methods inherently focuses on quantitative metrics, potentially overlooking the qualitative nuances and deeper sociological insights that qualitative analyses might capture. In addition, the practical application examples provided lack reinforcement through studies demonstrating evidence of their efficacy and effectiveness. The thematic categorization, although systematically approached, is subject to the subjective interpretations of the researchers, which may influence the identification and evolution of themes. Language bias is another consideration, as the analysis likely prioritizes English-language publications, thereby neglecting valuable perspectives from non-English scholarly work. Lastly, while the study identifies gaps in sociological perspectives, particularly concerning digital illiteracy and socio-economic access disparities, it does not empirically investigate these issues. Future research should incorporate diverse methodologies, including qualitative approaches and cross-cultural studies, to address these gaps and provide a more comprehensive understanding of the sociological implications of AI integration in education.
5. Conclusions
The progression of AI research in education demonstrates a shift from understanding basic impacts on educational outcomes to addressing more nuanced themes such as vulnerability, digital literacy, bias, and community inclusion. This evolution is evident in studies that move from isolated analyses of specific populations to more comprehensive considerations of AI’s potential to foster inclusive and equitable educational environments. The thematic clusters and networked connections reflect increasing complexity, with interrelated topics like adolescent intervention, autism spectrum disorders, behavioral support, and services for children with disabilities. Across the four periods, AI’s application expands steadily, even as challenges like bias and inequity come to the fore. Research emphasizes the necessity of frameworks that ensure AI respects students’ agency, promoting autonomy within AI-supported educational environments.
Delving deeper into the results, 133 documents with the keyword “diversity” have appeared since 2002, although a direct link on how to address diversity with digital tools is only observed in 2023. Lea et al. [
72] mention diversity in digital literacy, but it was only in 2023 that AI’s role in addressing diversity in education directly emerged, as seen in studies exploring AI for formative assessments and personalized learning. This integration can help address diverse student needs by automating analysis and providing insights while addressing challenges like algorithmic bias. These studies nevertheless stress the importance of teacher training and ethical AI use for inclusivity. Roshanaei [
32] highlights AI’s impact on education through personalized learning paths and data analytics, but also acknowledges the risk of bias in admissions and other processes, particularly for underrepresented groups, underlining the need for continuous refinement of AI systems for fairness.
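The keyword-frequency traces behind counts such as these are straightforward to reproduce. The sketch below uses invented records (the real analysis drew on Web of Science exports) to show how documents per year containing a given author keyword can be tallied to locate when a theme such as “diversity” gains traction:

```python
from collections import Counter

# Hypothetical (year, author keywords) records, standing in for the
# bibliographic export actually used in this study.
records = [
    (2002, {"diversity", "education"}),
    (2015, {"diversity", "digital literacy"}),
    (2023, {"diversity", "artificial intelligence"}),
    (2023, {"inclusion", "artificial intelligence"}),
    (2024, {"diversity", "artificial intelligence"}),
]

def yearly_counts(records, keyword):
    """Number of documents per year whose keyword set contains `keyword`."""
    return Counter(year for year, kws in records if keyword in kws)

trend = yearly_counts(records, "diversity")
print(sorted(trend.items()))  # → [(2002, 1), (2015, 1), (2023, 1), (2024, 1)]
```

On real data, a late spike in the co-occurrence of “diversity” with AI-related keywords would correspond to the 2023 turning point described above.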
It is important to underline that diversity without the digital element is still difficult to embrace from an educational point of view [
65]. Diversity is still a factor that education, especially formal education, struggles to address with a positive approach to inclusion. Instead, it is often treated as something to be managed rather than embraced as a potential asset. In this sense, many of the limitations of artificial intelligence reflect existing patterns of exclusion, further reinforcing inequalities rather than promoting inclusivity.
A similar pattern is exhibited by the term “inclusion”, which is present in 74 documents since 2007, mostly related to special education [
73] and minorities [
66], and only in the latest period appears with a clear reference to digital education [
74]. In this study, Wang et al. [
74] examine how the use of digital devices affects adolescents’ academic performance, finding that educational use improves outcomes, while social or gaming use has a negative effect. The study highlights the importance of how devices are used rather than the type of device itself. Home use, with parental support, enhances academic performance, while school use tends to lower it. Factors such as nationality, age, and gender play a role, emphasizing the need to promote positive device usage, particularly among girls and younger students.
From 2012 to 2024, AI research in education has evolved from expanding access to learning for underserved populations to addressing ethical concerns, bias, and equity. Innovations such as the Metaverse and AI-driven health applications offer new possibilities for personalized learning, yet they also prompt significant ethical debates regarding inclusivity and the risk of reinforcing social inequalities. As AI’s influence on governance and policy increases, ethical concerns, such as privacy, bias, and equitable access, have become central, requiring frameworks that prioritize student rights and inclusive access to technology.
Approaches from sociology through Bourdieu’s concept of cultural capital highlight how disparities in students’ backgrounds influence their ability to effectively use AI-based educational tools. Access to and effective use of these technologies remains unevenly distributed, with students from lower socioeconomic backgrounds facing significant barriers due to both limited technological infrastructure and gaps in digital literacy. This emphasizes the need for inclusive educational policies and targeted support that address both the first-level digital divide (access) and the second-level digital divide (competency in using technology).
Collectively, these studies reflect a continuous evolution from the initial application of AI in expanding educational access to more intricate analyses of digital equity, technological literacy, and ethical implications. The sociological perspective, particularly through the lenses of Critical Theory and Actor-Network Theory (ANT), is increasingly recognized as essential to understanding AI’s transformative potential in reshaping educational practices on a global scale. This approach allows for a critical exploration of AI’s role in either addressing or exacerbating social inequalities, emphasizing the need for thoughtful and inclusive strategies as AI becomes embedded in educational systems worldwide.