Review

Generative AI and Higher Education: Trends, Challenges, and Future Directions from a Systematic Literature Review

1 Institute of Accounting and Administration, University of Aveiro, 3810-193 Aveiro, Portugal
2 Digimedia, University of Aveiro, 3810-193 Aveiro, Portugal
3 Social and Organizational Study Centre, Porto Accounting and Business School, Polytechnic of Porto, 4465-004 Porto, Portugal
4 Institute of Electronics and Informatics Engineering of Aveiro, University of Aveiro, 3810-193 Aveiro, Portugal
* Author to whom correspondence should be addressed.
Information 2024, 15(11), 676; https://doi.org/10.3390/info15110676
Submission received: 3 October 2024 / Revised: 15 October 2024 / Accepted: 18 October 2024 / Published: 28 October 2024
(This article belongs to the Special Issue Generative AI Technologies: Shaping the Future of Higher Education)

Abstract
(1) Background: The development of generative artificial intelligence (GAI) is transforming higher education. This systematic literature review synthesizes recent empirical studies on the use of GAI, focusing on its impact on teaching, learning, and institutional practices. (2) Methods: Following PRISMA guidelines, a comprehensive search strategy was employed to locate scientific articles on GAI in higher education indexed in Scopus and Web of Science and published between January 2023 and January 2024. (3) Results: The search identified 102 articles, with 37 meeting the inclusion criteria. These studies were grouped into three themes: the application of GAI technologies, stakeholder acceptance and perceptions, and specific use situations. (4) Discussion: Key findings include GAI’s versatility and potential uses, student acceptance, and educational enhancement. However, challenges such as assessment practices, institutional strategies, and risks to academic integrity were also noted. (5) Conclusions: The findings help identify potential directions for future research, including assessment integrity and pedagogical strategies, ethical considerations and policy development, the impact on teaching and learning processes, the perceptions of students and instructors, technological advancements, and the preparation of future skills and workforce readiness. The study has certain limitations, particularly the short time frame covered and the search criteria, which might have been defined differently by other researchers.

Graphical Abstract

1. Introduction

The growing dominance of generative artificial intelligence (GAI) has led to significant changes in higher education (HE), prompting extensive research into its consequences. This development signifies a profound transformation, with GAI’s capabilities being integrated into personalized learning experiences, enhancing faculty skills, and increasing student engagement through innovative tools and technological interfaces. Understanding this process is crucial for two main reasons: it affects the dynamics of teaching within the educational environment, and it necessitates a reassessment of academic approaches to equip students with the necessary tools for a future where artificial intelligence (AI) is ubiquitous. Additionally, this evolution underscores the need to rethink and reinvent educational institutions, along with the core competencies that students must develop as they increasingly utilize these technologies.
AI and GAI, although sharing a common objective, cannot be understood as identical concepts. Marvin Minsky defined AI as “the science of getting machines to do things that would require intelligence if done by humans” (Minsky, 1985) [1]. This broad definition encompasses various fields that aim to mimic human behavior through technology or methods. GAI, in turn, comprises systems designed to generate content such as text, images, videos, music, computer code, or combinations of different types of content [2]. These systems utilize machine learning techniques, a subset of AI, to train models on input data, enabling them to perform specific tasks.
To grasp the significance of AI in HE, it is crucial to examine the growing academic interest at the intersection of these two fields. In the past two years (2022–2023), there has been a marked increase in scholarly focus on this convergence, as demonstrated by the rising number of articles indexed in the Scopus and Web of Science (WoS) databases. This trend is supported by systematic evaluations of AI’s use in formal higher education. For example, studies by Bond et al. [3] and Crompton and Burke [4] provide a comprehensive analysis of 138 publications selected from a pool of 371 prospective studies conducted between 2016 and 2022. This increase highlights the expanding academic discussion, emphasizing the analysis and prediction of individual behaviors, intelligent teaching systems, evaluation processes, and flexible customization within the higher education context (op. cit.).
The importance of systematic literature reviews (SLRs) in this rapidly evolving discipline cannot be overstated. SLRs enable the synthesis of extensive research into aggregated knowledge, providing clear and practical conclusions. By employing well-established procedures such as Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), researchers ensure the comprehensive inclusion of all relevant studies while maintaining the integrity of the synthesis process. This method enhances the reliability and reproducibility of findings, thereby establishing a solid foundation for future research and the development of institutional policies [3,4].
While some studies explore the use of GAI in higher education, there are few articles that provide a systematic and comprehensive literature review on this topic. Additionally, existing reviews generally cover the period up to 2022. Given the significant advancements in AI, particularly GAI, over the past two years, it is crucial to investigate how this technology is shaping higher education and to identify the challenges faced by lecturers, students, and organizations.
Therefore, the main objective of this research was to conduct a systematic review of the empirical scientific literature on the use of GAI in HE published in the last two years. Selected articles were analyzed based on the main problems addressed, the research questions and objectives pursued, the methodologies employed, and the main results obtained. The Research Onion model, developed by Saunders, Lewis, and Thornhill [5], was used to analyze methodologies. The review adhered to the PRISMA methodology [6], and articles were identified and collected using the Scopus and WoS indexing databases.
The paper is structured as follows. Next, we detail the methods used in the selection and review of the papers, including the inclusion and exclusion criteria. Then, we provide a brief description of the papers’ content, including the categories in which this content can be grouped (topics and methodologies used). This is followed by a discussion of the results and a proposal for a future research agenda. The paper ends with a conclusion and the presentation of the limitations of this research.

2. Methods

The research utilized an SLR methodology, which involved a series of structured steps: planning (defining the research questions), conducting (executing the literature search, selecting studies, and synthesizing data), and reporting (writing the report). This process adhered to the PRISMA guidelines as outlined by Page et al. [6].
During the planning phase, we formulated a research question (RQ) based on the background provided in Section 1:
RQ: What are the main problems, research questions, objectives pursued, methodologies employed, and key findings obtained in studies on GAI in HE conducted between 2023 and 2024?
The subsequent step involved identifying the search strategy, study selection, and data synthesis. The search strategy included the selection of search terms and the literature resources, and the overall search process. Deriving the research question aided in defining the specific search terms. For the eligibility criteria—comprising the inclusion and exclusion criteria for the review and the method of grouping studies for synthesis—we opted to include only articles that describe scientific empirical research on the use of GAI in higher education. In this context, we define empirical research as investigations in which researchers collect data to provide rigorous and objective answers to research questions and hypotheses. This approach intentionally excluded articles based solely on opinions, theories, or speculative beliefs to ensure a foundation of concrete evidence. We decided to use Scopus and WoS as our databases, with the search being conducted in January 2024. The next step was to identify synonyms for the search strings.
The search restrictions considered in Scopus were as follows: title, abstract, and keywords; period: since 1 January 2023; document type: article; source type: journal; language: English; publication stage: final and article in press. The search equation used was as follows: TITLE-ABS-KEY ((“higher education” OR “university” OR “college” OR “HE” OR “HEI” OR “higher education institution”) AND (“generative artificial intelligence” OR “generative ai” OR “GENAI” OR “gai”)) AND PUBYEAR > 2022 AND PUBYEAR < 2025 AND (LIMIT-TO (DOCTYPE, “ar”)) AND (LIMIT-TO (SRCTYPE, “j”)) AND (LIMIT-TO (PUBSTAGE, “final”) OR LIMIT-TO (PUBSTAGE, “aip”)) AND (LIMIT-TO (LANGUAGE, “English”)). As a result, we gathered 91 articles, of which only 87 were available. The search results were documented, and the articles were extracted for further analysis.
The search restrictions in the WoS were as follows: search by topic, including title, abstract, and keywords; period: since 1 January 2023; document type: article; language: English; publication stage: published within the specified period. The search equation used was as follows: TITLE-ABS-KEY ((“higher education” OR “university” OR “college” OR “HE” OR “HEI” OR “higher education institution”) AND (“generative artificial intelligence” OR “generative ai” OR “GENAI” OR “gai”)), with the previously outlined restrictions. As a result, we collected 61 articles. Eight of these articles were unavailable. One article was excluded because, although its title was in English, the article itself was written in Portuguese. Thus, we considered a total of 52 articles. The search results were documented, and the articles were extracted for further analysis.
The entire process was initially tested by the three researchers, with the final procedure implemented by one of them. All the articles were compiled into an Excel sheet, where duplicates were identified and removed. This resulted in a final list of 102 articles.
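For readers who wish to reproduce this merging and de-duplication step programmatically, a minimal sketch is given below. The screening in this review was performed manually in an Excel sheet; the file names, column labels, and matching rules here are illustrative assumptions, not the exact procedure used by the authors.

```python
import pandas as pd

# Hypothetical export files; column names follow common Scopus/WoS CSV exports.
scopus = pd.read_csv("scopus_export.csv")   # includes "Title" and "DOI" columns
wos = pd.read_csv("wos_export.csv")

records = pd.concat([scopus, wos], ignore_index=True)

# Normalize keys so the same article is recognized across the two databases.
records["doi_key"] = records["DOI"].str.lower().str.strip()
records["title_key"] = (records["Title"].str.lower()
                        .str.replace(r"[^a-z0-9 ]", "", regex=True)
                        .str.strip())

# De-duplicate by DOI where available, and by normalized title otherwise.
with_doi = records[records["doi_key"].notna()].drop_duplicates(subset="doi_key")
no_doi = records[records["doi_key"].isna()].drop_duplicates(subset="title_key")
deduped = pd.concat([with_doi, no_doi], ignore_index=True)

print(f"{len(records)} records merged; {len(deduped)} unique articles retained")
```

In practice, title-based matching may require fuzzier comparison (e.g., after removing punctuation and case differences), but the two-key approach above captures the basic logic of collapsing Scopus and WoS records into a single candidate list.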
The next step involved selecting the articles. The complete list was divided into three groups, with each group assigned to a different researcher. Each researcher reviewed their assigned articles, evaluating whether the keywords aligned with the search criteria and whether each article included empirical research. This evaluation was based on the keywords, the abstract, and, if necessary, the full article. For each article, the researcher provided one of three possible responses: “Yes” for articles the researcher was certain should be included, “No” for those clearly to be excluded, or “Yes/No” for cases where the researcher was uncertain about inclusion or exclusion.
Articles marked as “Yes/No” were redistributed among the researchers for a second review to ensure that any initial uncertainties were resolved. The first researcher encountered 7 “Yes/No” cases, of which 2 were ultimately marked as “Yes” and 5 as “No” after the second review. Thus, out of the 30 articles initially assigned to researcher 1, 15 were included (“Yes”) in the final set of articles for review. For the second researcher, 8 articles were marked as “Yes/No”, of which 7 were marked as “No” following the second review, and 1 article, which was not available in Scopus or WoS, was excluded. Therefore, out of the 31 articles initially assigned to this researcher, 7 were included (“Yes”) in the final set. Lastly, for researcher 3, 16 articles were marked as “Yes/No”, all of which were considered “No” after the second review. As a result, out of the 41 articles initially assigned to researcher 3, 15 were included (“Yes”) in the final set of articles for review.
At the end of this process, 37 articles were selected and 65 were excluded (see Table 1 and Figure 1). These 37 articles, ultimately marked as “Yes”, form the basis of this SLR (see Table 2 for the complete list of references).
These articles were published in 25 different journals, with only 7 journals featuring more than one article (Table 3).
The 37 articles were written by 119 different authors, of whom only 5 appear as authors on more than one article (Table 4).
Table 5 shows the geographical origins of the authors of the selected studies, based on the affiliations provided in the articles. The USA (6 authors), Hong Kong (5 authors), the UK (5 authors), and Australia (4 authors) stand out as the predominant countries of origin for the authors. The remaining countries are each represented by only 1 or 2 authors. It should be noted that some articles have authors from more than one country.
Figure 2 presents a word cloud generated from the abstracts of the 37 selected articles. The statistical analysis of the words in this word cloud shows that some words occurred with a significantly high frequency, such as “Student” (n = 106), “AI” (n = 102), and “Educator” (n = 91). Figure 2 and Table 6 present the words that occurred at least 25 times in the set of 37 abstracts.
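As an illustration of this kind of frequency analysis, a minimal sketch is shown below. The input file, stop-word list, and tokenization are illustrative assumptions and do not reproduce the exact tool used to build the word cloud; the threshold of 25 occurrences mirrors the cut-off reported for Table 6.

```python
import re
from collections import Counter

# Illustrative input: the 37 abstracts concatenated in a plain-text file (hypothetical path).
with open("abstracts.txt", encoding="utf-8") as f:
    text = f.read().lower()

# Tokenize on alphabetic runs and drop a small illustrative stop-word list.
stopwords = {"the", "and", "of", "to", "in", "a", "for", "on", "is", "with", "this", "that"}
tokens = [t for t in re.findall(r"[a-z]+", text) if t not in stopwords]

counts = Counter(tokens)

# Report every word occurring at least 25 times across the abstracts.
for word, n in counts.most_common():
    if n < 25:
        break
    print(f"{word}: {n}")
```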
Each researcher independently conducted a grounded theory exercise using all the previously gathered information, including the articles themselves, to identify potential categories for each article. After this individual analysis, the three researchers combined their findings, resulting in the categorization and distribution of articles presented in Table 7. We established three main categories for the 37 selected studies: the use of GAI, stakeholder acceptance and perceptions, and specific tasks and activities. While some overlap exists—since stakeholders’ use of GAI often involves tasks like content analysis and content generation—this structure was chosen to capture distinct aspects of GAI’s impact on higher education. The aim of this process was to provide a comprehensive view of the various dimensions of GAI applications and their interactions across different areas.
Multiple efforts were made to minimize the risk of bias. The procedures of this investigation were thoroughly described and documented to ensure accurate reproducibility of the study. The three researchers conducted the procedures, with certain steps performed independently. The results were then compared and reassessed as needed.
Category A encompasses all studies that focus on the use of GAI technology as the core of the research. This category contains papers describing research on ChatGPT (sub-category A.1) and those addressing other technologies (sub-category A.2). Category B covers papers that examine the acceptance and perception of GAI from the perspective of different stakeholders, such as students, teachers, researchers, and higher education institutions. Here, the emphasis is on people rather than technology. Category C consists of studies that focus on specific tasks or activities, rather than on technology or people. These tasks include assessment, writing, content analysis, content generation, academic integrity, and feedback. It is important to note that GAI is a transversal aspect that unites all this research. This means that, in some cases, although a paper focuses on a particular stakeholder or activity, the technology factor is still present. However, the categorization was based on the core focus of each paper, even though technology is a common factor among them. Finally, a fourth category was added to encompass the methodology.

3. Results

The findings from the data synthesis are aimed at answering the research question (RQ) and are based on 37 papers, categorized and subcategorized as shown in Table 5 (see the previous section). These papers are divided into three main categories, as previously mentioned. The results are presented by category in the following paragraphs.

3.1. The Focus on Technology—The Use of GAI

The following is a comprehensive overview of 15 articles focused on the use of GAI. Each article was analyzed based on the main problems identified, research questions posed, objectives set, and main results achieved. Section 3.1.1 presents an analysis of articles specifically addressing ChatGPT [19,20,24,32,33,35]. Section 3.1.2 examines articles that take a broader perspective on the use of GAI [10,13,16,28,31,37,40,41,42].

3.1.1. The Use of GenAI Technology—The Case of ChatGPT

Six articles are summarized here, each examining the application of GAI technology, particularly ChatGPT, in different higher education settings. Despite the varied contexts of ChatGPT usage, these articles collectively address the common challenges and opportunities this technology presents.
In the study by Michel-Villarreal, Vilalta-Perdomo, Salinas-Navarro, Thierry-Aguilera, and Gerardou [32], the identification of challenges, potentialities, and barriers is explicitly outlined in the two research questions it presents (see p. 2 of the article). This study stands out as a particularly interesting case because ChatGPT was used as a data source to help address these research questions. A chat session was conducted with ChatGPT in the format of a semi-structured interview. Content analysis of this interaction revealed a range of opportunities (five for students and two for teachers), challenges (five), barriers (seven), and priorities (six). Some of these findings are noteworthy as they remain underexplored in the literature. For instance, the opportunity of “providing ‘round-the-clock support to students’” is highlighted, which holds significant potential, particularly in distance learning scenarios (p. 10). Additionally, the authors emphasize that “incorporating ChatGPT into the curriculum can introduce innovative and interactive learning experiences” (p. 10), representing a novel opportunity. Furthermore, they identify two opportunities related to the role of teachers, including the possibility of freeing up more of their time by efficiently managing routine tasks, and using these technologies in various research activities, such as “by assisting with literature reviews, data analysis, and generating hypotheses” (p. 10). The primary challenges identified pertain to risks associated with academic integrity and quality control, and the authors suggest a set of principles for the acceptable and responsible use of AI in HE (p. 11). Additionally, strategies for mitigating challenges, including policy development, education, and training, are proposed (p. 13).
French, Levi, Maczo, Simonaityte, Triantafyllidis, and Varda [24] also explore the impact of integrating OpenAI tools (ChatGPT and Dall-E) in HE, particularly focusing on their incorporation into the curriculum and their influence on student outcomes. The authors facilitated the use of these technologies by students in game development courses. Through five case studies, the authors observed a significant impact on students’ skill development. The students’ outputs “show that they have adopted creative, problem-solving and critical skills to address the task” (p. 16). Additionally, the students exhibited high levels of motivation and engagement with this approach. The authors acknowledge the broader challenge of providing students with access to such technologies, allowing them to make their own judgments about their usage, arguing “that both students and educators need to be flexible, creative, reflective and willing to increase their skills to meet the demands of a future society” (p. 18).
Another article also reports the use of ChatGPT in a specific HE context, particularly within engineering education [33]. This article focuses on assessment integrity, as indicated by its research question: “How might ChatGPT affect engineering education assessment methods, and how might it be used to facilitate learning?” (p. 560). A group of authors from different universities and engineering disciplines questioned ChatGPT to determine whether its responses corresponded to passable responses. The authors highlight, as a primary finding, the need to reevaluate assessment strategies, as they accumulated evidence suggesting that ChatGPT can generate passable responses.
Popovici [35] also addresses the necessity of developing new approaches and strategies for using GAI tools, particularly focusing on the positive use of ChatGPT in HE contexts. Specifically, the authors examined the application of ChatGPT within a functional programming course. They were surprised to find that their students were already using ChatGPT to complete assignments and recognized its potential to aid in their learning (p. 1). Subsequently, the authors employed ChatGPT as if it were a student to evaluate their performance in programming tasks and code review. The results indicated that “ChatGPT as a student would receive an approximate score of 7 out of a maximum of 10. Nonetheless, 43% of the accurate solutions provided by ChatGPT are either inefficient or comprise of code that is incomprehensible for the average student” (p. 2). These findings highlight both the potential and limitations of utilizing ChatGPT in programming tasks and code review.
Elkhodr, Gide, Wu, and Darwish [20] examined the use of ChatGPT in another HE context, specifically within ICT education. The study aimed to “examine the effectiveness of ChatGPT as an assistive technology at both undergraduate (UG) and postgraduate (PG) ICT levels” (p. 71), and three case studies were conducted with students. In each case study, students were divided into two groups: one group was permitted to use ChatGPT, while the other was not. Subsequently, the groups were interchanged so that each group of students performed the same tasks with and without the assistance of ChatGPT, and they were asked to reflect on their experiences (p. 72). The results indicated that students responded positively to the use of ChatGPT, considering it to be a valuable resource that they would like to continue using in the future.
Duong, Vu, and Ngo [19] describe a study in which a modified version of a Technology Acceptance Model (TAM) was used “to explain how effort and performance expectancies affect higher education students’ intentions and behaviors to use ChatGPT for learning, as well as the moderation effect of knowledge sharing on their ChatGPT-adopted intentions and behaviors” (p. 3). The results of the study show that student behavior is influenced by both effort expectancy and performance expectancy, which is evident in their use of ChatGPT for learning purposes (p. 13).

3.1.2. Exploring the Use of GAI Technology—A Broader Perspective

Nine articles provide a broad perspective on the use of GAI technology across various higher education contexts.
Articles by Lopezosa, Codina, Pont-Sorribes, and Vállez [31], by Shimizu et al. [37], and Yilmaz and Karaoglan Yilmaz [42] focused on the use of GAI in specific academic disciplines, demonstrating how GAI tools are used in educational contexts and their impact on specific domains of teaching and learning. The articles by Lopezosa, Codina, Pont-Sorribes, and Vállez [31] and Yilmaz and Karaoglan Yilmaz [42] specifically addressed the issue of integrating AI into journalism and programming education, respectively. In the context of journalism education, Lopezosa, Codina, Pont-Sorribes, and Vállez [31] aimed to “provide an assessment of their impact and potential application in communication faculties” (p. 2). They proposed training models based on the perspectives of teachers and researchers concerning the integration of AI technologies in communication faculties and their views on using GAI to “potentially transform the production and consumption of journalism” (p. 5). The results highlight the essential need to integrate AI into the journalism curriculum, although opinions on specific issues vary. A strong consensus has emerged on the ethical issues involved in using GAI tools. Regarding programming education, Yilmaz and Karaoglan Yilmaz [42] argue that GAI tools can help students develop skills in various dimensions, such as code creation, motivation, critical thinking, and others. They also note that when challenges are significant, “the use of AI tools such as ChatGPT does not have a significant effect on increasing student motivation” (p. 11).
In turn, Shimizu et al. [37] discussed the impact of using GAI in medical education, particularly its effects on curriculum reform and the professional development of medical practitioners, with concerns regarding “ethical considerations and decreased reliability of the existing examinations” (p. 1). The authors conducted a SWOT analysis, which identified 169 items grouped into five themes: “improvement of teaching and learning, improved access to information, inhibition of the existing learning processes, problems in GAI, and changes in physicians’ professionalism” (p. 4). The analysis revealed positive impacts, such as improvements in the teaching and learning process and access to information, alongside negative impacts, notably teachers’ concerns about students’ ability to “think independently” and issues related to ethics and authenticity (p. 5). The authors suggest that these aspects be considered in curriculum reform, advocating for an adaptive educational approach.
Another set of articles [13,40,41] examined the impact of GAI on HE at a macro scale, focusing on policy development, institutional strategies, and broader curricular transformations. Walczak and Cellary [40] specifically explored “the advantages and potential threats of using GAI in education and necessary changes in curricula” as well as discussing “the need to foster digital literacy and the ethical use of AI” (p. 71). A survey conducted among students revealed that the majority believed “students should be encouraged and taught how to use AI” (p. 90). The article provides a thematic analysis of the challenges and opportunities GAI presents for HE institutions. The authors acknowledge the impact that the introduction of GAI has on the world of work, raising questions about the future nature of work and how to prepare students for this reality, emphasizing that human performance is crucial to avoid “significant consequences of incorrect answers made by AI” (p. 92). Among the study’s main conclusions and recommendations are the ethical concerns in using GAI tools and the need to critically assess the content they produce.
Watermeyer, Phipps, Lanclos, and Knight [41] also raised concerns about the labor market, specifically regarding academic labor. Their work examines how GAI tools are transforming scholarly work, how these tools aim to alleviate the pressures inherent in the academic environment, and the implications for the future of the academic profession. The authors found that the uncritical use of GAI tools has significant consequences, making academics “less inquisitive, less reflexive, and more narrow and shallow scholars” (p. 14). This introduces new institutional challenges for the future of their academic endeavors.
Chan [13] focused on developing a framework for policies regarding the use of AI in HE. A survey was conducted among students, teachers, and staff members which included both quantitative and qualitative components. The results indicate that, according to the respondents, there are several aspects arising from the use of AI technologies, such as ChatGPT. For example, the importance of integrating AI into the teaching and learning process is recognized, although there is still little accumulated experience with this use. Additionally, there is “strong agreement that institutions should have plans in place associated with AI technologies” (p. 9). Furthermore, there is no particularly strong opinion about the future of teachers, specifically regarding the possibility that “AI technologies would replace teachers” (p. 9). These and other results justify the need for higher education institutions to develop AI usage policies. The authors also highlight several “implications and suggestions” that should be considered in these policies, including areas such as “training”, “ethical use and risk management”, and “fostering a transparent AI environment”, among others (p. 12).
A third group of articles [10,16,28] addressed how GAI affects learning processes, student engagement, and the overall educational experience from the students’ perspectives. Chiu [16] focused on the students’ perspectives, as reflected in the research question: “From the perspective of students, how do GAI transform learning outcomes, pedagogies and assessment in higher education?” (p. 4). Based on data collected from students, the study presents a wide range of results grouped into the three areas mentioned in the research question: learning outcomes, pedagogies, and assessment. It also presents implications for practices and policy development organized according to these three areas. Generally, the study suggests the need for higher education to evolve to incorporate the changes arising from AI development, offering a set of recommendations in this regard. It also shows that “students are motivated by the prospect of future employment and desire to develop the skills required for GAI-powered jobs” (p. 8).
The perspective of students is also explored in Jaboob, Hazaimeh, and Al-Ansi [28], specifically through data collection from students in three Arab countries. The study aimed “to investigate the effects of generative AI techniques and applications on students’ cognitive achievement through student behavior” (p. 1). Various hypotheses were established that relate GAI techniques and GAI applications to their impacts on student behavior and students’ cognitive achievement. The results show that GAI techniques and GAI applications positively impact student behavior and students’ cognitive achievement (p. 8), emphasizing the importance of improving the understanding and implementation of GAI in HE, specifically in areas such as pedagogy, administrative tasks for teachers, the economy surrounding HE systems, and cultural impacts.
Chan and Hu [10] also focused on students’ perceptions of the integration of GAI in higher education, investigating their familiarity with these technologies, their perception of the potential benefits and challenges, and how these technologies can help “enhance teaching and learning outcomes” (p. 3). A survey was conducted among students from six universities in Hong Kong, and the results show that the students have a “good understanding of GAI technologies” (p. 7) and that their attitudes towards these technologies are positive, showing willingness to use them. The results also point out that the students have some concerns about using GAI, such as fears of becoming too reliant on these technologies and recognizing that they may limit their social interactions.

3.2. The Focus on the Stakeholders—Acceptance and Perceptions

The following summary highlights 10 articles that examine the acceptance and perceptions of GAI usage from the perspectives of various stakeholders.
In Yilmaz, Yilmaz, and Ceylan [43], the authors outlined their objectives and research questions, focusing on the acceptance and perceptions of AI-powered tools among students and professors. The methods were defined in line with these objectives: to measure the degree of acceptance of educational applications among students, the authors developed an instrument based on the Unified Theory of Acceptance and Use of Technology (UTAUT) model. The results “indicated that all items possessed discriminative power” (p. 10) and that “the instrument proves to be a valid and reliable scale for evaluating students’ intention to adopt generative AI” (p. 10). Nevertheless, as with any research tool, further studies are necessary to corroborate these findings and ensure the tool’s validity across different populations and contexts.
The objective of the study by Strzelecki and ElArabawy [39] was to investigate the implications of integrating AI tools, particularly ChatGPT, into higher education contexts. The study highlights the benefits of AI chat, such as “reducing task completion time and providing immediate responses to queries, which can bolster academic performance and in turn, foster an intention to utilize such tools” (p. 15). The findings indicate that the three variables of performance expectancy, effort expectancy, and social influence significantly influence behavioral intention. This study suggests that students are more likely to utilize ChatGPT if they perceive it to be user-friendly and require less effort. This is particularly true as ChatGPT offers multilingual conversational capabilities and enables the refinement of responses. Furthermore, the results indicate that the acceptance and usage of ChatGPT are positively correlated with the influence of instructors, peers, and administrators who promote this platform to students. The authors note that ChatGPT is “not adaptive and is not specifically designed for educational purposes” (p. 18).
Chen, Zhuo, and Lin [14] offer several conclusions regarding the relationship between technology characteristics and performance. Specifically, the article provides practical recommendations for students on the appropriate use of the ChatGPT system during the learning process. Additionally, it offers guidance to developers on enhancing the functionality of the ChatGPT system. The study revealed that overall quality is a “key determinant of performance impact”. To influence the learning process effectively, the platform must support individualized learning for students, necessitating the continuous optimization and customization of features, as well as the provision of timely learning feedback.
The study by Chergarova, Tomeo, Provost, De la Peña, Ulloa, and Miranda [15] aimed to evaluate the current usage of, and readiness to embrace, new AI tools among faculty, researchers, and employees in higher education. The analysis considered both the AI tools used and their pricing models, employing the Technology Readiness Index (TRI). The outcomes demonstrate that most users prefer cost-free AI tools for creative endeavors and tasks such as idea generation, coding, and presentations. Statements from the study participants indicate that “the participants showed enthusiasm for responsible implementation in regards to integrating AI generative tools” (p. 282). Additionally, professors incorporating these technologies into their teaching practice should adopt a responsible approach.
Chan and Lee [11] underscored the significance of integrating digital technology with conventional pedagogical approaches to enhance educational outcomes and facilitate an effective learning experience. Their findings have implications for the formulation of evidence-based guidelines and policies for the integration of GAI, aiming to cultivate critical thinking and digital literacy skills in students while fostering the responsible use of GAI technologies in higher education.
Essel, Vlachopoulos, Essuman, and Amankwa [22] aimed to investigate the impact of using ChatGPT on the critical, creative, and reflective thinking skills of university students in Ghana. The findings indicate that the incorporation of ChatGPT significantly influenced critical, reflective, and creative thinking skills, as well as their respective dimensions. Consequently, the study provides guidance for academics, instructional designers, and researchers working in the field of educational technology. The authors highlighted the “potential benefits of leveraging the ChatGPT to promote students’ cognitive skills” and noted that “didactic assistance in-class activities can positively impact students’ critical, creative, and reflective thinking skills” (p. 10). Although the study did not assess the outcomes of the learning process or the effectiveness of different teaching methods, it can be concluded that using ChatGPT for in-class tasks can facilitate the development of cognitive abilities.
In the study by Rose, Massey, Marshall, and Cardon [36] the aim was to gain insight into how computer science (CS) and information systems (IS) lecturers perceive the impact of new technologies and their anticipated effects on the academic sector. The authors noted that the utilization of these technologies by students allows them to complete their assignments more efficiently. Furthermore, these technologies have the potential to facilitate the identification of coding errors in the workforce. However, there is a growing concern about the rise in plagiarism among students, which could negatively impact the integrity of higher education. Additionally, there is a “potential impact of AI chatbots on employment” (p. 185).
The study presented by Chan and Zhou [12] examined the relationship between student perceptions and their intention to employ GAI in higher education settings. The authors noted the importance of “enhancing expectancies for success and fostering positive value beliefs through personalized learning experience and strategies for mitigating GAI risks” (p. 19). The findings indicated a strong correlation between perceived value and the intention to use GAI and a relatively weak inverse correlation between perceived cost and the intention to use GAI. An analysis of GAI implications in other domains, such as education, was performed. It is crucial to evaluate the potential long-term consequences and the ethical challenges that may arise from its widespread adoption.
Greiner, Peisl, Höpfl, and Beese [25] investigated the potential for AI to be employed as a decision-making agent in semi-structured educational settings, such as thesis assessment. Furthermore, they explored the nature of interactions between AI and its human counterparts. It was observed that students’ acceptance and willingness to adopt GAI are central, highlighting the need for further research in this area. Consequently, this work presents an instrument for measuring students’ perceptions of GAI, which can be employed by researchers in subsequent studies of GAI adoption. The authors also suggested that there is a sufficient foundation to analyze AI–human communication. Additionally, the study provides valuable insights into the potential application of AI within higher education, particularly in the evaluation of academic theses.
The following study [8] examined the impact of GAI tools on researchers and research related to higher education in Saudi Arabia. The results show that participants have positive attitudes towards and high awareness of GAI in research, recognizing the potential of these tools to transform academic research. However, the importance of adequate training, support, and guidance in the ethical use of GAI emerged as a significant concern, underlining the participants’ commitment to responsible research practices and the need to address the potential biases associated with using these tools.

3.3. Focus on Tasks and Activities—Utilizing GAI in Various Situations

Twelve articles are summarized, focusing on the application of GAI in diverse contexts such as assessment, writing, content analysis, content generation, academic integrity, and feedback.
In the study presented by Singh [38], the authors assessed the impact of ChatGPT on scholarly writing practices, with particular attention to potential instances of plagiarism. Furthermore, the relatively under-researched field of GAI and its potential application to educational contexts was discussed, referencing three professors from South Africa. To understand the impact of such technologies on the teaching and learning process, a comprehensive academic study is essential, encompassing not only universities but also all educational settings. The authors noted “that lecturers need to develop their technical skills and learn how to incorporate these kinds of technologies into their classes and adapt how they assess students” (p. 218). Finally, the insights from the professors summarize their views on the impact of ChatGPT on plagiarism within higher education and its effects on scholarly writing.
The study by Farazouli, Cerratto-Pargman, Bolander-Laksov, and McGrath [23] aimed to investigate the potential impact of emerging technologies, specifically AI chatbots, on the assessment practices employed by university teachers. In the Turing Test experiment, participants were not specifically requested to identify responses written by the chatbot, but they were required to provide scores and evaluate the quality of the responses. The results from focus group interviews indicated that participants were consistently mindful of the possibility that their assessment might be influenced by the presence of text generated by ChatGPT. The authors noted that “participants perceived that the evaluation of the responses required them to distinguish between student and chatbot texts” (p. 10). The findings suggest that assessments might have been influenced by this awareness of AI, particularly where responses were similar or identical to those produced by the chatbot and the students. Overall, the study examines teachers’ responses and perceptions regarding emerging technological artifacts such as ChatGPT, with the goal of understanding the implications for their assessment practices in this context.
In their study, Barrett and Pack [9] examined the potential interactions between an inexperienced or inadequately trained educator or student and a GAI tool, such as ChatGPT. The aim was to inform approaches to GAI integration in educational settings and provide “initial insights into student and teacher perspectives on using GAI in academic writing” (p. 18). A potential drawback of this study is the non-random selection of the sample, which limits the generalization of the findings to a larger, broader population.
De Paoli [18] presents the results and reflections of an experimental investigation conducted with the LLM GPT 3.5-Turbo to perform an inductive thematic analysis (TA). The author stated that the paper “was written as an experiment and as a provocation, largely for social sciences as an audience, but also for computer scientists working on this subject” (p. 18). The experiment compared the results of the analysis on the ‘gaming’ and ‘teaching’ datasets. The question of whether an AI natural language processing (NLP) model can be used for this kind of data analysis arises from the fact that inductive thematic analysis is largely dependent on the human interpretation of meaning.
In the following study [26], the objective was to rigorously examine the discourses used by five online paraphrasing websites to justify the use of an automated paraphrasing tool (APT). The aim was to identify the appropriate and inappropriate ways that these discourses are deployed. The competing discourses were conceptualized using the metaphorical representation of the dichotomy between a sheep and a wolf. Additionally, the metaphor of educators acting as shepherds was employed to illustrate how students may become aware of the claims presented on the APT websites and develop critical language awareness when exposed to such content. Educators can assist students in this regard by acquiring an understanding of how these websites use language to persuade users to circumvent learning activities.
The article by Kelly, Sullivan, and Strampel [29] provides a novel foundation for enhancing our understanding of how these tools may affect students as they engage in academic pursuits at the university level. The authors observed “that students had relatively low knowledge, experience, and confidence with using GAI”. Additionally, the rapid advent of these resources in late 2022 and early 2023 meant that many students were initially unaware of their existence. The limited timeframe precluded academic teaching staff from considering the emerging challenges and risks associated with GAI and how to incorporate these tools into their teaching and learning practices. The findings indicated that students’ self-assessed proficiency in utilizing GAI ethically increases with experience. It is notable that students are more likely to learn about GAI through social media.
The study by Laker and Sena [30] provides a foundation for future research on the significant impact that AI will have on HE in the coming years. The integration of GAI models such as ChatGPT, in higher education—particularly in the field of business analytics—offers both potential advantages and inherent limitations. AI has the potential to significantly enhance the learning experience of students by providing code generation and step-by-step instructions for complex tasks. However, it also raises concerns about academic dishonesty, impedes the development of foundational skills, and brings up ethical considerations. The authors obtained insights into the accuracy of the generated content and the potential for detecting its use by students. The study indicated that ChatGPT can offer accurate solutions to certain types of assessments, including straightforward Python quizzes and introductory linear programming problems. It also illustrates how instructors can identify instances where students have used AI tools to assist with their learning, despite explicit instructions not to do so.
Two studies covered, specifically, the topic of academic integrity/plagiarism. The study by Perkins, Roe, Postma, McGaughran, and Hickerson [34] examined the effectiveness of academic staff utilizing the Turnitin artificial intelligence (AI) detection tool to identify AI-generated content in university assessments. Experimental submissions were created using ChatGPT, employing prompting techniques to minimize the likelihood of detection by AI content detectors. The results indicated that Turnitin’s AI detection tool has the potential to support academic staff in detecting AI-generated content. However, the relatively low detection accuracy among participants suggests a need for further training and awareness. The findings demonstrate that the Turnitin AI detection tool is not particularly robust against the use of these adversarial techniques, raising questions regarding the ongoing development and effectiveness of AI detection software.
The aim of the article by Currie and Barry [17] was to analyze the growing challenge of academic integrity in the context of AI algorithms, such as the GPT 3.5-powered ChatGPT chatbot. This issue is particularly evident in nuclear medicine training, which has been impacted by these new technologies. The chatbot “has emerged as an immediate threat to academic and scientific writing” (p. 247). The authors concluded that there is a “limited generative capability to assist student” (p. 253) and noted “limitations on depth of insight, breadth of research, and currency of information” (p. 253). Similarly, the use of inadequate written assessment tasks can potentially increase the risk of academic misconduct among students. Although ChatGPT can generate examination answers in real-time, its performance is constrained by the superficial nature of the evidence of learning produced by its responses. These limitations, which reduce the risk of students benefiting from cheating, also limit ChatGPT’s potential for improving learning and writing skills.
Alexander, Savvidou, and Alexander [7] proposed a consideration of generative AI language models and their emerging implications for higher education. The study addressed the potential impact on English as a second language teachers’ existing professional knowledge and skills in academic writing assessment, as well as the risks that such AI language models could pose to academic integrity, and the associated implications for teacher training. In conclusion, the authors noted that there is currently no “fully reliable way of establishing whether a text was written by a human or generated by an AI” (p. 40), and that “human evaluators’ expectations of AI texts differ from what in reality is generated by ChatGPT” (p. 40).
In their article, Hassoulas, Powell, Roberts, Umla-Runge, Gray, and Coffey [27] addressed the responsibility of integrating assessment strategies and broadening the definition of academic misconduct as this new technology emerges. The results suggest that, at present, experienced markers cannot consistently distinguish between student-written scripts and text generated by natural language processing tools, such as ChatGPT. Additionally, the authors confirm that “despite markers suspecting the use of tools such as ChatGPT at times, their suspicions were not proven to be valid on most occasions” (p. 75).
The article by Escalante, Pack, and Barrett [21] examined the use of GAI as an automatic essay evaluator, incorporating learners’ perspectives. The findings suggest that AI-generated feedback did not lead to greater linguistic progress compared to feedback from human tutors for new language students. The authors note that there is no clear superiority of one feedback method over the other in terms of scores.

3.4. Analysis of the Methodologies Employed

3.4.1. General Analysis

The methodologies employed by the authors of the 37 selected papers, which investigate the integration of AI tools such as ChatGPT across various domains, particularly in education, were analyzed. These methodologies were categorized and examined using the model proposed by Saunders et al. [44]—the Research Onion. This model provides a comprehensive and visually rich framework for conducting or analyzing methodological research in the social sciences. It provides a structured approach with several layers, each of which must be sequentially examined.
Saunders et al. [44] divided the model into three levels of decision-making: 1. The two outermost rings encompass research philosophy and research approach. 2. The intermediate level includes research design, which comprises methodological choices, the research strategy, and the time horizon. 3. The innermost core consists of tactics, including aspects of data collection and analysis. Each layer of the Research Onion presents choices that researchers must confront, and decisions at each stage influence the overall design and direction of the study.
By aligning the papers within the structured layers of the Research Onion, we aimed to gain a deeper understanding of the methodological choices made across these studies and to provide a comprehensive overview of the trends and focal points in GAI research.
Since the research philosophy adopted by the authors is often not clearly stated in most papers, we chose to focus on three major areas: research approach, research strategies, and data collection and analysis. Research approaches can be either deductive or inductive. In the deductive approach, the researcher formulates a hypothesis based on a preexisting theory and then designs the research approach to test it. This approach is suitable for the positivist paradigm, enabling the statistical testing of expected results to an accepted level of probability. Conversely, the inductive approach allows the researcher to develop a theory rather than adopt a preexisting one.
Our analysis reveals that 15 out of 37 papers adopted a deductive approach, while 22 employed an inductive approach. Examples of studies employing the deductive approach include those of Popovici [35], Yilmaz and Karaoglan Yilmaz [42], and Greiner, Peisl, Höpfl, and Beese [25]. In contrast, examples of studies using the inductive approach include papers by Walczak and Cellary [40], French, Levi, Maczo, Simonaityte, Triantafyllidis, and Varda [24], and Barrett and Pack [9]. Scheme 1 illustrates the distribution of the papers according to the research approach employed.
Delving deeper into the Research Onion, the research strategies layer reflects the overall operational approach to conducting research. Among the 37 articles analyzed, the most frequently encountered strategies were survey research, followed by experimental research and case studies. The least used were literature reviews and mixed methods, as shown in Scheme 2:
  • Experimental research: researchers aim to study cause–effect relationships between two or more variables. Examples include the works of Alexander, Savvidou, and Alexander [7] and Strzelecki and ElArabawy [39].
  • Survey research: this method involves seeking answers to “what”, “who”, “where”, “how much”, and “how many” types of research questions. Surveys systematically collect data on perceptions or behaviors. Examples include the papers of Yilmaz, Yilmaz, and Ceylan [43] and Rose, Massey, Marshall, and Cardon [36].
  • Case studies: researchers conduct in-depth investigations. Examples include the works of Lopezosa, Codina, Pont-Sorribes, and Vállez [31], as well as Jaboob, Hazaimeh, and Al-Ansi [28].
Scheme 2. Distribution of papers by research strategies.
The final layer analyzed was the data collection and analysis methods, which are crucial for understanding how empirical data are gathered. As shown in Scheme 3, the most used methods are surveys and questionnaires, followed by experimental methods and then interviews and focus groups.
  • Surveys and questionnaires: according to the analyzed papers, 12 studies gathered quantitative data from broad participant groups. Examples include Elkhodr, Gide, Wu, and Darwish [20] and Perkins, Roe, Postma, McGaughran, and Hickerson [34].
  • Experimental methods: central to 11 studies, these methods tested specific hypotheses under controlled conditions. Examples include Currie and Barry [17] and Al-Zahrani [8].
  • Interviews and focus groups: seven studies collected qualitative data. Examples include Singh [38] and Farazouli, Cerratto-Pargman, Bolander-Laksov, and McGrath [23].
Scheme 3. Distribution of papers by data collection and analysis methods.
The comprehensive classification of the 37 papers reveals trends in the methodological choices made by the researchers. Survey research predominates, being used in 12 studies, compared to experimental research (9), case studies (6), and qualitative approaches (7). This distribution underscores a preference for surveys, likely due to several advantages, such as the ability to generalize findings across a larger population. Additionally, surveys are well suited for exploratory research aimed at gauging perceptions, attitudes, and behaviors towards GAI technologies. Conversely, while experimental research offers the advantage of isolating variables to establish causal relationships, it was not employed as frequently as it could have been. This may be due to the logistical complexities and higher costs associated with conducting such studies.
The diversity in data collection and analysis methods used across these papers highlights the varying research priorities and objectives. While surveys and questionnaires dominate, ensuring broad coverage and ease of analysis, methods like interviews and focus groups are invaluable for their depth. These qualitative tools are essential for exploring nuances that surveys might overlook.

3.4.2. Thing Ethnography: Adapting Ethnographic Methods for Contemporary Challenges—Analysis of a Case

Qualitative research approaches are continually adapting to address emerging challenges and leverage novel technologies. Thing ethnography exemplifies this transformation by modifying conventional ethnographic techniques to accommodate contemporary constraints, such as restricted access and the need for rapid data gathering. A distinguishing feature of this methodology, especially in its latest implementations, is the integration of artificial intelligence tools, such as ChatGPT, into the ethnographic interview process. This allows researchers to incorporate AI as part of the ethnographic method itself, providing a distinct perspective on data collection in the digital technology era. Given this novelty, we analyze the approach in greater depth below.
Thing ethnography is a more efficient iteration of classic ethnography, aiming to collect cultural and social knowledge without requiring extensive on-site presence. This approach is particularly advantageous in dynamic contexts marked by rapid technological progress and frequent changes. Thing ethnography empowers researchers to expedite the analysis of human–AI interactions and their impacts by incorporating AI platforms like ChatGPT into their studies.
The research study “Challenges and opportunities of generative AI for higher education as explained by ChatGPT” [32] exemplifies this innovative methodology. This research employed ethnography to assess the integration of GAI tools in higher education. An innovative feature of this study was the utilization of ChatGPT for conducting certain ethnographic interviews, thereby gathering data on AI and employing AI as a tool in the data collection process. This approach enabled a comprehensive understanding of AI’s function and its perception among students and instructors, providing a depth of insight that conventional interviews conducted solely with humans may not adequately capture.
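To make the mechanics of this AI-mediated interviewing more concrete, the sketch below shows one way a scripted, semi-structured interview with ChatGPT could be automated. It is a minimal illustration under stated assumptions, not the procedure of [32]: it presumes the OpenAI Python client (openai >= 1.0) with an API key available in the environment, and the system prompt, interview guide, and helper function names are our own illustrative choices.

```python
# Minimal sketch of a thing-ethnography-style interview in which ChatGPT is
# treated as the interviewee. Assumes the OpenAI Python client (openai >= 1.0)
# and an OPENAI_API_KEY environment variable; the interview guide, system
# prompt, and model choice are illustrative, not those used in [32].
from openai import OpenAI

client = OpenAI()

INTERVIEW_GUIDE = [
    "What opportunities do you see for generative AI in higher education?",
    "What risks does your use pose for assessment integrity?",
    "How should institutions regulate tools like you?",
]


def interview(questions: list[str], model: str = "gpt-3.5-turbo") -> list[dict]:
    """Run a scripted semi-structured interview and return a transcript."""
    messages = [{
        "role": "system",
        "content": ("You are ChatGPT, participating as an interviewee in an "
                    "ethnographic study on generative AI in higher education."),
    }]
    transcript = []
    for question in questions:
        messages.append({"role": "user", "content": question})
        reply = client.chat.completions.create(model=model, messages=messages)
        answer = reply.choices[0].message.content
        # Keep the full conversation history so later answers stay in context.
        messages.append({"role": "assistant", "content": answer})
        transcript.append({"question": question, "answer": answer})
    return transcript


if __name__ == "__main__":
    for turn in interview(INTERVIEW_GUIDE):
        print(f"Q: {turn['question']}\nA: {turn['answer']}\n")
```

In a thing-ethnography design, the resulting transcript would then be treated like any other interview transcript and analyzed alongside data collected from human participants.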
Integrating AI, such as ChatGPT, with thing ethnography offers numerous benefits, including enhanced data gathering efficiency and improved data quality. The research study by Michel-Villarreal, Vilalta-Perdomo, Salinas-Navarro, Thierry-Aguilera, and Gerardou [32] exemplifies the effectiveness of thing ethnography in rapidly producing comprehensive and meaningful observations that contribute to the development of educational policies and practices regarding AI. This approach extends the conventional boundaries of ethnographic research and adapts them to accommodate the intricacies of digital interaction and AI facilitation.
The integration of AI interviews into thing ethnography marks a methodological advancement in qualitative research. By employing AI as both subject and tool in ethnographic investigations, researchers can uncover new social and cultural dimensions of interaction in the digital era. This evolving methodology promises to offer insights into the complex relationship between humans and AI, potentially reshaping our understanding of technology and society.

4. Discussion

4.1. Discussion of Results

The integration of GAI in HE has been studied across various dimensions, revealing its multifaceted impact on educational practices, stakeholders, and activities. This discussion synthesizes findings from the 37 selected articles, grouped into three categories: focus on the technology, focus on the stakeholders, and focus on the activities, as outlined in Table 7.

4.1.1. Focus on the Technology—The Use of GAI

The technological capabilities and limitations of GAI tools, particularly ChatGPT, have been central to numerous studies. These tools offer unprecedented opportunities for innovation in education but also pose significant challenges.
ChatGPT’s application in higher education is widespread, with studies exploring its role in enhancing student support, teaching efficiency, and research productivity. For example, a study by Michel-Villarreal, Vilalta-Perdomo, Salinas-Navarro, Thierry-Aguilera, and Gerardou [32] utilized ChatGPT in semi-structured interviews to identify educational challenges and opportunities, revealing significant potential for providing continuous student support and introducing interactive learning experiences. In game development courses, the integration of ChatGPT and Dall-E significantly improved students’ creative, problem-solving, and critical skills, demonstrating the tools’ ability to foster flexible and adaptive learning [24].
In specific educational contexts like engineering, ChatGPT’s impact on assessment integrity has been scrutinized. Researchers found that ChatGPT could generate passable responses to assessment questions, prompting a reevaluation of traditional assessment methods to maintain academic standards [33]. Similarly, in functional programming courses, ChatGPT assisted students with assignments and performance evaluations, highlighting both its potential and limitations [35].
Beyond ChatGPT, GAI tools have been integrated into various academic disciplines, demonstrating their broad applicability and impact. For instance, in journalism education, GAI tools were seen as transformative, with significant ethical considerations regarding their use [31]. In programming education, GAI tools enhanced coding skills and critical thinking, though their impact on motivation varied depending on the challenge’s complexity [42]. In medical education, the need for curriculum reform and professional development to address ethical concerns and improve teaching and learning processes was emphasized [37]. At the institutional level, GAI tools were recognized for their potential to drive policy development, digital literacy, and ethical AI use, though concerns were raised about their possible negative impact on academic labor, such as making scholars less inquisitive and reflexive [40,41].
Students’ perspectives on GAI technology further enrich this narrative. Research indicated that students were generally familiar with and positively inclined towards GAI tools, although they expressed concerns about over-reliance and social interaction limitations. These studies suggested that higher education must evolve to incorporate AI-driven changes, focusing on preparing students for future employment in AI-powered jobs [10,16,28].
This discussion reflects the multifaceted and interconnected impact of GAI in HE:
  • Versatility and potential: GAI tools like ChatGPT demonstrate significant potential across various disciplines, enhancing student support, teaching efficiency, and research productivity. They offer innovative learning experiences and assist in routine educational tasks, thereby freeing up valuable time for educators to focus on complex teaching and research activities (e.g., [24,32]).
  • Assessment challenges: the use of ChatGPT in educational settings raises concerns about assessment integrity. Studies have shown that ChatGPT can generate passable responses to assessment questions, prompting the need for reevaluating traditional assessment strategies to maintain academic standards (e.g., [33]).
  • Broader impact: beyond specific applications like ChatGPT, GAI tools have broad applicability and impact across different academic disciplines, including journalism, programming, and medical education. These tools are recognized for their transformative potential, though ethical considerations and the need for curriculum reform are essential (e.g., [31,37,42]).

4.1.2. Focus on Stakeholders: Acceptance and Perceptions

The acceptance and perceptions of GAI usage among various stakeholders, including students, teachers, and institutional leaders, are crucial for successful integration. Several studies employed theoretical models to measure acceptance levels and identify influencing factors.
  • Students’ acceptance: the Unified Theory of Acceptance and Use of Technology (UTAUT) was used to evaluate students’ acceptance of GAI. These studies confirmed the instrument’s validity but recommended further research to ensure its applicability across different contexts [43]. Key factors influencing students’ behavioral intentions towards using ChatGPT included performance expectancy, effort expectancy, and social influence. For instance, the user-friendliness and multilingual capabilities of ChatGPT were found to enhance its acceptance [39].
  • Instructors’ perceptions: instructors’ perceptions highlighted the practical implications of GAI integration. Research indicates that the overall quality and customization of GAI tools were key determinants of their impact on learning. Continuous optimization and timely feedback were essential to maximize benefits [14]. Moreover, responsible implementation was emphasized, with educators encouraged to adopt a cautious approach when integrating AI into their teaching practices [11,15].
  • Institutional impact: at the institutional level, GAI tools were recognized for their potential to drive policy development and broader curricular transformations. Surveys and studies suggested that higher education institutions should develop comprehensive plans for AI usage, incorporating ethical guidelines and risk management strategies [13]. These institutional strategies are crucial for fostering a supportive environment for AI adoption and addressing potential ethical concerns [11].
Regarding the stakeholders, the main conclusions can be summarized as follows:
  • Student acceptance: the acceptance of GAI tools among students is influenced by factors such as performance expectancy, effort expectancy, and social influence. Studies indicate that the user-friendliness and multilingual capabilities of tools like ChatGPT enhance their acceptance. Effective promotion and support from educators and administrators are crucial (e.g., [39,43]).
  • Instructor perceptions: instructors recognize the practical implications of integrating GAI tools. Key determinants of impact include the overall quality and customization of these tools. Continuous optimization, timely feedback, and responsible implementation are essential for maximizing benefits and addressing potential challenges (e.g., [14,15]).
  • Institutional strategies: higher education institutions need to develop comprehensive plans for AI usage, incorporating ethical guidelines and risk management strategies. Institutional support is vital for fostering a positive environment for AI adoption and addressing concerns about academic labor and ethical use (e.g., [13,41]).

4.1.3. Focus on Tasks and Activities: Utilizing GAI in Various Situations

Finally, the practical application of GAI technologies extends across various tasks and activities in higher education, including assessment, writing, content analysis, content generation, academic integrity, and feedback.
The impact of ChatGPT on scholarly writing practices and assessment is a key area of exploration. Concerns about potential plagiarism have prompted calls for educators to develop technical skills and adapt assessment strategies to effectively incorporate AI tools [23,38]. These studies highlighted the need for clear guidelines to distinguish between human and AI-generated content, thereby ensuring academic integrity.
GAI’s role in content analysis and generation is another significant focus. For example, an experimental investigation using GPT-3.5-Turbo for inductive thematic analysis explored whether AI models could effectively interpret data typically analyzed by humans [18]. Additionally, research on automated paraphrasing tools emphasizes the importance of educators understanding the persuasive language these tools use to develop students’ critical language awareness [26].
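To illustrate how such LLM-assisted analysis can be operationalized, the sketch below shows the initial coding step of an inductive thematic analysis in the spirit of [18]. It is a hedged example under stated assumptions rather than a reproduction of that study’s pipeline: the prompt wording, temperature setting, and output handling are our own, and it again assumes the OpenAI Python client and model access.

```python
# Illustrative sketch of LLM-assisted initial coding for inductive thematic
# analysis, loosely inspired by [18]. Assumes the OpenAI Python client
# (openai >= 1.0); the prompt, parameters, and parsing are assumptions, not
# the procedure reported in the original study.
from openai import OpenAI

client = OpenAI()

CODING_PROMPT = (
    "You are assisting with inductive thematic analysis. Read the interview "
    "excerpt and propose two to four short codes (labels of a few words) that "
    "capture its meaning, one per line, without going beyond the text."
)


def generate_codes(excerpt: str, model: str = "gpt-3.5-turbo") -> list[str]:
    """Ask the model to suggest initial codes for one interview excerpt."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # favour reproducible coding over creative variation
        messages=[
            {"role": "system", "content": CODING_PROMPT},
            {"role": "user", "content": excerpt},
        ],
    )
    # Split the reply into one code per non-empty line.
    return [line.strip("- ").strip()
            for line in response.choices[0].message.content.splitlines()
            if line.strip()]


if __name__ == "__main__":
    excerpt = ("I use ChatGPT to get a first draft of feedback on my essays, "
               "but I worry that I am learning less by relying on it.")
    print(generate_codes(excerpt))
```

Codes produced in this way would still require human review and clustering into themes, in line with the cautions raised in [18] about the limits of the approach.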
Academic integrity challenges posed by GAI tools are recurrent themes. Studies examined the impact of GAI across various disciplines, including nuclear medicine training, where ChatGPT’s potential to generate superficial examination answers was highlighted [17]. The effectiveness of AI detection tools like Turnitin in identifying AI-generated content was also evaluated, suggesting a need for further training and development to improve detection accuracy [34].
The implications of GAI tools for providing feedback and enhancing the learning experience have been explored in various studies. For instance, research on the impact of GAI tools on English as a second language teachers and their assessment practices underscored the importance of balancing human and AI-generated feedback [7]. Additionally, discussions on the integration of assessment strategies and the broader definition of academic misconduct revealed that experienced markers often struggled to distinguish between student-written and AI-generated texts [27].

4.1.4. Main Findings

Overall, the discussion suggests that while GAI tools, such as ChatGPT, offer significant opportunities to enhance education, their integration must be carefully managed to address challenges related to academic integrity, ethical use, and the balance between AI assistance and traditional learning methods. By considering the perspectives of technology, stakeholders, and activities, this synthesis provides a comprehensive understanding of the multifaceted implications of GAI in higher education.
Here are the main findings regarding the activities (Figure 3):
  • Academic integrity: the integration of GAI tools poses challenges related to academic integrity, particularly concerning plagiarism and the authenticity of AI-generated content. Clear guidelines and policies are necessary to ensure academic standards are met and promote responsible use (e.g., [23,34,38]).
  • Educational enhancement: GAI tools can significantly enhance the learning experience by providing support in tasks like content generation, analysis, and feedback. However, balancing AI assistance with traditional learning methods is crucial to ensure comprehensive educational development (e.g., [18,26]).
  • Feedback and assessment: the role of GAI tools in providing feedback and shaping assessment practices is significant. These tools can offer valuable insights and support, though the distinction between human and AI-generated content remains a challenge. Effective integration requires a nuanced approach to feedback and assessment strategies (e.g., [7,27]).
Figure 3. A summary of the main findings.

4.2. Contributions in Relation to Other Systematic Reviews

Although the primary study covers the period from January 2023 to January 2024, an additional search of the databases was conducted before finalizing the analysis. This search identified three relevant studies, which provided a basis for comparing our results with existing findings and highlighting areas of alignment and divergence [45,46,47].
While our study examined GAI in general and its use in higher education, Filippi and Motyl [47] offer a systematic review, with explicit inclusion and exclusion criteria and selection procedures, focusing specifically on the adoption of LLMs (large language models) in engineering education. Their research provides insights into how LLMs can be helpful across fields such as mechanical, software, and chemical engineering, confirming a positive impact on student learning, in line with our findings. Moreover, Filippi and Motyl [47] emphasize that the best results occur when LLMs are used as a complementary tool to traditional learning methods: students performed better when they did not rely solely on LLMs, supporting our own caution against over-reliance. Finally, regarding the impact on critical thinking, Filippi and Motyl [47] underscore the importance of integrating LLMs in a way that promotes, rather than diminishes, critical thinking skills, reflecting our concerns.
Baig and Yadegaridehkordi [45] cover the use of ChatGPT in higher education and its influence on educational processes, including its limitations and the need for continuous improvement. Like the present study, their research addresses the post-adoption stages, intention to use, and acceptance of these technologies. Furthermore, they stress the need for further research into ChatGPT’s diverse applications and benefits across various academic roles, including personalized learning experiences, instant feedback, efficient grading, supervision facilitation, and lesson planning, to name a few.
Lastly, the study by Castillo-Martínez, Flores-Bueno, Gómez-Puente, and Vite-León [46] explores the broader use of AI in scientific research within higher education, offering insights into how AI, including ChatGPT, can influence research processes, thereby extending our study. Nevertheless, both studies (ours and [46]) stress the importance of balancing AI assistance with human oversight to maintain academic quality and creativity.
In summary, the results of the present study, along with the three studies mentioned above, report positive outcomes regarding the ability of AI, ChatGPT, and LLMs to enhance student engagement, efficiency, and overall learning outcomes. Whether in classroom learning, administrative tasks, or scientific research, LLMs like ChatGPT are valuable tools.

4.3. Research Agenda

In this section, considering the main findings, we propose a potential research agenda for the future. Based on the research questions, findings, and conclusions of the 37 studies, we identify six key areas where knowledge about GAI and its use in HE needs further development. These areas are as follows (Figure 4):
  • Assessment integrity and pedagogical strategies: it is necessary to develop robust assessment methods and pedagogical strategies that effectively incorporate GAI tools while maintaining academic integrity. For example, we need to understand how traditional assessment strategies can be adapted to account for the capabilities of GAI tools like ChatGPT and identify the most effective pedagogical approaches for integrating GAI tools into various disciplines without compromising academic standards.
  • Ethical considerations and policy development: another area requiring further research is the establishment of ethical guidelines and institutional policies for the responsible use of GAI tools in higher education. Possible research questions include the ethical challenges arising from the use of GAI tools in educational contexts and how higher education institutions can develop and implement policies that promote the ethical use of GAI.
  • Impacts on teaching and learning processes: it is essential to investigate how GAI tools influence teaching methodologies, learning outcomes, and student engagement. Questions such as how GAI tools impact student engagement, motivation, and learning outcomes across different disciplines, as well as what best practices exist for integrating GAI tools into the curriculum to enhance learning, require further study.
  • Student and instructor perceptions: further research is needed to explore the perceptions of GAI tools among students and teachers. Researchers should investigate the acceptance of these tools and identify factors influencing their adoption. For instance, it is crucial to understand what drives the acceptance and usage of GAI tools among students and instructors, and how perceptions of these tools differ across various demographics and educational contexts.
  • Technological enhancements and customization: it is essential to evaluate the effectiveness of various customization and optimization strategies for GAI tools in educational settings. For instance, it is important to understand how GAI tools can be customized to better meet the needs of specific educational contexts and disciplines and to identify which technological enhancements can improve the usability and effectiveness of these tools.
  • Future skills and workforce preparation: it is crucial to understand the role of GAI tools in preparing students for future employment and developing necessary skills for the evolving job market. Research should focus on identifying the essential skills students need to effectively use GAI tools in their future careers and how higher education curricula can be adapted to incorporate these tools, preparing students for AI-driven job markets.
Figure 4. Summary of the research agenda.

5. Conclusions

The adoption of GAI is irreversible. Increasingly, students, teachers, and researchers regard this technology as valuable support for their work, and it affects all aspects of the teaching–learning process, including research. This systematic literature review of 37 articles on the use of GAI in HE, published between January 2023 and January 2024, reveals both opportunities and concerns.
GAI tools have demonstrated their potential to enhance student support, improve teaching efficiency, and facilitate research activities. They offer innovative and interactive learning experiences while aiding educators in managing routine tasks. However, these advancements necessitate the reevaluation of assessment strategies to maintain academic integrity and ensure the quality of education.
The acceptance and perceptions of GAI tools among students, instructors, and institutional leaders are critical for their successful implementation. Factors such as performance expectancy, effort expectancy, and social influence significantly shape attitudes toward these technologies. Ethical considerations, particularly concerning academic integrity and responsible use, must be addressed through comprehensive policies and guidelines.
Moreover, GAI tools can significantly enhance various educational activities, including assessment, writing, content analysis, and feedback. However, balancing AI assistance with traditional learning methods is crucial. Future research should focus on developing robust assessment methods, ethical guidelines, and effective pedagogical strategies to maximize the benefits of GAI while mitigating potential risks.
This study presents several limitations. Firstly, the articles reviewed cover only the period up to the end of January 2024; the analysis therefore needs to be updated with studies published after this date. Secondly, the search query used may be considered a limitation, as using different words might identify different studies. At the time the search was conducted, the keywords we used were those that seemed most appropriate, although other keywords, such as LLMs (large language models), could have been used to broaden the scope of the review and include other relevant studies. Additionally, this SLR is limited to studies conducted in HE; research conducted in other contexts or educational levels could provide different results and perspectives. Finally, this study employs a mixed approach, using a descriptive analysis in Section 3 and a critical meta-analysis in Section 4. This choice aimed to provide both a comprehensive overview and a synthesis of key findings, though it may reduce the level of individual critique for each article in favor of a broader comparative analysis.
For future work, it is important to note that the use of GAI in higher education is still in its early stages, presenting numerous opportunities for further analysis across topics such as pedagogy, assessment, ethics, technology, and the development of skills for future workforce competitiveness. While this study provides a broad synthesis of the selected papers, future research could benefit from a more focused analysis of how each study addresses specific challenges and opportunities, including academic integrity, pedagogical strategies, and workforce preparation. A detailed evaluation of individual contributions in these areas would offer deeper insights and enhance the review’s granularity.
In conclusion, while GAI tools like ChatGPT offer transformative opportunities for higher education, their integration must be carefully managed. By addressing ethical concerns, fostering stakeholder acceptance, and continuously refining pedagogical approaches, higher education institutions can fully harness the potential of GAI technologies. This approach will not only enhance the educational experience but also prepare students for the evolving demands of an AI-driven future.

Author Contributions

Conceptualization, J.B., A.M. and G.C.; methodology, J.B., A.M. and G.C.; formal analysis, J.B., A.M. and G.C.; writing—original draft preparation, J.B., A.M. and G.C.; writing—review and editing, J.B., A.M. and G.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by national funds through the FCT—Foundation for Science and Technology, I.P., under the projects UIDB/05460/2020 and UIDP/05422/2020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Fjelland, R. Why general artificial intelligence will not be realized. Humanit. Soc. Sci. Commun. 2020, 7, 10. [Google Scholar] [CrossRef]
  2. Farrelly, T.; Baker, N. Generative artificial intelligence: Implications and considerations for higher education practice. Educ. Sci. 2023, 13, 1109. [Google Scholar] [CrossRef]
  3. Bond, M.; Khosravi, H.; De Laat, M.; Bergdahl, N.; Negrea, V.; Oxley, E.; Pham, P.; Chong, S.W.; Siemens, G. A meta-systematic review of artificial intelligence in higher education: A call for increased ethics, collaboration, and rigour. Int. J. Educ. Technol. High. Educ. 2024, 21, 4. [Google Scholar] [CrossRef]
  4. Crompton, H.; Burke, D. Artificial intelligence in higher education: The state of the field. Int. J. Educ. Technol. High. Educ. 2023, 20, 22. [Google Scholar] [CrossRef]
  5. Saunders, M.; Lewis, P.; Thornhill, A. The Research Onion of Mark Saunders. In Research Methods for Business Students, 8th ed.; Pearson: London, UK, 2019. [Google Scholar]
  6. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Syst. Rev. 2021, 10, 372. [Google Scholar] [CrossRef] [PubMed]
  7. Alexander, K.; Savvidou, C.; Alexander, C. Who wrote this essay? Detecting AI-generated writing in second language education in higher education. Teach. Engl. Technol. 2023, 23, 25–43. [Google Scholar] [CrossRef]
  8. Al-Zahrani, A.M. The impact of generative AI tools on researchers and research: Implications for academia in higher education. Innov. Educ. Teach. Int. 2023, 61, 1029–1043. [Google Scholar] [CrossRef]
  9. Barrett, A.; Pack, A. Not quite eye to A.I.: Student and teacher perspectives on the use of generative artificial intelligence in the writing process. Int. J. Educ. Technol. High. Educ. 2023, 20, 59. [Google Scholar] [CrossRef]
  10. Chan, C.K.Y.; Hu, W. Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. High. Educ. 2023, 20, 43. [Google Scholar] [CrossRef]
  11. Chan, C.K.Y.; Lee, K.K.W. The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and millennial generation teachers? Smart Learn. Environ. 2023, 10, 60. [Google Scholar] [CrossRef]
  12. Chan, C.K.Y.; Zhou, W. An expectancy value theory (EVT) based instrument for measuring student perceptions of generative AI. Smart Learn. Environ. 2023, 10, 64. [Google Scholar] [CrossRef]
  13. Chan, C.K.Y. A comprehensive AI policy education framework for university teaching and learning. Int. J. Educ. Technol. High. Educ. 2023, 20, 38. [Google Scholar] [CrossRef]
  14. Chen, J.; Zhuo, Z.; Lin, J. Does ChatGPT play a double-edged sword role in the field of higher education? An in-depth exploration of the factors affecting student performance. Sustainability 2023, 15, 16928. [Google Scholar] [CrossRef]
  15. Chergarova, V.; Tomeo, M.; Provost, L.; De la Peña, G.; Ulloa, A.; Miranda, D. Case study: Exploring the role of current and potential usage of generative artificial intelligence tools in higher education. Issues Inf. Syst. 2023, 24, 282–292. [Google Scholar] [CrossRef]
  16. Chiu, T.K.F. Future research recommendations for transforming higher education with generative AI. Comput. Educ. Artif. Intell. 2024, 6, 100197. [Google Scholar] [CrossRef]
  17. Currie, G.; Barry, K. ChatGPT in nuclear medicine education. J. Nucl. Med. Technol. 2023, 51, 247–254. [Google Scholar] [CrossRef]
  18. De Paoli, S. Performing an inductive thematic analysis of semi-structured interviews with a large language model: An exploration and provocation on the limits of the approach. Soc. Sci. Comput. Rev. 2023, 42, 997–1019. [Google Scholar] [CrossRef]
  19. Duong, C.D.; Vu, T.N.; Ngo, T.V.N. Applying a modified technology acceptance model to explain higher education students’ usage of ChatGPT: A serial multiple mediation model with knowledge sharing as a moderator. Int. J. Manag. Educ. 2023, 21, 100883. [Google Scholar] [CrossRef]
  20. Elkhodr, M.; Gide, E.; Wu, R.; Darwish, O. ICT students’ perceptions towards ChatGPT: An experimental reflective lab analysis. STEM Educ. 2023, 3, 70–88. [Google Scholar] [CrossRef]
  21. Escalante, J.; Pack, A.; Barrett, A. AI-generated feedback on writing: Insights into efficacy and ENL student preference. Int. J. Educ. Technol. High. Educ. 2023, 20, 57. [Google Scholar] [CrossRef]
  22. Essel, H.B.; Vlachopoulos, D.; Essuman, A.B.; Amankwa, J.O. ChatGPT effects on cognitive skills of undergraduate students: Receiving instant responses from AI-based conversational large language models (LLMs). Comput. Educ. Artif. Intell. 2024, 6, 100198. [Google Scholar] [CrossRef]
  23. Farazouli, A.; Cerratto-Pargman, T.; Bolander-Laksov, K.; McGrath, C. Hello GPT! Goodbye home examination? An exploratory study of AI chatbots’ impact on university teachers’ assessment practices. Assess. Eval. High. Educ. 2023, 49, 363–375. [Google Scholar] [CrossRef]
  24. French, F.; Levi, D.; Maczo, C.; Simonaityte, A.; Triantafyllidis, S.; Varda, G. Creative use of OpenAI in education: Case studies from game development. Multimodal Technol. Interact. 2023, 7, 81. [Google Scholar] [CrossRef]
  25. Greiner, C.; Peisl, T.C.; Höpfl, F.; Beese, O. Acceptance of AI in semi-structured decision-making situations applying the four-sides model of communication—An empirical analysis focused on higher education. Educ. Sci. 2023, 13, 865. [Google Scholar] [CrossRef]
  26. Hammond, K.M.; Lucas, P.; Hassouna, A.; Brown, S. A wolf in sheep’s clothing? Critical discourse analysis of five online automated paraphrasing sites. J. Univ. Teach. Learn. Pract. 2023, 20, 8. [Google Scholar] [CrossRef]
  27. Hassoulas, A.; Powell, N.; Roberts, L.; Umla-Runge, K.; Gray, L.; Coffey, M.J. Investigating marker accuracy in differentiating between university scripts written by students and those produced using ChatGPT. J. Appl. Learn. Teach. 2023, 6, 71–77. [Google Scholar] [CrossRef]
  28. Jaboob, M.; Hazaimeh, M.; Al-Ansi, A.M. Integration of generative AI techniques and applications in student behavior and cognitive achievement in Arab higher education. Int. J. Hum.-Comput. Interact. 2024, 24, 1–14. [Google Scholar] [CrossRef]
  29. Kelly, A.; Sullivan, M.; Strampel, K. Generative artificial intelligence: University student awareness, experience, and confidence in use across disciplines. J. Univ. Teach. Learn. Pract. 2023, 20, 12. [Google Scholar] [CrossRef]
  30. Laker, L.F.; Sena, M. Accuracy and detection of student use of ChatGPT in business analytics courses. Issues Inf. Syst. 2023, 24, 153–163. [Google Scholar] [CrossRef]
  31. Lopezosa, C.; Codina, L.; Pont-Sorribes, C.; Vállez, M. Use of generative artificial intelligence in the training of journalists: Challenges, uses and training proposal. El Prof. De La Inf. 2023, 32, 1–12. [Google Scholar] [CrossRef]
  32. Michel-Villarreal, R.; Vilalta-Perdomo, E.; Salinas-Navarro, D.E.; Thierry-Aguilera, R.; Gerardou, F.S. Challenges and opportunities of generative AI for higher education as explained by ChatGPT. Educ. Sci. 2023, 13, 856. [Google Scholar] [CrossRef]
  33. Nikolic, S.; Daniel, S.; Haque, R.; Belkina, M.; Hassan, G.M.; Grundy, S.; Lyden, S.; Neal, P.; Sandison, C. ChatGPT versus engineering education assessment: A multidisciplinary and multi-institutional benchmarking and analysis of this generative artificial intelligence tool to investigate assessment integrity. Eur. J. Eng. Educ. 2023, 48, 559–614. [Google Scholar] [CrossRef]
  34. Perkins, M.; Roe, J.; Postma, D.; McGaughran, J.; Hickerson, D. Detection of GPT-4 generated text in higher education: Combining academic judgement and software to identify generative AI tool misuse. J. Acad. Ethics 2024, 22, 89–113. [Google Scholar] [CrossRef]
  35. Popovici, M.-D. ChatGPT in the classroom: Exploring its potential and limitations in a functional programming course. Int. J. Hum.-Comput. Interact. 2023, 39, 1–12. [Google Scholar] [CrossRef]
  36. Rose, K.; Massey, V.; Marshall, B.; Cardon, P. IS professors’ perspectives on AI-assisted programming. Issues Inf. Syst. 2023, 24, 178–190. [Google Scholar] [CrossRef]
  37. Shimizu, I.; Kasai, H.; Shikino, K.; Araki, N.; Takahashi, Z.; Onodera, M.; Kimura, Y.; Tsukamoto, T.; Yamauchi, K.; Asahina, M.; et al. Developing medical education curriculum reform strategies to address the impact of generative AI: Qualitative study. JMIR Med. Educ. 2023, 9, e53466. [Google Scholar] [CrossRef]
  38. Singh, M. Maintaining the integrity of the South African university: The impact of ChatGPT on plagiarism and scholarly writing. S. Afr. J. High. Educ. 2023, 37, 203–220. [Google Scholar] [CrossRef]
  39. Strzelecki, A.; ElArabawy, S. Investigation of the moderation effect of gender and study level on the acceptance and use of generative AI by higher education students: Comparative evidence from Poland and Egypt. Br. J. Educ. Technol. 2024, 55, 1209–1230. [Google Scholar] [CrossRef]
  40. Walczak, K.; Cellary, W. Challenges for higher education in the era of widespread access to generative AI. Econ. Bus. Rev. 2023, 9, 71–100. [Google Scholar] [CrossRef]
  41. Watermeyer, R.; Phipps, L.; Lanclos, D.; Knight, C. Generative AI and the automating of academia. Postdigital Sci. Educ. 2023, 6, 446–466. [Google Scholar] [CrossRef]
  42. Yilmaz, R.; Karaoglan Yilmaz, F.G. The effect of generative artificial intelligence (AI)-based tool use on students’ computational thinking skills, programming self-efficacy, and motivation. Comput. Educ. Artif. Intell. 2023, 4, 100147. [Google Scholar] [CrossRef]
  43. Yilmaz, F.G.K.; Yilmaz, R.; Ceylan, M. Generative artificial intelligence acceptance scale: A validity and reliability study. Int. J. Hum.-Comput. Interact. 2023, 39, 1–13. [Google Scholar] [CrossRef]
  44. Saunders, M.; Lewis, P.; Thornhill, A. Research Methods for Business Students, 6th ed.; Pearson: London, UK, 2007. [Google Scholar]
  45. Baig, M.I.; Yadegaridehkordi, E. ChatGPT in higher education: A systematic literature review and research challenges. Int. J. Educ. Res. 2024, 127, 102411. [Google Scholar] [CrossRef]
  46. Castillo-Martínez, I.M.; Flores-Bueno, D.; Gómez-Puente, S.M.; Vite-León, V.O. AI in higher education: A systematic literature review. Front. Educ. 2024, 9, 1391485. [Google Scholar] [CrossRef]
  47. Filippi, S.; Motyl, B. Large language models (LLMs) in engineering education: A systematic review and suggestions for practical adoption. Information 2024, 15, 345. [Google Scholar] [CrossRef]
Figure 1. A PRISMA 2020 flow diagram illustrating the selection process of studies, using the template provided [6].
Figure 2. A word cloud generated from the abstracts of the 37 selected articles, highlighting the words that occurred at least 25 times.
Scheme 1. Distribution of papers by research approach employed.
Table 1. The process of the inclusion and exclusion of articles.
 | Articles | Yes | No | Yes/No | Articles Not Directly Available from Scopus or WoS
Researcher 1 | 30 | 13 | 10 | 7 | 
Cases Yes/No: Second opinion (Researcher 3) | | 2 | 5 | | 
Researcher 1: Final | | 15 | 15 | | 
Researcher 2 | 31 | 7 | 16 | 8 | 
Cases Yes/No: Second opinion (Researcher 1) | | 0 | 7 | | 1
Researcher 2: Final | | 7 | 23 | | 1
Researcher 3 | 41 | 15 | 10 | 16 | 
Cases Yes/No: Second opinion (Researcher 2) | | 0 | 16 | | 
Researcher 3: Final | | 15 | 26 | | 
TOTAL | 102 | 37 | 64 | | 1
Table 2. Articles selected for review.
Articles
(Alexander, Savvidou, and Alexander, 2023) [7]
(Al-Zahrani, 2023) [8]
(Barrett and Pack, 2023) [9]
(Chan and Hu, 2023) [10]
(Chan and Lee, 2023) [11]
(Chan and Zhou, 2023) [12]
(Chan, 2023) [13]
(Chen, Zhuo, and Lin, 2023) [14]
(Chergarova, Tomeo, Provost, De la Peña, Ulloa, and Miranda, 2023) [15]
(Chiu, 2024) [16]
(Currie and Barry, 2023) [17]
(De Paoli, 2023) [18]
(Duong, Vu, and Ngo, 2023) [19]
(Elkhodr, Gide, Wu, and Darwish, 2023) [20]
(Escalante, Pack, and Barrett, 2023) [21]
(Essel, Vlachopoulos, Essuman, and Amankwa, 2024) [22]
(Farazouli, Cerratto-Pargman, Bolander-Laksov, and McGrath, 2023) [23]
(French, Levi, Maczo, Simonaityte, Triantafyllidis, and Varda, 2023) [24]
(Greiner, Peisl, Höpfl, and Beese, 2023) [25]
(Hammond, Lucas, Hassouna, and Brown, 2023) [26]
(Hassoulas, Powell, Roberts, Umla-Runge, Gray, and Coffey, 2023) [27]
(Jaboob, Hazaimeh, and Al-Ansi, 2024) [28]
(Kelly, Sullivan, and Strampel, 2023) [29]
(Laker and Sena, 2023) [30]
(Lopezosa, Codina, Pont-Sorribes, and Vállez, 2023) [31]
(Michel-Villarreal, Vilalta-Perdomo, Salinas-Navarro, Thierry-Aguilera, and Gerardou, 2023) [32]
(Nikolic et al., 2023) [33]
(Perkins, Roe, Postma, McGaughran, and Hickerson, 2024) [34]
(Popovici, 2023) [35]
(Rose, Massey, Marshall, and Cardon, 2023) [36]
(Shimizu et al., 2023) [37]
(Singh, 2023) [38]
(Strzelecki and ElArabawy, 2024) [39]
(Walczak and Cellary, 2023) [40]
(Watermeyer, Phipps, Lanclos, and Knight, 2023) [41]
(Yilmaz and Karaoglan Yilmaz, 2023) [42]
(Yilmaz, Yilmaz, and Ceylan, 2023) [43]
Table 3. Journals with multiple selected articles.
Journal | n
International Journal of Educational Technology in Higher Education | 4
Computers and Education: Artificial Intelligence | 3
International Journal of Human-Computer Interaction | 3
Issues in Information Systems | 3
Education Sciences | 2
Journal of University Teaching and Learning Practice | 2
Smart Learning Environments | 2
Table 4. Authors contributing to multiple articles.
Authors | n
Chan, C.K.Y. | 4
Barrett, A. | 2
Pack, A. | 2
Yilmaz, F.G.K. | 2
Yilmaz, R. | 2
Table 5. The geographical origin of study authors based on their affiliations, considering that some articles have authors from more than one country.
Country | n
USA | 6
Hong Kong | 5
UK | 5
Australia | 4
Poland | 2
Turkey | 2
Vietnam | 2
China | 1
Cyprus | 1
Egypt | 1
Germany | 1
Ghana | 1
Ireland | 1
Japan | 1
Jordan | 1
Mexico | 1
Netherlands | 1
New Zealand | 1
Oman | 1
Romania | 1
Saudi Arabia | 1
Singapore | 1
South Africa | 1
Spain | 1
Sweden | 1
Taiwan | 1
Yemen | 1
Table 6. Words that occur at least 25 times in the abstracts of the 37 selected articles.
Word | n
Student | 106
AI | 102
Educator | 91
Use | 88
ChatGPT | 81
Study | 65
General | 62
Tool | 58
Learn | 55
Higher | 53
Research | 49
Academic | 40
Technology | 36
Assess | 35
GenAI | 32
Integrity | 29
Impact | 29
Intelligent | 28
University | 28
Result | 28
Artificial | 27
GAI | 27
Find | 26
Potential | 25
Model | 25
Table 7. Articles by category.
Category | Subcategory | Articles
A. Use of GAI | A.1. The use of GAI technology—the case of ChatGPT | [19,20,24,32,33,35]
A. Use of GAI | A.2. Exploring the use of GAI technology—a broader perspective | [10,13,16,28,31,37,40,41,42]
B. Acceptance and perceptions | B.1. Students | [11,12,14,15,22,39,43]
B. Acceptance and perceptions | B.2. Teachers | [36]
B. Acceptance and perceptions | B.3. Researchers | [8]
B. Acceptance and perceptions | B.4. Institutions | [25]
C. Situations | C.1. Assessment | [23,27,29]
C. Situations | C.2. Writing | [9]
C. Situations | C.3. Content analysis | [18]
C. Situations | C.4. Content generation | [34]
C. Situations | C.5. Academic integrity | [7,17,26,30,38]
C. Situations | C.6. Feedback | [21]
D. Methodologies employed | | All
