Article

Assessing the Potential and Risks of AI-Based Tools in Higher Education: Results from an eSurvey and SWOT Analysis

Department of Engineering and Computer Science, Institute Patient-Centered Digital Health, Bern University of Applied Sciences, 3012 Bern, Switzerland
* Author to whom correspondence should be addressed.
Trends High. Educ. 2023, 2(4), 667-688; https://doi.org/10.3390/higheredu2040039
Submission received: 3 November 2023 / Revised: 29 November 2023 / Accepted: 4 December 2023 / Published: 6 December 2023

Abstract

Recent developments in tools based on artificial intelligence (AI) have raised interest in many areas, including higher education. While machine translation tools have been available and in use in teaching and learning for many years, generative AI models have sparked concerns within the academic community. The objective of this paper is to identify the strengths, weaknesses, opportunities and threats (SWOT) of using AI-based tools (ABTs) in higher education contexts. We employed a mixed methods approach: we conducted a survey and used its results to perform a SWOT analysis. In the survey, we asked lecturers and students to answer 27 questions (Likert scale, free text, etc.) on their experiences and viewpoints related to AI-based tools in higher education. A total of 305 people from different countries and with different backgrounds answered the questionnaire. The results show that the participants expect a moderate to high future impact of ABTs on teaching, learning and exams. Perceived strengths of ABTs include personalization of the learning experience and increased efficiency through the automation of repetitive tasks. Several use cases are envisioned but are not yet part of daily practice. Challenges include skills teaching, data protection and bias. We conclude that research is needed to study the unintended consequences of ABT usage in higher education, in particular to develop countermeasures, and to demonstrate the benefits of ABT usage in higher education. Furthermore, we suggest defining a competence model that specifies the skills required for the responsible and efficient use of ABTs by students and lecturers.

1. Introduction

In recent years, the landscape of higher education has undergone a profound transformation, driven by rapid advances in artificial intelligence (AI) technologies. These developments have created new opportunities and challenges, heralding a future in which AI-based tools (ABTs) promise to revolutionize how students learn, how educators teach and how universities operate [1]. Some of the promising use cases of ABTs include automated grading [2], personalized learning [3,4,5], generating vignettes as educational sources [5] and interacting with virtual learning assistants.
As the capabilities of AI, particularly generative AI, continue to expand, it is imperative that we critically assess the implications of its integration into higher education. Large language models (LLMs), with their ability to generate human-like text and to provide instant translations, have been readily adopted in educational contexts, revolutionizing content creation, language learning and accessibility. For example, Leiker et al., 2023, developed “a course prototype leveraging an LLM, implementing a robust human-in-the-loop process to ensure the accuracy and clarity of the generated content” [6]. Beyond this, Yan et al. identified 53 use cases for LLMs in automating education tasks in their scoping review, grouped into nine main categories: profiling/labeling, detection, grading, teaching support, prediction, knowledge representation, feedback, content generation and recommendation [7]. Since these use cases were retrieved from the scientific literature, it remains unclear whether they are already applied in educational practice. Through an eSurvey among students and lecturers, we want to obtain an impression of the current state of knowledge about and implementation of such use cases in higher education.
Additionally, the emergence of generative AI models has raised intriguing questions and sparked discussions about the consequences and responsibilities associated with their use. Generative AI is a technology that uses deep learning models to produce content that resembles what a human might produce in response to complicated and varied cues (e.g., languages, instructions, questions) [8]. Generative AI may generate written work that appears to be analytical and intelligent enough to serve as, among other things, reliable graduate-level essays, syllabi, lecture notes, software code, translations and much more [9]. Concerns have been raised that current student assessments such as essay writing can no longer be conducted because generative AI can produce the required content in a short amount of time [10]. Several studies are already available that test generative AI models and their capabilities to pass exams [11,12,13]. We want to find out which measures lecturers and students suggest to address these issues and which aspects might hamper the usage of ABTs in higher education contexts.
The objective of our study is to guide future research on AI-based tools, including generative AI, in higher education and to draw practical and research implications. Specifically, we want to address the following research questions:
  • What is the level of familiarity among students and lecturers with various types of AI-based tools used in higher education, and are they aware of potential applications?
  • To what extent have AI-based tools already been integrated into higher education?
  • How do stakeholders anticipate exams and teaching and learning methods evolving in the future with the increased integration of AI-based tools into higher education, and what are the expected benefits and changes?
  • What specific competencies, skills or training are required for students and lecturers to successfully and responsibly apply AI-based tools in higher education?
  • What are the potential opportunities and threats that may arise from the widespread application of AI-based tools in higher education, and how can institutions proactively address and leverage these opportunities while mitigating the threats?
This paper seeks to address these pressing questions through a mixed methods approach, culminating in a comprehensive SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. To illuminate the way forward, our study draws on the insights and experiences of lecturers and students from diverse backgrounds and geographical locations. Some related studies have recently been conducted: Chan et al., 2023, collected “university students’ perceptions of generative AI technologies […] in higher education, focusing on familiarity, their willingness to engage, potential benefits and challenges, and effective integration”, but considered only undergraduate students in Hong Kong [14]. Other researchers reflected on potential limitations and benefits of LLMs and generative AI for education [15] without a concrete assessment of lecturers. Van der Vorst et al. explored the potential impact of educational AI applications in personalized learning. They described the opportunities in and threats to AI in this context as derived from interviews and a literature search [16]. Farrokhnia et al. presented the results of a SWOT analysis of ChatGPT [17]. In contrast to their work, our SWOT analysis is based on a survey that was conducted internationally, and we focused not only on ChatGPT but on a broad range of ABTs.
Through our research, we aim to provide valuable insights that will guide educational institutions, policymakers and the academic community to harness the benefits of AI-based tools while responsibly navigating the evolving landscape of higher education in the age of generative AI models. The need for such an assessment is underscored by the profound impact that AI promises to have on the educational ecosystem, shaping the learning experiences of generations to come.

2. Methods

To answer our research questions, we first formulated guiding questions for our SWOT analysis. Based on these questions, we developed a questionnaire to be distributed among students and lecturers. Then, we conducted a survey using the questionnaire. The results were analyzed, and the answers to the questions guiding the SWOT analysis were aggregated. The individual steps are described in detail below.

2.1. SWOT Analysis

A SWOT analysis is a method for identifying strengths, weaknesses, opportunities and threats. The concept of a SWOT analysis has its roots in strategic management research, which gives it a very practical orientation [18]. Specifically, the practical orientation refers to an emphasis on using information, skills and techniques in real-world situations. A SWOT analysis on the use of AI-based tools in higher education is a valuable tool to evaluate the current state and future prospects of AI adoption. It helps leverage the strengths and opportunities associated with AI in education, ensuring that the most is made of these technologies to enhance teaching, learning and administrative processes. Furthermore, identifying weaknesses and threats at this early stage when AI-based tools are increasingly attracting interest will help to proactively address potential challenges and mitigate risks associated with AI adoption. This can help prevent negative outcomes and unintended consequences.
To apply the SWOT analysis methodology to the field of ABT usage in higher education, we consider strengths and weaknesses as features of ABTs themselves, or “internal” features. Conversely, opportunities include the economic, technical, social, political, legal and environmental features representing the context of ABTs in higher education, also referred to as “external” features. Threats are, similarly, external features that may prevent further real-world implementation of ABTs in higher education. From the survey, we collected and interpreted the results in terms of strengths, weaknesses, opportunities and threats of ABTs in higher education. The relevant questions driving our SWOT analysis are listed in Table 1. They were derived by brainstorming among the authors after defining the research questions for this study (see Section 1).
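As an illustration of this framing, the following minimal Python sketch shows how survey-derived statements can be collected into the four SWOT quadrants. The class and the example statements are illustrative assumptions, not part of the study’s actual tooling.

```python
from dataclasses import dataclass, field

@dataclass
class SWOT:
    # Strengths/weaknesses: "internal" features of the ABTs themselves.
    strengths: list[str] = field(default_factory=list)
    weaknesses: list[str] = field(default_factory=list)
    # Opportunities/threats: "external" contextual features.
    opportunities: list[str] = field(default_factory=list)
    threats: list[str] = field(default_factory=list)

swot = SWOT()
# Example statements (illustrative only, not survey results):
swot.strengths.append("Personalization of the learning experience")
swot.threats.append("Use of ABT output without reflection")
print(swot)
```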

2.2. Survey

2.2.1. Development of the Questionnaire

Based on our SWOT questions, we drafted the first version of a questionnaire, starting with a brainstorming session among the authors to collect as many questions as possible. We followed the SPSS framework published by Helfferich [19]. The resulting set of questions was checked for relevance and redundancies. The agreed questions were sorted thematically and answer options were assigned. The final questionnaire consisted of three sections, Tools and Usage, Future and Expectations and Ethics, comprising seven, nine and four items, respectively. Besides a common set of questions, one question was asked only of students (whether they used ABTs in exams) and one only of lecturers (whether they used ABTs to prepare lectures or assignments). Seven demographic questions, as well as introductory text and information regarding data usage and privacy, were added. We tested the understandability of the questions and pre-defined answers in a pre-test conducted with four external people. Their feedback was used to finalize the questionnaire. It comprised 27 items, including open-ended questions and Likert scale ratings. Likert scale questions were mandatory; free-text questions were not.
The survey was technically implemented using a local instance of the tool LimeSurvey, hosted at the researchers’ institution. No user-related information or timestamps were collected, to ensure anonymity. The study design was submitted to the ethics committee of the Canton of Bern, which confirmed that no ethics approval was necessary (Req-2023-00319).

2.2.2. Recruitment of Participants

The link to the questionnaire was distributed among the authors’ professional and private networks, as well as on social media. Any student or lecturer studying or teaching at a higher education institution was eligible to participate. Our aim was to obtain replies from a large number of students and lecturers worldwide, with a minimum of 100 participants. The survey was open for 29 days (6 March–3 April 2023). The participants were not compensated for their participation. However, to acknowledge participation in this study, participants answering all questions could register for a lottery to win one of ten vouchers of EUR 20 each.

2.2.3. Data Analysis and Reporting

For the qualitative analysis of the results, we used VERBI MAXQDA Plus 2022. The quantitative analysis of predefined survey variables was performed using IBM SPSS Statistics, version 29.0.0.0. Completed survey answers were exported from LimeSurvey as Excel files, as well as SPSS syntax and data files.
In a manually conducted pre-processing step, unusual answers were identified by comparing free-text answers. Suspected cases that included duplicated or unrelated answers were discussed among the authors and removed from the dataset upon agreement. For the quantitative analysis, categorical variables were transformed into ordinal scales in SPSS (Likert scales, age groups, highest level of education obtained). Cases that specified a profession other than student or lecturer were marked as “user missing values” and ignored in the statistical analysis. For the qualitative analysis, items containing free-text answers were imported into the analysis tool. Text passages (segments) were then assigned by one author (DR) to 0 … n categories (codes) of a code system that was developed iteratively; after a first round of defining codes and assigning text segments to them, sub-codes were merged in order to reduce complexity.
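A minimal sketch of how such a duplicate screening could be supported programmatically is shown below; the file name and the free-text column prefix are assumptions, and the actual review and removal in this study were performed manually by the authors.

```python
import pandas as pd

# Load the hypothetical LimeSurvey export (assumed file name and schema).
df = pd.read_excel("survey_export.xlsx")
text_cols = [c for c in df.columns if c.startswith("freetext_")]  # assumed prefix

# Join each case's free-text answers and flag verbatim duplicates as
# candidates for the manual review and author discussion described above.
joined = df[text_cols].fillna("").astype(str).agg(" | ".join, axis=1)
suspects = df[joined.duplicated(keep=False) & (joined.str.strip() != "")]
print(f"{len(suspects)} suspected cases flagged for manual review")
```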
Survey variables answered on a Likert scale were tested for normal distribution using Pearson’s Chi-Squared test in order to select appropriate methods for further analysis. The tests showed that the empirical distributions deviated significantly from a normal distribution for all Likert scale questionnaire items (df = 4, p ≤ 0.001). Therefore, further analyses of central tendencies between ordinal Likert scale dependent variables and categorical independent variables were conducted using non-parametric tests.
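To make the procedure concrete, the following sketch runs a chi-squared goodness-of-fit test of five-category Likert counts against a discretized normal distribution (df = 5 − 1 = 4). The data are simulated, and the exact binning used in SPSS is an assumption.

```python
import numpy as np
from scipy import stats

# Simulated 5-point Likert responses for one questionnaire item.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=305)
observed = np.bincount(responses, minlength=6)[1:]  # counts for categories 1..5

# Expected counts under a normal distribution with the sample's mean and SD,
# discretized into the five Likert bins.
mu, sigma = responses.mean(), responses.std(ddof=1)
edges = np.array([-np.inf, 1.5, 2.5, 3.5, 4.5, np.inf])
probs = np.diff(stats.norm.cdf(edges, loc=mu, scale=sigma))
expected = probs * responses.size

chi2, p = stats.chisquare(observed, f_exp=expected)  # df = 5 - 1 = 4
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```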
These tests comprised independent-samples Kruskal–Wallis tests for non-binary independent variables and Mann–Whitney U tests for binary independent variables. All results are reported as valid percentages (ignoring user missing values), rounded to one decimal place. For hypothesis tests, the alpha value was set to 0.05. As independent variables, we compared students vs. lecturers (A) and ICT vs. other fields of study (B), as well as non-binary groups based on age (C) and highest obtained level of education (D). The analyzed survey results were used to aggregate the information on strengths, weaknesses, opportunities and threats.
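The sketch below illustrates both test families on synthetic Likert data. The group sizes mirror the survey’s student/lecturer split, but all values are simulated; note that the study reports exact Mann–Whitney U tests, whereas SciPy’s default method selection is used here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated 5-point Likert ratings for one item, split by role.
students = rng.integers(1, 6, size=172)
lecturers = rng.integers(1, 6, size=125)

# Binary independent variable (students vs. lecturers): Mann-Whitney U test.
u, p = stats.mannwhitneyu(students, lecturers, alternative="two-sided")
print(f"Mann-Whitney U = {u:.0f}, p = {p:.3f}")

# Non-binary independent variable (e.g., age groups): Kruskal-Wallis test.
# The group sizes here are purely illustrative.
age_groups = [rng.integers(1, 6, size=n) for n in (108, 82, 60, 40, 14)]
h, p = stats.kruskal(*age_groups)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.3f}")
```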

3. Results of the Survey

3.1. Characteristics of Participants

A total of 556 participants started answering the questionnaire; 331 participants completed it. In total, 26 unusual cases were identified and excluded, leaving 305 valid cases for analysis. The characteristics of the participants are shown in Table 2. Of the participants, 35.4% were between 18 and 24 years old (n = 108); the second-largest age group comprised participants between 25 and 34 years old (n = 82). Only one participant was older than 64. A share of 57.9% of participants were enrolled as students at a higher education institution, while 42.1% stated that they teach at a higher education institution. Eight participants mentioned another profession; no demographics were collected from them. Most participants resided in Switzerland (67.5%), the USA (11.5%), Germany (9.2%) and Austria (7.9%). Other countries of residence included Spain, Norway, Australia, Iran, Italy, Liechtenstein, Sweden and the United Kingdom (one participant per country). Among all participants, the most frequently mentioned main field of study was Information and Communication Technologies (33.8%), followed by Health and Welfare (20%). The frequencies of all other fields ranged between 1.0% and 8.5%.
A total of 23.9% of participants stated having ISCED level 3 (upper secondary education) as their highest completed level of education, according to the International Standard Classification of Education (ISCED-11); 22.6% of the participants assigned themselves to ISCED level 8 (doctoral), 20.5% to ISCED level 6 (bachelor’s) and 17.5% to ISCED level 7 (master’s).

3.2. Tools and Usage

3.2.1. Familiarity and Usage of ABTs

Participants were presented with a list of ABTs and asked whether they had heard of the listed tools (see Figure 1 and Table A1 in Appendix A) and used them (see Figure 2 and Table A2 in Appendix A). Additionally, they had the opportunity to list additional tools in the free text.
Google Translate, ChatGPT and DeepL were familiar to more than 80% of lecturers and students. Other tools such as DALL-E, OpenAI GPT-3, OpenAI Codex, Stable Diffusion and GitHub Copilot were better known among lecturers than among students. Only one participant reported not having heard of any of the listed ABTs.
Twenty-three participants mentioned further tools they had heard of beyond those in our pre-defined list. These can be grouped into tools for generating images (Midjourney, Photoshop AI tools, starryai, Nvidia Canvas), tools for chatting or generating text (Bard, Bing Chatbot, DeepL Write, Compose AI, Google Assistant) and tools supporting scientific writing (Elicit, Perplexity AI).
Additional tools mentioned were Papago (an AI-based translation tool), Wolfram Alpha (providing AI support for mathematical tasks), Tabnine (supporting software development), Cogram (a meeting assistant), GPTZero (checking for AI-generated content) and IBM Watson (chatbot technology).
Translation tools were among the most frequently used ABTs: 81.6% of the lecturers and 90.1% of the students had used Google Translate, and 70.4% of the lecturers and 75.0% of the students had used DeepL. ChatGPT was among the three most frequently used ABTs (64.0% of the lecturers and 66.3% of the students). The other tools (DALL-E, OpenAI GPT-3, OpenAI Codex, Stable Diffusion and GitHub Copilot) were selected less frequently.
Twenty participants mentioned additional tools they had already used beyond those in our pre-defined list, including tools for generating images (Midjourney, n = 5) and tools supporting scientific writing (Elicit, n = 2). Fifteen other tools were mentioned only once, e.g., AI writers (e.g., Writesonic, DeepL Write, Quillbot, Neuroflash), AI detectors (e.g., GPTZero), AI photo editors or image creators (e.g., Adobe Lightroom, Nvidia Canvas, Adobe Photoshop AI tools, Avatarify) and research assistants (e.g., Bearly-AI, Research Rabbit).

3.2.2. Use Cases of ABTs in Higher Education

Participants were asked which tasks they had already used ABTs for. We distinguished three types of tasks: tasks in project work (all participants), tasks during exams (students only) and tasks for preparing lectures or assignments (lecturers only). Furthermore, participants could provide free-text answers for other tasks. The results are described below.
Use for project work. The majority of the participants had used ABTs for tasks in project work. The most frequently mentioned tasks were translation (63.6%) and writing text (38.5%). Lecturers reported using ABTs for image-specific tasks, such as image generation, editing and segmentation, more often than students (see Table 3). In contrast, students seemed to use ABTs more frequently than lecturers for coding tasks within projects. Finally, 13.7% of the respondents (n = 40) reported not having used ABTs for project tasks.
Sixteen participants provided additional application areas for projects, including language-related support (n = 4), gathering inspiration (n = 3) and specialized application areas (n = 3). Individual participants also mentioned finding sources, brainstorming, answering questions, information checking and transcription as supported project tasks.
Use during exams. A total of 73.3% of students answered No to the question of whether they used ABTs during exams (n = 172). Comparing the main fields of study, Education (40.0%), ICT (39.6%), Arts and Humanities (35.7%) and Business, Administration and Law (33.3%) showed the highest proportions of students answering Yes to this question (see Figure 3).
Use for preparing lectures and assignments. We asked only lecturers whether they used ABTs for preparing lectures and assignments. A total of 58.2% of the respondents (n = 71) reported not having used ABTs for this purpose. The most common usage was generating assignments, reported by 23.8% of the respondents (n = 29), followed by generating lectures (23.0%, n = 28) and rating answers (17.2%, n = 21). A further 13.9% of the participants (n = 17) stated that they had used ABTs for other purposes, including idea validation, translation, assignment verification, report writing, assignment creation and ideation.
Usage in other contexts. A total of 115 free-text answers mentioned use cases of ABTs other than in exams, preparing lectures or assignments, or project work. The most frequently mentioned aspects included testing tools or using them “for fun” (n = 28), translation (n = 16) and language-related and text-generation tasks (n = 10). Information retrieval and question answering were mentioned by nine participants. Figure 4 shows the word cloud generated from the free-text answers.
Additional use cases listed outside the educational and research context included identifying plants, writing job applications, personal use (writing letters), testing the possibilities of AI and generating website texts.
Use cases mentioned within education and research included data analysis, natural language processing (e.g., content analysis, topic analysis, sentiment analysis), network processing and analysis, art creation, audio processing, using ABTs as a knowledge source to answer questions instead of asking lecturers, and using ABTs as a study buddy.
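For illustration, a word cloud like the one in Figure 4 can be generated from such free-text answers with a few lines of Python. The example answers below are invented placeholders, not actual survey responses, and the third-party wordcloud package is one possible tool choice.

```python
import matplotlib.pyplot as plt
from wordcloud import WordCloud  # pip install wordcloud

# Placeholder free-text answers (invented for illustration).
answers = [
    "testing the tool for fun",
    "translation of articles",
    "question answering and information retrieval",
    "generating text for a website",
]

# Build the cloud from the concatenated answers and display it.
cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate(" ".join(answers))
plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```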

3.2.3. Observed Errors

In addition, we were interested in the errors observed when using ABTs for specific tasks. The majority of the participants had experienced ABTs producing wrong results, with 66.7% of the respondents (n = 194) reporting this issue. In total, 11.7% (n = 34) of the participants had experienced ABTs displaying discrimination, 24.1% (n = 70) had encountered ABTs displaying bias, and 21.6% (n = 63) reported not having observed any errors in ABTs. Among the free-text items, translation errors and a lack of precision were mentioned most often (n = 7). Other aspects included hallucination (n = 3), limited knowledge, grammar mistakes, discrimination and a lack of adaptation to the target audience (n = 1 each).

3.3. Future and Expectations

3.3.1. Impact of ABTs

We asked the participants to judge the impact of ABTs in educational application areas, including creating text, images or programming code and getting feedback on programming code. Judgments were made on a five-point Likert scale (see Figure 5 and Table A3). For all tasks, the majority of the participants assigned a moderate to severe impact. In particular, for creating text, answering questions, programming, code feedback and creating code, more than 50% of lecturers and students selected “severe impact”. Interestingly, more lecturers than students expected a severe impact of ABTs on creating text. Both participant groups expected a smaller impact on editing and creating images as well as on editing videos.
Additionally, we asked for an overall judgment of the impact of ABTs on exams, teaching and learning (see Figure 6 and Table A4). Lecturers and students foresee a moderate to severe impact on exams. In particular, 35.8% of the lecturers selected “severe impact”, while only 26.9% of the students selected this option. In contrast, students more often selected “moderate impact” for this aspect (37.4% of students vs. 28.5% of lecturers). Both groups estimated a rather moderate impact on teaching (43.5% of lecturers, 34.3% of students). The impact on learning was judged to be moderate or severe by the majority of participants. Below, we describe the significant differences observed between students and lecturers, between fields of study, between age groups and between levels of education.
Comparing students and lecturers. Lecturers foresaw a higher future impact than students (Mdn. = 5 (severe impact) for both groups) regarding the application of ABTs to create text, exact Mann–Whitney U test: U = 9113.000, p = 0.015. For all other items, no significant difference between students and lecturers was observed.
Comparing field of study. Participants with a background in Information and Communication Technologies foresaw a higher impact of ABTs on exams than participants within other study fields (Mdn. = 4 (moderate impact) for both groups), exact Mann–Whitney U test: U = 8530.000, p = 0.018.
Comparing age groups. Upon comparison of central tendencies based on age groups, several significant differences were observed: younger participants predominantly foresaw less impact than older participants. For example, participants in the 18–24 and 25–34 age groups foresaw less impact regarding the application of ABTs for creating text than participants in the 45–54 age group (p = 0.001 and p < 0.001, respectively).
Participants in the 25–34 age group foresaw less impact regarding the application of ABTs for creating images than participants in the 45–54 and 55–64 age groups (p = 0.007 for both groups). The same applied when comparing the 35–44 and 55–64 age groups (p = 0.030). However, the 18–24 age group foresaw a greater impact than the 25–34 age group (p = 0.043, Mdn. = 4 (moderate impact) for both groups).
Comparing the highest level of education. The range of the variable highest level of education obtained was reduced to three groups, comprising ISCED levels 1–3 (IA, up to upper secondary), 4–6 (IB, up to bachelor’s) and 7–8 (IC, master’s and doctorate). In all comparisons, Group IC foresaw a consistently higher future impact than Groups IA and IB, while Group IB foresaw a consistently lower future impact than Groups IA and IC (see Table 4).

3.3.2. ABT-Induced Changes on Future Exams, Teaching and Learning

In the following, we summarize the qualitative responses to questions addressing the future of exams, teaching and learning given the availability of ABTs.
Expected changes to future exams. A total of 157 free-text answers provided thoughts on expected changes to exams caused by ABT usage. For lecturers, ABTs might support grading (n = 9) and generating questions (n = 3). Measures should be taken to counteract the use of ABTs during exams where it is forbidden (n = 3). Several changes to the exam environment and design were mentioned: in the future, there might be more closed-book exams without access to technology (n = 29) or oral exams (n = 24). Some participants endorsed the integration of ABTs into exams (n = 12).
Regarding exam content, participants expect a stronger focus on applying knowledge, practice-oriented exams and reasoning and argumentation (n = 34) in future exams. Testing knowledge might lose importance (n = 19), while the complexity of questions might increase when ABTs are allowed during exams (n = 10). Twelve people explicitly stated that they expect little or no change regarding exam content.
Expected changes to future teaching. Aspects of changes to future teaching were extracted from 144 free-text answers. Lecturers might generate examples and teaching content with ABTs (n = 18) and use them as teaching tools (n = 14). ABTs might facilitate student-focused teaching (n = 7), answer questions (n = 6) and foster interaction and engagement among students (n = 4). On the student side, skills for using ABTs might gain importance in future curricula (n = 26), as might teaching critical thinking (n = 11). However, lecturers also need to learn how to use ABTs correctly (n = 4).
Expected changes to future learning. With respect to ABT-induced changes to learning habits, survey participants regarded avoiding blind trust, reducing reliance on ABTs and fostering reflection and interpretation skills as important (n = 22). Increased efficiency (n = 21), individualization (n = 14) and memorization (n = 9) were seen as additional opportunities for future learning. Learning content might change towards learning the application of ABTs themselves (n = 4), learning only an overview (n = 4) and basic skills (n = 4). ABTs were seen as a knowledge source (n = 8) and as a means to summarize content (n = 5).

3.3.3. Required Competencies for Successful ABT Use in Higher Education

We provided a set of skills and asked the participants to select those they consider relevant to using ABTs successfully (see Figure 7). The majority of students and lecturers selected critical thinking as a necessary skill (81.4% of students, 90.3% of lecturers). Other skills considered highly relevant were data literacy, continuous learning and problem solving. Communication and collaboration skills were considered less relevant. Interestingly, the opinions of students and lecturers differed significantly on domain expertise as a required skill: 49.4% of students, but 72.6% of lecturers, selected this option. Opinions on essential technical skills also differed (36.0% of students vs. 45.2% of lecturers). Only a small percentage of respondents (1.4%, n = 4) did not provide an answer. Free-text answers regarding skills included effective prompting (n = 3) as well as interpretation and computational skills (n = 1 each).

3.3.4. Opportunities and Problems of Applying ABTs in Higher Education

A total of 134 free-text answers regarding the opportunities and problems of ABT usage were analyzed. They included 129 mentions of potential threats and 55 mentions of opportunities. The most frequently listed threats were use of ABTs without reflection (n = 32), wrong results from ABTs (n = 17), plagiarism (n = 15), loss of analytical and critical thinking (n = 13) and facilitation of cheating (n = 10). Increased efficiency (n = 16), possible automation of processes (n = 7) and faster information access (n = 6) were reported as opportunities.

3.3.5. Challenges for Universities and Mitigation Strategies

Among the participants, avoiding plagiarism was most often regarded as a challenge for universities, followed by ensuring fair examination modes and fair grading. More specifically, participants confirmed the following challenges:
  • Avoiding plagiarism (67.7%, n = 199);
  • Ensuring fair examination modes (64.6%, n = 190);
  • Teaching lecturers how to use the tools (61.9%, n = 182);
  • Ensuring fair grading (57.1%, n = 168);
  • Teaching students how to use the tools (50.7%, n = 149);
  • Ensuring privacy (30.6%, n = 90).
Regarding mitigation strategies, the majority of the participants (81.0%, n = 238) believed that universities should teach students how to use ABTs as part of the curriculum (see Figure 8). Other popular coping strategies included teaching lecturers how to use ABTs (73.5%, n = 216), conducting closed-book written exams (42.5%, n = 125) and conducting oral exams (39.1%, n = 115). A smaller proportion of respondents (7.8%, n = 23) claimed that universities should completely forbid the use of ABTs, while only 1.7% (n = 5) provided no answer. Among the free-text answers, adaptation of assessment methods (n = 11) and embracing the usage of ABTs (n = 5) were mentioned.

3.4. Ethical Aspects

Several ethical aspects were mentioned by the respondents: responsible and transparent usage of ABTs (n = 9), bias of models (n = 4) and the potential large-scale influence of ABT usage (n = 3). Additional aspects were mentioned by individual participants, e.g., rising inequality, copyright infringement and explicit consent.
Interestingly, participants with a background in ICT (Mdn. = 2 (disapprove)) disagreed more strongly than participants from other study fields (Mdn. = 3 (neutral)) with the statement “Applying ABTs by students within assessments or homework is plagiarism”, exact Mann–Whitney U test: U = 8355.000, p = 0.023.

4. Discussion

4.1. Principal Results

The results show that our participants expect a moderate to high future impact of ABTs on teaching, learning and exams. This impact may result in changes to the content and conduct of exams. Some ABTs, such as translation tools or generative AI tools, are already in routine use by students and lecturers; however, the full potential of ABTs in higher education has yet to be discovered. The usefulness of ABTs may depend on the field of study, with some fields benefiting more than others (e.g., teaching interpretation on a musical instrument might benefit less than teaching programming). Specific skills, such as prompting or validating and interpreting AI-generated results, are essential to benefit from ABTs and to use the tools appropriately, also considering ethical aspects. Teaching these skills is expected to be the duty of universities and higher education institutions.

4.2. SWOT Analysis

From the survey results, we derived answers to the questions guiding the SWOT analysis listed in Section 2.1.

4.2.1. Strengths

Students and lecturers already use ABTs for tasks within teaching, learning and exams. The currently most popular tasks supported by ABTs relate to scientific writing, translating texts and programming. Major advantages of ABT usage are expected to result from more complex use cases, including:
  • AI-powered tutoring systems for individualized support and personalized learning experiences;
  • Automation of administrative tasks for increased efficiency;
  • Advanced content generation for engaging learning materials;
  • Data-driven insights for informed decision making.
AI-powered tutoring systems can simulate one-to-one interactions with students, provide immediate feedback on student-generated content, answer questions and offer personalized guidance. Use cases around AI-powered tutoring systems were already identified by Zawacki-Richter et al. in 2019 [20]. They reported that such systems can enhance student engagement, improve learning outcomes and alleviate the pressure on lecturers to provide additional support. An example of such a tutoring system is AI-generated feedback on programming code [21]. Chen et al. found through a literature review that AI has already been extensively adopted and used in education in different forms, such as web-based and online intelligent education systems, humanoid robots and web-based chatbots [22]. Our survey results are consistent with this.
AI might automate or support time-consuming administrative tasks, such as grading multiple-choice assessments, formulating feedback on student papers based on lecturers’ notes or organizing and analyzing student data. This allows lecturers to focus more on instructional design, student engagement and providing qualitative feedback, leading to increased efficiency and productivity. This is confirmed by previous studies by Zawacki-Richter et al. [20]. When used to support grading and to provide feedback, ABTs can offer a second opinion and could in this way contribute to equal treatment in assessments, advice, etc.
AI algorithms are able to generate and curate educational content, such as presentations, quizzes for repetition, alternative questions for exams, case examples or even multimedia resources. This expands the diversity of exercises and educational material and improves the quality of available educational material. This use case reflects the developments around technology-enhanced learning [23]. For example, Sovrano et al. described an interactive e-book that uses question-answering technology to generate specialized knowledge graphs and adaptive explanations [24].
Through interaction with learning material, data are generated which can be collected and analyzed to learn more about the student’s learning progress. AI tools can analyze these vast amounts of data generated by students’ interactions with educational platforms, identifying patterns, trends and gaps in knowledge. Lecturers can leverage these insights to make data-informed decisions on teaching content, track student progress and identify areas where additional support or intervention may be required. AI can also tailor educational content to individual students based on their unique learning needs, preferences and pace of learning [16]. Adaptive learning platforms powered by AI algorithms can provide customized recommendations, adaptive assessments and targeted feedback to enhance student engagement and understanding. They are among the greatest achievements of AI in higher education [25]. They address one of the biggest challenges in teaching contexts, which is that everyone has a different pace of learning and understanding of instructions. Seneviratne et al. proposed a system that uses AI to enhance student and lecturer performance by monitoring attendance, behavior and lecture quality [26]. The system showed high accuracy in recognizing student activity and marking attendance. The paper concludes that automating classroom activities can positively impact students and lecturers.
These use cases promise more efficient work for both lecturers and students. Focus can be placed on content, while creating text might be outsourced to ABTs. Individual gaps in specific skills could be filled by ABTs, thereby supporting equity. For students with limitations in reading or writing, ABTs could assist in understanding complex texts or in writing. Speech output of written content could further promote the integration of people with disabilities. This demonstrates the advantages of ABT usage as well as the tasks in which ABTs can provide support.
Modes of exams will change, and new skills will be required to consciously interact with ABTs and, more importantly, to use the results produced by these tools. More individualized learning is expected, as the use cases described before demonstrate.

4.2.2. Weaknesses

ABTs also present certain weaknesses and challenges. If the training data are biased or lack diversity, the AI tools may perpetuate or amplify those biases. This can result in inequalities, discrimination or limited perspectives in educational materials and recommendations, undermining the goal of equitable education. Depending on the field of study, ABTs are more or less applicable and useful. There is still only limited evidence on the usefulness and acceptance of ABTs in higher education.
ABTs lack the human interaction and interpersonal connection that can be crucial for effective teaching and learning. While ABTs can provide personalized feedback and guidance, they may not fully replace the nuanced interactions and emotional support that human educators can offer. The role of educators remains crucial in guiding and contextualizing the use of AI tools to ensure optimal learning experiences for students.
Our participants reported they observed errors in the output of ABTs. Thus, AI algorithms may struggle to fully understand the context and complexity of educational content. They may not grasp the subtleties of certain subjects or be able to provide the same level of nuanced analysis and interpretation as human instructors. This can lead to limitations in their ability to provide comprehensive feedback or address complex student inquiries.
AI-based tools may have limitations in adapting to rapidly evolving educational practices and pedagogical approaches. They may struggle to accommodate diverse learning styles or address unconventional teaching methods that prioritize creativity, critical thinking and hands-on experiences.

4.2.3. Opportunities

Technology-enhanced learning with AI-based systems is not new [23]. However, intelligent tutoring systems have not yet been introduced into daily practice in higher education, and we could find no scientific evidence explaining this lack of translation into practice. The current hype and the ongoing discussions about generative AI and its possible impact on higher education could once more trigger discussion and research in this field. Courses and working groups for lecturers have recently been established at universities [27]. This helps to create awareness of ABTs and trigger reflection. Given that individuals are also testing the currently available systems in a personal context, this might help increase the acceptance of ABTs in higher education.
The presented use cases indicate that higher education could benefit from ABTs. ABTs can provide interactive and immersive learning experiences, increasing student engagement and motivation. They can help to analyze large datasets from learning interactions to provide valuable insights that inform instructional design, curriculum development and targeted interventions. By automating routine administrative tasks, ABTs can free up educators’ time to focus on teaching, mentorship and student support. In particular, an intelligent tutor could bridge the gap between students and lecturers: students may have reservations or fears about contacting lecturers when questions arise, and ABTs could fill this gap.
Possibly triggered by the improved availability of ABTs in the last year, our survey participants acknowledged a moderate to high impact of ABT usage on future teaching, learning, exams and specific tasks such as writing and programming. These trends could contribute to the development of concrete use cases and ABT usage in higher education.

4.2.4. Threats

Our survey showed that students and lecturers must acquire new skills and competencies, such as reasoning, critical thinking, data literacy, problem solving and interpreting and validating ABT results. They should be aware of their rights and responsibilities regarding data privacy, understand how their data are being used and make informed choices about the use of AI. Additionally, they should be aware of the dominance of AI-generated opinions and the potential for misuse. While ABTs have the potential to make teaching and learning more efficient and effective, it is important to critically evaluate their results. Such digital skills have to be taught at universities, including how to use these tools responsibly and ethically [4].
The lack of adequate training programs or support systems can hinder the effective adoption and use of AI tools, leading to skills gaps and potential resistance to change. Other skills may be weakened, resulting in a dependency on the technology and a loss of knowledge and creativity. For example, the competence of formulating thoughts may be reduced when generative AI can formulate text from a set of keywords.
We have identified several challenges that may hinder the actual implementation of ABTs in higher education: use without reflection; wrong information being learned and shared; plagiarism; and the difficulty of ensuring fair examination modes and fair grading in an educational future with ABT presence. The hype surrounding generative text producers has raised concerns in universities, leading to bans on their use in higher education contexts and investigations into plagiarism associated with their use [28]. The lack of guidelines for the responsible use of ABTs in higher education, particularly in assignments and exams, can lead to neglect and prohibition. A recent survey showed that 18% of the top 100 US universities ban ABTs by default unless instructors say otherwise [27]. AI-based tools rely on collecting and analyzing vast amounts of student data. This raises concerns about data privacy and confidentiality and the potential misuse of or unauthorized access to sensitive information. Institutions must ensure robust security measures and comply with data protection regulations. Furthermore, students are at risk of making data available to ABTs that are supposed to be protected due to copyright or other regulations they are not aware of.
In addition, the use of AI in education raises ethical concerns about transparency and accountability. It can be difficult to understand and explain how AI generates its output and makes decisions or recommendations, which can be problematic for lecturers and students who need to explain the origin of their results or trust the tools they use. Ensuring transparency and the ethical use of AI is essential to maintain the integrity of the educational process. Furthermore, the use of AI has the potential to compromise values through discrimination or any kind of bias. AI algorithms are susceptible to bias if they are trained on biased datasets or if they are not carefully monitored and tested. Unchecked bias can perpetuate discrimination, reinforce stereotypes or disadvantage certain groups of students, undermining the principles of fairness and equity in education.
Lastly, institutions of higher education face challenges in implementing and maintaining these tools due to the need for significant financial investments. Educational institutions with limited budgets may struggle to acquire and integrate these tools or to make licenses available to lecturers and students, potentially exacerbating resource inequalities. The use of AI-based tools may also lead to monopolies and a technological divide, limiting the equitable use of AI for all students and lecturers.

4.3. Limitations

4.3.1. Conduct of the Survey

Although this study was carefully planned and designed, it has some limitations. One limitation is that the questionnaire was only pre-tested for understandability, not validated. The sample size was three times larger than expected, but the participants do not represent all continents; in fact, only four countries are well represented. Participants were recruited from the authors’ professional and personal networks and may therefore constitute a biased sample. A total of 33.8% of participants named ICT as their main field of study, which is also the authors’ field. It is also possible that only participants with a general interest in AI and AI tools completed the questionnaire, which could introduce bias.
In addition, the design of the questionnaire had some limitations. The items “organization of study program” and “job description” were not marked as mandatory, which resulted in missing values; we therefore refrained from analyzing these items. All other questions were marked as mandatory, which on the one hand might have reduced the completion rate (drop-out rate of 41%) but on the other hand increased the quality of the data. Additionally, for many questions, we provided answer options, such as a list of ABTs, to choose from. This might have been a biased selection, although we tried to select the most important tools. To mitigate this limitation, we allowed participants to write down additional tools or thoughts in an optional free-text box.
The questionnaire included a lottery to increase response rates, particularly among students. The lottery could only be accessed from the final page of the questionnaire. However, despite the precautions intended to prevent abuse, it appears that this lottery encouraged questionnaire abuse/spam. A total of 26 spam-suspected cases were identified manually and eliminated before analysis. It cannot be guaranteed that all spam cases were eliminated, which could have distorted the findings of this study. For example, we noted that one lecturer selected ISCED-1 as their highest level of education, corresponding to primary education. Moreover, there is a chance that valid cases were excluded by our exclusion strategy, although only obvious cases based on duplicates or unrelated answers were removed.

4.3.2. Quantitative Analysis

With regard to inferential statistics, several test methods were used, whose assumptions must be assessed to ensure the interpretability and validity of the results.
In this survey, only non-parametric tests were conducted, for two reasons. First, our survey contains only nominal and ordinal variables, which reduces the number of applicable parametric tests. Second, we tested whether the Likert scale items follow a normal distribution using Pearson’s Chi-Squared test. The corresponding variables fulfill the test’s assumptions, namely, they are categorical and have expected cell frequencies ≥ 5. If the expected cell frequencies had been lower, a Kolmogorov–Smirnov test could have been used as an alternative. As a result, all items deviated significantly from a normal distribution, which prohibits the application of most parametric tests.
In this study, Likert-scaled variables were regarded as ordinal data, which is still an ongoing controversy in research [29]. The ordinal scale enabled us to conduct non-parametric tests to analyze differences in the central tendencies of variables between groups using the Mann–Whitney U test and independent samples Kruskal–Wallis test.
The conducted hypothesis tests yielded several significant differences. However, ordinal data should only be compared based on medians, which were identical for several of the compared groups. Moreover, some test summaries showed a significant difference although the post hoc analysis discounted these differences. This could be due to the fact that the chosen correction method, Bonferroni, might be too conservative [30]. Effect sizes were not computed for the hypothesis tests.
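For reference, a pairwise post hoc comparison with Bonferroni adjustment can be sketched as follows. The data and group sizes are synthetic, and this is one possible reconstruction of such a procedure rather than the exact SPSS workflow used in this study.

```python
from itertools import combinations

import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)
# Synthetic 5-point Likert ratings for four illustrative age groups.
groups = {g: rng.integers(1, 6, size=50)
          for g in ("18-24", "25-34", "35-44", "45-54")}

# Pairwise Mann-Whitney U tests, then Bonferroni-adjust the p-values.
pairs = list(combinations(groups, 2))
pvals = [stats.mannwhitneyu(groups[a], groups[b]).pvalue for a, b in pairs]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")

for (a, b), p, r in zip(pairs, p_adj, reject):
    print(f"{a} vs. {b}: adjusted p = {p:.3f}, significant = {r}")
```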

5. Conclusions

In conclusion, our study has provided valuable insights into students’ and lecturers’ experiences and perceptions of the use of ABTs in higher education. Through a comprehensive survey covering different disciplines and countries, we conducted a SWOT analysis to identify the strengths, weaknesses, opportunities and threats associated with ABTs in higher education.
Our findings underline the need for a strategic approach to realize the potential of ABTs in higher education. It is clear that new skills and competencies may be required of students and teachers to achieve successful implementation of ABTs. Research should be directed towards identifying and strengthening these skills and developing a competency model for the effective use of ABTs. Subsequently, curricula should be adapted to incorporate these competencies, and courses for teachers should be developed to ensure that both students and teachers are well prepared for the responsible and effective use of ABTs. This approach will also require adjustments to assessment methods to ensure fairness and equity for all learners.
However, it is important to acknowledge the existing gaps in our understanding of the effectiveness and potential unintended consequences of the use of ABTs in educational settings. These gaps include the impact on student–teacher relationships, the potential devaluation of self-images and the unknown mental health aspects of increased ABT availability and use. Further research and ongoing monitoring are essential to fully address these issues.
In addition, our study highlights the ethical and societal implications of ABTs in higher education. It is imperative to determine not only the technological and legal feasibility of specific AI applications, but also their desirability and societal impact. The use of AI applications may raise concerns about privacy and empowerment, particularly if the modeling of student learning behavior is not appropriately constrained.
Ultimately, the responsibility for addressing misinformation and the potential unintended consequences arising from the use of AI in higher education lies with universities and educational institutions. These institutions should establish policies, guidelines and ethical frameworks for the responsible use of AI. Prioritizing student welfare, data protection and equitable access to AI tools is crucial. They must also provide the necessary resources, training and support to educators and students to ensure the effective and ethical use of AI. Furthermore, the sustainability and environmental implications of operating ABT systems and training algorithms should not be overlooked.
In summary, our research highlights the complex and multifaceted nature of integrating ABTs into higher education. To maximize their benefits and minimize their drawbacks, a holistic and well-informed approach is essential, taking into account the diverse needs and concerns of students, teachers and the wider educational community.

Author Contributions

Conceptualization, K.D., D.R. and R.G.; methodology, K.D., D.R. and R.G.; set up of questionnaire, D.R. and R.G.; statistical analysis, D.R.; SWOT analysis and results interpretation, K.D. and D.R.; writing—original draft preparation, K.D. and D.R.; writing—review and editing, K.D., R.G. and D.R.; visualization, D.R. and R.G.; supervision, K.D.; funding acquisition, K.D., D.R. and R.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by BeLearn and Bern University of Applied Sciences.

Institutional Review Board Statement

The study design was submitted to the ethics committee of the Canton of Bern, which confirmed that no ethics approval was required for this survey.

Informed Consent Statement

All subjects completing the questionnaire were informed of the purpose of the study and of the data usage.

Data Availability Statement

Supplementary data, including the complete questionnaire, are available at https://doi.org/10.17605/OSF.IO/W6N9M (accessed on 15 October 2023).

Acknowledgments

We acknowledge the participation of all lecturers and students answering the questions of the survey.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. ABTs that participants claimed to have heard of (n = 297). The tools were provided for selection.

Group            | Lecturers   | Students    | Total
N                | 125         | 172         | 297
Google Translate | 115 (92.0%) | 161 (93.6%) | 276
ChatGPT          | 110 (88.0%) | 155 (90.1%) | 265
DeepL            | 100 (80.0%) | 141 (82.0%) | 241
DALL-E           | 61 (48.8%)  | 59 (34.3%)  | 120
OpenAI GPT-3     | 75 (60.0%)  | 76 (44.2%)  | 151
OpenAI Codex     | 30 (24.0%)  | 19 (11.0%)  | 49
Stable Diffusion | 31 (24.8%)  | 29 (16.9%)  | 60
GitHub Copilot   | 46 (36.8%)  | 49 (28.5%)  | 95
None             | 0 (0%)      | 1 (0.6%)    | 1
Table A2. ABTs that participants claimed to have used (n = 297). The tools were provided for selection.

Group            | Lecturers   | Students    | Total
N                | 125         | 172         | 297
Google Translate | 102 (81.6%) | 155 (90.1%) | 257
ChatGPT          | 80 (64.0%)  | 114 (66.3%) | 194
DeepL            | 88 (70.4%)  | 129 (75.0%) | 217
DALL-E           | 27 (21.6%)  | 34 (19.8%)  | 61
OpenAI GPT-3     | 32 (25.6%)  | 26 (15.1%)  | 58
OpenAI Codex     | 12 (9.6%)   | 6 (3.5%)    | 18
Stable Diffusion | 11 (8.8%)   | 13 (7.6%)   | 24
GitHub Copilot   | 15 (12.0%)  | 11 (6.4%)   | 26
None             | 0 (0%)      | 2 (1.2%)    | 2
Table A3. Impact on different tasks in percentages. For each task, the L and S rows give the percentages of lecturers and students, respectively.

Task                | Group | No Impact | Very Mild Impact | Mild Impact | Moderate Impact | Severe Impact
Creating text       | L     | 2.4       | 4.8              | 8.0         | 16.8            | 68.0
                    | S     | 0.6       | 7.0              | 8.8         | 32.6            | 50.9
Creating images     | L     | 3.4       | 6.0              | 23.1        | 38.5            | 29.1
                    | S     | 4.8       | 7.3              | 24.8        | 29.1            | 33.9
Editing images      | L     | 3.4       | 5.2              | 25.0        | 27.6            | 38.8
                    | S     | 3.0       | 9.8              | 18.3        | 29.3            | 39.6
Editing videos      | L     | 3.7       | 14.7             | 23.9        | 29.4            | 28.4
                    | S     | 2.5       | 15.5             | 21.7        | 28.6            | 31.7
Answering questions | L     | 5.6       | 4.8              | 8.9         | 21.8            | 58.9
                    | S     | 3.0       | 4.1              | 10.7        | 23.7            | 58.6
Programming *       | L     | 2.7       | 9.8              | 9.8         | 25.9            | 51.8
                    | S     | 0.6       | 6.5              | 7.7         | 31.0            | 54.2
Creating code       | L     | 6.2       | 6.2              | 9.7         | 23.9            | 54.0
                    | S     | 1.9       | 4.5              | 6.5         | 33.5            | 53.5

* including code feedback
Table A4. Impact on exams, teaching and learning, in percentages. Values are reported separately for lecturers and students.

| Question | Group | No Impact | Very Mild Impact | Mild Impact | Moderate Impact | Severe Impact |
|---|---|---|---|---|---|---|
| Impact on exams | Lecturers | 4.1 | 13 | 18.7 | 28.5 | 35.8 |
| Impact on exams | Students | 5.8 | 8.8 | 21.1 | 37.4 | 26.9 |
| Impact on teaching | Lecturers | 4.8 | 2.4 | 25 | 43.5 | 24.2 |
| Impact on teaching | Students | 3.6 | 13.6 | 26 | 34.3 | 22.5 |
| Impact on learning | Lecturers | 3.2 | 8.1 | 10.5 | 37.9 | 40.3 |
| Impact on learning | Students | 1.8 | 10 | 13.5 | 36.5 | 38.2 |
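
The percentage rows in Tables A3 and A4 are distributions of answers within each group. The snippet below is a minimal sketch, with hypothetical data rather than the authors' analysis code, of how such row-normalized percentages can be derived from raw responses with pandas:

```python
import pandas as pd

# Hypothetical raw survey responses: one row per participant.
df = pd.DataFrame({
    "group": ["Lecturer", "Student", "Student", "Lecturer", "Student"],
    "impact_on_exams": ["Severe", "Moderate", "Severe", "Mild", "Moderate"],
})

# Percentage distribution of answers within each group; each row sums to 100.
table = pd.crosstab(df["group"], df["impact_on_exams"], normalize="index") * 100
print(table.round(1))
```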

References

  1. Akinwalere, S.N.; Ivanov, V. Artificial intelligence in higher education: Challenges and opportunities. Bord. Crossing 2022, 12, 1–15.
  2. González-Calatayud, V.; Prendes-Espinosa, P.; Roig-Vila, R. Artificial intelligence for student assessment: A systematic review. Appl. Sci. 2021, 11, 5467.
  3. Maghsudi, S.; Lan, A.; Xu, J.; van der Schaar, M. Personalized education in the artificial intelligence era: What to expect next. IEEE Signal Process. Mag. 2021, 38, 37–50.
  4. O’Connor, S. Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse? Nurse Educ. Pract. 2022, 66, 103537.
  5. Sallam, M. ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns. Healthcare 2023, 11, 887.
  6. Leiker, D.; Finnigan, S.; Gyllen, A.R.; Cukurova, M. Prototyping the use of Large Language Models (LLMs) for adult learning content creation at scale. arXiv 2023, arXiv:2306.01815.
  7. Yan, L.; Sha, L.; Zhao, L.; Li, Y.; Martinez-Maldonado, R.; Chen, G.; Li, X.; Jin, Y.; Gašević, D. Practical and ethical challenges of large language models in education: A systematic literature review. arXiv 2023, arXiv:2303.13379.
  8. Lim, W.M.; Gunasekara, A.; Pallant, J.L.; Pallant, J.I.; Pechenkina, E. Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. Int. J. Manag. Educ. 2023, 21, 100790.
  9. Graham, F. Daily briefing: Will ChatGPT kill the essay assignment? Nature 2022.
  10. Stokel-Walker, C. AI bot ChatGPT writes smart essays – should academics worry? Nature 2022.
  11. Friederichs, H.; Friederichs, W.J.; März, M. ChatGPT in medical school: How successful is AI in progress testing? Med. Educ. Online 2023, 28, 2220920.
  12. Kasai, J.; Kasai, Y.; Sakaguchi, K.; Yamada, Y.; Radev, D. Evaluating GPT-4 and ChatGPT on Japanese medical licensing examinations. arXiv 2023, arXiv:2303.18027.
  13. Wang, Y.M.; Shen, H.W.; Chen, T.J. Performance of ChatGPT on the Pharmacist Licensing Examination in Taiwan. J. Chin. Med. Assoc. 2023, 86, 653–658.
  14. Chan, C.K.Y.; Hu, W. Students’ Voices on Generative AI: Perceptions, Benefits, and Challenges in Higher Education. arXiv 2023, arXiv:2305.00290.
  15. Ahmad, N.; Murugesan, S.; Kshetri, N. Generative Artificial Intelligence and the Education Sector. Computer 2023, 56, 72–76.
  16. van der Vorst, T.; Jelicic, N. Artificial Intelligence in Education: Can AI bring the full potential of personalized learning to education? In Proceedings of the 30th European Conference of the International Telecommunications Society (ITS), Helsinki, Finland, 16–19 June 2019; International Telecommunications Society (ITS): Calgary, AB, Canada, 2019.
  17. Farrokhnia, M.; Banihashem, S.K.; Noroozi, O.; Wals, A. A SWOT analysis of ChatGPT: Implications for educational practice and research. Innov. Educ. Teach. Int. 2023, 1–15.
  18. Houben, G.; Lenie, K.; Vanhoof, K. A knowledge-based SWOT-analysis system as an instrument for strategic planning in small and medium sized enterprises. Decis. Support Syst. 1999, 26, 125–135.
  19. Helfferich, C. Die Qualität Qualitativer Daten; VS Verlag für Sozialwissenschaften: Wiesbaden, Germany, 2011.
  20. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education – where are the educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 1–27.
  21. Kazemitabaar, M.; Chow, J.; Ma, C.K.T.; Ericson, B.J.; Weintrop, D.; Grossman, T. Studying the Effect of AI Code Generators on Supporting Novice Learners in Introductory Programming. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023; pp. 1–23.
  22. Chen, L.; Chen, P.; Lin, Z. Artificial Intelligence in Education: A Review. IEEE Access 2020, 8, 75264–75278.
  23. Balacheff, N.; Ludvigsen, S.; De Jong, T.; Lazonder, A.; Barnes, S.A.; Montandon, L. Technology-Enhanced Learning; Springer: Berlin/Heidelberg, Germany, 2009.
  24. Sovrano, F.; Ashley, K.D.; Brusilovsky, P.; Vitali, F. YAI4Edu: An Explanatory AI to Generate Interactive e-Books for Education. In Proceedings of the iTextbooks@AIED, Durham, UK, 27–31 July 2022; pp. 31–39.
  25. Chassignol, M.; Khoroshavin, A.; Klimova, A.; Bilyatdinova, A. Artificial Intelligence trends in education: A narrative overview. Procedia Comput. Sci. 2018, 136, 16–24.
  26. Seneviratne, I.; Perera, B.; Fernando, R.; Siriwardana, L.; Rajapaksha, U. Student and Lecturer Performance Enhancement System using Artificial Intelligence. In Proceedings of the 2020 3rd International Conference on Intelligent Sustainable Systems (ICISS), Palladam, India, 3–5 December 2020; pp. 88–93.
  27. Caulfield, J. University Policies on AI Writing Tools: Overview and List. Scribbr 2023.
  28. Sanders, T. Nearly 400 Uni Students Investigated for Using ChatGPT to Plagiarise Assignments. Metro 2023. Available online: https://metro.co.uk/2023/07/05/nearly-400-caught-using-chatgpt-to-plagiarise-uni-assignments-19075163/ (accessed on 17 October 2023).
  29. Allen, I.E.; Seaman, C.A. Likert scales and data analyses. Qual. Prog. 2007, 40, 64–65.
  30. Streiner, D.L.; Norman, G.R. Correction for Multiple Testing: Is There a Resolution? Chest 2011, 140, 16–18.
Figure 1. Percentage of participants who indicated that they had heard of the listed ABTs (n = 297). Tools were provided for selection.
Figure 2. Percentage of participants who indicated that they had used the listed ABTs (n = 297). Tools were provided for selection.
Figure 3. Percentage and total number of participants within fields of study who indicated that they had used ABTs in exams.
Figure 4. Word cloud generated from the free-text answers on the usage of ABTs in other contexts.
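
For readers who want to create a similar visualization, the following is a minimal sketch using the open-source Python wordcloud package; the example answers and output file name are hypothetical, and this is not the script used to produce Figure 4:

```python
# pip install wordcloud
from wordcloud import WordCloud

# Hypothetical free-text answers on ABT usage in other contexts.
answers = [
    "brainstorming ideas for seminar papers",
    "translation of course materials",
    "summarizing literature for assignments",
]

# Build one text corpus and render it as a word cloud image.
cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate(" ".join(answers))
cloud.to_file("abt_usage_wordcloud.png")
```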
Figure 5. Judgments of lecturers and students, in percentages, on the statement “The following applications of AI-based tools will have an impact on future education”.
Figure 6. Judgments of lecturers and students, in percentages, on the question “To what extent will AI-based tools have an impact on the following aspects of teaching and education?” (n = 297).
Figure 7. Percentage of participants who selected specific skills needed to use AI-based tools successfully.
Figure 8. Methods for universities to deal with upcoming ABTs (n = 297).
Table 1. Questions driving the SWOT analysis.

Internal features

Strengths:
  • What are the advantages of using ABTs in higher education?
  • What are the greatest achievements of ABTs in higher education?
  • In which tasks could ABTs support lecturers and students in teaching and learning?
  • How might exams, teaching and learning change with ABTs in the future?

Weaknesses:
  • What are the disadvantages of ABT usage in higher education?
  • Are ABTs useful for students and lecturers, and are they accepted by their users?
  • What needs improvement in the context of ABT usage in higher education?

External features

Opportunities:
  • Which external changes in the context of higher education will bring opportunities?
  • What are the current trends supporting ABT usage in higher education?
  • Are there any gaps in teaching and learning that can be filled by ABTs?
  • Can higher education benefit from ABTs?

Threats:
  • What are the current trends preventing ABT usage in higher education?
  • Which ethical concerns, including biases, could prevent ABT usage?
  • Which data privacy aspects could threaten ABT usage?
  • Are there serious concerns that impair or prevent the actual implementation of ABTs in higher education?
  • Which competencies are required for applying ABTs for learning and teaching?
  • Which challenges exist for universities regarding the use of ABTs?
Table 2. Summary of participants’ characteristics. Education levels follow the ISCED classification: primary education (ISCED 1), lower secondary education (ISCED 2), upper secondary education (ISCED 3), post-secondary education (ISCED 4), short-cycle tertiary education (ISCED 5), bachelor’s (ISCED 6), master’s (ISCED 7) and doctoral (ISCED 8). Cases in which participants stated that they were neither a lecturer nor a student were ignored (n = 8). Percentages denote the relative share of answers per group (lecturers/students).

| Group | Lecturers | Students | Total |
|---|---|---|---|
| N | 125 | 172 | 297 |
| Age group | | | |
| 18–24 | 4 (3.2%) | 104 (60.5%) | 108 |
| 25–34 | 22 (17.6%) | 58 (33.7%) | 80 |
| 35–44 | 29 (23.2%) | 9 (5.2%) | 38 |
| 45–54 | 48 (38.4%) | 1 (0.6%) | 49 |
| 55–64 | 21 (16.8%) | 0 (0%) | 21 |
| 65+ | 1 (0.8%) | 0 (0%) | 1 |
| Education level | | | |
| ISCED 1 | 1 (0.8%) | 2 (1.2%) | 3 |
| ISCED 2 | 1 (0.8%) | 1 (0.6%) | 2 |
| ISCED 3 | 1 (0.8%) | 70 (40.7%) | 71 |
| ISCED 4 | 2 (1.6%) | 26 (15.1%) | 28 |
| ISCED 5 | 0 (0%) | 13 (7.6%) | 13 |
| ISCED 6 | 10 (8.0%) | 51 (29.7%) | 61 |
| ISCED 7 | 44 (35.2%) | 8 (4.7%) | 52 |
| ISCED 8 | 66 (52.8%) | 1 (0.6%) | 67 |
| Field of study | | | |
| Generic programs and qualifications | 2 (1.6%) | 1 (0.6%) | 3 |
| Education | 8 (6.4%) | 10 (5.8%) | 18 |
| Arts and humanities | 8 (6.4%) | 14 (8.1%) | 22 |
| Social sciences, journalism and information | 5 (4.0%) | 9 (5.2%) | 14 |
| Business, administration and law | 10 (8.0%) | 12 (7.0%) | 22 |
| Natural sciences, mathematics and statistics | 15 (12.0%) | 10 (5.8%) | 25 |
| Information and Communication Technologies | 49 (39.2%) | 53 (30.8%) | 102 |
| Engineering, manufacturing and construction | 12 (9.6%) | 12 (7.0%) | 24 |
| Agriculture, forestry, fisheries and veterinary | 1 (0.8%) | 5 (2.9%) | 6 |
| Health and welfare | 15 (12.0%) | 45 (26.2%) | 60 |
| Services | 0 (0%) | 1 (0.6%) | 1 |
Table 3. Tasks for which ABTs have been used within project work. Percentages for lecturers and students refer to the row total; percentages in the Total column refer to the total number of responses (n = 297).

| Task | Lecturers | Students | Total |
|---|---|---|---|
| Translation | 75 (40.5%) | 110 (59.5%) | 185 (63.6%) |
| Writing text | 52 (46.4%) | 60 (53.6%) | 112 (38.5%) |
| Writing code | 21 (32.3%) | 44 (67.7%) | 65 (22.3%) |
| Summarizing | 31 (42.5%) | 42 (57.5%) | 73 (25.1%) |
| Topic analysis | 29 (50.9%) | 28 (49.1%) | 57 (19.6%) |
| Image generation | 27 (50.9%) | 26 (49.1%) | 53 (18.2%) |
| Image editing | 10 (50.0%) | 10 (50.0%) | 20 (6.9%) |
| Image classification | 13 (72.2%) | 5 (27.8%) | 18 (6.2%) |
| Image segmentation | 11 (73.3%) | 4 (26.7%) | 15 (5.2%) |
| Image captioning | 10 (52.6%) | 9 (47.4%) | 19 (6.5%) |
Table 4. Results of hypothesis tests regarding the perceived future impact of ABTs, comparing levels of highest education. Significance values of pairwise comparisons are adjusted by the Bonferroni correction. Groups/comparison group (Cmp. Grp.): IA: ISCED levels 1–3; IB: ISCED levels 4–6; IC: ISCED levels 7–8. Adjusted significance (Adj. Sig.).

| Impact | Group | Cmp. Grp. | Direction | Adj. Sig. | Kruskal–Wallis Test |
|---|---|---|---|---|---|
| Impact on future education due to applying ABTs… | | | | | |
| to create text | IC | IA | higher | 0.000 | H = 25.63, p < 0.001 |
| to create text | IC | IB | higher | 0.000 | H = 25.63, p < 0.001 |
| to answer questions | IB | IC | lower | 0.017 | H = 7.85, p = 0.020 |
| for programming/code feedback | IB | IA | lower | 0.035 | H = 10.52, p = 0.005 |
| for programming/code feedback | IB | IC | lower | 0.008 | H = 10.52, p = 0.005 |
| for code creation | IB | IA | lower | 0.037 | H = 10.11, p = 0.006 |
| for code creation | IB | IC | lower | 0.011 | H = 10.11, p = 0.006 |
| Perceived impact of ABTs… | | | | | |
| on exams | IB | IA | lower | 0.020 | H = 13.89, p < 0.001 |
| on exams | IB | IC | lower | 0.001 | H = 13.89, p < 0.001 |
| on teaching | IC | IA | higher | 0.010 | H = 12.80, p = 0.002 |
| on teaching | IC | IB | higher | 0.006 | H = 12.80, p = 0.002 |
| on learning | IB | IC | lower | 0.039 | H = 6.38, p = 0.041 |
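
To make the test procedure behind Table 4 concrete, the sketch below runs an omnibus Kruskal–Wallis test across the three ISCED groups, followed by Bonferroni-adjusted pairwise comparisons. The data are hypothetical, and the choice of Mann–Whitney U tests for the pairwise step is our assumption; the published analysis may have used a different post hoc procedure.

```python
from itertools import combinations

from scipy.stats import kruskal, mannwhitneyu

# Hypothetical Likert ratings (1 = no impact ... 5 = severe impact) for one
# survey item, grouped by education level as in Table 4.
ratings = {
    "IA": [5, 4, 5, 3, 5, 4, 5, 5],  # ISCED 1-3
    "IB": [3, 2, 4, 3, 3, 2, 4, 3],  # ISCED 4-6
    "IC": [5, 5, 4, 5, 3, 5, 4, 5],  # ISCED 7-8
}

# Omnibus test: do the three groups differ in their impact ratings?
h_stat, p_value = kruskal(*ratings.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_value:.3f}")

# Post hoc pairwise comparisons (assumed here: Mann-Whitney U),
# Bonferroni-adjusted for the number of group pairs tested.
pairs = list(combinations(ratings, 2))
for g1, g2 in pairs:
    _, p = mannwhitneyu(ratings[g1], ratings[g2], alternative="two-sided")
    p_adj = min(p * len(pairs), 1.0)  # Bonferroni: multiply by number of tests
    print(f"{g1} vs {g2}: adjusted p = {p_adj:.3f}")
```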
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
