Article

Attitude Mining Toward Generative Artificial Intelligence in Education: The Challenges and Responses for Sustainable Development in Education

1 School of Public Administration, Yanshan University, Qinhuangdao 066004, China
2 Graduate School, Yanshan University, Qinhuangdao 066004, China
* Author to whom correspondence should be addressed.
Sustainability 2025, 17(3), 1127; https://doi.org/10.3390/su17031127
Submission received: 20 December 2024 / Revised: 21 January 2025 / Accepted: 27 January 2025 / Published: 30 January 2025

Abstract

Generative artificial intelligence (GenAI) technologies based on large language models are becoming a transformative power that is reshaping the future of education. Although the impact of GenAI on education is a key issue, the challenges GenAI poses to the sustainability of education, and the strategies for responding to them, have rarely been explored from a public perspective. This data mining study selected ChatGPT as a representative GenAI tool. Five topics and 14 modular semantic communities of public attitudes towards using ChatGPT in education were identified through Latent Dirichlet Allocation (LDA) topic modeling and semantic network community discovery on 40,179 user comments collected from social media platforms. The results indicate public ambivalence about whether GenAI technology is empowering or disruptive to education. On the one hand, the public recognizes the potential of GenAI in education, including intelligent tutoring, role-playing, personalized services, content creation, and language learning, where effective communication and interaction can stimulate users’ creativity. On the other hand, the public is worried about the impact of users’ technological dependence on the development of innovative capabilities, the erosion of traditional knowledge production by AI-generated content (AIGC), the undermining of educational equity by potential cheating, and the potential substitution of students’ skills as GenAI passes or performs well on skills tests. In addition, some irresponsible and unethical usage behaviors were identified, including the direct submission of AIGC and the use of GenAI tools to pass similarity checks. This study provides a practical basis for educational institutions to re-examine teaching and learning approaches, assessment strategies, and talent development goals and to formulate policies on the use of AI to promote the vision of AI for sustainable development in education.

1. Introduction

Education and technology have long been coupling, interacting with, and reshaping each other [1]. Advances in technology can significantly contribute to changes in education in terms of teaching methods and learning paradigms [2]. Since its emergence, artificial intelligence (AI) technology, with its characteristics of autonomy, interactivity, and adaptability, has gradually penetrated all aspects of education [3], including teaching scenarios [4], learning scenarios [5], and decision-making scenarios for the management of education institutions [6]. In 2015, the United Nations adopted the 2030 Agenda for Sustainable Development and established 17 Sustainable Development Goals [7]. Among these, the vision of providing inclusive and equitable quality education with lifelong learning opportunities for all, i.e., Sustainable Development Goal 4 (SDG4), is based on the fundamental principle that education is a basic human right. From a technological perspective, AI can support ubiquitous learning environments and has shown great potential to reduce the digital divide and ensure quality education, contributing to the achievement of SDG4. In 2019, UNESCO published a report on the challenges and opportunities of AI for sustainable development in education and argued, based on examples from different countries, for strategies and pathways to use AI to improve learning and promote inclusive and equitable education [8]. Academics are also paying more attention to the ways in which AI technology can promote the sustainable development of education, pointing out that AI will empower sustainable development in education and enable an all-round transformation of teaching models, learning environments, learning content, and assessment mechanisms. Examples include the AI-embedded design of sustainability teaching and learning frameworks [9], the AI-driven construction of sustainable virtual learning environments [10], AI-enabled personalized and inclusive learning content experiences [11], and AI-based technology-assisted assessment and monitoring of student learning levels in sustainable education [12].
On 30 November 2022, OpenAI launched ChatGPT based on the GPT-3.5 architecture and quickly iterated on it; ChatGPT 4.0, with better performance, was released on 14 March 2023. Unlike conventional databases and search engines, the tool breaks through the technical bottleneck of natural language processing: relying on powerful algorithms, computing power, and training on large datasets, it generates content autonomously and creatively and possesses powerful text generation ability. The tool has been widely discussed because of its excellent performance in several assessments, such as the Uniform Bar Exam (UBE) and the Law School Admission Test (LSAT) [13,14].
The rapid optimization and development of generative artificial intelligence (GenAI), such as ChatGPT, in terms of information processing, learning ability, and problem-solving ability make it a new trend in technology-empowered education [15]. Education institutions are beginning to commit themselves to actively exploring effective paths for integrating GenAI into education, and some potential application scenarios for GenAI in education are gradually being recognized. In terms of student learning, GenAI can provide study tutoring and homework assistance [16], help customize personalized learning programs [17], and take on the role of a virtual teaching assistant [18]. In terms of teacher instruction, GenAI can provide instructional support to help design instructional programs, create syllabi [19], and assist with the assessment of homework and writing [20]. Emerging case studies of GenAI in education show the potential of GenAI tools to promote writing, academic research, and language learning. For example, GenAI can facilitate the emergence of writing ideas [21,22], improve the efficiency of academic writing [23], and enhance the listening, speaking, reading, and writing skills of language learners [24].
Even as the benefits of GenAI for education are recognized, the crisis the tool poses to education is gradually being exposed. Some scholars have described this as the subversion of education [25,26,27]: they argue that the education system faces a disruptive challenge as GenAI mounts a full-scale assault on the boundaries of education. Technological change leads to the alienation of the knowledge, subjects, and processes of education, as well as to ethical and governance risks in education. N. Wang et al. summarized the challenges of applying GenAI in education in five areas: the crisis of academic integrity, response texts with errors and biases, the risk of over-reliance, a widening digital divide, and privacy and security [28].
Undoubtedly, the embedding of GenAI in educational scenarios has triggered education institutions to think from the philosophy of education (what kind of people to train), teaching and learning processes (how to train people), assessment strategies (how to detect unethical use) to the existence (the significance of the existence of education institutions and educators) [29]. Education institutions must adopt more proactive and informed strategies to deal with the changing challenges in the field of education due to GenAI tools [30].
The social context in which a technology is situated is one of the important factors determining whether and to what extent it is applied [31,32]. Identifying public attitudes toward a technology can provide a reference and orientation for policy-making by practitioners in related fields [33,34,35]. Results exploring public attitudes toward the use of GenAI such as ChatGPT in education are emerging, and this research focuses on two main areas. The first uses the technology acceptance model (TAM) or the unified theory of acceptance and use of technology (UTAUT) as a guiding framework to explore students’ [36,37] or teachers’ [38] attitudes towards the acceptance and use of GenAI tools in education. The second explores students’ or teachers’ attitudes towards GenAI tools in specific educational areas such as physics courses [39] and essay writing [40] based on interviews or purpose-designed experiments.
Previous studies have consistently shown that the public has a positive attitude toward the use of GenAI in education [41,42]. However, most of these studies explore student and teacher attitudes, often presenting fragmented perspectives from specific populations in specific scenarios. Revealing the general attitudes of the public towards the use of GenAI in the education system therefore becomes more important. Some scholars have noted that exploring public attitudes on social media towards using emerging technologies in education can provide educators with management experience. Adeshola and Adepoju present a web mining and machine learning approach to exploring the impact of ChatGPT on education [43]. So et al. explored public opinions on using GenAI in education by analyzing public comments under the Korean Broadcasting Corporation’s YouTube posts related to the integration of GenAI into education, noting that the technology directly impacts changes in the way learning and assessment are implemented [44]. Adeshola and Adepoju analyzed 3870 tweets through topic modeling to explore the impact of GenAI on academia, highlighting that educational institutions should develop policies or guidelines to reduce the potentially disruptive impact of the technology on the academic field and preserve the academic environment [45]. These studies explored public attitudes towards using GenAI technologies in education through topic modeling, placing greater emphasis on the explanation of the topics while overlooking the public debates and complex attitudes within the semantic communities surrounding each topic. The public’s debates and behaviors around these topics reflect the complexities of GenAI embedded in education, and analyzing them can provide a more comprehensive practical basis for educational institutions to respond to GenAI for sustainable development in education. This study aims to explore public attitudes towards using ChatGPT in education by analyzing public comment texts on Chinese social media through LDA topic modeling and semantic network analysis. It also seeks to summarize the challenges posed by GenAI technology to the sustainable development of education and to explore governance strategies for responding to these challenges. Firstly, social media platforms allow the public to express their insights publicly [46]. The public may choose to conceal some ChatGPT usage and attitudes in interview and questionnaire studies due to social expectations and pressures [47]. Social media, because of its anonymity, privacy protection, and real-time exchange of feedback, makes it easier for the public to express their views on the use of certain features, even if the use of that feature in that situation is ethically wrong. Secondly, text mining techniques are one of the effective means of exploring public attitudes towards new technologies [48,49]. Li et al. noted that using text mining methods, such as sentiment analysis, topic modeling, and social network analysis, to analyze GenAI- and education-related texts posted by the public can identify the public’s complex attitudes regarding their support for and concerns about GenAI in education [50]. This knowledge can assist those involved in making guidelines for GenAI applications. Finally, research within a Chinese context can enrich the latest developments and unique challenges of GenAI’s impact on education in different cultural contexts.
This is because China’s strict firewall policy restricts access to foreign websites and services such as ChatGPT [51]. Even so, there has been a groundswell of discussion about the use of ChatGPT in China [52]. Considering this unique context can provide a deeper understanding of the complex dynamics at play in the interplay of cultural contexts and policy regulations.
This study collected public comment texts from December 2022 to February 2024. It aims to map the public’s attitudes about the application of GenAI such as ChatGPT in the education system through LDA topic modeling and semantic network community discovery under specific topics. Interpreting public attitudes toward the uses of GenAI in education provides an important basis for education institutions to limit the technological boundaries of the use of GenAI such as ChatGPT in education and to promote a benign transformation of the education system with the assistance of technology. The main research questions of this study are as follows:
RQ1: What are the main aspects of public attitudes about the use of GenAI such as ChatGPT in education?
RQ2: What is being discussed by the public under specific topics? What are the trends in the evolution of these discussions?
RQ3: How should education institutions develop and adapt policies to respond to public attitudes about the use of GenAI in education?

2. Data, Method and Research Design

2.1. Data

2.1.1. Data Collection

This study mined public attitudes towards GenAI by obtaining and analyzing the comment texts published by the public on the use of ChatGPT in education for the following two reasons. On the one hand, in terms of technological applications, the public is more inclined to express its opinion about a specific tool. On the other hand, ChatGPT is one of the internationally renowned GenAI tools that reached 300 million weekly active users in December 2024.
The public has different preferences for using different types of social media. To more completely understand the attitudes of the Chinese public towards the use of GenAI in the education system, Python (version 3.8.10) code was written to crawl the texts related to ChatGPT and education posted by the public on four mainstream social media platforms in China, namely, Bilibili, Jinritoutiao, Weibo, and Zhihu, with the keywords of “ChatGPT and education” as the search terms. The data include usernames, posting times, and text information from Bilibili’s video guides, Jinritoutiao’s information, Weibo’s posts, and Zhihu’s questions and answers (Q&A). Bilibili is a video platform that had more than 100 million daily active users in the third quarter of 2024. The platform can upload long, medium, and short videos and is highly inclusive. The platform has become one of the effective channels for users to learn how to use the tool since the emergence of ChatGPT. Jinritoutiao is an information-based fusion media platform that allows official media accounts to publish relevant media information. Since the launch of ChatGPT, some official media have published discussions and opinions about GenAI on the platform. Weibo is an instantaneous and open platform for information interaction that allows the public to discuss topics of interest in real time. Since the appearance of ChatGPT, hashtags containing ChatGPT have been on the hot-search list many times, and the public has expressed their opinions about the tool on this platform. Zhihu is a Q&A community platform. ChatGPT, as a typical representative of disruptive and innovative technological tools, has aroused the curiosity and questions of the public, and these discussions have formed a Q&A community with ChatGPT as the topic on the Zhihu platform. Analyzing texts related to the use of ChatGPT in education on these platforms in the form of instructional video messages, reporting information, hashtag posts, and Q&A communities can reveal general public attitudes towards the tool. The timeframe for this analysis is limited to December 2022 through February 2024 (15 months), during which 53,282 texts were collected. December 2022 was chosen as the starting month for this study because ChatGPT was launched by OpenAI on 30 November 2022, and subsequently, there has been a lot of interest and discussion about the applicability of GenAI in education. GPT-4 was released in March 2023. This model is capable of handling more subtle instructions than GPT-3.5. It allows the user to specify any visual or verbal task and has demonstrated human-level performance on a variety of professional benchmarks. Its outstanding functional performance and potential application risks have aroused more profound public discussions. In general, the development status of emerging technology in the first year of its release often becomes an important basis for policy formulation, as the technological characteristics and market response during this period will directly affect the orientation and strength of future policies. Therefore, this study used February 2024 as the cut-off date, about 12 months after GPT-4 was released. Further textual information was screened. On the one hand, the same comment information from the same user was removed. On the other hand, textual information unrelated to the application of ChatGPT in the education system was manually marked and deleted. Finally, 40,179 valid data were obtained.
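A minimal sketch of this screening step is given below; it is an illustration under stated assumptions, in which the file name and the columns (username, post_time, text, and a manually assigned is_relevant flag) are hypothetical rather than the study’s actual data layout.

```python
import pandas as pd

# Hypothetical input: one row per crawled comment, with columns
# "platform", "username", "post_time", "text", and a manual "is_relevant" label.
raw = pd.read_csv("chatgpt_education_comments.csv")

# Remove identical comments posted by the same user.
deduped = raw.drop_duplicates(subset=["username", "text"])

# Keep only texts related to the application of ChatGPT in the education system.
valid = deduped[deduped["is_relevant"]]

print(f"{len(valid)} valid texts retained")  # the study reports 40,179
```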

2.1.2. Data Pre-Processing

The stop words list for this study was constructed from the stop words lists of the Harbin Institute of Technology, Baidu, the Chinese stop words list, and the Machine Intelligence Laboratory of Sichuan University, together with a customized stop words list, to reduce the effect of noise on the analysis results. A dictionary of long words was constructed to ensure that research-relevant long words such as “digital technology” and “human-machine cooperation” were not segmented. Considering the public’s different expressions for synonyms such as GenAI, Generative Artificial Intelligence, Generative AI, and GAI, a synonym dictionary was constructed to improve the interpretability of the text-mining results. On this basis, the 40,179 valid texts were processed with Jieba segmentation through the steps of word segmentation, stop word filtering, removal of special symbols, synonym substitution, and keyword extraction to form the initial corpus of the study.
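The pre-processing pipeline could be implemented roughly as follows; this is a sketch under stated assumptions, where the dictionary and stop-word file names and the synonym entries are illustrative rather than the study’s actual resources, and valid refers to the screened data from the previous step.

```python
import re
import jieba

# Illustrative resource files: long-word user dictionary and merged stop words list.
jieba.load_userdict("long_words_dict.txt")  # e.g., "digital technology", "human-machine cooperation"
with open("merged_stopwords.txt", encoding="utf-8") as f:
    stop_words = {line.strip() for line in f if line.strip()}

# Illustrative synonym mapping used to unify different expressions of the same concept.
synonyms = {"GAI": "GenAI", "生成式人工智能": "GenAI", "生成式AI": "GenAI"}

def preprocess(text):
    text = re.sub(r"[^\w\s]", " ", text)              # remove special symbols
    tokens = jieba.lcut(text)                         # Jieba word segmentation
    tokens = [synonyms.get(t, t) for t in tokens]     # substitute synonyms
    return [t for t in tokens if t.strip() and t not in stop_words]

# 'valid' is the screened DataFrame from the data collection step.
corpus_tokens = [preprocess(doc) for doc in valid["text"]]
```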

2.2. Text Mining Methods

2.2.1. LDA Topic Modeling

LDA topic modeling is an inductive, unsupervised machine learning method that is able to infer potential topics in large amounts of unstructured text data. Its generative probabilistic model contains three analysis layers: the corpus (the collection of documents used for analysis), each document in the corpus, and the list of words in each document. The topic model assumes that the entire document set is a probability distribution of topics, and each topic is a probability distribution of feature words [53]. The joint distribution formula for the model is shown as Equation (1):
$$p(\theta, z, w \mid \alpha, \beta) = p(\theta \mid \alpha) \prod_{n=1}^{N} p(z_n \mid \theta)\, p(w_n \mid z_n, \beta) \qquad (1)$$
where θ is the topic distribution, α and β are a priori parameters estimated based on actual experience, default values are generally used, and topic z and topic word w can be obtained by the Gibbs sampling algorithm.
In this study, potential topics of public attitudes towards the application of ChatGPT in the education system were explored by applying the LdaModel class from gensim to the initial corpus. Three results were obtained from the LDA topic modeling. The first was a set of topics representing public attitudes towards the application of ChatGPT in the education system. The second was a list of the most important words for each topic. The third was the most important topic for each public comment text in the corpus. The first two results were used to identify and interpret the topics the public was concerned about regarding the application of ChatGPT in the education system, and the last result was used to explore the evolutionary trend of public attitudes towards specific topics.
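A minimal sketch of this step with gensim is shown below, assuming corpus_tokens holds the segmented documents from the pre-processing step; the hyperparameters (passes, random seed) are illustrative.

```python
from gensim import corpora
from gensim.models import LdaModel

# Build the bag-of-words representation of the initial corpus.
dictionary = corpora.Dictionary(corpus_tokens)
bow_corpus = [dictionary.doc2bow(doc) for doc in corpus_tokens]

# Fit the topic model (the study settled on 5 topics; see Section 2.2.1).
lda = LdaModel(corpus=bow_corpus, id2word=dictionary,
               num_topics=5, passes=10, random_state=42)

# Results 1 and 2: the topics and their most important words.
for topic_id, words in lda.show_topics(num_topics=5, num_words=10, formatted=False):
    print(topic_id, [w for w, _ in words])

# Result 3: the dominant topic of each comment, used later for the time-series analysis.
dominant = [max(lda.get_document_topics(bow), key=lambda x: x[1])[0] for bow in bow_corpus]
```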
Topic perplexity and coherence scores are often used to determine the optimal number of topics [54]. Topic perplexity measures the predictive power of a topic, but a model with a low topic perplexity score does not mean that the topic can be interpreted well by humans [55]. Topic coherence measures the interpretability of a topic by calculating the semantic similarity of high-probability words in the topic [56]. Cao et al. noted that better topic modeling results in a topic structure for each topic that is comprehensible, meaningful, and semantically aggregated [57]. In this study, the optimal number of topics for the initial corpus was determined through topic coherence:
$$\mathrm{coherence}(V) = \sum_{(v_i, v_j) \in V} \log \frac{D(v_i, v_j) + \epsilon}{D(v_j)}$$
where $D(v_i, v_j)$ is the number of documents containing both words $v_i$ and $v_j$, $D(v_j)$ is the number of documents containing the word $v_j$, and $\epsilon$ is a smoothing factor.
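The coherence-based selection of the number of topics could be carried out along the following lines; the candidate range of topic numbers is an illustrative assumption, and the "u_mass" measure is used here because it matches the document co-occurrence form of the equation above.

```python
from gensim.models import CoherenceModel, LdaModel

# Train candidate models and score them with UMass coherence.
scores = {}
for k in range(2, 11):                                   # illustrative candidate range
    model = LdaModel(corpus=bow_corpus, id2word=dictionary,
                     num_topics=k, passes=10, random_state=42)
    cm = CoherenceModel(model=model, corpus=bow_corpus,
                        dictionary=dictionary, coherence="u_mass")
    scores[k] = cm.get_coherence()

best_k = max(scores, key=scores.get)
print(best_k, scores)
```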
The change in the topic coherence score is shown in Figure 1. The coherence score is high when the number of topics is 5 or 6. When the number of topics is 5, Topic 4 and Topic 5 represent public attitudes toward ChatGPT in teaching and in research, respectively, and the partial overlap between these two topics is reasonable. When the number of topics is 6, there are varying degrees of overlap between Topics 1 and 2 and between Topics 4 and 5, and Topics 1, 2, and 3 are more closely linked to each other. This study therefore determined the optimal number of topics to be 5 by comparing the pyLDAvis visualization results.

2.2.2. Semantic Network Construction and Community Discovery for Texts Under Specific Topics

In order to capture the underlying meanings of key phrases and explore the ideological, rhetorical, and public attitudinal dynamics of the text under the topic, this study identified the semantic structure of public attitudes on a specific topic by constructing a semantic network of feature words for documents under that topic, where the network nodes are feature words and the node and node label size reflect the betweenness centrality of that node. The betweenness centrality reflects the degree of importance of the node in the semantic network [58]:
$$BC_i = \sum_{s \neq i \neq t} \frac{n_{st}^{i}}{g_{st}}$$
where $BC_i$ denotes the betweenness centrality of node $i$, $n_{st}^{i}$ is the number of shortest paths between nodes $s$ and $t$ that pass through node $i$, and $g_{st}$ is the total number of shortest paths connecting $s$ and $t$.
The Louvain community discovery algorithm was used to measure text communities in the semantic network under specific topics. By finding the maximum modularity of the network, it is possible to identify community structures where the nodes in a cluster have tight links and the connections between cluster groups are looser [59]:
$$Q = \frac{1}{2m} \sum_{i,j} \left[ A_{ij} - \frac{k_i k_j}{2m} \right] \delta(c_i, c_j)$$
where $Q$ is the modularity, $\frac{1}{2m}$ is the reciprocal of twice the number of edges, $A_{ij}$ indicates whether nodes $i$ and $j$ are directly connected, $k_i$ and $k_j$ are the degrees of nodes $i$ and $j$, and $\delta(c_i, c_j)$ is an indicator function whose value is 1 if and only if nodes $i$ and $j$ belong to the same community, and 0 otherwise.
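A sketch of the semantic network and community-discovery step for the documents under one topic is given below; the whole-document co-occurrence window, the edge-weight threshold, and the use of NetworkX's built-in Louvain implementation (available in NetworkX ≥ 2.8) are assumptions of this illustration, and corpus_tokens and dominant come from the LDA sketch above.

```python
from collections import Counter
from itertools import combinations
import networkx as nx

# Documents assigned to one topic (topic 0 here).
topic_docs = [doc for doc, t in zip(corpus_tokens, dominant) if t == 0]

# Count co-occurrences of feature words within the same document.
cooc = Counter()
for doc in topic_docs:
    for w1, w2 in combinations(sorted(set(doc)), 2):
        cooc[(w1, w2)] += 1

# Build the semantic network, filtering low-frequency edges to sparsify it.
G = nx.Graph()
for (w1, w2), weight in cooc.items():
    if weight >= 5:                                   # illustrative threshold
        G.add_edge(w1, w2, weight=weight)

bc = nx.betweenness_centrality(G)                     # node and label size in the figures
communities = nx.community.louvain_communities(G, weight="weight", seed=42)

for i, comm in enumerate(communities):
    top5 = sorted(comm, key=lambda n: bc[n], reverse=True)[:5]
    print(f"community {i}: {top5}")
```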
In this study, low-frequency feature words were filtered to sparsify the network, thereby presenting a clearer and more interpretable semantic network and community structure. The top 5 feature words in each community ranked by betweenness centrality were identified. The frequency of these feature words in all documents in the month was calculated and normalized, which was written as follows:
$$f_w = \frac{n_w}{m_d}$$
where $f_w$ refers to the relative frequency of the feature word $w$, $n_w$ is the number of times the feature word $w$ occurs in all documents, and $m_d$ represents the number of documents. Min-Max normalization was used to map $f_w$ to the interval [0, 1]:
$$f_w' = \frac{f_w - \mathrm{MIN}}{\mathrm{MAX} - \mathrm{MIN}}$$
Further, the time-series heat map of the key feature words of the community was drawn to explore the evolutionary trend of the text communities under specific topics.
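The normalized keyword-frequency heat map could be produced roughly as follows; the month extraction from post_time, the keyword list top5, and the plotting choices are illustrative assumptions building on the earlier sketches.

```python
from collections import Counter
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Align each document's tokens with its posting month (assumes 'valid' and
# 'corpus_tokens' from the earlier steps and post_time strings like "2023-02-15").
months = valid["post_time"].str[:7].tolist()
keywords = top5                                        # top-5 words of one community

rows = []
for month in sorted(set(months)):
    docs = [doc for doc, m in zip(corpus_tokens, months) if m == month]
    counts = Counter(t for doc in docs for t in doc)
    row = {w: counts[w] / len(docs) for w in keywords}  # relative frequency f_w
    row["month"] = month
    rows.append(row)

freq = pd.DataFrame(rows).set_index("month")
normalized = (freq - freq.min()) / (freq.max() - freq.min())  # Min-Max mapping to [0, 1]

sns.heatmap(normalized.T, cmap="YlOrRd")
plt.show()
```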

2.3. Research Design

Figure 2 shows the research framework of this study. Python was used to write the web crawler code and to crawl texts about the application of ChatGPT in the education system from the four selected social media platforms. The initial corpus was formed after data filtering and pre-processing of these texts. Topics of public attitudes about the application of ChatGPT in education were explored by applying LDA topic modeling to the initial corpus. The level of public interest in specific topics was explored through time-series analysis of the number of texts under each topic and word-cloud analysis of the feature words. What the public focuses on under each topic, and the evolutionary trends of that content, were explored through semantic network construction and community analysis of the texts under the topic. The results of the public attitude analysis and semantic network community discovery provide an empirical basis for education institutions to respond to societal concerns and to guide the ethical use of GenAI in the education system.

3. Results

Figure 3 shows the results of the topic visualization of the comment text about the application of ChatGPT in the education system through the pyLDAvis library developed by Sievert and Shirley [60]. PyLDAvis maps topics from the original dimension of the number of words to two dimensions by principal component analysis. The size of the circles in the left panel reflects the topic’s popularity (the overall weight of feature words assigned to the topic in the initial corpus), and the distance between the circles reflects the similarity between the topics. The right panel reflects the 30 most salient feature words of the initial corpus. Among them, “ChatGPT”, “AI”, “write”, “paper”, “chat”, and “study” were the six most salient feature words in the initial corpus. This emphasized the public concern about the application of ChatGPT in writing and learning.
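A minimal sketch of producing this visualization is shown below; recent pyLDAvis releases expose the gensim bridge as pyLDAvis.gensim_models (older versions used pyLDAvis.gensim), and the output file name is illustrative.

```python
import pyLDAvis
import pyLDAvis.gensim_models as gensimvis

# Project the fitted model into the two-dimensional inter-topic distance map
# and list the most salient terms, then save the interactive page.
vis = gensimvis.prepare(lda, bow_corpus, dictionary)
pyLDAvis.save_html(vis, "lda_topics.html")
```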

3.1. Topic 1: Public Attitudes Toward the Impact of ChatGPT on Innovation Ability

Figure 4a shows the top 10 feature words ranked by weight in Topic 1. Among them, four feature words, i.e., “ability”, “innovation”, “human”, and “model”, had weights greater than or equal to 0.010. This reflected public concern about the impact of ChatGPT on the development of innovation ability.
Figure 4b illustrates the trend of the number of texts for Topic 1 over time and the word cloud map of the feature words for the corresponding months. Public interest in the topic peaked in February 2023. Analysis of the word cloud map for that month shows that the public was concerned about the impact on students’ creativity when ChatGPT was used for writing.
Figure 4c shows the semantic networks and text communities under Topic 1. The topic identified three modular communities. The top five feature words ranked by betweenness centrality in the orange community were “ChatGPT”, “ability”, “study”, “innovation”, and “human”. They form a close community with the words “understand”, “knowledge”, “content”, “promote”, “judge”, “challenge”, “risk” and so on. The community reflected the public debate on whether ChatGPT can contribute to the development of innovation ability. Some publics believed that communicating and interacting with ChatGPT can stimulate students’ thinking and creativity (“Students can ask questions, discuss ideas, and solve problems through conversations with ChatGPT to develop critical thinking and creative skills”, TZ012023070034). In particular, ChatGPT made human beings more concerned about knowledge innovation (“ChatGPT frees people from the brain work of repetitive labor and focuses more on the innovative areas of knowledge”, TZ012023060070). In contrast, some publics believed that ChatGPT can quickly generate detailed answers and explanations based on user questions. It directly led to the phenomenon of questions being more important than answers and the potential for the human mind to degenerate and become subservient to AI (“The mature development of GenAI tools like ChatGPT will cripple the thinking ability of common people”, TB012023050019; “Students may habitually seek answers to models without thinking about and exploring the questions themselves. This will result in students’ thinking becoming simplistic and mechanized, and they will not be able to develop independent thinking and innovative skills”, TJ012023030044). The top five feature words ranked by betweenness centrality in the green community were “AI”, “tool”, “information”, “future”, and “education”. This community reflected public thinking about the future of education innovation development under the influence of ChatGPT. Since the process of ChatGPT embedded in education is irreversible, future education should fully utilize ChatGPT as a tool and emphasize the cultivation of thinking skills and innovation abilities (“The future direction of education should be for children to learn to exploration, thinking, innovation, problem-solving skills, and to develop their creativity”, TW012023020226). The top five feature words ranked by betweenness centrality in the purple community were “technology”, “generative”, “data”, “train”, and “model”. The public of this community emphasized more the fact that the output content generated by ChatGPT is predicted, reorganized and constructed based on the original training data and cannot innovate knowledge (“ChatGPT is just a machine. Humans instill into it the knowledge that already exists. It is not able to innovate”, TW012023020152; “ChatGPT is essentially a language model. Its ability comes from fitting to training data, and there is no logical reasoning or judgment in its operating mechanism”, TW012023040035).
Figure 4d shows the evolutionary trends of the top five feature words ranked by betweenness centrality within each community of Topic 1. The high frequency of feature words in the orange community reflected the ongoing public debate about whether ChatGPT enhances innovation. The frequency of feature words for green communities showed a low level. It reflected the public focus on the impact of ChatGPT on innovativeness in the present and a lack of foresight on how education can be reformed in the future to respond to the negative impact of ChatGPT on knowledge and innovation. The purple community had a relatively average level of feature word frequency. It reflected that the public has recognized the essential difference between ChatGPT’s content output logic and knowledge innovation logic.

3.2. Topic 2: Public Attitudes Toward the Applicability of ChatGPT’s Basic Functions to Education System

Figure 5a illustrates the top 10 feature words ranked by weight in Topic 2. The four feature words “provide”, “respond”, “understand”, and “generative” reflect the basic functions of ChatGPT. The feature words “language” and “article” reflect the applicable fields of ChatGPT. Topic 2 shows the public’s concern about the applicability of ChatGPT’s basic functions to education.
Public discussion of the applicability of ChatGPT’s basic functions to education centered on the first half of 2023 (Figure 5b). Analysis of the word cloud maps for those six months reflects that the public paid more attention to the application of ChatGPT in areas such as educational tutoring, information retrieval, knowledge comprehension, and article writing.
Figure 5c shows the semantic networks and text communities under Topic 2. The topic identified three modular communities. The top five feature words ranked by betweenness centrality in the orange community were “ChatGPT”, “AI”, “provide”, “respond”, and “generative”. These feature words formed a tight textual network with terms such as “teacher”, “student”, “auxiliary”, and “teaching assistant”. It demonstrated that ChatGPT can function in roles such as a teacher or teaching assistant (“ChatGPT can be used as an intelligent teaching assistant to provide students with personalized study guidance and question-answering services”, TJ022023020022). The top five feature words ranked by betweenness centrality in the green community were “write”, “content”, “article”, “suggestion”, and “word”. This community reflected the public’s exploration of the use of ChatGPT in article writing, especially in providing writing advice and adjusting and optimizing the structure of writing (“ChatGPT can provide students with feedback on their written work, including suggestions for improvement, corrections to grammar, and suggestions for structure”, TZ022023050135). The top five feature words ranked by betweenness centrality in the purple community were “way”, “need”, “tip”, “thinking”, and “inquiry”. The community explored more varied functions of ChatGPT in education, such as giving tips on problem-solving, assisting in knowledge querying, providing content explanations, and role-playing (“We can also assign ChatGPT an identity and allow it to provide professional responses in a role-playing format”, TZ022023050106).
Figure 5d shows the evolutionary trends of the top five feature words ranked by betweenness centrality within each community of Topic 2. The frequency of feature words in the orange community was consistently high. This reflected the continued public interest in the possible application areas of ChatGPT in teaching and learning. The intensity of feature words in the green community gradually increased over time. It indicated a growing recognition of ChatGPT’s value in article writing. Comparatively, the public showed a relatively low level of interest in the applications of ChatGPT’s basic functions in educational areas beyond teaching and writing.

3.3. Topic 3: Public Attitudes Toward the Response Measures Taken by Stakeholders Regarding ChatGPT

Figure 6a illustrates the top 10 feature words ranked by weight in Topic 3. These feature words included the verbs “ban” and “bring” and nouns referring to stakeholders such as “school”, “user”, and “university”. The topic reflected public concerns about stakeholders’ responses to ChatGPT.
Public discussion on this topic peaked in February 2023 (Figure 6b). Analysis of the word cloud map for that month shows that the public paid more attention to some universities’ strategies of banning the use of ChatGPT during this period.
Figure 6c shows the semantic networks and text communities under Topic 3. The topic identifies three modular communities. The top five feature words ranked by betweenness centrality in the orange community were “ChatGPT”, “AI”, “research”, “university”, and “write”. The community reflected public concern about the education institution’s position on ChatGPT. Education institutions had different positions regarding whether to allow GenAI technologies such as ChatGPT to provide services for education. Some schools adopted temporary prohibitions aimed at preventing plagiarism and fairness issues arising from the use of ChatGPT in violation of the rules (“Universities such as the University of Hong Kong and the Paris Institute of Political Studies, and school districts in the United States, such as New York City and Seattle, banned the use of ChatGPT in education and teaching to avoid the phenomenon of using GenAI to complete assignments or cheating on examinations”, TJ032023050004). Some schools were exploring pathways to promote ethical use of GenAI in education with an embracing attitude (“The Hong Kong University of Science and Technology announced that teachers could decide whether to grant students access to ChatGPT”, TW032023030065). The top five feature words ranked by betweenness centrality in the green community were “content”, “chat”, “generative”, “respond”, and “data”. The community emphasized the relevant measures taken by the government and the education sector to guarantee data security in the use of the ChatGPT process (“The education sector needs to ensure that all schools comply with relevant data protection regulations, and safeguard students’ personal data when using GenAI tools such as ChatGPT”, TJ032023050005; “Some European countries previously banned ChatGPT due to data security concerns. There are risks associated with ChatGPT’s data scraping practices, including the unauthorized collection, use, and disclosure of personal information”, TJ032023060007). The top five feature words ranked by betweenness centrality in the purple community were “technology”, “development”, “innovation”, “world”, and “field”. The community demonstrated a scenario in which the boom in GenAI technology products and the gradual deepening of their impact on education have occurred since the launch of the ChatGPT (“China’s big language model products have been released intensively, such as Baidu’s ERNIE Bot, Alibaba’s Qwen, iFLYTEK’s SparkDesk, JD’s Yanxi, HUAWEI’s PanguLM, and so on”, TW032023050010; “Microsoft’s Copilot is a compelling product. It can auto-write articles, auto-summarize, auto-generate charts, and even auto-program and auto-develop”, TW032023060080).
Figure 6d shows the evolutionary trends of the top five feature words ranked by betweenness centrality within each community of Topic 3. The frequency of feature words in the orange community was at a high level. It reflected the continued concern of the public regarding the school’s position on ChatGPT. The low frequency of feature words in the green community reflected the lack of public attention to data security. The frequency of feature words in the purple community was at a relatively average level. It reflected public interest in the potential of other GenAI products developed by the market to be used in education.

3.4. Topic 4: Public Attitudes Toward the Use of ChatGPT in Teaching and Learning

Figure 7a illustrates the top 10 feature words ranked by weight in Topic 4. Among them, the weights of these five feature words “ChatGPT”, “AI”, “teacher”, “student”, and “study” were all greater than 0.020. It reflected the public’s concern about the use of ChatGPT in teaching and learning.
In February 2023 and June 2023, the public had a high level of concern for using ChatGPT in teaching and learning (Figure 7b). The public focused on the use of ChatGPT in assignments in February 2023. June 2023 was the time for the Chinese College Entrance Examination, and the public was focused on the performance of the ChatGPT in different subject examinations.
Figure 7c shows the semantic networks and text communities under Topic 4. The topic identified two modular communities. The top five feature words ranked by betweenness centrality in the orange community were “ChatGPT”, “AI”, “chat”, “teacher”, and “examination”. These feature words formed a tight textual network with terms such as “homework” and “test question”. Some members of the public were concerned about ChatGPT’s performance in different tests (“ChatGPT was used to answer questions from examinations in subjects such as English, Geography, Politics, History, Mathematics, Physics, Chemistry, and Biology. Its accuracy reached 76% and its score rate reached 67%. In particular, it was noted that it did relatively well in the liberal arts exam”, TJ042023020184; “The English essay written by ChatGPT is too much in line with the essay requirements in the IELTS exam. AI is amazing”, TW042024010054). Some behaviors of using ChatGPT to cheat were also revealed (“ChatGPT’s biggest help to me is probably that I’ll spend less time to write paper assignments for those boring courses”, TW042023030181). The top five feature words ranked by betweenness centrality in the green community were “learn”, “student”, “education”, “think”, and “teach”. This community reflected the public’s thinking about the future of teaching and learning reforms under the influence of ChatGPT (“If it’s easy to get answers by asking ChatGPT questions, then anyone can do it, not just qualification holders. School entrance exams and skill tests will face fundamental changes”, TZ042023070024; “There’s going to be a big change in education. Education institutions need to think about what makes sense to teach”, TZ042023020311).
Figure 7d shows the evolutionary trends of the top five feature words ranked by betweenness centrality within each community of Topic 4. The frequency of feature words in the orange community was at a high level. This reflected the continued interest of the public in ChatGPT’s performance on assignments and exams. There was a decreasing trend in the frequency of feature words in the green community over time. It reflected the gradual dilution of public thinking about future changes in teaching and learning over time.

3.5. Topic 5: Public Attitudes Toward the Use of ChatGPT in Scientific Research

Figure 8a illustrates the top 10 feature words ranked by weight in Topic 5. The weights of the three feature words “ChatGPT”, “write” and “paper” were more than 0.040. It reflected the public’s interest in the application of ChatGPT in the field of scientific research, especially focusing on the functional performance of ChatGPT in assisting the writing of academic papers.
Figure 8b illustrates the trend of the number of texts for Topic 5 over time and the word cloud map of the feature words for the corresponding months. Although the trend in the number of texts showed some volatility, its overall level remained at a relatively high level. It reflected the fact that the application of ChatGPT in scientific research has been a hot topic of public discussion.
Figure 8c shows the semantic networks and text communities under Topic 5. The topic identified three modular communities. The top five feature words ranked by betweenness centrality in the orange community were “ChatGPT”, “write”, “AI”, “paper”, and “chat”. These feature words formed a tight textual network with terms such as “SCI”, “code”, and “data analysis”. The community reflected public concerns about the use of ChatGPT throughout the academic paper writing process (“Many students, both national and international, have begun to use ChatGPT to assist with academic paper writing tasks”, TJ052023120015), such as paper topic selection (“ChatGPT can choose a topic and write a paper outline and paper content based on the topic”, TJ052024010018) and data analysis (“ChatGPT can do data analysis, automatically generate complex charts and analyze complex relationships between data and topics”, TJ052024010023). The top five feature words ranked by betweenness centrality in the green community were “article”, “modification”, “content”, “academic”, and “thesis writing”. This community reflected the public’s interest in the application of ChatGPT in thesis writing. Some irresponsible and unethical usage was hyped up on social media (“It is so convenient for college and graduate students to use AI to complete their thesis”, TB052024020311; “Master’s and PhD thesis writing is so easy by using ChatGPT. From topic selection to experiments, surveys, and SPSS data analysis, ChatGPT can help you complete a roughly 80,000-words first draft of your thesis, including references and appendixes, in three days”, TJ052023120019). The top five feature words ranked by betweenness centrality in the purple community were “thesis”, “reduce similarity”, “plagiarism check”, “report”, and “graduation”. The community reflected new thinking about academic ethics and how to detect new types of academic misconduct. On the one hand, the public believes that ChatGPT’s ability to generate text based on user prompts can help articles pass similarity checks, even though this behavior is unethical and academically inappropriate (“ChatGPT can be used to reduce the similarity of articles”, TZ052023120006). On the other hand, the need to detect the AI content of articles is gradually increasing as the impact of ChatGPT on article writing deepens (“How to use GPTZero to recognize whether an article was written by AI or by a human”, TB052023070595). However, the public also recognizes that, even with the use of AI content detection tools, there is a risk that it may not be possible to accurately determine whether an article uses GenAI-generated content (“It is meaningless to use an AI content tool to check AI-generated content. It’s adversarial training. When GPTZero incorrectly recognizes most human-authored papers as authored by an AI, it shows that ChatGPT’s expressive descriptions are already as good as humans’”, TW052023050101). In particular, if the judgment of whether a student has committed academic misconduct during the writing process is based solely on the results of an AI content detection tool, it can easily lead to an “erosion of trust” between teachers and students (“It’s so wrong, Articles written entirely by myself are recognized by GPTZero as being written by AI”, TW052023040039).
Figure 8d shows the evolutionary trends of the top five feature words ranked by betweenness centrality within each community of Topic 5. The frequency of feature words in the orange community was at a high level. It reflects the continued public interest in ChatGPT’s performance in academic paper writing. The overall level of frequency of green and purple community feature words was low, but there was a slight upward trend in frequency over time. It reflected that the public began to gradually pay attention to the use of ChatGPT in thesis writing and thinking about the new type of academic misconduct caused by ChatGPT and the strategy to identify it.

4. Discussion

The analysis of the 40,179 texts posted by the public on social media about the application of ChatGPT in the education system found that the public recognizes the complexity of ChatGPT’s impact on education, and their attitudes neither completely accept nor completely reject the use of ChatGPT as an emerging technological tool in education [54]. The public focuses on five aspects of the application of ChatGPT in education, i.e., the impact of ChatGPT on the development of creative ability, the adaptation of ChatGPT’s basic functions to education, the changes in the external ecology of education due to ChatGPT, the application of ChatGPT in teaching and learning, and the application of ChatGPT in scientific research. The topic popularity shows a deep public debate about the impact of ChatGPT on innovation ability, while a comparison of the number of texts under each topic reveals greater public concern about the use of ChatGPT in scientific research. The time series graphs of the number of texts within specific topics show that discussions about the application of ChatGPT in education began to gain popularity in China after February 2023. RQ1 and RQ2 of this study were addressed in the Results Section. This section responds to RQ3 by exploring the management implications for educational institutions responding to GenAI, drawing on the public attitude analysis results and past research findings.

4.1. The Opportunities for GenAI in Education

Topic 2 highlights public concern about the applicability of ChatGPT’s basic functions to education. Growing evidence shows the potential benefits that GenAI technologies can bring to education, such as in information querying [61,62], personalized learning [63], language learning [64], scientific research [65], virtual mentoring [39], and article writing [35]. Teachers are also exploring and designing effective frameworks for incorporating GenAI, such as ChatGPT, into their teaching [66]. From this perspective, despite the obvious weaknesses and inherent limitations of GenAI tools such as ChatGPT, their significant educational advantages mean that education institutions should explore effective paths to ensure that a human-centered approach is applied to their use in education [67].

4.2. The Challenges for GenAI in Education

4.2.1. Challenge 1: GenAI and the Sustainability of Innovation Ability

The public debate in Topic 1 on the impact of ChatGPT on innovation ability reveals a developmental paradox of innovation ability in the context of GenAI embedded in education. On the one hand, GenAI tools such as ChatGPT have shown great potential for cognitive development and knowledge innovation [68,69,70]. On the other hand, the convenience of accessing knowledge through ChatGPT can potentially reinforce users’ reliance on it [71]. Frequent, unrestricted, and uncritical use of ChatGPT-generated content may hinder the innovative development of knowledge [72,73].
GenAI outputs text that seems to match the prompts and is coherent in content by abstracting the massive amount of data into features through predefined algorithms and rules and calculating the probability of words, phrases, or sentences that could be used as responses based on the user’s prompts [74]. In terms of technological logic, firstly, the content generated by GenAI is just a process of extraction, reorganization, matching, and recommendation based on the original training data. This process is only an inductive process, not a reasoning or dialectical process [75], and does not lead to innovation in knowledge. Secondly, the textual information is abstracted into symbolic features disconnected from the real world, leading to the cutting and fragmentation of the original body of knowledge [76]. It hinders the construction of the system of knowledge innovation. Thirdly, GenAI is embedded with reinforcement learning techniques based on human feedback, and the output prefers content that rewards high scores. Thus, the output inevitably prioritizes consistency, leading to stylistic homogenization [77]. Finally, the original data used to train GenAI may be biased, discriminatory, and stereotypical, and thus, the generated content may be factually incorrect or misleading to some degree [78]. In the application logic, acquiring knowledge and problem-solving answers becomes more convenient under GenAI embedding, and people may belittle the accumulation of knowledge. When students are accustomed to seeking answers through efficiency-oriented direct questioning with GenAI, any deep thinking, trial and error, exploring, correcting, white space, and other ways of thinking and cognizing may be discarded due to inefficiency. The way of convenient and efficient access to knowledge is likely to reinforce learners’ reliance behavior [79]. Learners do not need to understand and use knowledge deeply, and habitual mechanical requirements can replace higher-order cognitive constructs. It will gradually lead to learners being trapped in superficial learning, with shallow knowledge structures and limited knowledge frameworks. The opportunity for advanced cognitive skill development will be denied [28].

4.2.2. Challenge 2: GenAI and the Reshaping of the Educational External Context

In Topic 3, the public focuses on stakeholder responses to ChatGPT in education since its launch. This emphasizes the pressure ChatGPT exerts to reshape the external ecology of the education system and to change education internally. Changes in the external ecology of education are expressed in three aspects. The first is that, as shown by the purple community in Topic 3, various types of GenAI products are emerging, and the trend of integrating GenAI tools into education scenarios has become inevitable. Many technology companies have embedded GenAI technology cores and concepts into their products to improve the quality of service of their existing technology products [80]. Some companies have also developed ChatGPT-like tools; for example, Microsoft developed the Microsoft Copilot tool, and Google launched the Google Gemini chatbot. This reflects the reality that education institutions can no longer achieve their banning purposes by prohibiting the use of a particular product, and they should instead turn to strategic approaches such as regulating use and anticipatory supervision. Secondly, the external ecology in which the education system is situated, such as the job market and talent demand, is gradually being reshaped by GenAI such as ChatGPT [81]. On the one hand, unlike the replacement of manual labor by previous AI tools, the rapid iteration and development of GenAI tools means that some routine, basic mental labor may also be replaced. This requires education institutions to rethink what kind of people they should be training. What is the meaning of education if the knowledge and skills that students have learned over many years can be easily replaced by GenAI? How should students be trained to adapt to the future development of GenAI? On the other hand, acquiring the skills to proficiently and critically use GenAI tools has the potential to enhance students’ competitive advantage in the future job market. Education institutions must pay close attention to harmonizing education development with the needs of employers. Thirdly, governments are formulating policies to safeguard data security and privacy and to promote the safe and regulation-compliant use of GenAI products in education. The Italian Data Protection Authority banned ChatGPT because of worries about data breaches, and ChatGPT resumed service in Italy after adding privacy disclosure and control features [82]. The Department for Education of the UK has published Generative Artificial Intelligence in Education to clearly define its position on the legalized development of GenAI in education [83]. In these contexts, the response of education institutions to GenAI tools such as ChatGPT should shift towards encouraging and facilitating the ethical use of the tools, for example, by publishing relevant guidelines that clearly define the principles and scope of use of GenAI as a learning support tool for students, a teaching aid for teachers, and a management tool for education administrators. In fact, since ChatGPT’s launch, education institutions’ strategies for responding to it have already shifted from a range of bans on its use to conditional use. Some schools and university groups have already published relevant guidelines for the use of GenAI [84], such as Harvard University, Stanford University, and Yale University in the US, and the Russell Group of universities in the UK.
However, many institutions have adopted a wait-and-see strategy when faced with this pressure for change, which is not conducive to the transformational development of education in the long run [25].

4.2.3. Challenge 3: GenAI and the Need for Change in Education Assessment

Topic 4 shows the public’s longstanding concerns about ChatGPT’s performance on the test. In fact, ChatGPT not only performs well in China’s college entrance examinations and homework tests. The results of more studies show that ChatGPT has been performing passably or well on more examinations. Kung et al. noted that ChatGPT performance on the United States Medical Licensing Exam (USMLE) was near passing levels [85]. Katz et al. found that GPT-4 passed the Uniform Bar Examination (UBE) and outperformed human performance in five subjects through a designed experiment [86]. One study noted that ChatGPT could pass the civil services examinations in India with little additional training [87]. Education institutions need to reflect on whether traditional assessment and testing methods are still suitable for testing students’ learning outcomes and skill levels when various assessments and tests can be answered and passed by using GenAI tools such as ChatGPT.

4.2.4. Challenge 4: GenAI and the Ensuring of Academic Integrity

Topic 5 emphasizes the public’s concern about the use of ChatGPT in academic papers and thesis writing. However, an analysis of the content of the texts related to this topic reveals that the way some public use GenAI in scientific research is unregulated and unethical. A study noted that the vast majority of undergraduate dissertations written using ChatGPT were able to reach or come close to a pass level under the dissertation assessment metrics often used in the social sciences [15]. Education institutions need to pay more attention to academic ethical issues in light of GenAI’s ability to assist students in completing thesis papers that can be passed. The academic misconduct triggered by GenAI is different from previous textual cheating (mainly plagiarizing existing public material). The generated text from GenAI is produced based on training on large publicly available data. On the one hand, the generated text is completely new and cannot be recognized by relying on traditional similarity checks, despite the lack of knowledge innovation. On the other hand, the generated textual material is difficult to distinguish from human-authored material [45]. This situation creates challenges for maintaining academic integrity.

4.3. Response Strategies for GenAI in Education

4.3.1. Response Strategy 1: Developing Students’ Advanced Cognitive Thinking Skills

How to ethically embed an exogenous technology such as ChatGPT into educational applications in a way that promotes innovation ability is a question education institutions must consider. They should be wary of the negative impact that GenAI's application in education may have on the development of innovative abilities, owing to technological shortcomings and the dependency behaviors it may trigger. Emphasizing the development and shaping of critical thinking in future education is an effective response to the shallow cognitive structures and homogenized thinking that the embedding of ChatGPT in educational contexts may produce [88]. Critical thinking is necessary for the pursuit of truth and the creation of knowledge. The focus of education institutions should therefore not remain on acquiring, summarizing, and generalizing knowledge, the parts most easily replaced by GenAI tools such as ChatGPT. Instead, they should turn to the cognitive dimension, developing students' critical thinking and creativity, exploring inquiry-based teaching methods, and transforming students' ways of knowing, understanding, observing, and interpreting things [89] to meet the reform needs created by the embedding of GenAI. Specifically, in technological scenarios in which GenAI is embedded in education, students need critical thinking skills in two respects. First, students should be able to critically question, judge, and sift output information whose evidence is unclear and whose quality is mixed [90]. Second, students should be able to think critically and independently when confronted with the information cocoon constructed by efficient but homogenized outputs [91]. In summary, education institutions should explore effective ways to balance utilizing the benefits of GenAI with developing students' advanced cognitive thinking skills [92].
The development of advanced cognitive thinking skills becomes particularly important as GenAI influences the sustainability of education, and some scholars have pointed out the importance of incorporating knowledge of large language models into curriculum development [78]. According to constructivist learning theory, learners build their understanding by linking new learning to what they have already acquired [93]. Only when technical knowledge of GenAI, such as how large language models work, is incorporated into the curriculum are students likely to evaluate the output of GenAI tools more critically, attending to its truthfulness and accuracy and avoiding the cognitive impact of biased information. In addition, teachers' guidance in verifying accuracy and making factual judgements about GenAI outputs can foster critical thinking [94].

4.3.2. Response Strategy 2: Updating Educational Objectives and Strategies

The boom in GenAI products and their potential to replace basic mental labor require educational institutions to adjust their educational goals and strategies. Cox noted that as GenAI deepens its impact on social systems, there should be a focus on virtue-based character development [29]. Wang et al. suggested that strengthening students' AI literacy and improving their skill in using AI could help them benefit in the job market [95]. Using a quasi-experimental approach, Zhong et al. emphasized the importance of developing integration skills when GenAI is embedded in education [96]. Despite their differing views, these scholars agree that the shift in educational goals should focus on areas that cannot be replaced by GenAI. As GenAI technology continues to develop, tasks that rely on memory and simple logical reasoning will gradually be automated. Educational institutions should therefore take into account the characteristics of their educational stages and update their training objectives to adapt to the impact of GenAI on the external environment of education.

4.3.3. Response Strategy 3: Revisiting Teaching and Learning Assessment Methods

Under the influence of GenAI, the validity and fairness of traditional assessment methods have been called into question. Written assignments no longer provide a true picture of a student's mastery of knowledge, especially when the tool can generate assignments of relatively good quality [97]. An effective response is to redesign assessment by drawing on alternative measures. Face-to-face examinations and presentation formats are emerging as the forms of assessment that teachers trust most and that are most effective at preventing students from having GenAI tools write answers for them, although this may be a step backwards [98]. It has also been noted that adjusting question design, such as increasing the number of multiple-choice questions, can test students' mastery of complex problems [99]. In addition, assessments such as group project presentations and peer evaluations are attracting attention [15]. More comprehensive and innovative assessment strategies are also being explored [100]; for example, a comprehensive assessment module spanning behavior, process, and outcome could focus on competencies that GenAI cannot replicate, including students' ability to critically analyze, evaluate, and synthesize information, to achieve a holistic assessment of students.

4.3.4. Response Strategy 4: Carefully Developing Guidelines to Maintain Academic Integrity

Determining whether an article is original work or generated by GenAI is key to maintaining academic integrity. Although organizations have developed AI content detection tools such as Winston AI, Originality AI, and GPTZero, their results are not completely reliable [101]. AI content detection tools can certainly serve as one measure for gauging the originality of an article and for assisting educators in determining whether a paper plagiarizes AI-generated content. However, if detection results are relied upon as the sole judgment of whether cheating has occurred, teaching practice will fall into a vicious cycle of detection and counter-detection [72]: students select a GenAI tool to write their papers, educators turn to an AI content detection tool to verify originality, and students then look for an anti-detection tool that "humanizes" the GenAI-generated text so that it passes the detector's check.
Rather than acting as gatekeepers against academic misconduct, educators should return to the role of education itself and explore effective ways to guide students to use GenAI ethically, taking advantage of its applications while reducing the risk of misconduct [102]. Students should not be cast as inherently prone to cheating; in fact, the vast majority of students do not want to cheat [98]. The potential for cheating can be reduced by ensuring fairness and clarifying the grey areas of GenAI use. Education institutions need to communicate the consequences of academic dishonesty [97] and delineate the grey areas of ChatGPT use in writing. Specifically, concrete examples should be provided to help students distinguish AI-assisted from AI-generated content and to guide them toward academic and applied practices consistent with academic integrity. It should be made clear that presenting GenAI-generated or GenAI-derived output as one's own constitutes academic misconduct, and students should be required to promptly disclose or cite their use of GenAI tools and plug-ins.

5. Conclusions

Using text data mining methods, this study analyzed 40,179 public comment texts related to the use of ChatGPT in education posted between December 2022 and February 2024. The study found that the public tended to discuss the impact of ChatGPT on innovation ability at the greatest length, reflecting a development paradox of innovation ability when GenAI is embedded in the education system. GenAI undoubtedly shows great potential for knowledge innovation, yet its inherent technical shortcomings and the dependency behaviors that arise in its use may hinder the development of innovative ability. Education institutions should therefore explore effective paths to balance utilizing the benefits of GenAI with developing students' advanced cognitive thinking skills.
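To make the analysis pipeline summarized above more concrete, the snippet below is a minimal sketch of LDA topic modeling with coherence-based selection of the topic number (cf. Figure 1). It is not the authors' actual code: the toy corpus, the candidate topic range, and hyperparameters such as passes and the c_v coherence measure are illustrative assumptions, implemented with the widely used gensim library.

```python
# Minimal sketch (illustrative, not the authors' code): LDA topic modeling with
# coherence-based selection of the topic number, assuming gensim is installed and
# `comments` holds pre-tokenized, stop-word-filtered user comments.
from gensim import corpora
from gensim.models import CoherenceModel, LdaModel

comments = [
    ["chatgpt", "homework", "essay", "teacher"],
    ["exam", "cheating", "ai", "detector"],
    ["tutor", "language", "learning", "chatgpt"],
    # ... in the study, 40,179 comments would be preprocessed into this form
]

dictionary = corpora.Dictionary(comments)                # word <-> id mapping
corpus = [dictionary.doc2bow(doc) for doc in comments]   # bag-of-words vectors

best_k, best_score, best_model = None, float("-inf"), None
for k in range(2, 11):                                   # candidate topic numbers
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k,
                   random_state=42, passes=10)
    score = CoherenceModel(model=lda, texts=comments, dictionary=dictionary,
                           coherence="c_v").get_coherence()
    if best_model is None or score > best_score:         # keep the most coherent model
        best_k, best_score, best_model = k, score, lda

print(f"Selected {best_k} topics (coherence = {best_score:.3f})")
for topic_id, words in best_model.print_topics(num_words=10):
    print(topic_id, words)
```

A comparable workflow, with the per-topic semantic networks then partitioned by Louvain community detection and their feature words ranked by betweenness centrality [58,59], would correspond to the community-level views shown in Figures 4–8.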
The study also found that a large portion of the public focuses on the use of ChatGPT in scientific research, and some unethical usage behaviors appeared in the comment texts, prompting deeper reflection on research ethics. Although AI content detection tools can assist educators in assessing the originality of articles, they are not completely accurate and can easily erode trust between teachers and students. This study therefore emphasizes that educators should return to the role of education itself rather than acting as gatekeepers against academic misconduct, and that teachers and students should not be drawn into a vicious cycle of detection and counter-detection. Guaranteeing fairness of use and clarifying the grey areas of GenAI use are more conducive to its rational use in scientific writing.
The study also pointed out that ChatGPT's basic functions are suited to many aspects of education. Its significant educational advantages require education institutions to explore effective pathways for using the tool within a human-centered approach to education. However, many education institutions still adopt a wait-and-see strategy; they should instead promote the ethical use of the tool with an embracing and positive attitude.
In addition, the study found that the external ecology in which the education system is situated, such as the job market and the demand for talent, is gradually being reshaped by GenAI tools such as ChatGPT. Education institutions need to think deeply about how to train students to adapt to the future development of GenAI, keeping educational development aligned with employer needs.
Finally, public attitudes toward the use of ChatGPT in teaching and learning are also noteworthy. ChatGPT has achieved passing or good results on multiple examinations, highlighting the pressure to change traditional assessment and examination methods. Education institutions need to adopt more comprehensive and innovative assessment strategies to achieve a holistic assessment of learners.
Although this study provides a reference and orientation for education institutions in responding to the pressures of change brought by GenAI technology and in formulating relevant policies by analyzing public attitudes toward the application of ChatGPT in the education system, some limitations should be acknowledged. On the one hand, because the data come from a particular cultural context, the study may not capture international public attitudes toward ChatGPT. Further research could analyze public comments related to ChatGPT and education on internationally prominent social media platforms such as Twitter and YouTube to reveal the prevailing attitudes of the international public toward the use of ChatGPT in education. On the other hand, because text mining abstracts text into word features, some research-worthy textual information that is highly relevant to the external environment may be overlooked. The context in which ChatGPT is embedded in education is complex and dynamic, and qualitative analysis is further needed to provide theoretical insights for education institutions responding to the technological change pressures of GenAI tools such as ChatGPT. In addition, this study used ChatGPT as an example to explore the impact of GenAI technology on the future sustainability of education; further research would benefit from exploring innovative pathways for using other GenAI tools, such as text-generation or video-generation tools, to enhance educational sustainability.

Author Contributions

Conceptualization, Y.W., X.Z. and Y.Z.; methodology, Y.W. and X.Z.; data curation, Y.Z.; formal analysis, Y.W. and X.L.; investigation, Y.W.; visualization, Y.W.; writing—original draft, Y.W.; supervision, X.Z., X.L. and Y.Z.; funding acquisition, X.Z., X.L. and Y.Z.; validation, X.Z. and Y.Z.; resources, X.L.; project administration, X.L. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Social Science Foundation of China (grant number: BGA230252).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available in OSF at [https://osf.io/zftr2/] (accessed on 22 December 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Atchley, P.; Pannell, H.; Wofford, K.; Hopkins, M.; Atchley, R.A. Human and AI Collaboration in the Higher Education Environment: Opportunities and Concerns. Cogn. Res. 2024, 9, 20. [Google Scholar] [CrossRef] [PubMed]
  2. Essien, A.; Bukoye, O.T.; O’Dea, C.; Kremantzis, M. The Influence of AI Text Generators on Critical Thinking Skills in UK Business Schools. Stud. High. Educ. 2024, 49, 865–882. [Google Scholar] [CrossRef]
  3. Qin, F.; Li, K.; Yan, J. Understanding User Trust in Artificial Intelligence-based Educational Systems: Evidence from China. Brit. J. Educ. Technol. 2020, 51, 1693–1710. [Google Scholar] [CrossRef]
  4. Popenici, S.A.D.; Kerr, S. Exploring the Impact of Artificial Intelligence on Teaching and Learning in Higher Education. Res. Pract. Technol. Enhanc. Learn. 2017, 12, 22. [Google Scholar] [CrossRef] [PubMed]
  5. Chen, Y.; Jensen, S.; Albert, L.J.; Gupta, S.; Lee, T. Artificial Intelligence (AI) Student Assistants in the Classroom: Designing Chatbots to Support Student Success. Inf. Syst. Front. 2023, 25, 161–182. [Google Scholar] [CrossRef]
  6. Ahmad, S.F.; Alam, M.M.; Rahmat, M.K.; Mubarik, M.S.; Hyder, S.I. Academic and Administrative Role of Artificial Intelligence in Education. Sustainability 2022, 14, 1101. [Google Scholar] [CrossRef]
  7. United Nations. Martin The Sustainable Development Agenda. Available online: https://www.un.org/sustainabledevelopment/development-agenda (accessed on 15 January 2025).
  8. Pedro, F.; Subosa, M.; Rivas, A.; Valverde, P. Artificial Intelligence in Education: Challenges and Opportunities for Sustainable Development; United Nations Educational, Scientific and Cultural Organization: Paris, France, 2019. [Google Scholar]
  9. Henriksen, D.; Mishra, P.; Stern, R. Creative Learning for Sustainability in a World of AI: Action, Mindset, Values. Sustainability 2024, 16, 4451. [Google Scholar] [CrossRef]
  10. Mutambik, I. The Use of AI-Driven Automation to Enhance Student Learning Experiences in the KSA: An Alternative Pathway to Sustainable Education. Sustainability 2024, 16, 5970. [Google Scholar] [CrossRef]
  11. Abulibdeh, A.; Zaidan, E.; Abulibdeh, R. Navigating the Confluence of Artificial Intelligence and Education for Sustainable Development in the Era of Industry 4.0: Challenges, Opportunities, and Ethical Dimensions. J. Clean. Prod. 2024, 437, 140527. [Google Scholar] [CrossRef]
  12. Bagunaid, W.; Chilamkurti, N.; Veeraraghavan, P. Aisar: Artificial Intelligence-Based Student Assessment and Recommendation System for e-Learning in Big Data. Sustainability 2022, 14, 10551. [Google Scholar] [CrossRef]
  13. Bhullar, P.S.; Joshi, M.; Chugh, R. ChatGPT in Higher Education—A Synthesis of the Literature and a Future Research Agenda. Educ. Inf. Technol. 2024, 29, 21501–21522. [Google Scholar] [CrossRef]
  14. OpenAI. GPT-4. Available online: https://openai.com/index/gpt-4-research/ (accessed on 10 October 2024).
  15. Royer, C. Outsourcing Humanity? ChatGPT, Critical Thinking, and the Crisis in Higher Education. Stud. Philos. Educ. 2024, 43, 479–497. [Google Scholar] [CrossRef]
  16. Ibrahim, H.; Asim, R.; Zaffar, F.; Rahwan, T.; Zaki, Y. Rethinking Homework in the Age of Artificial Intelligence. IEEE Intell. Syst. 2023, 38, 24–27. [Google Scholar] [CrossRef]
  17. Dalgıç, A.; Yaşar, E.; Demir, M. ChatGPT and Learning Outcomes in Tourism Education: The Role of Digital Literacy and Individualized Learning. J. Hosp. Leis. Sport Tour. Educ. 2024, 34, 100481. [Google Scholar] [CrossRef]
  18. Ahn, J.; Lee, J.; Son, M. ChatGPT in ELT: Disruptor? Or Well-Trained Teaching Assistant? ELT J. 2024, 78, 345–355. [Google Scholar] [CrossRef]
  19. ElSayary, A. An Investigation of Teachers’ Perceptions of Using ChatGPT as a Supporting Tool for Teaching and Learning in the Digital Era. J. Comput. Assist. Lear. 2024, 40, 931–945. [Google Scholar] [CrossRef]
  20. Banihashem, S.K.; Kerman, N.T.; Noroozi, O.; Moon, J.; Drachsler, H. Feedback Sources in Essay Writing: Peer-Generated or AI-Generated Feedback? Int. J. Educ. Technol. High. Educ. 2024, 21, 23. [Google Scholar] [CrossRef]
  21. Su, Y.; Lin, Y.; Lai, C. Collaborating with ChatGPT in Argumentative Writing Classrooms. Assess. Writ. 2023, 57, 100752. [Google Scholar] [CrossRef]
  22. Almassaad, A.; Alajlan, H.; Alebaikan, R. Student Perceptions of Generative Artificial Intelligence: Investigating Utilization, Benefits, and Challenges in Higher Education. Systems 2024, 12, 385. [Google Scholar] [CrossRef]
  23. Dergaa, I.; Chamari, K.; Zmijewski, P.; Saad, H.B. From Human Writing to Artificial Intelligence Generated Text: Examining the Prospects and Potential Threats of ChatGPT in Academic Writing. Biol. Sport 2023, 40, 615–622. [Google Scholar] [CrossRef] [PubMed]
  24. Monika, M. A Study on Analyzing the Role of ChatGPT in English Acquisition among ESL Learners during English Language Classroom. Bodhi Int. J. Res. Hum. Arts Sci. 2024, 8, 75–84. [Google Scholar] [CrossRef]
  25. Korseberg, L.; Elken, M. Waiting for the Revolution: How Higher Education Institutions Initially Responded to ChatGPT. High. Educ. 2024, 1–16. [Google Scholar] [CrossRef]
  26. Rana, S. AI and GPT for Management Scholars and Practitioners: Guidelines and Implications. FIIB Bus. Rev. 2023, 12, 7–9. [Google Scholar] [CrossRef]
  27. Yu, H. The Application and Challenges of ChatGPT in Educational Transformation: New Demands for Teachers’ Roles. Heliyon 2024, 10, e24289. [Google Scholar] [CrossRef]
  28. Wang, N.; Wang, X.; Su, Y.-S. Critical Analysis of the Technological Affordances, Challenges and Future Directions of Generative AI in Education: A Systematic Review. Asia Pac. J. Educ. 2024, 44, 139–155. [Google Scholar] [CrossRef]
  29. Cox, G.M. Artificial Intelligence and the Aims of Education: Makers, Managers, or Inforgs? Stud. Philos. Educ. 2024, 43, 15–30. [Google Scholar] [CrossRef]
  30. Perkins, M.; Roe, J. Decoding Academic Integrity Policies: A Corpus Linguistics Investigation of AI and Other Technological Threats. High. Educ. Policy 2023, 37, 633–653. [Google Scholar] [CrossRef]
  31. Frewer, L.J.; Howard, C.; Shepherd, R. Understanding Public Attitudes to Technology. J. Risk Res. 1998, 1, 221–235. [Google Scholar] [CrossRef]
  32. Liu, Y.; Lyu, Z. Changes in Public Perception of ChatGPT: A Text Mining Perspective Based on Social Media. Int. J. Hum.-Comput. Interact. 2024, 1–15. [Google Scholar] [CrossRef]
  33. Chan, C.K.Y.; Hu, W. Students’ Voices on Generative AI: Perceptions, Benefits, and Challenges in Higher Education. Int. J Educ. Technol. High. Educ. 2023, 20, 43. [Google Scholar] [CrossRef]
  34. Wang, C.; Li, X.; Liang, Z.; Sheng, Y.; Zhao, Q.; Chen, S. The Roles of Social Perception and AI Anxiety in Individuals’ Attitudes Toward ChatGPT in Education. Int. J. Hum.-Comput. Interact. 2024, 1–18. [Google Scholar] [CrossRef]
  35. Demirel, S.; Kahraman-Gokalp, E.; Gündüz, U. From Optimism to Concern: Unveiling Sentiments and Perceptions Surrounding ChatGPT on Twitter. Int. J. Hum.-Comput. Interact. 2024, 1–23. [Google Scholar] [CrossRef]
  36. Duong, C.D.; Vu, T.N.; Ngo, T.V.N. Applying a Modified Technology Acceptance Model to Explain Higher Education Students’ Usage of ChatGPT: A Serial Multiple Mediation Model with Knowledge Sharing as a Moderator. Int. J. Manag. Educ. 2023, 21, 100883. [Google Scholar] [CrossRef]
  37. Budhathoki, T.; Zirar, A.; Njoya, E.T.; Timsina, A. ChatGPT Adoption and Anxiety: A Cross-Country Analysis Utilising the Unified Theory of Acceptance and Use of Technology (UTAUT). Stud. High. Educ. 2024, 49, 831–846. [Google Scholar] [CrossRef]
  38. Strzelecki, A.; Cicha, K.; Rizun, M.; Rutecka, P. Acceptance and Use of ChatGPT in the Academic Community. Educ. Inf. Technol. 2024, 29, 22943–22968. [Google Scholar] [CrossRef]
  39. Ding, L.; Li, T.; Jiang, S.; Gapud, A. Students’ Perceptions of Using ChatGPT in a Physics Class as a Virtual Tutor. Int. J. Educ. Technol. High. Educ. 2023, 20, 63. [Google Scholar] [CrossRef]
  40. Barrett, A.; Pack, A. Not Quite Eye to A.I.: Student and Teacher Perspectives on the Use of Generative Artificial Intelligence in the Writing Process. Int. J. Educ. Technol. High. Educ. 2023, 20, 59. [Google Scholar] [CrossRef]
  41. Sedlbauer, J.; Cincera, J.; Slavik, M.; Hartlova, A. Students’ Reflections on Their Experience with ChatGPT. J. Comput. Assist. Learn. 2024, 40, 1526–1534. [Google Scholar] [CrossRef]
  42. Tu, Y.-F.; Hwang, G.-J. University Students’ Conceptions of ChatGPT-Supported Learning: A Drawing and Epistemic Network Analysis. Interact. Learn. Environ. 2023, 32, 6790–6814. [Google Scholar] [CrossRef]
  43. Rejeb, A.; Rejeb, K.; Appolloni, A.; Treiblmaier, H.; Iranmanesh, M. Exploring the Impact of ChatGPT on Education: A Web Mining and Machine Learning Approach. Int. J. Manag. Educ. 2024, 22, 100932. [Google Scholar] [CrossRef]
  44. So, H.-J.; Jang, H.; Kim, M.; Choi, J. Exploring Public Perceptions of Generative AI and Education: Topic Modelling of YouTube Comments in Korea. Asia Pac. J. Educ. 2023, 44, 61–80. [Google Scholar] [CrossRef]
  45. Adeshola, I.; Adepoju, A.P. The Opportunities and Challenges of ChatGPT in Education. Interact. Learn. Environ. 2023, 32, 6159–6172. [Google Scholar] [CrossRef]
  46. Wen, Y.; Zhao, X.; Zang, Y.; Li, X. How the Crisis of Trust in Experts Occurs on Social Media in China? Multiple-Case Analysis Based on Data Mining. Hum. Soc. Sci. Commun. 2024, 11, 1093. [Google Scholar] [CrossRef]
  47. Nguyen, H.M.; Goto, D. Unmasking Academic Cheating Behavior in the Artificial Intelligence Era: Evidence from Vietnamese Undergraduates. Educ. Inf. Technol. 2024, 29, 15999–16025. [Google Scholar] [CrossRef]
  48. Qi, W.; Pan, J.; Lyu, H.; Luo, J. Excitements and Concerns in the Post-Chatgpt Era: Deciphering Public Perception of Ai through Social Media Analysis. Telemat. Inform. 2024, 92, 102158. [Google Scholar] [CrossRef]
  49. Wang, S.; Liang, Z. What Does the Public Think about Artificial Intelligence? An Investigation of Technological Frames in Different Technological Context. Gov. Inform. Q. 2024, 41, 101939. [Google Scholar] [CrossRef]
  50. Li, L.; Ma, Z.; Fan, L.; Lee, S.; Yu, H.; Hemphill, L. ChatGPT in Education: A Discourse Analysis of Worries and Concerns on Social Media. Educ. Inf. Technol. 2024, 29, 10729–10762. [Google Scholar] [CrossRef]
  51. Zou, W.; Liu, Z. Unraveling Public Conspiracy Theories Toward ChatGPT in China: A Critical Discourse Analysis of Weibo Posts. J. Broadcast. Electron. 2023, 68, 1–20. [Google Scholar] [CrossRef]
  52. Lian, Y.; Tang, H.; Xiang, M.; Dong, X. Public Attitudes and Sentiments toward ChatGPT in China: A Text Mining Analysis Based on Social Media. Technol. Soc. 2024, 76, 102442. [Google Scholar] [CrossRef]
  53. Blei, D.M.; Ng, A.Y.; Jordan, M.I. Latent Dirichlet Allocation. J. Mach. Learn. Res. 2003, 3, 993–1022. [Google Scholar] [CrossRef]
  54. Yu, J.H.; Chauhan, D. Trends in NLP for Personalized Learning: LDA and Sentiment Analysis Insights. Educ. Inf. Technol. 2024. [Google Scholar] [CrossRef]
  55. Abdelrazek, A.; Eid, Y.; Gawish, E.; Medhat, W.; Hassan, A. Topic Modeling Algorithms and Applications: A Survey. Inform. Syst. 2023, 112, 102131. [Google Scholar] [CrossRef]
  56. Stevens, K.; Kegelmeyer, P.; Andrzejewski, D.; Buttler, D. Exploring Topic Coherence over Many Models and Many Topics. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Jeju Island, Republic of Korea, 12–14 July 2012; pp. 952–961. [Google Scholar]
  57. Cao, J.; Xia, T.; Li, J.; Zhang, Y.; Tang, S. A Density-Based Method for Adaptive LDA Model Selection. Neurocomputing 2009, 72, 1775–1781. [Google Scholar] [CrossRef]
  58. Brandes, U. A Faster Algorithm for Betweenness Centrality. J. Math. Sociol. 2001, 25, 163–177. [Google Scholar] [CrossRef]
  59. Blondel, V.D.; Guillaume, J.-L.; Lambiotte, R.; Lefebvre, E. Fast Unfolding of Communities in Large Networks. J. Stat. Mech.-Theory E 2008, 2008, P10008. [Google Scholar] [CrossRef]
  60. Sievert, C.; Shirley, K. LDAvis: A Method for Visualizing and Interpreting Topics. In Proceedings of the Workshop on Interactive Language Learning, Visualization, and Interfaces, Baltimore, MD, USA, 27 June 2014; pp. 63–70. [Google Scholar]
  61. Foroughi, B.; Senali, M.G.; Iranmanesh, M.; Khanfar, A.; Ghobakhloo, M.; Annamalai, N.; Naghmeh-Abbaspour, B. Determinants of Intention to Use ChatGPT for Educational Purposes: Findings from PLS-SEM and fsQCA. Int. J. Hum.-Comput. Interact. 2023, 40, 4501–4520. [Google Scholar] [CrossRef]
  62. Rahman, M.M.; Watanobe, Y. ChatGPT for Education and Research: Opportunities, Threats, and Strategies. Appl. Sci. 2023, 13, 5783. [Google Scholar] [CrossRef]
  63. Farrokhnia, M.; Banihashem, S.K.; Noroozi, O.; Wals, A. A SWOT Analysis of ChatGPT: Implications for Educational Practice and Research. Innov. Educ. Teach. Int. 2023, 61, 460–474. [Google Scholar] [CrossRef]
  64. Cai, Q.; Lin, Y.; Yu, Z. Factors Influencing Learner Attitudes Towards ChatGPT-Assisted Language Learning in Higher Education. Int. J. Hum.-Comput. Interact. 2023, 40, 7112–7126. [Google Scholar] [CrossRef]
  65. Al-Zahrani, A.M. The Impact of Generative AI Tools on Researchers and Research: Implications for Academia in Higher Education. Innov. Educ. Teach. Int. 2023, 61, 1029–1043. [Google Scholar] [CrossRef]
  66. Kong, S.-C.; Yang, Y. A Human-Centred Learning and Teaching Framework Using Generative Artificial Intelligence for Self-Regulated Learning Development through Domain Knowledge Learning in K–12 Settings. IEEE Trans. Learn. Technol. 2024, 17, 1562–1573. [Google Scholar] [CrossRef]
  67. Holmes, W.; Miao, F. Guidance for Generative AI in Education and Research; UNESCO Publishing: Paris, France, 2023. [Google Scholar]
  68. Clegg, S.; Sarkar, S. Artificial Intelligence and Management Education: A Conceptualization of Human-Machine Interaction. Int. J. Manag. Educ. 2024, 22, 101007. [Google Scholar] [CrossRef]
  69. Suriano, R.; Plebe, A.; Acciai, A.; Fabio, R.A. Student Interaction with ChatGPT Can Promote Complex Critical Thinking Skills. Learn. Instr. 2025, 95, 102011. [Google Scholar] [CrossRef]
  70. Ruiz-Rojas, L.I.; Salvador-Ullauri, L.; Acosta-Vargas, P. Collaborative Working and Critical Thinking: Adoption of Generative Artificial Intelligence Tools in Higher Education. Sustainability 2024, 16, 5367. [Google Scholar] [CrossRef]
  71. Ye, J.-H.; Zhang, M.; Nong, W.; Wang, L.; Yang, X. The Relationship between Inert Thinking and ChatGPT Dependence: An I-PACE Model Perspective. Educ. Inf. Technol. 2024. [Google Scholar] [CrossRef]
  72. Sibilin, C.S. Education and the Epistemological Crisis in the Age of ChatGPT. Crit. Rev. 2023, 35, 414–425. [Google Scholar] [CrossRef]
  73. Zou, M.; Huang, L. The Impact of ChatGPT on L2 Writing and Expected Responses: Voice from Doctoral Students. Educ. Inf. Technol. 2024, 29, 13201–13219. [Google Scholar] [CrossRef]
  74. Wu, T.; He, S.; Liu, J.; Sun, S.; Liu, K.; Han, Q.-L.; Tang, Y. A Brief Overview of ChatGPT: The History, Status Quo and Potential Future Development. IEEE/CAA J. Autom. Sinica 2023, 10, 1122–1136. [Google Scholar] [CrossRef]
  75. Song, P.; Wang, C. Can ChatGPT Replace Scientists? Sci. Bull. 2023, 68, 2128–2131. [Google Scholar] [CrossRef]
  76. Watson, S.; Romic, J. ChatGPT and the Entangled Evolution of Society, Education, and Technology: A Systems Theory Perspective. Eur. Educ. Res. J. 2024, 1–20. [Google Scholar] [CrossRef]
  77. Mannuru, N.R.; Shahriar, S.; Teel, Z.A.; Wang, T.; Lund, B.D.; Tijani, S.; Pohboon, C.O.; Agbaji, D.; Alhassan, J.; Galley, J.; et al. Artificial Intelligence in Developing Countries: The Impact of Generative Artificial Intelligence (AI) Technologies for Development. Inform. Dev. 2023, 39, 1–19. [Google Scholar] [CrossRef]
  78. Lin, Z. Why and How to Embrace AI Such as ChatGPT in Your Academic Life. R. Soc. Open Sci. 2023, 10, 230658. [Google Scholar] [CrossRef]
  79. Kasneci, E.; Seßler, K.; Küchemann, S.; Bannert, M.; Dementieva, D.; Fischer, F.; Gasser, U.; Groh, G.; Günnemann, S.; Hüllermeier, E. ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education. Learn. Individ. Differ. 2023, 103, 102274. [Google Scholar] [CrossRef]
  80. Wheatley, A.; Hervieux, S. Comparing Generative Artificial Intelligence Tools to Voice Assistants Using Reference Interactions. J. Acad. Librar. 2024, 50, 102942. [Google Scholar] [CrossRef]
  81. Hui, X.; Reshef, O.; Zhou, L. The Short-Term Effects of Generative Artificial Intelligence on Employment: Evidence from an Online Labor Market. Organ. Sci. 2024, 35, 1977–1989. [Google Scholar] [CrossRef]
  82. Stracqualursi, L.; Agati, P. Twitter Users Perceptions of AI-Based e-Learning Technologies. Sci. Rep. 2024, 14, 5927. [Google Scholar] [CrossRef] [PubMed]
  83. GOV.UK. Generative Artificial Intelligence (AI) in Education. Available online: https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education/generative-artificial-intelligence-ai-in-education (accessed on 5 November 2024).
  84. Luo, J. A Critical Review of GenAI Policies in Higher Education Assessment: A Call to Reconsider the “Originality” of Students’ Work. Assess. Eval. High. Educ. 2024, 49, 651–664. [Google Scholar] [CrossRef]
  85. Kung, T.H.; Cheatham, M.; Medenilla, A.; Sillos, C.; De Leon, L.; Elepaño, C.; Madriaga, M.; Aggabao, R.; Diaz-Candido, G.; Maningo, J. Performance of ChatGPT on USMLE: Potential for AI-Assisted Medical Education Using Large Language Models. PLoS Digit. Health 2023, 2, e0000198. [Google Scholar] [CrossRef] [PubMed]
  86. Katz, D.M.; Bommarito, M.J.; Gao, S.; Arredondo, P. GPT-4 Passes the Bar Exam. Philos. Trans. R. Soc. A 2024, 382, 20230254. [Google Scholar] [CrossRef]
  87. Bhardwaj, R.G.; Bedi, H.S. ChatGPT as an Education and Learning Tool for Engineering, Technology and General Studies: Performance Analysis of ChatGPT 3.0 on CSE, GATE and JEE Examinations of India. Interact. Learn. Environ. 2024. [Google Scholar] [CrossRef]
  88. Lim, W.M.; Gunasekara, A.; Pallant, J.L.; Pallant, J.I.; Pechenkina, E. Generative AI and the Future of Education: Ragnarök or Reformation? A Paradoxical Perspective from Management Educators. Int. J. Manag. Educ. 2023, 21, 100790. [Google Scholar] [CrossRef]
  89. Dobber, M.; Zwart, R.; Tanis, M.; van Oers, B. Literature Review: The Role of the Teacher in Inquiry-Based Education. Educ. Res. Rev. 2017, 22, 194–214. [Google Scholar] [CrossRef]
  90. Tang, K.-S.; Cooper, G. The Role of Materiality in an Era of Generative Artificial Intelligence. Sci. Educ. 2024. [Google Scholar] [CrossRef]
  91. Kartal, G. The Influence of ChatGPT on Thinking Skills and Creativity of EFL Student Teachers: A Narrative Inquiry. J. Educ. Teach. 2024, 50, 627–642. [Google Scholar] [CrossRef]
  92. Zhai, X.; Nyaaba, M.; Ma, W. Can Generative AI and ChatGPT Outperform Humans on Cognitive-Demanding Problem-Solving Tasks in Science? Sci. Educ. 2024. [Google Scholar] [CrossRef]
  93. Bada, S.O.; Olusegun, S. Constructivism Learning Theory: A Paradigm for Teaching and Learning. Int. J. Res. Method Educ. 2015, 5, 66–70. [Google Scholar] [CrossRef]
  94. Lo, C.K.; Hew, K.F.; Jong, M.S. The Influence of ChatGPT on Student Engagement: A Systematic Review and Future Research Agenda. Comput. Educ. 2024, 219, 105100. [Google Scholar] [CrossRef]
  95. Wang, F.; King, R.B.; Chai, C.S.; Zhou, Y. University Students’ Intentions to Learn Artificial Intelligence: The Roles of Supportive Environments and Expectancy–Value Beliefs. Int. J. Educ. Technol. High Educ. 2023, 20, 51. [Google Scholar] [CrossRef]
  96. Zhong, T.; Zhu, G.; Hou, C.; Wang, Y.; Fan, X. The Influences of ChatGPT on Undergraduate Students’ Demonstrated and Perceived Interdisciplinary Learning. Educ. Inf. Technol. 2024, 29, 23577–23603. [Google Scholar] [CrossRef]
  97. Cotton, D.R.E.; Cotton, P.A.; Shipway, J.R. Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT. Innov. Educ. Teach. Int. 2023, 62, 228–239. [Google Scholar] [CrossRef]
  98. Lodge, J.M.; Thompson, K.; Corrin, L. Mapping out a Research Agenda for Generative Artificial Intelligence in Tertiary Education. Australas. J. Educ. Technol. 2023, 39, 1–8. [Google Scholar] [CrossRef]
  99. Newton, P.; Xiromeriti, M. ChatGPT Performance on Multiple Choice Question Examinations in Higher Education. A Pragmatic Scoping Review. Assess. Eval. High. Educ. 2023, 49, 781–798. [Google Scholar] [CrossRef]
  100. Bahroun, Z.; Anane, C.; Ahmed, V.; Zacca, A. Transforming Education: A Comprehensive Review of Generative Artificial Intelligence in Educational Settings through Bibliometric and Content Analysis. Sustainability 2023, 15, 12983. [Google Scholar] [CrossRef]
  101. Popkov, A.A.; Barrett, T.S. AI vs Academia: Experimental Study on AI Text Detectors’ Accuracy in Behavioral Health Academic Writing. Account. Res. 2024, 1–17. [Google Scholar] [CrossRef] [PubMed]
  102. Luo, J. How Does GenAI Affect Trust in Teacher-Student Relationships? Insights from Students’ Assessment Experiences. Teach. High Educ. 2024, 1–16. [Google Scholar] [CrossRef]
Figure 1. The score of topic coherence.
Figure 2. The research framework of this study.
Figure 3. The results of topic visualization.
Figure 4. The results of the text mining visualization for Topic 1: (a) is the feature words and their weights for Topic 1, (b) reflects the trends over time in the number of documents that belong to Topic 1, (c) is the semantic network community under Topic 1, and (d) shows the evolutionary trends of the top five feature words ranked by betweenness centrality within each community of Topic 1.
Figure 5. The results of the text mining visualization for Topic 2: (a) is the feature words and their weights for Topic 2, (b) reflects the trends over time in the number of documents that belong to Topic 2, (c) is the semantic network community under Topic 2, and (d) shows the evolutionary trends of the top five feature words ranked by betweenness centrality within each community of Topic 2.
Figure 6. The results of the text mining visualization for Topic 3: (a) is the feature words and their weights for Topic 3, (b) reflects the trends over time in the number of documents that belong to Topic 3, (c) is the semantic network community under Topic 3, and (d) shows the evolutionary trends of the top five feature words ranked by betweenness centrality within each community of Topic 3.
Figure 7. The results of the text mining visualization for Topic 4: (a) is the feature words and their weights for Topic 4, (b) reflects the trends over time in the number of documents that belong to Topic 4, (c) is the semantic network community under Topic 4, and (d) shows the evolutionary trends of the top five feature words ranked by betweenness centrality within each community of Topic 4.
Figure 8. The results of the text mining visualization for Topic 5: (a) is the feature words and their weights for Topic 5, (b) reflects the trends over time in the number of documents that belong to Topic 5, (c) is the semantic network community under Topic 5, and (d) shows the evolutionary trends of the top five feature words ranked by betweenness centrality within each community of Topic 5.