Article

Determinants of ChatGPT Adoption Intention in Higher Education: Expanding on TAM with the Mediating Roles of Trust and Risk

by Stefanos Balaskas 1,*, Vassilios Tsiantos 1, Sevaste Chatzifotiou 2 and Maria Rigou 3

1 Department of Physics, School of Sciences, Democritus University of Thrace, Kavala Campus, 65404 Kavala, Greece
2 Department of Social Work, School of Social, Political and Economic Sciences, Democritus University of Thrace, 69100 Komotini, Greece
3 Department of Management Science and Technology, University of Patras, 26334 Patras, Greece
* Author to whom correspondence should be addressed.
Information 2025, 16(2), 82; https://doi.org/10.3390/info16020082
Submission received: 18 December 2024 / Revised: 16 January 2025 / Accepted: 20 January 2025 / Published: 22 January 2025
(This article belongs to the Special Issue Generative AI Technologies: Shaping the Future of Higher Education)

Abstract:
Generative AI, particularly tools like ChatGPT, is reshaping higher education by enhancing academic engagement, streamlining processes, and fostering innovation. This study investigates the determinants of ChatGPT adoption intentions (CGPTAIs) by extending the Technology Acceptance Model (TAM) to include the mediating roles of perceived trust (PT) and perceived risk (PR). Using a quantitative cross-sectional design, the data from 435 participants were analyzed using structural equation modeling (SEM) to explore the relationships among the perceived ease of use (PE), perceived intelligence (PI), perceived usefulness (PUSE), PT, and PR. The findings reveal that the perceived ease of use (PE) and perceived intelligence (PI) significantly drive adoption intentions, while perceived usefulness (PUSE) plays a limited role. PR fully mediates the relationship between PUSE and CGPTAI and partially mediates PE and PI, while PT fully mediates PUSE and partially mediates PE, but not PI. Multi-group analysis highlights demographic differences, such as age and prior AI experience, in adoption pathways. These results challenge traditional TAM assumptions, advancing the model to account for the interplay of usability, intelligence, trust, and risk. Practical insights are provided for fostering ethical and responsible ChatGPT integration, safeguarding academic integrity, and promoting equitable access in higher education.

Graphical Abstract

1. Introduction

Artificial intelligence (AI) is the intelligence of machines or software and forms one of the most dynamic fields of computer science [1,2]. The term describes suites of algorithms designed to solve one or more problems using computer systems that emulate human-like cognitive capabilities. AI is also described as the capability of computers or machines to perform tasks that require human intelligence. Its growing prominence is attributed to its transformational potential in many spheres of life, including education [1,2,3]. Today, AI applications are widely used to address some of the complex challenges facing scientific disciplines.
AI can be envisioned as an advanced computational system with added capabilities for adaptability and sensor integration to replicate human cognitive and functional capabilities. A recent subset of AI, Generative AI (GenAI), is an emerging technology that creates content such as text and images by learning from data [4,5]. While traditional systems retrieve information for users, GenAI concentrates on the generation of synthetic but realistic data. This unique ability significantly influences many fields. However, while GenAI can generate innovative and often impressive outputs, it is limited in its capacity to create novel ideas or solutions to real-world problems due to a lack of contextual understanding and social awareness. Furthermore, there are concerns about the reliability of GenAI outputs, as even the developers of ChatGPT admit that its seemingly logical responses could be flawed [6,7,8].
In education, AI applications have extended from traditional computing systems to integrated platforms that improve learning outcomes. AI now plays an increasingly vital role in customizing teaching and offering personalized feedback to both instructors and students [9,10,11]. Generative AI tools, such as ChatGPT, are changing educational landscapes by proposing new ways of teaching and learning. ChatGPT simplifies the creation of quizzes, assignments, and other interactive materials that differentiate instruction to meet particular needs and learning styles. Instant feedback from ChatGPT, delivered right when students need clarification, creates a more interactive learning atmosphere and enhances learners' motivation and comprehension. It also frees up time spent on administrative work, such as grading, scheduling, and writing letters to students, allowing more time for productive teaching activities and curriculum design [11,12,13].
While the aforementioned advantages are real, incorporating ChatGPT into higher education poses challenges [9,10]. Its misuse as a tool for completing assignments or essays raises questions about plagiarism and about how it could affect the development of students' critical thinking and problem-solving skills. Its use can also introduce biases or inaccuracies into AI-generated content, potentially misleading students or reinforcing inequalities [2,14,15]. Excessive use of ChatGPT may further reduce the likelihood that students engage deeply with complex issues, relying instead on summaries or solutions presented by AI. Integrating ChatGPT into higher education therefore requires balancing the potential enhancement of learning opportunities with safeguards to uphold academic standards [3,4,16].
A thoughtful integration of ChatGPT into pedagogical strategy lets higher education realize its potential benefits for creative, critical, analytical, and personalized learning while reducing the risks associated with its misuse [3,5,17]. Such a balance lets AI support human ingenuity and makes the learning process more enriching for students and educators alike. Because AI-powered technologies such as ChatGPT are increasingly changing the face of teaching and learning, understanding user adoption behavior has become increasingly important [18,19]. Generative AI, a subfield of artificial intelligence, grants unparalleled creation capabilities, enabling educators and students to work with technology in ways that previously could not have been envisioned. A variety of empirical studies on the adoption of different technologies, using different theoretical frameworks and methodologies, have been conducted over the years [1,2,4]. Among these, the Technology Acceptance Model (TAM) emerged as a foundational framework for understanding technology adoption [18,19,20,21,22,23]. Based on the Theory of Reasoned Action, TAM investigates users' intentions to adopt technology based on its perceived ease of use and usefulness [24,25]. While TAM originally focused on an individual's cognitive evaluation of the technology, its application now increasingly embraces other variables such as trust and perceived risk. The reasoning is understandable in view of current artificial intelligence applications, which carry significant moral and practical risks [5,16,17]. These developments underline the relevance of TAM in the study of ChatGPT adoption by providing a systematic way to examine how users evaluate and engage with generative AI tools.
Despite the hype surrounding generative AI tools such as ChatGPT, previous studies examining their adoption in higher education contexts have tended to rely on a TAM perspective focused on perceptions of ease of use and usefulness [22,23,24,25]. Such studies, however, tend to neglect the important mediating roles of trust and risk in shaping users' intentions to adopt AI technologies [18,19,20,21,22,23]. Moreover, few studies investigate how these factors interact across different educational settings, such as discipline, level, or cultural attitudes toward AI adoption. The ethical considerations and possible misuses of AI tools are recognized in the broader discussion but remain underinvestigated empirically. This leaves a significant gap in the literature regarding how these factors interact to influence the adoption of ChatGPT and other generative AI tools in education. The present study contributes to the literature by extending TAM with trust and risk as mediating variables, providing an in-depth analysis of the factors that determine ChatGPT adoption in higher education [26,27,28]. It also contributes to the discussion of practical implications by providing insights for educational institutions on how to balance the benefits of generative AI tools with safeguards against compromising academic integrity. Finally, the findings contribute to theoretical development by extending TAM to account for the unique challenges and opportunities that generative AI creates, providing a holistic framework for understanding technology adoption in modern educational settings.
This study identifies the perceived ease of use and perceived intelligence as the key drivers of ChatGPT adoption intention (CGPTAI), while perceived trust (PT) and perceived risk (PR) act as mediators. The direct effects indicate that the perceived ease of use (PE), perceived intelligence (PI), perceived trust, and perceived risk significantly drive CGPTAI. While PE and PI represent usability and system intelligence, respectively, PT addresses privacy and ethical concerns. PR reflects users' trade-offs, indicating that despite risks, users emphasize benefits such as efficiency and support. Interestingly, perceived usefulness (PUSE) did not significantly predict CGPTAI, underlining the primacy of trust, risk, and intelligence in the adoption decision. PR fully mediates the relationship between PUSE and CGPTAI and partially mediates PI and PE. PT fully mediates PUSE and partially mediates PE, while it does not mediate PI. These results indicate the unique roles PR and PT play in the adoption paths. Multi-group analysis demonstrates demographic differences: trust and the ease of use explain variance among younger users, whereas risk explains variance among older users, implying the need for strategies targeted at varied end-user adoption behaviors.
The article proceeds as follows: Section 2 reviews the relevant literature on the factors that influence the adoption of ChatGPT in higher education, focusing particularly on constructs emanating from the Technology Acceptance Model and the mediating roles of trust and risk. Section 3 outlines the conceptual framework and describes the research model developed and tested in this study. Section 4 describes the methodology, covering data collection, sampling, and analytical techniques, while Section 5 presents the results and discusses the most important findings. Section 6 continues with actionable recommendations for both educators and policymakers. Section 7 concludes and outlines the limitations of this study and ways in which future research could improve upon it.

2. Literature Review

2.1. AI and ChatGPT Adoption in Education

Artificial intelligence tools such as ChatGPT are beginning to revolutionize education, thus attracting much research interest in the realms of adoption and usage. Many scholars have studied various determinants of behavioral intention and actual use, including the perceived ease of use, usefulness, trust, and risk [29,30,31,32]. These studies also look at the theoretical underpinning and practical implications of integrating ChatGPT into educational contexts. This review synthesizes the key findings from recent literature, including contributions, gaps, and challenges in the understanding of ChatGPT adoption in higher education [31,32,33,34].
The reviewed studies focus on generative AI tools, namely ChatGPT, whose adoption in the higher education context is investigated along the lines of behavioral intention and usage determinants. Stanislav Ivanov et al. [8] use the TPB, investigating the perceived strengths, weaknesses, and potential risks of AI tools for lecturers and students in terms of attitudes, subjective norms, and perceived behavioral control. Their study identifies positive links between perceived strengths and adoption intention and underlines differences in the risk perceptions of user groups. Similarly, Akhmad Habibi et al. [35], through the framework of UTAUT2, found that facilitating conditions strongly predict behavioral intention, which eventually determines ChatGPT use among Indonesian students, while effort expectancy was found to be insignificant. Expanding on adoption predictors, Bernard Yaw Sekyi Acquah et al. [36] emphasize the role of social influence, performance expectancy, and effort expectancy as key factors in pre-service teachers' AI adoption for lesson planning, revealing hedonic motivation as insignificant.
Ahnaf Chowdhury Niloy et al. [31] applied a triangulated approach to investigate the factors that affect ChatGPT adoption among students, finding strong positive relations between six variables and behavioral intention. Their study also brought out a new factor that enriched the theoretical framework. Wen-Ling Hsu et al. [37] combined UTAUT and PMT to deal with the paradox of benefits and risks associated with ChatGPT in Taiwanese higher education. The findings show that perceived threats decrease intention, while coping mechanisms in the form of self-efficacy significantly enhance usage. On the other hand, Benicio Gonzalo Acosta-Enriquez et al. [38] investigated attitudes toward ChatGPT with Mitcham's philosophical framework and found that cognitive and affective components drive behavioral attitudes, whereas demographic factors such as gender and age are insignificant moderators.
The studies reviewed have several strengths, such as strong theoretical underpinnings and methodological rigor. The use of TPB by [8] and the integration of UTAUT and PMT by [37] offer comprehensive frameworks for understanding the adoption of ChatGPT, while the IPMA used by [35] gives practical value in identifying the most impactful predictors. These studies cumulatively emphasize behavioral intention as a crucial determinant of ChatGPT adoption, with facilitating conditions [35], social influence [36], and coping mechanisms [37] playing vital roles. However, variations exist in the significance of effort expectancy (e.g., between [35,36]), while hedonic motivation is insignificant in [36]. Ethical issues, such as academic integrity, are explicitly mentioned only by [31], hence its unique contribution to the challenges of adopting ChatGPT. Ethical considerations, while briefly addressed by Ahnaf Chowdhury Niloy et al. [31], deserve much more attention across the studies, given that AI tools such as ChatGPT raise critical concerns related to privacy and academic integrity. The triangulated approach used by Ahnaf Chowdhury Niloy et al. [31] enriches the findings through a blend of qualitative and quantitative insights, making it methodologically innovative. A weakness, however, is that cultural nuances have not been explored in depth, especially in studies like [36,39], which could have benefited from a deeper analysis of how sociocultural factors influence adoption. Some studies, like that of Benicio Gonzalo Acosta-Enriquez et al. [38], also underutilize possible moderators, such as demographic variability, even where their insignificance in the examined context is acknowledged. The integration of frameworks such as UTAUT, TPB, and PMT across studies strengthens their theoretical underpinnings, although gaps remain in explaining cultural nuances and long-term adoption outcomes. Whereas Wen-Ling Hsu et al. [37] and Stanislav Ivanov et al. [8] discuss implications for policy, Benicio Gonzalo Acosta-Enriquez et al. [38] present results on the psychological mechanisms leading to adoption. These studies contribute substantially to the literature by identifying adoption determinants, validating theoretical models, and putting forward actionable insights.
This research, therefore, contributes to the literature by exploring how trust and risk mediate the intention to adopt ChatGPT in higher education. It integrates an enriched analysis of awareness and the ease of use, which have remained underexplored in prior studies. In addition to focusing on an academic context, the paper addresses gaps concerning how perceived risk and trust interact with other determinants in shaping adoption behaviors, while providing recommendations for fostering ethical and effective ChatGPT integration [31,32,33,34]. These contributions advance both the theoretical and practical understanding of AI adoption and offer a framework adaptable to diverse educational systems. Taken together, our findings aim to provide a robust foundation for understanding ChatGPT adoption, offering practical and theoretical guidance for policymakers and educators while highlighting areas for future research.

2.2. Technology Adoption Models in Higher Education

The adoption of ChatGPT has been widely explored in educational settings using various versions of the Technology Acceptance Model and the Unified Theory of Acceptance and Use of Technology (UTAUT2), where the core investigated constructs are behavioral intention and usage behavior [8,29,30,31,32,35]. In one such study, Hayder Albayati [40] combined TAM with external constructs such as privacy, security, social influence, and trust and found that these factors significantly affected user acceptance of ChatGPT among undergraduate students. The findings also bring out the importance of trust and security in fostering adoption, with actionable recommendations for developers and educators on how to design user-friendly and secure systems.
Samsudeen Sabraz Nawaz et al. [41] applied UTAUT2 to analyze the adoption of ChatGPT by students in Sri Lanka, finding that the main determinants of behavioral intention were habit, performance expectancy, and hedonic motivation. In Nepal, Surya Bahadur G. C. et al. [42] also found habit and learning value to be strong predictors but noted the lack of significant influence of effort expectancy and facilitating conditions, introducing information accuracy as a new moderator. Both studies emphasize that individual behavior and contextual variables are critical to adoption. Artur Strzelecki [39] further validates UTAUT2 by emphasizing habit and performance expectancy as critical drivers of behavioral intention among Polish students, with behavioral intention strongly predicting use behavior. A study by Abu Elnasr E. Sobaih et al. [33] in Saudi Arabia reveals similar trends but notes cultural and infrastructural barriers, with facilitating conditions negatively affecting behavioral intention due to limited resources and institutional support. These findings are supported by Francisco David Guillén-Gámez et al.'s [34] analysis, which identifies system quality, credibility, and satisfaction as strong drivers of adoption but also acknowledges concerns about academic dishonesty due to inadequate plagiarism policies and a lack of academic skills. Usani Joseph Ofem et al. [43] and Michael Eppler et al. [44] turn their attention to ethical considerations, studying positive perceptions of ChatGPT being used for deception in Nigeria and the ethical concerns raised by urologists worldwide, respectively. Usani Joseph Ofem et al. [43] note significant age- and gender-based disparities in usage patterns, while Michael Eppler et al. [44] discuss guidelines to ensure safe and effective implementation, especially in academic and clinical settings. Similarly, the introduction of personal innovativeness as a moderating factor by Samsudeen Sabraz Nawaz et al. [41] shows a considerable role for individual differences in adoption.
Collectively, these studies suggest the robustness of both TAM and UTAUT2 in explaining ChatGPT adoption and awareness, while revealing variations in determinant significance across different cultural and institutional contexts. Habit, performance expectancy, and behavioral intention consistently emerge as key predictors, but the roles of facilitating conditions and effort expectancy vary [31,32,33,34]. Ethical concerns are raised regarding misuse and academic cheating, with various studies calling for regulatory mechanisms. Limitations relate to underexplored longitudinal effects and the inconsistent integration of moderating factors such as information accuracy and personal innovativeness [9,10,11].
In light of these insights, the present research utilizes the Technology Acceptance Model, extending it to incorporate trust and risk as mediators, to investigate ChatGPT adoption in higher education. TAM was adopted because its focus on individual behavioral intentions through the perceived ease of use and usefulness fits directly with the study's variables, whereas broader frameworks like UTAUT yield less targeted findings. While trust and risk are central yet understudied in TAM-based research, their inclusion is imperative for understanding the nuanced psychological and contextual factors that influence adoption. This study integrates these mediators into the research model, addressing theoretical gaps and developing practical insights for institutions to implement ChatGPT effectively and ethically, ensuring user confidence while mitigating barriers, as we expand upon in the next sections.

2.2.1. ChatGPT Adoption and Awareness: The Case of the TAM Model

The adoption of generative AI tools such as ChatGPT has been extensively researched in higher education using several versions of the Technology Acceptance Model (TAM) [21,22,26,28,30]. Saeed Awadh Bin-Nashwan et al. [19] examined motives for the use of ChatGPT and reported time-saving, self-efficacy, and perceived stress as drivers, whereas peer influence and academic integrity negatively impacted usage. These findings underline the ethical dimensions of integrating AI and the necessity for strict guidelines to control academic integrity issues. Abdulla Al Darayseh [22] applies the TAM to explore the factors that affect AI adoption in science education, with the ease of use, self-efficacy, and expected benefits strongly predicting behavioral intention, while anxiety and stress were insignificant.
Mark Anthony Camilleri [30] extends the TAM to investigate the role of trustworthiness and interactivity as fundamental predictors of actual use and user engagement with AI-powered chatbots. The author identifies performance expectancy as an important determinant of habitual use but criticizes limitations of chatbots, such as misinformation and social biases, informing ways in which system quality may be improved. Tarek Shal et al. [20] extend the TAM to study leadership styles, relating transformational leadership to openness to AI in academic libraries. Their study delves into how these leadership approaches affect attitudes and the ease of adoption, a perspective distinct from other studies that concentrated on individual behavioral factors alone. Rania A.M. Abdalla [26] investigated the TAM for the adoption of ChatGPT among students, adding personalization as a moderator. The results indicated that both the perceived ease of use and perceived usefulness are significant predictors of the intention to adopt, though personalization moderates the former but not the latter. On the other hand, Chandan Kumar Tiwari et al. [18] found that social presence, enjoyment, and legitimacy strongly influence attitudes toward ChatGPT, and that the perceived ease of use does not have a significant effect on behavioral intention, in line with Rania A.M. Abdalla's [26] results regarding its moderated role. Chengming Zhang et al.'s [23] investigation of pre-service teachers in Germany points to perceived usefulness and ease of use as leading predictors of AI adoption, in line with the roots of TAM; their particular contribution is the introduction of influences such as AI anxiety and enjoyment as gender-specific effects. Behzad Foroughi et al. [27], in turn, extend the TAM with the inclusion of IS success factors, reporting that system quality positively impacts the perceived ease of use and usefulness. Interestingly, trust mediates these relationships negatively, suggesting that over-reliance on information trustworthiness could hinder adoption. Nisar Ahmed Dahri et al. [21] investigate the use of ChatGPT as a tool for metacognitive learning, with the findings indicating that self-regulated learning, trust, and personal competency play important roles in determining the acceptance of ChatGPT. Their mixed-methods approach enhances the insight into ChatGPT adoption for educational purposes, underlining its utility in improving teaching processes. Finally, Jesús J. Cambra-Fierro et al. [28] integrate faculty wellbeing into the TAM, showing that the use of ChatGPT increases happiness and reduces stress among educators, evidence of broader implications of AI adoption beyond academic performance.
Collectively, the studies above confirm the applicability of the TAM to the assessment of AI adoption in the higher education sector, where the perceived ease of use, usefulness, and self-efficacy emerge as important predictors [40,41,42,43]. However, the results vary as to which of the factors (trust, personalization, or social influence) plays the more imperative role. Saeed Awadh Bin-Nashwan et al. [19] and Ahnaf Chowdhury Niloy et al. [31] underline ethical concerns; Chengming Zhang et al. [23] and Jesús J. Cambra-Fierro et al. [28] underline the relevance of contextual factors like gender and well-being. Critical gaps include the very limited exploration of long-term adoption implications, cultural variability, and the integration of broader psychosocial factors. On this basis, the current study extends the application of TAM by adding trust and risk as mediating variables to investigate ChatGPT adoption intention in higher education institutions. This research aims to help fill this gap by investigating these mediators, along with constructs like awareness, the ease of use, and usefulness, to explain how trust and risk shape academic adoption behaviors. Alongside these studies, our work provides critical information for understanding AI adoption and offers practical and theoretical insights while continuing to refine the TAM's application to the evolving dynamics of AI in education, with the aim of encouraging a valid and effective integration of ChatGPT.

2.2.2. Expanding on the TAM: The Mediating Roles of Risk and Trust

The mediating roles of risk and trust are important in understanding the adoption of ChatGPT in higher education, as several recent studies have shown [6,7,45,46,47,48,49]. Abeer S. Almogren et al. [50] explore the factors that shape behavioral intention and the actual adoption of ChatGPT for smart education, with a focus on the perceived ease of use, usefulness, feedback quality, and social norms. Trust was expected to relate to usefulness, but this hypothesis was not supported, indicating the complex nature of the relationships among these constructs. Preeti Bhaskar et al. [51] confirm the importance of trust and usefulness and identify perceived risk as a negative moderator between trust, usefulness, and adoption intention for Indian educators. These results have important implications for the duality of trust as an enhancer and risk as a reducer of adoption behaviors.
On the other hand, Md Al Amin et al.'s [52] research incorporates trust into the UTAUT model and verifies its role as a mediator between performance expectancy, social influence, and behavioral intention in Bangladeshi settings. Although effort expectancy was insignificant, facilitating conditions and perceived knowledge did have a positive impact on adoption. Greeni Maheshwari [49], however, presents a different argument, showing that perceived usefulness indirectly influences the intention to adopt through personalization and interactivity, while trust and perceived intelligence play insignificant roles. These findings support existing frameworks by emphasizing contextual determinants, such as the importance of personalization in Vietnam's educational context. These studies also point to cross-cultural differences in the role of mediators and adoption factors: for example, Chung Yee Lai et al. [46] identify trust as the strongest predictor of behavioral intention among Hong Kong students, with moral obligation and perceived risk serving as inhibitors, while Greeni Maheshwari [49] finds a reduced mediating influence of trust. These findings are supported by the work of Ghadah Al Murshidi et al. [47] in the UAE, which demonstrates that risk awareness and behavioral intention relate positively; hence, awareness of risks may lead to informed adoption rather than deterrence. Cumulatively, these studies contribute to the theoretical application of TAM and its extensions through a more sensitive analysis of how risk and trust mediate adoption behaviors. Long-run changes in these perceptions, however, remain an important gap, as does the question of how such changes influence continued adoption. Further, inconsistencies in the importance of effort expectancy and underexplored interactions between cultural context and mediating variables indicate other ways in which this research might be extended [48,51,52].
Drawing from this work, the present study extends TAM by investigating trust and risk as mediators in the adoption of ChatGPT, narrowed to the setting of higher education. The paper aims to fill gaps in understanding the psychological and contextual drivers of adoption by integrating these constructs alongside behavioral intention and actual use. This allows institutions to balance perceived risks with trust-enhancing strategies so that ChatGPT can be integrated into educational frameworks ethically and effectively. The paper's contributions may refine extant models while also suggesting how these models can usefully underpin practical solutions in various contexts. The present study attempts to determine the critical psychological, perceptual, and contextual predictors of ChatGPT adoption in higher education, adding to the growing body of knowledge on AI acceptance. It is embedded within constructs from the Technology Acceptance Model (TAM) and adds perceived trust and perceived risk to present a complete framework for understanding the factors affecting the intention to adopt. Unlike previous studies, most of which focus exclusively on the core constructs of the TAM, this study widens the model by addressing how trust and risk interrelate in the context of generative AI's educational utility. It also highlights the practical significance of the effects that the perceived ease of use and perceived usefulness exert, through trust and risk, on users. In sum, the paper contributes to the literature through an integrative model that balances the facilitators and inhibitors of ChatGPT adoption. Thus, the following hypotheses were formulated:
H1. 
The perceived ease of use (PE) directly influences ChatGPT adoption intention (CGPTAI) in higher education.
H2. 
Perceived intelligence (PI) directly influences ChatGPT adoption intention (CGPTAI) in higher education.
H3. 
Perceived usefulness (PUSE) directly influences ChatGPT adoption intention (CGPTAI) in higher education.
H4a. 
Perceived risk (PR) directly influences ChatGPT adoption intention (CGPTAI) in higher education.
H4b. 
Perceived trust (PT) directly influences ChatGPT adoption intention (CGPTAI) in higher education.
H5a. 
Perceived trust (PT) mediates the relationship between perceived usefulness (PUSE) and ChatGPT adoption intention (CGPTAI) in higher education.
H5b. 
Perceived risk (PR) mediates the relationship between perceived usefulness (PUSE) and ChatGPT adoption intention (CGPTAI) in higher education.
H6a. 
Perceived trust (PT) mediates the relationship between the perceived ease of use (PE) and ChatGPT adoption intention (CGPTAI) in higher education.
H6b. 
Perceived risk (PR) mediates the relationship between the perceived ease of use (PE) and ChatGPT adoption intention (CGPTAI) in higher education.
H7a. 
Perceived trust (PT) mediates the relationship between perceived intelligence (PI) and ChatGPT adoption intention (CGPTAI) in higher education.
H7b. 
Perceived risk (PR) mediates the relationship between perceived intelligence (PI) and ChatGPT adoption intention (CGPTAI) in higher education.

3. Research Methodology

3.1. Conceptual Model and Rationale

This study explores the adoption of ChatGPT in higher education, incorporating key constructs: the perceived ease of use, perceived usefulness, perceived intelligence, perceived trust, and perceived risk. The study leverages the TAM and its theoretical extensions to explain how these interrelated variables shape students' adoption intentions. This framework addresses significant gaps in the literature by offering a granular understanding of the psychological and contextual drivers of the adoption of generative AI in educational settings, while contributing to the burgeoning body of research on technology acceptance and use in higher education.
This study is anchored in the TAM, a widely recognized framework for understanding technology adoption behaviors, with extensions to incorporate trust and risk constructs that are particularly significant in the context of ChatGPT. The perceived ease of use and perceived usefulness are core TAM constructs that consistently demonstrate their influence on users' attitudes and behavioral intentions toward new technologies [53,54,55]. In this study, the perceived ease of use describes the degree to which users perceive ChatGPT as user-friendly, while perceived usefulness refers to the degree to which the use of ChatGPT enhances the performance of academic tasks such as writing, summarizing, or generating ideas. Both are hypothesized to influence adoption intentions positively, based on the prior literature on educational technologies [53,54,56]. Building on the TAM, perceived trust is proposed as an important determinant of generative AI adoption. Trust in ChatGPT refers to the confidence users have in the accuracy, reliability, and ethical use of data by the tool. Previous research has shown that trust has a positive effect on technology adoption by reducing user concerns and increasing perceived value [50,51,52,53,57]. Given the prominence of ethical concerns with AI tools in academia, such as plagiarism and data privacy, trust plays a significant role in shaping behavioral intentions to use AI. On the other hand, perceived risk is included to capture possible barriers to adoption. Perceived risk involves concerns over misinformation, ethical misuse, and dependency on AI in academic contexts [45,46,51]. Several studies have found that perceived risk negatively mediates the relationship between perceived usefulness and adoption intentions, since higher risks may dampen users' willingness to engage with the technology [47,51,58]. This study's incorporation of both trust and risk provides a balanced view of the facilitators and inhibitors of ChatGPT adoption.
Consequently, integrating trust and risk in the TAM framework offers key opportunities to fill gaps in understanding individuals', and more specifically students', adoption of complex technologies in the current setting of generative AI use in education. While the TAM sets a firm foundation for technology adoption research, the trust and risk constructs reflect the unique obstacles and opportunities posed by AI technologies like ChatGPT [6,49,50]. This is all the more pertinent given the dual perception of ChatGPT as a valuable academic tool that can nonetheless be used in ways that fundamentally undermine academic integrity [34,36,37]. By investigating the interactions of these constructs, the research not only advances theoretical understanding but also draws practical implications for educational institutions, policymakers, and AI developers. For example, factors that increase trust can be leveraged to address perceived risks while integrating ChatGPT into effective and ethical educational practices. This conceptual model extends the TAM framework by incorporating key variables reflecting the unique dynamics of AI in higher education, providing a rich exploration of the psychological, contextual, and ethical determinants of ChatGPT adoption, an underexplored gap in the existing literature.
The proposed model is illustrated in Figure 1, which delineates the hypothesized relationships between constructs and offers a visual representation of the study’s conceptual framework.
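For readers who prefer an algebraic statement of Figure 1, the hypothesized relations can be written as a system of linear structural equations among the standardized latent scores. This is a sketch of the PLS-SEM inner model implied by H1–H7b; the coefficient symbols are ours, not the article's original notation:

$$
\begin{aligned}
\mathrm{PT} &= \gamma_{1}\,\mathrm{PE} + \gamma_{2}\,\mathrm{PUSE} + \gamma_{3}\,\mathrm{PI} + \zeta_{1}\\
\mathrm{PR} &= \delta_{1}\,\mathrm{PE} + \delta_{2}\,\mathrm{PUSE} + \delta_{3}\,\mathrm{PI} + \zeta_{2}\\
\mathrm{CGPTAI} &= \beta_{1}\,\mathrm{PE} + \beta_{2}\,\mathrm{PI} + \beta_{3}\,\mathrm{PUSE} + \beta_{4}\,\mathrm{PR} + \beta_{5}\,\mathrm{PT} + \zeta_{3}
\end{aligned}
$$

Here, H1–H4b correspond to the direct coefficients $\beta_{1}$–$\beta_{5}$, while the mediation hypotheses H5a–H7b concern products of coefficients (e.g., H6a tests the indirect effect $\gamma_{1}\beta_{5}$ of PE on CGPTAI through PT).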

3.2. Data Collection and Sampling

The current research adopts a quantitative cross-sectional study design, suitable for investigating the associations between the variables expected to influence ChatGPT adoption in higher education, namely the perceived ease of use, perceived usefulness, perceived trust, and perceived risk [59,60,61]. A cross-sectional approach was preferred so as to collect data at a single point in time, providing a comprehensive snapshot of the factors affecting intentions to adopt ChatGPT without the need for longitudinal follow-up. This design fits the purpose and objectives of this study, namely exploring adoption behaviors and their determinants within an educational context [62,63,64]. A stratified random sampling strategy was adopted to ensure appropriate representation across key subgroups, including educational levels, academic disciplines, and prior awareness of AI tools [65,66]. Stratification allows for balanced representation and facilitates meaningful comparisons across subgroups [65,66]. Complementary to this, snowball sampling was employed to access underrepresented or hard-to-reach participants, such as postgraduate students and participants from less prominent faculties [67,68]. Although snowball sampling is a non-probability technique, it was used here strategically to improve diversity and capture the broadest possible range of perspectives [68,69]. This hybrid approach yields a robust and inclusive dataset that is apt for studying the adoption of generative AI tools in diverse educational settings. Data were obtained from a structured online questionnaire based on constructs from the TAM and other related frameworks. The questionnaire was hosted on publicly accessible platforms, including Google Forms, and shared within institutional mailing lists, through social media networks, and through professional contacts. This method allowed for broad demographic coverage and access to a substantial population of university students and faculty members. The survey ran for three months, thereby maximizing response rates.
Given the exploratory nature of this study, a self-report instrument was chosen to systematically investigate the interrelationships of these determinants. The questionnaire was developed and adapted from previously validated scales that best fit the context of this study and comprised a total of 23 items (see Appendix A, Table A1). The instrument consisted of two sections: demographic information and scale-based measures of the key constructs. Each construct was measured through scales already validated and then adapted to the ChatGPT educational context. The items were rated on a five-point Likert scale, with responses ranging from 1 = strongly disagree to 5 = strongly agree. To ensure clarity, reliability, and cultural relevance, a pilot test was conducted with a small sample prior to full deployment. Based on the participants' feedback, minor revisions were made to fine-tune the questions.
With 23 items in the measurement model, the target sample size was estimated at 300 participants based on the SEM guidelines of Llewellyn E. Van Zyl et al. [70] and Ralf Wagner et al. [71]. The general guideline in SEM for robust model evaluation is at least 10 respondents per parameter estimate [71]. This target provides adequate power to detect meaningful relationships and evaluate model fit while not being overly restrictive given the exploratory nature of this study [72,73,74]. The achieved sample of 435 participants is therefore sufficient for SEM estimation and subgroup analyses. Diversity within the sample was attained through stratified random sampling and snowball methods to support the generalization of the findings to the higher education context.
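As a quick arithmetic check of the cited rule of thumb (at least 10 respondents per parameter estimate [71]), the 23-item instrument implies

$$N_{\min} = 10 \times 23 = 230,$$

so both the target of 300 and the achieved sample of 435 comfortably exceed this minimum.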

3.3. Measurement Scales

To measure the constructs in this study, we adapted validated scales to ensure reliability and contextual relevance. The perceived ease of use (PE) was assessed using a 6-item scale adapted from Muhammad Farrukh Shahzad et al. [7], capturing the simplicity and usability of ChatGPT. Perceived usefulness (PUSE) was evaluated using a 4-item scale focusing on ChatGPT’s role in enhancing learning quality and providing immediate information access [7]. Perceived intelligence (PI) was measured with a 3-item scale assessing ChatGPT’s ability to deliver teacher-like responses [7]. Perceived trust (PT) employed a 4-item scale examining data security and confidentiality [7]. Perceived risk (PR) was captured using a 3-item scale adapted from Chung Yee Lai et al. [46], covering concerns about plagiarism, privacy, and potential negative outcomes of ChatGPT use. ChatGPT adoption intention (CGPTAI) was measured with a 3-item scale combining elements from [29,46], emphasizing future use intentions and perceived long-term benefits.
All items in the questionnaire were designed to be clear and easy to interpret. Although some items, such as those in the perceived risk construct, carried negative connotations, no items were reverse-scored, so as to avoid confusing participants and affecting data quality. Pilot testing confirmed the clarity and reliability of the items.

3.4. Sample Profile

The sample consisted of 435 participants with a balanced gender distribution (48.7% female and 51.3% male). Most participants fell in the age group of 18–25 years (49.4%), followed by 26–30 years (30.1%), and 31–40 years (20.5%). Educationally, the majority of the participants held a Master’s degree (52.9%), Bachelor’s degree holders comprised 36.1%, while smaller proportions were PhD candidates (5.3%) and doctoral degree holders (5.7%). Prior experience with AI tools was moderate for 34.3%, minimal for 28.0%, extensive for 22.1%, and none for 15.6%. Familiarity with ChatGPT was mixed: not familiar at all (24.8%), not very familiar (32.6%), somewhat familiar (22.3%), and very familiar (20.2%). As for the primary use of ChatGPT in academia, the results showed that it is used daily for academic purposes by 33.3%, followed by monthly for 22.3%, rarely for 20.2%, weekly for 6.2%, and never for 17.9%. The main purpose for using ChatGPT was research assistance at 33.3%, followed by learning new concepts at 22.3%, and problem-solving at 20.2%. This diverse sample provides a solid foundation for examining ChatGPT adoption in higher education (Table 1).

4. Data Analysis and Results

The analysis in this study was performed in Smart-PLS4, version 4.1.0.0, which utilizes structural equation modeling (SEM) as a key methodological approach. SEM is recognized for its efficacy in variance-based analysis, especially within management and social science studies, as noted by Christian Nitzl et al. [75]. PLS-SEM was chosen because it can analyze causal models aimed at maximizing explained variance in the dependent latent constructs [76,77]. Furthermore, multi-group analysis (MGA) allowed for the testing of differences across subgroups, thereby enabling the detection of variations in relationships across diverse contexts that traditional regression methods do not usually address [78,79,80]. The analytical procedure followed the guidelines of Ken Kwong-Kay Wong [81] to accurately estimate beta coefficients, standard errors, and reliability metrics. The criteria for the reflective measurement model required indicators to show appropriate associations with their respective latent constructs, with outer loadings greater than 0.7 considered acceptable.

4.1. Common Method Bias

In order to check the validity and reliability of the results, a systematic assessment of common method bias (CMB) was performed following the methodological framework of Philip M. Podsakoff et al. [82]. In particular, Harman's single-factor test was used to check whether a single factor accounted for the majority of the variance in the model. Results from the unrotated principal factor analysis showed that the general factor explained 35.531% of the total variance, far below the critical threshold of 50%. Addressing CMB, though not a concern in this study, lends validity to the relationships among variables and enhances confidence in the findings by mitigating potential biases [82,83].
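For illustration, Harman's single-factor test follows a simple recipe: extract a single unrotated factor from all scale items and check whether it explains more than 50% of the total variance. Below is a minimal Python sketch of this generic procedure, approximating the factor with the first principal component; the random data are a placeholder for the study's actual 23-item responses, not the real dataset:

```python
# Harman's single-factor test: extract one unrotated component from all
# survey items and inspect the share of total variance it explains.
# A share below 50% suggests common method bias is not a dominant concern.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
items = rng.normal(size=(435, 23))  # placeholder for the 23 Likert items

# Standardize items so each contributes equally to the total variance.
z = (items - items.mean(axis=0)) / items.std(axis=0)

share = PCA(n_components=1).fit(z).explained_variance_ratio_[0]
print(f"First factor explains {share:.1%} of total variance")
# The study reports 35.531%, below the 50% threshold [82].
```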

4.2. Measurement Model

The first step in implementing partial least squares structural equation modeling is the critical assessment of the measurement model, in which constructs are represented by reflective indicators. The assessments recommended by Joe F. Hair et al. [84], namely composite reliability, indicator reliability, convergent validity, and discriminant validity, are therefore of great relevance. Indicator reliability, according to Wynne W. Chin [76], is a basic measure of the amount of variance an indicator shares with its underlying construct and is determined mainly by the magnitude of its outer loading. As Wynne W. Chin [76] and Ken Kwong-Kay Wong [81] note, these loadings should be greater than 0.70. However, as V. Esposito Vinzi et al. [85] argue, outer loadings below 0.70 are often found in social studies. Low-loading items should be removed, but such decisions have to be weighed against their impact on composite reliability and convergent validity so as not to exclude indicators prematurely. According to Joseph F. Hair et al. [86], indicators with loadings between 0.40 and 0.70 should only be removed when their removal considerably increases the composite reliability or AVE of the construct. Following the suggestions of David Gefen et al. [87], three indicators (PE6, PUSE4, and PT4) were deleted because their factor loadings were below 0.500 after the optimization of the measurement model, as shown in Table 2.
In this study, reliability was assessed with Cronbach's alpha, rho_A, and composite reliability for the first-order constructs. CGPTAI, PE, PI, PR, PT, and PUSE all showed reliability above the minimum threshold of 0.700, reflecting moderate to high reliability, which corroborates the results of Molly McLure Wasko et al. [88] and is further supported by references [89,90,91,92]. The rho_A statistic, conceptually positioned between Cronbach's alpha and composite reliability, was in most cases above the 0.7 threshold given by F. Joseph et al. [90], thus meeting the reliability criteria described by Jörg Henseler et al. [93]. Convergent validity was adequate since the AVE for most constructs exceeded the recommended 0.50 threshold of Claes Fornell et al. [77]. Further, Claes Fornell et al. [77] note that convergent validity is acceptable even if the AVE is below 0.50 as long as composite reliability is greater than 0.60. Discriminant validity was evaluated by comparing the inter-construct correlations with the square root of the AVE, following Claes Fornell et al. [77], and by using the heterotrait–monotrait ratio (HTMT) proposed by Jörg Henseler et al. [94]. The strict threshold of 0.85 was never surpassed, as seen in Table 3 and Table 4; hence, discriminant validity was established.
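The reliability and validity statistics referenced above follow standard formulas. For a reflective construct with $K$ standardized outer loadings $\lambda_i$, composite reliability and average variance extracted are computed as [77,90]:

$$
\mathrm{CR} = \frac{\left(\sum_{i=1}^{K}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{K}\lambda_i\right)^{2} + \sum_{i=1}^{K}\bigl(1-\lambda_i^{2}\bigr)},
\qquad
\mathrm{AVE} = \frac{1}{K}\sum_{i=1}^{K}\lambda_i^{2}.
$$

The Fornell–Larcker criterion then requires $\sqrt{\mathrm{AVE}}$ of each construct to exceed its correlations with every other construct, while the HTMT ratio (the mean heterotrait–heteromethod correlation relative to the geometric mean of the monotrait–heteromethod correlations) should remain below the 0.85 threshold [94].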

4.3. Structural Model

The R2 and Q2 values were evaluated to test the structural model of the proposed research framework, in addition to assessing the significance of the path coefficients [95]. R2 was 0.490 for ChatGPT adoption intention, 0.167 for perceived risk, and 0.247 for perceived trust, all within the expected range between zero and one. Q2 demonstrated moderate to high predictive relevance for the model, with values of 0.385 for ChatGPT adoption intention, 0.149 for perceived risk, and 0.234 for perceived trust. The model was further supported through hypothesis testing, which verified the significance of the relationships among the constructs. The path coefficients were examined using the bootstrapping method, as suggested by [90]. The mediation analysis followed the approach of Kristopher J. Preacher et al. [96] and the bias-corrected, one-tailed bootstrap approach of Sandra Streukens et al. [97], with 10,000 bootstrap samples.
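To make the bootstrap procedure concrete, the following minimal Python sketch estimates an indirect effect a·b and a percentile confidence interval from 10,000 case resamples, in the spirit of Preacher and Hayes [96]. It is illustrative only: ordinary least squares stands in for the PLS-SEM estimator, the variable names are hypothetical, and the study itself used a bias-corrected, one-tailed variant [97].

```python
# Percentile-bootstrap test of an indirect effect (X -> M -> Y).
import numpy as np

rng = np.random.default_rng(42)
n = 435
x = rng.normal(size=n)                       # e.g., perceived ease of use
m = 0.4 * x + rng.normal(size=n)             # e.g., perceived trust (mediator)
y = 0.3 * m + 0.25 * x + rng.normal(size=n)  # e.g., adoption intention

def indirect_effect(x, m, y):
    """a*b: slope of M on X times the partial slope of Y on M given X."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]
    return a * b

boot = np.empty(10_000)
for i in range(boot.size):
    idx = rng.integers(0, n, size=n)          # resample cases with replacement
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
# A confidence interval excluding zero indicates a significant indirect effect.
```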
The results of the structural model analysis are presented in Table 5. The direct effects analysis revealed that the perceived ease of use (PE) significantly predicted ChatGPT adoption intention (CGPTAI), β = 0.272, SD = 0.049, t = 5.505, p < 0.001, supporting Hypothesis 1. Similarly, perceived intelligence (PI) demonstrated a significant positive relationship with CGPTAI, β = 0.239, SD = 0.045, t = 5.278, p < 0.001, providing support for Hypothesis 2. However, perceived usefulness (PUSE) did not exhibit a significant effect on CGPTAI, β = 0.008, SD = 0.042, t = 0.181, p = 0.428, leading to the rejection of Hypothesis 3. Both perceived risk (PR) and perceived trust (PT) significantly influenced CGPTAI. Specifically, PR was positively associated with CGPTAI, β = 0.206, SD = 0.045, t = 4.532, p < 0.001, supporting Hypothesis 4a. Similarly, PT had a significant positive impact on CGPTAI, β = 0.204, SD = 0.044, t = 4.674, p < 0.001, confirming Hypothesis 4b. These findings highlight the critical role of the perceived ease of use, intelligence, trust, and risk in influencing the adoption of ChatGPT in higher education, while perceived usefulness did not emerge as a significant predictor.

4.3.1. Mediation Analysis

The mediation analysis was conducted to assess whether perceived trust (PT) and perceived risk (PR) mediate the relationships between the key predictors, namely the perceived ease of use (PE), perceived usefulness (PUSE), and perceived intelligence (PI), and ChatGPT adoption intention (CGPTAI). For PUSE, the indirect effect via PT was significant (β = 0.044, SD = 0.013, t = 3.342, p < 0.001), supporting H5a. The indirect effect via PR was also significant (β = 0.042, SD = 0.013, t = 3.141, p = 0.001), supporting H5b. As the direct effect of PUSE on CGPTAI was not significant (β = 0.008, SD = 0.042, t = 0.181, p = 0.428), this indicates full mediation by both PT and PR in the relationship between PUSE and CGPTAI. For PE, the indirect effect via PT was significant (β = 0.068, SD = 0.019, t = 3.671, p < 0.001), supporting H6a. Similarly, the indirect effect via PR was significant (β = 0.038, SD = 0.014, t = 2.684, p = 0.004), supporting H6b. As the direct effect of PE on CGPTAI was significant (β = 0.272, SD = 0.049, t = 5.505, p < 0.001), this demonstrates partial mediation by both PT and PR. For PI, the indirect effect via PT was not significant (β = 0.009, SD = 0.011, t = 0.795, p = 0.213), indicating no mediation through PT (H7a not supported). However, the indirect effect via PR was significant (β = 0.025, SD = 0.015, t = 1.728, p = 0.042), supporting H7b. Alongside the significant direct effect of PI on CGPTAI (β = 0.239, SD = 0.045, t = 5.278, p < 0.001), this indicates partial mediation by PR. These results indicate the distinct but complementary roles of perceived trust and perceived risk in ChatGPT adoption in higher education. Although perceived trust fully mediated the path from perceived usefulness to CGPTAI, it played a less influential role in the other paths. Perceived risk, meanwhile, emerged as a consistent partial mediator, underlining its primacy in shaping adoption intentions. These findings indicate that the perception of the risks associated with ChatGPT usage is a significant factor in promoting its effective and responsible integration into academic contexts. The results are summarized in Table 6.

4.3.2. Multi-Group Analysis (MGA)

We performed multi-group analysis to investigate which relationships differ across groups defined by gender, age, familiarity with ChatGPT, and the frequency of ChatGPT use for academic purposes. The MGA for gender revealed a significant difference in the path from PT to CGPTAI (Δβ = −0.150, p = 0.048, two-tailed), indicating that gender influences the strength of this relationship. Specifically, the influence of PT on CGPTAI appears stronger for male than for female participants. Further, there is a marginally significant difference in the path from PR to CGPTAI (Δβ = 0.127, p = 0.080, two-tailed), indicating potential gender-based variability in the extent to which risk perceptions influence adoption intention. For the remaining paths, no significant gender differences were found at the 0.05 level (two-tailed).
The MGA revealed several significant group differences across age categories. The relationship between the perceived ease of use (PE) and perceived trust (PT) showed a significant difference between the 18–25 and 26–30 age groups (Δβ = 0.269, p = 0.012, two-tailed) and between the 26–30 and 31–40 age groups (Δβ = −0.227, p = 0.042, two-tailed), indicating that age influences how PE impacts PT. Similarly, the relationship between perceived usefulness (PUSE) and ChatGPT adoption intention (CGPTAI) differed significantly between the 18–25 and 26–30 age groups (Δβ = 0.208, p = 0.021, two-tailed) and between the 26–30 and 31–40 age groups (Δβ = −0.247, p = 0.028, two-tailed). Additionally, the relationship between perceived intelligence (PI) and perceived risk (PR) exhibited significant differences between the 18–25 and 31–40 age groups (Δβ = 0.325, p = 0.024, two-tailed) and between the 26–30 and 31–40 age groups (Δβ = 0.313, p = 0.034, two-tailed). Moreover, the path between perceived intelligence (PI) and perceived trust (PT) was significantly different between the 18–25 and 31–40 age groups (Δβ = 0.306, p = 0.020, two-tailed) and between the 26–30 and 31–40 age groups (Δβ = 0.386, p = 0.007, two-tailed). The other paths did not exhibit significant differences across age groups (p > 0.05). These findings indicate that trust, risk, and usefulness are perceived differently by younger and older users, suggesting that strategies addressing age-specific perceptions should be developed to promote ChatGPT adoption in educational settings.
The MGA for familiarity with ChatGPT revealed a significant difference in the relationship between the perceived ease of use (PE) and perceived risk (PR) (Δβ = 0.229, p = 0.010, two-tailed), indicating that familiarity moderates this relationship, while the remaining paths show no significant differences across familiarity groups (p > 0.05). This finding suggests that efforts to address risk perceptions should be targeted according to the user's familiarity with ChatGPT. Users with low familiarity may need comprehensive guidelines and support to reduce perceived risks and increase adoption. The insignificance of the remaining paths implies that familiarity with ChatGPT does not strongly influence the effect of the other constructs on adoption intention.
The MGA for the frequency of ChatGPT use revealed a marginally significant difference in the relationship between PR and CGPTAI (Δβ = −0.128, p = 0.098, two-tailed). This suggests that usage frequency may have a minor influence on how risk perceptions affect adoption intention, with the dampening effect of risk on adoption appearing weaker for high-frequency users.
The multi-group analysis based on previous experience with AI tools revealed significant moderation effects for two paths. The influence of PE on PR differed significantly between the high- and low-experience groups (Δβ = 0.262, p = 0.006), implying that highly experienced users perceive lower risk stemming from ease of use than less experienced ones. Moreover, the path from PR to CGPTAI also showed a significant group difference (Δβ = −0.175, p = 0.031); the negative relation of PR to CGPTAI is thus weaker for experienced users. Marginal differences were observed for the paths from PE to CGPTAI (Δβ = 0.131, p = 0.094) and from PI to PT (Δβ = 0.141, p = 0.097), indicating that experience might slightly influence these relationships. Other paths, including those through PUSE, PT, and PI, were not significantly moderated by AI experience (p > 0.05). These findings underscore the role of prior experience in reducing perceived risks and strengthening confidence in AI adoption, highlighting the potential value of targeted training or familiarization programs for less experienced users to enhance their comfort and adoption intentions. The significant differences are depicted in Table 7.
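For readers wishing to replicate group comparisons of this kind, the sketch below implements a simple parametric test of the difference between two group-specific path coefficients, in the spirit of approaches commonly used alongside PLS-MGA. The coefficients and standard errors are illustrative placeholders (only the group sizes are taken from Table 1), so the output does not reproduce the study's estimates.

```python
# Minimal sketch: parametric test of a path-coefficient difference across two
# groups. Betas and SEs below are placeholders, not the study's estimates.
from math import sqrt
from scipy import stats

def path_difference_test(b1, se1, n1, b2, se2, n2):
    """Two-tailed test of H0: beta_group1 == beta_group2 (unpooled SEs)."""
    diff = b1 - b2
    se_diff = sqrt(se1 ** 2 + se2 ** 2)   # unpooled standard error
    t = diff / se_diff
    df = n1 + n2 - 2                      # simple df choice; Welch df is an alternative
    p = 2 * (1 - stats.t.cdf(abs(t), df))
    return diff, t, p

# e.g., PT -> CGPTAI for female (n = 212) vs. male (n = 223) respondents
diff, t, p = path_difference_test(0.12, 0.06, 212, 0.27, 0.06, 223)
print(f"delta beta = {diff:.3f}, t = {t:.2f}, p = {p:.3f}")
```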

5. Discussion

This research affirms the existing literature on technology adoption, with meaningful direct relations between the perceived ease of use (PE), perceived intelligence (PI), perceived trust (PT), and perceived risk (PR) and ChatGPT adoption intention (CGPTAI). The strong positive relationship between PE and CGPTAI signifies that usability plays a pivotal role in determining adoption behavior (H1). This finding confirms TAM's fundamental assertion that perceived ease of use favors technology acceptance by lowering barriers to engagement [37,98,99]. The participants noted that the ease of use and intuitive interface make ChatGPT easy to integrate into their academic work, especially for students and educators with complicated workflows. Similarly, the positive relationship between PI and CGPTAI underlines the relevance of perceived cognitive and functional sophistication in influencing users' adoption intentions (H2). Individuals who perceive ChatGPT as accurate, knowledgeable, and insightful are more inclined to adopt the technology, underscoring the practical relevance of its perceived capability to meet academic requirements. This result highlights the importance of system intelligence in fostering confidence and engagement with AI tools. These findings reflect several themes from the previous literature suggesting that perceptions of ChatGPT's sophistication can balance out concerns about its limitations, including misinformation [18,39,100].
The positive direct relationship between PT and CGPTAI further underlines this point: building trust increases adoption (H4b). Trust reduces uncertainties related to data privacy, security, and ethical use. The participants emphasized trust as an important factor in their decision-making in light of potential academic issues with plagiarism and other misuses of AI-generated content [27,51,52]. Meanwhile, the strong positive relationship between PR and CGPTAI suggests that users are likely to adopt ChatGPT despite perceived risks, such as penalties for plagiarism or dependence on AI-generated content (H4a). Most participants justified adoption by weighing the perceived benefits, such as time efficiency and enhanced problem-solving capabilities, against potential drawbacks. Participants frequently noted that ChatGPT's support in simplifying complicated tasks outweighed concerns about misuse, showing a calculated trade-off between risks and benefits. This decision-making process shows risks being rationalized in relation to immediate utility [45,48,49]. For students and educators, the immediate utility of ChatGPT in simplifying complex tasks and providing fast, actionable insights may justify its adoption despite the associated risks. This finding again supports a decision-making process in which risks are weighed against the practical gains of AI use. Across both the trust–risk and benefit–risk dimensions, the general theme that emerged in this study is the role of ethical concerns and confidence in AI systems in successful integration [30,36,57].
Surprisingly, PUSE did not significantly predict CGPTAI, which contrasts with the traditional emphasis of the TAM on utility as an important determinant of adoption (H3). The responses indicate that although participants acknowledge the usefulness of ChatGPT, emotional and cognitive perceptions such as trust and risk are more decisive in shaping their intentions to adopt it. These findings contribute to theoretical discussions on the evolving dynamics of AI adoption, where utility alone may not be sufficient without addressing users' underlying concerns [21,26,32].
The mediation analysis provided several insights into the mechanisms underlying ChatGPT adoption in higher education, emphasizing the complementary but distinct roles of PR and PT (H5a–H7b). Perceived risk stood out as a critical mediator, fully mediating the relationship between PUSE and CGPTAI (H5b). This indicates that the intention to use ChatGPT depends on assuaging concerns about privacy, misinformation, and unethical use; participants who held such concerns would not adopt the tool despite its usefulness [48,51,101]. These findings are supported by previous studies indicating that user concerns strongly influence technology adoption and emphasize proactive strategies to mitigate such apprehensions [6,48,49,50]. PR also partially mediated the relationship between PI and CGPTAI (H7b), showing that, whereas PI exerts a direct influence on adoption intentions, its influence partially depends on users' risk perceptions. This presents a dual challenge for educational institutions, which must emphasize the intelligent capabilities of ChatGPT while addressing its possible inaccuracies or misuse. The partial mediation by PR in these relationships confirms it as a core driver of adoption pathways, as users weigh benefits against risks [46,47,51].
PT, on the other hand, showed more selective mediation effects. PT partially mediated the relationship between PE and CGPTAI (H6a) and fully mediated the relationship between PUSE and CGPTAI (H5a). Our participants indicated that user-friendly design builds trust, which in turn enhances adoption intention. This result points to the importance of intuitive interfaces and transparent functionality in building confidence in ChatGPT [52,53,101]. However, PT did not mediate the relationship involving PI (H7a), suggesting that trust alone is insufficient to translate perceptions of intelligence into adoption. The role of trust therefore appears to be situation-specific, operating mainly where ease of use is at stake. Thus, trust and perceived risk together mediate the transition from users' recognition of ChatGPT's utility to actual adoption intentions and the eventual uptake of the technology [27,46,48,51,101].
The nuanced interplay between the PR and PT mediators underlines the complementary nature of their contributions to adoption pathways: PR captures users' cautious evaluation of potential risks, while PT reflects confidence in the reliability and security of the tool. These mediators fill critical gaps in the conventional TAM by adding psychological and contextual dimensions relevant to the generative AI domain [1,2,5,15]. These findings yield actionable implications for both educational institutions and developers. For effective adoption, strategies should focus on reducing perceived risks through stringent data security measures, guidelines, and transparency, while building trust through user-centered design and dependable performance [27,46,53]. By attending to both mediators, institutions will be well positioned to foster the responsible and successful integration of ChatGPT in academia in a manner that aligns adoption with user needs and ethical considerations.
The MGA results revealed significant demographic and contextual differences in the adoption process. Gender differences showed that the influence of PT on CGPTAI was stronger for male users than for female users, suggesting that gender-specific trust-building strategies may be necessary. Age-based differences were pronounced: for younger participants (18–25), the relationships among PE, PT, and CGPTAI were significantly stronger, reflecting their emphasis on usability and trust, whereas the 26–30 and 31–40 age groups showed greater sensitivity to PR, calling for age-appropriate communication of risk and utility. Familiarity with ChatGPT moderated the relationship between PE and PR, as unfamiliar users reported higher perceived risk. The findings from participants with limited prior experience, who expressed concerns about possible misuse or errors, confirm the need for targeted, continuous training and support programs to build confidence and reduce risks [43]. Prior experience with AI tools likewise shaped adoption pathways: it moderated the PR–CGPTAI relationship, with experienced users perceiving fewer risks and deterrents, and the PE–PR relationship, where ease of use led to lower risk perceptions among experienced users [46,47,51]. This resonates with the literature on how self-efficacy and exposure reduce uncertainty and enhance technology acceptance. To build confidence among inexperienced users, the development of structured familiarization programs, simulated AI environments, and interactive training modules is recommended. These findings align with prior research demonstrating that experience plays an important role in shaping perceptions of technology and eventual adoption behavior [3,14].
The findings contribute to active debates on the ethical integration of AI in education and underscore how usability, trust, and risk combine to influence adoption intentions. The nonsignificant influence of PUSE contradicts conventional TAM assumptions, suggesting that utility becomes a lower-order concern when issues of ethics and academic integrity are at stake [51,52]. This interpretation aligns with the recent literature on the evolving dynamics of technology acceptance in high-stakes environments. The strong mediation effects of PR and PT underline the need to address user apprehensions and engender trust, consistent with wider debates on the responsible use of generative AI [17,27,48,51]. Moreover, by enriching the TAM with the constructs of perceived trust and perceived risk, we extend the model to better represent user concerns in generative AI adoption. The roles of trust and risk in technology acceptance introduce psychological complexities where ethical integrity and usability meet. The MGA further reveals demographic and contextual variations underpinning adoption behavior and individual differences, insights that may be useful for policymakers and educators, as discussed in the next section.

6. Practical Implications

These findings provide insight into both teachers' and learners' experiences and the ethics of adopting and integrating ChatGPT into higher education. For educators, the results underscore ChatGPT's transformative potential for teaching, as it enables individual, custom-tailored, and evolving learning opportunities for all participating students. With ChatGPT, educators can create interactive teaching materials, provide immediate feedback, and perform administrative tasks like grading and content creation more efficiently. These opportunities also allow educators to focus on developing critical thinking, creativity, and deep engagement with academic content [2,4]. Training programs designed for educators can ensure that they become thoroughly capable of using ChatGPT while recognizing its limitations and ethical considerations, helping them avoid over-reliance on it and instances of academic dishonesty.
The findings also indicate the need to raise awareness among students about the responsible use of AI tools like ChatGPT. Institutions can provide policies and workshops that educate students on using ChatGPT as a tool to support learning rather than a replacement for critical thinking and original work. Used in this manner, ChatGPT enriches students' academic experience through efficient access to resources, problem-solving, and information retrieval. In any case, students must develop a balanced approach so that they do not sacrifice their intellectual and analytical development for the convenience of AI tools [5,17].
The results further highlight the importance of promoting academic integrity and ethical conduct in the use of ChatGPT. Higher education institutions should set policies on AI usage and codes of conduct so that students and educators alike are well informed about which practices are acceptable. Such a move not only upholds academic integrity but also cultivates the responsible use of emergent generative AI technologies, building trust and transparency within the academic community. By attending to distinct educator needs and student responsibilities, this study adds to our knowledge of how ChatGPT can be used within higher education to enable innovation in teaching and learning while protecting academic integrity [50,51,52]. Finally, broader ethical issues, such as bias in AI responses and inclusivity, should also be considered alongside the institutional challenges of resource allocation and equal access. Attending to such differences would ensure that the transformative potential of ChatGPT serves diverse global contexts while supporting long-term academic and intellectual growth.

7. Conclusions, Limitations and Future Directions

AI is revolutionizing the educational landscape, with innovative tools such as ChatGPT enhancing learning experiences, streamlining teaching processes, and facilitating academic growth. This study identifies the perceived ease of use and perceived intelligence as significant drivers of ChatGPT adoption intentions, with perceived trust and perceived risk acting as mediators. PR fully mediates the relationship between perceived usefulness (PUSE) and CGPTAI and partially mediates the relationships of PI and PE with CGPTAI. PT partially mediates the relationship of PE with CGPTAI and fully mediates that between PUSE and CGPTAI, while it does not mediate the PI–CGPTAI relationship. These findings highlight the distinct and complementary roles that PR and PT play in shaping adoption pathways.
This study is not without limitations. The cross-sectional design precludes tracking changes over time, suggesting that future longitudinal studies could provide a dynamic understanding of how trust, risk, and usage intentions evolve [62,63]. Although our study tested and accounted for common method bias, self-reported data may still be subject to biases such as social desirability or recall bias, which future research could reduce by including behavioral data or experimental designs [2,83]. Moreover, although the study expanded the TAM to include mediators such as trust and risk, it did not account for other relevant factors, such as ethical concerns, institutional support, or technological infrastructure. Larger samples and further subgroup analyses could reveal discipline-specific or demographic variation in adoption patterns, while qualitative approaches may give deeper insights into user experiences [5,17].
This study focuses on ChatGPT; although valuable, this focus limits the generalization of the results to other LLMs. Cross-tool comparisons, for example, comparative studies on Bard and Claude, can help build an understanding of LLM adoption behavior in higher education. Moreover, investigations into how such tools work across diverse academic disciplines or cultural contexts could offer a richer picture of their adoptability and their broader impacts on teaching and learning. Future research should also explore how adoption interacts with differing levels of digital literacy, since highly digitally literate respondents could report a different set of adoption pathways from those with only basic skills. Most importantly, extending this research into other realms of education, such as K-12, professional training, or the long-term influences on learning outcomes and academic integrity, would substantially enhance our understanding of the role of generative AI in education. Social, ethical, and socio-economic aspects, including equitable access to AI tools and the implications for a range of student populations, are only briefly touched upon here and require further investigation [2,5,17]. Further research should be directed toward comparative studies across diverse cultural, institutional, or educational contexts to give a wider perspective on adoption patterns and highlight context-specific factors.
Looking ahead, as AI continues to reshape educational paradigms, institutions will have to balance innovation with ethical integrity to harness its transformative potential. By fostering trust, mitigating risks, and aligning approaches with diverse users' needs, generative AI tools such as ChatGPT can become important enablers of academic success, critical thinking, and equitable learning. Ensuring that AI integration supports long-term educational goals and upholds academic integrity will be key to sustained impact in the future of education.

Author Contributions

Conceptualization, S.B. and V.T.; methodology, S.B., V.T. and S.C.; validation, S.B.; formal analysis, S.B.; data curation, S.B.; writing—original draft preparation, S.B.; writing—review and editing, S.B.; visualization, S.B.; supervision, V.T., S.C. and M.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Research Ethics Committee (REC) of the University of Patras (application no. 14045, date of approval 26 August 2022). The committee reviewed the research protocol and concluded that it did not contravene the applicable legislation and complied with the standard acceptable rules of ethics in research and of research integrity as to the content and mode of conduct of this research.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Measurements used for data analysis.
Perceived Ease of Use (PE); source: Shahzad et al. [7]
PE1: ChatGPT is user-friendly and easy to adapt to.
PE2: For my studies, accessing ChatGPT is straightforward.
PE3: ChatGPT is easy to use and understand.
PE4: Acquiring study-related information via ChatGPT is simple.
PE5: Using ChatGPT simplifies completing tasks and finding answers to questions.
PE6: I feel that the skills required to use ChatGPT are basic. (deleted)
Perceived Usefulness (PUSE); source: Shahzad et al. [7]
PU1: I believe that using ChatGPT improves my learning experience.
PU2: ChatGPT meets my questions and expectations with effective answers.
PU3: ChatGPT helps me increase the quality and effectiveness of my learning.
PU4: ChatGPT supports me in all of my academic work. (deleted)
Perceived Intelligence (PI); source: Shahzad et al. [7]
PI1: ChatGPT can teach and provide sensible answers.
PI2: I believe ChatGPT is intelligent, similar to a teacher in a classroom.
PI3: ChatGPT is knowledgeable enough to answer my questions accurately.
Perceived Trust (PT); source: Shahzad et al. [7]
PT1: I trust that all activities I perform on ChatGPT will be confidential and secure.
PT2: I feel that ChatGPT would maintain the privacy of my personal data.
PT3: I believe that ChatGPT will prevent unauthorized access to my personal information.
PT4: I believe using ChatGPT for interaction is sufficiently secure. (deleted)
Perceived Risk (PR); source: Lai et al. [46]
PR1: I could receive a grade penalty for plagiarism if I use ChatGPT to complete assessments.
PR2: If I use ChatGPT to complete assessments, I would likely be caught.
PR3: I consider the negative consequences when I use ChatGPT.
ChatGPT Adoption Intention (CGPTAI); sources: Lai et al. [46] and Shahzad et al. [7]
CGPTAI1: If permitted by my university, I intend to use ChatGPT for my studies and exams in the future.
CGPTAI2: I plan to continue using ChatGPT to get answers to my study-related questions.
CGPTAI3: I feel that I will continue to use ChatGPT for academic purposes moving forward.

References

  1. Zhang, X.; Zhang, P.; Shen, Y.; Liu, M.; Wang, Q.; Gašević, D.; Fan, Y. A Systematic Literature Review of Empirical Research on Applying Generative Artificial Intelligence in Education. Front. Digit. Educ. 2024, 1, 223–245. [Google Scholar] [CrossRef]
  2. García-López, I.M.; González González, C.S.; Ramírez-Montoya, M.-S.; Molina-Espinosa, J.-M. Challenges of implementing ChatGPT on education: Systematic literature review. Int. J. Educ. Res. Open 2025, 8, 100401. [Google Scholar] [CrossRef]
  3. Al-kfairy, M. Factors Impacting the Adoption and Acceptance of ChatGPT in Educational Settings: A Narrative Review of Empirical Studies. ASI 2024, 7, 110. [Google Scholar] [CrossRef]
  4. Fui-Hoon Nah, F.; Zheng, R.; Cai, J.; Siau, K.; Chen, L. Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. J. Inf. Technol. Case Appl. Res. 2023, 25, 277–304. [Google Scholar] [CrossRef]
  5. Lo, C.K.; Hew, K.F.; Jong, M.S. The influence of ChatGPT on student engagement: A systematic review and future research agenda. Comput. Educ. 2024, 219, 105100. [Google Scholar] [CrossRef]
  6. Shahzad, M.F.; Xu, S.; Zahid, H. Exploring the impact of generative AI-based technologies on learning performance through self-efficacy, fairness & ethics, creativity, and trust in higher education. Educ. Inf. Technol. 2024, 1–26. [Google Scholar] [CrossRef]
  7. Shahzad, M.F.; Xu, S.; Javed, I. ChatGPT awareness, acceptance, and adoption in higher education: The role of trust as a cornerstone. Int. J. Educ. Technol. High. Educ. 2024, 21, 46. [Google Scholar] [CrossRef]
  8. Ivanov, S.; Soliman, M.; Tuomi, A.; Alkathiri, N.A.; Al-Alawi, A.N. Drivers of generative AI adoption in higher education through the lens of the Theory of Planned Behaviour. Technol. Soc. 2024, 77, 102521. [Google Scholar] [CrossRef]
  9. Luo (Jess), J. A critical review of GenAI policies in higher education assessment: A call to reconsider the “originality” of students’ work. Assess. Eval. High. Educ. 2024, 49, 651–664. [Google Scholar] [CrossRef]
  10. Law, L. Application of generative artificial intelligence (GenAI) in language teaching and learning: A scoping literature review. Comput. Educ. Open 2024, 6, 100174. [Google Scholar] [CrossRef]
  11. Ching, Y.-H.; Hsu, Y.-C.; Hung, A. Introduction to the Special Section on Integrating Generative AI in Education. TechTrends 2024, 68, 771–772. [Google Scholar] [CrossRef]
  12. Karpouzis, K.; Pantazatos, D.; Taouki, J.; Meli, K. Tailoring Education with GenAI: A New Horizon in Lesson Planning. arXiv 2024. [Google Scholar] [CrossRef]
  13. Chiu, T.K.F. The impact of Generative AI (GenAI) on practices, policies and research direction in education: A case of ChatGPT and Midjourney. Interact. Learn. Environ. 2023, 32, 6187–6203. [Google Scholar] [CrossRef]
  14. Baig, M.I.; Yadegaridehkordi, E. ChatGPT in the higher education: A systematic literature review and research challenges. Int. J. Educ. Res. 2024, 127, 102411. [Google Scholar] [CrossRef]
  15. Ayinde, L.; Wibowo, M.P.; Ravuri, B.; Emdad, F.B. ChatGPT as an important tool in organizational management: A review of the literature. Bus. Inf. Rev. 2023, 40, 137–149. [Google Scholar] [CrossRef]
  16. Lee, Y.-F.; Hwang, G.-J.; Chen, P.-Y. Impacts of an AI-based chabot on college students’ after-class review, academic performance, self-efficacy, learning attitude, and motivation. Educ. Tech. Res. Dev. 2022, 70, 1843–1865. [Google Scholar] [CrossRef]
  17. Oviedo-Trespalacios, O.; Peden, A.E.; Cole-Hunter, T.; Costantini, A.; Haghani, M.; Rod, J.E.; Kelly, S.; Torkamaan, H.; Tariq, A.; David Albert Newton, J.; et al. The risks of using ChatGPT to obtain common safety-related information and advice. Saf. Sci. 2023, 167, 106244. [Google Scholar] [CrossRef]
  18. Tiwari, C.K.; Bhat, M.A.; Khan, S.T.; Subramaniam, R.; Khan, M.A.I. What drives students toward ChatGPT? An investigation of the factors influencing adoption and usage of ChatGPT. ITSE 2024, 21, 333–355. [Google Scholar] [CrossRef]
  19. Bin-Nashwan, S.A.; Sadallah, M.; Bouteraa, M. Use of ChatGPT in academia: Academic integrity hangs in the balance. Technol. Soc. 2023, 75, 102370. [Google Scholar] [CrossRef]
  20. Shal, T.; Ghamrawi, N.; Naccache, H. Leadership styles and AI acceptance in academic libraries in higher education. J. Acad. Librariansh. 2024, 50, 102849. [Google Scholar] [CrossRef]
  21. Dahri, N.A.; Yahaya, N.; Al-Rahmi, W.M.; Aldraiweesh, A.; Alturki, U.; Almutairy, S.; Shutaleva, A.; Soomro, R.B. Extended TAM based acceptance of AI-Powered ChatGPT for supporting metacognitive self-regulated learning in education: A mixed-methods study. Heliyon 2024, 10, e29317. [Google Scholar] [CrossRef] [PubMed]
  22. Al Darayseh, A. Acceptance of artificial intelligence in teaching science: Science teachers’ perspective. Comput. Educ. Artif. Intell. 2023, 4, 100132. [Google Scholar] [CrossRef]
  23. Zhang, C.; Schießl, J.; Plößl, L.; Hofmann, F.; Gläser-Zikuda, M. Acceptance of artificial intelligence among pre-service teachers: A multigroup analysis. Int. J. Educ. Technol. High. Educ. 2023, 20, 49. [Google Scholar] [CrossRef]
  24. Marangunić, N.; Granić, A. Technology acceptance model: A literature review from 1986 to 2013. Univ. Access Inf. Soc. 2015, 14, 81–95. [Google Scholar] [CrossRef]
  25. The Technology Acceptance Model: 30 Years of TAM. Available online: https://link.springer.com/book/10.1007/978-3-030-45274-2 (accessed on 7 December 2024).
  26. Abdalla, R.A.M. Examining awareness, social influence, and perceived enjoyment in the TAM framework as determinants of ChatGPT. Personalization as a moderator. J. Open Innov. Technol. Mark. Complex. 2024, 10, 100327. [Google Scholar] [CrossRef]
  27. Foroughi, B.; Iranmanesh, M.; Ghobakhloo, M.; Senali, M.G.; Annamalai, N.; Naghmeh-Abbaspour, B.; Rejeb, A. Determinants of ChatGPT adoption among students in higher education: The moderating effect of trust. Electron. Libr. 2024, 43, 1–21. [Google Scholar] [CrossRef]
  28. Cambra-Fierro, J.J.; Blasco, M.F.; López-Pérez, M.-E.E.; Trifu, A. ChatGPT adoption and its influence on faculty well-being: An empirical research in higher education. Educ. Inf. Technol. 2024, 1–22. [Google Scholar] [CrossRef]
  29. Shahzad, M.F.; Xu, S.; Asif, M. Factors affecting generative artificial intelligence, such as ChatGPT, use in higher education: An application of technology acceptance model. Br. Educ. Res. J. 2024, berj.4084. [Google Scholar] [CrossRef]
  30. Camilleri, M.A. Factors affecting performance expectancy and intentions to use ChatGPT: Using SmartPLS to advance an information technology acceptance framework. Technol. Forecast. Soc. Chang. 2024, 201, 123247. [Google Scholar] [CrossRef]
  31. Niloy, A.C.; Hafiz, R.; Hossain, B.M.; Gulmeher, F.; Sultana, N.; Islam, K.F.; Bushra, F.; Islam, S.; Hoque, S.I.; Rahman, M.; et al. AI chatbots: A disguised enemy for academic integrity? Int. J. Educ. Res. Open 2024, 7, 100396. [Google Scholar] [CrossRef]
  32. Budhathoki, T.; Zirar, A.; Njoya, E.T.; Timsina, A. ChatGPT adoption and anxiety: A cross-country analysis utilising the unified theory of acceptance and use of technology (UTAUT). Stud. High. Educ. 2024, 49, 831–846. [Google Scholar] [CrossRef]
  33. Sobaih, A.E.E.; Elshaer, I.A.; Hasanein, A.M. Examining Students’ Acceptance and Use of ChatGPT in Saudi Arabian Higher Education. Eur. J. Investig. Health Psychol. Educ. 2024, 14, 709–721. [Google Scholar] [CrossRef]
  34. Guillén-Gámez, F.D.; Sánchez-Vega, E.; Colomo-Magaña, E.; Sánchez-Rivas, E. Incident factors in the use of ChatGPT and dishonest practices as a system of academic plagiarism: The creation of a PLS-SEM model. Res. Pract. Technol. Enhanc. Learn. 2024, 20, 028. [Google Scholar] [CrossRef]
  35. Habibi, A.; Muhaimin, M.; Danibao, B.K.; Wibowo, Y.G.; Wahyuni, S.; Octavia, A. ChatGPT in higher education learning: Acceptance and use. Comput. Educ. Artif. Intell. 2023, 5, 100190. [Google Scholar] [CrossRef]
  36. Acquah, B.Y.S.; Arthur, F.; Salifu, I.; Quayson, E.; Nortey, S.A. Preservice teachers’ behavioural intention to use artificial intelligence in lesson planning: A dual-staged PLS-SEM-ANN approach. Comput. Educ. Artif. Intell. 2024, 7, 100307. [Google Scholar] [CrossRef]
  37. Hsu, W.-L.; Silalahi, A.D.K. Exploring the paradoxical use of ChatGPT in education: Analyzing benefits, risks, and coping strategies through integrated UTAUT and PMT theories using a hybrid approach of SEM and fsQCA. Comput. Educ. Artif. Intell. 2024, 7, 100329. [Google Scholar] [CrossRef]
  38. Acosta-Enriquez, B.G.; Arbulú Pérez Vargas, C.G.; Huamaní Jordan, O.; Arbulú Ballesteros, M.A.; Paredes Morales, A.E. Exploring attitudes toward ChatGPT among college students: An empirical analysis of cognitive, affective, and behavioral components using path analysis. Comput. Educ. Artif. Intell. 2024, 7, 100320. [Google Scholar] [CrossRef]
  39. Strzelecki, A. Students’ Acceptance of ChatGPT in Higher Education: An Extended Unified Theory of Acceptance and Use of Technology. Innov. High. Educ. 2024, 49, 223–245. [Google Scholar] [CrossRef]
  40. Albayati, H. Investigating undergraduate students’ perceptions and awareness of using ChatGPT as a regular assistance tool: A user acceptance perspective study. Comput. Educ. Artif. Intell. 2024, 6, 100203. [Google Scholar] [CrossRef]
  41. Sabraz Nawaz, S.; Fathima Sanjeetha, M.B.; Al Murshidi, G.; Mohamed Riyath, M.I.; Mat Yamin, F.B.; Mohamed, R. Acceptance of ChatGPT by undergraduates in Sri Lanka: A hybrid approach of SEM-ANN. Interact. Technol. Smart Educ. 2024, 21, 546–570. [Google Scholar] [CrossRef]
  42. Surya Bahadur, G.C.; Bhandari, P.; Gurung, S.K.; Srivastava, E.; Ojha, D.; Dhungana, B.R. Examining the role of social influence, learning value and habit on students’ intention to use ChatGPT: The moderating effect of information accuracy in the UTAUT2 model. Cogent Educ. 2024, 11, 2403287. [Google Scholar] [CrossRef]
  43. Ofem, U.J.; Owan, V.J.; Iyam, M.A.; Udeh, M.I.; Anake, P.M.; Ovat, S.V. Students’ perceptions, attitudes and utilisation of ChatGPT for academic dishonesty: Multigroup analyses via PLS-SEM. Educ. Inf. Technol. 2024, 159–187. [Google Scholar] [CrossRef]
  44. Eppler, M.; Ganjavi, C.; Ramacciotti, L.S.; Piazza, P.; Rodler, S.; Checcucci, E.; Gomez Rivas, J.; Kowalewski, K.F.; Belenchón, I.R.; Puliatti, S.; et al. Awareness and Use of ChatGPT and Large Language Models: A Prospective Cross-sectional Global Survey in Urology. Eur. Urol. 2024, 85, 146–153. [Google Scholar] [CrossRef]
  45. Fu, C.-J.; Silalahi, A.D.K.; Huang, S.-C.; Phuong, D.T.T.; Eunike, I.J.; Yu, Z.-H. The (Un)Knowledgeable, the (Un)Skilled? Undertaking Chat-GPT Users’ Benefit-Risk-Coping Paradox in Higher Education Focusing on an Integrated, UTAUT and PMT. Int. J. Hum.–Comput. Interact. 2024, 1–31. [Google Scholar] [CrossRef]
  46. Lai, C.Y.; Cheung, K.Y.; Chan, C.S.; Law, K.K. Integrating the adapted UTAUT model with moral obligation, trust and perceived risk to predict ChatGPT adoption for assessment support: A survey with students. Comput. Educ. Artif. Intell. 2024, 6, 100246. [Google Scholar] [CrossRef]
  47. Al Murshidi, G.; Shulgina, G.; Kapuza, A.; Costley, J. How understanding the limitations and risks of using ChatGPT can contribute to willingness to use. Smart Learn. Environ. 2024, 11, 36. [Google Scholar] [CrossRef]
  48. Choi, S.; Jang, Y.; Kim, H. Influence of Pedagogical Beliefs and Perceived Trust on Teachers’ Acceptance of Educational Artificial Intelligence Tools. Int. J. Hum.–Comput. Interact. 2023, 39, 910–922. [Google Scholar] [CrossRef]
  49. Maheshwari, G. Factors influencing students’ intention to adopt and use ChatGPT in higher education: A study in the Vietnamese context. Educ. Inf. Technol. 2024, 29, 12167–12195. [Google Scholar] [CrossRef]
  50. Almogren, A.S.; Al-Rahmi, W.M.; Dahri, N.A. Exploring factors influencing the acceptance of ChatGPT in higher education: A smart education perspective. Heliyon 2024, 10, e31887. [Google Scholar] [CrossRef]
  51. Bhaskar, P.; Misra, P.; Chopra, G. Shall I use ChatGPT? A study on perceived trust and perceived risk towards ChatGPT usage by teachers at higher education institutions. Int. J. Inf. Learn. Technol. 2024, 41, 428–447. [Google Scholar] [CrossRef]
  52. Amin, M.A.; Kim, Y.S.; Noh, M. Unveiling the drivers of ChatGPT utilization in higher education sectors: The direct role of perceived knowledge and the mediating role of trust in ChatGPT. Educ. Inf. Technol. 2024, 9–37. [Google Scholar] [CrossRef]
  53. Gefen, D.; Karahanna, E.; Straub, D.W. Trust and TAM in Online Shopping: An Integrated Model. MIS Q. 2003, 27, 51–90. [Google Scholar] [CrossRef]
  54. Venkatesh, V.; Bala, H. Technology Acceptance Model 3 and a Research Agenda on Interventions. Decis. Sci. 2008, 39, 273–315. [Google Scholar] [CrossRef]
  55. Chuttur, M. Overview of the Technology Acceptance Model: Origins, Developments and Future Directions. All Sprouts Content 2009, 9, 290. [Google Scholar]
  56. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  57. Polyportis, A. A longitudinal study on artificial intelligence adoption: Understanding the drivers of ChatGPT usage behavior change in higher education. Front. Artif. Intell. 2024, 6, 1324398. [Google Scholar] [CrossRef] [PubMed]
  58. Jo, H. From concerns to benefits: A comprehensive study of ChatGPT usage in education. Int. J. Educ. Technol. High. Educ. 2024, 21, 35. [Google Scholar] [CrossRef]
  59. Kesmodel, U.S. Cross-sectional studies—What are they good for? Acta Obstet. Gynecol. Scand. 2018, 97, 388–393. [Google Scholar] [CrossRef]
  60. Olsen, C.; St George, D.M.M. Cross-sectional study design and data analysis. Coll. Entr. Exam. Board 2004, 26, 2006. [Google Scholar]
  61. Rahman, M.M. Sample size determination for survey research and non-probability sampling techniques: A review and set of recommendations. J. Entrep. Bus. Econ. 2023, 11, 42–62. [Google Scholar]
  62. Rahman, M.M.; Tabash, M.I.; Salamzadeh, A.; Abduli, S.; Rahaman, M.S. Sampling techniques (probability) for quantitative social science researchers: A conceptual guidelines with examples. Seeu Rev. 2022, 17, 42–51. [Google Scholar] [CrossRef]
  63. Sandelowski, M. Combining qualitative and quantitative sampling, data collection, and analysis techniques in mixed-method studies. Res. Nurs. Health 2000, 23, 246–255. [Google Scholar] [CrossRef]
  64. Spector, P.E. Do not cross me: Optimizing the use of cross-sectional designs. J. Bus. Psychol. 2019, 34, 125–137. [Google Scholar] [CrossRef]
  65. Koyuncu, N.; Kadilar, C. Ratio and product estimators in stratified random sampling. J. Stat. Plan. Inference 2009, 139, 2552–2558. [Google Scholar] [CrossRef]
  66. Lynn, P. The Advantage and Disadvantage of Implicitly Stratified Sampling. Methods Data Anal. A J. Quant. Methods Surv. Methodol. (Mda) 2019, 13, 253–266. [Google Scholar] [CrossRef]
  67. Noy, C. Sampling knowledge: The hermeneutics of snowball sampling in qualitative research. Int. J. Soc. Res. Methodol. 2008, 11, 327–344. [Google Scholar] [CrossRef]
  68. Naderifar, M.; Goli, H.; Ghaljaie, F. Snowball sampling: A purposeful method of sampling in qualitative research. Strides Dev. Med. Educ. 2017, 14, 1–6. [Google Scholar] [CrossRef]
  69. Goodman, L.A. Snowball sampling. Ann. Math. Stat. 1961, 32, 148–170. [Google Scholar] [CrossRef]
  70. Van Zyl, L.E.; Ten Klooster, P.M. Exploratory Structural Equation Modeling: Practical Guidelines and Tutorial With a Convenient Online Tool for Mplus. Front. Psychiatry 2022, 12, 795672. [Google Scholar] [CrossRef] [PubMed]
  71. Wagner, R.; Grimm, M.S. Empirical Validation of the 10-Times Rule for SEM. In State of the Art in Partial Least Squares Structural Equation Modeling (PLS-SEM); Radomir, L., Ciornea, R., Wang, H., Liu, Y., Ringle, C.M., Sarstedt, M., Eds.; Springer Proceedings in Business and Economics; Springer International Publishing: Cham, Switzerland, 2023; pp. 3–7. ISBN 978-3-031-34588-3. [Google Scholar] [CrossRef]
  72. Applied Psychometrics: Sample Size and Sample Power Considerations in Factor Analysis (EFA, CFA) and SEM in General. Available online: https://www.scirp.org/journal/paperinformation?paperid=86856 (accessed on 8 December 2024).
  73. Memon, M.A.; Ting, H.; Cheah, J.-H.; Thurasamy, R.; Chuah, F.; Cham, T.H. Sample Size for Survey Research: Review and Recommendations. JASEM 2020, 4, i–xx. [Google Scholar] [CrossRef] [PubMed]
  74. Kock, N.; Hadaya, P. Minimum sample size estimation in PLS-SEM: The inverse square root and gamma-exponential methods. Inf. Syst. J. 2018, 28, 227–261. [Google Scholar] [CrossRef]
  75. Nitzl, C.; Roldan, J.L.; Cepeda, G. Mediation analysis in partial least squares path modeling: Helping researchers discuss more sophisticated models. Ind. Manag. Data Syst. 2016, 116, 1849–1864. [Google Scholar] [CrossRef]
  76. Chin, W.W. The partial least squares approach to structural equation modeling. Mod. Methods Bus. Res. 1998, 295, 295–336. [Google Scholar]
  77. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  78. Sarstedt, M.; Henseler, J.; Ringle, C.M. Multigroup analysis in partial least squares (PLS) path modeling: Alternative methods and empirical results. In Measurement and Research Methods in International Marketing; Emerald Group Publishing Limited: Bradford, UK, 2011; pp. 195–218. ISBN 1474-7979. [Google Scholar]
  79. Matthews, L. Applying multigroup analysis in PLS-SEM: A step-by-step process. In Partial Least Squares Path Modeling: Basic Concepts, Methodological Issues and Applications; Springer: Cham, Switzerland, 2017; pp. 219–243. [Google Scholar]
  80. Cheah, J.-H.; Amaro, S.; Roldán, J.L. Multigroup analysis of more than two groups in PLS-SEM: A review, illustration, and recommendations. J. Bus. Res. 2023, 156, 113539. [Google Scholar] [CrossRef]
  81. Wong, K.K.-K. Partial least squares structural equation modeling (PLS-SEM) techniques using SmartPLS. Mark. Bull. 2013, 24, 1–32. [Google Scholar]
  82. Podsakoff, P.M.; MacKenzie, S.B.; Lee, J.-Y.; Podsakoff, N.P. Common method biases in behavioral research: A critical review of the literature and recommended remedies. J. Appl. Psychol. 2003, 88, 879. [Google Scholar] [CrossRef] [PubMed]
  83. Podsakoff, P.M.; MacKenzie, S.B.; Podsakoff, N.P. Sources of method bias in social science research and recommendations on how to control it. Annu. Rev. Psychol. 2012, 63, 539–569. [Google Scholar] [CrossRef] [PubMed]
  84. Hair, J.F.; Ringle, C.M.; Sarstedt, M. PLS-SEM: Indeed a silver bullet. J. Mark. Theory Pract. 2011, 19, 139–152. [Google Scholar] [CrossRef]
  85. Vinzi, V.E.; Chin, W.W.; Henseler, J.; Wang, H. Handbook of Partial Least Squares; Springer: Berlin/Heidelberg, Germany, 2010; Volume 201. [Google Scholar]
  86. Hair, J.F.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M.; Danks, N.P.; Ray, S. An Introduction to Structural Equation Modeling. In Partial Least Squares Structural Equation Modeling (PLS-SEM) Using R; Classroom Companion: Business; Springer International Publishing: Cham, Switzerland, 2021; pp. 1–29. ISBN 978-3-030-80518-0. [Google Scholar] [CrossRef]
  87. Gefen, D.; Straub, D. A practical guide to factorial validity using PLS-Graph: Tutorial and annotated example. Commun. Assoc. Inf. Syst. 2005, 16, 5. [Google Scholar] [CrossRef]
  88. Wasko, M.M.; Faraj, S. Why should I share? Examining social capital and knowledge contribution in electronic networks of practice. MIS Q. 2005, 29, 35–57. [Google Scholar] [CrossRef]
  89. Hair, J.; Alamer, A. Partial Least Squares Structural Equation Modeling (PLS-SEM) in second language and education research: Guidelines using an applied example. Res. Methods Appl. Linguist. 2022, 1, 100027. [Google Scholar] [CrossRef]
  90. Hair, J.F.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM); SAGE Publications, Incorporated: New York, NY, USA, 2022; ISBN 1-5443-9640-6. [Google Scholar]
  91. Reichenheim, M.E.; Hökerberg, Y.H.M.; Moraes, C.L. Assessing construct structural validity of epidemiological measurement tools: A seven-step roadmap. Cad. De Saúde Pública 2014, 30, 927–939. [Google Scholar] [CrossRef] [PubMed]
  92. Stevens, J. Applied Multivariate Statistics for the Social Sciences; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 2002; Volume 4. [Google Scholar]
  93. Henseler, J.; Hubona, G.; Ray, P.A. Using PLS path modeling in new technology research: Updated guidelines. Ind. Manag. Data Syst. 2016, 116, 2–20. [Google Scholar] [CrossRef]
  94. Henseler, J.; Ringle, C.M.; Sarstedt, M. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 2015, 43, 115–135. [Google Scholar] [CrossRef]
  95. Hair, J.F., Jr.; Sarstedt, M.; Hopkins, L.; Kuppelwieser, V.G. Partial least squares structural equation modeling (PLS-SEM): An emerging tool in business research. Eur. Bus. Rev. 2014, 26, 106–121. [Google Scholar] [CrossRef]
  96. Preacher, K.J.; Hayes, A.F. Assessing Mediation in Communication Research. In The Sage Sourcebook of Advanced Data Analysis Methods for Communication; Sage: Thousand Oaks, CA, USA, 2008. [Google Scholar]
  97. Streukens, S.; Leroi-Werelds, S. Bootstrapping and PLS-SEM: A step-by-step guide to get more out of your bootstrap results. Eur. Manag. J. 2016, 34, 618–632. [Google Scholar] [CrossRef]
  98. Sova, R.; Tudor, C.; Tartavulea, C.V.; Dieaconescu, R.I. Artificial Intelligence Tool Adoption in Higher Education: A Structural Equation Modeling Approach to Understanding Impact Factors among Economics Students. Electronics 2024, 13, 3632. [Google Scholar] [CrossRef]
  99. Niloy, A.C.; Bari, M.A.; Sultana, J.; Chowdhury, R.; Raisa, F.M.; Islam, A.; Mahmud, S.; Jahan, I.; Sarkar, M.; Akter, S.; et al. Why do students use ChatGPT? Answering through a triangulation approach. Comput. Educ. Artif. Intell. 2024, 6, 100208. [Google Scholar] [CrossRef]
  100. Valle, N.N.; Kilat, R.V.; Lim, J.; General, E.; Dela Cruz, J.; Colina, S.J.; Batican, I.; Valle, L. Modeling learners’ behavioral intention toward using artificial intelligence in education. Social Sci. Humanit. Open 2024, 10, 101167. [Google Scholar] [CrossRef]
  101. Ali, F.; Yasar, B.; Ali, L.; Dogan, S. Antecedents and consequences of travelers’ trust towards personalized travel recommendations offered by ChatGPT. Int. J. Hosp. Manag. 2023, 114, 103588. [Google Scholar] [CrossRef]
Figure 1. Conceptual model of ChatGPT adoption intentions. The model presents direct effects (in dotted lines) and mediation effects of the key constructs: perceived ease of use (PE), perceived usefulness (PU), perceived intelligence (PI), perceived trust (PT), and perceived risk (PR) on behavioral intention to adopt ChatGPT (CGPTAI). Every labelled hypothesis (H1–H7b) reflects each relationship tested in the study. The mediation effects highlight the role that PT and PR play in shaping the influences of PE, PU, and PI on CGPTAI.
Table 1. Sample profile.
Gender: Female, 212 (48.7%); Male, 223 (51.3%).
Age: 18–25, 215 (49.4%); 26–30, 131 (30.1%); 31–40, 89 (20.5%).
Education: Bachelor's degree, 157 (36.1%); Master's degree, 230 (52.9%); PhD candidate, 23 (5.3%); Doctoral, 25 (5.7%).
Prior experience with AI tools (e.g., ChatGPT, Google Assistant, and Siri): No experience, 68 (15.6%); Minimal experience, 122 (28.0%); Moderate experience, 149 (34.3%); Extensive experience, 96 (22.1%).
Familiarity with ChatGPT: Not at all familiar, 108 (24.8%); Not very familiar, 142 (32.6%); Somewhat familiar, 97 (22.3%); Very familiar, 88 (20.2%).
Frequency of ChatGPT use for academic purposes: Daily, 145 (33.3%); Weekly, 27 (6.2%); Monthly, 97 (22.3%); Rarely, 88 (20.2%); Never, 78 (17.9%).
Primary purpose for using ChatGPT in academia: Research assistance, 145 (33.3%); Writing and editing support, 27 (6.2%); Learning new concepts or skills, 97 (22.3%); Problem-solving and study aid, 88 (20.2%); Other, 78 (17.9%).
Table 2. Factor loading reliability and convergent validity.
ChatGPT Adoption Intention (CGPTAI): Cronbach's alpha = 0.779, rho_A = 0.799, CR = 0.871, AVE = 0.694; loadings: CGPTAI1 = 0.784, CGPTAI2 = 0.901, CGPTAI3 = 0.809.
Perceived Ease of Use (PE): Cronbach's alpha = 0.846, rho_A = 0.853, CR = 0.890, AVE = 0.619; loadings: PE1 = 0.820, PE2 = 0.781, PE3 = 0.806, PE4 = 0.746, PE5 = 0.778.
Perceived Intelligence (PI): Cronbach's alpha = 0.871, rho_A = 0.873, CR = 0.921, AVE = 0.796; loadings: PI1 = 0.924, PI2 = 0.907, PI3 = 0.844.
Perceived Risk (PR): Cronbach's alpha = 0.893, rho_A = 0.899, CR = 0.933, AVE = 0.824; loadings: PR1 = 0.897, PR2 = 0.921, PR3 = 0.904.
Perceived Trust (PT): Cronbach's alpha = 0.924, rho_A = 0.924, CR = 0.952, AVE = 0.868; loadings: PT1 = 0.908, PT2 = 0.941, PT3 = 0.946.
Perceived Usefulness (PUSE): Cronbach's alpha = 0.518, rho_A = 0.532, CR = 0.755, AVE = 0.509; loadings: PUSE1 = 0.623, PUSE2 = 0.742, PUSE3 = 0.766.
Note: CR: composite reliability; AVE: average variance extracted; rho_A: reliability coefficient between Cronbach's alpha and CR; factor loadings: each indicator's reflection of its latent construct.
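As an illustration of how the CR and AVE values in Table 2 follow from the standardized loadings, the short sketch below applies the standard formulas to the perceived trust loadings. This is a pedagogical reconstruction, not the software output; only the three loadings are taken from the table.

```python
# Sketch: composite reliability (CR) and average variance extracted (AVE)
# from standardized factor loadings, using the standard formulas.
import numpy as np

def cr_ave(loadings):
    lam = np.asarray(loadings, dtype=float)
    error_var = 1.0 - lam ** 2                  # indicator error variances
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + error_var.sum())
    ave = (lam ** 2).mean()
    return cr, ave

cr, ave = cr_ave([0.908, 0.941, 0.946])         # PT loadings from Table 2
print(f"CR = {cr:.3f}, AVE = {ave:.3f}")        # ~0.952 and ~0.868, as in Table 2
```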
Table 3. HTMT ratio.
        CGPTAI  PE      PI      PR      PT
PE      0.693
PI      0.631   0.685
PR      0.570   0.390   0.340
PT      0.598   0.508   0.354   0.587
PUSE    0.557   0.684   0.553   0.475   0.539
Note: Discriminant validity is confirmed as all HTMT values fall below the threshold of 0.85.
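For completeness, the sketch below shows how an HTMT value of the kind reported in Table 3 is computed from an item correlation matrix, following Henseler et al. [94]: the mean between-construct item correlation divided by the geometric mean of the mean within-construct correlations. The synthetic data are purely illustrative.

```python
# Sketch: HTMT ratio for two constructs from an item correlation matrix.
import numpy as np

def htmt(R, items_a, items_b):
    """Heterotrait-monotrait ratio for two blocks of indicator indices."""
    hetero = R[np.ix_(items_a, items_b)].mean()     # between-construct correlations
    def mono(items):                                # mean within-construct correlation
        sub = R[np.ix_(items, items)]
        off_diag = sub[~np.eye(len(items), dtype=bool)]
        return off_diag.mean()
    return hetero / np.sqrt(mono(items_a) * mono(items_b))

# Illustrative demo: six synthetic items, the first three forming one construct
rng = np.random.default_rng(0)
X = rng.normal(size=(435, 6))
X[:, :3] += rng.normal(size=(435, 1))               # shared factor for items 0-2
X[:, 3:] += rng.normal(size=(435, 1))               # shared factor for items 3-5
R = np.corrcoef(X, rowvar=False)
print(round(htmt(R, [0, 1, 2], [3, 4, 5]), 3))      # well below the 0.85 threshold
```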
Table 4. Fornell and Larcker criterion.
        CGPTAI  PE      PI      PR      PT      PUSE
CGPTAI  0.833
PE      0.578   0.787
PI      0.527   0.583   0.892
PR      0.482   0.343   0.303   0.908
PT      0.515   0.454   0.317   0.532   0.932
PUSE    0.356   0.433   0.361   0.329   0.377   0.713
Note: The diagonal values represent the square root of the AVE.
Table 5. Hypothesis testing.
Hypothesis  Path            β       SD      t-Value  p-Value  Result
H1          PE → CGPTAI     0.272   0.049   5.505    0.000    Supported
H2          PI → CGPTAI     0.239   0.045   5.278    0.000    Supported
H3          PUSE → CGPTAI   0.008   0.042   0.181    0.428    Not supported
H4a         PR → CGPTAI     0.206   0.045   4.532    0.000    Supported
H4b         PT → CGPTAI     0.204   0.044   4.674    0.000    Supported
PE: perceived ease of use; PI: perceived intelligence; PUSE: perceived usefulness; PR: perceived risk; PT: perceived trust; CGPTAI: ChatGPT adoption intention; β: path coefficient; SD: standard deviation; t-value: test statistic; p-values: statistical significance (p < 0.05).
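As a quick arithmetic check on Table 5, each t-value is the bootstrap ratio β/SD, and the reported p-values are consistent with one-tailed normal probabilities, which suits the directional hypotheses. The sketch below recomputes H1 and H3 from the rounded table values; small deviations from the tabulated t-values reflect rounding of β and SD.

```python
# Sketch: recovering t and (one-tailed) p from the bootstrap estimates in Table 5.
from scipy.stats import norm

for label, beta, sd in [("H1: PE -> CGPTAI", 0.272, 0.049),
                        ("H3: PUSE -> CGPTAI", 0.008, 0.042)]:
    t = beta / sd                           # bootstrap t-ratio
    p = 1 - norm.cdf(abs(t))                # one-tailed p, as the table appears to use
    print(f"{label}: t = {t:.3f}, p = {p:.3f}")
# H1 gives t ~ 5.55 (p ~ 0.000); H3 gives t ~ 0.190, p ~ 0.425 (0.428 in Table 5).
```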
Table 6. Mediation analysis.
Direct Effects               β       SD      t-Value  p-Value
PE → CGPTAI                  0.272   0.049   5.505    0.000
PI → CGPTAI                  0.239   0.045   5.278    0.000
PUSE → CGPTAI                0.008   0.042   0.181    0.428
Total Indirect Effects       β       SD      t-Value  p-Value
PE → CGPTAI                  0.106   0.024   4.359    0.000
PI → CGPTAI                  0.034   0.020   1.682    0.046
PUSE → CGPTAI                0.087   0.018   4.718    0.000
Specific Indirect Effects    β       SD      t-Value  p-Value  Result      Mediation Type
H5a  PUSE → PT → CGPTAI      0.044   0.013   3.342    0.000    Supported   Full mediation
H5b  PUSE → PR → CGPTAI      0.042   0.013   3.141    0.001    Supported   Full mediation
H6a  PE → PT → CGPTAI        0.068   0.019   3.671    0.000    Supported   Partial mediation
H6b  PE → PR → CGPTAI        0.038   0.014   2.684    0.004    Supported   Partial mediation
H7a  PI → PT → CGPTAI        0.009   0.011   0.795    0.213    Not supp.   No mediation
H7b  PI → PR → CGPTAI        0.025   0.015   1.728    0.042    Supported   Partial mediation
Table 7. Significant MGA results with group comparisons.
Path           Group Comparison                                Δβ       p-Value
Significant Results
PT → CGPTAI    Gender (Female vs. Male)                        −0.150   0.048
PE → PR        Familiarity with ChatGPT (High vs. Low)          0.229   0.010
PE → PR        Prior Exp. with AI Tools (High vs. Low)          0.262   0.006
PR → CGPTAI    Prior Exp. with AI Tools (High vs. Low)         −0.175   0.031
PE → PT        Age (18–25 vs. 26–30)                            0.269   0.012
PE → PT        Age (26–30 vs. 31–40)                           −0.227   0.042
PU → CGPTAI    Age (18–25 vs. 26–30)                            0.208   0.021
PU → CGPTAI    Age (26–30 vs. 31–40)                           −0.247   0.028
PI → PR        Age (18–25 vs. 31–40)                            0.325   0.024
PI → PR        Age (26–30 vs. 31–40)                            0.313   0.034
PI → PT        Age (18–25 vs. 31–40)                            0.306   0.020
PI → PT        Age (26–30 vs. 31–40)                            0.386   0.007
Marginal Results
PR → CGPTAI    Gender (Female vs. Male)                         0.127   0.080
PR → CGPTAI    Frequency of ChatGPT Use (High vs. Low)         −0.128   0.098
PE → CGPTAI    Prior Exp. with AI Tools (High vs. Low)          0.131   0.094
PI → PT        Prior Exp. with AI Tools (High vs. Low)          0.141   0.097
Note: Δβ represents the difference in path coefficients for the specified groups. Values are based on multi-group analysis (MGA), highlighting key group-level differences in the relationships between variables.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
