Article

AI-Powered E-Learning for Lifelong Learners: Impact on Performance and Knowledge Application

National Institute for Lifelong Education, Seoul 04520, Republic of Korea
Sustainability 2024, 16(20), 9066; https://doi.org/10.3390/su16209066
Submission received: 5 September 2024 / Revised: 14 October 2024 / Accepted: 15 October 2024 / Published: 19 October 2024
(This article belongs to the Special Issue Sustainable E-learning and Education with Intelligence—2nd Edition)

Abstract

The widespread integration of artificial intelligence (AI) technologies, such as generative AI tools like ChatGPT, in education and workplaces requires a clear understanding of the factors that influence their adoption and effectiveness. This study explores how the ease of using AI tools, the ability to apply knowledge gained from them, and users’ confidence in learning with AI impact individuals’ performance and frequency of use. We also examine how these factors affect academic success and job performance among adults engaged in lifelong learning. Using data from 300 participants analyzed with Partial Least Squares Structural Equation Modeling (PLS-SEM), we found that, when AI tools are easy to use, individuals experience greater benefits and are more likely to use them regularly. Applying knowledge from AI tools enhances both personal performance and usage frequency. Additionally, having confidence in one’s ability to learn with AI leads to significant improvements in personal outcomes and an increased use of AI tools. These findings highlight the importance of designing user-friendly AI technologies, promoting the practical application of AI-generated knowledge, and building users’ confidence to maximize the benefits of AI. Educators, policymakers, and AI developers can use these insights to develop strategies that enhance academic and job performance through effective AI integration. Future research should consider other influencing factors and employ longitudinal studies to further validate these findings.

1. Introduction

In recent years, the rapid advancement of artificial intelligence (AI) technologies has brought about profound changes across multiple sectors, notably in education and the workplace [1]. Generative AI models, such as OpenAI’s ChatGPT and Google’s Gemini, exemplify this technological leap, providing sophisticated tools for various applications [2]. These models are capable of processing vast amounts of information, generating high-quality content, and automating complex tasks with remarkable efficiency [3,4]. The integration of such technologies into daily operations has the potential to drastically improve productivity by streamlining workflows [5,6] and enhancing learning outcomes through personalized educational content and automated support [7,8]. Consequently, it becomes imperative to thoroughly investigate the factors that drive the adoption and effective utilization of these advanced AI tools to maximize their benefits and address any challenges associated with their implementation.
Working adults engaged in lifelong learning are required to balance their educational pursuits with professional responsibilities, creating a unique need for tools that can facilitate both learning and knowledge formation. Generative AI, with its capacity to assist in learning and enhance knowledge creation, becomes an invaluable asset in this context. This study recognizes the dual role of lifelong learners, who must effectively integrate newly acquired knowledge into both their professional tasks and academic endeavors. Therefore, it is crucial to understand how generative AI can be leveraged to improve their performance in these areas. We aim to elucidate the mechanisms through which generative AI can enhance both job performance and academic achievements for lifelong learners. By focusing on how these individuals can use generative AI to form and apply knowledge across different settings, our research seeks to provide insights into the potential of these technologies to support continuous learning and professional development.
Extensive research has been conducted on the adoption and use of AI technologies in various contexts. The Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) are widely used frameworks that explain how users come to accept and use technology [9,10]. These models emphasize key constructs such as perceived ease of use, perceived usefulness, and user self-efficacy, which are critical in determining technology adoption. Studies have shown that, when users find technology easy to use and useful, they are more likely to integrate it into their daily activities, leading to better performance outcomes [11,12,13]. Despite the extensive research on technology adoption, there are still gaps in understanding the specific factors that influence the effective use of generative AI technologies in both educational and professional settings. Most existing studies focus on general technology adoption without delving into the unique characteristics and challenges associated with generative AI. Moreover, the interplay between technicality, knowledge application, and self-efficacy in AI learning remains underexplored [14,15]. There is a need for research that not only addresses these gaps but also investigates how these factors collectively influence individual impact and usage of AI technologies.
Investigating these factors is crucial for several reasons. First, as AI technologies become more integrated into educational and professional environments, understanding what drives their effective use can help maximize their benefits. Enhanced productivity, improved learning outcomes, and better job performance are some of the potential advantages of effectively leveraging AI tools [16,17,18]. Second, identifying and addressing the barriers to AI adoption can inform the development of strategies and interventions that support users in overcoming these challenges, thereby fostering more widespread and effective use of AI technologies. Lastly, insights from such research can guide policymakers, educators, and managers in creating environments that support continuous learning and adaptation to new technologies, ultimately leading to improved organizational and educational outcomes.
This study fills these gaps by employing a comprehensive model that integrates constructs from TAM and UTAUT with additional factors specific to generative AI technologies. Using Partial Least Squares Structural Equation Modeling (PLS-SEM), the study analyzes data collected from 300 participants engaged in lifelong learning who currently use generative AI in their work or studies. The model examines the roles of technicality, knowledge application, and self-efficacy in AI learning on individual impact and usage, and their subsequent effects on academic and job performance. By focusing on these specific factors, the study aims to provide a detailed understanding of how generative AI technologies can be effectively utilized in various contexts.
In this study, technicality refers to the ease with which users can understand and interact with generative AI, focusing on the simplicity of use in tasks [19,20]. Knowledge application captures how effectively AI helps users access and apply information in academic or work settings [21,22]. Self-efficacy in AI learning reflects users’ confidence in using AI tools to enhance their skills and productivity [15]. Individual impact measures the positive outcomes generated from AI use, such as improved task efficiency and performance [23]. Usage assesses how frequently users rely on AI in their daily routines [21,24,25], while academic [26] and job performance [27] examine how AI contributes to success in educational achievements and workplace productivity, respectively.
The contributions of this study are manifold. Firstly, it extends existing theoretical models by incorporating constructs that are particularly relevant to generative AI technologies, thus offering a more nuanced understanding of technology adoption and usage. Secondly, the study provides empirical evidence on the relationships between technicality, knowledge application, self-efficacy, and performance outcomes, which has been underexplored in prior research [14,23]. Thirdly, the findings can inform the development of targeted interventions and training programs that enhance user engagement and proficiency with AI tools, leading to improved performance in both educational and professional settings. Finally, the study’s insights can guide policymakers and educational institutions in promoting effective AI integration, ultimately contributing to better learning and productivity outcomes. In conclusion, this study addresses significant gaps in the existing literature on AI technology adoption and use. By exploring the specific factors that influence the effective utilization of generative AI in educational and professional contexts, it offers valuable insights that can enhance our understanding of technology integration. The findings have the potential to inform both theory and practice, providing a robust framework for future research and practical applications.

2. Literature Review

2.1. Education across the Lifespan

Education across the lifespan, often referred to as lifelong learning, encompasses all learning activities undertaken throughout life for personal or professional development [28]. This concept is crucial in a world characterized by rapid social, economic, and technological changes, which demand continuous adaptation and skills enhancement [29].
The concept of lifelong learning is rooted in the belief that education does not stop after formal schooling but continues through one’s entire life [30]. This approach to education emphasizes the need for continuous personal and professional development in response to evolving societal demands [31]. Lifelong learning is considered essential for fostering adaptability, resilience, and sustained employability in a knowledge-based economy [32].
Several educational frameworks and policies have been developed to promote lifelong learning. The European Commission, for instance, highlights the importance of lifelong learning in achieving social inclusion, active citizenship, and personal development [33]. Similarly, UNESCO’s Education for Sustainable Development (ESD) emphasizes the need for educational systems that support learning from childhood through adulthood, ensuring that individuals can acquire the knowledge, skills, attitudes, and values necessary to contribute to sustainable development [34].
Intergenerational learning is a significant aspect of lifelong education, involving the reciprocal exchange of knowledge and skills between different generations [35,36]. This approach not only enhances learning experiences but also fosters social cohesion and mutual understanding among different age groups. Studies have shown that intergenerational learning can lead to improved educational outcomes and greater community engagement [37]. For example, older adults can share their experiences and wisdom with younger generations, while younger individuals can introduce new technologies and modern practices to older learners.
Despite its benefits, lifelong learning faces several challenges and barriers. Accessibility to learning opportunities, financial constraints, and lack of motivation or awareness are significant hurdles. Moreover, the traditional education system often does not cater to adult learners who require flexible and tailored learning solutions [38]. To address these issues, educational institutions and policymakers must develop inclusive strategies that make lifelong learning more accessible and relevant to all age groups.
The advent of digital technologies has transformed lifelong learning, making it more accessible and flexible. Online learning platforms, Massive Open Online Courses (MOOCs), and digital resources enable individuals to learn at their own pace and convenience. These technologies have democratized education, providing opportunities for continuous learning beyond geographical and temporal constraints [39]. However, there is a need to ensure that these technologies are accessible to all, including those with limited digital literacy or access to technology.
While the literature on lifelong learning provides a solid foundation for understanding its importance, much of the existing research lacks a nuanced examination of the integration of new technologies, particularly generative AI, in facilitating lifelong education. Prior studies primarily emphasize traditional and digital learning platforms without addressing the potential of AI to reshape learning strategies across diverse age groups. The current research aims to fill this gap by exploring how AI technologies can enhance adaptability and personalized learning in lifelong education. Moreover, while the benefits of lifelong learning are well-established, challenges such as accessibility, digital literacy, and intergenerational knowledge exchange remain underexplored in the context of AI adoption. This study seeks to bridge these gaps by focusing on how AI can overcome such challenges, providing new insights into lifelong learning’s evolving landscape.

2.2. AI’s Impact on Education

AI is increasingly seen as a transformative force in the field of education, particularly in enhancing personalized learning, increasing accessibility, and facilitating lifelong education [1,40,41]. Its impact ranges from providing customized learning experiences to automating administrative tasks, thereby freeing up time for educators to focus on more critical pedagogical roles [42,43]. AI’s capabilities in adaptive learning, intelligent tutoring systems, and real-time data analytics have significantly improved how learners interact with educational content, offering tailored and accessible learning opportunities [44,45]. However, while the promise of AI in education is clear, the literature highlights both the potential benefits and the limitations and challenges that need to be addressed [7,46].
One of the primary advantages of AI in education is its ability to personalize learning experiences, offering customized instruction that caters to the unique needs of each learner [47]. AI-powered adaptive learning systems can continuously analyze a learner’s strengths, weaknesses, and progress, thereby adjusting the curriculum to optimize learning outcomes [48]. This personalized feedback is crucial in lifelong education, where learners often come from diverse backgrounds with varying levels of expertise [49]. Adaptive learning platforms can diagnose specific areas where learners struggle and automatically provide supplemental resources, thus ensuring a more efficient and targeted learning process [50].
AI also plays a significant role in bridging formal and informal education. AI-driven platforms can integrate various learning experiences, from online courses to informal tutorials, into a cohesive learning journey [40]. This integration allows individuals to develop skills continuously throughout their lives, thereby aligning with the principles of lifelong learning. By offering a flexible learning environment, AI enhances learners’ ability to access knowledge at their own pace, irrespective of geographical and temporal constraints.
Another impactful use of AI is through intelligent tutoring systems (ITSs), which simulate one-on-one tutoring experiences and offer personalized assistance to learners [51,52]. Ref. [53] highlights the potential of ITSs in democratizing access to high-quality education. These systems can provide learners with instant feedback, answer questions in real time, and adapt their teaching strategies based on the learner’s performance [54,55]. This technology is especially valuable in lifelong education, where learners might need specific, individualized support that traditional classroom settings cannot always provide. ITSs can also increase engagement by using AI to create interactive, dynamic learning environments that maintain learner interest over extended periods [51].
However, alongside these advantages, there are critical challenges associated with the adoption of AI in education. One of the most significant issues is the potential for bias in AI algorithms, which could reinforce existing educational inequalities [56,57]. As AI systems often rely on large datasets, there is a risk that these datasets may contain inherent biases, particularly if they underrepresent certain groups of learners [58]. Biased AI systems could exacerbate disparities in educational outcomes [59]. Therefore, ensuring that AI systems are designed with fairness and inclusivity in mind is essential for realizing their full potential in educational settings.
Another challenge is the ethical concerns surrounding data privacy and the use of learners’ personal information [60,61]. AI systems require vast amounts of data to function effectively, raising questions about how these data are collected, stored, and used. Research emphasizes the importance of creating transparent AI systems that allow learners to maintain control over their data [62]. Without clear data governance policies, the widespread adoption of AI in education could lead to a loss of trust among users, thereby hindering its effectiveness.
Finally, the integration of AI into educational systems faces technical challenges, including the need for reliable infrastructure and adequate training for educators. AI technologies often require substantial financial investment [63], and schools or institutions with limited resources may struggle to implement them effectively. Moreover, educators need to be trained not only to use AI tools but also to understand their pedagogical implications [64,65]. Without adequate support for educators, the potential of AI technologies in improving educational outcomes may remain unrealized [66,67].

3. Research Model and Hypothesis Development

The theoretical framework for this research integrates the TAM and the UTAUT, specifically addressing the challenges posed by generative AI technologies in educational settings. TAM, which focuses on constructs such as perceived ease of use and perceived usefulness, provides a foundational understanding of how individuals adopt new technologies [9]. UTAUT, on the other hand, incorporates broader constructs like performance expectancy and social influence, offering a more comprehensive approach to technology adoption [10]. These models have been selected to create a robust framework for investigating AI’s integration in education, as they provide insights into key drivers of technology adoption, particularly for complex AI tools like generative AI, which present unique usability challenges.
The concept of technicality aligns with TAM’s perceived ease of use, specifically in the context of AI technologies. Generative AI tools often require a higher level of technical understanding due to their complexity in handling advanced tasks, such as content generation, data analysis, and decision-making support [68]. Using tools like ChatGPT or Google’s Gemini demands familiarity with language processing capabilities and interpreting AI outputs [69]. Studies have demonstrated that higher technicality, or ease of use, results in increased user satisfaction and competence, which are critical in both educational and professional contexts where users must quickly grasp how to navigate and employ these tools effectively [19,70]. By focusing on technicality, this research expands TAM’s perceived ease of use to capture the specific challenges and nuances of learning AI-driven systems.
Knowledge application, derived from the TAM construct of perceived usefulness, focuses on how users apply knowledge gained through generative AI tools. This application directly impacts productivity, performance, and learning outcomes. In educational contexts, for example, students who use AI to gather, process, and analyze information can better perform academic tasks and apply what they have learned in real-world scenarios, thereby enhancing both academic and job performance [14,23]. Generative AI can help students automate routine tasks, allowing them to focus on higher-order thinking and innovation [41]. The practical utility of AI tools in educational settings highlights their role in improving task efficiency, thereby validating their perceived usefulness in both educational and professional settings [71].
Self-efficacy in AI learning builds on Bandura’s theory of self-efficacy [72] and is central to the UTAUT model’s focus on performance expectancy. Individuals’ belief in their ability to use generative AI tools like ChatGPT or Gemini effectively can significantly influence their usage patterns and overall impact [73]. Users with higher self-efficacy are more likely to explore the full functionality of these tools, apply them to complex tasks, and persist in their use despite challenges [74,75]. This results in higher academic and job performance, as self-efficacy encourages continued learning and the application of new skills [15,72]. This research emphasizes the importance of confidence building in using AI technologies, as it directly influences not only the frequency of use but also the depth of engagement with the tools.
The individual impact of AI tools—defined as positive changes in academic or job performance due to effective technology use—relates directly to the benefits of adopting advanced AI systems. Individuals who use generative AI to streamline work processes, such as automating routine tasks or improving content generation, often report enhanced productivity and performance [21,76]. This outcome aligns with UTAUT’s emphasis on performance expectancy and underscores the importance of regular AI tool usage to maximize impact.
In conclusion, this research presents a comprehensive framework that integrates constructs from both TAM and UTAUT, adapted to the unique challenges of generative AI in education. The framework not only addresses key aspects of AI adoption, such as technicality and knowledge application, but also considers psychological factors like self-efficacy, offering a holistic approach to understanding AI’s role in educational and professional contexts. Figure 1 illustrates the research model.

3.1. Technicality

Technicality refers to the ease with which users can understand and use technology [19]. A higher level of technicality in generative AI tools is associated with increased user competence and satisfaction, as users find it easier to navigate and leverage these tools for various tasks [70,77]. This competence and satisfaction are crucial in enhancing the perceived individual impact of AI [19,78]. Moreover, when users find AI tools technically accessible, they are more likely to incorporate these tools into their daily routines, increasing overall usage. Studies support the notion that the ease of use directly correlates with higher adoption rates [79,80,81]. Consequently, this research posits that technicality significantly influences both individual impact and usage among employees engaged in lifelong learning using AI.
H1a. 
Technicality positively influences individual impact.
H1b. 
Technicality positively influences usage.

3.2. Knowledge Application

Knowledge application refers to the process by which individuals utilize acquired information to perform tasks effectively [23]. When individuals apply knowledge gained through generative AI, they can significantly enhance their performance and productivity [73,82]. This enhancement stems from their ability to integrate new information into their work processes, leading to a greater individual impact [14]. Furthermore, effective knowledge application facilitates more frequent use of AI tools, as individuals recognize the tangible benefits these tools offer in improving task efficiency and outcomes [83,84]. The practical application of knowledge reinforces the adoption and habitual use of technology [85,86,87]. Consequently, this research posits that knowledge application significantly influences both individual impact and usage among adults studying and working with AI in universities and workplaces. Thus, this research proposes the following hypotheses.
H2a. 
Knowledge application positively influences individual impact.
H2b. 
Knowledge application positively influences usage.

3.3. Self-Efficacy in AI Learning

Self-efficacy in AI learning refers to an individual’s belief in their capability to effectively use and learn from AI technologies [15]. High self-efficacy enhances an individual’s confidence in applying AI tools to their tasks, thereby improving their performance and productivity [88,89]. This belief in one’s ability also fosters more frequent use of AI technologies, as confident users are more likely to explore and integrate these tools into their daily routines [90,91,92], consistent with the technology acceptance models discussed above [10]. Consequently, this research posits that self-efficacy in AI learning significantly influences both individual impact and usage. Thus, this research proposes the following hypotheses.
H3a. 
Self-efficacy in AI learning positively influences individual impact.
H3b. 
Self-efficacy in AI learning positively influences usage.

3.4. Individual Impact

Individual impact refers to the positive changes and improvements in a person’s work or academic outcomes due to the effective use of technology [21]. When individuals experience a significant impact from using AI, they are likely to see enhanced academic performance, as the tools aid in better understanding and knowledge application [66,93,94]. Similarly, a substantial individual impact from AI usage can lead to improved job performance by increasing productivity and efficiency [76,95,96]. Consequently, this research posits that individual impact significantly influences both academic performance and job performance. Thus, this research proposes the following hypotheses.
H4a. 
Individual impact positively influences academic performance.
H4b. 
Individual impact positively influences job performance.

3.5. Usage

Usage refers to the frequency and extent to which individuals employ AI tools in their daily tasks [21,24]. The regular and effective usage of AI technologies can enhance academic performance by providing students with advanced tools for research, problem-solving, and learning [17,97,98]. Additionally, the frequent usage of AI in the workplace can lead to improved job performance by streamlining processes, enhancing decision-making, and increasing overall efficiency [99,100,101]. Consequently, this research posits that usage significantly influences both academic performance and job performance. Thus, this research proposes the following hypotheses.
H5a. 
Usage positively influences academic performance.
H5b. 
Usage positively influences job performance.
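For reference, since Figure 1 is not reproduced here, the ten hypothesized structural paths (H1a–H5b) can be summarized as a simple predictor-to-outcome mapping. The sketch below merely restates the hypotheses developed above as a data structure; the construct labels follow the wording used in this section.

```python
# Hypothesized structural paths of the research model (H1a-H5b).
structural_paths = {
    "H1a": ("Technicality", "Individual impact"),
    "H1b": ("Technicality", "Usage"),
    "H2a": ("Knowledge application", "Individual impact"),
    "H2b": ("Knowledge application", "Usage"),
    "H3a": ("Self-efficacy in AI learning", "Individual impact"),
    "H3b": ("Self-efficacy in AI learning", "Usage"),
    "H4a": ("Individual impact", "Academic performance"),
    "H4b": ("Individual impact", "Job performance"),
    "H5a": ("Usage", "Academic performance"),
    "H5b": ("Usage", "Job performance"),
}
```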

4. Methodology

4.1. Instrument

The constructs in this study were derived from previously validated studies. Table A1 presents a list of constructs and items used in the study, detailing their descriptions and sources. The construct technicality refers to the ease of understanding and using generative AI, with items such as “I have no difficulty understanding how to use generative AI” sourced from [19,20]. Knowledge application involves utilizing generative AI to access and incorporate various types of information, with an example item being “Generative artificial intelligence provides direct access to various types of information or knowledge” from [21,22]. Self-efficacy in AI learning reflects confidence in using AI technologies effectively, exemplified by “I am confident in using generative AI (e.g., ChatGPT) to suit my work and studies” from [15]. Individual impact measures the positive changes in performance due to AI use, such as “Generative artificial intelligence allows me to perform my studies or tasks faster” referenced from [23]. Usage refers to the frequency of employing AI tools, with items like “I think I am a person who often uses generative AI” from [21,24,25]. Academic performance captures effectiveness in educational settings, as in “I have skillfully completed the assignments given in the course of adult education (lifelong education)” from [26]. Lastly, job performance assesses work-related outcomes, such as “Since starting lifelong education, I meet the job performance requirements at work” sourced from [27].
The questionnaire employed a seven-point Likert scale (1 = strongly disagree, 7 = strongly agree) to measure respondents’ perceptions. The questionnaire design was divided into three parts. The first part gathered information on the participants’ engagement in lifelong learning and the generative AI models they primarily use. The second part focused on users’ perceptions of the main constructs, including technicality, knowledge application, self-efficacy in AI learning, individual impact, and usage. The third part collected demographic information.
To ensure the validity of the translation, the initial questionnaire was written in English by the author. A language expert then translated it into Korean, followed by a back-translation process to English to check for consistency. This protocol ensured that the translated version accurately reflected the original content. Content validity was ensured by engaging both academic experts in information systems and research methodologies, as well as industry professionals specializing in workplace education management. These experts reviewed the questionnaire and provided feedback on its logic, order, content, and clarity, which was incorporated into the revised version. Furthermore, a pilot test was conducted with lifelong learners, who completed the questionnaire and provided feedback on unclear expressions, redundancies, and any logical issues. The author carefully integrated their suggestions to improve the overall quality and accuracy of the survey before the final data collection.

4.2. Subjects and Data Collection

This study targeted working adults in South Korea who are currently voluntarily enrolled in educational programs offered by various institutions, such as universities, including online courses and other forms of lifelong learning. The selection of this demographic is justified by the focus on individuals actively engaged in both work and education, as these participants are more likely to use generative AI tools in both academic and professional settings. While the majority of the respondents were young adults with at least a Bachelor’s degree, this reflects the common profile of individuals pursuing lifelong learning in Korea, particularly in the context of higher education and career development. Efforts were made to mitigate sample bias by targeting a wide range of participants from different industries.
The survey method employed in this study was an online survey, targeting working professionals engaged in lifelong learning who currently use generative AI in both their academic and professional activities. To ensure a representative sample and avoid selection bias, the survey was administered by Embrain, a professional survey company with a global panel of over 4 million registered participants, which includes individuals from various industries. Embrain randomly selected and sent the survey to 27,375 working adults in South Korea. By using this large and diverse panel, we mitigated potential sample selection bias and ensured broad demographic representation. Participants were invited to take part in the survey voluntarily, with the purpose and anonymity of their responses clearly communicated. The study aimed to explore how generative AI tools, such as OpenAI’s ChatGPT and Google’s Gemini, impact both academic and job performance for individuals engaged in lifelong learning. Informed consent was obtained from all participants before data collection commenced. The survey was conducted from 10 June to 12 June 2024, and distributed via email and text message to potential participants.
To ensure the eligibility of responses, participants were required to answer three key screening questions at the beginning of the survey: (1) Are you currently employed? (2) Are you enrolled in a lifelong learning program provided through universities offering online, evening, or weekend classes, including self-directed learning (such as in cyber universities)? (3) Are you using generative AI tools for both work and academic purposes? Only those who met all three criteria were allowed to proceed with the full survey. Of the total 27,375 invitations, 2115 were excluded due to ineligibility based on these screening criteria. Among the remaining respondents, 318 completed the survey, while 79 submissions were incomplete. To further ensure data integrity, responses were filtered based on completion time (with Embrain’s internal standards used to determine reasonable survey completion times) and consistency. Responses that were completed too quickly or showed uniform answers were excluded from the analysis.
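As an illustration of the screening and quality-filtering steps described above, the following sketch applies the three eligibility criteria and the completion-time and straight-lining filters in Python. The column names and the completion-time cutoff are assumptions for illustration; the actual cutoffs followed Embrain’s internal standards.

```python
import pandas as pd

# Hypothetical export of raw survey responses (column names are assumptions).
df = pd.read_csv("survey_responses.csv")

# Screening: keep respondents who answered "yes" to all three eligibility questions.
eligible = df[(df["employed"] == 1)
              & (df["enrolled_lifelong"] == 1)
              & (df["uses_gen_ai"] == 1)]

# Likert item columns follow the construct prefixes used in Table A1.
likert_cols = [c for c in eligible.columns
               if c.startswith(("TCH", "KAP", "EFC", "IDI", "USE", "APF", "JPF"))]

# Quality filters: drop incomplete submissions, implausibly fast completions,
# and straight-lined responses (the same answer to every Likert item).
complete = eligible.dropna(subset=likert_cols)
too_fast = complete["duration_sec"] < 120                  # illustrative cutoff only
straight_lined = complete[likert_cols].nunique(axis=1) == 1

clean = complete[~too_fast & ~straight_lined]
print(f"Retained {len(clean)} usable responses")
```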
Table 1 summarizes the demographic characteristics of the 300 subjects. It includes gender distribution (53.3% male, 46.7% female), age range with the majority in their 30s (37.3%), and education level, predominantly Bachelor’s degree holders (73%). It also details the generative AI models used, with ChatGPT (OpenAI) being the most popular (65%), and the years of university admission for lifelong education, with the highest percentage in 2020 or before (32.3%).

4.3. Analysis Procedure

In this study, Partial Least Squares Structural Equation Modeling (PLS-SEM) was used to examine the relationships between the constructs. PLS-SEM was chosen due to its effectiveness in handling complex models involving multiple constructs and its appropriateness for studies with small to medium sample sizes [102]. PLS-SEM is particularly effective in exploratory research and theory development [103], making it ideal for this study’s examination of AI technology adoption and its impacts on performance outcomes. The results confirm significant relationships between the constructs, supporting the proposed hypotheses. The data were processed using SmartPLS 4.0, a tool that enabled us to conduct PLS-SEM and ensure the robustness of our findings. First, a common method bias test was conducted using the variance inflation factor (VIF) values, with all VIFs below the recommended threshold of 3.3 [104], indicating no severe multicollinearity or common method bias issues. For the measurement model, internal consistency reliability, convergent validity, and discriminant validity were assessed. Cronbach’s alpha, composite reliability (CR), and average variance extracted (AVE) were used as key indicators. All constructs exceeded the recommended threshold of 0.7 for CR and 0.5 for AVE, confirming adequate reliability and validity. Factor loadings for all items were above 0.7, ensuring item reliability. For discriminant validity, the Fornell–Larcker criterion was applied, and the square root of each construct’s AVE was greater than the correlations with other constructs. In the structural model, the path coefficients were examined through bootstrapping with 5,000 resamples to test the significance of hypothesized relationships. This rigorous process ensures the model’s integrity and the reliability of the findings.
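As a worked illustration of the measurement-model indicators named above, the following sketch shows how Cronbach’s alpha, composite reliability (CR), and AVE are computed from item scores and standardized outer loadings; the loadings used in the demonstration are hypothetical, and the actual analysis was run in SmartPLS 4.0.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of raw Likert scores for one construct."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]."""
    num = loadings.sum() ** 2
    return num / (num + (1 - loadings ** 2).sum())

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean of squared standardized loadings."""
    return (loadings ** 2).mean()

# Hypothetical standardized outer loadings for a three-item construct.
lam = np.array([0.87, 0.88, 0.86])
print(round(composite_reliability(lam), 3))   # should exceed the 0.70 threshold
print(round(ave(lam), 3))                     # should exceed the 0.50 threshold
```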

5. Research Results

5.1. Common Method Bias

To address common method bias, this study assessed the VIF values for all constructs by utilizing SmartPLS 4. The highest VIF value was 2.998, which is below the recommended threshold of 3.3 [104]. This indicates that common method bias is unlikely to be a significant issue in this study.
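A simplified sketch of this collinearity check is shown below, assuming construct (latent variable) scores have been exported to a file with illustrative column names; it computes VIFs for the predictor constructs, whereas the full collinearity test in SmartPLS repeats this for every construct in turn.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical export of construct scores (column names are assumptions).
scores = pd.read_csv("construct_scores.csv")  # e.g., TCH, KAP, EFC, IDI, USE

X = sm.add_constant(scores[["TCH", "KAP", "EFC", "IDI", "USE"]])
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)  # all values should fall below the 3.3 threshold
```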

5.2. Reliability and Validity

The measurement model was evaluated for reliability and validity based on the constructs’ factor loadings, Cronbach’s alpha, CR, and AVE, as shown in Table 2. All constructs exhibited strong factor loadings, exceeding the threshold of 0.70, which indicates good indicator reliability [102]. The Cronbach’s alpha values ranged from 0.843 to 0.887, demonstrating good internal consistency [105]. CR values for all constructs were above 0.844, further confirming the reliability of the constructs. The AVE values exceeded the recommended value of 0.50, indicating that the constructs captured more variance than measurement error [106]. For instance, technicality showed a CR of 0.905 and an AVE of 0.762, while knowledge application had a CR of 0.911 and an AVE of 0.774, highlighting robust reliability and convergent validity.
Discriminant validity was assessed using the Fornell–Larcker criterion, which compares the square root of the AVE values with the correlations among constructs, as displayed in Table 3. The diagonal elements represent the square root of the AVE, and each was greater than the off-diagonal elements in their respective rows and columns, indicating good discriminant validity [106]. For example, the square root of the AVE for technicality was 0.873, which is higher than its correlations with other constructs. This demonstrates that each construct is distinct from others in the model. Overall, the measurement model showed satisfactory reliability and validity, ensuring that the constructs measured what they were intended to measure and were distinct from each other.
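The Fornell–Larcker comparison can be reproduced mechanically from the construct correlation matrix and AVE values. The sketch below uses illustrative numbers rather than the exact figures in Table 3 and checks that the square root of each construct’s AVE exceeds its correlations with all other constructs.

```python
import numpy as np
import pandas as pd

constructs = ["TCH", "KAP", "EFC", "IDI", "USE"]

# Illustrative AVE values and inter-construct correlations (not the paper's exact figures).
ave_values = pd.Series([0.762, 0.774, 0.750, 0.760, 0.780], index=constructs)
corr = pd.DataFrame(
    [[1.00, 0.55, 0.50, 0.52, 0.48],
     [0.55, 1.00, 0.58, 0.60, 0.51],
     [0.50, 0.58, 1.00, 0.57, 0.49],
     [0.52, 0.60, 0.57, 1.00, 0.53],
     [0.48, 0.51, 0.49, 0.53, 1.00]],
    index=constructs, columns=constructs)

sqrt_ave = np.sqrt(ave_values)
for c in constructs:
    largest_corr = corr[c].drop(c).abs().max()
    assert sqrt_ave[c] > largest_corr, f"Discriminant validity issue for {c}"
print("Fornell-Larcker criterion satisfied for all constructs")
```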

5.3. Hypothesis Test

The structural model was evaluated using a bootstrapping procedure with 5000 resamples to assess path significance. The results indicate significant paths among the constructs, confirming the hypothesized relationships. The model demonstrated strong predictive relevance, thus validating the proposed theoretical framework. Figure 2 and Table 4 present the results of the structural equation modeling.
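To make the bootstrapping logic concrete, the sketch below resamples respondents with replacement and re-estimates a single structural path 5000 times, deriving a bootstrap standard error, t-value, and percentile confidence interval. A standardized OLS slope on hypothetical composite scores stands in for the full PLS-SEM path estimator implemented in SmartPLS, so this is a conceptual illustration rather than the actual procedure’s output.

```python
import numpy as np

rng = np.random.default_rng(42)

def estimate_path(x: np.ndarray, y: np.ndarray) -> float:
    """Standardized regression slope of y on x (stand-in for a PLS path coefficient)."""
    x_std = (x - x.mean()) / x.std(ddof=1)
    y_std = (y - y.mean()) / y.std(ddof=1)
    return float(np.polyfit(x_std, y_std, 1)[0])

def bootstrap_path(x: np.ndarray, y: np.ndarray, n_boot: int = 5000):
    n = len(x)
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)        # resample respondents with replacement
        estimates[b] = estimate_path(x[idx], y[idx])
    beta = estimate_path(x, y)
    t_value = beta / estimates.std(ddof=1)
    ci = np.percentile(estimates, [2.5, 97.5])  # percentile confidence interval
    return beta, t_value, ci

# Hypothetical composite scores, e.g., individual impact (x) and academic performance (y).
x = rng.normal(size=300)
y = 0.4 * x + rng.normal(scale=0.9, size=300)
print(bootstrap_path(x, y))
```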

6. Discussion

The analysis results offer several insights into the relationships between technicality, knowledge application, self-efficacy in AI learning, individual impact, and usage. Each of these factors has shown significant effects on the dependent variables, and these findings are compared to previous research to highlight their contributions and implications.
The findings of this study align with prior research on the role of AI in enhancing lifelong learning by improving accessibility, personalization, and adaptability [1,41]. For instance, the positive relationship between AI usage and knowledge application in this study mirrors previous works that emphasize how adaptive learning platforms enhance educational outcomes by providing tailored learning experiences [45]. However, this study extends the literature by focusing specifically on generative AI tools, a relatively new area in educational technology, and demonstrating their potential to impact both academic and job performance. Additionally, the results highlight gaps in previous research concerning the practical challenges of AI adoption, such as technical complexity and bias, which have not been explored in depth in earlier studies.
The finding that technicality positively influences individual impact is consistent with the TAM, which suggests that the easier a technology is to use, the more likely it will positively affect user outcomes [9]. This result supports the notion that, when employees engaged in lifelong learning find AI tools easy to understand and use, they experience greater improvements in their academic and job performance. This enhancement can be attributed to reduced cognitive load and increased confidence in using AI tools, which facilitates better integration into their daily tasks. Therefore, organizations and educational institutions should prioritize user-friendly AI tools to maximize their positive impact on learners and workers. The significant relationship between technicality and usage underscores the importance of intuitive design in technology adoption. This result aligns with findings from previous studies that emphasize how user-friendly technologies are more likely to be adopted and used regularly [79,80,81]. When AI tools are perceived as easy to use, individuals are more inclined to incorporate them into their routines, thereby increasing their overall usage. This suggests that the developers of AI applications should focus on enhancing usability to encourage frequent use among users, ultimately leading to better performance outcomes.
The strong positive effect of knowledge application on individual impact highlights the critical role of applying acquired knowledge in achieving significant improvements in performance. This finding is consistent with research indicating that effective knowledge application is essential for leveraging the benefits of AI technologies [14,23]. By effectively applying knowledge gained from AI tools, individuals can enhance their academic and job performance through better problem-solving and decision-making capabilities. This result suggests that training programs should focus not only on knowledge acquisition but also on strategies for effective application to maximize the benefits of AI technologies. The positive relationship between knowledge application and usage suggests that, when individuals see the practical benefits of applying knowledge from AI tools, they are more likely to use these tools frequently. This finding supports the idea that perceived usefulness drives technology adoption and continuous use [107,108]. As users recognize the value of AI tools in improving their tasks, their engagement with these technologies increases. This implies that highlighting the practical benefits of AI applications can promote sustained usage among users.
Self-efficacy in AI learning has a significant positive effect on individual impact, indicating that individuals who believe in their ability to effectively use AI tools are more likely to experience positive outcomes. This result is in line with Bandura’s theory of self-efficacy, which posits that higher self-efficacy leads to greater effort and persistence, enhancing performance [15,72]. This suggests that building users’ confidence through training and support can significantly improve their performance outcomes when using AI technologies. The positive impact of self-efficacy in AI learning on usage indicates that confident users are more likely to regularly use AI tools. This finding supports previous research suggesting that self-efficacy is a critical determinant of technology usage [90,91,92]. As users feel more capable of using AI technologies, they are more inclined to integrate these tools into their daily activities, leading to more frequent and effective use. Therefore, efforts to enhance users’ self-efficacy can drive higher adoption and sustained usage of AI tools.
The finding that individual impact positively influences academic performance is consistent with prior research indicating that the effective use of technology can enhance learning outcomes and academic achievements [21]. When students effectively utilize AI tools, they can improve their understanding of academic materials, leading to better grades and a higher academic performance. This can be attributed to the enhanced ability to apply learned knowledge and the increased efficiency in completing academic tasks. Thus, integrating AI tools that maximize individual impact can significantly contribute to academic success. Similarly, the positive relationship between individual impact and job performance underscores the importance of technology in enhancing workplace productivity and efficiency. Previous studies have demonstrated that the effective use of AI technologies can lead to substantial improvements in job performance by streamlining tasks and providing valuable insights [76,95,96]. Employees who can effectively leverage AI tools are more likely to meet and exceed performance expectations, contributing to overall organizational success. This finding suggests that fostering individual impact through the proficient use of AI can be a key strategy for enhancing job performance.
The significant positive effect of usage on academic performance highlights the role of frequent and effective use of AI tools in improving educational outcomes. This finding aligns with research indicating that regular interaction with educational technologies enhances learning and academic performance [97]. By consistently using AI tools, students can access a wealth of resources and support that facilitate better learning experiences and outcomes. This suggests that promoting the regular use of AI in educational settings can be beneficial for academic achievement. The positive relationship between usage and job performance reinforces the idea that the frequent use of AI tools is crucial for enhancing work efficiency and productivity. Consistent with previous findings [99,100,101], this study shows that employees who regularly use AI technologies can better manage their tasks and make more informed decisions, leading to improved job performance. Encouraging the habitual use of AI tools in the workplace can thus be a valuable approach to boosting employee performance and achieving organizational goals.
Despite the potential benefits of AI in education, several limitations and challenges must be acknowledged to contextualize the study’s findings. One major issue is the potential for algorithmic bias, which can affect the quality of AI-generated recommendations and hinder the fairness of personalized learning experiences [56]. Furthermore, while AI can enhance accessibility, it may also exacerbate the digital divide, particularly in regions with limited access to technology [109]. Ethical concerns regarding data privacy and the extensive data collection required for AI systems are also critical, as learners’ personal information may be vulnerable to misuse [57,59,61]. Addressing these challenges is crucial for ensuring the responsible and equitable implementation of AI technologies in education.

7. Conclusions

7.1. Theoretical Contributions

This study offers substantial contributions to the understanding of AI technology adoption and its impact on academic and job performance. By integrating constructs from the TAM and UTAUT, this research provides a comprehensive framework that explains how technicality, knowledge application, and self-efficacy in AI learning influence individual impact and usage. The findings of this study advance the theoretical discourse by highlighting relationships that previous studies have not fully explored, thereby offering new insights for scholars in the field.
One significant contribution is the elucidation of the role of technicality in influencing both individual impact and usage. While previous studies have established the importance of perceived ease of use in technology adoption [9,10], this study provides empirical evidence that technicality not only facilitates easier adoption but also enhances performance outcomes. This dual influence underscores the critical role of user-friendly design in maximizing the benefits of AI technologies. Prior research often focused on adoption metrics without thoroughly examining how ease of use translates into improved performance. By addressing this gap, this study suggests that scholars should consider the broader implications of technicality beyond initial adoption, focusing on long-term performance impacts.
Another key contribution is the examination of knowledge application and its effects on individual impact and usage. Although the importance of knowledge application has been acknowledged in various contexts, its specific role in the context of AI learning and performance had not been comprehensively analyzed [14,23]. This study demonstrates that effective knowledge application significantly enhances both academic and job performance by improving problem-solving and decision-making capabilities. Moreover, it reveals that knowledge application drives the continuous use of AI tools, suggesting that perceived usefulness alone is not sufficient for sustained usage. This insight prompts scholars to explore how practical applications of AI knowledge can be integrated into educational and organizational strategies to foster continuous engagement with AI technologies.
Lastly, this research highlights the crucial impact of self-efficacy in AI learning on individual impact and usage. While the relationship between self-efficacy and technology adoption is well-documented [15,72], this study provides detailed evidence on how self-efficacy specifically within AI learning contexts enhances performance outcomes and usage patterns. The findings suggest that individuals with higher confidence in their AI capabilities are more likely to achieve better academic and job performance and are more inclined to regularly use AI tools. Previous studies have primarily focused on self-efficacy as a predictor of adoption, but this study extends the understanding by linking it to tangible performance improvements. Scholars are encouraged to investigate further how interventions aimed at boosting AI-related self-efficacy can be designed and implemented in both educational and professional settings to maximize the benefits of AI technologies.
In conclusion, this study contributes to the theoretical framework of AI technology adoption by integrating and expanding upon existing models. The findings emphasize the importance of technicality, knowledge application, and self-efficacy in achieving positive outcomes from AI usage. These insights provide a foundation for future research to conduct a deeper exploration into these constructs and their interactions, guiding scholars towards more holistic and impactful investigations in the field of AI technology and performance.

7.2. Practical Implications

This study offers valuable insights for practitioners in various fields, including education-related policymakers, universities, managers in workplaces, and educational AI developers. By understanding the critical factors that influence the effective use of AI technologies, practitioners can implement strategies to enhance user engagement and performance.
Firstly, education-related policymakers can leverage the findings of this study to develop guidelines and standards that promote the use of user-friendly AI tools in educational settings. Given the significant impact of technicality on both individual impact and usage, policymakers should advocate for the adoption of AI technologies that are easy to understand and use. They could mandate usability testing and certification for educational AI tools to ensure they meet specific ease-of-use criteria [110,111]. In doing so, they can help ensure that students and educators have access to AI tools that enhance learning outcomes and productivity.
Universities can also benefit from these findings by integrating AI tools into their curriculum and administrative processes. The study highlights the importance of knowledge application in improving academic performance, suggesting that universities should provide training programs that focus not only on acquiring knowledge but also on effectively applying it using AI technologies. Universities could offer workshops or courses that teach students how to use AI tools for research, problem-solving, and project management [112,113]. By fostering an environment where students can practice and apply their AI skills, universities can enhance students’ learning experiences and academic achievements.
Managers in workplaces should consider implementing strategies to boost employees’ self-efficacy in AI learning. The study shows that higher self-efficacy leads to better performance and increased usage of AI tools. Managers can achieve this by providing continuous training and support, such as personalized coaching sessions or access to online learning platforms that focus on AI competencies. A company might offer a mentorship program where experienced AI users guide new employees through the learning process [114,115]. By investing in the development of their employees’ AI skills, organizations can improve overall productivity and job performance [15,72].
Educational AI developers are also key beneficiaries of this study’s insights. The research underscores the importance of creating AI tools that are not only effective but also easy to use. Developers should prioritize user-centered design principles and involve end-users in the development process to ensure that the tools meet their needs and preferences. Developers could conduct regular user testing sessions and gather feedback to refine the usability of their AI applications [116]. Additionally, developers should focus on creating comprehensive user guides and tutorials that help users quickly understand and apply the tools in their work or studies [117]. By doing so, they can enhance user satisfaction and promote sustained engagement with their AI products.

7.3. Limitations and Further Research

This study has some limitations. First, the sample size was limited to a specific demographic, which may not represent the broader population. Future research should include larger and more diverse samples to enhance generalizability. Second, the study focused primarily on self-reported data, which can be subject to biases. Future studies could incorporate objective performance measures to validate findings. Third, this research was cross-sectional, limiting the ability to infer causality. Longitudinal studies are recommended to explore the causal relationships between variables over time. Lastly, future research should investigate additional factors, such as organizational culture and external support, to provide a more comprehensive understanding of AI adoption and its impacts.

Funding

This research received no external funding.

Institutional Review Board Statement

This study adhered to the Declaration of Helsinki guidelines. The nature of the research and the type of data collected did not involve sensitive information, as defined under Article 23 of the Personal Information Protection Act of Korea; therefore, approval from an Ethics Committee was not required.

Informed Consent Statement

Informed consent was obtained in written form from all individual participants included in the study.

Data Availability Statement

The data used in this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Table A1. List of constructs and items.
Construct | Item | Description | Source
Technicality | TCH1 | I have no difficulty understanding how to use generative AI. | [19,20]
Technicality | TCH2 | I can handle the requirements I want using generative AI. | [19,20]
Technicality | TCH3 | The use of generative artificial intelligence is easy and simple. | [19,20]
Knowledge Application | KAP1 | Generative artificial intelligence provides direct access to various types of information or knowledge. | [21,22]
Knowledge Application | KAP2 | Generative artificial intelligence incorporates different types of knowledge. | [21,22]
Knowledge Application | KAP3 | Generative artificial intelligence helps to learn academic materials within universities. | [21,22]
Self-Efficacy in AI Learning | EFC1 | I am confident in using generative AI (e.g., ChatGPT) to suit my work and studies. | [15]
Self-Efficacy in AI Learning | EFC2 | I can use generative artificial intelligence to develop the competencies required for my studies or job. | [15]
Self-Efficacy in AI Learning | EFC3 | I can acquire important information and technology through generative artificial intelligence. | [15]
Individual Impact | IDI1 | Generative artificial intelligence allows me to perform my studies or tasks faster. | [23]
Individual Impact | IDI2 | Generative artificial intelligence increases academic/work productivity. | [23]
Individual Impact | IDI3 | Generative artificial intelligence makes it easier to perform studies or tasks. | [23]
Usage | USE1 | I think I am a person who often uses generative AI. | [21,24,25]
Usage | USE2 | I frequently use generative AI during work or study. | [21,24,25]
Usage | USE3 | I use generative AI every day. | [21,24,25]
Academic Performance | APF1 | I have skillfully completed the assignments given in the course of adult education (lifelong education). | [26]
Academic Performance | APF2 | I learned how to perform tasks efficiently in adult education (lifelong education). | [26]
Academic Performance | APF3 | My academic achievement in the adult education (lifelong education) course met expectations. | [26]
Job Performance | JPF1 | Since starting lifelong education, I meet the job performance requirements at work. | [27]
Job Performance | JPF2 | Since the start of lifelong education, I am fulfilling the responsibilities set out in the job description. | [27]
Job Performance | JPF3 | Since starting lifelong education, I have done well in the tasks included in the performance evaluation criteria. | [27]

References

  1. Wong, W.K.O. The sudden disruptive rise of generative artificial intelligence? An evaluation of their impact on higher education and the global workplace. J. Open Innov. Technol. Mark. Complex. 2024, 10, 100278. [Google Scholar] [CrossRef]
  2. Chen, A.; Liu, L.; Zhu, T. Advancing the democratization of generative artificial intelligence in healthcare: A narrative review. J. Hosp. Manag. Health Policy 2024, 8, 12. [Google Scholar] [CrossRef]
  3. Xu, Q.; Zhou, G.; Zhang, C.; Chang, F.; Cao, Y.; Zhao, D. Generative AI and DT integrated intelligent process planning: A conceptual framework. Int. J. Adv. Manuf. Technol. 2024, 133, 2461–2485. [Google Scholar] [CrossRef]
  4. Bandi, A.; Adapa, P.V.S.R.; Kuchi, Y.E.V.P.K. The Power of Generative AI: A Review of Requirements, Models, Input–Output Formats, Evaluation Metrics, and Challenges. Future Internet 2023, 15, 260. [Google Scholar] [CrossRef]
  5. Zheng, Y.; Wang, L.; Feng, B.; Zhao, A.; Wu, Y. Innovating Healthcare: The Role of ChatGPT in Streamlining Hospital Workflow in the Future. Ann. Biomed. Eng. 2024, 52, 750–753. [Google Scholar] [CrossRef]
  6. Sänger, M.; De Mecquenem, N.; Lewińska, K.E.; Bountris, V.; Lehmann, F.; Leser, U.; Kosch, T. A qualitative assessment of using ChatGPT as large language model for scientific workflow development. GigaScience 2024, 13, giae030. [Google Scholar] [CrossRef]
  7. Fuchs, K. Exploring the opportunities and challenges of NLP models in higher education: Is Chat GPT a blessing or a curse? Front. Educ. 2023, 8, 1166682. [Google Scholar] [CrossRef]
  8. Dalgıç, A.; Yaşar, E.; Demir, M. ChatGPT and learning outcomes in tourism education: The role of digital literacy and individualized learning. J. Hosp. Leis. Sport Tour. Educ. 2024, 34, 100481. [Google Scholar] [CrossRef]
  9. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  10. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  11. Nugroho, M.A.; Dewanti, P.W.; Novitasari, B.T. The Impact of Perceived Usefulness and Perceived Ease of Use on Student's Performance in Mandatory E-Learning Use. In Proceedings of the 2018 International Conference on Applied Information Technology and Innovation (ICAITI), Padang, Indonesia, 3–5 September 2018; pp. 26–30. [Google Scholar]
  12. Chirchir, L.; Aruasa, W.; Chebon, S. Perceived Usefulness and Ease of Use as Mediators of the Effect of Health Information Systems on User Performance. Eur. J. Comput. Sci. Inf. Technol. 2019, 7, 22–37. [Google Scholar]
  13. Chen, L.; Aklikokou, A.K. Determinants of E-government Adoption: Testing the Mediating Effects of Perceived Usefulness and Perceived Ease of Use. Int. J. Public Adm. 2020, 43, 850–865. [Google Scholar] [CrossRef]
  14. Jarrahi, M.H.; Askay, D.; Eshraghi, A.; Smith, P. Artificial intelligence and knowledge management: A partnership between human and AI. Bus. Horiz. 2023, 66, 87–99. [Google Scholar] [CrossRef]
  15. Kim, B.-J.; Kim, M.-J. The influence of work overload on cybersecurity behavior: A moderated mediation model of psychological contract breach, burnout, and self-efficacy in AI learning such as ChatGPT. Technol. Soc. 2024, 77, 102543. [Google Scholar] [CrossRef]
  16. Damioli, G.; Van Roy, V.; Vertesy, D. The impact of artificial intelligence on labor productivity. Eurasian Bus. Rev. 2021, 11, 1–25. [Google Scholar] [CrossRef]
  17. Ouyang, F.; Wu, M.; Zheng, L.; Zhang, L.; Jiao, P. Integration of artificial intelligence performance prediction and learning analytics to improve student learning in online engineering course. Int. J. Educ. Technol. High. Educ. 2023, 20, 4. [Google Scholar] [CrossRef]
  18. Zhai, Y.; Zhang, L.; Yu, M. AI in Human Resource Management: Literature Review and Research Implications. J. Knowl. Econ. 2024. [Google Scholar] [CrossRef]
  19. Sohn, K.; Kwon, O. Technology acceptance theories and factors influencing artificial Intelligence-based intelligent products. Telemat. Inform. 2020, 47, 101324. [Google Scholar] [CrossRef]
  20. Liao, Y.-K.; Wu, W.-Y.; Le, T.Q.; Phung, T.T.T. The Integration of the Technology Acceptance Model and Value-Based Adoption Model to Study the Adoption of E-Learning: The Moderating Role of e-WOM. Sustainability 2022, 14, 815. [Google Scholar] [CrossRef]
  21. Aparicio, M.; Bacao, F.; Oliveira, T. Grit in the path to e-learning success. Comput. Hum. Behav. 2017, 66, 388–399. [Google Scholar] [CrossRef]
  22. Urbach, N.; Smolnik, S.; Riempp, G. An empirical investigation of employee portal success. J. Strateg. Inf. Syst. 2010, 19, 184–206. [Google Scholar] [CrossRef]
  23. Al-Sharafi, M.A.; Al-Emran, M.; Iranmanesh, M.; Al-Qaysi, N.; Iahad, N.A.; Arpaci, I. Understanding the impact of knowledge management factors on the sustainable use of AI-based chatbots for educational purposes using a hybrid SEM-ANN approach. Interact. Learn. Environ. 2023, 31, 7491–7510. [Google Scholar] [CrossRef]
  24. McLean, G.; Osei-Frimpong, K. Hey Alexa… examine the variables influencing the use of artificial intelligent in-home voice assistants. Comput. Hum. Behav. 2019, 99, 28–37. [Google Scholar] [CrossRef]
  25. DeLone, W.H.; McLean, E.R. The DeLone and McLean model of information systems success: A ten-year update. J. Manag. Inf. Syst. 2003, 19, 9–30. [Google Scholar]
  26. Maqableh, M.; Jaradat, M.; Azzam, A.a. Exploring the determinants of students’ academic performance at university level: The mediating role of internet usage continuance intention. Educ. Inf. Technol. 2021, 26, 4003–4025. [Google Scholar] [CrossRef]
  27. Susanto, P.; Hoque, M.E.; Jannat, T.; Emely, B.; Zona, M.A.; Islam, M.A. Work-Life Balance, Job Satisfaction, and Job Performance of SMEs Employees: The Moderating Role of Family-Supportive Supervisor Behaviors. Front. Psychol. 2022, 13, 906876. [Google Scholar] [CrossRef]
  28. Tuijnman, A.; Boström, A.-K. Changing notions of lifelong education and lifelong learning. Int. Rev. Educ. 2002, 48, 93–110. [Google Scholar] [CrossRef]
  29. Thwe, W.P.; Kálmán, A. Lifelong Learning in the Educational Setting: A Systematic Literature Review. Asia-Pac. Educ. Res. 2024, 33, 407–417. [Google Scholar] [CrossRef]
  30. Jarvis, P. Adult Education and Lifelong Learning: Theory and Practice; Routledge: London, UK, 2004. [Google Scholar]
  31. Cervero, R.M. Professional practice, learning, and continuing education: An integrated perspective. In From Adult Education to the Learning Society; Routledge: London, UK, 2013; pp. 170–184. [Google Scholar]
  32. Ceschi, A.; Perini, M.; Scalco, A.; Pentassuglia, M.; Righetti, E.; Caputo, B. Foster employability and fight social exclusion through the development of lifelong learning (LLL) key-competences: Reviewing twenty years of LLL policies. Eur. J. Train. Dev. 2021, 45, 475–511. [Google Scholar] [CrossRef]
  33. Sala, A.; Punie, Y.; Garkov, V. LifeComp: The European Framework for Personal, Social and Learning to Learn Key Competence; European Commission: Brussels, Belgium, 2020. [Google Scholar]
  34. Elfert, M. Lifelong learning in Sustainable Development Goal 4: What does it mean for UNESCO’s rights-based approach to adult learning and education? Int. Rev. Educ. 2019, 65, 537–556. [Google Scholar] [CrossRef]
  35. Trujillo-Torres, J.M.; Aznar-Díaz, I.; Cáceres-Reche, M.P.; Mentado-Labao, T.; Barrera-Corominas, A. Intergenerational Learning and Its Impact on the Improvement of Educational Processes. Educ. Sci. 2023, 13, 1019. [Google Scholar] [CrossRef]
  36. Pstross, M.; Corrigan, T.; Knopf, R.C.; Sung, H.; Talmage, C.A.; Conroy, C.; Fowley, C. The Benefits of Intergenerational Learning in Higher Education: Lessons Learned from Two Age Friendly University Programs. Innov. High. Educ. 2017, 42, 157–171. [Google Scholar] [CrossRef]
  37. Newman, S.; Hatton-Yeo, A. Intergenerational learning and the contributions of older people. Ageing Horiz. 2008, 8, 31–39. [Google Scholar]
  38. Merriam, S.B.; Bierema, L.L. Adult Learning: Linking Theory and Practice; John Wiley & Sons: Hoboken, NJ, USA, 2013. [Google Scholar]
  39. Yuan, L.; Powell, S. MOOCs and Open Education: Implications for Higher Education; Manchester Metropolitan University: Manchester, UK, 2013. [Google Scholar]
  40. Walter, Y. Embracing the future of Artificial Intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. Int. J. Educ. Technol. High. Educ. 2024, 21, 15. [Google Scholar] [CrossRef]
  41. Ruiz-Rojas, L.I.; Salvador-Ullauri, L.; Acosta-Vargas, P. Collaborative Working and Critical Thinking: Adoption of Generative Artificial Intelligence Tools in Higher Education. Sustainability 2024, 16, 5367. [Google Scholar] [CrossRef]
  42. Kaswan, K.S.; Dhatterwal, J.S.; Ojha, R.P. AI in personalized learning. In Advances in Technological Innovations in Higher Education; CRC Press: Boca Raton, FL, USA, 2024; pp. 103–117. [Google Scholar]
  43. Bayly-Castaneda, K.; Ramirez-Montoya, M.-S.; Morita-Alexander, A. Crafting personalized learning paths with AI for lifelong learning: A systematic literature review. Front. Educ. 2024, 9, 1424386. [Google Scholar] [CrossRef]
  44. Reddy, S.G.; Sadhu, A.K.R.; Muravev, M.; Brazhenko, D.; Parfenov, M. Harnessing the Power of Generative Artificial Intelligence for Dynamic Content Personalization in Customer Relationship Management Systems: A Data-Driven Framework for Optimizing Customer Engagement and Experience. J. AI-Assist. Sci. Discov. 2023, 3, 379–395. [Google Scholar]
  45. Pratama, M.P.; Sampelolo, R.; Lura, H. Revolutionizing education: Harnessing the power of artificial intelligence for personalized learning. Klasikal: J. Educ. Lang. Teach. Sci. 2023, 5, 350–357. [Google Scholar] [CrossRef]
  46. Er-Rafyg, A.; Zankadi, H.; Idrissi, A. AI in Adaptive Learning: Challenges and Opportunities. In Modern Artificial Intelligence and Data Science 2024: Tools, Techniques and Systems; Idrissi, A., Ed.; Springer Nature Switzerland: Cham, Switzerland, 2024; pp. 329–342. [Google Scholar]
  47. Maghsudi, S.; Lan, A.; Xu, J.; van Der Schaar, M. Personalized education in the artificial intelligence era: What to expect next. IEEE Signal Process. Mag. 2021, 38, 37–50. [Google Scholar] [CrossRef]
  48. Gligorea, I.; Cioca, M.; Oancea, R.; Gorski, A.-T.; Gorski, H.; Tudorache, P. Adaptive Learning Using Artificial Intelligence in e-Learning: A Literature Review. Educ. Sci. 2023, 13, 1216. [Google Scholar] [CrossRef]
  49. Cheniti Belcadhi, L. Personalized feedback for self assessment in lifelong learning environments based on semantic web. Comput. Hum. Behav. 2016, 55, 562–570. [Google Scholar] [CrossRef]
  50. Huang, X.; Zou, D.; Cheng, G.; Chen, X.; Xie, H. Trends, research issues and applications of artificial intelligence in language education. Educ. Technol. Soc. 2023, 26, 112–131. [Google Scholar]
  51. Lin, C.-C.; Huang, A.Y.Q.; Lu, O.H.T. Artificial intelligence in intelligent tutoring systems toward sustainable education: A systematic review. Smart Learn. Environ. 2023, 10, 41. [Google Scholar] [CrossRef]
  52. Wang, H.; Tlili, A.; Huang, R.; Cai, Z.; Li, M.; Cheng, Z.; Yang, D.; Li, M.; Zhu, X.; Fei, C. Examining the applications of intelligent tutoring systems in real educational contexts: A systematic literature review from the social experiment perspective. Educ. Inf. Technol. 2023, 28, 9113–9148. [Google Scholar] [CrossRef]
  53. Rawas, S. ChatGPT: Empowering lifelong learning in the digital age of higher education. Educ. Inf. Technol. 2024, 29, 6895–6908. [Google Scholar] [CrossRef]
  54. Guo, L.; Wang, D.; Gu, F.; Li, Y.; Wang, Y.; Zhou, R. Evolution and trends in intelligent tutoring systems research: A multidisciplinary and scientometric view. Asia Pac. Educ. Rev. 2021, 22, 441–461. [Google Scholar] [CrossRef]
  55. Kochmar, E.; Vu, D.D.; Belfer, R.; Gupta, V.; Serban, I.V.; Pineau, J. Automated Personalized Feedback Improves Learning Gains in An Intelligent Tutoring System; Springer: Cham, Switzerland, 2020; pp. 140–146. [Google Scholar]
  56. Baker, R.S.; Hawn, A. Algorithmic Bias in Education. Int. J. Artif. Intell. Educ. 2022, 32, 1052–1092. [Google Scholar] [CrossRef]
  57. Dieterle, E.; Dede, C.; Walker, M. The cyclical ethical effects of using artificial intelligence in education. AI & Society 2024, 39, 633–643. [Google Scholar] [CrossRef]
  58. Idowu, J.A. Debiasing Education Algorithms. Int. J. Artif. Intell. Educ. 2024. [Google Scholar] [CrossRef]
  59. Zhai, C.; Wibowo, S.; Li, L.D. The effects of over-reliance on AI dialogue systems on students' cognitive abilities: A systematic review. Smart Learn. Environ. 2024, 11, 28. [Google Scholar] [CrossRef]
  60. Huallpa, J.J. Exploring the ethical considerations of using Chat GPT in university education. Period. Eng. Nat. Sci. 2023, 11, 105–115. [Google Scholar]
  61. Wu, X.; Duan, R.; Ni, J. Unveiling security, privacy, and ethical concerns of ChatGPT. J. Inf. Intell. 2024, 2, 102–115. [Google Scholar] [CrossRef]
  62. Buijsman, S. Transparency for AI systems: A value-based approach. Ethics Inf. Technol. 2024, 26, 34. [Google Scholar] [CrossRef]
  63. Davenport, T.H.; Ronanki, R. Artificial intelligence for the real world. Harv. Bus. Rev. 2018, 96, 108–116. [Google Scholar]
  64. Luckin, R.; Cukurova, M.; Kent, C.; Du Boulay, B. Empowering educators to be AI-ready. Comput. Educ. Artif. Intell. 2022, 3, 100076. [Google Scholar] [CrossRef]
  65. Lameras, P.; Arnab, S. Power to the teachers: An exploratory review on artificial intelligence in education. Information 2021, 13, 14. [Google Scholar] [CrossRef]
  66. Almasri, F. Exploring the Impact of Artificial Intelligence in Teaching and Learning of Science: A Systematic Review of Empirical Research. Res. Sci. Educ. 2024, 54, 977–997. [Google Scholar] [CrossRef]
  67. Lee, D.; Kim, H.-h.; Sung, S.-H. Development research on an AI English learning support system to facilitate learner-generated-context-based learning. Educ. Technol. Res. Dev. 2023, 71, 629–666. [Google Scholar] [CrossRef]
  68. Sengar, S.S.; Hasan, A.B.; Kumar, S.; Carroll, F. Generative artificial intelligence: A systematic review and applications. Multimed. Tools Appl. 2024. [Google Scholar] [CrossRef]
  69. Bansal, G.; Chamola, V.; Hussain, A.; Guizani, M.; Niyato, D. Transforming Conversations with AI—A Comprehensive Study of ChatGPT. Cogn. Comput. 2024, 16, 2487–2510. [Google Scholar] [CrossRef]
  70. Kashive, N.; Powale, L.; Kashive, K. Understanding user perception toward artificial intelligence (AI) enabled e-learning. Int. J. Inf. Learn. Technol. 2021, 38, 1–19. [Google Scholar] [CrossRef]
  71. Saúde, S.; Barros, J.P.; Almeida, I. Impacts of Generative Artificial Intelligence in Higher Education: Research Trends and Students’ Perceptions. Soc. Sci. 2024, 13, 410. [Google Scholar] [CrossRef]
  72. Bandura, A.; Freeman, W.H.; Lightsey, R. Self-Efficacy: The Exercise of Control. J. Cogn. Psychother. 1999, 13, 158–166. [Google Scholar] [CrossRef]
  73. Al Naqbi, H.; Bahroun, Z.; Ahmed, V. Enhancing Work Productivity through Generative Artificial Intelligence: A Comprehensive Literature Review. Sustainability 2024, 16, 1166. [Google Scholar] [CrossRef]
  74. Rodríguez-Ruiz, J.; Marín-López, I.; Espejo-Siles, R. Is artificial intelligence use related to self-control, self-esteem and self-efficacy among university students? Educ. Inf. Technol. 2024. [Google Scholar] [CrossRef]
  75. Chou, C.-M.; Shen, T.-C.; Shen, T.-C.; Shen, C.-H. Influencing factors on students’ learning effectiveness of AI-based technology application: Mediation variable of the human-computer interaction experience. Educ. Inf. Technol. 2022, 27, 8723–8750. [Google Scholar] [CrossRef]
  76. Ooi, K.-B.; Tan, G.W.-H.; Al-Emran, M.; Al-Sharafi, M.A.; Capatina, A.; Chakraborty, A.; Dwivedi, Y.K.; Huang, T.-L.; Kar, A.K.; Lee, V.-H. The potential of Generative Artificial Intelligence across disciplines: Perspectives and future directions. J. Comput. Inf. Syst. 2023, 1–32. [Google Scholar] [CrossRef]
  77. Gupta, R.; Nair, K.; Mishra, M.; Ibrahim, B.; Bhardwaj, S. Adoption and impacts of generative artificial intelligence: Theoretical underpinnings and research agenda. Int. J. Inf. Manag. Data Insights 2024, 4, 100232. [Google Scholar] [CrossRef]
  78. Kim, J.; Merrill, K., Jr.; Collins, C. AI as a friend or assistant: The mediating role of perceived usefulness in social AI vs. functional AI. Telemat. Inform. 2021, 64, 101694. [Google Scholar] [CrossRef]
  79. Wang, Y.; Liu, C.; Tu, Y.-F. Factors Affecting the Adoption of AI-Based Applications in Higher Education: An Analysis of Teachers' Perspectives Using Structural Equation Modeling. Educ. Technol. Soc. 2021, 24, 116–129. [Google Scholar]
  80. Na, S.; Heo, S.; Choi, W.; Kim, C.; Whang, S.W. Artificial Intelligence (AI)-Based Technology Adoption in the Construction Industry: A Cross National Perspective Using the Technology Acceptance Model. Buildings 2023, 13, 2518. [Google Scholar] [CrossRef]
  81. Tiwari, C.K.; Bhat, M.A.; Khan, S.T.; Subramaniam, R.; Khan, M.A.I. What drives students toward ChatGPT? An investigation of the factors influencing adoption and usage of ChatGPT. Interact. Technol. Smart Educ. 2024, 21, 333–355. [Google Scholar] [CrossRef]
  82. Bhimavarapu, V. The Impact of Generative AI on Human Productivity in Creative Writing. J. Stud. Res. 2023, 12, 1–9. [Google Scholar] [CrossRef]
  83. Ng, D.T.K.; Leung, J.K.L.; Su, J.; Ng, R.C.W.; Chu, S.K.W. Teachers’ AI digital competencies and twenty-first century skills in the post-pandemic world. Educ. Technol. Res. Dev. 2023, 71, 137–161. [Google Scholar] [CrossRef]
  84. Bhat, R. The Impact of Technology Integration on Student Learning Outcomes: A Comparative Study. Int. J. Soc. Sci. Educ. Econ. Agric. Res. Technol. (IJSET) 2023, 2, 592–596. [Google Scholar] [CrossRef]
  85. Tamilmani, K.; Rana, N.P.; Dwivedi, Y.K. Use of ‘Habit’ Is Not a Habit in Understanding Individual Technology Adoption: A Review of UTAUT2 Based Empirical Studies; Springer: Cham, Switzerland, 2019; pp. 277–294. [Google Scholar]
  86. Kumari, A.; Singh, A.K. Technology Adoption for Facilitating Knowledge Management Practices in Firms; Springer: Singapore, 2023; pp. 117–129. [Google Scholar]
  87. Granić, A. Educational Technology Adoption: A systematic review. Educ. Inf. Technol. 2022, 27, 9725–9744. [Google Scholar] [CrossRef]
  88. Lin, S.; Döngül, E.S.; Uygun, S.V.; Öztürk, M.B.; Huy, D.T.N.; Tuan, P.V. Exploring the Relationship between Abusive Management, Self-Efficacy and Organizational Performance in the Context of Human–Machine Interaction Technology and Artificial Intelligence with the Effect of Ergonomics. Sustainability 2022, 14, 1949. [Google Scholar] [CrossRef]
  89. Shaikh, F.; Afshan, G.; Anwar, R.S.; Abbas, Z.; Chana, K.A. Analyzing the impact of artificial intelligence on employee productivity: The mediating effect of knowledge sharing and well-being. Asia Pac. J. Hum. Resour. 2023, 61, 794–820. [Google Scholar] [CrossRef]
  90. Chang, P.-C.; Zhang, W.; Cai, Q.; Guo, H. Does AI-Driven Technostress Promote or Hinder Employees’ Artificial Intelligence Adoption Intention? A Moderated Mediation Model of Affective Reactions and Technical Self-Efficacy. Psychol. Res. Behav. Manag. 2024, 17, 413–427. [Google Scholar] [CrossRef]
  91. Hong, J.-W. I Was Born to Love AI: The Influence of Social Status on AI Self-Efficacy and Intentions to Use AI. Int. J. Commun. 2022, 16, 172–191. [Google Scholar]
  92. Balakrishnan, J.; Abed, S.S.; Jones, P. The role of meta-UTAUT factors, perceived anthropomorphism, perceived intelligence, and social self-efficacy in chatbot-based services? Technol. Forecast. Soc. Chang. 2022, 180, 121692. [Google Scholar] [CrossRef]
  93. Chaudhry, M.A.; Kazim, E. Artificial Intelligence in Education (AIEd): A high-level academic and industry note 2021. AI Ethics 2022, 2, 157–165. [Google Scholar] [CrossRef] [PubMed]
  94. Noy, S.; Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 2023, 381, 187–192. [Google Scholar] [CrossRef] [PubMed]
  95. Lazaroiu, G.; Rogalska, E. How generative artificial intelligence technologies shape partial job displacement and labor productivity growth. Oeconomia Copernic. 2023, 14, 703–706. [Google Scholar] [CrossRef]
  96. Soni, V. Impact of Generative AI on Small and Medium Enterprises' Revenue Growth: The Moderating Role of Human, Technological, and Market Factors. Rev. Contemp. Bus. Anal. 2023, 6, 133–153. [Google Scholar]
  97. Crompton, H.; Burke, D. Artificial intelligence in higher education: The state of the field. Int. J. Educ. Technol. High. Educ. 2023, 20, 22. [Google Scholar] [CrossRef]
  98. Fazil, A.W.; Hakimi, M.; Shahidzay, A.; Hasas, A. Exploring the Broad Impact of AI Technologies on Student Engagement and Academic Performance in University Settings in Afghanistan. RIGGS J. Artif. Intell. Digit. Bus. 2024, 2, 56–63. [Google Scholar] [CrossRef]
  99. Deranty, J.-P.; Corbin, T. Artificial intelligence and work: A critical review of recent research from the social sciences. AI & Society 2024, 39, 675–691. [Google Scholar] [CrossRef]
  100. Chen, D.; Esperança, J.P.; Wang, S. The Impact of Artificial Intelligence on Firm Performance: An Application of the Resource-Based View to e-Commerce Firms. Front. Psychol. 2022, 13, 884830. [Google Scholar] [CrossRef]
  101. Wijayati, D.T.; Rahman, Z.; Fahrullah, A.r.; Rahman, M.F.W.; Arifah, I.D.C.; Kautsar, A. A study of artificial intelligence on employee performance and work engagement: The moderating role of change leadership. Int. J. Manpow. 2022, 43, 486–512. [Google Scholar] [CrossRef]
  102. Hair, J.F.; Ringle, C.M.; Sarstedt, M. PLS-SEM: Indeed a silver bullet. J. Mark. Theory Pract. 2011, 19, 139–152. [Google Scholar] [CrossRef]
  103. Henseler, J.; Ringle, C.M.; Sinkovics, R.R. The use of partial least squares path modeling in international marketing. In New Challenges to International Marketing; Emerald Group Publishing Limited: Bingley, UK, 2009; Volume 20, pp. 277–319. [Google Scholar]
  104. Kock, N. Common method bias in PLS-SEM: A full collinearity assessment approach. Int. J. E-Collab. (IJEC) 2015, 11, 1–10. [Google Scholar] [CrossRef]
  105. Nunnally, J.C. Psychometric Theory, 2nd ed.; Mcgraw Hill Book Company: New York, NY, USA, 1978. [Google Scholar]
  106. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement Error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  107. Damerji, H.; Salimi, A. Mediating effect of use perceptions on technology readiness and adoption of artificial intelligence in accounting. Account. Educ. 2021, 30, 107–130. [Google Scholar] [CrossRef]
  108. Pillai, R.; Sivathanu, B. Adoption of AI-based chatbots for hospitality and tourism. Int. J. Contemp. Hosp. Manag. 2020, 32, 3199–3226. [Google Scholar] [CrossRef]
  109. Bentley, S.V.; Naughtin, C.K.; McGrath, M.J.; Irons, J.L.; Cooper, P.S. The digital divide in action: How experiences of digital technology shape future relationships with artificial intelligence. AI Ethics 2024. [Google Scholar] [CrossRef]
  110. Bubaš, G.; Čižmešija, A.; Kovačić, A. Development of an Assessment Scale for Measurement of Usability and User Experience Characteristics of Bing Chat Conversational AI. Future Internet 2024, 16, 4. [Google Scholar] [CrossRef]
  111. Lu, J.; Schmidt, M.; Lee, M.; Huang, R. Usability research in educational technology: A state-of-the-art systematic review. Educ. Technol. Res. Dev. 2022, 70, 1951–1992. [Google Scholar] [CrossRef]
  112. Martín Núñez, J.L.; Diaz Lantada, A. Artificial Intelligence Aided Engineering Education: State of the Art, Potentials and Challenges. Int. J. Eng. Educ. 2020, 36, 1740–1751. [Google Scholar]
  113. Chen, X.; Xie, H.; Hwang, G.-J. A multi-perspective study on Artificial Intelligence in Education: Grants, conferences, journals, software tools, institutions, and researchers. Comput. Educ. Artif. Intell. 2020, 1, 100005. [Google Scholar] [CrossRef]
  114. Malik, A.; De Silva, M.T.T.; Budhwar, P.; Srikanth, N.R. Elevating talents' experience through innovative artificial intelligence-mediated knowledge sharing: Evidence from an IT-multinational enterprise. J. Int. Manag. 2021, 27, 100871. [Google Scholar] [CrossRef]
  115. Maity, S. Identifying opportunities for artificial intelligence in the evolution of training and development practices. J. Manag. Dev. 2019, 38, 651–663. [Google Scholar] [CrossRef]
  116. Teles, S.; Paúl, C.; Lima, P.; Chilro, R.; Ferreira, A. User feedback and usability testing of an online training and support program for dementia carers. Internet Interv. 2021, 25, 100412. [Google Scholar] [CrossRef] [PubMed]
  117. Javaid, M.; Haleem, A.; Singh, R.P.; Khan, S.; Khan, I.H. Unlocking the opportunities through ChatGPT Tool towards ameliorating the education system. BenchCouncil Trans. Benchmarks Stand. Eval. 2023, 3, 100115. [Google Scholar] [CrossRef]
Figure 1. Research model.
Figure 2. SEM results.
Table 1. Demographic features of the respondents.

| Demographics | Item | Frequency (N = 300) | Percentage |
|---|---|---|---|
| Gender | Male | 160 | 53.3% |
| | Female | 140 | 46.7% |
| Age | 20s | 51 | 17.0% |
| | 30s | 112 | 37.3% |
| | 40s | 85 | 28.3% |
| | 50s | 43 | 14.3% |
| | 60s | 9 | 3.0% |
| Education | High school graduate or below | 18 | 6.0% |
| | Bachelor’s degree | 219 | 73.0% |
| | Master’s degree | 55 | 18.3% |
| | Doctoral degree | 8 | 2.7% |
| Generative AI Model | ChatGPT (OpenAI) | 195 | 65.0% |
| | Bing (MS) | 13 | 4.3% |
| | Bard or Gemini (Google) | 33 | 11.0% |
| | Co-pilot (MS) | 14 | 4.7% |
| | Firefly (Adobe) | 11 | 3.7% |
| | Duet (Google) | 34 | 11.3% |
| Lifelong Education University Admission Year | 2020 or before | 97 | 32.3% |
| | 2021 | 27 | 9.0% |
| | 2022 | 50 | 16.7% |
| | 2023 | 90 | 30.0% |
| | 2024 | 36 | 12.0% |
Table 2. Test results of reliability and validity.

| Construct | Item | Mean | St. Dev. | Factor Loading | Cronbach’s Alpha | CR (rho_a) | CR (rho_c) | AVE |
|---|---|---|---|---|---|---|---|---|
| Technicality | TCH1 | 4.827 | 1.237 | 0.879 | 0.843 | 0.844 | 0.905 | 0.762 |
| | TCH2 | 4.837 | 1.182 | 0.869 | | | | |
| | TCH3 | 4.887 | 1.178 | 0.870 | | | | |
| Knowledge Application | KAP1 | 5.083 | 1.159 | 0.889 | 0.854 | 0.854 | 0.911 | 0.774 |
| | KAP2 | 4.953 | 1.191 | 0.889 | | | | |
| | KAP3 | 4.953 | 1.139 | 0.861 | | | | |
| Self-efficacy in AI Learning | EFC1 | 4.610 | 1.213 | 0.889 | 0.862 | 0.864 | 0.916 | 0.784 |
| | EFC2 | 4.737 | 1.192 | 0.893 | | | | |
| | EFC3 | 4.950 | 1.161 | 0.874 | | | | |
| Individual Impact | IDI1 | 5.073 | 1.217 | 0.875 | 0.851 | 0.853 | 0.910 | 0.771 |
| | IDI2 | 5.110 | 1.160 | 0.897 | | | | |
| | IDI3 | 5.137 | 1.148 | 0.862 | | | | |
| Usage | USE1 | 4.643 | 1.338 | 0.919 | 0.887 | 0.903 | 0.930 | 0.815 |
| | USE2 | 4.653 | 1.324 | 0.927 | | | | |
| | USE3 | 4.517 | 1.434 | 0.861 | | | | |
| Academic Performance | APF1 | 4.930 | 1.110 | 0.879 | 0.861 | 0.862 | 0.915 | 0.783 |
| | APF2 | 4.903 | 1.143 | 0.894 | | | | |
| | APF3 | 4.910 | 1.129 | 0.881 | | | | |
| Job Performance | JPF1 | 4.793 | 1.148 | 0.871 | 0.867 | 0.868 | 0.919 | 0.790 |
| | JPF2 | 4.877 | 1.132 | 0.913 | | | | |
| | JPF3 | 4.863 | 1.182 | 0.882 | | | | |
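
The CR (rho_c) and AVE figures in Table 2 follow the standard composite-reliability and average-variance-extracted formulas, AVE = Σλ²/k and ρ_c = (Σλ)² / [(Σλ)² + Σ(1 − λ²)], so they can be reproduced from the reported factor loadings alone. The following minimal numpy sketch, using the Technicality loadings as an example, is an illustrative check rather than part of the original analysis (Cronbach’s alpha additionally requires the item covariances, which are not reported here).

```python
import numpy as np

# Illustrative check (not part of the original analysis): composite reliability
# (rho_c) and AVE recomputed from the standardized factor loadings alone.
loadings = np.array([0.879, 0.869, 0.870])  # Technicality items TCH1-TCH3 (Table 2)

ave = np.mean(loadings ** 2)  # average variance extracted
rho_c = loadings.sum() ** 2 / (loadings.sum() ** 2 + np.sum(1 - loadings ** 2))

print(f"AVE   = {ave:.3f}")    # ~0.762, matching Table 2
print(f"rho_c = {rho_c:.3f}")  # ~0.905, matching Table 2
```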
Table 3. Discriminant validity.

| Construct | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| 1. Technicality | 0.873 | | | | | | |
| 2. Knowledge Application | 0.724 | 0.880 | | | | | |
| 3. Self-efficacy in AI Learning | 0.769 | 0.681 | 0.885 | | | | |
| 4. Individual Impact | 0.745 | 0.783 | 0.777 | 0.878 | | | |
| 5. Usage | 0.712 | 0.629 | 0.667 | 0.641 | 0.903 | | |
| 6. Academic Performance | 0.525 | 0.530 | 0.512 | 0.533 | 0.455 | 0.885 | |
| 7. Job Performance | 0.528 | 0.498 | 0.477 | 0.480 | 0.477 | 0.714 | 0.889 |

Note: Diagonal elements are the square root of AVE.
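
Table 3 applies the Fornell–Larcker criterion: discriminant validity holds when each construct’s √AVE (the diagonal) exceeds its correlations with every other construct. A brief sketch of this check using the values reported above, offered purely as an illustration and not as part of the original analysis:

```python
import numpy as np

# Illustrative Fornell-Larcker check: sqrt(AVE) should exceed every
# inter-construct correlation in the construct's row/column (Table 3).
ave = {"TCH": 0.762, "KAP": 0.774, "EFC": 0.784, "IDI": 0.771,
       "USE": 0.815, "APF": 0.783, "JPF": 0.790}
corr = np.array([  # lower triangle from Table 3, mirrored into a full matrix
    [1.000, 0.724, 0.769, 0.745, 0.712, 0.525, 0.528],
    [0.724, 1.000, 0.681, 0.783, 0.629, 0.530, 0.498],
    [0.769, 0.681, 1.000, 0.777, 0.667, 0.512, 0.477],
    [0.745, 0.783, 0.777, 1.000, 0.641, 0.533, 0.480],
    [0.712, 0.629, 0.667, 0.641, 1.000, 0.455, 0.477],
    [0.525, 0.530, 0.512, 0.533, 0.455, 1.000, 0.714],
    [0.528, 0.498, 0.477, 0.480, 0.477, 0.714, 1.000],
])
for i, (name, a) in enumerate(ave.items()):
    others = np.delete(corr[i], i)  # correlations with the other six constructs
    print(name, round(float(np.sqrt(a)), 3), ">", others.max(),
          bool(np.sqrt(a) > others.max()))  # True for all seven constructs
```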
Table 4. Test results.

| H | Cause | Effect | β | t | p | Result |
|---|---|---|---|---|---|---|
| H1a | Technicality | Individual Impact | 0.157 | 2.535 | 0.011 | Supported |
| H1b | Technicality | Usage | 0.402 | 4.356 | 0.000 | Supported |
| H2a | Knowledge Application | Individual Impact | 0.415 | 6.683 | 0.000 | Supported |
| H2b | Knowledge Application | Usage | 0.176 | 1.972 | 0.049 | Supported |
| H3a | Self-efficacy in AI Learning | Individual Impact | 0.373 | 7.012 | 0.000 | Supported |
| H3b | Self-efficacy in AI Learning | Usage | 0.238 | 3.192 | 0.001 | Supported |
| H4a | Individual Impact | Academic Performance | 0.410 | 6.854 | 0.000 | Supported |
| H4b | Individual Impact | Job Performance | 0.296 | 3.770 | 0.000 | Supported |
| H5a | Usage | Academic Performance | 0.192 | 2.567 | 0.010 | Supported |
| H5b | Usage | Job Performance | 0.288 | 4.260 | 0.000 | Supported |
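
The p-values in Table 4 are consistent with two-tailed tests of the bootstrap t-statistics against a standard normal reference, the usual treatment in PLS-SEM bootstrapping with a large number of resamples. Treating that reading as an assumption, a quick illustrative check for three of the paths is shown below; it is not part of the original analysis.

```python
from scipy.stats import norm

# Illustrative check (assumption: two-tailed p-values from a standard normal
# reference, as is typical for PLS-SEM bootstrapping with many resamples).
t_values = {"H1a": 2.535, "H2b": 1.972, "H5a": 2.567}  # t-statistics from Table 4
for h, t in t_values.items():
    p = 2 * norm.sf(t)  # two-tailed p-value
    print(h, round(p, 3))  # ~0.011, ~0.049, ~0.010 -- matching Table 4
```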
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
