Article

Teachers’ AI-TPACK: Exploring the Relationship between Knowledge Elements

1 College of Teacher Education, East China Normal University, Shanghai 200062, China
2 School of Mathematics and Statistics, Guangxi Normal University, Guilin 541004, China
3 School of Mathematical Sciences, Beijing Normal University, Beijing 100875, China
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Sustainability 2024, 16(3), 978; https://doi.org/10.3390/su16030978
Submission received: 20 November 2023 / Revised: 16 January 2024 / Accepted: 19 January 2024 / Published: 23 January 2024

Abstract: The profound impact of artificial intelligence (AI) on the modes of teaching and learning necessitates a reexamination of the interrelationships among technology, pedagogy, and subject matter. Given this context, we endeavor to construct a framework for integrating the Technological Pedagogical Content Knowledge of Artificial Intelligence Technology (Artificial Intelligence—Technological Pedagogical Content Knowledge, AI-TPACK), aimed at elucidating the complex interrelations and synergistic effects of AI technology, pedagogical methods, and subject-specific content in the field of education. The AI-TPACK framework comprises seven components: Pedagogical Knowledge (PK), Content Knowledge (CK), AI-Technological Knowledge (AI-TK), Pedagogical Content Knowledge (PCK), AI-Technological Content Knowledge (AI-TCK), AI-Technological Pedagogical Knowledge (AI-TPK), and AI-TPACK itself. We developed a structural equation modeling (SEM) approach to explore the relationships among teachers’ AI-TPACK knowledge elements, drawing on exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). The results showed that all six knowledge elements serve as predictive factors for AI-TPACK, although different elements showed varying levels of explanatory power. The influence of the core knowledge elements (PK, CK, and AI-TK) on AI-TPACK is indirect, mediated by the composite knowledge elements (PCK, AI-TCK, and AI-TPK), each of which plays a unique role. Non-technical knowledge elements have significantly lower explanatory power for teachers’ AI-TPACK than technology-related knowledge elements. Notably, content knowledge (C) diminishes the explanatory power of PCK and AI-TCK. This study investigates the relationships within the AI-TPACK framework and its constituent knowledge elements. The framework serves as a comprehensive guide for the large-scale assessment of teachers’ AI-TPACK, and a nuanced comprehension of the interplay among AI-TPACK elements contributes to a deeper understanding of the generative mechanisms underlying teachers’ AI-TPACK. Such insights bear significant implications for the sustainable development of teachers in the era of artificial intelligence.

1. Introduction

Artificial intelligence is fundamentally transforming the methods of teaching and learning [1,2,3]. In the educational process of students and teachers, AI is considered one of the most effective tools, both within and outside the school environment [4,5,6]. The gradual integration of technology into education has raised the demands placed on students’ AI literacy and capabilities, and schools must adapt to the transition towards a digital society in order to cultivate that literacy [7,8,9]. The advent of AI has revolutionized the educational environment and instructional paradigms, introducing new requirements for the knowledge and capabilities of educators. Teachers, as central figures in the educational system, are presently called upon to improve their competencies in this digital age, particularly in the use of artificial intelligence for pedagogical purposes. Existing research states that a common strategy for advancing the AI literacy of pre-service teachers is the implementation of courses focused on AI [10,11]. An essential factor influencing the use of technology by novice educators is the quality of AI-related coursework and experiences embedded within teacher education programs [12,13]. Furthermore, preliminary research has reported that merely increasing the number of AI courses in educational institutions is insufficient to address this issue comprehensively; it is crucial to invest in teacher training and to effectively encourage the use of AI to support students’ learning [14,15]. Various countries and international organizations regularly revise standards related to the AI literacy of educators and implement teacher education and training programs aimed at enhancing teachers’ capacity to apply AI technology in their teaching practices [15,16,17]. As an example, two researchers from the University of Cyprus proposed a context-based instructional design approach termed “Technology Mapping” (TM), which served as a valuable reference for carrying out case-based instruction for teachers in the context of Technological Pedagogical Content Knowledge (TPACK) [18]. The Initial Teacher Education (ITE) program in Australia was implemented across 48 universities, primarily public higher-education institutions [19]. In this AI era, teaching and learning are complex activities that involve the multifaceted use of knowledge. The avenues for knowledge acquisition have diversified as AI has advanced; as a result, the conventional role of teachers in knowledge dissemination is expected to diminish or be replaced in the future [20,21]. To navigate this evolving circumstance, teachers must possess competencies in technology, pedagogy, and content knowledge. The ability to leverage technology to support students in learning specific subject matter is commonly referred to as Technological Pedagogical Content Knowledge (TPACK) [22,23,24]. The TPACK theory has gained widespread recognition in the field of teacher education. In the context of AI education, several research studies have expanded the framework to conceptualize teachers’ technological integration expertise as the incorporation of AI technology into technological pedagogical content knowledge, termed AI-Technological Pedagogical Content Knowledge (AI-TPACK) [25].
However, numerous findings have reported that contemporary educators lack proficiency in this area, often failing to effectively incorporate technology into the classroom instructional process [26]. Recent trends in educational research state that pre-service teachers, who are often considered digital natives, tend to identify more strongly with AI technology compared to the majority of in-service educators, regarded as digital immigrants [27,28]. The AI-TPACK framework has exerted a profound influence on research and practice in the realms of teacher education and professional development, inciting extensive scholarly investigation and academic inquiry [29,30].
The technology of artificial intelligence (AI) distinguishes itself from conventional information technologies by not only pervasively infiltrating and influencing teaching and learning across all dimensions but also by catalyzing a transformation in the cognitive structures and instructional methodologies of educators. The traditional Technological Pedagogical Content Knowledge (TPACK) framework therefore necessitates the infusion of novel connotations, requiring continuous adaptation to contemporary trends so that educators can respond effectively to the demands of the AI era [31]. TPACK holds that effective integration of technology in teaching hinges on an understanding of the interplay between three core elements: subject matter (content knowledge), pedagogy (pedagogical knowledge), and technology (technological knowledge). Of these, technological knowledge is the most dynamic and subject to frequent change, reflecting the fast-paced evolution of technology [17,24]. As artificial intelligence becomes increasingly integrated into educational practices, a pertinent question arises about the adequacy of the existing TPACK framework for meeting the contemporary demands of teaching and professional development [32]. This leads to the exploration of whether the TPACK framework needs to evolve or incorporate new dimensions in the era of AI. The integration of AI technology into the TPACK framework could potentially reshape teaching methodologies, learning environments, and other educational aspects [33]. Thus, the development of an AI-infused TPACK model (AI-TPACK) becomes a significant area of research and inquiry. Such a model would not only retain the traditional elements of TPACK but also integrate AI technologies, potentially leading to more effective and innovative teaching practices that align with the rapid advancements in AI and its applications in education. The exploration of AI-TPACK is essential to understanding how AI can enhance the educational process and support teachers in adapting to the evolving technological landscape.
The concept of AI-TPACK represents a nuanced and specialized form of knowledge that emerges from the intersection of three distinct areas: disciplinary knowledge (content expertise), pedagogical knowledge (teaching methods and strategies), and artificial intelligence technological knowledge. This type of knowledge is distinct from the expertise of subject-matter experts and AI technology specialists [34]. It goes beyond general pedagogical knowledge that is not specific to any discipline, embodying a tailored approach to teaching within specific subject areas through the use of artificial intelligence technology.
AI-TPACK enables educators, or AI entities functioning as educators, to possess a level of knowledge comparable to that of human teachers. This knowledge equips them to independently or collaboratively carry out teaching tasks alongside human educators [35]. This aspect is particularly significant in the current era of artificial intelligence, where AI technology transcends its traditional role as merely a tool for teaching and learning. Instead, there is an emerging focus on how human teachers and AI entities (AI teachers) can effectively collaborate. This collaborative aspect forms an integral part of the AI-TPACK framework.
Therefore, within the AI-TPACK framework, the interactive relationships among artificial intelligence technology, subject matter content, and teaching methods are pivotal. These relationships, especially when viewed through the lens of human–computer collaborative thinking, constitute the core essence of AI-TPACK [31]. This perspective underscores the importance of integrating AI technology not just as a supplementary tool but as an integral component of the teaching and learning process, reshaping how educational content is delivered and understood in the AI era.
Several studies on TPACK have consistently focused on the application of the TPACK theory in the field of AI education [36,37]. AI technology extensively permeates and influences education and learning, and it instigates transformation in the cognitive structures and instructional methods of teachers [38]. Angeli and Valanides observed the difficulty in clearly defining the constituent elements of TPACK, as the boundaries among these components are highly ambiguous; this issue is equally evident in the literature on AI-TPACK [39,40]. Despite extensive empirical research validating the relationships among the components of TPACK [41,42,43], the investigation of AI-TPACK for teachers is still in its early stages. In terms of theoretical exploration, the current framework has identified constituent elements but has not postulated further assumptions about their intrinsic relationships. Existing research on AI-TPACK has focused mainly on listing its components without thoroughly exploring their connections, and it lacks empirical support and a reliable, validated measure. Revisiting the relationships between technology, pedagogy, and subject matter knowledge, including constructing a framework for teachers’ AI-TPACK, has therefore become an urgent issue. To address these gaps, a comprehensive analysis of the current state of teachers’ AI-TPACK research was carried out by systematically exploring the concepts, structure, characteristics, and impact-effect models of teachers’ AI-TPACK. This research adopted exploratory factor analysis, confirmatory factor analysis, and structural equation modeling techniques, and developed a measurement scale that complies with psychometric standards. The scale was then empirically tested and refined to clarify the relationships among the knowledge elements of teachers’ AI-TPACK.

2. Theoretical Framework

This research aimed to explore the essence of the relationships between teacher AI-TPACK components within an AI instructional environment, drawing insights from teachers’ experiences. To achieve this objective, the development and validation of a research instrument specifically designed to assess teachers’ AI-TPACK, as outlined in the AI-TPACK framework, are discussed.

2.1. TPACK Framework

The field of Technological Pedagogical Content Knowledge, commonly abbreviated as TPCK or TPACK, is currently experiencing rapid growth and holds significant research potential. Efforts are in progress to enhance its theoretical framework and gain broader recognition. In the last decade, the TPCK concept has gained widespread attention within the research community, resulting in a substantial body of literature.
The TPCK framework has its foundations in the scholarly work of Shulman concerning Pedagogical Content Knowledge (PCK), which focuses on instructional methods and content expertise [44,45]. To enhance the integration of technology by educators, Pierson proposed incorporating Technological Knowledge (TK) into the existing construct of Pedagogical Content Knowledge (PCK) [46]. Pioneering researchers Keating and Evans were among the first to embrace the TPCK concept [47]. Guerrero delineated the concept of Technological Pedagogical Knowledge (TPK), which bears resemblance to Technological Pedagogical Content Knowledge (TPACK) [48]. However, it was not until 2005 that similar investigations into this concept gained momentum. During this time, the term TPCK was used to describe technology-enhanced PCK [38]. In 2005, the theoretical framework was introduced, stressing three major knowledge domains, namely content (C), pedagogy (P), and technology (T), and their interactions, which give rise to intersectional components such as PCK, TCK, and TPK [49]. In 2007, Thompson and Mishra made a significant modification to the acronym by changing it from TPCK to TPACK. This alteration, seemingly small, had important implications for the framework’s accessibility and recognition: by including the word “And” (symbolized by the letter ‘A’ in TPACK), they made the acronym more readable, user-friendly, and memorable [50]. TPACK has been widely adopted by educational researchers and is commonly used in the literature and in communication. In 2008, Koehler and Mishra used a Venn diagram with three intersecting circles to depict the relationships between the seven knowledge elements [51]. Simultaneously, Cox proposed a refinement model to provide a detailed analysis of the TPACK framework, clarifying the connections between its various elements. This marked the beginning of an exciting chapter in TPACK research within the education community [52].

2.2. AI-TPACK Framework

The TPACK theoretical framework has been in use for nearly two decades, coinciding with rapid development in information technology, particularly the inception of advanced AI. This technological progress has effectively transitioned society from the information age into a new era of increased intelligence. In this context, the pressing issue is whether the existing TPACK model remains applicable to the evolving demands of teaching and the professional development requirements of educators [53]. With the increasing integration of AI into educational practices, is it necessary to infuse new connotations into the Technological Pedagogical Content Knowledge (TPACK) framework? Moreover, how would the integration of AI technologies within the TPACK framework result in novel changes in teaching methodologies, learning environments, and related aspects? In summary, reevaluating the relationships between technology, pedagogy, and subject matter has become an urgent matter in the construction of a novel TPACK framework rooted in the age of artificial intelligence. Within this framework, technology represents the most dynamic element when compared to pedagogical and subject-matter knowledge. It is postulated that as educators’ understanding of AI technology deepens, the associated knowledge elements will undergo corresponding transformations: Technological Pedagogical Knowledge (TPK) evolves into AI-TPK, Technological Content Knowledge (TCK) into AI-TCK, and eventually TPACK transitions into AI-TPACK, which incorporates the cognitive aspects of AI education, termed AI literacy. Based on this premise, the new theoretical framework of AI-TPACK was introduced, as shown in Figure 1, and the elements of AI-TPACK are described in Table 1.

3. Research Objectives

This research is aimed at achieving the following primary objectives:
  • To develop and validate an AI-TPACK measurement tool designed for teachers, with sound psychometric properties for assessing their knowledge levels across the various components of AI-TPACK;
  • To explore the relationships among the constituent knowledge elements of AI-TPACK and confirm whether these connections are consistent with theoretical assumptions.
To address the first objective, this research systematically dissected the essence of AI-TPACK, formulated its questionnaire items, and engaged domain experts to refine these items iteratively. The aim was to eliminate any ambiguity in item descriptions and overlap between different dimensions of AI-TPACK. This iterative process led to the development of an initial scale. Subsequently, exploratory factor analysis was conducted on the questionnaire, and its items were modified and reduced to form the formal questionnaire. Finally, the formal questionnaire was administered, and the collected data were subjected to confirmatory factor analysis and reliability assessment to validate the scientific robustness of the constructed AI-TPACK scale.
To address the second objective, considering the complexity of AI-TPACK knowledge elements and their relationships, prior research suggests that this concept should not be analyzed as a single structural knowledge element; instead, it is important to thoroughly explore the relationships among its inherent structures. In accordance with teachers’ general knowledge and specific technological expertise, this research constructed a model to analyze the underlying impact relationships among the seven core knowledge elements within the field of AI-TPACK for teacher education.

4. Methodology

This section provides a comprehensive account of the development process of the AI-TPACK scale, its validation, the respondents concerned, and methodological considerations. The development and implementation of the scale comprised several critical phases, commencing with an extensive literature review, followed by content deconstruction, item generation and refinement, and expert review, and concluding with survey-based research [67]. Item revision employed both exploratory factor analysis and confirmatory factor analysis. The data collected from the survey were analyzed using structural equation modeling, with the main aim of understanding the interrelationships between the constituent elements of AI-TPACK knowledge [68]. Further details about each significant step in this process are elaborated in the relevant subsections.

4.1. Existing Scales

In recent years, numerous research studies in the field of TPACK have focused on developing tools for assessing teachers’ TPACK structures. For example, Schmidt et al. designed a five-point scale to measure the seven TPACK components among 124 pre-service teachers in the United States; this tool was adapted and localized by other researchers [66], and Koh et al. further modified it into a 29-item, 7-point scale. However, the findings indicated that not all seven TPACK factors could be clearly identified: some factors, like PK and PCK, as well as TCK and TPACK, merged to form new ones, and the framework was also adapted for generic purposes rather than specific subject content [69]. Two items linked to TPK independently formed another factor, suggesting that the theoretically proposed seven-factor structure of TPACK does not fully manifest in practical situations [55]. To address the challenges associated with TPACK measurement, Chai and colleagues focused particularly on the conceptual distinctions among these components [59]. Their scale was used to assess 455 in-service teachers in Singapore [70] and 550 pre-service teachers in Asian Chinese-speaking regions (Singapore, Hong Kong, Mainland China, and Taiwan) [71], successfully identifying the seven TPACK factors in both cases. Similarly, when the modified scale was used to measure science and Chinese language teachers in Singapore [64], it effectively distinguished these factors. These results suggest that a clear conceptual delineation of each TPACK component tends to enhance the discriminant validity of the respective scales. It was therefore recommended that TPACK research, recognizing the considerable interrelatedness and overlap among the seven components, carefully define each element when developing new scales or adapting existing ones.
In the academic field, research on the TPACK Scale within the context of AI is a dynamic and evolving process. Celik introduced an ethical dimension to TPACK, giving rise to the Intelligent-TPACK Scale, designed to assess the ethical knowledge of teachers in AI. It is evident that there is currently no universally accepted AI-TPACK scale [25]. This deficiency is most apparent in several critical aspects:
Firstly, existing research has explored the integration of artificial intelligence into teachers’ AI-TPACK, although these investigations often focus on its specific aspects, such as natural language processing or machine learning, rather than the comprehensive application of teachers’ AI-TPACK as a whole. This limitation posed a challenge to establishing a comprehensive and systematic AI-TPACK scale.
Secondly, the teacher’s AI-TPACK concept is complex, as it includes integrating knowledge pertaining to AI technology, subject matter expertise, pedagogical knowledge, and the intersection of these three domains. In the process of incorporating AI technology into a teacher’s TPACK, it is important to clarify which specific teacher’s AI-TPACK can be effectively combined with the particular subject matter and pedagogical knowledge to yield favorable educational outcomes [25]. However, current research often lacks an in-depth exploration of this interdisciplinary integration, thereby complicating the establishment of an AI-TPACK assessment framework.
Lastly, the development of an AI-TPACK requires a thorough examination of diverse contextual factors and challenges encountered in real-world applications, including distinct instructional settings, subject domains, and student demographics. This includes a substantial body of empirical research and on-site investigations to assess the feasibility and efficacy of the AI-TPACK assessment framework [72]. Presently, few investigations have been carried out in this field, thereby impeding the establishment of an AI-TPACK assessment framework supported by empirical evidence.
The development of an AI-TPACK assessment framework requires a comprehensive examination of various aspects. This includes the general application of AI, its integration with the teacher’s TPACK, and addressing the various contextual factors and challenges encountered in practical applications. A comprehensive and systematic AI-TPACK assessment framework can be realized only through an in-depth exploration of these factors.

4.2. Item Generation

The scale (see Appendix A) used in this research was mainly adapted from TPACK scales developed by Schmidt [66], Landry [73], Smith [74], and Celik [25]. To refine this scale, insights were gathered from open-ended questionnaire surveys conducted among a sample of primary and secondary school teachers, as well as educational experts. In addition, the survey aimed to explore the constituents of teachers’ TPACK integrating AI. Based on the theoretical framework for the teacher’s TPACK structure and a detailed examination of its contents and extensions, each factor was further refined. The principle of factor-item congruence demands that a set of typical psychological and behavioral items be compiled for each component. As a result, 6 items were formulated for each factor, leading to a total of 42, which constituted the Teachers’ TPACK Scale: Semantic Analysis Expert Questionnaire. Incorporating feedback and suggestions from 12 education doctoral reviewers, items that were conceptually similar or repetitive were merged, while those deemed difficult to understand or ambiguous were either removed or revised. After extensive deliberation, a final set of 42 items, organized as 6 items per factor, was established. These items adopted a 5-point Likert self-assessment scoring system, ranging from Strongly Conformant to Strongly Non-conformant, with high scores indicating advanced levels of teacher AI-TPACK competence.
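To make the scoring scheme concrete, the sketch below shows one way the Likert responses could be converted into item and dimension scores. It is an illustrative assumption rather than the authors' procedure, and the item names and responses are hypothetical.

```python
# Illustrative scoring sketch (not the authors' code): map the five options
# so that higher scores indicate stronger self-reported AI-TPACK.
import pandas as pd

LIKERT = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}  # Strongly Conformant ... Strongly Non-conformant

# Hypothetical raw responses: one row per teacher, one column per item.
raw = pd.DataFrame({
    "CK_1": ["A", "B", "C"],
    "CK_2": ["B", "A", "B"],
    "PK_1": ["C", "B", "A"],
})

scored = raw.replace(LIKERT)                      # item-level numeric scores (1-5)
ck_mean = scored[["CK_1", "CK_2"]].mean(axis=1)   # per-teacher CK dimension score
print(scored)
print(ck_mean)
```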
The formal teacher’s AI-TPACK survey questionnaire consists of two main sections: basic information and the scale. The basic information section is designed to gather essential demographic data about the respondents, including gender, highest educational attainment, teaching role, subject category, and educational stage, as well as their familiarity with and exposure to teachers’ AI-TPACK. The Teacher AI-TPACK scale section comprised seven dimensions, totaling 42 items, which collectively assessed different aspects of the teacher’s AI-TPACK.

4.3. Expert Consultation

To guarantee the reliability and validity of the measurement instrument, a consultative process was undertaken involving ten experts in the field of educational technology. These experts were selected from five reputable universities, which include East China Normal University, Beijing Normal University, and Guangxi Normal University. This diverse group of experts comprised four professors and one associate professor, supplemented by five individuals who hold doctoral degrees [75,76]. This approach of incorporating feedback and insights from a panel of distinguished experts is a standard method for enhancing the credibility and accuracy of a research instrument. By involving professionals with various levels of expertise and from different academic institutions, a comprehensive and multifaceted perspective on the instrument’s effectiveness and applicability is ensured. Their collective input contributes significantly to refining the measurement tool, ensuring that it accurately captures the intended constructs and is relevant to the field of educational technology. Such rigorous validation processes are crucial in academic research, especially in fields like educational technology, where precision and relevance are paramount.
After thorough consideration of the consistency of the measurement items and feedback from experts, the necessary adjustments were made. This process led to the development of the Integrated Teacher’s AI-TPACK Prediction Scale. As an illustration of the AI-TK dimension, the fifth item in the scale developed by Celik was formulated as “I am familiar with AI-based tools and their technical capacities”. This item was intended to assess the familiarity of educators with AI tools, as only those familiar with this technological tool can effectively use it for certain tasks. It was also recommended to revise and relocate this item to the first position under the AI-TK dimension, phrased as “I know how to execute some tasks with AI-based tools”. The items “I know how to execute some tasks with AI-based tools” and “I know how to initialize a task for AI-based technologies by text or speech” exhibited substantial conceptual overlap, both implying the use of AI technology for task execution. In an educational context, these items were modified to read, “I frequently use AI technology for teaching”. Additionally, the first item under Intelligent Technological Knowledge (TK) in the Celik scale was “I know how to interact with AI-based tools in daily life”, and the notion of interaction was slightly unclear. This item was altered or refined to “I know how to use AI technology for interactive teaching” in order to be consistent with the educational context. Finally, in the Schmidt scale, the sixth item in the TK category was “I have the technical skills I need to use technology”, originally designed to evaluate whether educators possess the necessary AI skills for teaching. However, it was reported that novice educators tend to respond negatively. To address this, the assessment of teacher AI-TK potential was recommended, thereby modifying the item to read as “I can easily acquire the AI technology skills required for teaching”.

4.4. Research Respondents

In this research, the teacher’s AI-TPACK scale was developed, and its reliability and validity were assessed. The survey took place from July 2023 to September 2023 and included 400 teachers as respondents. This group had completed coursework in educational technology and received systematic national training in AI technology. They were familiar with commonly used AI technologies and had acquired a certain level of practical experience in applying AI technology.
A randomized sampling method was used for the survey, and a total of 400 questionnaires were distributed. After an assessment of teachers’ knowledge of and exposure to AI technology, it was observed that 34 respondents were either unfamiliar with or lacked prior exposure to AI technology. These 34 questionnaires were excluded, resulting in a final selection of 366 valid ones, representing 91.50% of the distributed surveys. The demographic characteristics of the respondents, comprising 82 males and 284 females, are shown in Table 2. Among the surveyed respondents, 36.89% and 63.11% were pre-service and in-service teachers, respectively. In terms of educational background, 54.10%, 41.53%, and 4.37% held bachelor’s, master’s, and doctoral degrees, respectively. Concerning the subjects taught, 25.14% and 74.86% were from the arts and sciences, respectively. Teachers from elementary, middle, high school, and university settings represented 14.75%, 41.26%, 33.06%, and 10.93% of the sample, respectively. The t-test results indicate no significant differences in teachers’ AI-TPACK proficiency across the categories of Gender, Highest Educational Attainment, Teacher Type, Subject Category, and Educational Stage (p-values > 0.05). Thus, it can be inferred that the uneven distribution of samples does not exert an influence on the outcomes.
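As a hedged illustration of the demographic check reported above, the sketch below runs an independent-samples t-test on overall AI-TPACK scores for one binary category (gender). The scores are simulated placeholders; p > 0.05 would indicate no significant group difference.

```python
# Hedged sketch of the demographic-difference t-test (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
male_scores = rng.normal(loc=3.8, scale=0.5, size=82)     # n = 82 males (hypothetical scores)
female_scores = rng.normal(loc=3.8, scale=0.5, size=284)  # n = 284 females (hypothetical scores)

t_stat, p_value = stats.ttest_ind(male_scores, female_scores, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # p > 0.05 -> no significant difference
```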

4.5. Data Analysis

Data analysis comprised a multi-phase process, beginning with questionnaire-based surveying and measurement of the subjects. Both exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were conducted on the acquired data, which led to subsequent adjustments of the items. This iterative process eventually resulted in the official teacher’s AI-TPACK questionnaire, which was then used in the formal measurement phase; respondents were encouraged to complete all questionnaire items within the designated timeframe, with a focus on providing authentic responses. To establish and validate the elements and structure of the teacher’s AI-TPACK, the data obtained from the formal measurement were divided into two essentially homogeneous halves: one half was subjected to exploratory factor analysis using SPSS 27, while the other was used for confirmatory factor analysis and structural equation modeling (SEM).
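A minimal sketch of the split-half strategy described above, assuming the formal responses are held in a pandas DataFrame; the random split into an EFA half and a CFA/SEM half is an assumed implementation, not the authors' code.

```python
# Assumed implementation of the split-half design: half for EFA, half for CFA/SEM.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical item-score matrix: 366 respondents x 42 items, scored 1-5.
rng = np.random.default_rng(1)
items = pd.DataFrame(rng.integers(1, 6, size=(366, 42)),
                     columns=[f"item_{i:02d}" for i in range(1, 43)])

efa_half, cfa_half = train_test_split(items, test_size=0.5, random_state=42)
print(len(efa_half), len(cfa_half))  # two essentially homogeneous halves
```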
The data analysis process consisted of three distinct stages: In the first stage, exploratory factor analysis was used to assess the structural validity of the teacher’s AI-TPACK scale and identify its optimal factor structure. In the second stage, confirmatory factor analysis was applied to validate the structural models of the teacher’s AI-TPACK scale and its constituent knowledge elements. This stage was used to confirm whether the predefined models, factor quantities, and scale structure were in line with the actual data. Finally, the third stage used structural equation modeling to examine the causal relationships among the knowledge elements within the teacher’s AI-TPACK.

5. Results

5.1. Exploratory Factor Analysis

The main objective of conducting an exploratory factor analysis (EFA) is to extract common factors from a given dataset and evaluate the construct validity of a particular scale [77]. To ensure the reliability of the EFA, it is essential to assess whether the data are suitable for this analysis [78]. Two common indicators used for this assessment are the Kaiser–Meyer–Olkin (KMO) measure and the Bartlett test of sphericity. According to statistical standards, a KMO coefficient closer to 1 indicates data suitability for EFA, with KMO ≥ 0.9 suggesting a highly suitable condition; for the Bartlett test of sphericity, the criterion is reaching the statistical significance level [79]. Based on the data analysis results, the KMO coefficient was calculated to be 0.939, and the Bartlett test of sphericity was statistically significant. These findings collectively signify that the data are suitable for conducting EFA.
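The two suitability checks can be reproduced outside SPSS; the sketch below uses the factor_analyzer package (an assumption, since the study used SPSS 27) and continues with the `efa_half` DataFrame from the split sketch above.

```python
# Hedged sketch of the EFA suitability checks with factor_analyzer.
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

kmo_per_item, kmo_total = calculate_kmo(efa_half)        # KMO >= 0.9 is highly suitable
chi2, p_value = calculate_bartlett_sphericity(efa_half)  # significant p supports EFA
print(f"KMO = {kmo_total:.3f}, Bartlett chi2 = {chi2:.1f}, p = {p_value:.4f}")
```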
Several guiding principles govern item selection in EFA: (1) removing items with factor loadings below 0.5; (2) eliminating items with high and approximately equal loadings on two factors; (3) removing, at the researcher’s discretion, items misclassified relative to the predetermined conceptual factors; and (4) repeating the EFA and continuing to remove items according to these principles until a clearer factor structure emerges [77,80,81]. For factor extraction, principal component analysis (PCA), the most common extraction method in SPSS, extracts common factors based on the intercorrelations of the data itself; the number of factors is then determined, and the factors are named and interpreted post hoc [80].
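The item-selection loop can be sketched as follows. This is an assumed implementation of the loading-threshold and iteration principles only, using factor_analyzer's principal-component extraction in place of SPSS, and it continues from the suitability check above.

```python
# Assumed sketch of iterative item removal: refit, drop the weakest item
# (max loading < 0.5), and repeat until the structure is clean.
import numpy as np
from factor_analyzer import FactorAnalyzer

data = efa_half.copy()          # continuing from the suitability check above
while data.shape[1] > 10:       # safety floor for this sketch
    fa = FactorAnalyzer(n_factors=7, rotation="promax", method="principal")
    fa.fit(data)
    max_loading = np.abs(fa.loadings_).max(axis=1)
    if (max_loading >= 0.5).all():   # every item loads clearly on one factor
        break
    # drop the single weakest item, then refit (iterative removal principle)
    data = data.drop(columns=[data.columns[max_loading.argmin()]])

_, _, cumulative = fa.get_factor_variance()
print(f"items retained: {data.shape[1]}, cumulative variance: {cumulative[-1]:.3f}")
```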
EFA results showed that the factor loadings for items CK-6, AI-TK-6, and AI-TPACK-6 were 0.497, 0.485, and 0.249, respectively, all below the critical threshold of 0.5; these items were therefore removed from the analysis. After eliminating these items, the EFA was conducted again to determine the final structure of the teacher’s AI-TPACK. Seven common factors were extracted from the data, and the items aligned with the predetermined factors, allowing for straightforward naming. The cumulative variance contribution of these seven factors was 75.916%, indicating their strong ability to describe the different levels of teachers’ AI-TPACK being studied. Following the EFA, the teacher’s AI-TPACK scale comprised 39 items in line with the initially conceived factors, supporting the sound construct validity of the scale, as shown in Table 3.

5.2. Confirmatory Factor Analysis

Reliability and validity are crucial indicators for assessing the quality of a scale and are commonly employed to evaluate its dependability and effectiveness [67]. After using exploratory factor analysis (EFA) to identify the model, the next critical step is to assess its reliability and validity through CFA. The results in Table 4 show that the factor loadings of the model exceeded the 0.5 threshold, implying a high correspondence between the observed and latent variables. The recommended threshold for composite reliability (CR) is 0.70 [82], and the CR values of the constructs all exceeded this threshold, meeting the standard criteria. Cronbach’s α coefficient was used to assess the internal consistency, or reliability, of the scale [83]; the α values for the dimensions ranged from 0.806 to 0.945, with an overall scale value of 0.957, in line with established standards. In addition, the Average Variance Extracted (AVE) was used to evaluate the convergent validity of the teacher’s AI-TPACK scale, based on the criterion that this value should exceed 0.5 [84]. The results indicate that the AVE value of the model is 0.562 and that the values for the related constructs (CK, PK, AI-TK, PCK, AI-TCK, AI-TPK, and AI-TPACK) all exceed the 0.5 threshold. This suggests good convergent validity, implying that the structural framework of teachers’ AI-TPACK developed in this research is both reliable and effective [85].
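For readers who want to check these psychometric quantities, the following is a minimal sketch of the standard formulas behind composite reliability, AVE, and Cronbach's α; the loadings used are hypothetical, not those reported in Table 4.

```python
# Standard formulas behind the reported CR, AVE, and Cronbach's alpha values.
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum(l))^2 / ((sum(l))^2 + sum(1 - l^2)); benchmark > 0.70."""
    s = loadings.sum()
    return s**2 / (s**2 + (1 - loadings**2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of squared standardized loadings; benchmark > 0.50."""
    return (loadings**2).mean()

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / total-score variance)."""
    k = item_scores.shape[1]
    return k / (k - 1) * (1 - item_scores.var(axis=0, ddof=1).sum()
                          / item_scores.sum(axis=1).var(ddof=1))

lams = np.array([0.72, 0.78, 0.81, 0.75, 0.80])  # hypothetical loadings for one construct
print(composite_reliability(lams), average_variance_extracted(lams))
```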

5.3. Structural Equation Model

Structural equation modeling (SEM) is widely recognized across research fields for its ability to establish, estimate, and test causal relationship models [86]. Moreover, SEM is highly effective for examining methodological effects and structural invariance [87,88]. Its main aim is to analyze the relationships between one or more independent and dependent variables, as well as to systematically assess complex models. In other words, while factor analysis methods such as EFA and CFA were used to determine the factorial structure of the teacher’s AI-TPACK and showed reasonably good construct validity, SEM is required to further examine the relationships among the teacher AI-TPACK factors [89].
The quality of the SEM model was evaluated using three classes of fit indices: the Absolute Fit Index (AFI), the Incremental Fit Index (IFI), and the Parsimonious Fit Index (PFI) [87,90,91]. Model modification was achieved by observing these indices and establishing covariances between error variables to ensure that all fit indices met the required standards.
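A hedged sketch of how the hypothesized structure could be specified and fit in Python with semopy follows (the study itself presumably used SPSS/AMOS-style tooling). The indicator names are hypothetical, only three indicators per construct are shown for brevity, and the DataFrame passed to `fit` would need columns matching those names.

```python
# Assumed semopy specification: core elements -> composite elements -> AI-TPACK.
from semopy import Model, calc_stats

MODEL_DESC = """
CK =~ CK_1 + CK_2 + CK_3
PK =~ PK_1 + PK_2 + PK_3
AITK =~ AITK_1 + AITK_2 + AITK_3
PCK =~ PCK_1 + PCK_2 + PCK_3
AITCK =~ AITCK_1 + AITCK_2 + AITCK_3
AITPK =~ AITPK_1 + AITPK_2 + AITPK_3
AITPACK =~ TP_1 + TP_2 + TP_3

PCK ~ CK + PK
AITCK ~ CK + AITK
AITPK ~ PK + AITK
AITPACK ~ PCK + AITCK + AITPK + CK + PK + AITK
"""

model = Model(MODEL_DESC)
model.fit(cfa_half)                 # cfa_half: held-out half, columns must match the item names
print(model.inspect(std_est=True))  # standardized path coefficients
print(calc_stats(model).T)          # fit indices: GFI, CFI, TLI, RMSEA, etc.
```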
When the results of the structural model were examined, most of the estimated parameter values in the seven-factor model were significant or highly significant, with factor loadings ranging from 0.519 to 0.913, all exceeding the 0.40 threshold. In a broader context, the overall internal quality of the seven-factor model was deemed satisfactory. The evaluation of the fit indices further strengthened this perspective, as the AFI, IFI, and PFI all met the standard criteria. The goodness-of-fit indices, shown in Table 5, indicate that the hypothesized model fits the actual data well and that the external quality of the seven factors is favorable.
The application of SEM to the theoretical model yielded several significant findings. In general, the AI-TPACK variable is explained by its predictive factors, comprising six distinct knowledge elements (PK, CK, AI-TK, PCK, AI-TCK, and AI-TPK). The impact of the core knowledge elements on AI-TPACK is predominantly indirect, as shown by the path coefficients represented by dashed lines in Figure 2. Indeed, a learning environment that sustains its integration with artificial intelligence technology renders teaching more effective and durable [92,93]. Although both core knowledge elements CK and PK exert positive influences on AI-TPACK, their respective path coefficients are relatively low, measuring 0.025 and 0.097. The direct impact of AI-TCK on AI-TPACK is considerably lower than anticipated, with almost no direct effect on the development of AI-TPACK; this finding contradicts numerous previous studies that established a substantial influence of TCK on TPACK, suggesting that this relationship has evolved with the integration of AI technology [94]. All composite knowledge elements (PCK, AI-TCK, and AI-TPK) positively affected AI-TPACK, serving as mediating variables between the core knowledge elements and AI-TPACK while playing distinct roles. Among these, AI-TPK exhibited the most substantial impact, with a predictive value of 0.870, whereas the effect of AI-TCK measured only 0.207. The results from the CFA indicated that the seven-factor model of the teacher’s AI-TPACK effectively reflects the various measurement variables (items) and properly fits the observed data, supporting the strong conceptual validity of the seven-factor model. Numerous studies have focused on the essential knowledge teachers must possess to effectively integrate artificial intelligence (AI) technology into their classrooms, and this research aligns with the ongoing evolution of technology from the past to the present. These studies underscore the importance of equipping educators with the necessary skills and understanding to leverage AI in educational settings, ensuring they can adapt to and benefit from technological advancements in their teaching practices [95,96].
Standardized total effects represent the combined influence of direct and indirect effects and are crucial for understanding the impact of different factors [97]. In this context, the results suggest that while the core knowledge elements affect AI-TPACK, their influence is mainly indirect. Based on these findings and a detailed examination of Figure 2, it is evident that the AI-TPACK model is hierarchical: the core knowledge elements are positioned at the first level, while the composite knowledge elements are placed at the second level. Notably, the most significant influence on AI-TPACK originates from the core knowledge elements at the first level, which cascades through the second level; direct connections from the first level to AI-TPACK are not effective on their own. This implies that, when considering the theoretical foundation of the TPACK model, composite knowledge elements do not simply merge two core elements (e.g., TK and CK) to form TCK. Instead, the standardized total effects reveal a more nuanced picture: AI-TK, AI-TCK, and AI-TPK impact AI-TPACK to varying degrees (0.654, 0.207, and 0.870, respectively). The distribution of standardized total effects of each knowledge element on AI-TPACK is shown in Table 6. Both the core knowledge elements (CK, PK, and AI-TK) and the composite knowledge elements (PCK, AI-TCK, and AI-TPK) collectively affect AI-TPACK. The effects of CK, PK, and PCK (CK = 0.052, PK = 0.088, and PCK = −0.008) on AI-TPACK are markedly less pronounced than those of AI-TK (0.654), AI-TCK (0.207), and AI-TPK (0.870), particularly AI-TPK, which has a substantial impact of 0.870. CK, PK, and PCK are non-technical knowledge elements, while AI-TK, AI-TCK, and AI-TPK are technical knowledge elements. From these data, it can be inferred that the non-technical knowledge elements (CK, PK, and PCK) have significantly lower explanatory power for teacher AI-TPACK than the technical knowledge elements (AI-TK, AI-TCK, and AI-TPK). These findings are in line with the theoretical principles defined by Celik, as anticipated, reinforcing the validity of the model [25].
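The arithmetic behind a standardized total effect is simple path tracing: the direct path plus the sum of each (core element to mediator) path multiplied by the corresponding (mediator to AI-TPACK) path. The sketch below illustrates this; all path values are hypothetical placeholders, not the coefficients reported in Table 6.

```python
# Path-tracing sketch: total effect = direct path + sum of (a * b) indirect products.
# All values are hypothetical placeholders, not the coefficients in Table 6.
direct = 0.10                            # hypothetical core -> AI-TPACK direct path
a_paths = {"PCK": 0.40, "AITPK": 0.35}   # hypothetical core -> mediator paths
b_paths = {"PCK": -0.01, "AITPK": 0.87}  # hypothetical mediator -> AI-TPACK paths

total = direct + sum(a_paths[m] * b_paths[m] for m in a_paths)
print(f"standardized total effect ~ {total:.3f}")
```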

6. Discussion

The main objective of this research was to integrate AI technologies to form the teacher’s AI-TPACK framework and to evaluate the knowledge elements described within it. When developing and validating this tool, recommendations from recent publications [25,98] were considered, and the data were analyzed from various perspectives.
In the initial step of this research, the development and validation of the scale were based on several procedures guided by the theoretical framework of the teacher’s AI-TPACK. As reported by Graham, defining the main concepts and relationships between the knowledge elements of a teacher’s AI-TPACK is of critical importance [99]. To realize this, definitions of knowledge elements and major criteria for distinguishing these elements were established based on a two-year literature review and similar research activities [31,100]. This comprehensive process led to the determination of the structure of the teacher’s AI-TPACK and facilitated its subsequent development.
Several factors were considered during the validation process. As stated in the methodology section, in addition to traditional validation methods such as factor analysis and correlation coefficients, a more advanced technique, structural equation modeling, was adopted to validate the relationships between the teacher’s AI-TPACK elements. This additional step, in line with the recommendations made by Graham, provided valuable insights into the complex relationships between these elements [99]. The results raised questions about the accuracy of the teacher’s AI-TPACK model shown in Figure 2, which depicts the relationships among the seven knowledge elements. These findings are inconsistent with the existing model, showing that the relationships between knowledge elements are more complex than initially anticipated. Although the current teacher’s AI-TPACK framework depicts well-defined relationships and equal influence among these elements in the development of the teacher’s AI-TPACK, the present results indicate that the relationships among the AI-TPACK components are neither clearly defined nor simple.
The teacher’s AI-TPACK framework is structured as a hierarchical model, with the fundamental core knowledge elements situated at the first level and the composite knowledge elements at the second. Moving from the first to the second level produces a significant indirect impact on AI-TPACK, whereas the direct impact of the first level on the teacher’s AI-TPACK tends to be negligible. From this research, it can be concluded that the core knowledge elements (CK, PK, and AI-TK) have a relatively minor direct influence on teachers’ AI-TPACK. This finding is consistent with the observation by Mishra and Koehler that composite knowledge elements are not simply a combination of two core elements; these knowledge bases possess distinct characteristics [101]. As stated in the theory, the teacher’s AI-TPACK framework was developed based on Shulman’s PCK principles. The reference to Pamuk’s work [42] highlights an important aspect of the TPACK framework: Pedagogical Content Knowledge (PCK) is considered a predominant knowledge element that directly influences the development of TPACK, suggesting that the integration of pedagogical strategies with content expertise is crucial for effectively incorporating technology into teaching practices. However, with the integration of AI technology, the results of this research indicate that the impact of PCK on teachers’ AI-TPACK development is minimal (PCK = −0.008). This surprising discovery suggests that the explanatory power of the core knowledge elements (CK, PK, and AI-TK) is not solely transmitted through the composite knowledge elements (PCK, AI-TCK, and AI-TPK).
Based on the general findings, the knowledge elements were categorized into two distinct types: those related to technology and those unconnected to it. One of the most significant findings is the distinct difference in explanatory power between technology-related and non-technology knowledge elements. The results showed that the technology-related knowledge elements within the teacher’s AI-TPACK framework (AI-TK, AI-TCK, and AI-TPK) have a strong correlation with the teacher’s AI-TPACK and possess firm explanatory power. By contrast, CK, PK, and PCK have a relatively weaker impact on the teacher’s AI-TPACK (CK = 0.052, PK = 0.088, PCK = −0.008) when compared to AI-TK (0.654), AI-TCK (0.207), and AI-TPK (0.870). In other words, the non-technology elements have a lower direct impact on AI-TPACK development than the technology-related elements. Notably, the explanatory power of AI-TCK (0.207) is significantly lower than that of AI-TK (0.654), and that of PCK (−0.008) is much lower than that of PK (0.088). Even though PCK and AI-TCK combine PK and AI-TK, respectively, with content (C) knowledge, their explanatory power for the teacher’s AI-TPACK decreased. The impact of content (C) knowledge on the explanatory power of the PCK and AI-TCK variables prompts further research to investigate whether removing the content (C) element from the framework would result in a better fit for the structural model.
The analyzed data and the understanding of the relationships revealed the need to modify the traditional TPACK framework, incorporating the explanatory power of the relationships between knowledge elements and their hierarchical structure. Future research should focus on several main aspects. First, increasing theoretical and empirical investigations based on the teacher’s AI-TPACK framework to uncover the reasons behind the low explanatory power of content (C) knowledge. Second, although the developed teacher’s AI-TPACK model displayed good reliability and validity, it is unclear whether the seven-factor model is the most optimal among the possible structural models; further exploration should include the construction and testing of competing models. Third, applying the teacher’s AI-TPACK scale in practical settings to effectively assess and guide teachers’ AI-TPACK levels. Fourth, based on the developed framework, exploring the relationship between the level of the teacher’s AI-TPACK and AI literacy, which is a critical area for investigation.
This study bridges the gap between sustainability in education and AI by proposing a contemporary educational framework tailored for teachers in the AI era. It underscores the vital importance of incorporating AI into teaching methodologies to ensure that education remains relevant and sustainable amid rapid technological advancements [96]. The study elucidates the AI-TPACK framework, highlighting its significance in the ongoing development of sustainable teaching practices and in the further integration of AI and information technology in educational contexts. The AI-TPACK model equips teachers to modify their pedagogical approaches to include AI, thereby preparing students with essential skills for a digitally driven society. This approach is not only innovative but also addresses the dynamic educational needs of a technology-centric world, contributing to the sustainability of educational practices [102].

Limitations

This study advocates a progressive and systematic approach to assessing the validity and reliability of the AI-TPACK scale. Despite the comprehensive scope and nationwide application of the scale, the research has identifiable limitations.
First, the study employed a survey research model, using a scale to collect data. While surveys are effective for understanding population characteristics, they are less precise in capturing behaviors and perceptions compared to observational methods [103]. Responses in survey research are inherently constrained by the structure of the survey tool itself. A more robust developmental approach could involve qualitative data collection from not only educational technology experts but also pre-service teachers, offering a broader perspective beyond the current study’s framework.
Second, the study’s large sample was predominantly female. However, the literature from 2002 onwards suggests no significant gender differences among pre-service teachers regarding attitudes towards, abilities with, and use of technology [104,105,106,107]. Further, recent studies highlight that gender and computer attitudes are not significant predictors of information and communication technology usage [108]. In Schmidt et al.’s [66] survey, a notable 93.5% of respondents were female, and similar gender distributions have been observed in other studies focusing on pre-service teachers’ TPACK development [64,109,110].
Lastly, beyond the model’s results, the data interpretation and understanding of relationships suggest that the traditional AI-TPACK framework requires revision to better reflect the intensity of relationships between knowledge elements and their hierarchical structure. Future research should extend to diverse contexts, aiding not only in the validation of the framework but also in refining these insights. Specifically, there is a need for a deeper understanding and validation of the relationships across different levels within the AI-TPACK framework.
The follow-up research will pivot towards qualitative studies of AI-TPACK, focusing on its behavioral manifestations and continually validating and revising the scale in practice. Efforts will include expanding the sample size and ensuring a more balanced demographic representation, such as in terms of gender and educational levels. Additionally, the research will delve into the relationships between different levels of the AI-TPACK framework, exploring the evolution and interplay of core knowledge elements (CK, PK, and AI-TK) and composite knowledge repositories (PCK, AI-TCK, and AI-TPK).

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/su16030978/s1.

Author Contributions

Conceptualization, B.X. and Y.Z.; methodology, Y.N.; software, Y.N.; validation, Y.N., C.Z. and T.T.W.; formal analysis, B.X. and Y.Z.; investigation, Y.N., C.Z. and T.T.W.; resources, B.X. and Y.Z.; data curation, B.X. and Y.Z.; writing—original draft preparation, C.Z.; writing—review and editing, Y.N.; visualization, C.Z.; supervision, B.X. and Y.Z.; project administration, B.X. and Y.Z.; funding acquisition, B.X. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data were obtained from the Scopus database (with permission). Data are contained within the article and Supplementary Materials.

Acknowledgments

Yimin Ning and Cheng Zhang are co-first authors. We are grateful for the financial support from the Major Project of National Social Science Foundation of China (no. ACA230019) and the Self-regulated Learning of Mathematics Normal Students in the Field of Internet plus Education (no. 2023JGZ107).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

  • Teachers’ AI-TPACK Scale
Dear teacher, greetings! We sincerely appreciate your willingness to take the time to respond to this questionnaire. Please carefully read each question and, based on your actual circumstances and initial impressions after reading the question, mark a “√” in the corresponding column below. Each question should only have one “√” marked. The options in these questions are designed to assess your pedagogical content knowledge integrating artificial intelligence technology (AI-TPACK). While there is no time limit, we encourage you to complete this survey as expeditiously as possible. Please review the questions attentively, provide thoughtful responses, and ensure no questions are left unanswered. Some questions may not align with your experiences or may be entirely new to you. In such cases, please select the answer that you personally lean towards. This questionnaire is anonymous and does not involve the evaluation of your individual teaching abilities. We guarantee that it will not have any adverse impact on your personal well-being. We hope you complete this questionnaire earnestly and truthfully. We anticipate that the valuable data you provide will serve as a critical basis for educational research and administrative decision-making in the field of education science.
  • Basic Information
1. Your gender is: ________
A. Male                                         B. Female
2. Your highest level of education is: ________
A. Bachelor’s Degree         B. Master’s Degree         C. Doctor’s Degree
3. Are you a pre-service teacher or an in-service teacher? ________
A. Pre-Service Teacher                              B. In-Service Teacher
4. The subject category you teach is: ________
A. Arts                                            B. Sciences
5. The educational level you teach is: ________
A. Primary School         B. Middle School         C. High School         D. University
6. Are you familiar with common AI technologies or products (such as robots, intelligent question banks, intelligent tutoring systems, etc.)?
A. Familiar         B. Not very familiar         C. Completely unfamiliar
7. Have you had any exposure to AI technologies or products?
A. Yes                                                      B. No
  • AI-TPACK Scale
Please choose the option that best corresponds to your actual circumstances from the choices A, B, C, D, and E and mark it with a checkmark (✓). The meaning of each alternative answer is as follows:
A. Strongly Conformant: the statement matches your experience in nearly all instances.
B. Conformant: under typical circumstances, the statement matches your situation.
C. Uncertain: the statement matches your situation in roughly half of the cases.
D. Non-conformant: under typical circumstances, the statement does not match your situation.
E. Strongly Non-conformant: in nearly all instances, the statement does not match your circumstances.
Items (rate each item using options A–E)
Content Knowledge (CK)
I possess a strong understanding of the concepts and principles within my discipline.
I completely understand the historical evolution of concepts and principles in the subject I teach.
I am knowledgeable about how the subject matter I teach can be applied in everyday life.
I have a deep understanding of the knowledge structure (organization) of the content I teach.
I possess a substantial depth of subject-specific knowledge and am highly familiar with the instructional materials and curriculum standards.
I find it challenging to establish connections between the knowledge in the subject I teach and knowledge in other disciplines. (Removed)
Pedagogical Knowledge (PK)
I am capable of using a variety of diverse teaching methods in the classroom.
I can select appropriate teaching methods based on the instructional content.
I can adjust my teaching methods based on the performance or feedback of the students.
I possess knowledge of effective classroom organization and management.
I take into consideration students’ backgrounds, interests, motivations, and other needs during teaching.
I am proficient in using multiple assessment methods to evaluate students’ learning outcomes.
AI-Technological Knowledge (AI-TK)
I am familiar with commonly encountered AI technologies in the educational environment.
I possess the capability to easily acquire AI technologies necessary for teaching.
I frequently incorporate AI technologies in the pedagogical context.
I am proficient in using AI technologies to enhance the instructional process.
I am knowledgeable about using AI technologies for interactive teaching purposes.
I lack the knowledge to resolve issues related to AI technologies when encountered. (Removed)
Pedagogical Content Knowledge (PCK)
I can formulate curriculum plans with ease.
I am well-acquainted with the focal points and challenging aspects of teaching.
I prioritize analyzing students’ learning situations and am capable of adapting instruction to suit their individual needs.
I am capable of creating engaging group activities for students.
I am aware of the common mistakes students frequently make during their learning process.
I can assist students in correcting the learning errors they often commit.
AI-Technological Content Knowledge (AI-TCK)
I am familiar with AI in specific academic domains, such as mathematical intelligent tutoring systems.
I am capable of effortlessly using AI in specific academic domains.
I am proficient in using AI to update my knowledge base within the academic discipline.
I can select appropriate AI tools based on the subject matter I am teaching.
I am adept at using AI to effectively enhance students’ comprehension of the material.
I can use AI to broaden the knowledge horizons of students.
AI-Technological Pedagogical Knowledge (AI-TPK)
I am capable of using AI to enhance my pedagogical perspectives.
I am able to apply appropriate AI in various teaching activities.
I have the capacity to select AI to sustain students motivation and interest.
I can apply AI to assess the learning outcomes of students.
I am proficient in using AI to optimize classroom instructional management.
I possess the ability to explain information derived from AI to provide real-time feedback.
AI-Technological Pedagogical Content Knowledge (AI-TPACK)
I am knowledgeable in integrating AI with educational content and teaching methods to improve classroom teaching efficiency and effectiveness.
I am capable of selecting appropriate teaching methods and AI based on the educational content for instruction.
I can use AI to create, simulate, and adapt scenarios that are in line with the educational content.
I can use personalized AI to select suitable teaching methods as well as guide students in practical learning.
I will use AI for self-directed learning, further deepening my subject knowledge and understanding of educational pedagogical theories.
I am also committed to actively exploring and learning new AI technologies and their applications in my educational and teaching practices. (Removed)
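
For analysis, the lettered responses must be converted to numeric scores. The following minimal sketch (Python) shows one plausible scoring routine; the A-to-E point values, the column names, and the use of subscale means are our illustrative assumptions, not part of the published instrument, and the three items marked “(Removed)” are excluded in line with the factor-analysis results.

```python
import pandas as pd

# Assumed mapping of response options to points (A = Strongly Conformant ... E).
OPTION_SCORES = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}

# Hypothetical column names: one column per retained item.
SUBSCALES = {
    "CK":       [f"CK-{i}" for i in range(1, 6)],        # CK-6 removed
    "PK":       [f"PK-{i}" for i in range(1, 7)],
    "AI-TK":    [f"AI-TK-{i}" for i in range(1, 6)],     # AI-TK-6 removed
    "PCK":      [f"PCK-{i}" for i in range(1, 7)],
    "AI-TCK":   [f"AI-TCK-{i}" for i in range(1, 7)],
    "AI-TPK":   [f"AI-TPK-{i}" for i in range(1, 7)],
    "AI-TPACK": [f"AI-TPACK-{i}" for i in range(1, 6)],  # AI-TPACK-6 removed
}

def score_responses(raw: pd.DataFrame) -> pd.DataFrame:
    """Convert letter responses to points and average each subscale."""
    numeric = raw.replace(OPTION_SCORES)
    return pd.DataFrame({name: numeric[items].mean(axis=1)
                         for name, items in SUBSCALES.items()})
```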

References

  1. Barnett, L.; Brunne, D.; Maier, P.; Warren, A. Using Technology in Teaching and Learning; Routledge: London, UK, 2013. [Google Scholar]
  2. Eady, M.; Lockyer, L. Tools for Learning: Technology and Teaching. In Learning to Teach in the Primary School; Routledge: London, UK, 2013; Volume 71. [Google Scholar]
  3. Wijaya, T.T.; Weinhandl, R. Factors Influencing Students’ Continuous Intentions for Using Micro-Lectures in the Post-COVID-19 Period: A Modification of the UTAUT-2 Approach. Electronics 2022, 11, 1924. [Google Scholar] [CrossRef]
  4. Jobirovich, Y.M. The Role of Digital Technologies in Reform of the Education System. Am. J. Soc. Sci. Educ. Innov. 2021, 3, 461–465. [Google Scholar] [CrossRef]
  5. Earle, R.S. The integration of instructional technology into public education: Promises and challenges. Educ. Technol. 2002, 42, 5–13. [Google Scholar]
  6. Algerafi, M.A.M.; Zhou, Y.; Alfadda, H.; Wijaya, T.T. Understanding the Factors Influencing Higher Education Students’ Intention to Adopt Artificial Intelligence-Based Robots. IEEE Access 2023, 11, 99752–99764. [Google Scholar] [CrossRef]
  7. Breivik, P.S. 21st century learning and information literacy. Chang. Mag. High. Learn. 2005, 37, 21–27. [Google Scholar] [CrossRef]
  8. Šorgo, A.; Bartol, T.; Dolničar, D.; Boh Podgornik, B. Attributes of digital natives as predictors of information literacy in higher education. Br. J. Educ. Technol. 2017, 48, 749–767. [Google Scholar] [CrossRef]
  9. Long, D.; Magerko, B. What is AI Literacy? Competencies and Design Considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; Association for Computing Machinery: Honolulu, HI, USA, 2020; pp. 1–16. [Google Scholar]
  10. Niess, M.L. Re-Thinking Pre-Service Mathematics Teachers Preparation: Developing Technological, Pedagogical, and Content Knowledge (TPACK). In Developing Technology-Rich Teacher Education Programs: Key Issues; IGI Global: Hershey, PA, USA, 2012; pp. 316–336. [Google Scholar] [CrossRef]
  11. Polly, D.; Mims, C.; Shepherd, C.E.; Inan, F. Evidence of impact: Transforming teacher education with preparing tomorrow’s teachers to teach with technology (PT3) grants. Teach. Teach. Educ. 2010, 26, 863–870. [Google Scholar] [CrossRef]
  12. Agyei, D.D.; Voogt, J.M. Exploring the potential of the will, skill, tool model in Ghana: Predicting prospective and practicing teachers’ use of technology. Comput. Educ. 2011, 56, 91–100. [Google Scholar] [CrossRef]
  13. Tondeur, J.; van Braak, J.; Sang, G.; Voogt, J.; Fisser, P.; Ottenbreit-Leftwich, A. Preparing pre-service teachers to integrate technology in education: A synthesis of qualitative evidence. Comput. Educ. 2012, 59, 134–144. [Google Scholar] [CrossRef]
  14. Chiu, T.K.F.; Sun, J.C.-Y.; Ismailov, M. Investigating the relationship of technology learning support to digital literacy from the perspective of self-determination theory. Educ. Psychol. 2022, 42, 1263–1282. [Google Scholar] [CrossRef]
  15. Hew, K.F.; Brush, T. Integrating technology into K-12 teaching and learning: Current knowledge gaps and recommendations for future research. Educ. Technol. Res. Dev. 2007, 55, 223–252. [Google Scholar] [CrossRef]
  16. Dhahri, M.; Khribi, M.K. Teachers’ Information and Communication Technology (ICT) Assessment Tools: A Review. In Proceedings of the 2021 International Conference on Advanced Learning Technologies (ICALT), Tartu, Estonia, 12–15 July 2021; pp. 56–60. [Google Scholar]
  17. Ning, Y.; Zhou, Y.; Wijaya, T.T.; Chen, J. Teacher Education Interventions on Teacher TPACK: A Meta-Analysis Study. Sustainability 2022, 14, 11791. [Google Scholar] [CrossRef]
  18. Angeli, C.; Valanides, N. Technology Mapping: An Approach for Developing Technological Pedagogical Content Knowledge. J. Educ. Comput. Res. 2013, 48, 199–221. [Google Scholar] [CrossRef]
  19. Murray, S.; Nuttall, J.; Mitchell, J. Research into initial teacher education in Australia: A survey of the literature 1995–2004. Teach. Teach. Educ. 2008, 24, 225–239. [Google Scholar] [CrossRef]
  20. Szymkowiak, A.; Melović, B.; Dabić, M.; Jeganathan, K.; Kundi, G.S. Information technology and Gen Z: The role of teachers, the internet, and technology in the education of young people. Technol. Soc. 2021, 65, 101565. [Google Scholar] [CrossRef]
  21. Rahmatullah, A.S.; Mulyasa, E.; Syahrani, S.; Pongpalilu, F.; Putri, R.E. Digital era 4.0: The contribution to education and student psychology. Linguist. Cult. Rev. 2022, 6, 89–107. [Google Scholar] [CrossRef]
  22. Voogt, J.; Fisser, P.; Pareja Roblin, N.; Tondeur, J.; van Braak, J. Technological pedagogical content knowledge–a review of the literature. J. Comput. Assist. Learn. 2013, 29, 109–121. [Google Scholar] [CrossRef]
  23. Mishra, P.; Koehler, M.J. Introducing technological pedagogical content knowledge. In Proceedings of the Annual Meeting of the American Educational Research Association, New York, NY, USA, 24–28 March 2008; p. 16. [Google Scholar]
  24. Koehler, M.; Mishra, P. What is technological pedagogical content knowledge (TPACK)? Contemp. Issues Technol. Teach. Educ. 2009, 9, 60–70. [Google Scholar] [CrossRef]
  25. Celik, I. Towards Intelligent-TPACK: An empirical study on teachers’ professional knowledge to ethically integrate artificial intelligence (AI)-based tools into education. Comput. Hum. Behav. 2023, 138, 107468. [Google Scholar] [CrossRef]
  26. Wijaya, T.T.; Ning, Y.; Salamah, U.; Hermita, N. Professional Teachers using Technological Pedagogical Mathematics Knowledge, are Mathematics Pre-Service Teachers Ready? J. Phys. Conf. Ser. 2021, 2123, 012040. [Google Scholar] [CrossRef]
  27. Wilson, M.L.; Hall, J.A.; Mulder, D.J. Assessing digital nativeness in pre-service teachers: Analysis of the Digital Natives Assessment Scale and implications for practice. J. Res. Technol. Educ. 2022, 54, 249–266. [Google Scholar] [CrossRef]
  28. Lei, J. Digital natives as preservice teachers: What technology preparation is needed? J. Comput. Teach. Educ. 2009, 25, 87–97. [Google Scholar]
  29. Depaepe, F.; Verschaffel, L.; Kelchtermans, G. Pedagogical content knowledge: A systematic review of the way in which the concept has pervaded mathematics educational research. Teach. Teach. Educ. 2013, 34, 12–25. [Google Scholar] [CrossRef]
  30. Stoilescu, D. A Critical Examination of the Technological Pedagogical Content Knowledge Framework: Secondary School Mathematics Teachers Integrating Technology. J. Educ. Comput. Res. 2015, 52, 514–547. [Google Scholar] [CrossRef]
  31. Sun, J.; Ma, H.; Zeng, Y.; Han, D.; Jin, Y. Promoting the AI teaching competency of K-12 computer science teachers: A TPACK-based professional development approach. Educ. Inf. Technol. 2023, 28, 1509–1533. [Google Scholar] [CrossRef]
  32. Sofyan, S.; Habibi, A.; Sofwan, M.; Yaakob, M.F.M.; Alqahtani, T.M.; Jamila, A.; Wijaya, T.T. TPACK–UotI: The validation of an assessment instrument for elementary school teachers. Humanit. Soc. Sci. Commun. 2023, 10, 55. [Google Scholar] [CrossRef]
  33. Elmaadaway, M.A.N.; Abouelenein, Y.A.M. In-service teachers’ TPACK development through an adaptive e-learning environment (ALE). Educ. Inf. Technol. 2023, 28, 8273–8298. [Google Scholar] [CrossRef]
  34. Reyes Jr, V.C.; Reading, C.; Doyle, H.; Gregory, S. Integrating ICT into teacher education programs from a TPACK perspective: Exploring perceptions of university lecturers. Comput. Educ. 2017, 115, 1–19. [Google Scholar] [CrossRef]
  35. Adipat, S. Developing Technological Pedagogical Content Knowledge (TPACK) through Technology-Enhanced Content and Language-Integrated Learning (T-CLIL) Instruction. Educ. Inf. Technol. 2021, 26, 6461–6477. [Google Scholar] [CrossRef] [PubMed]
  36. Jang, S.-J.; Tsai, M.-F. Exploring the TPACK of Taiwanese elementary mathematics and science teachers with respect to use of interactive whiteboards. Comput. Educ. 2012, 59, 327–338. [Google Scholar] [CrossRef]
  37. Urbina, A.; Polly, D. Examining elementary school teachers’ integration of technology and enactment of TPACK in mathematics. Int. J. Inf. Learn. Technol. 2017, 34, 439–451. [Google Scholar] [CrossRef]
  38. Niess, M.L.; Ronau, R.N.; Shafer, K.G.; Driskell, S.O.; Harper, S.R.; Johnston, C.; Browning, C.; Özgün-Koca, S.A.; Kersaint, G. Mathematics teacher TPACK standards and development model. Contemp. Issues Technol. Teach. Educ. 2009, 9, 4–24. [Google Scholar]
  39. Angeli, C.; Valanides, N. Epistemological and methodological issues for the conceptualization, development, and assessment of ICT–TPCK: Advances in technological pedagogical content knowledge (TPCK). Comput. Educ. 2009, 52, 154–168. [Google Scholar] [CrossRef]
  40. Angeli, C.; Valanides, N. TPCK in pre-service teacher education: Preparing primary education students to teach with technology. In Proceedings of the AERA Annual Conference, New York, NY, USA, 24–28 March 2008. [Google Scholar]
  41. Celik, I.; Sahin, I.; Akturk, A.O. Analysis of the relations among the components of technological pedagogical and content knowledge (TPACK): A structural equation model. J. Educ. Comput. Res. 2014, 51, 1–22. [Google Scholar] [CrossRef]
  42. Pamuk, S.; Ergun, M.; Cakir, R.; Yilmaz, H.B.; Ayas, C. Exploring relationships among TPACK components and development of the TPACK instrument. Educ. Inf. Technol. 2015, 20, 241–263. [Google Scholar] [CrossRef]
  43. Khine, M.S.; Ali, N.; Afari, E. Exploring relationships among TPACK constructs and ICT achievement among trainee teachers. Educ. Inf. Technol. 2017, 22, 1605–1621. [Google Scholar] [CrossRef]
  44. Shulman, L.S. Those who understand: Knowledge growth in teaching. Educ. Res. 1986, 15, 4–14. [Google Scholar] [CrossRef]
  45. Shulman, L. Knowledge and teaching: Foundations of the new reform. Harv. Educ. Rev. 1987, 57, 1–23. [Google Scholar] [CrossRef]
  46. Pierson, M.E. Technology Integration Practice as a Function of Pedagogical Expertise. J. Res. Comput. Educ. 2001, 33, 413–430. [Google Scholar] [CrossRef]
  47. Keating, T.; Evans, E. Three Computers in the Back of the Classroom: Preservice teachers’ conceptions of technology integration. In Proceedings of the Society for Information Technology & Teacher Education International Conference, Orlando, FL, USA, 5–10 March 2001; Association for the Advancement of Computing in Education (AACE): Waynesville, NC, USA, 2001; pp. 1671–1676. [Google Scholar]
  48. Guerrero, S.M. Teacher knowledge and a new domain of expertise: Pedagogical technology knowledge. J. Educ. Comput. Res. 2005, 33, 249–267. [Google Scholar] [CrossRef]
  49. Koehler, M.J.; Mishra, P. What happens when teachers design educational technology? The development of technological pedagogical content knowledge. J. Educ. Comput. Res. 2005, 32, 131–152. [Google Scholar] [CrossRef]
  50. Thompson, A.D.; Mishra, P. Editors’ remarks: Breaking news: TPCK becomes TPACK! J. Comput. Teach. Educ. 2007, 24, 38–64. [Google Scholar]
  51. Koehler, M.J.; Mishra, P. Introducing tpck. In Handbook of Technological Pedagogical Content Knowledge for Educators; Routledge: New York, NY, USA, 2008; Volume 1, pp. 3–29. [Google Scholar]
  52. Cox, S. A Conceptual Analysis of Technological Pedagogical Content Knowledge; Brigham Young University: Provo, UT, USA, 2008. [Google Scholar]
  53. Kanbul, S.; Adamu, I.; Usman, A.G.; Abba, S.I. Coupling TPACK Instructional Model with Computing Artificial Intelligence Techniques to Determine Technical and Vocational Education Teacher’s Computer and ICT Tools Competence. Preprints 2022, 2022030048. [Google Scholar] [CrossRef]
  54. Jüttner, M.; Boone, W.; Park, S.; Neuhaus, B.J. Development and use of a test instrument to measure biology teachers’ content knowledge (CK) and pedagogical content knowledge (PCK). Educ. Assess. Eval. Account. 2013, 25, 45–67. [Google Scholar] [CrossRef]
  55. Koh, J.H.L.; Chai, C.S.; Tsai, C.-C. Examining the technological pedagogical content knowledge of Singapore pre-service teachers with a large-scale survey. J. Comput. Assist. Learn. 2010, 26, 563–573. [Google Scholar] [CrossRef]
  56. Watzke, J.L. Foreign language pedagogical knowledge: Toward a developmental theory of beginning teacher practices. Mod. Lang. J. 2007, 91, 63–82. [Google Scholar] [CrossRef]
  57. Gatbonton, E. Investigating experienced ESL teachers’ pedagogical knowledge. Can. Mod. Lang. Rev. 2000, 56, 585–616. [Google Scholar] [CrossRef]
  58. Kabakci Yurdakul, I.; Odabasi, H.F.; Kilicer, K.; Coklar, A.N.; Birinci, G.; Kurt, A.A. The development, validity and reliability of TPACK-deep: A technological pedagogical content knowledge scale. Comput. Educ. 2012, 58, 964–977. [Google Scholar] [CrossRef]
  59. Chai, C.S.; Koh, J.H.L.; Tsai, C.-C. Facilitating preservice teachers’ development of technological, pedagogical, and content knowledge (TPACK). J. Educ. Technol. Soc. 2010, 13, 63–73. [Google Scholar]
  60. Abbitt, J.T. Measuring Technological Pedagogical Content Knowledge in Preservice Teacher Education. J. Res. Technol. Educ. 2011, 43, 281–300. [Google Scholar] [CrossRef]
  61. Hill, H.C.; Ball, D.L.; Schilling, S.G. Unpacking pedagogical content knowledge: Conceptualizing and measuring teachers’ topic-specific knowledge of students. J. Res. Math. Educ. 2008, 39, 372–400. [Google Scholar] [CrossRef]
  62. Schmelzing, S.; van Driel, J.H.; Jüttner, M.; Brandenbusch, S.; Sandmann, A.; Neuhaus, B.J. Development, evaluation, and validation of a paper-and-pencil test for measuring two components of biology teachers’ pedagogical content knowledge concerning the “cardiovascular system”. Int. J. Sci. Math. Educ. 2013, 11, 1369–1390. [Google Scholar] [CrossRef]
  63. Baser, D.; Kopcha, T.J.; Ozden, M.Y. Developing a technological pedagogical content knowledge (TPACK) assessment for preservice teachers learning to teach English as a foreign language. Comput. Assist. Lang. Learn. 2016, 29, 749–764. [Google Scholar] [CrossRef]
  64. Lin, T.-C.; Tsai, C.-C.; Chai, C.S.; Lee, M.-H. Identifying Science Teachers’ Perceptions of Technological Pedagogical and Content Knowledge (TPACK). J. Sci. Educ. Technol. 2013, 22, 325–336. [Google Scholar] [CrossRef]
  65. Lee, M.-H.; Tsai, C.-C. Exploring teachers’ perceived self efficacy and technological pedagogical content knowledge with respect to educational use of the World Wide Web. Instr. Sci. 2010, 38, 1–21. [Google Scholar] [CrossRef]
  66. Schmidt, D.A.; Baran, E.; Thompson, A.D.; Mishra, P.; Koehler, M.J.; Shin, T.S. Technological Pedagogical Content Knowledge (TPACK): The Development and Validation of an Assessment Instrument for Preservice Teachers. J. Res. Technol. Educ. 2009, 42, 123–149. [Google Scholar] [CrossRef]
  67. Bernstein, E.; Putnam, F.; Carlson, E. Development, Reliability, and Validity of a Dissociation Scale. J. Nerv. Ment. Dis. 1986, 174, 727–735. [Google Scholar] [CrossRef]
  68. Thompson, B. Exploratory and Confirmatory Factor Analysis: Understanding Concepts and Applications; American Psychological Association: Washington, DC, USA, 2004; Volume 10694, p. 3. [Google Scholar]
  69. Graham, R.; Burgoyne, N.; Cantrell, P.; Smith, L.; St Clair, L.; Harris, R. Measuring the TPACK confidence of inservice science teachers. TechTrends 2009, 53, 70–79. [Google Scholar]
  70. Koh, J.H.L.; Chai, C.S.; Tsai, C.-C. Examining practicing teachers’ perceptions of technological pedagogical content knowledge (TPACK) pathways: A structural equation modeling approach. Instr. Sci. 2013, 41, 793–809. [Google Scholar] [CrossRef]
  71. Chai, C.S.; Ng, E.M.; Li, W.; Hong, H.-Y.; Koh, J.H. Validating and modelling technological pedagogical content knowledge framework among Asian preservice teachers. Australas. J. Educ. Technol. 2013, 29. [Google Scholar] [CrossRef]
  72. DeVellis, R.F.; Thorpe, C.T. Scale Development: Theory and Applications; Sage publications: New York, NY, USA, 2021. [Google Scholar]
  73. Landry, G.A. Creating and Validating an Instrument to Measure Middle School Mathematics Teachers’ Technological Pedagogical Content Knowledge (TPACK). Ph.D. Thesis, University of Tennessee-Knoxville, Knoxville, TN, USA, 2010. [Google Scholar]
  74. Smith, P.G.; Zelkowski, J. Validating a TPACK instrument for 7–12 mathematics in-service middle and high school teachers in the United States. J. Res. Technol. Educ. 2023, 55, 858–876. [Google Scholar] [CrossRef]
  75. Davis, L.L. Instrument review: Getting the most from a panel of experts. Appl. Nurs. Res. 1992, 5, 194–197. [Google Scholar] [CrossRef]
  76. Hardesty, D.M.; Bearden, W.O. The use of expert judges in scale development: Implications for improving face validity of measures of unobservable constructs. J. Bus. Res. 2004, 57, 98–107. [Google Scholar] [CrossRef]
  77. Costello, A.B.; Osborne, J. Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Pract. Assess. Res. Eval. 2005, 10, 7. [Google Scholar]
  78. Conway, J.M.; Huffcutt, A.I. A Review and Evaluation of Exploratory Factor Analysis Practices in Organizational Research. Organ. Res. Methods 2003, 6, 147–168. [Google Scholar] [CrossRef]
  79. Izquierdo, I.; Olea, J.; Abad, F.J. Exploratory factor analysis in validation studies: Uses and recommendations. Psicothema 2014, 26, 395–400. [Google Scholar]
  80. Ferguson, E.; Cox, T. Exploratory factor analysis: A users’ guide. Int. J. Sel. Assess. 1993, 1, 84–94. [Google Scholar] [CrossRef]
  81. Hair, J.F.; Anderson, R.E.; Tatham, R.L.; Black, W.C. Análisis Multivariante [Multivariate Analysis]; Pearson Prentice Hall: Madrid, Spain, 2004. [Google Scholar]
  82. Ab Hamid, M.R.; Sami, W.; Mohmad Sidek, M.H. Discriminant validity assessment: Use of Fornell & Larcker criterion versus HTMT criterion. J. Phys. Conf. Ser. 2017, 890, 012163. [Google Scholar] [CrossRef]
  83. Ursachi, G.; Horodnic, I.A.; Zait, A. How Reliable are Measurement Scales? External Factors with Indirect Influence on Reliability Estimators. Procedia Econ. Financ. 2015, 20, 679–686. [Google Scholar] [CrossRef]
  84. Henseler, J.; Ringle, C.M.; Sarstedt, M. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 2015, 43, 115–135. [Google Scholar] [CrossRef]
  85. Hulland, J. Use of partial least squares (PLS) in strategic management research: A review of four recent studies. Strateg. Manag. J. 1999, 20, 195–204. [Google Scholar] [CrossRef]
  86. Zhao, J.; Wijaya, T.T.; Mailizar, M.; Habibi, A. Factors Influencing Student Satisfaction toward STEM Education: Exploratory Study Using Structural Equation Modeling. Appl. Sci. 2022, 12, 9717. [Google Scholar] [CrossRef]
  87. Harrington, D. Confirmatory Factor Analysis; Oxford University Press: Oxford, UK, 2009. [Google Scholar]
  88. Gay, L.R.; Airasian, P. Educational Research: Consequences for Analysis and Applications; Pearson: Upper Saddle River, NJ, USA, 2003. [Google Scholar]
  89. Suhr, D.D. Exploratory or confirmatory factor analysis? In Proceedings of the 31st Annual SAS Users Group International Conference, San Francisco, CA, USA, 26–29 March 2006; SAS Institute Inc.: Cary, NC, USA, 2006; Paper 200-231. [Google Scholar]
  90. Sahoo, M. Structural equation modeling: Threshold criteria for assessing model fit. In Methodological Issues in Management Research: Advances, Challenges, and the Way Ahead; Emerald Publishing Limited: Leeds, UK, 2019; pp. 269–276. [Google Scholar] [CrossRef]
  91. Cheung, G.W.; Rensvold, R.B. Evaluating Goodness-of-Fit Indexes for Testing Measurement Invariance. Struct. Equ. Model. A Multidiscip. J. 2002, 9, 233–255. [Google Scholar] [CrossRef]
  92. Chen, X.; Zou, D.; Xie, H.; Cheng, G.; Liu, C. Two decades of artificial intelligence in education. Educ. Technol. Soc. 2022, 25, 28–47. [Google Scholar]
  93. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education–where are the educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 1–27. [Google Scholar] [CrossRef]
  94. Akyuz, D. Measuring technological pedagogical content knowledge (TPACK) through performance assessment. Comput. Educ. 2018, 125, 212–225. [Google Scholar] [CrossRef]
  95. Park, J.; Teo, T.W.; Teo, A.; Chang, J.; Huang, J.S.; Koo, S. Integrating artificial intelligence into science lessons: Teachers’ experiences and views. Int. J. STEM Educ. 2023, 10, 61. [Google Scholar] [CrossRef]
  96. Kong, S.-C.; Lai, M.; Li, Y. Scaling up a teacher development programme for sustainable computational thinking education: TPACK surveys, concept tests and primary school visits. Comput. Educ. 2023, 194, 104707. [Google Scholar] [CrossRef]
  97. Sobel, M.E. Asymptotic confidence intervals for indirect effects in structural equation models. Sociol. Methodol. 1982, 13, 290–312. [Google Scholar] [CrossRef]
  98. Velander, J.; Taiye, M.A.; Otero, N.; Milrad, M. Artificial Intelligence in K-12 Education: Eliciting and reflecting on Swedish teachers’ understanding of AI and its implications for teaching & learning. Educ. Inf. Technol. 2023. [Google Scholar] [CrossRef]
  99. Graham, J.; Nosek, B.A.; Haidt, J.; Iyer, R.; Koleva, S.; Ditto, P.H. Mapping the moral domain. J. Personal. Soc. Psychol. 2011, 101, 366–385. [Google Scholar] [CrossRef] [PubMed]
  100. An, X.; Chai, C.S.; Li, Y.; Zhou, Y.; Shen, X.; Zheng, C.; Chen, M. Modeling English teachers’ behavioral intention to use artificial intelligence in middle schools. Educ. Inf. Technol. 2023, 28, 5187–5208. [Google Scholar] [CrossRef]
  101. Koehler, M.J.; Mishra, P.; Kereluik, K.; Shin, T.S.; Graham, C.R. The Technological Pedagogical Content Knowledge Framework. In Handbook of Research on Educational Communications and Technology; Spector, J.M., Merrill, M.D., Elen, J., Bishop, M.J., Eds.; Springer: New York, NY, USA, 2014; pp. 101–111. [Google Scholar] [CrossRef]
  102. Chen, J.; Li, D.; Xu, J. Sustainable Development of EFL Teachers’ Technological Pedagogical Content Knowledge (TPACK) Situated in Multiple Learning Activity Systems. Sustainability 2022, 14, 8934. [Google Scholar] [CrossRef]
  103. Archambault, L.M.; Barnett, J.H. Revisiting technological pedagogical content knowledge: Exploring the TPACK framework. Comput. Educ. 2010, 55, 1656–1662. [Google Scholar] [CrossRef]
  104. Aust, R.; Newberry, B.; O’Brien, J.; Thomas, J. Learning Generation: Fostering innovation with tomorrow’s teachers and technology. J. Technol. Teach. Educ. 2005, 13, 167. [Google Scholar]
  105. Teo, T. Pre-service teachers’ attitudes towards computer use: A Singapore survey. Australas. J. Educ. Technol. 2008, 24, 413–424. [Google Scholar] [CrossRef]
  106. Teo, T. Development and validation of the E-learning Acceptance Measure (ElAM). Internet High. Educ. 2010, 13, 148–152. [Google Scholar] [CrossRef]
  107. Teo, T.; Fan, X.; Du, J. Technology acceptance among pre-service teachers: Does gender matter? Australas. J. Educ. Technol. 2015, 31, 235–251. [Google Scholar] [CrossRef]
  108. Rahimi, M.; Yadollahi, S. ICT Use in EFL Classes: A Focus on EFL Teachers’ Characteristics. World J. Engl. Lang. 2011, 1, 17. [Google Scholar] [CrossRef]
  109. Cheng, K.-H. A survey of native language teachers’ technological pedagogical and content knowledge (TPACK) in Taiwan. Comput. Assist. Lang. Learn. 2017, 30, 692–708. [Google Scholar] [CrossRef]
  110. Petrea, R.; Yehuda, P. Exploring TPACK among pre-service teachers in Australia and Israel: Exploring TPACK among preservice teachers in Australia and Israel. Br. J. Educ. Technol. 2019, 50, 2040–2054. [Google Scholar] [CrossRef]
Figure 1. AI-TPACK structural diagram.
Figure 2. AI-TPACK element relationship diagram.
Table 1. Description of the elements of AI-TPACK.

Element | Connotation | References
Content Knowledge (CK) | The knowledge applied by educators when delivering instruction in specific subject domains, such as mathematics or scientific knowledge. | [54,55]
Pedagogical Knowledge (PK) | Knowledge pertaining to the pedagogical process, methodologies, and practices, comprising the formulation of instructional plans, the selection of teaching methods, classroom management strategies, and the assessment of student behavior and academic performance, among other aspects. | [56,57]
AI-Technological Knowledge (AI-TK) | Educators’ understanding and application of available AI technologies. This comprises comprehension of and familiarity with visible and tangible AI platforms, tools, products, and educational resources, and includes adopting a pedagogical approach to using AI in educational contexts, such as fostering AI literacy. | [58,59]
Pedagogical Content Knowledge (PCK) | The knowledge required to select appropriate teaching methods and strategies suited to specific instructional content, including the ability to reconfigure and present information to improve pedagogical outcomes. | [60,61,62]
AI-Technological Content Knowledge (AI-TCK) | Teachers’ use of AI to provide learners with highly immersive and interactive learning experiences suited to individual knowledge levels, cognitive states, and learning preferences. | [63,64]
AI-Technological Pedagogical Knowledge (AI-TPK) | A dynamic understanding of how the use of AI transforms the teaching and learning processes, including recognizing the mutual support, provisioning, and constraints between AI technologies and pedagogy, and being able to design effective teaching strategies and activities accordingly. | [39]
AI-Technological Pedagogical Content Knowledge (AI-TPACK) | Specific knowledge related to integrating AI technologies into subject-specific instruction: articulating subject concepts using AI technologies, applying pedagogical skills creatively when teaching with these tools, using AI to address challenges students encounter during concept learning, and using AI either to develop new epistemologies or to reinforce existing ones on established foundations. | [22,65,66]
Note: AI encompasses both AI technologies and products. AI technologies comprise machine learning, deep learning, and natural language processing, among others. AI products encompass a wide range of applications, including robotics, intelligent question banks, and intelligent tutoring systems.
Table 2. Sample information.

Variable | Category | Number | Proportion
Gender | Male | 82 | 22.40%
Gender | Female | 284 | 77.60%
Highest educational attainment | Undergraduate | 198 | 54.10%
Highest educational attainment | Master | 152 | 41.53%
Highest educational attainment | Doctor | 16 | 4.37%
Teacher type | Pre-service teacher | 135 | 36.89%
Teacher type | In-service teacher | 231 | 63.11%
Subject category | Arts | 92 | 25.14%
Subject category | Sciences | 274 | 74.86%
Educational stage | Elementary School | 54 | 14.75%
Educational stage | Middle School | 151 | 41.26%
Educational stage | High School | 121 | 33.06%
Educational stage | University | 40 | 10.93%
Table 3. EFA analysis results. Each item is shown with the factor (1–7) on which it loaded, its loading on that factor, and the factor loading reported in the final column of the original table.

Item | Factor | Loading | Factor Loading
AI-TPK-1 | 1 | 0.865 | 0.813
AI-TPK-2 | 1 | 0.887 | 0.791
AI-TPK-3 | 1 | 0.853 | 0.777
AI-TPK-4 | 1 | 0.858 | 0.814
AI-TPK-5 | 1 | 0.852 | 0.814
AI-TPK-6 | 1 | 0.837 | 0.756
AI-TCK-1 | 2 | 0.671 | 0.725
AI-TCK-2 | 2 | 0.722 | 0.750
AI-TCK-3 | 2 | 0.792 | 0.768
AI-TCK-4 | 2 | 0.821 | 0.805
AI-TCK-5 | 2 | 0.868 | 0.836
AI-TCK-6 | 2 | 0.829 | 0.779
AI-TPACK-1 | 3 | 0.823 | 0.785
AI-TPACK-2 | 3 | 0.819 | 0.798
AI-TPACK-3 | 3 | 0.843 | 0.800
AI-TPACK-4 | 3 | 0.868 | 0.821
AI-TPACK-5 | 3 | 0.789 | 0.748
AI-TPACK-6 | 3 | 0.524 | 0.497
PCK-1 | 4 | 0.482 | 0.562
PCK-2 | 4 | 0.726 | 0.689
PCK-3 | 4 | 0.767 | 0.792
PCK-4 | 4 | 0.667 | 0.628
PCK-5 | 4 | 0.709 | 0.533
PCK-6 | 4 | 0.660 | 0.642
PK-1 | 5 | 0.669 | 0.705
PK-2 | 5 | 0.668 | 0.732
PK-3 | 5 | 0.711 | 0.723
PK-4 | 5 | 0.698 | 0.724
PK-5 | 5 | 0.705 | 0.703
PK-6 | 5 | 0.737 | 0.724
CK-1 | 6 | 0.631 | 0.512
CK-2 | 6 | 0.651 | 0.598
CK-3 | 6 | 0.637 | 0.659
CK-4 | 6 | 0.732 | 0.714
CK-5 | 6 | 0.580 | 0.634
CK-6 | 6 | 0.084 | 0.485
AI-TK-1 | 7 | 0.592 | 0.784
AI-TK-2 | 7 | 0.590 | 0.654
AI-TK-3 | 7 | 0.699 | 0.758
AI-TK-4 | 7 | 0.557 | 0.752
AI-TK-5 | 7 | 0.588 | 0.797
AI-TK-6 | 7 | 0.130 | 0.249
Eigenvalue (Factors 1–7): 19.345, 5.837, 1.775, 1.388, 1.280, 1.174, 1.086
Explained variance (%): 46.060, 13.899, 4.225, 3.304, 3.048, 2.795, 2.585
Cumulative explained variance (%): 46.060, 59.959, 64.184, 67.488, 70.536, 73.331, 75.916
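
For readers who wish to reproduce this style of analysis, the sketch below runs a seven-factor EFA in Python with the factor_analyzer package. The data file name and the promax rotation are our assumptions for illustration; the table above does not restate the paper's extraction settings.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

# One row per respondent, one numeric (1-5) column per scale item.
responses = pd.read_csv("ai_tpack_items.csv")  # hypothetical file name

# Sampling-adequacy checks usually reported before an EFA.
chi_square, p_value = calculate_bartlett_sphericity(responses)
_, kmo_total = calculate_kmo(responses)

# Extract seven factors, one per hypothesized AI-TPACK construct.
fa = FactorAnalyzer(n_factors=7, rotation="promax")
fa.fit(responses)

# Rotated loadings per item, plus variance explained per factor
# (cf. the last three rows of Table 3).
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
variance, prop_var, cum_var = fa.get_factor_variance()
```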
Table 4. Indicators of construct validity. Each row lists a factor with its AVE, CR, and Cronbach's α, followed by the standardized estimates of its indicators; all estimates are significant (p = 0.000).

CK (AVE = 0.503, CR = 0.832, α = 0.809): CK-1 = 0.709; CK-2 = 0.655; CK-3 = 0.573; CK-4 = 0.859; CK-5 = 0.720
PK (AVE = 0.506, CR = 0.858, α = 0.837): PK-1 = 0.701; PK-2 = 0.765; PK-3 = 0.805; PK-4 = 0.755; PK-5 = 0.575; PK-6 = 0.637
AI-TK (AVE = 0.713, CR = 0.925, α = 0.912): AI-TK-1 = 0.710; AI-TK-2 = 0.821; AI-TK-3 = 0.878; AI-TK-4 = 0.931; AI-TK-5 = 0.864
PCK (AVE = 0.502, CR = 0.856, α = 0.806): PCK-1 = 0.572; PCK-2 = 0.726; PCK-3 = 0.773; PCK-4 = 0.618; PCK-5 = 0.703; PCK-6 = 0.827
AI-TCK (AVE = 0.776, CR = 0.954, α = 0.945): AI-TCK-1 = 0.835; AI-TCK-2 = 0.818; AI-TCK-3 = 0.881; AI-TCK-4 = 0.927; AI-TCK-5 = 0.943; AI-TCK-6 = 0.873
AI-TPK (AVE = 0.783, CR = 0.956, α = 0.942): AI-TPK-1 = 0.801; AI-TPK-2 = 0.879; AI-TPK-3 = 0.913; AI-TPK-4 = 0.900; AI-TPK-5 = 0.930; AI-TPK-6 = 0.881
AI-TPACK (AVE = 0.814, CR = 0.956, α = 0.931): AI-TPACK-1 = 0.892; AI-TPACK-2 = 0.881; AI-TPACK-3 = 0.901; AI-TPACK-4 = 0.932; AI-TPACK-5 = 0.903
AI-TPACK scale (AVE = 0.503, CR = 0.860, α = 0.957): CK = 0.339; PK = 0.356; AI-TK = 0.656; PCK = 0.484; AI-TCK = 0.905; AI-TPK = 0.966; AI-TPACK = 0.927
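
The AVE and CR columns follow the standard formulas AVE = (Σλᵢ²)/n and CR = (Σλᵢ)² / [(Σλᵢ)² + Σ(1 − λᵢ²)], where the λᵢ are the standardized loadings. A minimal sketch (Python), spot-checked against the AI-TK rows above:

```python
def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    return sum(l * l for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    total = sum(loadings)
    error = sum(1 - l * l for l in loadings)
    return total * total / (total * total + error)

ai_tk = [0.710, 0.821, 0.878, 0.931, 0.864]    # estimates from Table 4
print(round(ave(ai_tk), 3))                    # 0.712 (Table 4 reports 0.713,
                                               # computed from unrounded loadings)
print(round(composite_reliability(ai_tk), 3))  # 0.925, matching the table
```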
Table 5. SEM fit indices.

Group | Fit index | Criterion | Value
AFI | GFI | >0.90 | 0.921
AFI | AGFI | >0.90 | 0.906
AFI | RMR | <0.05 | 0.046
AFI | RMSEA | <0.08 | 0.068
IFI | NFI | >0.90 | 0.912
IFI | TLI | >0.90 | 0.933
IFI | CFI | >0.90 | 0.938
IFI | RFI | >0.90 | 0.906
PFI | PGFI | >0.5 | 0.658
PFI | PNFI | >0.5 | 0.741
PFI | PCFI | >0.5 | 0.825
Note: AFI—Absolute Fit Index, GFI—Goodness-of-Fit Index, AGFI—Adjusted Goodness-of-Fit Index, RMR—Root Mean Square Residual, RMSEA—Root Mean Square Error of Approximation, IFI—Incremental Fit Index, NFI—Normed Fit Index, TLI—Non-Normed Fit Index (Tucker–Lewis Index), CFI—Comparative Fit Index, RFI—Relative Fit Index, PFI—Parsimonious Fit Index, PGFI—Parsimonious Goodness-of-Fit Index, PNFI—Parsimonious Normed Fit Index, and PCFI—Parsimonious Comparative Fit Index.
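
As an illustration of how such indices are produced, the sketch below fits a deliberately reduced two-construct version of the measurement model with the Python package semopy. The model syntax, item column names, and data file are our assumptions; semopy's calc_stats reports a subset of the indices above (chi-square, GFI, AGFI, NFI, TLI, CFI, RMSEA, AIC, BIC), so the parsimony indices would need to be computed separately.

```python
import pandas as pd
import semopy

# Reduced illustration: one structural path between two latent constructs.
# The full model would specify all seven AI-TPACK factors and their paths.
desc = """
AI_TPK   =~ AI_TPK_1 + AI_TPK_2 + AI_TPK_3 + AI_TPK_4 + AI_TPK_5 + AI_TPK_6
AI_TPACK =~ AI_TPACK_1 + AI_TPACK_2 + AI_TPACK_3 + AI_TPACK_4 + AI_TPACK_5
AI_TPACK ~ AI_TPK
"""

data = pd.read_csv("ai_tpack_items.csv")  # hypothetical file name
model = semopy.Model(desc)
model.fit(data)

print(semopy.calc_stats(model).T)  # GFI, AGFI, NFI, TLI, CFI, RMSEA, etc.
```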
Table 6. Standardized overall effect distribution (rows: predictors; columns: outcomes).

Predictor | PCK | AI-TCK | AI-TPK | AI-TPACK
CK | 0.541 | 0.151 | 0.000 | 0.052
PK | 0.513 | 0.000 | 0.006 | 0.088
AI-TK | 0.000 | 0.720 | 0.684 | 0.654
PCK | 1.000 | 0.000 | 0.000 | 0.008
AI-TCK | 0.000 | 1.000 | 0.000 | 0.207
AI-TPK | 0.000 | 0.000 | 1.000 | 0.870
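
The overall (total) effects in Table 6 combine direct paths with effects mediated through the composite knowledge elements. For a recursive path model with direct-effect matrix B, the total-effect matrix is T = B + B² + B³ + … = (I − B)⁻¹ − I. The sketch below demonstrates this identity with placeholder coefficients (rounded, for illustration only; they are not the paper's estimates):

```python
import numpy as np

# Variable order: CK, PK, AI-TK, PCK, AI-TCK, AI-TPK, AI-TPACK.
names = ["CK", "PK", "AI-TK", "PCK", "AI-TCK", "AI-TPK", "AI-TPACK"]

# B[i, j] = illustrative standardized direct effect of variable j on variable i.
B = np.zeros((7, 7))
B[3, 0], B[3, 1] = 0.54, 0.51   # CK -> PCK, PK -> PCK
B[4, 0], B[4, 2] = 0.15, 0.72   # CK -> AI-TCK, AI-TK -> AI-TCK
B[5, 2] = 0.68                  # AI-TK -> AI-TPK
B[6, 4], B[6, 5] = 0.21, 0.87   # AI-TCK -> AI-TPACK, AI-TPK -> AI-TPACK

# Total effect of j on i: direct path plus all mediated chains.
T = np.linalg.inv(np.eye(7) - B) - np.eye(7)
print(T[6, 0])  # CK -> AI-TPACK via AI-TCK: 0.15 * 0.21 = 0.0315
```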
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
