Article

Empirical Investigation of Multilayered Framework for Predicting Academic Performance in Open and Distance Learning

by Muyideen Dele Adewale 1,*, Ambrose Azeta 2, Adebayo Abayomi-Alli 3 and Amina Sambo-Magaji 4
1 Africa Centre of Excellence on Technology Enhanced Learning, National Open University of Nigeria, Abuja 900108, Nigeria
2 Department of Software Engineering, Namibia University of Science and Technology, Windhoek 13388, Namibia
3 Department of Computer Science, Federal University of Agriculture, Abeokuta 111101, Nigeria
4 Digital Literacy & Capacity Development Department, National Information Technology Development Agency, Abuja 900247, Nigeria
* Author to whom correspondence should be addressed.
Electronics 2024, 13(14), 2808; https://doi.org/10.3390/electronics13142808
Submission received: 1 May 2024 / Revised: 6 June 2024 / Accepted: 20 June 2024 / Published: 17 July 2024

Abstract
Integrating artificial intelligence (AI) in open and distance learning (ODL) necessitates comprehensive frameworks to evaluate its educational implications. Existing models lack a robust multilayered analysis of AI’s impact on educational outcomes in ODL. This study introduces a Multilayered Process Framework designed to predict academic performance in ODL and enhance inclusivity, aligning with UNESCO’s 2030 educational goals. The current research employed structural equation modelling (SEM) to analyse the impact of AI adoption, focusing on the initial layers of the process framework. Preliminary validation of the SEM framework showed a good model fit, with a Chi-square/df ratio of 2.34, Root Mean Square Error of Approximation (RMSEA) of 0.045, and Comparative Fit Index (CFI) of 0.97, indicating the model’s effectiveness in capturing the complexities of AI impacts on student outcomes. This framework provides a structured, multilayered approach to understanding AI’s role in education, facilitating the development of equitable and accessible AI-driven educational technologies. It lays the foundational work for expanding research into predictive analytics with a support vector machine (SVM), aiming to universalise quality education and ensure global educational equity. This study highlights the practical implications for integrating AI in educational settings and suggests future research directions to enhance the adaptability and effectiveness of AI-driven educational interventions.

1. Introduction

Artificial intelligence (AI) has catalysed a paradigm shift in educational landscapes, significantly changing teaching methods and student experiences. Ref. [1] emphasised AI’s role in personalising education through sophisticated, data-centric strategies, thus elevating the quality of learning and students’ academic paths. As AI becomes increasingly integral to education, particularly in open and distance learning (ODL), educators, policymakers, and AI developers must understand its extensive effects.
However, harnessing AI’s full potential in ODL requires an intricate understanding of its multifaceted implications—a challenge taken up by this research. Pioneering efforts by [2,3,4] have highlighted AI’s potential, yet the realisation of this promise within ODL remains a work in progress.
This study builds upon the work of [5] by introducing a comprehensive, multilayered process framework for AI adoption in ODL. This advanced framework enhances previous models by incorporating multiple layers of analysis, which provide a deeper understanding of AI’s influence on learner outcomes. The framework, aligned with UNESCO’s 2030 vision for inclusive and equitable education, employs a novel multi-layer approach to predict and assess AI’s impact on student performance in ODL. It integrates various predictive algorithms and considers gender and regional disparities.
The current study expands on the role of a multi-layer process framework in AI integration within ODL settings by building on foundational concepts from [5]. A hierarchical approach dissects AI’s influence across AI adoption factors, from individual learner interactions to broad systemic changes. To assess how AI impacts student performance across diverse settings, we evaluated critical aspects such as AI alignment and relevance, system quality, user satisfaction, and moderators such as gender and region. This approach not only elucidates AI’s diverse impacts but also aids in developing tailored educational technologies that effectively bridge the theoretical potential to practical applications.
The framework’s architecture comprises five distinct layers: starting with the foundational Common Components layer, followed by Structural Equation Modelling (SEM), Support Vector Machine (SVM), Improved SVM, and culminating with a Comparative Analysis layer. This structured approach allows for a more nuanced analysis than single-algorithm models, enhancing our understanding of the direct effects of AI on academic outcomes.
By introducing this framework, this study highlights the critical role of AI in education and provides a methodical way to examine its effects. The goal is to ensure that AI-driven educational advancements are equitable and accessible, reflecting a commitment to global educational equity. This approach aims to benefit students from all backgrounds, irrespective of gender or geographical location, by tailoring educational technologies to meet diverse needs effectively.
While the problem statement outlines the integration challenges of AI in ODL, further elaboration of the existing knowledge gaps is crucial. The current literature primarily focuses on AI’s role in enhancing learning without thoroughly addressing how these tools meet the diverse needs of educational environments or the complexities of their integration across different geographic and educational contexts. These overlooked aspects are addressed by proposing a tailored, multilayered analytical framework that specifically caters to the nuanced needs of ODL systems. This approach not only fills the identified gaps but also advances the application of AI in education, ensuring more effective, inclusive, and adaptive educational technologies.
Our research therefore seeks to bridge this knowledge gap. The core ambition is to architect a process framework that remains agnostic to specific machine learning algorithms. This universality is critical because the educational milieu is in constant flux, and tethering the framework to a specific algorithm might render it obsolete or less effective over time. By establishing an algorithm-independent framework, we ensure flexibility, adaptability, and longevity in its application. The study makes a theoretical contribution by conceptualising a multilayered process framework; integrating various AI algorithms, including SVM, provides a robust foundation for future empirical testing. The paper outlines the proposed framework through a logical progression of analytical layers. This layering of different analytical methods represents an advancement in predictive modelling within the educational sector, particularly ODL, and the framework’s consideration of gender and regional disparities reflects a commitment to inclusivity and equity in educational research.
An exhaustive review of articles from 2014 to 2024 was conducted to establish an empirical foundation, highlighting critical factors for AI integration in ODL, revealing gender disparities in AI applications, and illuminating AI’s predictive power regarding academic outcomes. This endeavour aims to guide stakeholders in customising AI tools for diverse ODL student populations, promoting optimised learning and championing gender equity. This study introduces a novel Multilayered Process Framework that enhances the understanding of AI’s impact on student outcomes by integrating various predictive algorithms and focusing on equity and accessibility. It also emphasises the framework’s adaptability to various cutting-edge machine learning algorithms, ensuring its relevance in evolving educational contexts. Furthermore, by addressing gender and regional disparities, this research underscores a commitment to inclusivity, striving to make AI-driven educational advancements accessible to all, thereby contributing to global educational equity.
This endeavour goes beyond theoretical framework development, as shown in the work of [6], to include testing and validation with actual data. This work provides empirical evidence to support the framework previously discussed. The study’s current stage involves initial findings that are instrumental in refining the research tool (the questionnaire). The current scope is to execute layers 1 and 2 of the framework, which essentially means using SEM to analyse the intricacies of AI impacts on academic outcomes. More comprehensive validation using advanced machine-learning techniques is planned for the future.
To enhance the impact of our study’s objectives, we set performance benchmarks: a Chi-square/df ratio below 3, a Root Mean Square Error of Approximation (RMSEA) below 0.06, and a Comparative Fit Index (CFI) above 0.95. These benchmarks are recognised standards in psychometrics and the social sciences and were selected to ensure the statistical robustness and validity of our SEM results. These measurable targets demonstrate our framework’s effectiveness in analysing AI’s impact on academic outcomes in ODL environments, providing clear and transparent performance standards to substantiate our conclusions and evaluate our study’s success.
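As a minimal sketch, the benchmark checks described above can be expressed in code (Python here for illustration; the study’s analyses were conducted in R). The threshold values come from the text, and the input values are the fit indices reported in the abstract:

```python
# Benchmark checks for SEM fit indices, per the thresholds stated in the text.
BENCHMARKS = {
    "chisq_df": lambda v: v < 3.0,    # Chi-square/df ratio below 3
    "rmsea":    lambda v: v < 0.06,   # RMSEA below 0.06
    "cfi":      lambda v: v > 0.95,   # CFI above 0.95
}

def meets_benchmarks(fit_indices):
    """Return a dict mapping each fit index to whether it meets its benchmark."""
    return {name: BENCHMARKS[name](value) for name, value in fit_indices.items()}

# Values reported in the preliminary validation (see Abstract).
reported = {"chisq_df": 2.34, "rmsea": 0.045, "cfi": 0.97}
print(meets_benchmarks(reported))  # all three benchmarks are met
```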
The remaining parts of this paper are organised as follows: Section two presents the literature review. Section three describes the materials and methods employed in the study. It comprises the research design, process framework, model formulation, single-layered framework, and ethical considerations. Section four highlights the practical implications, limitations and future research endeavours, while Section five concludes the paper.

2. Literature Review

AI has rapidly evolved as a transformative force with the potential to reshape many sectors, including education. Scholarly discourse has been ignited, especially concerning AI’s incorporation into the educational sphere, given the profound implications for teaching, learning, and the overall academic milieu. This comprehensive review aims to shed light on AI’s multifaceted role, potential benefits, challenges, and the path forward, especially within ODL paradigms. The studies were curated based on their relevance to applying AI in distance learning contexts, with a particular emphasis on peer-reviewed articles, empirical research, and case studies published within the last decade. This time frame ensures the inclusion of recent advancements in AI technologies and their educational applications.
Profound transformations in pedagogical strategies, mentoring approaches, and educational content have marked the integration of AI in distance education. Several scholars have explored the myriad potentials of AI in ODL settings. For instance, [7] emphasised AI’s potential to enrich ODL methodologies, elucidating the merits and challenges of such computer-assisted systems. On the other hand, refs. [8,9] have shed light on the tangible enhancements AI brings to e-learning platforms, offering adaptive and intelligent services to stakeholders. These insights converge with [10]’s emphasis on understanding the motivators behind AI’s adoption in distance education and its potential to elevate the educational experiences.
However, there remain areas that have yet to be fully explored. Ref. [11] indicated the incomplete integration of AI, especially in open-source learning management systems. This gap highlights the importance of research like ours, which aims to offer a comprehensive process framework and research model, ensuring a more holistic integration of AI into ODL environments. A confluence of factors propels AI’s integration into ODL. Reviewing leading technology acceptance theories, such as the Technology Acceptance Model (TAM) [12] and the Unified Theory of Acceptance and Use of Technology (UTAUT) [13], offers insight into these determinants. Collectively, these theories highlight the ease of use, usefulness, system quality, user satisfaction, and social influences as key influencers.
Further, the recent literature underscores the value of personalised learning in ODL contexts. Refs. [3,14] assert that efficiency, cost-effectiveness, and heightened student engagement are pivotal in AI adoption in ODL. Emotional, real-time feedback [15] and learning-related anxiety play a role in influencing AI adoption [14]. The literature consistently highlights a positive relationship between AI’s application in ODL and enhanced student outcomes. AI-driven platforms significantly improve student performance through personalised learning experiences and immediate feedback [16,17]. Moreover, studies like [18] exemplify AI’s predictive capabilities in assessing student performance trajectories.
AI’s transformative potential extends beyond academic outcomes and pedagogical techniques to reshape interactions in digital educational environments. Emphasising this, ref. [19] underlines the significance of AI in enhancing the rapport between educators and students in online settings. Ref. [20] further advocates for a nuanced exploration of variables, like gender and geographical disparities, ensuring a comprehensive understanding of AI’s educational dynamics. However, specific challenges need addressing. Concerns have been raised about the overwhelming presence of AI in education, its potential biases, and its influence on self-guided learning [1,21]. These concerns necessitate a robust predictive, process-based framework that considers both AI’s potential and its pitfalls.
When evaluating AI’s influence on ODL, it is imperative to consider moderating factors such as gender and geographical differences, especially between developed and developing nations. Refs. [22,23,24] highlight gender biases and their potential implications for interactions with AI systems. Concurrently, disparities between developed and developing regions can significantly affect AI’s integration and impact. Ref. [25] further underscores the risk of AI intensifying the digital divide, with particular emphasis on challenges faced by women in both developed and developing settings. Some of the previous studies involving ODL include [26,27,28]. In predicting ODL student performance, Support Vector Machine (SVM) stands out as the most precise AI algorithm [18,29,30,31]. Other promising techniques include evolutionary computation and random forest [32,33]. These tools aid in the early detection of at-risk students, ensuring improved educational outcomes.
This literature review is the foundation for our study, which meticulously crafts a process-based framework and research model. Our study decouples from any specific predictive algorithm, offering a universally adaptable model. By intertwining various variables—from AI adoption drivers to gender and regional disparities—we aim to craft a comprehensive, predictive, process-based framework and research model aligning with UNESCO’s 2030 educational aspirations. This inclusivity ensures that the model is versatile, catering to diverse communities and regions, thus expanding opportunities for all.
This comprehensive literature review focuses on specific AI techniques relevant to distance learning, including machine learning algorithms like SVM and random forests. Each technique’s application in enhancing online education, particularly in equity and accessibility, is explored to provide a nuanced understanding of AI’s role in distance learning.
The integration of AI into the educational domain requires thorough research and adaptation. This literature review highlights the need for a predictive, process-based framework. Emphasising robust theoretical integration and ODL-specific customisations is essential. By adopting these insights, the education sector can seamlessly combine AI innovations with pedagogical practices, ensuring a superior learning experience. Table 1 summarises the limitations identified in the related literature and how this current research addresses them. The table effectively bridges the gap between the literature review and the contributions of this study.

3. Materials and Methods

A thorough literature review was undertaken to create a process framework and research model for predicting the impact of AI adoption on students’ academic performance. This approach aimed to harness insights from current research to shape the conceptual framework and model. The following section systematically elaborates on the methodology employed. Our method was divided into a comprehensive literature review and a subsequent design process, emphasising a design-centric approach. The intricate procedures for both phases are detailed below.

3.1. Research Design

The aim was to comprehensively examine, assess, and consolidate existing scholarly literature on implementing AI in educational settings, focusing on its impact on student performance. The primary objective of this research was to comprehensively review and analyse the current body of literature to identify the various factors and theoretical frameworks relevant to adopting AI in educational contexts. This investigation aimed to facilitate the development of a detailed procedural framework and a predictive research model specifically tailored to understanding the effects of AI on ODL student academic outcomes.
I. Search Methodology
A comprehensive search strategy was employed, exploring multiple esteemed academic databases such as Google Scholar, Scopus, and Web of Science. The search involved specific keyword combinations, including terms like “Artificial Intelligence” or “AI”, linked with “student performance” or “academic outcomes”, and “adoption factors” or “integration” to capture a broad spectrum of relevant scholarly literature.
To ensure a thorough and methodical approach in our literature search, specific strategies and parameters were set:
  • Search Equations and Boolean Operators: We utilised Boolean operators such as “AND” and “OR” to refine our searches. For instance, searches included equations like “(‘Artificial Intelligence’ OR ‘AI’) AND (‘student performance’ OR ‘academic outcomes’) AND (‘adoption factors’ OR ‘integration’)”.
  • Database-Specific Strategies: Each database was queried with adaptations to its unique indexing and search capabilities. For example, Scopus used specific filters for document type and publication year to streamline the search results.
  • Keyword Variations: To accommodate variations in terminology across different studies and disciplines, multiple synonyms and related terms were used for each key concept, enhancing the breadth and depth of the search.
II. Criteria for Literature Selection
  • Inclusion Criteria:
    - Articles and conference papers from peer-reviewed sources focusing on AI in ODL environments.
    - Literature discussing AI adoption theories, models, or frameworks in education.
    - Recent publications (within the last eight years) in English for current relevance.
  • Exclusion Criteria:
    - Articles and conference proceedings that were not peer-reviewed or academic.
    - Studies not explicitly centred on AI adoption in ODL settings.
III. Data Compilation Process
From each selected publication, we:
  • documented the authors and year of publication;
  • identified the objectives or research questions;
  • extracted the key findings, particularly on AI adoption factors;
  • noted any significant frameworks, models, or theories mentioned.
These methods were designed to capture a comprehensive range of articles, ensuring that our review was exhaustive and included all pertinent studies.
The ten-year timeframe for our literature search was deliberately chosen to capture both foundational studies and recent advancements in AI integration within ODL. This window spans the significant technological and educational developments that have shaped AI’s adoption in education. By extending our review to ten years, we covered both the initial applications and the ongoing evolution of AI technologies in educational settings, providing historical context and insight into long-term trends.

3.2. Design of Process-Based Framework

Upon completing the literature review, the focus shifted to the design phase, employing a systematic approach to ensure the new models’ robustness and relevancy. After the exhaustive review and synthesis, the identified themes, principles, and theoretical constructs were utilised as foundational elements in designing the process framework and research model. Various tools and techniques, including mind mapping and conceptual modelling, were employed to visualise and structure the relationships between the elements and constructs drawn from the literature.
This stage also involved careful consideration of the theoretical coherence and logical consistency of the designed framework and model. The constructs were analysed and organised to logically depict their flow and interrelation, reflecting the dynamics of AI adoption and its impact on students’ academic performance.
I. Identification of Core Components
The synthesised literature identified the key components influencing AI adoption and its impact on student performance. These components acted as building blocks for the new design.
II. Framework Drafting
We crafted an initial draft of the process framework, leveraging insights from the literature and delineating the sequence and interrelationship of core components, from AI adoption determinants to their eventual influence on academic outcomes.
III. Model Conceptualisation
We then conceptualised the research model, detailing the variables, their interrelationships, and their theoretical underpinning. The model’s conceptualisation aims to understand how AI adoption could comprehensively predict academic performance.
IV. Evaluation and Refinement
The drafted framework and model underwent iterative refinement, which involved revisiting the literature, ensuring alignment with the identified factors, and making necessary adjustments for clarity and coherence. Once the initial designs were developed, they underwent a series of evaluations to ensure their robustness, relevance, and applicability. The frameworks were scrutinised for inconsistencies, redundancies, or gaps, and necessary modifications were made to enhance their comprehensiveness and coherence.
V. Tool Selection
Given the design focus, tools were selected to visually represent and articulate the process framework and research model. Software such as Microsoft Visio 2013 provided the means to create clear, visually appealing, and comprehensive diagrams, ensuring the resultant designs were scholarly, robust, and user-friendly.
Figure 1 is the layered architecture depicting the specific procedural steps of the three chosen algorithms. The architecture has five layers: layers 1 through 4 each consist of three components, and layer 5 consists of a single component.
The framework begins by identifying key factors affecting student performance in ODL from the educational, psychology, and technology adoption literature and integrating them into a coherent model for testing specific hypotheses. It employs structural equation modelling (SEM) to articulate and validate these complex relationships, ensuring theoretical and statistical rigour. The model then advances to a machine learning perspective using SVM for its classification accuracy and capability to manage high-dimensional data, optimising SVM algorithms for varied performance outcomes and refining through feature engineering. The improved SVM layer incorporates kernel adjustments and hyperparameter optimisation, enhancing predictive accuracy. The framework culminates in a comparative analysis layer evaluating SEM, SVM, and improved SVM models using accuracy, precision, recall, and F1 score, supporting model selection for educational contexts. This structured methodology aims for empirical validation and real-world applicability in ODL settings, employing multiple algorithms to comprehensively analyse AI’s impact on academic performance.
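The hyperparameter optimisation step of the improved SVM layer can be sketched with a minimal, library-free grid search (Python here for illustration; the study itself does not prescribe an implementation). The parameter grid and the `toy_score` function are hypothetical stand-ins for a cross-validated accuracy estimate:

```python
from itertools import product

# Hypothetical grid over kernel choice and SVM hyperparameters (C, gamma),
# mirroring the "kernel adjustments and hyperparameter optimisation" step.
PARAM_GRID = {
    "kernel": ["rbf", "poly"],
    "C": [0.1, 1.0, 10.0],
    "gamma": [0.01, 0.1],
}

def grid_search(evaluate, grid):
    """Return (best_params, best_score) over the Cartesian product of grid values."""
    keys = list(grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy scorer standing in for cross-validated accuracy: it happens to prefer
# an RBF kernel with moderate regularisation. Purely illustrative.
def toy_score(p):
    return (p["kernel"] == "rbf") + 1.0 / (1.0 + abs(p["C"] - 1.0)) + p["gamma"]

best, score = grid_search(toy_score, PARAM_GRID)
print(best)  # {'kernel': 'rbf', 'C': 1.0, 'gamma': 0.1}
```

In practice this search would wrap a real SVM fit and cross-validation loop; the structure of the search, however, is the same.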
To ensure clarity and replicability, the methodology (See Figure 1) underpinning our process framework provides a systematic and structured approach for predicting the impact of AI adoption on students’ academic performance in ODL using multiple algorithms such as SVM, improved SVM and SEM.

3.3. Process of Empirical Data Collection and Analysis

In the SEM phase, we focused on validating our multilayered framework and research model using data from surveys administered to ODL students. These surveys, integral to the framework’s design and detailed in Table 2, assessed the AI-based Moodle platform’s effectiveness and user experience. We evaluated vital variables such as interactive capability (IC), knowledge absorption and user satisfaction (KAUS), and system quality and social influence (SQSI). These variables are based on underlying constructs rooted in models like the Unified Theory of Acceptance and Use of Technology (UTAUT), DeLone & McLean’s Information Systems Success Model (D&M), and the Technology Acceptance Model (TAM). These were harmoniously blended with factors explicitly tailored for ODL domains to address aspects relevant to ODL. This evaluation scheme’s comprehensive and systematic design ensured a thorough assessment, capturing various dimensions of user experience and system efficacy.
Each survey item was meticulously crafted to measure specific constructs, drawing from established theoretical models, ensuring a robust framework for analysing the AI-based Moodle platform’s impact on learning outcomes. The collected data were analysed using SEM to verify the relationships and constructs within our model. Responses were recorded on a Likert scale, with the encoding ranging from “Strongly Disagree” (1) to “Strongly Agree” (5).
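The Likert encoding described above can be sketched as follows; the intermediate labels (“Disagree”, “Neutral”, “Agree”) are assumed, since the text names only the scale endpoints:

```python
# Likert encoding: "Strongly Disagree" (1) ... "Strongly Agree" (5).
# Intermediate labels are assumed; the paper specifies only the endpoints.
LIKERT = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

def encode(responses):
    """Map a list of Likert labels to their numeric codes."""
    return [LIKERT[r] for r in responses]

print(encode(["Strongly Agree", "Neutral", "Disagree"]))  # [5, 3, 2]
```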
Future research stages will incorporate machine-learning algorithms to analyse real-world ODL data further, aiming to enhance our model’s predictive capabilities.
i. Empirical Data Collection: Data are being gathered from ODL systems, focusing on the variables identified in Table 2. The study’s current stage involves initial datasets and findings that are instrumental in refining the research tool (the questionnaire). More comprehensive validation using the SVM and an improved SVM machine learning algorithm is planned for the future.
ii. Application of Machine Learning Algorithms: SEM will be used to validate the relationships between these variables. Future research will use the SVM and an improved SVM to predict student performance based on the identified variables.
iii. Evaluation of Algorithms: We will evaluate the performance of SEM using identified metrics such as the Root Mean Square Error of Approximation (RMSEA), the Comparative Fit Index (CFI), the Tucker-Lewis Index (TLI), and the Standardized Root Mean Square Residual (SRMR). In future research, the SVM, the improved SVM, and SEM will be compared based on accuracy, precision, and their ability to model complex relationships.
iv. Comparative Analysis: The strengths and weaknesses of each algorithm will be analysed in the context of ODL to determine the most effective method for predicting and understanding student performance in these environments. This analysis will be carried out in future research as more data are gathered to train the SVM models.
This approach ensures a comprehensive validation of the research model and framework, providing a robust analysis of AI’s impact on academic outcomes in ODL.
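The classification metrics named in the evaluation step (accuracy, precision, recall, and F1 score) can be computed from binary predictions as in this sketch; the labels and data are illustrative (1 = “pass”, 0 = “at risk”), not from the study:

```python
# Standard binary classification metrics from a confusion-matrix count.
def classification_metrics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Illustrative labels only: 1 = "pass", 0 = "at risk".
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(classification_metrics(y_true, y_pred))
```

The same four numbers would be reported for each of SEM, SVM, and the improved SVM in the comparative-analysis layer.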
This framework facilitates understanding the complex relationships between key factors in AI adoption and academic outcomes by integrating the theoretical foundations of AI, ODL, and machine learning algorithms such as SVM and SEM. The framework is independent of machine learning algorithms, and it guides future empirical studies and practical implementation, enabling educational institutions to leverage AI effectively in ODL to enhance students’ academic performance and learning experiences.
In summary, the process framework adopted for this research work encompasses a comprehensive approach that includes a literature review, research model formulation, data collection, rigorous analysis, predictive modelling, and interpretation of findings. By adhering to this framework, this study aims to ensure methodological coherence and produce valuable insights into the relationship between AI adoption and students’ academic performance in ODL environments.

3.4. Research Model Formulation

The research model delineates the principal elements and correlations in predicting AI adoption’s impact on students’ academic performance. This model integrates three primary aspects: AI adoption factors, moderating factors such as gender and regional/geographical differences, and the outcome variable.

3.4.1. AI Adoption Factors

In ODL, AI adoption factors capture a broad spectrum of variables that directly influence the assimilation and application of AI technologies. The variables used as AI adoption factors and the hypotheses proposed in the work of [5] are adopted in this study.

3.4.2. Moderating Factors Such as Gender and Geographical Differences

The moderating factors, such as gender and geographical differences, are recognised for their influence on the effect of AI adoption on students’ academic performance in ODL. These variables were incorporated into the multilayered framework research model. Model parameters were adjusted, and stratified data analysis was employed to examine how AI adoption impacts diverse student groups and to understand the varied responses across different demographics in ODL settings. This approach ensures that the framework is inclusive and reflects global educational diversity, explicitly focusing on the moderating effects of gender and geographical differences during framework implementation.

3.4.3. Outcome Variable

The outcome variable in this study is students’ academic performance, which is measurable through diverse metrics like final grades, assessment scores, or overall grade point average (GPA). Academic performance is the dependent variable, predicted using AI adoption and moderating factors.
The research model (see Figure 2) hypothesises that AI adoption factors influence students’ uptake of AI in ODL, affecting their academic performance. Moderating factors act as intermediary variables to clarify the relationship between AI adoption and academic outcomes.
This multilayered process framework employs algorithms to develop predictive models that forecast academic performance based on recognised factors and moderators. By incorporating AI adoption and moderating factors and using multiple algorithms, the model offers a systematic methodology to examine the impact of AI on academic performance in ODL. The insights from this study aim to enhance the integration of AI in education and optimise learning experiences in online settings.

3.5. Preliminary Empirical Validation of the Framework

3.5.1. Method of Analysis

This research utilised SEM to analyse collected data and assess the impact of AI adoption on students’ academic performance, considering gender and geographical regions. Currently, the study focuses on SEM analysis as depicted in layers 1 and 2 of Figure 1, with plans to incorporate SVM techniques in future phases once more data are available.
SEM investigates the relationships between AI adoption factors, academic performance, and other moderating variables. Future stages will introduce an enhanced SVM algorithm designed to handle non-linear relationships and noisy data effectively, with improvements for managing missing values, ensuring internal consistency with a Cronbach’s alpha threshold of 0.7 and reducing multicollinearity to enhance reliability and interpretability.
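As a concrete illustration of the internal-consistency check mentioned above, the following minimal Python sketch computes Cronbach's alpha for a multi-item scale and compares it against the 0.7 threshold. The function name and the simulated 4-item scale are illustrative only and are not part of the study's actual pipeline.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 4-item scale: items share a common latent factor plus noise
rng = np.random.default_rng(0)
latent = rng.normal(size=200)
items = np.column_stack([latent + rng.normal(scale=0.5, size=200)
                         for _ in range(4)])
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # comfortably above the 0.7 threshold for this simulated scale
```

Scales with alpha below 0.7 would be flagged for revision before entering the SEM or SVM stages.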
We conducted the SEM analyses using R (version 4.1.2) and the lavaan package (version 0.6-17), ensuring the models' reproducibility and integrity. The subsequent introduction of SVM and its comparative analysis with SEM will complete the framework for predicting academic performance in ODL.

3.5.2. Test for Multicollinearity

The variance inflation factor (VIF) is a statistical tool used to diagnose multicollinearity among predictors in a regression model. It measures how much the variance of an estimated regression coefficient is inflated by collinearities among the predictors. The rule of thumb is that a VIF exceeding 10 indicates high multicollinearity [34]. The VIF is calculated in Equation (1) for each predictor variable as follows:
$$\mathrm{VIF}_i = \frac{1}{1 - R_i^{2}} \quad (1)$$
where $R_i^{2}$ is the coefficient of determination obtained by regressing predictor $i$ on all of the other predictors. A high VIF indicates that the predictor is highly correlated with the other predictors, making it difficult to assess its individual contribution to the variation in the response variable [35].
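Equation (1) can be computed directly by regressing each predictor on the remaining ones. The NumPy sketch below is an illustrative implementation (the simulated predictors are hypothetical, chosen so that two of them are nearly collinear):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X, per Equation (1)."""
    n, p = X.shape
    out = np.empty(p)
    for i in range(p):
        y = X[:, i]
        # Intercept plus all other predictors
        A = np.column_stack([np.ones(n), np.delete(X, i, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2 = 1 - ((y - A @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        out[i] = 1.0 / (1.0 - r2)
    return out

# Hypothetical predictors: x1 and x2 are nearly collinear, x3 is independent
rng = np.random.default_rng(1)
x1 = rng.normal(size=300)
x2 = x1 + rng.normal(scale=0.3, size=300)
x3 = rng.normal(size=300)
vifs = vif(np.column_stack([x1, x2, x3]))
print(np.round(vifs, 1))  # x1 and x2 show inflated VIFs; x3 stays near 1
```

In practice the same diagnostic is available from standard statistics packages; the from-scratch version simply makes the link to Equation (1) explicit.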

3.5.3. Model Evaluation

While SEM and SVM serve different purposes and employ distinct methodologies, integrating insights from both offers a robust approach to understanding and predicting complex phenomena. SEM validates theoretical relationships and constructs, which can enhance the predictive patterns identified by SVM. This integration is particularly effective when SVM's variable-importance rankings corroborate the relationships established by SEM, enabling a cross-validation approach that situates SVM's predictions within a well-supported theoretical framework.
When evaluating SEM models, the focus is on model fit indices, which determine how well the model reproduces the observed data. Standard indices include the Chi-Square Test of Model Fit (χ2), which compares the model-implied to the observed covariance matrix; a non-significant chi-square suggests a good fit, although the test is sensitive to sample size [36]. The RMSEA measures the lack of fit against a perfect model, with values ≤ 0.05 indicating a close fit and values > 0.10 a poor fit [37]. The CFI and TLI compare the model against a baseline, with values ≥ 0.95 denoting a good fit [38]. The Standardized Root Mean Square Residual (SRMR), with ideal values < 0.08, averages the discrepancies between observed and predicted correlations [39]. These indices assess the overall model structure's coherence with empirical data, focusing on inter-variable relationships rather than individual prediction accuracy.
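For readers who want to relate these indices to the underlying chi-square statistics, the sketch below implements the standard RMSEA and CFI formulas. The input values are illustrative only, not the study's fitted model:

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation from the model chi-square."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index: user model (m) relative to baseline model (b)."""
    d_m = max(chi2_m - df_m, 0.0)
    d_b = max(chi2_b - df_b, d_m)
    return 1.0 - d_m / d_b if d_b > 0 else 1.0

# Illustrative values only
print(round(rmsea(chi2=100.0, df=40, n=361), 3))                    # 0.065
print(round(cfi(chi2_m=100.0, df_m=40, chi2_b=426.0, df_b=24), 3))  # 0.851
```

SEM software such as lavaan reports these indices directly; the formulas above show why a chi-square close to its degrees of freedom drives RMSEA toward zero and CFI toward one.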
Currently, this research is focused on executing layers 1 and 2 of the framework, as outlined in Figure 1, using only SEM at this stage. Plans for incorporating SVM and an improved SVM, along with a comparative analysis of SEM, SVM, and the enhanced SVM, are slated for future research once sufficient data have been collected to robustly support these machine learning models.

3.6. Measures for Validity and Reliability in the Systematic Search and Data Analysis Model

In order to guarantee validity and reliability in carrying out the systematic search and using the model applied for data analysis, predefined keywords, Boolean operators, and search equations tailored to each database were used alongside strict inclusion and exclusion criteria. Structural Equation Modelling (SEM) was employed for data analysis, enhanced by cross-validation and established goodness-of-fit measures such as the Chi-square/df ratio, RMSEA, and CFI. We also assessed multicollinearity using VIF to maintain statistical integrity. These rigorous methodologies guarantee the robustness and replicability of the findings.

3.7. Ethical Considerations in the Framework

Efforts to maximise the educational value of artificial intelligence (AI) in education are tempered by significant risks and limitations, as noted by [40,41], including privacy concerns and ethical implications. These challenges must be effectively addressed during the integration process. The process-based framework developed for this purpose emphasises privacy, data security, and bias mitigation within AI’s educational applications. It implements stringent privacy measures and secure data protocols, ensuring student confidentiality and the protection of their rights. Additionally, the framework incorporates bias mitigation strategies designed to provide equitable learning experiences by accounting for gender and location as moderating factors. These measures demonstrate a commitment to the ethical use of AI in education, striving to align with societal values and legal standards through data anonymisation, secure data handling, and systematic checks for algorithmic bias.

4. Results

This section presents the results of the structural equation modelling, their interpretation, and the conclusions drawn from them.

Results of the Structural Equation Modelling

Table 3 lists the variables used in this study, including the AI adoption factors and the outcome variable. Table 4 provides a structured statistical summary of each composite score across measures such as count, mean, standard deviation, percentiles (25th and 75th), and variance. This organisation helps analyse and compare each measure's distribution and variability succinctly. Table 5 presents each predictor's VIF, used to identify multicollinearity in regression models; if the predictors are correlated, the VIF values indicate how much the variance of an estimated regression coefficient increases. Table 6 summarises how well the model fits the data. Table 7, Table 8, Table 9 and Table 10 offer a detailed view of the regression estimates, highlighting both main and interaction effects within the structural equation model. They allow for a nuanced discussion of how each variable affects the dependent variable individually, as well as how gender and location moderate these effects, and they summarise the estimated parameters of the model, including path coefficients, variances, and their statistical significance.

5. Discussion

The discussion covers statistical summaries, multicollinearity tests, and model fit indices, revealing nuanced perceptions of AI adoption in ODL. Notable findings include variable independence validation, model fit excellence, and varied parameter influences, with gender and location moderating effects highlighted.

5.1. Statistical Summary

Table 4 presents a statistical summary of nine composite measures related to AI adoption in ODL. Each measure was analysed based on a sample size of 361, indicating uniform data collection. The mean scores mostly hover around four, suggesting generally positive perceptions, except for the AILA and ARFC Composites, which score lower, indicating less favourable or more varied perceptions.
Standard deviations vary, with the AILA Composite showing the highest variability (1.258) and the SAP Composite the lowest (0.498), indicating the least and greatest consensus among responses, respectively. The variance also reflects the data spread, with higher variances indicating greater response diversity, particularly for AILA.
The 25th and 75th percentiles highlight the data distribution, with some measures showing a wider interquartile range, suggesting polarised responses. This table effectively quantifies the dispersion and central tendencies of AI perceptions across educational aspects, guiding further detailed analysis and decision-making in ODL enhancements.

5.2. Test for Multicollinearity Using VIF

Table 5 presents the VIF for each predictor in the regression model, assessing multicollinearity. 'KAUS' and 'IC' exhibit VIF values above three, indicating moderate multicollinearity, while 'ARFC' and 'AILA' show lower values, suggesting less concern. All values fall well below the rule-of-thumb threshold of 10 for high multicollinearity, validating variable independence and supporting the reliability of the regression.

5.3. Model Fit Indices Discussion

The model achieved perfect fit indices, with a Comparative Fit Index (CFI) and Tucker–Lewis Index (TLI) of 1.000, indicating an excellent fit to the data. The root mean square error of approximation (RMSEA) was 0.000, and the standardised root mean square residual (SRMR) was also 0.000, suggesting a perfect fit. The test statistic for the user model was 0.000 with zero degrees of freedom, indicating that the model was just identified. The baseline model showed a chi-square of 425.992 with 24 degrees of freedom, yielding a highly significant p-value (<0.001), which supports the substantial improvement of the user model over the baseline. The log-likelihoods of the unrestricted (H1) and user (H0) models were both −110.518. The Akaike Information Criterion (AIC) was 271.035, the Bayesian Information Criterion (BIC) was 368.188, and the sample-size-adjusted BIC (SABIC) was 288.875, providing a robust basis for model comparison and selection.
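The information criteria reported above follow their standard definitions. The sketch below shows how AIC and BIC are derived from a model's log-likelihood, number of free parameters, and sample size; the inputs are illustrative, not the study's fitted model:

```python
import math

def aic(loglik, k):
    """Akaike information criterion: -2 log-likelihood plus a parameter penalty."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Bayesian information criterion: the parameter penalty grows with sample size."""
    return -2.0 * loglik + k * math.log(n)

# Illustrative inputs: hypothetical log-likelihood -100, 10 free parameters, n = 361
print(round(aic(-100.0, 10), 1))       # 220.0
print(round(bic(-100.0, 10, 361), 1))  # 258.9
```

Because BIC penalises parameters by log(n) rather than 2, it favours more parsimonious models than AIC at this sample size, which is why the two criteria can rank competing models differently.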

5.4. Parameter Estimation Discussion

The parameter estimates indicated varied influences of the variables' scores on the SAP outcome. Notably, IC exhibited the most substantial positive impact (Estimate = 0.598, SE = 0.060, p < 0.001), suggesting a strong predictive relationship with the SAP score. In contrast, interactions such as AAR by gender (Estimate = −0.103, SE = 0.112, p = 0.360) and CAAI by gender (Estimate = 0.205, SE = 0.073, p = 0.005) underscore the moderating effects of gender, albeit with varied significance and directionality.
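The significance levels reported here follow from Wald z-tests on each path coefficient, where z is the estimate divided by its standard error. The sketch below reproduces this calculation for the CAAI-by-gender interaction using the reported estimate and standard error (a standard-normal approximation for the test statistic is assumed):

```python
import math

def wald_p(estimate, se):
    """Two-tailed p-value for a Wald z-test, using the standard normal CDF."""
    z = estimate / se
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # P(Z <= |z|)
    return 2.0 * (1.0 - phi)

# Reported CAAI-by-gender interaction: Estimate = 0.205, SE = 0.073
p = wald_p(0.205, 0.073)
print(round(p, 3))  # approximately 0.005, consistent with the reported value
```

The same calculation applied to the IC path (0.598 / 0.060, z ≈ 10) yields a p-value far below 0.001, matching the strong effect reported above.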
Geographical location also moderated several relationships, such as the strong positive effect of EEU on SAP within different locations (Estimate = 0.576, SE = 0.129, p < 0.001), highlighting the contextual sensitivity of educational outcomes to geographic factors.
When comparing the findings of this study with previous models, it is evident that the proposed Multilayered Process Framework enhances the predictive accuracy and provides a more nuanced understanding of AI’s impact on student outcomes in ODL. Previous models, such as those discussed by [40,41], primarily focused on single-layered analytical approaches. However, a multi-layer analysis incorporating SEM and plans for integrating SVM in future research offers a more comprehensive examination of AI influences. Notably, we considered gender and geographical location as moderating factors, which were previously overlooked. This advancement improves the accuracy in predicting academic performance and highlights AI’s practical applications in improving educational strategies. Future research incorporating SVM will validate and refine the model, ensuring robust and adaptable AI-driven educational technologies.

5.5. Practical Implications, Limitations of the Study and Future Research Endeavours

These findings highlight the intricate interactions between student characteristics, the educational environment, and learning outcomes. The study’s notable interactions indicate that gender and geographical location are critical in shaping educational strategies. When designing and implementing learning strategies, educational institutions should consider these factors. This model provides a nuanced understanding that can inform tailored interventions, enhancing educational effectiveness by addressing demographic and locational diversity. Moreover, the insights from this study are vital for shaping educational strategies and policies. They emphasise the importance of adopting adaptive learning strategies that consider the diverse needs of students based on gender and location, potentially improving learning outcomes.
This research emphasises the importance of personalised educational approaches. For instance, the positive impact of interactive capabilities on student academic performance suggests that integrating interactive AI tools can significantly enhance learning. Moreover, understanding the specific needs and challenges faced by students from different locations or gender backgrounds can help in designing more effective educational tools and strategies.
While this study provides valuable insights, it is not without limitations. The reliance on structural equation modelling necessitates assumptions about linearity and relationships among variables, which may not fully capture the complexity of educational dynamics. Additionally, the study’s context, limited to a specific educational setting, may affect the generalizability of the findings.
Future research will concentrate on gathering additional data to complete the remaining layers of the proposed multilayered framework, which include the SVM and the improved SVM models. This will facilitate a comprehensive comparative analysis between the SEM, SVM, and improved SVM approaches to better understand their predictive accuracy and differences in educational settings. Ensuring sufficient data collection is critical for effectively training these machine learning models; therefore, efforts will be directed towards accumulating a robust dataset that supports these sophisticated analytical techniques. Further studies will also identify which AI adoption factors most significantly influence academic performance prediction, providing deeper insights into targeted educational interventions and the dynamics of AI-enhanced learning environments. These efforts can extend the current study's implications, offering a more robust framework for understanding and enhancing student academic performance through targeted educational strategies.
This study significantly enhances the theoretical and practical understanding of AI in education, yet further efforts are needed to improve its adaptability, effectiveness, and ethical application across varied educational environments. It is recommended that future research incorporate longitudinal studies to assess the long-term effects and sustainability of the AI-driven interventions proposed by the framework.
We have designed the framework for incremental adoption, tailored to diverse technological and educational needs, to enhance accessibility and simplify the application of the multilayered framework for various users, including those with limited experience in advanced analytics. Training modules and support materials are provided to assist educators and technical staff, and the use of open-source tools is advocated, along with fostering a community of practice for ongoing support. In future stages, the phased validation of the framework will incorporate progressively complex layers and integrate advanced methods like SVM. Additionally, Likert scales will be employed in surveys to standardise data collection, making analysis straightforward even for those with basic statistical skills. These steps will ensure the framework is robust yet user-friendly, promoting effective adoption across different educational contexts. Safeguards against model overfitting, targeted training programs, and user-friendly tools further strengthen the commitment to ethical AI use, with a focus on robust data privacy measures and reducing algorithmic biases.

6. Conclusions

In conclusion, the preliminary insights into a multilayered framework for predicting academic performance in ODL have illuminated the intricate interplay between AI adoption factors and educational outcomes in a distance learning context. Our exploration through structural equation modelling has highlighted the significant roles that gender and geographical location play in shaping academic performance. This study emphasises the necessity for educational institutions to adopt adaptive and personalised learning strategies that cater to the diverse needs of students.
These findings lay a foundational educational policy and strategy platform, advocating for personalised approaches sensitive to the student body’s demographics and locational diversities. As we continue integrating advanced technologies like AI into education, our methodologies must evolve to fully leverage these tools to enhance educational effectiveness.
Future phases of this research will focus on enriching the dataset to support the complete execution of the framework, including comparative analyses between standard SVM and improved SVM models. This endeavour aims to refine our predictive capabilities and contribute to a more nuanced understanding of how AI adoption factors influence academic performance. By advancing this research, we aim to establish a comprehensive model that predicts educational outcomes and significantly enriches the educational experience for all students, regardless of their circumstances.

Recommendations

In light of the results of this study, it is recommended that educational institutions prioritise the development of adaptive learning strategies that cater to students’ specific demographic and geographic backgrounds. Institutions should improve data collection processes to refine AI tool utilisation for real-time analytics, facilitating targeted interventions. Additionally, educators should receive ongoing training in AI applications within educational settings to effectively integrate these insights into curriculum design and teaching methodologies. Policymakers should consider these factors to ensure inclusivity and effectiveness in educational policy development. Further research should focus on comparing standard and improved SVM models to determine the most impactful AI adoption factors on academic performance. By championing these recommendations, we anticipate further enriching the realm of AI in ODL, driving transformative change and fostering academic excellence.

Author Contributions

M.D.A. contributed to the conceptualisation, methodology, data collection, analysis, and manuscript writing. Other co-authors provided guidance throughout the research process, offering valuable insights and feedback. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All data for this review are sourced from public academic databases, including Scopus, Google Scholar, and Web of Science. Please direct any additional inquiries to the corresponding author.

Conflicts of Interest

The authors declare that no financial or personal relationships exist that could have influenced the work reported in this paper.

References

  1. Wang, Y.; Liu, C.; Tu, Y.-F. Factors Affecting the Adoption of AI-Based Applications in Higher Education. Educ. Technol. Soc. 2021, 24, 116–129. [Google Scholar]
  2. Picciano, A.G. Theories and frameworks for online education: Seeking an integrated model. Online Learn. 2017, 21, 166–190. [Google Scholar] [CrossRef]
  3. Haenlein, M.; Kaplan, A. A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. Calif. Manag. Rev. 2019, 61, 5–14. [Google Scholar] [CrossRef]
  4. Hwang, G.J.; Xie, H.; Wah, B.W.; Gašević, D. Vision, challenges, roles, and research issues of artificial intelligence in education. Comput. Educ. Artif. Intell. 2020, 1, 100001. [Google Scholar] [CrossRef]
  5. Adewale, M.D.; Azeta, A.; Abayomi-Alli, A.; Sambo-Magaji, A. Artificial intelligence influence on learner outcomes in distance education: A process-based framework and research model. In Proceedings of the EAI ICISML 2024—3rd International Conference on Intelligent Systems and Machine Learning, Pune, India, 5–6 January 2024. [Google Scholar]
  6. Adewale, M.D.; Azeta, A.; Abayomi-Alli, A.; Sambo-Magaji, A. A multilayered process framework for predicting students’ academic performance in open and distance learning. In Proceedings of the EAI MTYMEX 2024—3rd EAI International Conference on Smart Technologies and Innovation Management, Vancouver, BC, Canada, 29 March 2024. [Google Scholar]
  7. Liu, N.B.; Lu, N.Z. Design of spoken English teaching based on artificial intelligence educational robots and wireless network technology. ICST Trans. Scalable Inf. Syst. 2023, 10, e12. [Google Scholar] [CrossRef]
  8. Gao, H. Online AI-guided video extraction for distance education with applications. Math. Probl. Eng. 2022, 2022, 5028726. [Google Scholar] [CrossRef]
  9. Tanjga, M. E-learning and the use of AI: A review of current practices and future directions. Qeios 2023. [Google Scholar] [CrossRef]
  10. Dwivedi, Y.K.; Hughes, L.; Ismagilova, E.; Aarts, G.; Coombs, C.; Crick, T.; Duan, Y.; Dwivedi, R.; Edwards, J.S.; Eirug, A.; et al. Artificial intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice, and policy. Int. J. Inf. Manag. 2021, 57, 101994. [Google Scholar] [CrossRef]
  11. Huang, J.; Saleh, S.; Liu, Y. A review on artificial intelligence in education. Acad. J. Interdiscip. Stud. 2021, 10, 206. [Google Scholar] [CrossRef]
  12. Charness, N.; Boot, W.R. Technology, gaming, and social networking. In Handbook of the Psychology of Aging; Academic Press: Cambridge, MA, USA, 2016; pp. 389–407. [Google Scholar] [CrossRef]
  13. Yakubu, M.N.; Dasuki, S.I. Factors affecting the adoption of e-learning technologies among higher education students in Nigeria. Inf. Dev. 2018, 35, 492–502. [Google Scholar] [CrossRef]
  14. Almaiah, M.A.; Alfaisal, R.; Salloum, S.A.; Hajjej, F.; Thabit, S.; El-Qirem, F.A.; Al-Maroof, R.S. Examining the impact of artificial intelligence and social and computer anxiety in e-learning settings: Students’ perceptions at the university level. Electronics 2022, 11, 3662. [Google Scholar] [CrossRef]
  15. Horowitz, M.; Kahn, L.E. What influences attitudes about artificial intelligence adoption: Evidence from U.S. local officials. PLoS ONE 2021, 16, e0257732. [Google Scholar] [CrossRef]
  16. Zhu, Z.; Liu, Q.; Li, H. The application of artificial intelligence in open and distance learning: A review. Int. J. Emerg. Technol. Learn. 2018, 13, 114–126. [Google Scholar]
  17. Bertl, M.; Metsallik, J.; Ross, P. A systematic literature review of AI-based digital decision support systems for post-traumatic stress disorder. Front. Psychiatry 2022, 13, 923613. [Google Scholar] [CrossRef]
  18. Ouyang, F.; Wu, M.; Zheng, L.; Zhang, L.; Jiao, P. Integration of artificial intelligence performance prediction and learning analytics to improve student learning in online engineering course. Int. J. Educ. Technol. High. Educ. 2023, 20, 4. [Google Scholar] [CrossRef]
  19. Chen, X.; Xie, H.; Zou, D.; Hwang, G.J. Application and theory gaps during the rise of artificial intelligence in education. Comput. Educ. Artif. Intell. 2020, 1, 100002. [Google Scholar] [CrossRef]
  20. Nguyen, A.; Ngo, H.N.; Hong, Y.; Dang, B.; Nguyen, B.T. Ethical principles for artificial intelligence in education. Educ. Inf. Technol. 2022, 28, 4221–4241. [Google Scholar] [CrossRef]
  21. Seo, K.W.; Tang, J.; Roll, I.; Fels, S.; Yoon, D. The impact of artificial intelligence on learner–instructor interaction in online learning. Int. J. Educ. Technol. High. Educ. 2021, 18, 54. [Google Scholar] [CrossRef]
  22. Daraz, L.; Bouseh, S.; Chang, B.S. Subpar: The challenges of gender parity in Canada’s artificial intelligence ecosystem. Comput. Inf. Sci. 2022, 15, 1. [Google Scholar] [CrossRef]
  23. Gardner, J.; Brooks, C.; Baker, R.S. Evaluating the fairness of predictive student models through slicing analysis. In Proceedings of the LAK19: Proceedings of the 9th International Conference on Learning Analytics & Knowledge, Tempe, AZ, USA, 4–8 March 2019. [Google Scholar] [CrossRef]
  24. Kumar, S.; Choudhury, S. Gender and feminist considerations in artificial intelligence from a developing-world perspective, with India as a case study. Humanit. Soc. Sci. Commun. 2022, 9, 31. [Google Scholar] [CrossRef]
  25. Toplic, L. If AI Is the Future, Gender Equity is Essential. NetHope. 2021. Available online: https://nethope.org/articles/if-ai-is-the-future-gender-equity-is-essential/ (accessed on 26 December 2022).
  26. Azeta, A.A.; Guembe, B.; Ankome, T.; Osakwe, J. Machine learning techniques for automatic long text examination in open and distance learning. In Proceedings of the International Conference on Information systems and Emerging Technologies (ICISET), and International Conference on Data Science, Machine Learning and Artificial Intelligence (DSMLAI), Namibia University of Science and Technology, Windhoek, Namibia, 23–27 November 2022. [Google Scholar]
  27. Blessing, G.; Azeta, A.A.; Misra, S.; Chigozie, F.; Ahuja, R. A machine learning prediction of automatic text-based assessment for open and distance learning: A review. In Innovations in Bio-Inspired Computing and Applications, Proceedings of the 10th International Conference on Innovations in Bio-Inspired Computing and Applications, IBICA 2019 and 9th World Congress on Information and Communication Technologies, WICT 2019, Gunupur, India, 16–18 December 2019; Advances in Intelligent Systems and Computing, AISC; Springer: Cham, Switzerland, 2021; Volume 1180. [Google Scholar]
  28. Ayo, C.K.; Odukoya, J.O.; Azeta, A.A. A review of open and distance education and human development in Nigeria. Int. J. Emerg. Technol. Learn. 2014, 9, 63–67. [Google Scholar]
  29. Mduma, N.; Kalegele, K.; Machuve, D. A survey of machine learning approaches and techniques for student dropout prediction. Data Sci. J. 2019, 18, 14. [Google Scholar] [CrossRef]
  30. Tomasevic, N.; Gvozdenovic, N.; Vranes, S. An overview and comparison of supervised data mining techniques for student exam performance prediction. Comput. Educ. 2020, 143, 103676. [Google Scholar] [CrossRef]
  31. Ayouni, S.; Hajjej, F.; Maddeh, M.; Al-Otaibi, S. A new ML-based approach to enhance student engagement in online environment. PLoS ONE 2021, 16, e0258788. [Google Scholar] [CrossRef] [PubMed]
  32. Jiao, P.; Ouyang, F.; Zhang, Q.; Alavi, A.H. Artificial intelligence-enabled prediction model of student academic performance in online engineering education. Artif. Intell. Rev. 2022, 55, 6321–6344. [Google Scholar] [CrossRef]
  33. Holicza, B.; Kiss, A. Predicting and comparing students’ online and offline academic performance using machine learning algorithms. Behav. Sci. 2023, 13, 289. [Google Scholar] [CrossRef] [PubMed]
  34. Kim, J.H. Multicollinearity and misleading statistical results. Korean J. Anesthesiol. 2019, 72, 558–569. [Google Scholar] [CrossRef] [PubMed]
  35. Salmerón, R.; García, C.; García, J. Overcoming the Inconsistencies of the Variance Inflation Factor: A Redefined VIF and a Test to Detect Statistical Troubling Multicollinearity. arXiv 2020, arXiv:2005.02245. Available online: https://consensus.app/papers/overcoming-inconsistences-variance-inflation-factor-salmeron/2701f6aa76e8527c87c9f9ed439e28d7/ (accessed on 11 February 2024).
  36. Lai, K. Fit difference between nonnested models given categorical data: Measures and estimation. Struct. Equ. Model. A Multidiscip. J. 2020, 28, 99–120. [Google Scholar] [CrossRef]
  37. Xia, Y.; Yang, Y. RMSEA, CFI, and TLI in structural equation modeling with ordered categorical data: The story they tell depends on the estimation methods. Behav. Res. Methods 2018, 51, 409–428. [Google Scholar] [CrossRef]
  38. Shi, D.; Distefano, C.; Maydeu-Olivares, A.; Lee, T. Evaluating SEM model fit with small degrees of freedom. Multivar. Behav. Res. 2021, 57, 179–207. [Google Scholar] [CrossRef]
  39. Shi, D.; Maydeu-Olivares, A. The effect of estimation methods on SEM fit indices. Educ. Psychol. Meas. 2020, 80, 421–445. [Google Scholar] [CrossRef]
  40. Wang, T.; Lund, B.D.; Marengo, A.; Pagano, A.; Mannuru, N.R.; Teel, Z.A.; Pange, J. Exploring the potential impact of artificial intelligence (AI) on international students in higher education: Generative AI, chatbots, analytics, and international student success. Appl. Sci. 2023, 13, 6716. [Google Scholar] [CrossRef]
  41. Marengo, A.; Pagano, A.; Pange, J.; Soomro, K.A. The educational value of artificial intelligence in higher education: A 10-year systematic literature review. Interact. Technol. Smart Educ. 2024. ahead-of-print. [Google Scholar] [CrossRef]
Figure 1. The Multilayered Process Framework for Predicting the Impact of AI Adoption on Students’ Academic Performance in ODL (Source: [6]).
Figure 2. Research Model (Source: [6]).
Table 1. Addressing limitations in related work.
References | Limitation Identified | Addressed by Current Study
[11] | Incomplete integration of AI in open-source LMS. | The research proposes a holistic framework that ensures comprehensive AI integration in Open and Distance Learning (ODL) environments.
[23,24] | Gender biases and geographical disparities in AI adoption. | It incorporates a nuanced analysis that accounts for gender and geographical disparities, aiming for equitable AI integration.
[3,4] | The potential of AI in ODL has not been fully realised. | Introduces a Multi-layer Process Framework to harness AI's full potential in enhancing ODL outcomes.
[10] | Lack of understanding of AI adoption motivators. | The study explores AI adoption drivers in depth, integrating these insights into the framework design.
[1,21] | Concerns about AI's overwhelming presence and potential biases. | The framework prioritises ethical considerations by embedding privacy, data security, and bias mitigation strategies within its structure.
[16,18] | AI's predictive capabilities on academic outcomes are not fully leveraged. | Leverages advanced predictive algorithms for accurate student performance forecasts, including an improved SVM.
[19,20] | Limited exploration of AI's impact on educator–student rapport. | It aims to enhance digital educational interactions, using AI to improve communication and feedback loops.
[25] | Risk of AI intensifying the digital divide. | Addresses digital divide concerns by ensuring the framework's adaptability across diverse educational settings.
Table 2. Questionnaire items used to measure the research model’s variables.
Variables | Items | Source | Construct Measured

AI Alignment and Relevance (AAR)
1. I feel that the AI-based Moodle platform used in my course aligns well with my learning needs and objectives. | Elements Peculiar to ODL | Institutional Alignment
2. The AI-based Moodle platform implemented in my institution aligns with its educational goals and values. | Elements Peculiar to ODL | Institutional Alignment
3. The use of AI-based Moodle platform features makes my course content more relevant. | UTAUT | Perceived Usefulness
4. Using the AI-based Moodle platform in my course positively impacts my attitude towards technology in education. | TAM | Attitude toward Technology

Comparative Advantage of AI (CAAI)
1. Learning with the AI-based Moodle platform is more effective than traditional educational methods. | Elements Peculiar to ODL | Comparative Advantage
2. The AI-based Moodle platform features provide significant advantages to my learning process compared to traditional methods. | Elements Peculiar to ODL | Comparative Advantage
3. Learning with the AI-based Moodle platform is more efficient in terms of time and resource utilisation. | UTAUT | Perceived Usefulness
4. The AI-based Moodle platform enhances the effectiveness of my learning outcomes compared to traditional methods. | UTAUT | Perceived Usefulness

Ease and Enjoyment of Use (EEU)
1. I find it easy to use the AI-based Moodle platform for learning in my course. | UTAUT | Perceived Ease of Use
2. My experience interacting with the AI-based Moodle platform in my course is enjoyable. | UTAUT | Perceived Enjoyment
3. Learning with the AI-based Moodle platform is intuitive and user-friendly. | UTAUT | Perceived Ease of Use
4. The use of the AI-based Moodle platform in my course is engaging and motivating. | UTAUT | Perceived Enjoyment

AI Readiness and Facilitating Conditions (ARFC)
1. I feel well-prepared to use the AI-based Moodle platform in my learning. | Elements Peculiar to ODL | Readiness for AI adoption
2. My institution is well-prepared for adopting and implementing the AI-based Moodle platform. | Elements Peculiar to ODL | Readiness for AI adoption
3. I receive substantial support (technical, learning resources, etc.) in using the AI-based Moodle platform for learning. | UTAUT | Facilitating Conditions
4. The conditions in my institution facilitate the effective use of the AI-based Moodle platform for learning. | UTAUT | Facilitating Conditions

AI-induced Learning Anxiety (AILA)
1. I often feel anxious or stressed about using the AI-based Moodle platform in my course. | Elements Peculiar to ODL | Stress linked to AI-based learning
2. I feel worried about relying on the AI-based Moodle platform for learning. | Elements Peculiar to ODL | Stress linked to AI-based learning
3. I often feel overwhelmed by the complexity of the AI-based Moodle platform used in my course. | Elements Peculiar to ODL | Stress linked to AI-based learning
4. I worry that errors or problems in the AI-based Moodle platform could negatively impact my learning outcomes. | Elements Peculiar to ODL | Stress linked to AI-based learning
Interactive Capability (IC)
  • I feel well-prepared to interact and collaborate in an online environment facilitated by the AI-based Moodle platform.
Elements Peculiar to ODLPreparedness for online interactions
2.
The AI-based Moodle platform has enhanced my ability to interact with teachers and peers.
TAMPerceived usefulness
3.
The use of the AI-based Moodle platform has positively impacted my collaboration in group projects or activities.
Elements Peculiar to ODLImpact on group collaboration
4.
The AI-based Moodle platform facilitates effective communication in my learning environment.
UTAUTPerceived Ease of Use
Knowledge Absorption and User Satisfaction (KAUS)
  • The AI-based Moodle platform enhances my understanding and absorption of course material.
Elements Peculiar to ODLImpact on knowledge uptake
2.
I am satisfied with my learning outcomes due to the use of the AI-based Moodle platform.
D&M ModelUser Satisfaction
3.
The AI-based Moodle platform often aids in clarifying complex course material or concepts.
Elements Peculiar to ODLImpact on knowledge uptake
4.
The use of the AI-based Moodle platform improves my satisfaction with the learning experience.
D&M ModelUser Satisfaction
Systems Quality and Social Influence (SQSI)
  • The AI-based Moodle platform used in my course is of high quality (reliability, speed, design, etc.).
D&M ModelSystem Quality
2.
The views of my peers significantly influence my usage of the AI-based Moodle platform in my course.
UTAUTSocial Influence
3.
Social media, discussions with peers, or instructors’ opinions have a strong impact on my acceptance and use of the AI-based Moodle platform.
UTAUTSocial Influence
4.
High-quality AI systems enhance their acceptance and use among my peers.
D&M ModelSystem Quality
Students’ Academic Performance
  • I believe that using AI tools like the AI-based Moodle platform has improved my academic performance.
Self-AssessmentPerceived Academic Enhancement
2.
AI in online learning has helped me better understand the course materials.
Self-AssessmentPerceived Understanding
3.
AI tools like the AI-based Moodle platform have contributed to better grades in my courses.
Self-AssessmentPerceived Grade Improvement
4.
How would you classify your Cumulative Grade Point Average (CGPA) on a scale of 5? Please select the range that applies to your academic performance.
Academic RecordsObjective Academic Performance
Table 3. Variables used in the study.

S/N | Variable | Description
1 | AI Alignment and Relevance (AAR) | Measures AI's fit with student and institutional needs, integrating Institutional Alignment, Attitude toward Technology, and facets of Perceived Usefulness.
2 | Comparative Advantage of AI (CAAI) | Assesses the benefits of AI versus traditional methods, integrating Comparative Advantage and aspects of Perceived Usefulness.
3 | Ease and Enjoyment of Use (EEU) | Gauges the simplicity and pleasure of AI use, blending Perceived Ease of Use and Perceived Enjoyment.
4 | AI Readiness and Facilitating Conditions (ARFC) | Evaluates readiness for AI adoption and the supportive conditions in place.
5 | AI-induced Learning Anxiety (AILA) | Determines the stress linked to AI-based learning.
6 | Interactive Capability (IC) | Assesses preparedness for, and enhancements in, AI-facilitated online interactions.
7 | Knowledge Absorption and User Satisfaction (KAUS) | Examines AI's impact on knowledge uptake and overall user contentment.
8 | Systems Quality and Social Influence (SQSI) | Evaluates AI system quality and the role of societal factors in its adoption.
9 | Students' Academic Performance (SAP) | Measures students' educational outcomes and academic achievements.
Table 4. Statistical Summary of Composite Measures in AI Adoption.

Variable | N | Mean | Std. Dev. | 25% | 75% | Variance
AAR | 361 | 4.337 | 0.474 | 4.00 | 4.75 | 0.224
CAAI | 361 | 4.335 | 0.563 | 4.00 | 4.75 | 0.317
EEU | 361 | 4.339 | 0.533 | 4.00 | 5.00 | 0.284
ARFC | 361 | 3.921 | 0.842 | 3.25 | 4.50 | 0.709
AILA | 361 | 3.274 | 1.258 | 2.00 | 4.25 | 1.583
IC | 361 | 4.179 | 0.592 | 4.00 | 4.50 | 0.351
KAUS | 361 | 4.228 | 0.554 | 4.00 | 4.50 | 0.306
SQSI | 361 | 3.971 | 0.807 | 3.75 | 4.50 | 0.651
SAP | 361 | 4.348 | 0.498 | 4.00 | 4.75 | 0.248
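Each composite in Table 4 is a respondent-level mean of its four 5-point Likert items, and the reported figures are ordinary descriptive statistics of those composites. The sketch below (synthetic data, not the study's dataset; the function name is illustrative) shows how such a row can be reproduced:

```python
import numpy as np

def summarise(scores: np.ndarray) -> dict:
    """Descriptive statistics in the layout of Table 4 (sample std/variance)."""
    return {
        "N": int(scores.size),
        "Mean": round(float(scores.mean()), 3),
        "Std. Dev.": round(float(scores.std(ddof=1)), 3),
        "25%": float(np.percentile(scores, 25)),
        "75%": float(np.percentile(scores, 75)),
        "Variance": round(float(scores.var(ddof=1)), 3),
    }

# Composite = mean of a respondent's four Likert items (illustrative data).
rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(361, 4))   # 361 respondents x 4 items, scored 1-5
aar = items.mean(axis=1)                    # composite score per respondent
print(summarise(aar))
```

Sample (ddof = 1) rather than population statistics are assumed here, which is the usual convention for survey descriptives.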
Table 5. Variance Inflation Factors (VIF) for Predictors.

Variable | VIF
AAR | 2.761451
CAAI | 2.133114
EEU | 2.741645
ARFC | 1.374151
AILA | 1.637115
IC | 3.573034
KAUS | 4.914508
SQSI | 2.587426
SAP | 2.588414
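The VIF values in Table 5 diagnose multicollinearity among the composites: VIF_j = 1 / (1 − R_j²), where R_j² comes from regressing predictor j on the remaining predictors. A minimal numpy sketch of that computation (illustrative, not the study's code):

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance inflation factor for each column of the predictor matrix X,
    via an OLS regression (with intercept) of each column on the others."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        rss = float(((y - Z @ beta) ** 2).sum())      # residual sum of squares
        tss = float(((y - y.mean()) ** 2).sum())      # total sum of squares
        r2 = 1.0 - rss / tss
        out[j] = 1.0 / (1.0 - r2)
    return out
```

Values above roughly 5-10 are the usual warning signs; the largest value in Table 5 (KAUS, 4.91) sits just below that range.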
Table 6. Model Fit Indices.

Fit Index | Value | Description
Chi-Square (χ²) | 0.000 | Model chi-square statistic
Degrees of Freedom | 0 | Degrees of freedom for the model
Comparative Fit Index (CFI) | 1.000 | Indicates an excellent fit of the user model
Tucker–Lewis Index (TLI) | 1.000 | Also indicates an excellent fit of the user model
Root Mean Square Error of Approximation (RMSEA) | 0.000 | Suggests a perfect fit (lower bound 0.000, upper bound 0.000)
Standardised Root Mean Square Residual (SRMR) | 0.000 | Reflects perfect model fit
Akaike Information Criterion (AIC) | 271.035 | Measure for model comparison
Bayesian Information Criterion (BIC) | 368.188 | Measure for model comparison that accounts for sample size
Sample-size adjusted BIC (SABIC) | 288.875 | Adjusted BIC for model comparison
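The indices in Table 6 are standard functions of the fitted model's and the baseline model's chi-square statistics. The sketch below (the function and the non-saturated example values are illustrative, not taken from the study) shows the conventional CFI, TLI and RMSEA formulas, and why a saturated model (χ² = 0, df = 0) yields CFI = TLI = 1 and RMSEA = 0 by construction:

```python
import math

def fit_indices(chi2_m: float, df_m: int, chi2_b: float, df_b: int, n: int):
    """CFI, TLI and RMSEA from the fitted model's (m) and the baseline
    model's (b) chi-square statistics; n is the sample size."""
    if df_m == 0:  # saturated model: perfect fit by construction
        return 1.0, 1.0, 0.0
    d_m = max(chi2_m - df_m, 0.0)            # model non-centrality
    d_b = max(chi2_b - df_b, 0.0)            # baseline non-centrality
    cfi = 1.0 - d_m / max(d_m, d_b, 1e-12)
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)
    rmsea = math.sqrt(d_m / (df_m * (n - 1)))
    return cfi, tli, rmsea

print(fit_indices(0.0, 0, 500.0, 36, 361))   # saturated case, as in Table 6
```

With hypothetical non-saturated inputs (e.g. χ² = 84.24 on 36 df, n = 361), the same function returns CFI and RMSEA in the conventional "good fit" ranges (CFI > 0.9, RMSEA < 0.08).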
Table 7. Parameter estimates (main effects).

Predictor | Estimate | Std. Err | z-Value | p-Value | Std.lv | Std.All
SAP ~
AAR | −0.148 | 0.095 | −1.560 | 0.119 | −0.148 | −0.118
CAAI | −0.137 | 0.051 | −2.678 | 0.007 | −0.137 | −0.154
EEU | 0.065 | 0.078 | 0.839 | 0.402 | 0.065 | 0.060
ARFC | −0.094 | 0.045 | −2.103 | 0.035 | −0.094 | −0.161
AILA | −0.038 | 0.031 | −1.238 | 0.216 | −0.038 | −0.088
IC | 0.598 | 0.060 | 9.983 | <0.001 | 0.598 | 0.640
KAUS | 0.525 | 0.095 | 5.518 | <0.001 | 0.525 | 0.488
SQSI | 0.241 | 0.064 | 3.786 | <0.001 | 0.241 | 0.327
Table 8. Parameter estimates (interactions with gender).

Predictor | Estimate | Std. Err | z-Value | p-Value | Std.lv | Std.All
Interactions: Gender
AAR × Gender | −0.103 | 0.112 | −0.915 | 0.360 | −0.103 | −0.364
CAAI × Gender | 0.205 | 0.073 | 2.785 | 0.005 | 0.205 | 0.735
EEU × Gender | 0.257 | 0.105 | 2.438 | 0.015 | 0.257 | 0.910
ARFC × Gender | 0.096 | 0.047 | 2.068 | 0.039 | 0.096 | 0.321
AILA × Gender | 0.146 | 0.035 | 4.160 | <0.001 | 0.146 | 0.491
IC × Gender | −0.124 | 0.096 | −1.291 | 0.197 | −0.124 | −0.437
KAUS × Gender | −0.186 | 0.115 | −1.611 | 0.107 | −0.186 | −0.649
SQSI × Gender | −0.296 | 0.071 | −4.150 | <0.001 | −0.296 | −1.008
Table 9. Parameter estimates (interactions with location).

Predictor | Estimate | Std. Err | z-Value | p-Value | Std.lv | Std.All
Interactions: Location
AAR × Location | 0.357 | 0.107 | 3.326 | 0.001 | 0.357 | 1.281
CAAI × Location | 0.193 | 0.103 | 1.876 | 0.061 | 0.193 | 0.705
EEU × Location | 0.576 | 0.129 | 4.460 | <0.001 | 0.576 | 2.090
ARFC × Location | 0.002 | 0.047 | 0.043 | 0.966 | 0.002 | 0.006
AILA × Location | −0.065 | 0.034 | −1.919 | 0.055 | −0.065 | −0.213
IC × Location | −0.763 | 0.084 | −9.134 | <0.001 | −0.763 | −2.710
KAUS × Location | −0.139 | 0.131 | −1.062 | 0.288 | −0.139 | −0.494
SQSI × Location | −0.178 | 0.068 | −2.599 | 0.009 | −0.178 | −0.609
Table 10. Parameter estimates (variances).

Variable | Estimate | Std. Err | z-Value | p-Value | Std.lv | Std.All
SAP | 0.108 | 0.008 | 13.416 | <0.001 | 0.108 | 0.306
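The moderation effects in Tables 8 and 9 correspond to product terms added to the structural equation for SAP. A common way to construct such terms is to mean-centre each predictor and the moderator and then multiply them; the numpy sketch below (an illustrative helper, not the study's estimation code, which used SEM) shows only that construction step:

```python
import numpy as np

def with_interactions(X: np.ndarray, moderator: np.ndarray) -> np.ndarray:
    """Append moderator-product columns (e.g. AAR x Gender) to a
    mean-centred predictor matrix, mirroring the interaction terms
    reported in Tables 8 and 9."""
    Xc = X - X.mean(axis=0)              # centre the predictors
    mc = moderator - moderator.mean()    # centre the moderator
    return np.column_stack([Xc, Xc * mc[:, None], mc])

# Two predictors, binary moderator (e.g. gender coded 0/1):
X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 0.0], [7.0, 6.0]])
g = np.array([0.0, 1.0, 0.0, 1.0])
print(with_interactions(X, g).shape)  # → (4, 5): 2 main + 2 product + moderator
```

Centring before multiplying reduces the collinearity between the main effects and their products, which keeps the VIF-style diagnostics of Table 5 interpretable when interactions are added.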
Share and Cite

MDPI and ACS Style

Adewale, M.D.; Azeta, A.; Abayomi-Alli, A.; Sambo-Magaji, A. Empirical Investigation of Multilayered Framework for Predicting Academic Performance in Open and Distance Learning. Electronics 2024, 13, 2808. https://doi.org/10.3390/electronics13142808
