Article

A Multidimensional Evaluation of Technology-Enabled Assessment Methods during Online Education in Developing Countries

by Ambreen Sultana Khattak 1, Muhammad Khurram Ali 1,* and Mohammed Al Awadh 2

1 Industrial Engineering Department, University of Engineering and Technology, Taxila 47050, Pakistan
2 Department of Industrial Engineering, College of Engineering, King Khalid University, Abha 62529, Saudi Arabia
* Author to whom correspondence should be addressed.
Sustainability 2022, 14(16), 10387; https://doi.org/10.3390/su141610387
Submission received: 28 May 2022 / Revised: 2 August 2022 / Accepted: 2 August 2022 / Published: 20 August 2022
(This article belongs to the Topic Education and Digital Societies for a Sustainable World)

Abstract

The education sector faced unprecedented chaos due to the sudden, unavoidable adoption of online learning during the pandemic. The complexities associated with technology-enabled learning and assessment have different connotations in developing countries due to a lack of infrastructure and awareness. Such countries may switch to an online mode of education more frequently in the future due to highly volatile local political and cultural situations on top of the pandemic. This study evaluates the complexities associated with technology-enabled online assessment methods in Pakistan. Technology readiness and performance for student learning assessment are appraised by surveying approximately one thousand students from more than one hundred public and private sector engineering universities. A screened list of assessment alternatives and their influencing factors is then prioritized using multi-actor multi-criteria analysis (MAMCA), considering the perceptions of national policymakers, faculty members and students. The aggregate results reveal that, among the influencing factors, 'mental health' received the highest weightage, and stakeholders are indifferent to associated costs despite financial challenges. Automated MCQs secured the top position in the ranking list. Sensitivity analysis captures some disagreements among the stakeholders, which makes this study beneficial for policy modeling.

1. Introduction

The unprecedented outbreak of COVID-19 posed extraordinary challenges to higher education institutions in their teaching and learning activities. Apart from ensuring the quality of online education, the major challenge posed by the pandemic was assessing students remotely. Since most higher education institutions had no clear policy or guidelines, either for online teaching or for assessment methodology, the institutions felt handicapped. The students pursuing professional degrees suffered the most, since the delay in the educational process delayed the start of their careers [1].
Universities had to increase their productivity and efficiency by making use of technology and virtually extending their reach to students [2]. Consequently, most higher education institutions now offer online courses and have adopted modern techniques [3,4]. The new techniques are based on e-learning methodology, which includes the use of online learning portals, video conferencing, streaming and messaging on social media platforms such as WhatsApp and Telegram, and mobile apps such as Zoom, Google Meet and Microsoft Teams [5,6].
The inclusion of artificial-intelligence-based learning themes has changed the traditional way of education to the modern ways of learning [3,6,7]. These circumstances have, therefore, attracted many researchers and policymakers to evaluate these new technologies under the context of their acceptance among societies [8].
Online learning and assessment had a different connotation in developing countries [6]. The pandemic badly affected every facet of life there, most prominently the education sector, where educational institutions were among the first to be closed [3]. Apart from the usual challenges of the online mode of learning, developing countries faced much greater issues. Developed countries had been partially using the online mode of teaching for a long time; developing countries, however, mostly adopted it after the pandemic and therefore had to face plenty of hurdles in the implementation process [6,9].
There were several concerns: students from remote areas had no internet connection; students from poor families attempted to learn online while living with five to six family members in only two rooms; and educational tools such as laptops, mobile phones and tablets [9] (prerequisites for online education) were unavailable, as parents could not afford to provide them for every school-going child. All of this further amplified the severity of the pandemic for educational systems in developing countries.
Most of the studies on online education reported in the literature provide instances from developed countries. Models that have been used in some of the past studies on online learning and assessment are summarized in Figure 1. The presented studies evaluate the acceptance and optimization of online education using different techniques.
The TAM and DM models have been successfully utilized to measure the efficiency of online education systems. TAM-based studies [10] point to various solutions, such as e-monitoring [11], as being optimal. Some of the studies are from the perspective of COVID-19 [12,13], while others compare online education with the conventional education system [14]. Some studies compared different demographics, such as gender [4], in evaluating online education quality.
One important challenge associated with online education is how readily different cultures adopt the technology [9]. Research on technology acceptance has yielded important insights into the complexities of how and why people choose to embrace or reject technology, as well as the rate at which that acceptance or rejection occurs [15]. Students and instructors in both developed and developing nations faced technological challenges including the lack of technical skill sets and economic restrictions while implementing virtual classroom environments [12,13,16,17]. Figure 1 also highlights the research studies that focused on TAM and its factors. TAM gives insight into some of the factors that determine the user’s acceptance behavior of new technology. Those factors include Facilitating Conditions, Attitude, Computer Efficacy and Technological Anxiety [18,19,20,21].
Another important aspect is to measure the effectiveness of an online learning system as compared with traditional learning. There is a need to develop an efficient system for measuring the quality of online education. The DM model, often known as the IS (Information Systems) success model, has received a great deal of attention from information system experts and has been used in previous studies to assess the success of various systems [22,23,24,25]. The Information System Success model emphasizes six dimensions: Information Quality, System Quality, System Use, User Satisfaction, Individual Impact and Organizational Impact [2,22,26,27,28,29].
To concurrently address the issues of technology acceptance and effectiveness, both the TAM and DM models are utilized in the first phase of the current research, which is elaborated in the next section.
After implementing an effective online learning system, one of the major challenges is the assessment of online learning, specifically for developing countries, since the education ministries have yet to establish strategies that address key issues about exams, including integrity, accuracy and convenience. In addition to traditional evaluation techniques, such as projects and viva, the shift to online education has compelled policymakers to adopt non-traditional evaluation alternatives, including individual projects, MCQs within a time window, multiple-version exams, closed-book exams, participation-based assessment, e-proctoring [30], open-book exams (including variants with a slot within a timed window or a timed window), research papers, annotated anthologies, bibliographies, literature reviews, reports, memos, reflection papers, class presentations, one-on-one oral exams and audio-visual presentations [31,32,33,34,35], as shown in Figure 1.
When selecting an optimal assessment policy for educational institutes in developing countries, it is necessary to rationally evaluate the available alternatives and their influencing criteria within the cultural and social context of these countries. This can be ensured by accounting for all the affecting parameters and incorporating the opinions of stakeholders. Tools from the domain of multi-criteria decision-making (MCDM) can be utilized for these kinds of problems.
It is because of this multidimensional nature that MCDM methodologies are being used in several recent studies on the evaluation of online education. A few such instances include the prioritization of criteria for e-learning systems using fuzzy MCDM [36], evaluating the adequacy of online systems [37], satisfaction assessment of online education [38] and the selection of universities during the COVID-19 pandemic [39]. An extended form of MCDM is the Multi-Actor Multi-Criteria Analysis (MAMCA), which ensures the inclusion of different actors on top of the alternatives and criteria.
It can be inferred from the literature that researchers have not yet focused on evaluating the technology-enabled online learning and assessment in developing countries where the cultural and social circumstances make things much more complex and multifaceted than in the developed world.
There is a need to study the technology readiness/effectiveness and investigate the factors that can have an impact on online assessment methods. This can be ensured only by taking a broader spectrum of influencing criteria and involving the opinion of all concerned stakeholders. Moreover, previous studies have discussed some other aspects of online education; however, the issue of online exam conduct is only rarely addressed.
This study intends to propose a policy recommendation framework based on measuring technology readiness and effectiveness and prioritizing the online learning assessment methods using MAMCA. This has been done in two phases. The first phase measures technology readiness and effectiveness using the TAM and DM models, as this is an inevitable prerequisite for technology-enabled online assessment. The second phase explores and prioritizes the online assessment methods and their influencing criteria using MAMCA. These two phases are elaborated in the next section of the paper, which also includes the modeling of the problem. The developed models are solved using dedicated software in the Results and Discussion section, which also includes a sensitivity analysis that gauges the robustness of the models.

2. Materials and Methods

In this section, two phases of the adopted research methodology are elaborated. The first phase applies the hybrid TAM and DM models to evaluate if the students of developing countries are ready to accept and efficiently utilize the technology during online education without compromising the learning quality. In the second phase, the factors and alternatives for technology-enabled online assessment are prioritized using the multi-actor multi-criteria analysis (MAMCA) by taking all stakeholders on board.

2.1. Phase I: Evaluation of Technology-Enabled Online Learning Using TAM and DM

In this phase of research, the TAM and DM models are merged to evaluate whether the students have efficiently accepted the technology-enabled online learning without compromising the learning quality. The factors adopted from the TAM and DM models are summarized in Figure 2.
Twelve hypotheses are built using the incorporated TAM and DM variables; for each factor, an influence on the assessment of technology-enabled online learning is hypothesized. Based on a survey of the literature and the opinions of experts, technology-enabled online education is expected to be strongly influenced by facilitating conditions, attitude, computer efficacy, technological anxiety and all the other DM model constructs [4,18].

Questionnaire Development and Participants of the Study

The questionnaire developed for this study consisted of three sections. The first section covered demographic information, while the remaining two sections contained questions addressing all hypotheses. The questionnaire consisted of Likert-scale questions ranging from 1 to 5, with 1 representing "strongly disagree" and 5 representing "strongly agree". The data were collected through an online survey methodology to maximize the response rate [40]. The survey was conducted through Google Forms using online study groups and other social media platforms.
The designed questionnaire was circulated among students at different institutes of Pakistan (around 138 institutes) with diversified demographics, following a convenience sampling technique via email and social media, such as WhatsApp and Facebook. A total of 837 responses were obtained. The targeted participants were undergraduate and graduate students enrolled at public and private sector engineering universities. Responses were collected from both male and female participants to limit gender bias in the study.
As the research aims to investigate the factors that impact online learning, the TAM questionnaire has been modified in this study to match the current requirements. The predefined TAM model has a list of constructs; however, only four of them are screened for this analysis. Based on these constructs, twelve hypotheses have been made to measure the significance of each construct with respect to technology-enabled online learning. Some of the developed TAM hypotheses are elaborated here as examples.
Technological anxiety is a major construct of TAM. As the current study is from the perspective of students, it is important to assess their anxiety toward technology while implementing online learning methods [4]. The corresponding hypothesis states that Technological Anxiety has an impact on the evaluation of technology-enabled online learning.
Attitude is another significant construct of TAM. The term "attitude" refers to a person's inclination to react positively or negatively to an experience. In previous research on e-learning adoption, Refs. [41,42] found that attitude is a determining element of behavioral intention toward e-learning usage.
Refs. [43,44] revealed that attitude is a dominating element in influencing behavioral intention. Thus, the corresponding hypothesis proposes that Attitude has an impact on the evaluation of technology-enabled online learning. The degree to which a person feels that an organizational and technological infrastructure exists to enable the usage of a system is referred to as Facilitating Conditions. While conducting online education, devices have a crucial role: the network (e.g., wireless and satellite) and the technology (e.g., computers, laptops and smartphones) determine the type of online learning, which may be delivered as courses, modules or smaller learning units [45]. The notion is that sufficient infrastructure exists to facilitate the usage of technology. Based on the above discussion, we propose that Facilitating Conditions have an impact on the evaluation of technology-enabled online learning.
Similarly, the constructs taken from the DM model are predefined, and each hypothesis is framed according to the relation required to measure technology-enabled online learning, particularly from a student perspective [4]. For quantitative data analysis, Structural Equation Modelling [46] was adopted as per recommended practices [47]; it entails a thorough study of the data and verifies the model's validity.
Detailed steps of the SEM are presented in the form of a flow diagram in Figure 3. Table 1 shows the hypotheses that were created as part of the DM and TAM models. The attitude (ATT), computer efficacy (CE), facilitating conditions (FC) and technological anxiety (TA) constructs were taken from the technology acceptance model (TAM). Each of these elements is related to the dependent factor, the evaluation of technology-enabled online learning ("E"); for example, this relation captures how attitude is linked to the assessment of technology-enabled online learning. Similarly, all DM components are related to the dependent variable "E".

2.2. Phase II: Prioritization of Technology-Enabled Online Assessment Methodologies Using Multi-Actor Multi-Criteria Analysis (MAMCA)

In the second phase of the study, the technology-enabled assessment methodologies are prioritized using Multi-Actor Multi-Criteria Analysis (MAMCA), considering the opinions of three stakeholder groups: students, faculty members and policymakers from the education ministry. Between 35 and 40 students attended an in-person meeting conducted for this study.
Pairwise comparisons required for the Analytic Hierarchy Process were performed interactively in the respective software during the same meeting. The faculty members approached for this study were mostly deans, chairs and professors from various engineering institutes. Representatives of the third stakeholder group were from the HEC (Higher Education Commission) and the Ministry of Education. All of them had more than 10 years of professional experience in the field.
The Analytic Hierarchy Process (AHP) is utilized to model the MAMCA for each stakeholder as well as the overall results [48,49,50]. With MAMCA, the objectives of complex projects can be explicitly taken into account, yielding a good overview of the advantages and disadvantages of the different options. AHP is applied from the perspective of each stakeholder, and the aggregate priority weights and rankings are eventually computed. The results are presented per actor (stakeholder) and as a whole [51].
The pairwise comparisons result in a square matrix S of size n × n, as shown in Equation (1). The element s_ij represents the relative importance of criterion i with respect to criterion j. In the matrix, s_ij = 1 only when i = j, and the lower-triangular elements are the reciprocals s_ji = 1/s_ij.
S = \begin{bmatrix} 1 & s_{12} & s_{13} & \cdots & s_{1n} \\ 1/s_{12} & 1 & s_{23} & \cdots & s_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1/s_{1n} & 1/s_{2n} & 1/s_{3n} & \cdots & 1 \end{bmatrix} \quad (1)
The normalized form of the scores presented in Equation (1) is given in Equation (2).
c_{ij} = \frac{s_{ij}}{\sum_{k=1}^{n} s_{kj}}, \quad \text{for } i, j = 1, 2, \ldots, n \quad (2)
After normalization, the weight vector ‘w’ of decision-makers is presented (Equation (3)), which is also referred to as the priority vector of the decision-maker.
w_i = \frac{1}{n} \sum_{j=1}^{n} c_{ij}, \quad \text{for } i = 1, 2, \ldots, n \quad (3)
All alternatives and criteria are based on the elements considered in determining the impact of online learning. A questionnaire based on Saaty's 9-point scale was created to collect responses from all stakeholders, including students, teachers and senior members of the HEC. Face-to-face meetings, as well as online sessions on Zoom and Microsoft Teams, were used to collect data. This procedure was carried out using collaborative AHP decision-making software to assign weights to all proposed criteria and alternatives for conducting online exams.
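To make Equations (1)–(3) concrete, the following is a minimal Python/NumPy sketch of deriving a priority vector from a pairwise comparison matrix; the example matrix is hypothetical and is not taken from the survey data or the commercial software used in this study.

```python
import numpy as np

def ahp_priority_vector(S: np.ndarray) -> np.ndarray:
    """Approximate AHP priority weights from a reciprocal pairwise
    comparison matrix S built from Saaty's 1-9 scale."""
    # Equation (2): normalize each column by its column sum.
    C = S / S.sum(axis=0, keepdims=True)
    # Equation (3): average the normalized rows to obtain the priority vector.
    return C.mean(axis=1)

# Hypothetical 3-criterion comparison matrix (illustration only).
S = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])
w = ahp_priority_vector(S)
print(w.round(3), w.sum())  # weights sum to 1
```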

2.3. Linkage of the Two Phases

While the main emphasis of this research is on reducing the complexities associated with the prioritization of technology-enabled assessment methods, we first evaluated whether students are ready to accept the respective technology effectively. Once the acceptance and effectiveness of online education are appraised, we enter the second phase. Since the main focus of this paper is on the prioritization of assessment alternatives, we excluded the other two stakeholders during the first phase. We assume that faculty members of engineering universities are either already comfortable with the technology or will be trained to handle this new technology as a part of their job.

3. Results and Discussion

In this section, results obtained from the two phases of research are presented in two subsections.

3.1. Evaluation of Technology-Enabled Online Learning

This section presents the results obtained from the hybrid TAM and DM models, which encompass the performance and acceptance of technology-enabled online education. The designed questionnaire was circulated among the students of more than one hundred engineering institutions in Pakistan with diversified demographics, following a convenience sampling technique. A total of 837 responses were formally recorded, of which three were deleted as outliers. Demographic details with the frequency of the records are shown in Table 2.
The observed data are approximately normally distributed, as the skewness and kurtosis values of the variables are within the ranges of ±1.96 and ±1, respectively. Following the confirmation of data normality, reliability and validity tests of the suggested model were performed.
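As a hedged illustration of this normality screening step, the sketch below computes skewness and excess kurtosis per Likert item with SciPy and flags values outside the bounds quoted above; the data frame and its columns are assumed placeholders, not the study's dataset.

```python
import pandas as pd
from scipy.stats import skew, kurtosis

def flag_non_normal_items(df: pd.DataFrame,
                          skew_bound: float = 1.96,
                          kurt_bound: float = 1.0) -> pd.DataFrame:
    """Report skewness and excess kurtosis for each item and flag
    values falling outside the bounds used in this study."""
    rows = []
    for col in df.columns:
        values = df[col].dropna()
        s = skew(values)
        k = kurtosis(values)  # Fisher definition (excess kurtosis)
        rows.append({"item": col, "skewness": s, "kurtosis": k,
                     "within_bounds": abs(s) <= skew_bound and abs(k) <= kurt_bound})
    return pd.DataFrame(rows)
```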
Table 3 includes the constructs of the current research with different indicators/items, depending upon the required information. A construct can have as few as two items/indicators if the required dimension of the construct is fully captured [52]. Each item has a different loading value, but all loading values are greater than 0.50, which means that the items contribute significantly to their constructs [53]. Structural equation modeling [46] was used to analyze the results quantitatively.
The consistency and reliability of the records were checked using a reliability test that measured the dependability of each construct using the loading values of its items [53]. A Cronbach's alpha value greater than 0.7 shows that the sample is highly reliable [54]. From Table 3, the values of Cronbach's alpha (0.7–0.88) indicate significantly high internal consistency of the data. User Satisfaction has the highest Cronbach's alpha value (0.88), indicating the highest internal consistency among all the constructs.
A convergent validity test was conducted to measure the accuracy of the research instrument. It is determined via three measures: item loadings, composite reliability (CR) and average variance extracted (AVE). The values of the item loadings, CR and AVE must be greater than 0.5, 0.7 and 0.5, respectively [53]. The average variance extracted (AVE) is a measure of how much variation a construct captures compared to how much variance is attributable to measurement error. The obtained values of CR, AVE and item loadings are above the threshold points, which confirms convergent validity. All the indicators have item loadings greater than 0.50, CR values greater than 0.82 and AVE values greater than 0.50, which means that all the constructs are accurate for this predictive model. R-squared indicates the amount of shared variation between two or more variables.
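A compact sketch of how these reliability and validity measures can be computed from standardized loadings and item scores is given below; the formulas follow the standard definitions of Cronbach's alpha, CR and AVE, and the numbers used are hypothetical placeholders rather than the study's values or its software output.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of one construct."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    error_variances = 1 - loadings ** 2
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + error_variances.sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings."""
    return float(np.mean(loadings ** 2))

# Hypothetical standardized loadings for one construct (illustration only).
loadings = np.array([0.72, 0.81, 0.68])
print(composite_reliability(loadings), average_variance_extracted(loadings))

# Hypothetical item responses for the same construct.
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))
items = latent + rng.normal(scale=0.6, size=(200, 3))
print(round(cronbach_alpha(items), 2))
```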
The R-squared values for the two independent variables are 0.291 and 0.729, and the value for the dependent variable is 0.879. A low R-squared of at least 0.1 (or 10 percent) is acceptable on the condition that some or most of the predictor or explanatory variables are statistically significant [55]; as all of the constructs are statistically significant, the R-squared values are acceptable in this case. Hence, the proposed model is highly significant and satisfies the requirements of the research instrument, as confirmed by the statistical tests, which endorse that the data are valid and accurate.
Hypothesis testing was performed to confirm the significance of the predicted hypothesis for each factor of the study, as shown in Table 4. The criterion used to evaluate each hypothesis is the t-value of the corresponding path loading; significant t-values signify support for the proposed paths, and Table 4 reports the mean, standard deviation and t-statistic for each hypothesis. The cut-off criterion was a t-value greater than or equal to 1.645 for an alpha level of 0.05 [31,56]. Twelve hypotheses concern the evaluation of technology-enabled online learning; eleven are accepted, while H11, addressing Technological Anxiety, is rejected.
The results of hypothesis testing indicate that eleven out of twelve hypotheses were accepted, whereas hypothesis H11 (Technological Anxiety having an impact on the evaluation of technology-enabled online learning) was rejected. For an alpha of 0.05, a t-value greater than 1.645 indicates that the hypothesis is accepted. Hypothesis H1, for example, which states that Attitude has an impact on the evaluation of technology-enabled online learning, has a t-value of 2.44, which is greater than 1.645; hence, H1 is accepted. Hypothesis H11 has a t-value of 1.13, which is less than 1.645; hence, H11 is rejected.
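As a hedged illustration of how such t-statistics are typically obtained in PLS-SEM, the sketch below derives a t-value from bootstrap resamples of a path coefficient and applies the 1.645 cut-off; the resampling distribution shown is a synthetic stand-in, not the study's data or software pipeline.

```python
import numpy as np

def bootstrap_t_value(path_estimates: np.ndarray) -> float:
    """t-statistic of a path coefficient approximated as the mean of the
    bootstrap estimates divided by their standard deviation."""
    return path_estimates.mean() / path_estimates.std(ddof=1)

rng = np.random.default_rng(0)
# Synthetic bootstrap estimates of one path coefficient (e.g., ATT -> E).
estimates = rng.normal(loc=0.18, scale=0.07, size=5000)
t = bootstrap_t_value(estimates)
print(round(t, 2), "supported" if t >= 1.645 else "not supported")  # one-tailed, alpha = 0.05
```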
The results of this study indicate that all of the technology factors, apart from technological anxiety, are important for the evaluation of technology-enabled online learning [3]. This means that the engineering students of developing countries, such as Pakistan, feel that the evaluation of technology-enabled online learning can be enhanced if modern technologies are effectively adopted. Moreover, it can be inferred from the results that students do not feel any 'Technological Anxiety' during technology-enabled online learning.
This means that the younger generation is quite familiar with technology, and they do not face any fear or anxiety while using technology for educational purposes. Overall, the results obtained from the students’ responses indicate their satisfaction with online education in a pandemic-like situation.

3.2. Prioritization of Technology-Enabled Online Assessment Methodologies

This phase of the study intends to prioritize the assessment methods adopted in remote exams conducted during online education, considering the overall judgments of various stakeholders involved in the education system (faculty, students and policymakers from the ministry of education). Analytic Hierarchy Process (AHP)-based multi-actor multi-criteria analysis was deployed to rationally prioritize the screened alternatives and influencing factors.
As a standard first step of MCDM, it is required to prepare the list of assessment alternatives and the factors that affect their prioritized adoption. A large number of such factors were initially explored during deliberations with education experts and selected students. A consensus was developed to include the six most crucial criteria for the MAMCA-based evaluation, which are presented in Table 5 along with their inclusion rationale and literature evidence.
Four assessment alternatives had been short-listed by the experts concerning ground realities for online exam conduct in Pakistan. These include ‘Project and Viva’, ‘Automated MCQs and Short Questions’, ‘E-Proctored Exams’ and ‘Open Book Written Exams’. All of these selected alternatives are for online assessment when the traditional paper-based exams with the physical on-campus presence of students and faculty are not possible. In the MCDM terminology, we use the term ‘alternatives’ for available options.
Since we are evaluating these options to facilitate policymakers in the education sector, the term 'policy' is used for all these assessment methods. Six criteria are screened: fairness, accuracy, cost, convenience, mental health and availability of infrastructure. The complete AHP hierarchy with the shortlisted alternatives and criteria is shown in Figure 4. The arrows originating from each criterion and meeting every alternative one by one represent the pairwise evaluations performed in the analysis.
An AHP-based questionnaire consisting of pairwise comparison matrices of criteria and assessment alternatives using Saaty's 9-point ratio scale was created and distributed in person to faculty members and policymakers at their respective locations in individual meetings. Moreover, on-campus meetings were held with university students to gather their opinions on the prioritization of factors and alternatives using similar questionnaires. AHP-based MAMCA was applied to the data obtained from all three stakeholder groups: students, faculty members from accredited universities and policymakers from the HEC [34]. To solve the created AHP models, commercially available software was used to reduce the computing effort. The MAMCA results in the form of criteria weights and prioritized alternatives are shown in Figure 5 and Figure 6.
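To illustrate how per-stakeholder priorities are combined into the overall MAMCA results, the sketch below averages the criteria weights of the three groups with equal actor weights, as stated in the limitations section; the per-group weight vectors are hypothetical placeholders, not the values computed in this study.

```python
import numpy as np

# Hypothetical criteria weights per stakeholder group, each summing to 1
# (order: Fairness, Accuracy, Cost, Convenience, Mental Health, Infrastructure).
stakeholder_weights = {
    "students":     np.array([0.20, 0.15, 0.05, 0.15, 0.30, 0.15]),
    "faculty":      np.array([0.22, 0.15, 0.05, 0.10, 0.20, 0.28]),
    "policymakers": np.array([0.20, 0.15, 0.06, 0.14, 0.28, 0.17]),
}

# Equal-weight aggregation across actors for the overall results.
aggregate = np.mean(list(stakeholder_weights.values()), axis=0)
print(aggregate.round(3), aggregate.sum())
```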
Students and policymakers have given a top priority to mental wellbeing during online assessments. For faculty members, the availability of infrastructure is the most important factor, which is followed by Fairness and Mental Health. One of the interesting results obtained from the prioritization of criteria is that all three stakeholders have given the least importance to the Cost factor.
This means that there is a consensus among the stakeholders about sustainable investments in online education even if the incurred costs are high. Almost equal weightage is assigned to Fairness, which emerged as the second most important factor among all stakeholders. There seems a clear disagreement on the Availability of Infrastructure where the faculty members have assigned a priority that is much higher than those assigned by both the students and policymakers.
The results computed for the policy alternatives during the online assessment are compiled in Figure 6. Faculty gave top priority to E-Proctored Exams and Automated MCQs/Short Questions, while giving a marginally lower priority to Open Book exams. However, they showed less interest in conducting viva and evaluating assigned projects. This is likely due to the time and labor required to assess a large number of students orally using video calls. Contrary to faculty members and policymakers, the students gave the least importance to the E-proctored exam. Although Open Book exams seem a popular assessment method for students, they do not seem to be attractive to faculty and policymakers.
Overall results representing the perspectives of all stakeholders are summarized in Figure 7. Mental Health is the most important factor in the aggregate results followed by Fairness and Infrastructure Availability. Automated MCQs have secured a top ranking among the four online assessment policy alternatives, which is followed by the Open Book exams. There is only a marginal difference between E-Proctored Exams and Projects and Viva. Both of these alternatives seem less attractive when looking at results as a whole.

3.3. Analysing the Effect of Priority Variations

Sensitivity analysis is carried out by systematically altering the inputs and observing their impact on the ranking of assessment alternatives. Figure 8 shows six different scenarios in which the priority weight input is changed for each criterion, one by one. The names of the influencing criteria are represented by horizontal lines, while their weights are scaled on the left vertical axis. Each alternative's performance is mapped against all the parameters and shown accordingly. On the right vertical axis, the overall scores earned by the options are highlighted. In each scenario, the weight of a single parameter is fixed at 60%, as recommended by [82]. The other parameters' weights vary in proportion to their original weights such that the sum of all the criteria weights stays equal to 100%. The outcome of the sensitivity analysis is summarized in Figure 8.
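The reweighting rule described above can be sketched as follows: one criterion is pinned at 60% and the remaining criteria share the other 40% in proportion to their original weights; the baseline weights in the example are hypothetical, not the study's computed values.

```python
import numpy as np

def fix_criterion_weight(weights: np.ndarray, index: int,
                         fixed: float = 0.60) -> np.ndarray:
    """Set one criterion's weight to `fixed` and rescale the remaining
    weights proportionally so that all weights still sum to 1."""
    weights = np.asarray(weights, dtype=float)
    others_sum = weights.sum() - weights[index]
    scale = (1.0 - fixed) / others_sum
    adjusted = weights * scale
    adjusted[index] = fixed
    return adjusted

# Hypothetical baseline weights for the six criteria.
base = np.array([0.22, 0.16, 0.05, 0.12, 0.28, 0.17])
print(fix_criterion_weight(base, index=4).round(3))  # Mental Health pinned at 60%
```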
The rankings of the top-most Alternative (Automated MCQs and Short Questions) are preserved in four out of the six scenarios. The only two exceptions are the cases when Infrastructure Availability and Cost were given the top priorities. However, even in these extreme cases, it remained second only to the Open Book Exams. It can therefore be concluded that the top two alternatives are robust with respect to different scenarios. An almost similar pattern is observed for the two bottom-most alternatives. They preserve their bottom-most position in five scenarios.
Overall, it can be concluded that the top-two and bottom-two alternatives are mostly preserved under the different perspectives: 'Automated MCQs and Short Questions' and 'Open Book Exams' remain at the top, while 'E-Proctored Exams' and 'Projects and Viva' remain at the bottom. It can therefore be concluded that changes in the priority weights do not have a significant effect on the overall results.

4. Conclusions and Recommendations

Despite being one of the toughest challenges humanity has ever faced, the COVID-19 pandemic had an overall evolutionary effect on the education sector, especially in developing countries. This study shows that students of engineering institutions in Pakistan are ready to embrace technology during online learning and assessment without affecting their academic productivity. In developing countries, such as Pakistan, it is not uncommon to face unavoidable political, cultural and regional scenarios in which the physical movement of students and educationists becomes restricted. This paper, therefore, opens a scientific horizon for policymakers to efficiently assess students' learning using a technology-enabled online education system in the future.
While the first phase of this research, using hybrid methodologies, confirms the students' ability to efficiently handle the technology during online learning, the second phase rationally prioritizes the factors and alternatives for online assessment. Overall, the mental health of students/teachers and the availability of infrastructure received the highest aggregate priority weights. There are only a couple of disagreements among the stakeholders on the assigned factor weights. Students and policymakers gave the highest priority to 'mental health', whereas faculty members feel that infrastructure availability and fairness in online assessments are the most crucial factors. This is probably because faculty members feel that mental health is a function of system integrity and infrastructure availability.
The results reveal that there is a consensus among the stakeholders in assigning the highest priority to Automated MCQs/Short Questions delivered via an efficiently designed online infrastructure for the conduct and management of online exams. Open Book exams emerged as the second most important alternative, followed by E-Proctored exams and Project/Viva-based exams. One significant difference of opinion observed in the alternative ranking is that students seem less comfortable with E-proctored exams, which are given a higher priority by faculty and policymakers. Sensitivity analysis confirms the overall model robustness, although there are a few interesting exceptions when Infrastructure Availability and Cost are artificially assigned the highest weights.
The major contribution of this research is to rank and recommend the evaluation methodologies during distance education and online learning in Pakistan and other developing countries having similar socio-political and technological backgrounds.

4.1. Policy Implication

  • This research provides a multidimensional set of results as flexible policy guidance for local as well as international education policymakers in setting the stakeholder priorities for the commencement of online education in developing countries.
  • Whenever online education becomes the only option left to continue the education process worldwide, it requires an efficient system that considers all the factors and fulfills the needs of all stakeholders. This study can play a role in the development of an effective recommendation system for improved online education and exam conduct.
  • The top two modes of examination suggested by stakeholders, after keeping various factors in mind, are Automated MCQs/Short Questions and Open Book exams.

4.2. Limitations and Recommendations

  • It is worth noting that the results were achieved by solving the established models with specific data in relation to the current scenario. Political instability and financial uncertainty will have an impact on model inputs and outcomes. While the second phase takes all three stakeholders on board, the first phase is limited to students only. Another limitation of the study is that an equal weightage was given to the opinion of three stakeholders in the aggregate results. However, the differences in their assigned priorities are clearly highlighted and discussed.
This research can be expanded in the future by examining additional demographics, such as elementary and intermediate level students. Furthermore, the factors influencing different catastrophe situations would change, allowing those circumstances to be investigated using new elements. The study’s findings can be extended to other developing countries with modest changes.

Author Contributions

Data curation, A.S.K. and M.A.A.; Formal analysis, A.S.K. and M.A.A.; Funding acquisition, M.A.A.; Investigation, A.S.K., M.K.A. and M.A.A.; Methodology, M.K.A. and M.A.A.; Software, M.K.A.; Supervision, M.K.A.; Validation, A.S.K. and M.A.A.; Visualization, M.A.A.; Writing—original draft, A.S.K.; Writing—review and editing, M.K.A. and M.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by King Khalid University, grant number [RGP.2/163/43].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All necessary data samples are provided in the paper.

Acknowledgments

The authors are grateful to the Industrial Engineering Department at the University of Engineering and Technology, Taxila, Pakistan, for providing all the supporting services necessary to complete this project. The authors extend their appreciation to the Deanship of Scientific Research, King Khalid University for funding this work through the General Research Project under grant number RGP2/163/43.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. United Nations. Policy Brief: Education during COVID-19 and Beyond. 2020. Available online: https://www.un.org/development/desa/dspd/wp-content/uploads/sites/22/2020/08/sg_policy_brief_covid-19_and_education_august_2020.pdf (accessed on 20 May 2022).
  2. Sapkota, P.P.; Narayangarh, C. Determining Factors of the Use of E-Learning during COVID-19 Lockdown among the College Students of Nepal: A Cross-Sectional Study; A Mini Research Report; Balkumari College: Chitwan, Nepal, 2020. [Google Scholar]
  3. Nguyen, T. The effectiveness of online learning: Beyond no significant difference and future horizons. MERLOT J. Online Learn. Teach. 2015, 11, 309–319. [Google Scholar]
  4. Shahzad, A.; Hassan, R.; Aremu, A.Y.; Hussain, A.; Lodhi, R.N. Effects of COVID-19 in E-learning on higher education institution students: The group comparison between male and female. Qual. Quant. 2020, 55, 805–826. [Google Scholar] [CrossRef] [PubMed]
  5. Scholtz, B.; Kapeso, M. An m-learning framework for ERP systems in higher education. Interact. Technol. Smart Educ. 2014, 11, 287–301. [Google Scholar] [CrossRef]
  6. Oyedotun, T.D. Sudden change of pedagogy in education driven by COVID-19: Perspectives and evaluation from a developing country. Res. Glob. 2020, 2, 100029. [Google Scholar] [CrossRef]
  7. Di Vaio, A.; Palladino, R.; Hassan, R.; Escobar, O. Artificial intelligence and business models in the sustainable development goals perspective: A systematic literature review. J. Bus. Res. 2020, 121, 283–314. [Google Scholar] [CrossRef]
  8. Hamadi, M.; El-Den, J.; Azam, S.; Sriratanaviriyakul, N. Integrating social media as cooperative learning tool in higher education classrooms: An empirical study. J. King Saud Univ.-Comput. Inf. Sci. 2021, 34, 3722–3731. [Google Scholar] [CrossRef]
  9. Rapanta, C.; Botturi, L.; Goodyear, P.; Guàrdia, L.; Koole, M. Online university teaching during and after the COVID-19 crisis: Refocusing teacher presence and learning activity. Postdigital Sci. Educ. 2020, 2, 923–945. [Google Scholar] [CrossRef]
  10. Lazim, C.S.L.M.; Ismail, N.D.B.; Tazilah, M.D.A.K. Application of technology acceptance model (TAM) towards online learning during COVID-19 pandemic: Accounting students perspective. Int. J. Bus. Econ. Law 2021, 24, 13–20. [Google Scholar]
  11. Daradoumis, T.; Rodriguez-Ardura, I.; Faulin, J.; Juan, A.; Xhafa, F.; Lopez, F.J.M. Customer Relationship Management applied to higher education: Developing an e-monitoring system to improve relationships in electronic learning environments. Int. J. Serv. Technol. Manag. 2010, 14, 103–125. [Google Scholar] [CrossRef]
  12. Almaiah, M.A.; Al-Khasawneh, A.; Althunibat, A. Exploring the critical challenges and factors influencing the E-learning system usage during COVID-19 pandemic. Educ. Inf. Technol. 2020, 25, 5261–5280. [Google Scholar] [CrossRef]
  13. Tănase, F.-D.; Demyen, S.; Manciu, V.-C.; Tănase, A.-C. Online Education in the COVID-19 Pandemic—Premise for Economic Competitiveness Growth? Sustainability 2022, 14, 3503. [Google Scholar] [CrossRef]
  14. Qazi, A.; Naseer, K.; Qazi, J.; AlSalman, H.; Naseem, U.; Yang, S.; Hardaker, G.; Gumaei, A. Conventional to online education during COVID-19 pandemic: Do develop and underdeveloped nations cope alike. Child. Youth Serv. Rev. 2020, 119, 105582. [Google Scholar] [CrossRef] [PubMed]
  15. Ismail, A.; Kuppusamy, K. Web accessibility investigation and identification of major issues of higher education websites with statistical measures: A case study of college websites. J. King Saud Univ.-Comput. Inf. Sci. 2019, 34, 901–911. [Google Scholar] [CrossRef]
  16. Almaiah, M.A.; Al Mulhem, A. A conceptual framework for determining the success factors of e-learning system implementation using Delphi technique. J. Theor. Appl. Inf. Technol. 2018, 96, 5962–5976. [Google Scholar]
  17. Ang, L.; Buttle, F. Customer retention management processes: A quantitative study. Eur. J. Mark. 2006, 40, 83–99. [Google Scholar] [CrossRef] [Green Version]
  18. Gibson, S.G.; Harris, M.L.; Colaric, S.M. Technology acceptance in an academic context: Faculty acceptance of online education. J. Educ. Bus. 2008, 83, 355–359. [Google Scholar] [CrossRef]
  19. Abbasi, S.; Ayoob, T.; Malik, A.; Memon, S.I. Perceptions of students regarding E-learning during COVID-19 at a private medical college. Pak. J. Med. Sci. 2020, 36, S57–S61. [Google Scholar] [CrossRef]
  20. Alfadda, H.A.; Mahdi, H.S. Measuring students’ use of zoom application in language course based on the technology acceptance model (TAM). J. Psycholinguist. Res. 2021, 50, 883–900. [Google Scholar] [CrossRef]
  21. Reimers, F.M.; Schleicher, A. A Framework to Guide an Education Response to the COVID-19 Pandemic of 2020; OECD: Paris, France, 2020. [Google Scholar]
  22. DeLone, W.H.; McLean, E.R. Information systems success: The quest for the dependent variable. Inf. Syst. Res. 1992, 3, 60–95. [Google Scholar] [CrossRef] [Green Version]
  23. DeLone, W.H.; McLean, E.R. The DeLone and McLean model of information systems success: A ten-year update. J. Manag. Inf. Syst. 2003, 19, 9–30. [Google Scholar]
  24. Ng, H.H.; Tan, H.H. An annotated checklist of the non-native freshwater fish species in the reservoirs of Singapore. COSMOS 2010, 6, 95–116. [Google Scholar] [CrossRef]
  25. Petter, S.; Delone, W.; McLean, E.R. Information Systems Success: The Quest for the Independent Variables. J. Manag. Inf. Syst. 2013, 29, 7–62. [Google Scholar] [CrossRef]
  26. Chen, T.; Peng, L.; Yin, X.; Rong, J.; Yang, J.; Cong, G. Analysis of User Satisfaction with Online Education Platforms in China during the COVID-19 Pandemic. Healthcare 2020, 8, 200. [Google Scholar] [CrossRef] [PubMed]
  27. Dawadi, S.; Giri, R.A.; Simkhada, P. Impact of COVID-19 on the Education Sector in Nepal: Challenges and Coping Strategies. 2020; 16p. Available online: https://files.eric.ed.gov/fulltext/ED609894.pdf (accessed on 18 April 2022).
  28. Lauwerier, T. Reactions to COVID-19 from International Cooperation in Education: Between Continuities and Unexpected Changes. Blog edu‘C’oop. 2020. Available online: https://archive-ouverte.unige.ch/unige:138268 (accessed on 18 April 2022).
  29. Mohan, G.; Mccoy, S.; Carroll, E.; Mihut, G.; Lyons, S.; Domhnaill, C.M. Learning for All? Second-Level Education in Ireland during COVID-19. ESRI Survey and Statistical Report Series 92. 2020. Available online: https://www.researchgate.net/profile/Selina-Mccoy/publication/342453663_Learning_For_All_Second-Level_Education_in_Ireland_During_COVID-19/links/5ef52ec4458515505072782b/Learning-For-All-Second-Level-Education-in-Ireland-During-COVID-19.pdf (accessed on 19 April 2022).
  30. Teräs, M.; Suoranta, J.; Teräs, H.; Curcher, M. Post-COVID-19 education and education technology ‘solutionism’: A seller’s market. Postdigital Sci. Educ. 2020, 2, 863–878. [Google Scholar] [CrossRef]
  31. Perets, E.A.; Chabeda, D.; Gong, A.Z.; Huang, X.; Fung, T.S.; Ng, K.Y.; Bathgate, M.; Yan, E.C.Y. Impact of the Emergency Transition to Remote Teaching on Student Engagement in a Non-STEM Undergraduate Chemistry Course in the Time of COVID-19. J. Chem. Educ. 2020, 97, 2439–2447. [Google Scholar] [CrossRef]
  32. Hoodbhoy, P. Cheating on Online Exams. Dawn, 23 January 2021. [Google Scholar]
  33. Star, W. HEC Allows Universities to Hold Online Exams after Days-Long Protests by Students. Dawn, 27 January 2021. [Google Scholar]
  34. HEC. HEC Policy Guidance Series on COVID-19. Guidance on Assessments and Examinations; Policy Guidance No. 6; HEC: Islamabad, Pakistan, 2020. [Google Scholar]
  35. Szymanski, D.M.; Hise, R.T. E-satisfaction: An initial examination. J. Retail. 2000, 76, 309–3222. [Google Scholar] [CrossRef]
  36. Güldeş, M.; Gürcan, F.; Atici, U.; Şahin, C. A fuzzy multi-criteria decision-making method for selection of criteria for an e-learning platform. Eur. J. Sci. Technol. 2022, 797–806. [Google Scholar] [CrossRef]
  37. Şahin, S.; Kargın, A.; Yücel, M. Hausdorff Measures on Generalized Set Valued Neutrosophic Quadruple Numbers and Decision Making Applications for Adequacy of Online Education. Neutrosophic Sets Syst. 2021, 40, 86–116. [Google Scholar]
  38. Xu, X.; Xie, J.; Wang, H.; Lin, M. Online education satisfaction assessment based on cloud model and fuzzy TOPSIS. Appl. Intell. 2022. [Google Scholar] [CrossRef]
  39. Nanath, K.; Sajjad, A.; Kaitheri, S. Decision-making system for higher education university selection: Comparison of priorities pre-and post-COVID-19. J. Appl. Res. High. Educ. 2021, 14, 347–365. [Google Scholar] [CrossRef]
  40. Creswell, J.W. A Concise Introduction to Mixed Methods Research; SAGE Publications: London, UK, 2021. [Google Scholar]
  41. Cheung, R.; Vogel, D. Predicting user acceptance of collaborative technologies: An extension of the technology acceptance model for e-learning. Comput. Educ. 2013, 63, 160–175. [Google Scholar] [CrossRef]
  42. Tosuntaş, Ş.B.; Karadağ, E.; Orhan, S. The factors affecting acceptance and use of interactive whiteboard within the scope of FATIH project: A structural equation model based on the Unified Theory of acceptance and use of technology. Comput. Educ. 2015, 81, 169–178. [Google Scholar] [CrossRef]
  43. Chu, T.-H.; Chen, Y.-Y. With good we become good: Understanding e-learning adoption by the theory of planned behavior and group influences. Comput. Educ. 2016, 92, 37–52. [Google Scholar] [CrossRef]
  44. Zogheib, B.; Rabaa’I, A.; Zogheib, S.; Elsaheli, A. University Student Perceptions of Technology Use in Mathematics Learning. J. Inf. Technol. Educ. Res. 2015, 14, 417–438. [Google Scholar] [CrossRef] [Green Version]
  45. Kisanjara, S.; Tossy, T.M.; Sife, A.S.; Msanjila, S.S. An integrated model for measuring the impacts of e-learning on students’ achievement in developing countries. Int. J. Educ. Dev. Using Inf. Commun. Technol. 2017, 13, 109–127. [Google Scholar]
  46. Ramli, N.H.H.; Alavi, M.; Mehrinezhad, S.A.; Ahmadi, A. Academic Stress and Self-Regulation among University Students in Malaysia: Mediator Role of Mindfulness. Behav. Sci. 2018, 8, 12. [Google Scholar] [CrossRef] [Green Version]
  47. Pajarianto, D. Study from home in the middle of the COVID-19 pandemic: Analysis of religiosity, teacher, and parents support against academic stress. Talent. Dev. Excell. 2020, 12, 1791–1807. [Google Scholar]
  48. Bilal, M.; Ali, M.K.; Qazi, U.; Hussain, S.; Jahanzaib, M.; Wasim, A. A multifaceted evaluation of hybrid energy policies: The case of sustainable alternatives in special Economic Zones of the China Pakistan Economic Corridor (CPEC). Sustain. Energy Technol. Assess. 2022, 52, 101958. [Google Scholar] [CrossRef]
  49. Hasnain, S.; Ali, M.K.; Akhter, J.; Ahmed, B.; Abbas, N. Selection of an Industrial Boiler for a Soda Ash Plant using Analytic Hieararchy Process and TOPSIS Approaches. Case Stud. Therm. Eng. 2020, 19, 100636. [Google Scholar] [CrossRef]
  50. Ahmad, T.; Ali, M.K.; Malik, K.A.; Jahanzaib, M. Sustaining Power Production in Hydropower Stations of Developing Countries. Sustain. Energy Technol. Assess. 2020, 37, 100637. [Google Scholar]
  51. Akram, W.; Adeel, S.; Tabassum, M.; Jiang, Y.; Chandio, A.; Yasmin, I. Scenario Analysis and Proposed Plan for Pakistan Universities–COVID–19: Application of Design Thinking Model; Cambridge Open Engage: Cambridge, UK, 2020. [Google Scholar]
  52. AlQudah, A.A. Accepting Moodle by academic staff at the University of Jordan: Applying and extending TAM in technical support factors. Eur. Sci. J. 2014, 10, 183–200. [Google Scholar]
  53. Hair, J.F.; Risher, J.J.; Sarstedt, M.; Ringle, C.M. When to use and how to report the results of PLS-SEM. Eur. Bus. Rev. 2019, 31, 2–24. [Google Scholar] [CrossRef]
  54. DeVellis, R.F. Scale Development: Theory and Applications, 3rd ed.; Sage Publications: Saunders Oaks, CA, USA, 2016; Volume 26. [Google Scholar]
  55. Ozili, P.K. The Acceptable R-Square in Empirical Modeling for Social Science Research. 2022. Available online: https://ssrn.com/abstract=4128165 (accessed on 19 April 2022).
  56. Shahrabi, M.A.; Ahaninjan, A.; Nourbakhsh, H.; Ashlubolagh, M.A.; Abdolmaleki, J.; Mohamadi, M. Assessing psychometric reliability and validity of Technology Acceptance Model (TAM) among faculty members at Shahid Beheshti University. Manag. Sci. Lett. 2013, 3, 2295–2300. [Google Scholar] [CrossRef]
  57. Choi, B.; Jegatheeswaran, L.; Minocha, A.; AlHilani, M.; Nakhoul, M.; Mutengesa, E. The impact of the COVID-19 pandemic on final year medical students in the United Kingdom: A national survey. BMC Med. Educ. 2020, 20, 206. [Google Scholar] [CrossRef]
  58. Hasan, L.F.; Elmetwaly, A.; Zulkarnain, S. Make the educational decisions using the analytical hierarchy process AHP in the light of the corona pandemic. Eng. Appl. Artif. Intell. 2020, 2, 1–13. [Google Scholar]
  59. Mailizar, M.; Burg, D.; Maulina, S. Examining university students’ behavioural intention to use e-learning during the COVID-19 pandemic: An extended TAM model. Educ. Inf. Technol. 2021, 26, 7057–7077. [Google Scholar] [CrossRef]
  60. Shah, S.; Diwan, S.; Kohan, L.; Rosenblum, D.; Gharibo, C.; Soin, A.; Sulindro, A.; Nguyen, Q.; Provenzano, D.A. The technological impact of COVID-19 on the future of education and health care delivery. Pain Physician 2020, 23, S367–S380. [Google Scholar] [CrossRef]
  61. Siraj, H.H.; Salam, A.; Roslan, R.; Hasan, N.A.; Jin, T.H.; Othman, M.N. Stress and Its Association with the Academic Performance of Undergraduate Fourth Year Medical Students at Universiti Kebangsaan Malaysia. IIUM Med. J. Malays. 2014, 13. [Google Scholar] [CrossRef]
  62. Wang, C.; Zhao, H. The impact of COVID-19 on anxiety in Chinese university students. Front. Psychol. 2020, 11, 1168. [Google Scholar] [CrossRef]
  63. Kaden, U. COVID-19 School Closure-Related Changes to the Professional Life of a K–12 Teacher. Educ. Sci. 2020, 10, 165. [Google Scholar] [CrossRef]
  64. Fuller, S.; Vaporciyan, A.; Dearani, J.A.; Stulak, J.M.; Romano, J.C. COVID-19 Disruption in Cardiothoracic Surgical Training: An Opportunity to Enhance Education. Ann. Thorac. Surg. 2020, 110, 1443–1446. [Google Scholar] [CrossRef] [PubMed]
  65. Basnet, S.; Basnet, H.B.; Bhattarai, D.K. Challenges and Opportunities of Online Education during COVID-19 Situation in Nepal. Rupantaran Multidiscip. J. 2021, 5, 89–99. [Google Scholar] [CrossRef]
  66. Tran, T.; Hoang, A.-D.; Nguyen, Y.-C.; Nguyen, L.-C.; Ta, N.-T.; Pham, Q.-H.; Pham, C.-X.; Le, Q.-A.; Dinh, V.-H.; Nguyen, T.-T. Toward Sustainable Learning during School Suspension: Socioeconomic, Occupational Aspirations, and Learning Behavior of Vietnamese Students during COVID-19. Sustainability 2020, 12, 4195. [Google Scholar] [CrossRef]
  67. Aziz, A.; Sohail, M. A bumpy road to online teaching: Impact of COVID-19 on medical education. Ann. King Edw. Med. Univ. 2020, 26, 181–186. [Google Scholar]
  68. Bisht, R.K.; Jasola, S.; Bisht, I.P. Acceptability and challenges of online higher education in the era of COVID-19: A study of students’ perspective. Asian Educ. Dev. Stud. 2020, 11, 401–414. [Google Scholar] [CrossRef]
  69. Clark, R.A.; Jones, D. A comparison of traditional and online formats in a public speaking course. Commun. Educ. 2001, 50, 109–124. [Google Scholar] [CrossRef]
  70. James, R. Tertiary student attitudes to invigilated, online summative examinations. Int. J. Educ. Technol. High. Educ. 2016, 13, 19. [Google Scholar] [CrossRef] [Green Version]
  71. Muthuprasad, T.; Aiswarya, S.; Aditya, K.; Jha, G.K. Students’ perception and preference for online education in India during COVID-19 pandemic. Soc. Sci. Humanit. Open 2021, 3, 100101. [Google Scholar] [CrossRef]
  72. Hazari, S.; Schnorr, D. Leveraging student feedback to improve teaching in web-based courses. Journal 1999, 26, 30–38. [Google Scholar]
  73. Muhammad, A.; Shaikh, A.; Naveed, Q.N.; Qureshi, M.R.N. Factors Affecting Academic Integrity in E-Learning of Saudi Arabian Universities. An Investigation Using Delphi and AHP. IEEE Access 2020, 8, 16259–16268. [Google Scholar] [CrossRef]
  74. Ogunode, N.J. Impact of COVID-19 on Private Secondary School Teachers in FCT, Abuja, Nigeria. Electron. Res. J. Behav. Sci. 2020, 3, 72–83. [Google Scholar]
  75. Adnan, M.; Anwar, K. Online Learning amid the COVID-19 Pandemic: Students’ Perspectives. J. Pedagog. Sociol. Psychol. 2020, 2, 45–51. [Google Scholar] [CrossRef]
  76. Zeeshan, M.; Chaudhry, A.G.; Khan, S.E. Pandemic preparedness and techno stress among faculty of DAIs in COVID-19. SJESR 2020, 3, 383–396. [Google Scholar] [CrossRef]
  77. Cassidy, S. Assessing ‘inexperienced’students’ ability to self-assess: Exploring links with learning style and academic personal control. Assess. Eval. High. Educ. 2007, 32, 313–330. [Google Scholar] [CrossRef]
  78. García, P.; Amandi, A.; Schiaffino, S.; Campo, M. Evaluating Bayesian networks’ precision for detecting students’ learning styles. Comput. Educ. 2007, 49, 794–808. [Google Scholar] [CrossRef]
  79. Huang, Y.-M.; Lin, Y.-T.; Cheng, S.-C. An adaptive testing system for supporting versatile educational assessment. Comput. Educ. 2009, 52, 53–67. [Google Scholar] [CrossRef]
  80. Lin, H.-F. An application of fuzzy AHP for evaluating course website quality. Comput. Educ. 2010, 54, 877–888. [Google Scholar] [CrossRef]
  81. Rasmitadila, R.; Aliyyah, R.R.; Rachmadtullah, R.; Samsudin, A.; Syaodih, E.; Nurtanto, M.; Tambunan, A.R.S. The Perceptions of Primary School Teachers of Online Learning during the COVID-19 Pandemic Period: A Case Study in Indonesia. J. Ethn. Cult. Stud. 2020, 7, 90–109. [Google Scholar] [CrossRef]
  82. Durbach, I. Scenario planning in the analytic hierarchy process. Futures Foresight Sci. 2019, 1, e1668. [Google Scholar] [CrossRef]
Figure 1. Overview of recent research on factors, alternatives and analysis techniques in online education.
Figure 2. Merging the TAM and DM for technology-enabled online education.
Figure 3. Research methodology.
Figure 4. The AHP decision hierarchy with screened alternatives and criteria.
Figure 5. Stakeholder-wise Computed Weights for the Influencing Criteria. C1 = Fairness, C2 = Accuracy, C3 = Costs, C4 = Convenience, C5 = Mental Health, C6 = Infrastructure Availability.
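The criteria weights reported in Figure 5 come from AHP-style pairwise comparisons made by each stakeholder group. As a rough, reproducible illustration of that prioritization step, the sketch below derives a weight vector and a consistency ratio from a hypothetical 6 × 6 comparison matrix using the common geometric-mean approximation; the matrix values are illustrative only and are not the judgment matrices collected in this study.

```python
import numpy as np

# Illustrative 6x6 pairwise-comparison matrix for criteria C1..C6
# (Fairness, Accuracy, Cost, Convenience, Mental Health, Infrastructure).
# These values are hypothetical, not the study's collected judgments.
A = np.array([
    [1,   2,   4,   3, 1/2,   2],
    [1/2, 1,   3,   2, 1/3,   1],
    [1/4, 1/3, 1, 1/2, 1/5, 1/3],
    [1/3, 1/2, 2,   1, 1/4, 1/2],
    [2,   3,   5,   4,   1,   3],
    [1/2, 1,   3,   2, 1/3,   1],
], dtype=float)

# Geometric-mean (approximate principal eigenvector) prioritization.
gm = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = gm / gm.sum()

# Consistency check: lambda_max from A.w, random index RI ~ 1.24 for n = 6.
n = A.shape[0]
lambda_max = (A @ weights / weights).mean()
CI = (lambda_max - n) / (n - 1)
CR = CI / 1.24
print(np.round(weights, 3), round(CR, 3))  # CR below 0.10 is conventionally acceptable
```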
Figure 6. Computed weights of policy alternatives. PV = Projects and Vivas, AMAS = Automated MCQs and Short Questions, EP = E-Proctored, OB = Open Book.
Figure 7. Original overall results (All stakeholders).
Figure 8. Performance Sensitivity Analysis of Alternatives. (a) With adjusted weight of Fairness (C1) equal to 60%. (b) With adjusted weight of Accuracy (C2) equal to 60%. (c) With adjusted weight of Cost (C3) equal to 60%. (d) With adjusted weight of Convenience (C4) equal to 60%. (e) With adjusted weight of Mental Health (C5) equal to 60%. (f) With adjusted weight of Availability of Infrastructure (C6) equal to 60%.
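For the curves in Figure 8, the weight of one criterion is set to 60% while the remaining weights are rescaled so that the vector still sums to one, and the alternatives are then re-ranked. The following sketch illustrates that adjustment; the aggregate criterion weights and the alternative-by-criterion scores are placeholders, not the values computed in this study.

```python
import numpy as np

criteria = ["C1", "C2", "C3", "C4", "C5", "C6"]
alternatives = ["PV", "AMAS", "EP", "OB"]

# Placeholder aggregate criteria weights (sum to one) and local priorities
# of the four alternatives under each criterion; illustrative values only.
w = np.array([0.18, 0.17, 0.10, 0.14, 0.25, 0.16])
S = np.random.default_rng(0).dirichlet(np.ones(4), size=6).T  # 4 alternatives x 6 criteria

def adjusted_weights(w, idx, new_weight=0.60):
    """Fix criterion idx at new_weight and rescale the other weights
    proportionally so the vector still sums to one."""
    w_adj = w * (1.0 - new_weight) / (w.sum() - w[idx])
    w_adj[idx] = new_weight
    return w_adj

for i, c in enumerate(criteria):
    scores = S @ adjusted_weights(w, i)                 # overall score per alternative
    ranking = [alternatives[j] for j in np.argsort(-scores)]
    print(f"{c} at 60%: {ranking}")
```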
Table 1. Developed research hypotheses.
Factors | Abb | Relationship | Hypothesis
Attitude | ATT | ATT -> E | H1: Attitude has an impact on the evaluation of technology-enabled online learning.
Computer Efficacy | CE | CE -> E | H2: Computer Efficacy has an impact on the evaluation of technology-enabled online learning.
Facilitating Conditions | FC | FC -> E | H3: Facilitating Conditions have an impact on the evaluation of technology-enabled online learning.
Information Quality | IQ | IQ -> IU | H4: Information Quality has an impact on Intention to Use.
Information Quality | IQ | IQ -> US | H5: Information Quality has an impact on User Satisfaction.
Intention to Use | IU | IU -> E | H6: Intention to Use has an impact on the evaluation of technology-enabled online learning.
Service Quality | SQ | SQ -> IU | H7: Service Quality has an impact on Intention to Use.
Service Quality | SQ | SQ -> US | H8: Service Quality has an impact on User Satisfaction.
System Quality | SYQ | SYQ -> IU | H9: System Quality has an impact on Intention to Use.
System Quality | SYQ | SYQ -> US | H10: System Quality has an impact on User Satisfaction.
Technological Anxiety | TA | TA -> E | H11: Technological Anxiety has an impact on the evaluation of technology-enabled online learning.
User Satisfaction | US | US -> E | H12: User Satisfaction has an impact on the evaluation of technology-enabled online learning.
Table 2. Demographic details of the respondents.
Variables | Indicators | Frequency | Percentage (%)
Gender | Female | 262 | 31.3
Gender | Male | 575 | 68.7
Institute type | Public | 681 | 81.4
Institute type | Private | 156 | 18.6
Table 3. SEM outputs.
Constructs | Indicators | Loadings (>0.50) | Cronbach’s Alpha (0.7–0.88) | R Square | Composite Reliability (>0.82) | Average Variance Extracted (>0.50)
ATT | ATT1 | 0.842 | 0.72 | – | 0.82 | 0.54
ATT | ATT2 | 0.578 | | | |
ATT | ATT3 | 0.647 | | | |
ATT | ATT4 | 0.837 | | | |
TA | TA1 | 0.909 | 0.7 | – | 0.86 | 0.75
TA | TA2 | 0.823 | | | |
CE | CE1 | 0.912 | 0.82 | – | 0.92 | 0.85
CE | CE2 | 0.929 | | | |
FC | FC1 | 0.892 | 0.7 | – | 0.87 | 0.77
FC | FC2 | 0.861 | | | |
IQ | IQ1 | 0.860 | 0.73 | – | 0.88 | 0.78
IQ | IQ2 | 0.910 | | | |
SYQ | SYQ1 | 0.898 | 0.7 | – | 0.87 | 0.76
SYQ | SYQ2 | 0.850 | | | |
SQ | SQ1 | 0.849 | 0.83 | – | 0.9 | 0.74
SQ | SQ2 | 0.885 | | | |
SQ | SQ3 | 0.854 | | | |
IU | IU1 | 0.865 | 0.72 | 0.291 | 0.88 | 0.78
IU | IU2 | 0.900 | | | |
US | US1 | 0.914 | 0.88 | 0.729 | 0.92 | 0.74
US | US2 | 0.838 | | | |
US | US3 | 0.916 | | | |
US | US4 | 0.766 | | | |
E | E1 | 0.886 | 0.85 | 0.879 | 0.9 | 0.69
E | E2 | 0.781 | | | |
E | E3 | 0.820 | | | |
E | E4 | 0.828 | | | |
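The last two columns of Table 3 follow the standard PLS-SEM formulas: composite reliability CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and average variance extracted AVE = Σλ²/n, where λ are the indicator loadings. The short check below recomputes both for the ATT construct from the loadings reported above; small discrepancies with the table are expected because the published figures are rounded.

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    """AVE = mean of the squared loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

att = [0.842, 0.578, 0.647, 0.837]  # ATT loadings reported in Table 3
print(round(composite_reliability(att), 2))       # ~0.82, matching the table
print(round(average_variance_extracted(att), 2))  # ~0.54, matching the table
```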
Table 4. Hypothesis-based decisions.
Hypothesis | Relationship | Std-Beta | Std-Error | t-Value | Decision | p Value
H1 | ATT -> E | 0.07 | 0.03 | 2.44 | Supported | 0.02
H2 | CE -> E | 0.08 | 0.03 | 2.5 | Supported | 0.01
H3 | FC -> E | 0.11 | 0.03 | 3.6 | Supported | 0
H4 | IQ -> IU | 0.17 | 0.05 | 3.26 | Supported | 0
H5 | IQ -> US | 0.21 | 0.03 | 6.07 | Supported | 0
H6 | IU -> E | 0.05 | 0.03 | 2.03 | Supported | 0.04
H7 | SQ -> IU | 0.2 | 0.06 | 3.52 | Supported | 0
H8 | SQ -> US | 0.29 | 0.04 | 7.25 | Supported | 0
H9 | SYQ -> IU | 0.22 | 0.05 | 4.32 | Supported | 0
H10 | SYQ -> US | 0.45 | 0.03 | 13.25 | Supported | 0
H11 | TA -> E | 0.02 | 0.02 | 1.13 | Not Supported | 0.26
H12 | US -> E | 0.65 | 0.03 | 20.4 | Supported | 0
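The Supported/Not Supported decisions in Table 4 are consistent with the conventional two-tailed cutoff of |t| ≥ 1.96 (i.e., p < 0.05); only H11 falls below it. A minimal check of that rule, using a few t-values copied from the table, is sketched below.

```python
rows = [
    ("H1", "ATT -> E", 2.44), ("H6", "IU -> E", 2.03),
    ("H11", "TA -> E", 1.13), ("H12", "US -> E", 20.4),
]

# Two-tailed decision at the 5% level: |t| >= 1.96 implies p < 0.05.
for hyp, rel, t in rows:
    decision = "Supported" if abs(t) >= 1.96 else "Not Supported"
    print(f"{hyp} ({rel}): t = {t:.2f} -> {decision}")
```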
Table 5. Screened factors influencing the prioritization of the assessment methods.
Factors | Description | Rationale | Evidence from Previous Studies on Education
Mental Health | Effect on the mental health of students and faculty during online assessments. | Because students and faculty are not yet attuned to online assessments, these may seriously affect their mental health and performance. | [46,47,57,58,59,60,61,62]
Cost | All types of costs associated with the different modes of conducting online exams. | The system should be cost-effective to ensure its sustainability; weaker financial circumstances make the cost factor more crucial in developing countries. | [12,16,17,58,63,64,65,66]
Convenience | The comfort level of students and faculty during online exam conduct. | Convenience always affects learning and assessment performance. | [67,68,69,70,71]
Integrity and Fairness | Maintaining exam integrity and preventing the use of unfair means. | It is always one of the crucial parameters for conducting any exam. | [32,33,72,73]
Accuracy | The adopted method must assess the student’s learning accurately. | Accuracy is always one of the most important factors in selecting an assessment method. | [66,74,75,76,77,78,79,80]
Availability of Infrastructure | Availability of facilities such as internet connectivity, personal computers, a proctoring system and other necessary gadgets. | Online exams cannot be conducted without sufficient technical infrastructure. | [2,12,16,56,57,58,81]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
