1. Introduction
The proliferation of computer applications has given rise to educational software, such as simulation packages deployed to complement teaching and research. Simulation packages have enhanced laboratory experiences, especially when dangerous chemicals are involved or when studying electrical phenomena. Deploying simulation software is cost-effective and helps compensate for the shortage of teachers [
1]. Simulation is a design process in which a system’s behavior is represented by a model that captures the design requirements through input and output behavior; the model may be a mathematical or a physical representation, such as a prototype. Simulation is used to represent system reality and to predict system behavior under various inputs and outputs. Simulation is also a formal method of summarizing the specification of a system [
2]. With augmented reality, these simulation models can be overlaid onto real lab equipment and spaces, allowing students to visualize the simulated phenomena in context [
3]. Students wearing AR headsets can see virtual electric currents overlaid on physical circuit boards or animated chemical reactions projected onto real beakers and test tubes.
The availability of electricity is the backbone of modern life; hence, power systems are an essential aspect of human life. Owing to its interconnectedness across long distances, the electricity grid is highly complex [
4]. Carrying out live, comprehensive experimental testing may be impossible on a power grid because of the safety of personnel and equipment. Shutting down electrical power systems for fault tracing or testing for a considerable time may negatively impact socio-economic activities. Therefore, simulation, especially real-time simulation, is a safer way to test power systems [
2]. Many simulation tools are available for studying and teaching power systems engineering. Choosing the one that suits a particular problem can be daunting, especially considering cost, user-friendliness, and what students can gain. Moreover, simulators often take a black-box approach, so understanding them is critical before application. In addition, no single simulation tool can answer all of a designer’s questions, especially in modern electrical power systems, which are complicated and dynamic. Therefore, in selecting the most preferred power system simulation tool for teaching, it is essential to consider various criteria to make an informed decision. This study attempts to determine the most preferred software package for instructing undergraduate power system modules using a multi-criteria decision-making method and expert opinions.
2. Literature Review
Power systems analysis and simulation can be conducted with various software tools. These tools are capable of processing structured instructions at various levels. Power systems analysis is integral to undergraduate education in electrical, electronics, and computer engineering programs. Utilizing these tools facilitates the precise and effective transmission of knowledge to students. Due to continuous advancements in technology and the competitive landscape among software vendors, numerous options are available for power systems analysis. Power systems analysis tools commonly used for teaching power systems simulations include the Electrical Transient Analyzer Program (ETAP), NEPLAN, PowerWorld, MATLAB, DIgSILENT, PSAT, PSCAD/EMTDC, and MATPOWER. Identifying the most suitable software solution entails assessing numerous criteria. This section examines prior research relevant to the objective of this paper. The literature examines several important attributes when evaluating software packages: assessment competence, usability, data storage quality, graphical interface, processing time, memory requirements, deployment ease, functionality, scalability, vendor support and training, long-term vendor viability, and cost. Various studies have been devoted to building efficient methods for ranking and selecting tools used in teaching modules at different levels in universities.
Ref. [
5] proposed a method that university-level renewable energy educators can use to choose an HRES simulation software for teaching and learning. The study utilizes multi-criteria decision-making methods to create a framework that combines fuzzy entropy and Fuzzy VIKOR methodologies. This framework is used to assess HRES simulation software, providing significant insights for educators and academics working on scaling renewable energy systems. The results show that Fuzzy VIKOR prefers the BCHP screening tool, COPRAS prefers HOMER, and RETScreen (VIKOR) and ORCED (COPRAS) are least preferred. These findings can help educators and academics find appropriate renewable energy system sizing and optimization tools.
Ref. [
6] developed a comprehensive metric for choosing supply chain management software. The authors investigated decision-support systems, software solutions, and the elements that affect the choice of IT applications. The study examined the components and evolution of supply chain management software and described the operation of various modules within supply chain packages. Furthermore, they proposed a percentage-based weighted tree technique for selecting appropriate supply chain solutions, providing significant insights for practitioners in the industry. Ref. [
7] proposed a systematic approach for defining specific criteria and sub-criteria for enterprise resource planning (ERP) software selection, particularly emphasizing manufacturing enterprises in developing nations. The study highlighted the limitations of existing research in connecting ERP software selection to real-world decision-making processes such as Multiple-Criteria Decision Making (MCDM) and Fuzzy MCDM. Through three interconnected phases, the study discovered suitable criteria from previous literature, validated them with expert feedback, and ranked them using a Fuzzy Analytic Hierarchy Process (FAHP) method. Security, investment, software features, maintainability, a support center, and report features have all been cited as essential criteria for selecting ERP software in manufacturing environments.
Ref. [
8] presented a technique for selecting a blockchain platform to construct enterprise solutions, recognizing the constraints given by the abundance of accessible platforms. Demand for varied industry applications increased as Blockchain 3.0 expanded beyond Bitcoin transactions. The methodology, which included four stages—identification, selection, evaluation, and validation—used a multi-criteria decision-making method such as the Simple Multi-Attribute Rating Technique (SMART) to choose a suitable platform. The detailed examination considered system architecture, tools, domain-specific applications, and capability analysis. Validation through the development of a blockchain-based enterprise solution demonstrated the methodology’s effectiveness and scalability, assisting stakeholders in selecting acceptable blockchain platforms. Ref. [
9] addressed the essential challenge of determining the best GIS software package for a project, referring to it as a multi-criteria decision-making (MCDM) problem. The success of a GIS project is significantly dependent on this decision, which requires considering a wide range of elements and balancing multiple goals. The study proposed the Analytic Hierarchy Process (AHP) as a solution, providing a structured approach to help system developers make educated judgments. The practicality of the AHP decision model is proven using a hypothetical case study, demonstrating its potential to streamline decision-making and speed up GIS software selection.
Ref. [
10] used a multi-criteria decision-making technique to evaluate the performance of computer programming languages (CPLs) in higher education. They acknowledge the challenge of selecting acceptable languages amid the proliferation of programming packages. They emphasize the necessity of minimizing learning time and effort while discussing the essential characteristics of programming languages and using the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). The study presents a Mathematica function implementing TOPSIS to assess traditional CPLs across seven criteria. It compares TOPSIS results with Analytic Hierarchy Process (AHP) methods, offering insights into educational software selection regarding technical characteristics and learning efficiency. Ref. [
11] examined the challenges of choosing advanced planning and scheduling (APS) software from a wide range of alternatives, focusing on the software’s critical role in resource allocation and its operational vulnerabilities. The study unveiled an innovative APS software selection methodology that integrates fuzzy quality function deployment (QFD), the analytic hierarchy process (AHP), and the VIKOR technique. By mapping APS criteria to company requirements, this methodical approach utilized triangular fuzzy numbers and the house of quality to reduce uncertainties. The simplicity and efficacy of the concept were demonstrated through its application to a case involving an aero-derivative gas turbine.
The literature review offers an in-depth assessment of the challenges and approaches involved in choosing software tools for different fields, such as renewable energy, supply chain management, enterprise resource planning, blockchain development, GIS, computer programming, and advanced planning and scheduling [
12,
13,
14]. Each study examines the intricacy of decision-making while choosing the most suitable software solution, presenting strategies that range from multi-criteria decision-making approaches to fuzzy analytic techniques. These approaches seek to increase effectiveness, decrease ambiguity in software selection, and streamline the decision-making process. This study will add to this body of knowledge by presenting a fuzzy multi-criteria method for selecting simulation software for teaching power systems at the undergraduate level. By incorporating existing research insights and leveraging literature methodologies, the paper aims to provide a structured and reproducible approach to guiding educators and decision-makers in selecting the most appropriate simulation software for teaching power systems, ultimately improving the learning experience and effectiveness of power systems education at the undergraduate level.
Contributions to Knowledge
The importance of selecting suitable software for undergraduate power systems analysis instruction in developing countries cannot be overemphasized. Universities in these nations often face resource constraints, and the allocation of funds and technology is crucial for maximizing educational impact. Although MCDM techniques have been extended to software selection in many domains, little systematic evaluation has been carried out of the software alternatives for teaching power systems. This research fills this gap using the Fuzzy-ARAS and Fuzzy-TOPSIS methods to rank power systems education software. An important contribution of this work is that it identifies and defines 12 evaluation criteria based on expert domain input. These 12 criteria comprehensively cover the key considerations in selecting educational power systems software: technical capabilities, usability, vendor support, and cost. This research employs two fuzzy MCDM techniques, ARAS and TOPSIS, to rank the software. To ensure the robustness and reliability of the results, we introduce a new index, the ‘combined rank coefficient,’ which harmonizes the rankings obtained from these two techniques. This dual methodology and the rank-combination technique enhance the credibility of this study’s findings. This study offers valuable insights and clear directions, presenting a fresh, evidence-based framework for power systems educators and program administrators. This framework aids in selecting the most suitable software based on their specific needs and priorities, thereby enhancing power system pedagogy and educational outcomes in a more practical and effective manner.
3. Methodology
Figure 1 shows the framework used in this research; the suggested framework begins by identifying the relevant power systems simulation software used for teaching in Nigerian universities. An extensive survey is conducted among the relevant stakeholders to identify the tools and important criteria to consider when selecting the tools. The next step is to build a fuzzy-based system and specify the importance of the criteria using linguistic expressions. After that, a questionnaire is developed so that experts can evaluate the identified software based on the previously established linguistic terms.
Existing fuzzy multi-criteria decision-making (MCDM) techniques provide a variety of strategies for dealing with the uncertainties and imprecisions inherent in decision-making processes. The Fuzzy Analytic Hierarchy Process (FAHP) is a method that facilitates hierarchical decision-making by enabling decision-makers to articulate their preferences using language concepts [
15]. This enhances the comprehensibility of the outcomes [
16]. Another method, the Fuzzy Technique for Order Preference by Similarity to Ideal Solution (Fuzzy-TOPSIS), allows for an in-depth evaluation of alternatives by considering both positive and negative aspects [
17]. This approach provides valuable insights into the relative performance of alternatives in situations of uncertainty. The Fuzzy Additive Ratio Assessment approach (Fuzzy-ARAS) enables an objective assessment of alternatives by considering their relative ratios. This method offers an explicit assessment procedure that considers uncertainties [
18].
Meanwhile, Measurement of Alternatives and Ranking according to COmpromise Solution (MARCOS) is an approach that combines ratio comparison and optimization techniques [
19]; it enables decision-makers to balance conflicting objectives successfully. The Multi-Attribute Border Approximation Area Comparison (MABAC) method provides a dynamic strategy to deal with incomplete and inaccurate information by estimating the limits of the decision space, enabling robust decision making in uncertain environments [
20]. The Multi-Attribute Ideal-Real Comparative Analysis (MAIRCA) offers an extensive structure for evaluating alternatives based on ideal and real benchmarks [
21]. This allows decision-makers to examine the relative performance of alternatives in dynamic choice situations. These fuzzy MCDM techniques provide decision-makers with useful tools for navigating challenging decision-making environments. However, their effective application requires careful consideration of their advantages and limitations. Each technique has its unique strengths and drawbacks, as outlined in
Table 1.
Two MCDM methods, namely the Additive Ratio Assessment System (ARAS) and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), were used to rank the software, and the results were compared. The Fuzzy-ARAS and Fuzzy-TOPSIS approaches were selected to evaluate the most suitable power system software based on their technical capabilities, which aligned with this study’s specific objectives. Specifically, the utilization of fuzzy set theory played a crucial role in enabling these strategies to effectively manage the uncertainty and imprecision that naturally arise from expert criteria weights and software ratings [
24,
25]. Furthermore, Fuzzy-ARAS and Fuzzy-TOPSIS were selected because they accommodate both quantitative and qualitative criteria and offer a demonstrated capacity to rank alternatives, simple computational processes, and successful prior implementations in software evaluation problems. Additionally, using two complementary methodologies enabled a comparison of outcomes and yielded a further understanding of software preferences.
More recently, the Fuzzy-TOPSIS method has been successfully used in the selection of suppliers for speech recognition products in IT projects by combining techniques [
26], the evaluation and selection of open-source EMR software packages [
27], analysis of agricultural production [
28,
29], and the selection of wind turbine sites [
30]. Also, various researchers have adopted TOPSIS for conveyor equipment evaluation and selection [
24,
31], the evaluation of mobile banking services [
32], the selection of vendors for wind farms [
33], and sustainable recycling partner selection [
34]. The successful prior implementations of Fuzzy-TOPSIS and Fuzzy-ARAS in various software evaluation challenges, as evidenced by their use in supplier selection, analysis of agricultural production, wind turbine site selection, and other domains, underscore their suitability for the task at hand.
To leverage the strengths of each method, this study developed a combined ranking coefficient that harmonizes the differences in rankings from Fuzzy-ARAS and Fuzzy-TOPSIS [
24,
25]. This coefficient was derived by averaging the normalized outcomes from both methods, ensuring that the final ranking captures the balanced performance highlighted by Fuzzy-ARAS and the ideal-solution-oriented assessment of Fuzzy-TOPSIS. By merging these two techniques, the combined coefficient provides a more dependable and thorough ranking of the alternatives. This approach to combining rankings reduces the potential inconsistencies that can occur when relying on a single method and enhances the overall reliability of the study. The outcome is a more robust decision-making framework that considers both the overall performance and the unique advantages of each software alternative.
3.1. Fuzzy Additive Ratio Assessment (Fuzzy-ARAS)
The Fuzzy-ARAS assessment method is a multi-criteria decision method used for ranking different alternatives. The decision process is based on different experts’ opinions, and the aggregation of these opinions is used to determine the respective criterion weights and performance ratings for optimal decision making. A substantial body of literature on Fuzzy-ARAS is available [
31,
35,
36,
37,
38,
39,
40]. To perform a multi-criteria analysis using the Fuzzy-ARAS method, the following steps were followed:
Step 1: Decide the linguistic variables for the criteria weights and performance ratings.
Step 2: Convert the experts’ opinions into interval-valued triangular fuzzy numbers:
$\tilde{x}_{ij} = \frac{1}{K}\sum_{k=1}^{K}\tilde{x}_{ij}^{k}$
where $\tilde{x}_{ij}$ denotes the corresponding interval-valued triangular fuzzy number, $\tilde{x}_{ij}^{k}$ denotes the triangular fuzzy number obtained based on the opinion of the $k$-th participant (decision-maker), and $K$ is the number of participants.
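Step 2 can be illustrated with a short sketch; plain (rather than interval-valued) triangular fuzzy numbers are used for simplicity, and the expert ratings shown are hypothetical values already mapped from linguistic terms.

```python
# Minimal sketch (assumed representation): average K experts' triangular fuzzy
# numbers (l, m, u) component-wise to obtain the aggregate fuzzy rating of one
# alternative on one criterion.

def aggregate_tfns(tfns):
    """Component-wise average of a list of triangular fuzzy numbers."""
    k = len(tfns)
    return tuple(sum(t[c] for t in tfns) / k for c in range(3))

# Three hypothetical experts' ratings, already converted from linguistic terms
expert_tfns = [(5, 7, 9), (3, 5, 7), (7, 9, 10)]
agg = aggregate_tfns(expert_tfns)  # one aggregated TFN
```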
Step 3: Form the decision-making matrix:
$\tilde{X} = [\tilde{x}_{ij}]_{m \times n}$
where $\tilde{X}$ is the decision matrix, $\tilde{x}_{ij}$ is the performance rating of the $i$-th alternative with respect to the $j$-th criterion, and $j = 1, 2, \dots, n$, where $n$ is the number of criteria.
The criteria weights form the weight vector
$\tilde{W} = [\tilde{w}_{j}]_{1 \times n}$
where $\tilde{W}$ is the weight vector, $\tilde{w}_{j}$ is the weight of the $j$-th criterion, and $j = 1, 2, \dots, n$, where $n$ is the number of criteria.
Here, $\tilde{x}_{ij}^{k}$ is the performance rating of the $i$-th alternative with respect to the $j$-th criterion given by the $k$-th decision-maker, $k = 1, 2, \dots, K$, where $K$ is the number of decision-makers and/or experts in the Multi-Criteria Group Decision Making (MCGDM) process.
Step 4: Determine the optimal performance rating for each criterion:
$\tilde{x}_{0j} = \begin{cases} \max_{i} \tilde{x}_{ij}, & j \in \Omega_{\max} \\ \min_{i} \tilde{x}_{ij}, & j \in \Omega_{\min} \end{cases}$
where $\tilde{x}_{0j}$ is the optimal performance rating of the $j$-th criterion, $\Omega_{\max}$ is the set of benefit criteria (the higher the value, the better), and $\Omega_{\min}$ is the set of non-beneficial criteria (the lower the value, the better). If a criterion is beneficial, the maximum is chosen; if it is non-beneficial, the minimum is chosen.
Step 5: Normalize the decision-making matrix.
For the benefit criteria ($j \in \Omega_{\max}$):
$\tilde{r}_{ij} = \frac{\tilde{x}_{ij}}{\sum_{i=0}^{m} \tilde{x}_{ij}}$
The preferable criteria whose values are minima ($j \in \Omega_{\min}$) are normalized through a two-stage process:
$\tilde{x}_{ij}^{*} = \frac{1}{\tilde{x}_{ij}}, \qquad \tilde{r}_{ij} = \frac{\tilde{x}_{ij}^{*}}{\sum_{i=0}^{m} \tilde{x}_{ij}^{*}}$
where $\tilde{r}_{ij}$ is the normalized interval-valued fuzzy performance rating of the $i$-th alternative in relation to the $j$-th criterion, $i = 0, 1, \dots, m$.
Step 6: Weight the interval-valued normalized fuzzy decision matrix:
$\tilde{v}_{ij} = \tilde{w}_{j} \cdot \tilde{r}_{ij}$
where $\tilde{v}_{ij}$ is the weighted normalized interval-valued fuzzy performance rating of the $i$-th alternative in relation to the $j$-th criterion, $i = 0, 1, \dots, m$.
Step 7: Determine the overall interval-valued fuzzy performance rating:
$\tilde{S}_{i} = \sum_{j=1}^{n} \tilde{v}_{ij}$
where $\tilde{S}_{i}$ is the overall interval-valued fuzzy performance rating of the $i$-th alternative, $i = 0, 1, \dots, m$.
Step 8: Carry out defuzzification of $\tilde{S}_{i}$ to obtain the crisp score $S_{i}$, where $\tilde{S}_{i}$ is an interval-valued triangular fuzzy number of the form $\tilde{S}_{i} = \langle (s_{i1}, s_{i2}, s_{i3}), (s_{i1}^{\prime}, s_{i2}, s_{i3}^{\prime}) \rangle$.
Step 9: Determine the degree of utility $Q_{i}$ for each of the alternatives:
$Q_{i} = \frac{S_{i}}{S_{0}}, \quad i = 1, 2, \dots, m$
where $S_{0}$ is the defuzzified overall performance rating of the optimal alternative.
Step 10: Rank the alternatives based on the defuzzified utility values; the higher the degree of utility, the more preferred the alternative.
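A minimal end-to-end sketch of Steps 3–10 is given below, assuming plain triangular fuzzy numbers, two benefit criteria, and centroid defuzzification (one common choice, not necessarily the exact formula used in this study); all ratings and weights are hypothetical.

```python
# Illustrative Fuzzy-ARAS chain for 2 alternatives x 2 benefit criteria.
# TFNs are (l, m, u); all numbers are hypothetical.

def defuzzify(t):
    """Centroid defuzzification of a triangular fuzzy number."""
    return sum(t) / 3

X = [
    [(5, 7, 9), (3, 5, 7)],    # alternative 1 ratings per criterion
    [(3, 5, 7), (7, 9, 10)],   # alternative 2 ratings per criterion
]
W = [(5, 7, 9), (7, 9, 10)]    # fuzzy criterion weights
benefit = [True, True]         # both criteria are benefit-type here

# Step 4: optimal rating per criterion (max for benefit, min for cost)
x0 = [max((row[j] for row in X), key=defuzzify) if benefit[j]
      else min((row[j] for row in X), key=defuzzify) for j in range(len(W))]
rows = [x0] + X                # optimal alternative prepended as row 0

# Steps 5-7: normalize by column sums, weight, and sum over criteria
S = []
for row in rows:
    total = (0.0, 0.0, 0.0)
    for j, x in enumerate(row):
        col_sum = [sum(r[j][c] for r in rows) for c in range(3)]
        # fuzzy division flips bounds: l / u_sum, m / m_sum, u / l_sum
        v = tuple(x[c] * W[j][c] / col_sum[2 - c] for c in range(3))
        total = tuple(total[c] + v[c] for c in range(3))
    S.append(total)

S_crisp = [defuzzify(s) for s in S]          # Step 8: defuzzify
Q = [s / S_crisp[0] for s in S_crisp[1:]]    # Step 9: utility vs optimal S0
ranking = sorted(range(len(Q)), key=lambda i: Q[i], reverse=True)  # Step 10
```

With these hypothetical inputs, each alternative's degree of utility lies below 1, because the prepended optimal row dominates on both criteria.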
Fuzzy Technique for Order of Preference by Similarity to Ideal Solution (Fuzzy-TOPSIS) is a multi-criteria decision-making (MCDM) method used to evaluate and rank alternatives based on multiple criteria [
41]. It is an extension of the well-known Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) method, which uses fuzzy logic to deal with uncertainty and imprecision in the evaluation data. In the Fuzzy-TOPSIS method, the criteria are first converted into fuzzy numbers to represent their relative importance. The performance of each alternative is then rated on each criterion using fuzzy numbers to represent the degree to which the alternative satisfies each criterion. The similarity between each alternative and the ideal solution (the best alternative) and the worst solution (the worst alternative) is then calculated using fuzzy set theory. The final ranking of the alternatives is based on the distance between each alternative and the ideal solution, with the closest alternative being ranked as the best. The Fuzzy-TOPSIS method comprehensively evaluates the alternatives based on multiple criteria and considers the uncertainty and imprecision in the evaluation data [
42].
The Fuzzy-TOPSIS method has been applied in various fields, such as engineering, environmental management, and the social sciences, and has been found to be an effective tool for solving complex decision-making problems. A multi-criteria analysis using the Fuzzy-TOPSIS method proceeds through the following steps [
29]:
Step 1: Define the problem and decision criteria.
Define the problem and the criteria that are relevant to the decision-making process.
Step 2: Develop the evaluation matrix.
Create a matrix $\tilde{X} = [\tilde{x}_{ij}]_{m \times n}$ that contains the fuzzy numbers $\tilde{x}_{ij} = (l_{ij}, m_{ij}, u_{ij})$ representing the performance of each alternative on each criterion.
Step 3: Normalize the evaluation matrix.
Normalize the evaluation matrix criterion by criterion; for a benefit criterion, divide each fuzzy value by the largest upper bound in that criterion’s column: $\tilde{r}_{ij} = \left(\frac{l_{ij}}{u_{j}^{*}}, \frac{m_{ij}}{u_{j}^{*}}, \frac{u_{ij}}{u_{j}^{*}}\right)$, where $u_{j}^{*} = \max_{i} u_{ij}$.
Step 4: Assign criteria weights.
Assign a weight to each criterion to represent its relative importance. The weights can be assigned as fuzzy numbers, $\tilde{W} = [\tilde{w}_{1}, \tilde{w}_{2}, \dots, \tilde{w}_{n}]$.
Step 5: Weighted normalization of the evaluation matrix.
Multiply the normalized evaluation matrix by the criteria weights to obtain the weighted normalized matrix: $\tilde{v}_{ij} = \tilde{r}_{ij} \otimes \tilde{w}_{j}$.
Step 6: Identify ideal and negative-ideal solutions.
Identify the fuzzy positive-ideal solution $A^{+} = (\tilde{v}_{1}^{+}, \dots, \tilde{v}_{n}^{+})$, where $\tilde{v}_{j}^{+} = \max_{i} \tilde{v}_{ij}$, and the fuzzy negative-ideal solution $A^{-} = (\tilde{v}_{1}^{-}, \dots, \tilde{v}_{n}^{-})$, where $\tilde{v}_{j}^{-} = \min_{i} \tilde{v}_{ij}$, from the weighted normalized matrix.
Step 7: Perform distance calculation.
Calculate the distance of each alternative from the ideal and negative-ideal solutions: $d_{i}^{+} = \sum_{j=1}^{n} d(\tilde{v}_{ij}, \tilde{v}_{j}^{+})$ and $d_{i}^{-} = \sum_{j=1}^{n} d(\tilde{v}_{ij}, \tilde{v}_{j}^{-})$, where $d(\cdot,\cdot)$ denotes the distance between two fuzzy numbers.
Step 8: Perform relative closeness calculation.
Calculate the relative closeness of each alternative to the ideal solution: $CC_{i} = \dfrac{d_{i}^{-}}{d_{i}^{+} + d_{i}^{-}}$.
Step 9: Determine the final ranking.
Rank the alternatives in decreasing order of their relative closeness to the ideal solution.
Step 10: Interpret the results.
Finally, interpret the results of the analysis and make a decision based on the rankings.
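The steps above can be sketched end-to-end as follows, assuming plain triangular fuzzy numbers, benefit criteria only, and the commonly used vertex distance for $d(\cdot,\cdot)$; all ratings and weights are hypothetical.

```python
# Illustrative Fuzzy-TOPSIS chain for 2 alternatives x 2 benefit criteria.
# TFNs are (l, m, u); all numbers are hypothetical.
import math

def dist(a, b):
    """Vertex distance between two triangular fuzzy numbers (l, m, u)."""
    return math.sqrt(sum((a[c] - b[c]) ** 2 for c in range(3)) / 3)

X = [
    [(5, 7, 9), (3, 5, 7)],    # alternative 1 ratings per criterion
    [(3, 5, 7), (7, 9, 10)],   # alternative 2 ratings per criterion
]
W = [(5, 7, 9), (7, 9, 10)]    # fuzzy criterion weights
n_alt, n_crit = len(X), len(W)

# Steps 2-5: normalize each column by its largest upper bound, then weight
V = []
for row in X:
    v_row = []
    for j, x in enumerate(row):
        u_max = max(r[j][2] for r in X)
        v_row.append(tuple(x[c] * W[j][c] / u_max for c in range(3)))
    V.append(v_row)

# Step 6: fuzzy positive- and negative-ideal solutions per criterion
fpis = [max((V[i][j] for i in range(n_alt)), key=lambda t: t[2])
        for j in range(n_crit)]
fnis = [min((V[i][j] for i in range(n_alt)), key=lambda t: t[0])
        for j in range(n_crit)]

# Steps 7-8: distances to the ideals and the closeness coefficient
CC = []
for i in range(n_alt):
    d_pos = sum(dist(V[i][j], fpis[j]) for j in range(n_crit))
    d_neg = sum(dist(V[i][j], fnis[j]) for j in range(n_crit))
    CC.append(d_neg / (d_pos + d_neg))

ranking = sorted(range(n_alt), key=lambda i: CC[i], reverse=True)  # Step 9
```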
3.2. Empirical Illustrations
To illustrate the efficacy of the proposed framework in selecting the most preferred software for teaching power systems analysis at the undergraduate level, eight popular software packages used in teaching power systems simulations were selected and compared against 12 criteria.
Figure 2 shows the hierarchical structure of the MCDM problem to be solved.
3.3. Software Selection Criteria
The criteria selected for this study were based on insights from the literature. Ref. [
5] placed importance on criteria related to evaluation capacity (assessment competence), ease of deployment (implementation ease), memory requirements, storage quality, graphical interface, computational or processing time, and usability (ease of use). Ref. [
43] emphasized scalability as an important criterion when selecting software. Ref. [
27] also validated the importance of functionality, support and training, and vendor viability, while Ref. [
44] stressed the need to include the cost criterion when evaluating software selection.
The primary criteria for assessing educational software solutions are educational quality, which evaluates how well the software supports student learning; usability, which assesses how user-friendly the interfaces and workflows are for learners; and technical capability, encompassing aspects like memory requirements, storage capacity, and scalability. Others include performance, particularly the application’s processing speed and responsiveness; capability, or the range of features and functions offered; vendor quality, which considers the quality of training and support offered by the vendor as well as the viability of the vendor; and cost regarding the software’s affordability and educational value proposition. Technical capability, performance, and capability cover complementary aspects of evaluating educational software. Technical capability refers to the underlying technical infrastructure that enables software functionality. Performance focuses on processing speed and how quickly the software can deliver results, while capability relates to the breadth and depth of features and functions provided by the software. Based on this, technical capability provides the technical foundations, performance measures speed, and capability assesses the functional scope. These seven categories, taken together, offer a thorough framework for evaluating the merits and demerits of any particular educational software platform in terms of its influence on education, ease of use, technical underpinnings, speed, functionality, vendor standing, and affordability.
Assessment Competence (C1): The kind of scrutiny/evaluation the software can handle while impacting a wide range of knowledge in the students.
Usability (C2): How fast and easily students will be able to use the software in the learning procedure.
Storage Quality (C3): How much capacity the solution has to store information without crashing or slowing down the learning process.
Graphical Interface (C4): How simple and presentable the software’s interfaces are, and how well they handle charts and complex simulation results in a student-friendly manner.
Process Time (C5): How fast the software can process power systems solutions for large datasets.
Memory Prerequisites (C6): This considers the memory size required to install the software solution and process study-related tasks for the students.
Ease of Deployment (C7): How easily the software and its simulation results can be deployed and implemented.
Functionality (C8): How many functions are available to complement the power system knowledge required by the students.
Scalability (C9): How scalable the software is in multisystem and advanced environments where multiple instances of the system are required.
Support and Training (C10): How much support is available from the software vendors and trainer program availability for training.
Vendor Viability (C11): How sustainable the vendor of the software system is.
Cost (C12): How cost-effective the software is to be affordable for educational purposes.
3.4. Expert Opinion Collection
A questionnaire was designed to collect the experts’ opinions on the weights of the selected criteria and on the ranking of the software against those criteria. All of the experts consulted have more than 5 years of experience; four hold a doctoral degree in electrical engineering, while one holds a master’s degree in electrical engineering. The questionnaire was divided into two sections: section one focuses on obtaining experts’ opinions on software solutions for power systems analysis (regarding their adoption for teaching at the undergraduate level) against the criteria, while the second section consists of questions rating the importance of each criterion in the selection of power systems analysis software for teaching and learning. The scale used for obtaining the experts’ opinions was a 10-point linguistic scale, as shown in
Figure 3.
This study followed ethical guidelines; participants were made aware that their involvement was voluntary and that their answers would be used only for research purposes, with all data kept anonymous. By returning the completed questionnaires, participants implied their consent. Since the study did not include any physical or psychological interventions, formal ethical approval from the University or the Ministry of Education was not required. The study adhered to the principles outlined in the Declaration of Helsinki and complied with relevant data protection regulations to ensure the confidentiality and proper management of all data.
4. Results and Discussion
A survey of academics across Nigerian institutions shows that the most common software packages that instructors use in teaching power systems analysis include ETAP (M1), NEPLAN (M2), PowerWorld (M3), MATLAB (M4), DIgSILENT (M5), PSAT (M6), PSCAD/EMTDC (M7), and MATPOWER (M8). To select the criteria, software and power systems engineers in industry and academia were consulted; they suggested that the most relevant criteria for the present study are assessment competence (C1), usability (C2), storage quality (C3), graphical interface (C4), process time (C5), memory prerequisites (C6), ease of deployment (C7), functionality (C8), scalability (C9), support and training (C10), vendor viability (C11), and cost (C12). These were discussed briefly in Section 3.3. According to the experts’ responses, validating the software used to teach power systems analysis at the undergraduate level is essential. The experts also agreed that various criteria should be considered when selecting software solutions for power systems analysis. The opinions of the five experts on the importance of the 12 criteria are presented in
Table 2 and
Table 3, while the opinions of the experts on the rank of the software based on the selected criteria are presented in
Appendix A.
Following steps 1–7 of the ARAS method, the overall interval-valued fuzzy performance rating was obtained and is given in
Table 4. Defuzzification is then performed on these values to obtain the crisp overall performance ratings $S_{i}$, which are given in Table 5.
By dividing the value of $S_{i}$ for each alternative by $S_{0}$ (the score of the optimal alternative), the degree of utility $Q_{i}$ is obtained for each alternative. The $Q_{i}$ values are used in ranking the alternatives: the alternative with the lowest value is taken as the least preferred, while the one with the highest value is taken as the most preferred.
From the ARAS analysis, MATLAB emerges as the software that the experts consider most suitable for teaching power systems analysis, while the least preferred is MATPOWER. The TOPSIS method was also applied to the responses obtained from the experts, and it produced a result different from that of the ARAS method: the most preferred software is MATPOWER, while the least preferred is NEPLAN (
Figure 4). The use of Fuzzy-ARAS and Fuzzy-TOPSIS methods led to varying rankings for the software alternatives assessed, which might prompt questions about the consistency of the methodology. However, this variation is not a drawback; it highlights the distinct advantages and focus of each approach. Fuzzy-ARAS offers a thorough evaluation by computing additive ratios, allowing for a well-rounded comparison of each alternative across all criteria. On the other hand, Fuzzy-TOPSIS assesses alternatives based on their closeness to an ideal solution, giving greater weight to how close each alternative is to optimal performance for each criterion. The differences in rankings between the two methods emphasize the need to consider various viewpoints when making decisions, especially in complex scenarios like selecting software for educational use. While Fuzzy-ARAS offers a comprehensive perspective by consolidating all criteria into a single metric, Fuzzy-TOPSIS ensures that options excelling in specific key areas are not missed.
To bridge the gap between these two methods and enhance the study's reliability, a combined ranking coefficient was introduced. This coefficient reconciles the results of both methods by averaging their normalized scores, so that the final ranking reflects both the balanced performance captured by Fuzzy-ARAS and the ideal-solution-focused evaluation provided by Fuzzy-TOPSIS. Integrating the two methods yields a more thorough and nuanced assessment and minimizes the risk of any single method's limitations or biases disproportionately influencing the outcome: alternatives that perform consistently well across all criteria and those that excel in particular aspects are both adequately represented in the final rankings.
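The averaging of normalized scores can be sketched as below. The exact form of the combined coefficient is the paper's Equation (39); the sum-to-one normalization used here is one common choice and an assumption on our part, and the K_i and CC_i values are hypothetical.

```python
# Hypothetical ARAS utility degrees (K_i) and TOPSIS closeness coefficients (CC_i).
K  = {"A1": 0.72, "A2": 0.51, "A3": 0.83}
CC = {"A1": 0.63, "A2": 0.41, "A3": 0.50}

def normalize(scores):
    """Divide each score by the column total so the values sum to 1 (assumed form)."""
    total = sum(scores.values())
    return {a: s / total for a, s in scores.items()}

nK, nCC = normalize(K), normalize(CC)

# Combined ranking coefficient: average of the two normalized scores per alternative.
combined = {a: (nK[a] + nCC[a]) / 2 for a in K}
ranking = sorted(combined, key=combined.get, reverse=True)
print(ranking)
```

Note how the combination can overturn either individual ranking: here A3 leads under ARAS and A1 under TOPSIS, and the averaged coefficient arbitrates between them.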
From the detailed results of the ARAS and TOPSIS methods (Table 5), Equation (39) combines the ARAS degrees of utility K_i and the TOPSIS closeness coefficients CC_i into the proposed coefficient for merging the ranks of the two methods. This new coefficient again returns MATLAB as the most preferred alternative for teaching power systems courses, while NEPLAN is the least preferred alternative (Figure 5).
This study’s results are similar to what other research has reported on the ranking and selection of software using MCDM methods, although there are some noticeable differences arising from the specific applications and methods used. The result of this study agrees with Refs. [6,7,10] on the importance of technical capabilities, usability, and vendor support as key criteria for software selection. Although those studies focused on different areas, such as ERP systems and computer programming languages, they all reported that technical features, user-friendliness, and vendor factors are crucial in choosing software. Additionally, this study supports the use of structured MCDM approaches for decision making in software selection, which aligns with the findings of Refs. [8,9,11]. Although these studies addressed different application areas, they show that fuzzy MCDM methods are practical for weighing various factors and reducing uncertainty in the selection process.
Nonetheless, there are distinctions between this research and the reviewed literature. The MCDM methods employed in this paper (ARAS and TOPSIS) differ from those used in some of these studies: for example, Ref. [5] utilized Fuzzy VIKOR and COPRAS, while Ref. [9] used AHP. This disparity in methods may lead to different software being ranked highest in different studies. In addition, the application context of the present study (educational systems in electrical engineering) differs from those covered by the other works under review, such as sustainable energy [5], ERPs [6,7], and blockchain technology [8]. This difference in domains may lead to variations in the specific software alternatives considered and in the prioritization of certain criteria.
Despite these differences, the current study makes novel contributions to the existing body of knowledge on software selection using MCDM methods. To the best of the authors’ knowledge, this study is the first to apply MCDM methods, specifically ARAS and TOPSIS, to select software for power systems education. This extends the application of these MCDM techniques to a new domain and provides valuable insights for educators and decision-makers in this field. Furthermore, the introduction of a combined rank coefficient to reconcile the rankings obtained from ARAS and TOPSIS is a methodological innovation not observed in the reviewed literature. This approach offers a novel way to synthesize results from multiple MCDM methods and enhance the robustness of the software selection process.
5. Conclusions
This study used the Fuzzy-ARAS method to compare and evaluate various power systems analysis software for undergraduate instruction. A framework combining the Fuzzy-ARAS (Fuzzy Additive Ratio Assessment) approach and expert judgments was used to choose the most suitable software. Eight software products were evaluated using twelve criteria, including assessment competency, usability, storage quality, graphical interface, process time, memory requirements, ease of deployment, functionality, scalability, support and training, vendor viability, and cost.
This study makes notable contributions to the literature on power systems education and MCDM applications. First, to the best of our knowledge, it represents the first systematic application of fuzzy MCDM methods to evaluate and rank software alternatives for teaching power systems courses; the rigorous evaluation approach, based on expert-defined criteria, provides a new methodological template for software selection in this domain. Second, by employing two different fuzzy MCDM methods (ARAS and TOPSIS) and proposing a novel ‘combined rank coefficient,’ this study demonstrates a robust and reliable approach to synthesizing rankings from multiple MCDM techniques. This methodological innovation enhances the trustworthiness of our software rankings.
Significantly, this work provides a practical, evidence-based framework for power systems educators and program leaders. By identifying the top-ranking software across a range of key criteria, these findings empower educational institutions to select tools that optimally support student learning, align with instructor priorities, and fit within budget constraints. This contribution has the potential to improve the quality and outcomes of power systems education substantially. This study builds upon existing MCDM methods, applying them in a novel context with a new set of expert-defined criteria. It also introduces a new rank combination approach, providing robust, practically useful power systems education software rankings. These contributions significantly advance the understanding of best practices in power systems pedagogy and underscore the value of MCDM methods for educational technology selection.
The results of the Fuzzy-ARAS method were compared with the TOPSIS method. The following are the study’s main findings:
Given its effectiveness and adaptability for educational purposes, MATLAB emerged as the most suitable software for teaching power systems analysis at the undergraduate level in Nigeria. NEPLAN was rated as the least preferred software, implying that it may struggle to meet the standards and needs of effective teaching and learning in the Nigerian academic context.
In the area of power systems analysis education, the Fuzzy-ARAS method established its effectiveness in handling multi-criteria decision-making problems and provided a systematic methodology for software selection.
The study demonstrated the potential for inconsistencies in rankings generated by different evaluation techniques, as seen in the discrepancy between the Fuzzy-ARAS and Fuzzy-TOPSIS rankings. This emphasizes how important it is to consider multiple evaluation techniques to understand software performance thoroughly.
Furthermore, this study emphasizes the advantage of utilizing multiple methodologies to assess complex decision-making situations. Applying both Fuzzy-ARAS and Fuzzy-TOPSIS allowed the software options to be evaluated from two complementary angles: one that considers overall performance across all criteria and another that focuses on proximity to an ideal solution. The combined ranking coefficient, which merges the outcomes of the two methods, adds a further layer of robustness to the evaluation and ensures that the final rankings are not disproportionately affected by the limitations or biases of any single approach. Consequently, educators and decision-makers can be more confident in choosing the most appropriate software for teaching power systems analysis, as the ranking reflects both balanced performance and alignment with the ideal solution.
The findings of this study can help educators, universities, and curriculum developers choose appropriate software for teaching power systems analysis. By considering the preferences and opinions of experts and applying the evaluation criteria, institutions can improve the standard of undergraduate instruction in power systems analysis, leading to enhanced learning outcomes and better preparing students for real-world challenges in the field.