1. Introduction
With the rapid development of the digital economy, how to cultivate composite talents with both technical ability and humanistic literacy has become a key focus of higher education reform. In this context, the concept of new liberal arts construction has gradually been introduced into university curriculum reform and has been deeply applied, especially in the core courses of digital economy-related majors [1]. As one of the core courses in the digital economy major, “big data technology and applications” not only undertakes the task of cultivating students’ theoretical and applied abilities in big data technology but also requires students to balance social responsibility and ethical thinking in technical practice. The important goal of teaching reform is to achieve interdisciplinary integration of course content and enhance students’ innovative thinking and ability to solve complex problems [2].
In recent years, research on the teaching reform of the course “big data technology and applications” has focused on how to improve students’ self-learning and practical operation abilities through new teaching models such as flipped classrooms and project-based learning. Saltz and Heckman (2015) [1] proposed that project-based teaching methods can effectively enhance students’ practical abilities, but their promotion in large-scale classrooms still faces challenges. In addition, Gigante and Firestone (2008) [3] pointed out the crucial role of teachers in teaching reform and argued that the improvement of teachers’ abilities and training support are the foundation for successful reform. However, in practice, there are significant differences in teachers’ adaptability and technical literacy. With the rapid development of the digital economy and the deepening of new liberal arts construction, how to effectively carry out the teaching reform of the course “big data technology and applications” in the digital economy major has become a key issue. In teaching reform plan evaluation, decision-making has become increasingly complex, since decisions are usually made within the time frame of human perception, under pressure and with a lack of data, in response to fuzzy and uncertain conditions [4,5,6]. Therefore, how to effectively address teaching reform problems in a complex environment is a challenging issue of worldwide concern.
Teaching reform plan evaluation in the new liberal arts construction scenario has a complex evolution process, and traditional decision theory is difficult to apply in education decision-making, which usually involves multiple conflicting attributes, multi-stakeholder interests, semantic benefits, and a limited number of alternatives. In essence, teaching reform plan evaluation can be considered a complex multi-criteria decision-making (MCDM) problem. Because evaluating teaching reform programs involves multiple conflicting attributes, semantic benefits, an inherently complicated evaluation system, and the need for precise judgments under fuzzy and uncertain information, traditional decision-making methods struggle to respond effectively. Additionally, there are currently various methods for determining weights, such as the Analytic Hierarchy Process (AHP) and the Best–Worst method (BWM). The AHP method uses hierarchical decomposition and pairwise comparisons by experts to systematically transform subjective judgments into quantitative weights, generating a priority vector to build a scientifically grounded weighting system [7]. The BWM method involves experts selecting the best and worst criteria, followed by relative preference comparisons for the remaining criteria, using an optimization model to minimize deviations and systematically generate a highly consistent and effective weight distribution [8]. Both of these weighting methods rely primarily on expert opinions to perform pairwise comparisons in the decision matrix, overlooking the objective characteristics of each attribute’s data. In this article, the probabilistic linguistic entropy weight is proposed to emphasize the inherent probabilistic semantic characteristics of the data itself, as well as the concept and properties of information entropy, while accurately expressing this information in probabilistic form based on expert opinions. To aid decision-makers in promoting high-quality development of higher education, this study develops an extended probabilistic linguistic TODIM method with probabilistic linguistic entropy weight and Hamming distance to assess the teaching reform plan for the core course “big data technology and applications” in the digital economy major. The motivation for this study is to provide a sound technical tool for enhancing scientific decisions by precisely expressing and depicting fuzzy and uncertain preference information in education management under the background of new liberal arts construction, involving the cognitive behaviors and psychological factors of decision-makers (DMs). In brief, the main works are listed as follows:
- (1)
An extended probabilistic linguistic TODIM with probabilistic linguistic entropy weight and Hamming distance is presented to evaluate the teaching reform plan for the core course “big data technology and applications” in the digital economy major.
- (2)
This extended approach is applied to address teaching reform problems under fuzzy and uncertain decision-making conditions, incorporating the cognitive behaviors and psychological factors of decision-makers.
- (3)
Probabilistic linguistic entropy weight, based on the entropy of the additive linguistic term set, is applied to generate weight information.
- (4)
Parameter sensitivity analysis demonstrates the stability and effectiveness of the extended approach under changes in the attenuation parameter of loss $\theta$.
- (5)
A case study on teaching reform plan evaluation under the background of new liberal arts construction is carried out, and a comparative analysis with different criteria weights and different methods is conducted to verify the extended approach.
The remainder of this paper is organized as follows. Section 2 reviews the relevant research. Section 3 presents some preliminaries of the additive linguistic term set, the concept of the probabilistic linguistic term set (PLTS), and the TODIM approach. In Section 4, an extended probabilistic linguistic TODIM with probabilistic linguistic entropy weight and Hamming distance is proposed. In Section 5, a case study of teaching reform plan evaluation in the new liberal arts construction scenario is executed to verify the extended approach. Finally, Section 6 concludes this paper.
2. Related Work
In recent years, with the promotion of the new liberal arts construction concept, the core course “big data technology and applications” in the digital economy major has become a hot research area in teaching reform. The new liberal arts emphasize the integration of traditional liberal arts and emerging technological disciplines to meet the demand for versatile talents in the digital age. In this context, some scholars have explored how to reform the teaching of big data technology and application courses. Saltz and Heckman (2015) [1] applied a big data teaching model based on project-based learning, emphasizing the improvement of students’ practical and collaborative skills through actual projects. The value of this study lies in the combination of theory and practice, which meets the practical requirements of big data courses. However, it lacks personalized guidance for students at different levels and thus cannot fully meet their different needs. Demchenko et al. (2014) [9] proposed that the application of online learning platforms in big data courses greatly improves students’ learning flexibility, but the low interactivity of online learning affects students’ participation and learning effectiveness. Chen et al. (2024) [10] studied the application of virtual laboratories in teaching and argued that virtual experiments can provide students with a flexible experimental environment, especially when equipment resources are limited. Although virtual laboratories improve the flexibility of learning, the gap between virtual experiments and real-life interactions has not been fully addressed, resulting in insufficient operability. Ma and Liu (2024) [11] proposed that big data courses should be continuously updated and adjusted according to industry demand to ensure that students have the skills required by the market upon graduation. This study provides theoretical support for course design, but how to track industry demand in real time and dynamically adjust course content remains an unresolved issue.
Although some progress has been made in the research on teaching reform for big data technology and application courses, several shortcomings remain. These include difficulties in achieving teaching objectives, aligning teaching methods, improving teachers’ big data teaching abilities, providing personalized learning experiences for students, maintaining the sustainability of teaching reforms, and the lack of a quantitative tracking and evaluation mechanism. Moreover, balancing the cultivation of big data skills and humanistic literacy within a limited course time remains a challenge. In addition, teaching reform plan evaluation usually involves multiple conflicting attributes, multi-stakeholder interests, semantic benefits, fuzzy and uncertain conditions, and a limited number of alternatives. Therefore, it is considered a complex MCDM problem.
MCDM mainly studies the procedures and methods used in the management process, which can involve multi-expert opinions, multiple conflicting criteria, and a limited number of alternatives [12,13]. Kahneman and Tversky (1979) [14] indicated that DMs have bounded rationality in complex decision-making processes; that is to say, the behaviors and psychologies of DMs are important factors affecting decision-making results. In view of this, Gomes and Lima (1991) [15] proposed the TODIM method, which is one of the most meaningful MCDM methods considering the cognitive behaviors and psychological factors of DMs [16,17,18,19]. The TODIM method can integrate the cognitive behaviors of DMs based on prospect theory, and it has been applied to address various decision problems [20,21,22,23,24].
Real decision-making processes are complex, since decisions are usually made within the time frame of human perception and under pressure [4,5,6]. Moreover, many decision attributes or criteria are very difficult to quantify [25,26,27]. Furthermore, due to the intricacy of systems and the complexity of decision processes, the effectiveness of decision-making may be constrained [6]. Besides, many qualitative criteria are not based on accurate numerical evaluation. Chen et al. (2016) [28] proposed a proportional hesitant fuzzy linguistic term set for multi-criteria group decision-making that included the proportional information of each generalized linguistic term. Liang et al. (2019) [29] extended a multi-granular proportional hesitant fuzzy linguistic TODIM approach for emergency decision-making problems. Cao et al. (2023) [16] proposed a product selection method based on intuitionistic fuzzy soft sets and TODIM. However, the calculations in such methods are relatively complicated, and DMs often use natural language to express their views and opinions, such as “poor”, “good”, and other similar linguistic term sets (LTSs) [17]. Sometimes, it is very difficult to describe qualitative information or express preferences only through LTSs. For example, when experts evaluate a teaching reform plan, one expert might consider it “very good” or “good” but not be sure how good it really is. In this situation, Pang et al. (2016) [30] proposed probabilistic linguistic term sets (PLTSs), which can more precisely express this type of preference information with different importance degrees. For instance, the expert evaluating the teaching reform plan might deem, with 50% probability, that the plan is “very good”, with 20% probability that it is “good”, and with 30% probability that it is “general”. Obviously, teaching reform plan evaluation problems need to be fully integrated with PLTSs to adequately convey complicated fuzzy linguistic information. Accordingly, this paper adopts an extended probabilistic linguistic TODIM approach for more precise assessment.
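To make this example concrete, suppose (purely for illustration; these labels are not taken from the paper) that $s_3$, $s_2$, and $s_1$ denote “very good”, “good”, and “general” in an additive LTS. The expert judgment above can then be written as the PLTS

```latex
% Hypothetical labels: s_3 = "very good", s_2 = "good", s_1 = "general".
L(p) = \{\, s_3(0.5),\; s_2(0.2),\; s_1(0.3) \,\}, \qquad 0.5 + 0.2 + 0.3 = 1,
```

so the probabilistic information is complete in the sense defined in Section 3.2.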
Meanwhile, regarding distance measures for PLTSs, Pang et al. (2016) [30] proposed the Euclidean distance measure based on normalized PLTSs, in which the added linguistic terms could influence the distance value to some extent. Also using normalized PLTSs, Zhang et al. (2016) [31] gave the Hamming distance measure of PLTSs, in which adding linguistic terms with zero probability may distort originally important information. Lin and Xu (2017) [32] calculated the score of each probabilistic linguistic element to further modify the Hamming distance of PLTSs [33]. Besides, Liu and You (2017) [17] determined the index weight based on an information entropy measurement in the probabilistic linguistic environment to extend the TODIM method with PLTSs for addressing MCDM problems, and the distance between PLTSs was determined based on the definition proposed by Pang et al. (2016) [30]. Zhang et al. (2019) [24] considered water security evaluation based on TODIM with PLTSs, where the attribute weights were derived by a mathematical programming model, which could be separated from the data characteristics. Wu and Xu (2020) [23] proposed a hybrid TODIM method with crisp numbers and PLTSs for urban epidemic situation evaluation. He et al. (2021) [34] presented a risk ranking of wind turbine systems through an improved FMEA based on probabilistic linguistic information and the TODIM method. Lei et al. (2023) [35] designed a TODIM-VIKOR model under probabilistic uncertain linguistic term conditions to solve MAGDM problems; the hybrid nature of the TODIM-VIKOR model requires multiple layers of matrix computations and distance measures, which can limit its scalability for problems with numerous alternatives and attributes. Wen and Liao (2024) [36] proposed a multiple-criteria-decision-aiding model for selecting a suitable blockchain service platform based on the PLTS, MCHP, the 2-additive Choquet integral, and the ExpTODIM method. This model combines multiple methods, which makes it computationally complex, cumbersome, and time-consuming, especially when dealing with a large number of decision criteria and alternative solutions.
Given the research above, this study proposes an extended probabilistic linguistic TODIM approach with probabilistic linguistic entropy weight and Hamming distance to evaluate the teaching reform plan for the core course “big data technology and applications” in the new liberal arts construction scenario. This extended approach can aid DMs involved in education management in complex, ambiguous, and uncertain environments, and its calculation is relatively simple, with strong logic and high efficiency.
3. Preliminaries
Here, some concepts of the additive LTS, PLTS, normalization of PLTS, comparison between PLTSs, and TODIM method are introduced.
3.1. Additive Linguistic Term Set
An LTS, considering linguistic decision making, can be used to express opinions on the considered objects [37]. It consists of a limited and completely ordered set of linguistic terms. The additive LTS is further defined by Xu (2005; 2012) [38,39] as a subscript-symmetric LTS:

$$S = \{ s_\alpha \mid \alpha = -\tau, \ldots, -1, 0, 1, \ldots, \tau \},$$

where $s_{-\tau}$ and $s_{\tau}$ represent the lower value and upper value of the LTS, respectively, and $\tau$ is a positive integer. The linguistic terms have the following characteristics [40]:
- (1)
If $\alpha > \beta$, then $s_\alpha > s_\beta$.
- (2)
The negation operator is defined as $\mathrm{neg}(s_\alpha) = s_{-\alpha}$.
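For illustration only (the choice $\tau = 3$ is an assumption, not a value used later in this paper), a seven-term additive LTS and one application of the negation operator are:

```latex
S = \{ s_{-3}, s_{-2}, s_{-1}, s_{0}, s_{1}, s_{2}, s_{3} \}, \qquad
\mathrm{neg}(s_{2}) = s_{-2}, \qquad s_{2} > s_{-1} \ \text{since}\ 2 > -1 .
```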
3.2. Probabilistic Linguistic Term Set
Based on the additive LTS $S = \{ s_\alpha \mid \alpha = -\tau, \ldots, -1, 0, 1, \ldots, \tau \}$ [38,39], the definition of the PLTS is given by Pang et al. (2016) [30] as follows:

$$L(p) = \left\{ L^{(k)}\!\left(p^{(k)}\right) \;\middle|\; L^{(k)} \in S,\ p^{(k)} \ge 0,\ k = 1, 2, \ldots, \#L(p),\ \sum_{k=1}^{\#L(p)} p^{(k)} \le 1 \right\},$$

where $L^{(k)}(p^{(k)})$ represents the linguistic term $L^{(k)}$ associated with probability $p^{(k)}$, and $\#L(p)$ is the number of all of the different linguistic terms in $L(p)$.
Note that if $\sum_{k=1}^{\#L(p)} p^{(k)} = 1$, then the PLTS has the complete probabilistic information of all possible linguistic terms; if $0 < \sum_{k=1}^{\#L(p)} p^{(k)} < 1$, then the PLTS has partial probabilistic information; if $\sum_{k=1}^{\#L(p)} p^{(k)} = 0$, then the PLTS has completely unknown probabilistic information.
In addition, the detailed processes regarding the normalization of PLTSs and the comparison between PLTSs can be obtained from Pang et al. (2016) [30].
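As a minimal computational sketch (not the authors’ implementation), a PLTS can be stored as a list of (subscript, probability) pairs; the function below only rescales partial probabilistic information so that the probabilities sum to one, which is one part of the normalization procedure of Pang et al. (2016) [30] (the step of aligning the numbers of terms between two PLTSs is omitted here).

```python
from typing import List, Tuple

# A PLTS over an additive LTS is stored as (subscript, probability) pairs,
# e.g. [(3, 0.5), (2, 0.2), (1, 0.3)] stands for {s3(0.5), s2(0.2), s1(0.3)}.
PLTS = List[Tuple[int, float]]

def rescale_plts(plts: PLTS) -> PLTS:
    """Rescale partial probabilistic information so the probabilities sum to 1."""
    total = sum(p for _, p in plts)
    if total == 0:
        raise ValueError("completely unknown probabilistic information cannot be rescaled")
    return [(alpha, p / total) for alpha, p in plts]

# Example: partial information {s2(0.4), s1(0.4)} is rescaled to {s2(0.5), s1(0.5)}.
print(rescale_plts([(2, 0.4), (1, 0.4)]))
```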
3.3. TODIM Method
The TODIM method is an interactive MCDM method proposed according to prospect theory [14]. Its main advantage is that it can capture the cognitive behaviors and psychological factors of DMs. The alternatives can be ranked by their overall dominance degrees, which are derived from the dominance degree of each alternative over the others. Suppose that there are $m$ alternatives $A_i\ (i = 1, 2, \ldots, m)$ and $n$ criteria $C_j\ (j = 1, 2, \ldots, n)$. The specific steps are presented as follows [15]:
Step 1: Construct the original decision matrix $X = (x_{ij})_{m \times n}$, where all $x_{ij}$ are crisp numbers and $x_{ij}$ is the $j$th criterion value with respect to the $i$th alternative.
Step 2: Calculate the standardized decision matrix $Z = (z_{ij})_{m \times n}$. The standardized decision matrix is calculated by:
- (1)
If the criteria are benefit criteria, the criteria values can be normalized as:

$$z_{ij} = \frac{x_{ij}}{\sum_{i=1}^{m} x_{ij}};$$

- (2)
If the criteria are cost criteria, the criteria values can be normalized as:

$$z_{ij} = \frac{1/x_{ij}}{\sum_{i=1}^{m} 1/x_{ij}}.$$
Step 3: Obtain the relative weight $w_{jr}$ of the criterion $C_j$ to the reference criterion $C_r$ as follows:

$$w_{jr} = \frac{w_j}{w_r}, \qquad w_r = \max\{ w_j \mid j = 1, 2, \ldots, n \},$$

where $w_j$ is the weight of the criterion $C_j$ and $\sum_{j=1}^{n} w_j = 1$.
Step 4: Generate the dominance degree of alternative $A_i$ over alternative $A_k$ as follows:

$$\delta(A_i, A_k) = \sum_{j=1}^{n} \phi_j(A_i, A_k), \qquad i, k = 1, 2, \ldots, m,$$

where $\phi_j(A_i, A_k)$ is the dominance degree of alternative $A_i$ over alternative $A_k$ with respect to criterion $C_j$, which can be generated by

$$\phi_j(A_i, A_k) = \begin{cases} \sqrt{\dfrac{w_{jr}\,(z_{ij} - z_{kj})}{\sum_{j=1}^{n} w_{jr}}}, & z_{ij} - z_{kj} > 0, \\[2mm] 0, & z_{ij} - z_{kj} = 0, \\[2mm] -\dfrac{1}{\theta}\sqrt{\dfrac{\left(\sum_{j=1}^{n} w_{jr}\right)(z_{kj} - z_{ij})}{w_{jr}}}, & z_{ij} - z_{kj} < 0, \end{cases}$$

where $\theta$ is the attenuation parameter of loss. There are three situations: (1) if $z_{ij} - z_{kj} > 0$, then $\phi_j(A_i, A_k)$ indicates a gain; (2) if $z_{ij} - z_{kj} = 0$, then $\phi_j(A_i, A_k)$ indicates nil; and (3) if $z_{ij} - z_{kj} < 0$, then $\phi_j(A_i, A_k)$ indicates a loss.
Step 5: Obtain the overall dominance degree $\Phi(A_i)$ of alternative $A_i$ as follows:

$$\Phi(A_i) = \frac{\sum_{k=1}^{m} \delta(A_i, A_k) - \min_{i}\left\{ \sum_{k=1}^{m} \delta(A_i, A_k) \right\}}{\max_{i}\left\{ \sum_{k=1}^{m} \delta(A_i, A_k) \right\} - \min_{i}\left\{ \sum_{k=1}^{m} \delta(A_i, A_k) \right\}}, \qquad i = 1, 2, \ldots, m.$$

Step 6: Rank the schemes by the overall dominance degree $\Phi(A_i)$. The greater the value of $\Phi(A_i)$, the better the scheme.
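As an illustrative sketch of Steps 3–6 under the standard crisp-number TODIM formulation reconstructed above (not necessarily the exact implementation used later in this paper), the procedure can be coded as follows; the inputs Z and w are placeholders for a standardized benefit-oriented matrix and a normalized weight vector.

```python
import numpy as np

def todim_rank(Z: np.ndarray, w: np.ndarray, theta: float = 1.0) -> np.ndarray:
    """Classical TODIM: return the overall dominance degree of each alternative.

    Z     : (m, n) standardized decision matrix (benefit-oriented values).
    w     : (n,) criteria weights summing to 1.
    theta : attenuation parameter of loss.
    """
    m, n = Z.shape
    w_r = w / w.max()            # relative weights w_jr
    w_sum = w_r.sum()
    delta = np.zeros((m, m))     # dominance of A_i over A_k
    for i in range(m):
        for k in range(m):
            for j in range(n):
                diff = Z[i, j] - Z[k, j]
                if diff > 0:     # gain
                    delta[i, k] += np.sqrt(w_r[j] * diff / w_sum)
                elif diff < 0:   # loss, attenuated by theta
                    delta[i, k] -= np.sqrt(w_sum * (-diff) / w_r[j]) / theta
    row = delta.sum(axis=1)
    return (row - row.min()) / (row.max() - row.min())  # overall dominance degrees
```

Alternatives are then ranked in descending order of the returned values.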
4. Probabilistic Linguistic TODIM Method
Here, an extended probabilistic linguistic TODIM with probabilistic linguistic entropy weight and Hamming distance is elaborated.
For an MCDM problem with PLTSs, suppose that there are $n$ criteria $C_j\ (j = 1, 2, \ldots, n)$ and $m$ alternatives $A_i\ (i = 1, 2, \ldots, m)$. Based on the additive LTS $S = \{ s_\alpha \mid \alpha = -\tau, \ldots, -1, 0, 1, \ldots, \tau \}$, DMs can evaluate the alternatives with respect to each criterion $C_j$. The results can be given by PLTSs $L_{ij}(p)$ to construct the decision matrix $R = \left(L_{ij}(p)\right)_{m \times n}$. The specific processes of the extended TODIM approach are listed as follows:
Step 1: Construct the original decision matrix $R = \left(L_{ij}(p)\right)_{m \times n}$.
The original decision matrix can be constructed from the PLTSs $L_{ij}(p)$, where $L_{ij}(p)$ is the $j$th criterion value with respect to the $i$th alternative.
Step 2: Calculate the standardized decision matrix $\bar{R} = \left(\bar{L}_{ij}(p)\right)_{m \times n}$ as follows: each criterion value is normalized by the definition given by Pang et al. (2016) [30].
Step 3: Calculate the probabilistic linguistic entropy weight. Five sub-steps are included, as sketched after this list:
- (1)
Based on the additive LTS $S = \{ s_\alpha \mid \alpha = -\tau, \ldots, -1, 0, 1, \ldots, \tau \}$ [38,39], six methods for computing the entropy value $E(s_\alpha)$ of a single linguistic term are proposed by Farhadinia (2016) [41], and one of these six measures is adopted here.
- (2)
Let $L_j(p)$ be a PLTS; the probabilistic linguistic entropy $E_j$ of criterion $C_j$ is calculated by aggregating the entropy values of its linguistic terms weighted by their probabilities.
- (3)
Standardize the probabilistic linguistic entropy to obtain $\bar{E}_j$.
- (4)
Obtain the criterion weight from the standardized entropy.
- (5)
Normalize the criterion weight $w_j$ such that $\sum_{j=1}^{n} w_j = 1$.
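The sub-steps above can be sketched computationally as follows. This is only a sketch under explicit assumptions: the single-term entropy used here ($4u(1-u)$ on the rescaled subscript $u$) merely stands in for whichever of Farhadinia’s (2016) six measures is actually adopted, and the weighting rule assumes the usual entropy-weight convention that lower entropy yields a larger weight.

```python
import numpy as np

def term_entropy(alpha: int, tau: int) -> float:
    """Stand-in single-term entropy on the rescaled subscript u in [0, 1];
    any of Farhadinia's (2016) measures could be substituted here."""
    u = (alpha + tau) / (2 * tau)
    return 4 * u * (1 - u)

def plts_entropy(plts, tau: int) -> float:
    """Probability-weighted entropy of one PLTS given as (subscript, prob) pairs."""
    return sum(p * term_entropy(a, tau) for a, p in plts)

def entropy_weights(columns, tau: int) -> np.ndarray:
    """columns[j] is the list of PLTS evaluations of all alternatives under criterion j.
    Assumed scheme: average entropy per criterion, standardized, then w_j ~ 1 - E_j."""
    E = np.array([np.mean([plts_entropy(x, tau) for x in col]) for col in columns])
    E_std = E / E.max() if E.max() > 0 else E   # standardized probabilistic linguistic entropy
    raw = 1.0 - E_std                           # lower entropy -> larger weight
    return raw / raw.sum()                      # normalized criterion weights
```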
Step 4: Obtain the relative weight $w_{jr}$ of the criterion $C_j$ to the reference criterion $C_r$ as follows:

$$w_{jr} = \frac{w_j}{w_r}, \qquad w_r = \max\{ w_j \mid j = 1, 2, \ldots, n \},$$

where $w_j$ is the probabilistic linguistic entropy weight of the criterion $C_j$, and $\sum_{j=1}^{n} w_j = 1$.
Step 5: Generate the dominance degree of alternative $A_i$ over alternative $A_k$ as follows:

$$\delta(A_i, A_k) = \sum_{j=1}^{n} \phi_j(A_i, A_k), \qquad i, k = 1, 2, \ldots, m,$$

where $\phi_j(A_i, A_k)$ is the dominance degree of alternative $A_i$ over alternative $A_k$ with respect to criterion $C_j$, which can be generated by

$$\phi_j(A_i, A_k) = \begin{cases} \sqrt{\dfrac{w_{jr}\, d\!\left(\bar{L}_{ij}(p), \bar{L}_{kj}(p)\right)}{\sum_{j=1}^{n} w_{jr}}}, & \bar{L}_{ij}(p) \succ \bar{L}_{kj}(p), \\[2mm] 0, & \bar{L}_{ij}(p) \sim \bar{L}_{kj}(p), \\[2mm] -\dfrac{1}{\theta}\sqrt{\dfrac{\left(\sum_{j=1}^{n} w_{jr}\right) d\!\left(\bar{L}_{ij}(p), \bar{L}_{kj}(p)\right)}{w_{jr}}}, & \bar{L}_{ij}(p) \prec \bar{L}_{kj}(p), \end{cases}$$

where $\theta$ is the attenuation parameter of loss and $d(\cdot, \cdot)$ denotes the distance between PLTSs. There are three situations: (1) if $\bar{L}_{ij}(p) \succ \bar{L}_{kj}(p)$, then $\phi_j(A_i, A_k)$ indicates a gain; (2) if $\bar{L}_{ij}(p) \sim \bar{L}_{kj}(p)$, then $\phi_j(A_i, A_k)$ indicates nil; (3) if $\bar{L}_{ij}(p) \prec \bar{L}_{kj}(p)$, then $\phi_j(A_i, A_k)$ indicates a loss. In this subsection, the distance measurement is calculated by the modified Hamming distance of PLTSs proposed by Lin and Xu (2017) [32].
Step 6: Obtain the overall dominance degree $\Phi(A_i)$ of alternative $A_i$ as follows:

$$\Phi(A_i) = \frac{\sum_{k=1}^{m} \delta(A_i, A_k) - \min_{i}\left\{ \sum_{k=1}^{m} \delta(A_i, A_k) \right\}}{\max_{i}\left\{ \sum_{k=1}^{m} \delta(A_i, A_k) \right\} - \min_{i}\left\{ \sum_{k=1}^{m} \delta(A_i, A_k) \right\}}, \qquad i = 1, 2, \ldots, m.$$

Step 7: Rank the schemes by the overall dominance degree $\Phi(A_i)$. The greater the value of $\Phi(A_i)$, the better the scheme.
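For illustration, the gain/loss computation in Steps 5 and 6 can be sketched as follows. The distance and comparison functions used here are simplified: the distance is a basic Hamming-type measure over already normalized and aligned PLTSs, and the comparison uses a probability-weighted score, so the sketch approximates rather than reproduces the modified measure of Lin and Xu (2017) [32].

```python
import numpy as np

def plts_score(plts, tau: int) -> float:
    """Score of a PLTS: probability-weighted mean of rescaled subscripts."""
    total = sum(p for _, p in plts)
    return sum(p * (a + tau) / (2 * tau) for a, p in plts) / total

def hamming_distance(x, y, tau: int) -> float:
    """Hamming-type distance between two PLTSs that are assumed to be normalized
    and aligned (same number of terms), in the sense of Pang et al. (2016)."""
    return sum(abs(px * (ax + tau) / (2 * tau) - py * (ay + tau) / (2 * tau))
               for (ax, px), (ay, py) in zip(x, y)) / len(x)

def pl_todim(R, w, tau: int, theta: float = 1.0) -> np.ndarray:
    """R[i][j]: normalized PLTS of alternative i under criterion j (list of (subscript, prob)).
    w: probabilistic linguistic entropy weights. Returns overall dominance degrees."""
    m, n = len(R), len(w)
    w_r = np.asarray(w, dtype=float) / max(w)
    w_sum = w_r.sum()
    delta = np.zeros((m, m))
    for i in range(m):
        for k in range(m):
            for j in range(n):
                d = hamming_distance(R[i][j], R[k][j], tau)
                cmp = plts_score(R[i][j], tau) - plts_score(R[k][j], tau)
                if cmp > 0:      # gain
                    delta[i, k] += np.sqrt(w_r[j] * d / w_sum)
                elif cmp < 0:    # loss, attenuated by theta
                    delta[i, k] -= np.sqrt(w_sum * d / w_r[j]) / theta
    row = delta.sum(axis=1)
    return (row - row.min()) / (row.max() - row.min())
```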
This paper aims to provide a solution for effectively addressing teaching reform plan evaluation problems in complex, fuzzy, and uncertain environments in order to improve the level of scientific decision-making. In the extended probabilistic linguistic TODIM approach, firstly, PLTSs are applied to precisely express fuzzy and uncertain preference information in the case of teaching reform plan evaluation by depicting the different importance degrees or probability degrees of all possible linguistic or preference information. Secondly, the probabilistic linguistic entropy weight, based on the entropy of the additive linguistic term set, is adopted to generate weight information. Thirdly, based on TODIM, the cognitive behaviors and psychological factors of DMs are incorporated into the extended approach for effectively responding to education management in the new liberal arts construction scenario.
5. Case Study
5.1. Case Analysis
In recent years, with the rapid development of the global digital economy, higher education is facing the challenge of cultivating interdisciplinary and versatile talents. As a core course in the digital economy major, “big data technology and applications” not only undertakes the task of cultivating students’ mastery of big data technology, but also requires students to possess critical thinking, innovation ability, and sensitivity to social ethical issues. Therefore, how to achieve innovation in curriculum content and teaching methods through teaching reform, and enhance students’ comprehensive literacy, has become the focus of current research. Consequently, it is urgent to deal with teaching reform plan evaluation problems and provide a scientific basis for decisions; however, decision-making processes have become increasingly complex in real assessment environments, often giving rise to fuzzy and uncertain information. Here, an extended probabilistic linguistic TODIM with probabilistic linguistic entropy weight and Hamming distance is presented for a case study on teaching reform plan evaluation for the core course “big data technology and applications” in the digital economy major. The detailed assessment processes are described below.
Four distinct teaching reform plans have been selected for evaluation, focusing on the core course “big data technology and applications”. These plans are assessed using PLTSs, an effective technique for handling fuzziness and uncertainty in decision-making. The evaluation process is grounded in five critical criteria: (1) $C_1$: Achievement of Teaching Objectives; (2) $C_2$: Effectiveness of Teaching Methods; (3) $C_3$: Teaching Competency of Faculty; (4) $C_4$: Learning Experience of Students; (5) $C_5$: Sustainability of Teaching Reform. Obviously, $C_1$, $C_2$, $C_3$, $C_4$, and $C_5$ are all benefit criteria. Each is carefully chosen to capture the multifaceted challenges and opportunities inherent in teaching reform for this rapidly evolving field, based on the analysis of the literature review presented in the related work. By applying PLTSs, the evaluation not only considers quantitative outcomes, but also addresses the ambiguous aspects of educational effectiveness.
The detailed processes are listed below:
Step 1: Construct the original decision matrix.
In this step, the most crucial task is to obtain evaluation information for the four teaching reform plans $A_i\ (i = 1, 2, 3, 4)$. The evaluation information from expert opinions is expressed with an additive LTS of the form defined in Section 3.1. Table 1 presents the original evaluation information according to the concept of PLTSs.
Step 2: Calculate the standardized decision matrix. $C_1$, $C_2$, $C_3$, $C_4$, and $C_5$ are clearly benefit criteria. Therefore, the standardization processes and the criteria values are calculated according to Equation (9) and sorted in ascending order based on the subscripts of the linguistic terms, as shown in Table 2.
Step 3: Obtain the relative weight. The relative weight can be obtained according to Equations (10)–(15), as shown in Table 3. The criterion with the maximum weight is taken as the reference criterion.
Step 4: Generate the dominance degree $\delta(A_i, A_k)$ of alternative $A_i$ over alternative $A_k$ using Equations (16) and (17), where $\theta$ is the attenuation parameter of loss.
Step 5: Obtain the overall dominance degree $\Phi(A_i)$ of each alternative $A_i$ by Equation (18), as shown in Table 4.
Step 6: Arrange all schemes according to the overall dominance degree $\Phi(A_i)$, as presented in Table 4. The alternative with the greatest overall dominance degree in Table 4 is the best teaching reform plan for the core course “big data technology and applications” in the digital economy major.
5.2. Sensitivity Analysis
The proposed extended probabilistic linguistic TODIM approach with probabilistic linguistic entropy weight and Hamming distance involves a single parameter. A sensitivity analysis of the parameter $\theta$, the attenuation parameter of loss, is therefore important for verifying the validity of the extended approach. The dominance degree $\delta(A_i, A_k)$ of alternative $A_i$ over alternative $A_k$ is calculated based on the parameter $\theta$; thus, the sensitivity analysis is carried out with different values of $\theta$, in steps of 1, for each simulation. Table 5 shows the simulation results.
In Table 5, we can see that the ranking results are consistent when the parameter $\theta$ takes different values from 1 to 10. The simulation results demonstrate the stability and effectiveness of the extended approach under changes in the attenuation parameter of loss $\theta$.
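A minimal sketch of this sensitivity loop, reusing the hypothetical pl_todim function sketched in Section 4 (R, w, and tau below are placeholders for the normalized PLTS decision matrix, the entropy weights, and the LTS parameter, not the actual case data):

```python
import numpy as np

# Placeholders: R (normalized PLTS decision matrix), w (entropy weights), tau (LTS parameter).
for theta in range(1, 11):                    # attenuation parameter of loss from 1 to 10
    phi = pl_todim(R, w, tau, theta=theta)    # overall dominance degrees for this theta
    ranking = (np.argsort(-phi) + 1).tolist() # alternatives in descending order of dominance
    print(f"theta = {theta:2d}  ranking = {ranking}")
```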
5.3. Comparative Analysis and Discussion
Based on step 3 in Section 4, there are six methods for computing entropy values, proposed by Farhadinia (2016) [41]. In this study, therefore, the probabilistic linguistic entropy weight could be induced and determined in six ways. To further illustrate the influence of the criteria weights on the validity of the extended TODIM approach, a different probabilistic linguistic entropy weight is induced and determined using another of the entropy measures proposed by Farhadinia (2016) [41].
The probabilistic linguistic entropy weight is calculated based on step 3 in Section 4. The relative weight is obtained according to Equations (10)–(15), as shown in Table 6. Here, the criterion with the maximum weight is the reference criterion.
Now, based on the different probabilistic linguistic entropy weights, the evaluation results for the teaching reform plans, generated by the extended probabilistic linguistic TODIM approach, can be easily obtained using Equations (16)–(18), as presented in Table 7.
In Table 7, it is noticeable that the rankings of the evaluation results are consistent across the different criteria weights used in the extended probabilistic linguistic TODIM approach, thereby verifying the effectiveness of the extended approach again.
Moreover, a comparative analysis of different methods is conducted to verify the effectiveness of the extended approach. A probabilistic linguistic TOPSIS [42] combined with AHP is presented for comparative analysis with the probabilistic linguistic entropy weight, as shown in Algorithm 1.
Algorithm 1: Probabilistic Linguistic TOPSIS with AHP Method
- Step 1: Construct the original probabilistic linguistic decision matrix.
- Step 2: Calculate the standardized decision matrix.
- Step 3: Determine the positive ideal solution and the negative ideal solution, based on the probabilistic linguistic entropy, presented in (11) and (12).
- Step 4: Obtain the weighted decision matrix. The index weights are determined by the AHP method, which is applied for comparative analysis.
- Step 5: Determine the distance measures to the positive ideal solution and the negative ideal solution.
- Step 6: Calculate the relative closeness of each alternative.
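To indicate how this comparison baseline can be assembled, a brief sketch of the AHP weight derivation (using the row geometric-mean approximation of the principal eigenvector) and of the TOPSIS relative closeness is given below. The pairwise comparison matrix in the example is hypothetical and is not the matrix reported in Table 8.

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Approximate the AHP priority vector via row geometric means."""
    gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[1])
    return gm / gm.sum()

def topsis_closeness(d_pos: np.ndarray, d_neg: np.ndarray) -> np.ndarray:
    """Relative closeness of each alternative to the positive ideal solution."""
    return d_neg / (d_pos + d_neg)

# Hypothetical 5x5 pairwise comparison matrix for C1..C5 (illustration only).
P = np.array([
    [1,   2,   3,   2,   4],
    [1/2, 1,   2,   1,   3],
    [1/3, 1/2, 1,   1/2, 2],
    [1/2, 1,   2,   1,   2],
    [1/4, 1/3, 1/2, 1/2, 1],
], dtype=float)
print(ahp_weights(P).round(3))
```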
Table 8 displays the pairwise comparison matrix, derived from the expertise and experience of decision-makers, which is used to determine the index weights shown in Table 9 for evaluating the teaching reform plan. Table 10 presents the evaluation results generated by the probabilistic linguistic TOPSIS method, which uses the AHP method to determine the index weights.
In Table 10, we see that the rankings of the evaluation results are the same when using the probabilistic linguistic TOPSIS method with AHP, further verifying the effectiveness and validity of the extended TODIM approach.
The advantages of the approach presented in this paper are that it may outperform traditional methods in data processing, accuracy, and efficiency. It can handle more complex datasets, particularly providing better decision-making results in fuzzy and uncertain complex situations. Its disadvantages may lie in the complexity of the method, which could limit its practical application. For users without a background in operations research or decision science, understanding and implementing the method may pose challenges, increasing the learning costs.
Recently, researchers have conducted some research and exploration on the teaching reform of big data technology and application courses and have achieved some results. However, despite providing important references for curriculum reform, current research still has shortcomings. First, most studies lack an evaluation mechanism for students’ long-term learning outcomes; in particular, how to accurately assess students’ learning status and adjust teaching strategies through big data-driven approaches still needs further exploration. Second, many teaching models are difficult to promote in large-scale classrooms, and catering to the personalized learning needs of students at different levels has not been effectively addressed. Third, keeping up with industry demands in the design of big data courses remains difficult; especially in the context of the rapid development of big data technology, the mechanism for updating course content is still imperfect. Fourth, interdisciplinary integration and ethical education in curriculum reform have not received sufficient attention, especially on core issues such as data privacy and security, which has led to insufficient cultivation of students’ awareness and abilities. Besides, the effectiveness of decision-making may be constrained due to the intricacy of systems and the complexity of decision processes [6]. Moreover, decisions are usually made under uncertain conditions within the time frame of human perception, under pressure and with a lack of data [4,5,6]. Therefore, how to effectively deal with teaching reform plan evaluation has become an urgent global issue, especially in the era of rapid worldwide development of the digital economy. This study thus aims to propose an effective technical tool for teaching reform plan evaluation for the core course “big data technology and applications” in the digital economy major, which can provide good technical support for scientific decision-making in order to promote the high-quality development of education.
6. Conclusions
In the context of the construction of new liberal arts, the rapid development of the digital economy has put forward new requirements for higher education, especially in cultivating students’ technical abilities and innovative thinking. The course of big data technology and application, as a core course of the digital economy, has become a key link in cultivating composite talents with data analysis and application abilities. The research on teaching reform of this course can not only enhance students’ practical operation ability but also provide important support for the innovative development of the new liberal arts education model in order to promote the high-quality development of education. This article aims to evaluate the teaching reform plan of “big data technology and applications”, explore how to optimize teaching design, improve course quality and students’ comprehensive abilities under the background of new liberal arts construction, and provide reference and suggestions for future education reform.
Teaching reform plan evaluation usually involves multiple conflicting attributes or criteria, multi-stakeholder interests, semantic benefits, and a limited number of alternatives, and it can therefore be deemed an MCDM problem. Based on the TODIM method, with PLTSs expressing DMs’ preference information, an extended probabilistic linguistic TODIM with probabilistic linguistic entropy weight and Hamming distance is presented to evaluate the teaching reform plan for the core course “big data technology and applications” in the digital economy major. In this extended approach, the probabilistic linguistic entropy weight, based on the entropy of the additive LTS, is applied to generate weight information. In addition, parameter sensitivity analysis is conducted to verify the stability and effectiveness of the extended TODIM approach under changes in the attenuation parameter of loss $\theta$. Moreover, the evaluation results with different criteria weights and different methods are obtained for comparative analysis to further verify the extended TODIM approach. This extended approach provides good technical support for scientific decision-making when evaluating teaching reform plans in the new liberal arts construction scenario.
Theoretical significance: This study has the potential to enrich and enhance the existing theoretical framework used for evaluating educational reform programs. By addressing the complexities of real-world issues within fuzzy and uncertain environments, it contributes to a deeper understanding of these challenges. Furthermore, the method’s ability to represent natural semantic information in probabilistic terms establishes a solid foundation for future research endeavors in this area.
Practical significance: From a practical perspective, the findings of this research provide critical insights for policymakers, industry practitioners, and educational institutions. They serve as a valuable resource for effectively implementing the extended method in the evaluation of actual educational reform initiatives. By doing so, these stakeholders can improve their assessment processes and make more informed, data-driven decisions that enhance educational outcomes.
To improve scientific management and decision-support services in teaching reform plan evaluation, future research will focus on the development and innovation of fuzzy and uncertain intelligent decision-making methods. First, fuzzy and uncertain intelligent decision-making methods can dynamically adjust teaching content and strategies through a comprehensive evaluation of multidimensional fuzzy variables, which can meet the learning needs of students at different levels. Second, fuzzy and uncertain intelligent reasoning systems can monitor and evaluate students’ learning status in real-time, which can provide personalized teaching suggestions for teachers. Third, fuzzy and uncertain intelligent decision-making methods can also help evaluate students’ ethical awareness and social responsibility, and guide students to understand the complex relationship between technology and society more comprehensively through fuzzy logic. Therefore, fuzzy and uncertain intelligent decision-making methods can effectively compensate for shortcomings in current research and enhance the intelligence and personalization of teaching reform, thereby promoting the high-quality development of education.