1. Introduction
Managing uncertain information is a critical component of decision-making, as it directly influences both the reliability and precision of outcomes [1]. Accurate modeling and effective utilization of such uncertainty not only enhance decision accuracy but also improve the overall efficiency of the decision-making process. Driven by practical needs, particularly in engineering, extensive research has focused on developing robust theoretical frameworks to address uncertainty. Among these, evidence theory [2,3] and Random Permutation Set Theory (RPST) [4] have gained prominence due to their effectiveness in various applications [5,6], including decision-making [7], risk analysis [8], and fault diagnosis [9].
Random Permutation Set Theory (RPST), as an extension of evidence theory, introduces a novel approach by expanding the space of combined events to include permutations, thereby enriching the representation of information through event order [10]. This enhanced representation enables more precise decision-making by incorporating additional layers of information and reducing uncertainty. Similar to evidence theory, RPST excels in handling incomplete or ambiguous information without requiring predefined probability distributions. It supports constructing decision models based on available evidence and provides clear metrics for plausibility, conflict, and uncertainty, thereby facilitating result evaluation and transparency. Furthermore, RPST enables the fusion of diverse information sources via combination rules, while its sensitivity to event order introduces unique advantages, allowing for more nuanced and accurate decision-making.
The theoretical foundations of RPST have been steadily advancing, providing a solid basis for its application in decision-making. Chen et al. [11] introduced the concept of distance between RPSs, drawing parallels with the Jousselme distance in evidence theory, to quantify divergence between sets. Deng et al. [12,13] proposed the entropy of RPS to measure the quality of information within permutation sets, while other researchers have developed multiple divergence measures [14,15] and efficient matrix operations [16] to enhance computational performance. These advancements lay the groundwork for RPST’s application in complex engineering scenarios [17]. Furthermore, Deng et al. [18] proposed a method for generating permutation quality functions by modeling features with Gaussian distributions, demonstrating the potential of RPST in practical decision-making tasks.
Despite these advancements, challenges remain in using RPST for final decision-making through fusion. Existing methods, such as the left orthogonal sum (LOS) and right orthogonal sum (ROS) [4], are highly sensitive to the order of permutations, leading to significant variability in outcomes depending on the fusion sequence. While weighted fusion methods based on RPS differences [11,15] have been proposed to mitigate extreme scenarios, they may compromise RPST’s core advantage of incorporating event permutations. Exhaustive evaluations of all possible fusion orders can identify optimal sequences but are computationally prohibitive due to factorial growth in complexity as the number of RPS sources increases.
To address these limitations, this paper proposes a novel method that leverages Fisher Scores to evaluate the class sensitivity of information sources, establishing a fusion order based on this sensitivity ranking. The use of Fisher Scores is particularly suitable, as the initial RPS construction involves Gaussian distribution models of data features, aligning well with the method’s characteristics while offering computational efficiency. The fusion process is further refined by integrating RPS differences to produce the final decision. Experimental evaluations on the Iris dataset demonstrate the effectiveness and practicality of the proposed approach, highlighting its advantages over existing techniques in generating permutation quality functions. This study provides a fresh perspective on utilizing sensitivity rankings to guide permutation order, thereby enhancing decision accuracy.
The structure of this paper is as follows: Section 2 introduces the foundational concepts and background of this study. Section 3 discusses the motivation for the study and details the proposed method for determining fusion order based on Fisher Scores. In Section 4, we validate the proposed approach through experiments and comparisons with state-of-the-art RPST-based decision-making methods. Finally, Section 5 summarizes the key contributions of this study and outlines future research directions.
2. Preliminaries
This section reviews the background knowledge needed to follow the paper, namely the key concepts and definitions of D-S evidence theory, the Random Permutation Set, and the Fisher Score.
2.1. D-S Evidence Theory
Evidence Theory (Dempster–Shafer Theory) is a mathematical framework for handling uncertain information, introduced by Dempster [2] and Shafer [3] in the 1960s and 1970s. This theory plays a crucial role in multi-source information decision-making, and its theoretical properties have been extensively studied by numerous scholars, including aspects such as entropy [19], distance [20], and divergence [21]. Below, we provide some important foundational knowledge regarding Evidence Theory.
Definition 1 (Frame of Discernment). A Frame of Discernment (FOD) is a set whose elements are mutually exclusive and collectively exhaustive. It can be represented as
$\Theta = \{\theta_1, \theta_2, \ldots, \theta_N\}.$
The power set of $\Theta$, denoted $2^{\Theta}$, consists of all possible subsets of $\Theta$:
$2^{\Theta} = \{\emptyset, \{\theta_1\}, \ldots, \{\theta_N\}, \{\theta_1, \theta_2\}, \ldots, \Theta\},$
where $\emptyset$ represents the empty set. In classical evidence theory, no belief is assigned to the empty set, while in generalized evidence theory, belief may be assigned to the empty set.
Definition 2 (Basic Probability Assignment). A Basic Probability Assignment (BPA), also known as a mass function, is a mapping that assigns belief to the power set of a given frame of discernment,
$m: 2^{\Theta} \rightarrow [0,1],$
and it satisfies the following conditions, where $A$ denotes a subset of $\Theta$:
$m(\emptyset) = 0, \qquad \sum_{A \subseteq \Theta} m(A) = 1.$
Classical evidence theory requires that both conditions hold, whereas generalized evidence theory only necessitates the latter. Additionally, $A$ is referred to as a focal element if it satisfies $m(A) > 0$.
Definition 3 (Dempster’s combination rule). For two BPAs $m_1$ and $m_2$ defined under the same frame of discernment, Dempster’s combination rule is given by
$(m_1 \oplus m_2)(A) = \frac{1}{1-K} \sum_{B \cap C = A} m_1(B)\, m_2(C), \quad A \neq \emptyset, \qquad (m_1 \oplus m_2)(\emptyset) = 0.$
In the formula above,
$K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C)$
is called the conflict factor, representing the degree of conflict between the two bodies of evidence. When there is no conflict between the evidence, $K$ is equal to 0; as the conflict between the evidence increases, the value of the conflict factor also becomes larger.
By using Dempster’s combination rule, multiple BPAs within the same frame of discernment can be fused into a single BPA. This fusion result is independent of the fusion order; the rule therefore implicitly assumes an ideal scenario in which the BPAs being fused are accurate and equally reliable.
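To make the rule concrete, the following minimal Python sketch (our own illustration, not code from the cited works; the example masses are hypothetical) combines two BPAs represented as dictionaries from focal elements to masses and shows that the result does not depend on the operand order:

```python
def dempster_combine(m1, m2):
    """Combine two BPAs (dicts mapping frozenset -> mass) with Dempster's rule.

    Minimal sketch: focal elements are frozensets of hypotheses and the
    masses of each BPA are assumed to sum to 1.
    """
    fused, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                fused[inter] = fused.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc          # product mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict (K = 1): Dempster's rule is undefined.")
    # Normalize by 1 - K so the combined masses again sum to 1.
    return {a: v / (1.0 - conflict) for a, v in fused.items()}

# Hypothetical BPAs on the frame {A, B}: both fusion orders give the same result.
m1 = {frozenset({"A"}): 0.6, frozenset({"A", "B"}): 0.4}
m2 = {frozenset({"B"}): 0.3, frozenset({"A", "B"}): 0.7}
print(dempster_combine(m1, m2))
print(dempster_combine(m2, m1))   # identical to the line above
```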
2.2. Random Permutation Set
The Random Permutation Set (RPS) presents a unique conceptual framework that incorporates permutations of elements, expanding upon traditional evidence theory. To aid in understanding this paper, this section will introduce key definitions associated with RPS.
Definition 4 (Permutation Event Space). Consider a predefined set of $N$ mutually exclusive and collectively exhaustive elements, denoted as $\Theta = \{\theta_1, \theta_2, \ldots, \theta_N\}$. The permutation event space (PES) of this set is defined as
$\mathrm{PES}(\Theta) = \{A_{ij} \mid i = 0, 1, \ldots, N;\ j = 1, 2, \ldots, P(N,i)\}.$
In this framework, $P(N,i)$ denotes the number of $i$-permutations of $N$, defined by the expression $P(N,i) = \frac{N!}{(N-i)!}$. The permutation event, represented by the element $A_{ij}$ within the PES, corresponds to a tuple $(\theta_{j_1}, \theta_{j_2}, \ldots, \theta_{j_i})$ that signifies a possible permutation of $i$ elements of $\Theta$. Here, the index $i$ indicates the cardinality of $A_{ij}$, while $j$ designates the specific permutation within that cardinality.
Definition 5 (Random Permutation Set). Given a defined set of $N$ mutually exclusive and exhaustive elements, denoted as $\Theta = \{\theta_1, \ldots, \theta_N\}$, a Random Permutation Set (RPS) is a collection of pairs
$\mathrm{RPS}(\Theta) = \{\langle A, \mathcal{M}(A)\rangle \mid A \in \mathrm{PES}(\Theta)\},$
where $\mathcal{M}$ is the permutation mass function defined below.

Definition 6 (Permutation Mass Function). A Permutation Mass Function (PMF) is a mapping defined on the space of permutation events,
$\mathcal{M}: \mathrm{PES}(\Theta) \rightarrow [0,1],$
and, similar to evidence theory and probability theory, it satisfies the analogous constraint conditions
$\mathcal{M}(\emptyset) = 0, \qquad \sum_{A \in \mathrm{PES}(\Theta)} \mathcal{M}(A) = 1.$

Definition 7 (Intersection of Permutation Events). For two events $A$ and $B$, both elements of the PES of the set $\Theta$, the Left Intersection (LI) and Right Intersection (RI) are defined as follows: $A \cap_{L} B$ is obtained by removing from $A$ the elements that do not appear in $B$, preserving their order as in $A$; $A \cap_{R} B$ is obtained by removing from $B$ the elements that do not appear in $A$, preserving their order as in $B$. Notably, the two intersections consist of the same elements and can differ only in their ordering.
Definition 8 (Left Orthogonal Sum of Permutation Mass Functions). For two RPSs defined on $\Theta$ with permutation mass functions $\mathcal{M}_1$ and $\mathcal{M}_2$, the left orthogonal sum, denoted by $\mathcal{M}_1 \overleftarrow{\oplus} \mathcal{M}_2$, is expressed as
$(\mathcal{M}_1 \overleftarrow{\oplus} \mathcal{M}_2)(A) = \frac{1}{1-\overleftarrow{K}} \sum_{B \cap_{L} C = A} \mathcal{M}_1(B)\, \mathcal{M}_2(C), \quad A \neq \emptyset,$
with $(\mathcal{M}_1 \overleftarrow{\oplus} \mathcal{M}_2)(\emptyset) = 0$, where $\cap_{L}$ represents the left intersection and
$\overleftarrow{K} = \sum_{B \cap_{L} C = \emptyset} \mathcal{M}_1(B)\, \mathcal{M}_2(C).$

Definition 9 (Right Orthogonal Sum of Permutation Mass Functions). For the same two RPSs, the right orthogonal sum, denoted by $\mathcal{M}_1 \overrightarrow{\oplus} \mathcal{M}_2$, is expressed as
$(\mathcal{M}_1 \overrightarrow{\oplus} \mathcal{M}_2)(A) = \frac{1}{1-\overrightarrow{K}} \sum_{B \cap_{R} C = A} \mathcal{M}_1(B)\, \mathcal{M}_2(C), \quad A \neq \emptyset,$
where $\cap_{R}$ represents the right intersection and
$\overrightarrow{K} = \sum_{B \cap_{R} C = \emptyset} \mathcal{M}_1(B)\, \mathcal{M}_2(C).$
Similar to $K$ in evidence theory, both $\overleftarrow{K}$ and $\overrightarrow{K}$ represent the degree of conflict between the two RPSs, with larger values indicating a higher level of conflict.
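As a rough illustration of Definitions 7 and 8 (a sketch of our own, with hypothetical mass values, rather than an implementation taken from [4]), the following Python code fuses two permutation mass functions with the left orthogonal sum; swapping the operands changes the result, in contrast to Dempster’s rule:

```python
from itertools import product

def left_intersection(a, b):
    """Keep the elements of tuple `a` that also occur in tuple `b`,
    preserving the order they have in `a` (left intersection, as stated above)."""
    return tuple(x for x in a if x in b)

def left_orthogonal_sum(m1, m2):
    """Left orthogonal sum of two permutation mass functions.

    `m1` and `m2` map permutation events (tuples) to masses; a minimal
    sketch of Definition 8 that assumes each PMF sums to 1.
    """
    fused, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = left_intersection(b, c)
        if inter:
            fused[inter] = fused.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc            # mass falling on the empty permutation event
    return {a: v / (1.0 - conflict) for a, v in fused.items()}

# Hypothetical PMFs on the PES of {A, B}: the two fusion orders generally differ,
# which is exactly the asymmetry exploited later in this paper.
m1 = {("A",): 0.5, ("A", "B"): 0.3, ("B", "A"): 0.2}
m2 = {("B",): 0.4, ("B", "A"): 0.6}
print(left_orthogonal_sum(m1, m2))
print(left_orthogonal_sum(m2, m1))   # different from the line above
```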
Definition 10 (Ordered Degree of Permutation Events). Given two permutation events $A$ and $B$ that contain the same elements, the ordered degree $\mathrm{OD}(A, B)$ quantifies how consistently the shared elements are ordered in the two events; it is defined in terms of the pseudo-deviation distance between the two permutations.
Definition 11 (Distance of RPS). Let $\mathcal{M}_1$ and $\mathcal{M}_2$ be two RPSs on the same event set $\Theta$. The distance between $\mathcal{M}_1$ and $\mathcal{M}_2$ is defined as
$d_{\mathrm{RPS}}(\mathcal{M}_1, \mathcal{M}_2) = \sqrt{\tfrac{1}{2}\, (\vec{\mathcal{M}}_1 - \vec{\mathcal{M}}_2)^{T}\, \mathbf{RD}\, (\vec{\mathcal{M}}_1 - \vec{\mathcal{M}}_2)},$
where $\vec{\mathcal{M}}_1$ and $\vec{\mathcal{M}}_2$ are two vectors in the permutation event space whose coordinates are the masses assigned to the permutation events. The matrix $\mathbf{RD}$ is a $|\mathrm{PES}(\Theta)| \times |\mathrm{PES}(\Theta)|$ matrix whose element for a pair of permutation events combines the overlap of their elements with their ordered degree; the explicit expression of its entries is given in [11].

2.3. Fisher Score
The Fisher Score is a similarity-based feature selection criterion rooted in Fisher’s discriminant analysis. It has the advantages of computational simplicity and of requiring no additional parameters. In this subsection, we provide its specific definition.
Definition 12 (Fisher Score). The Fisher Score is a supervised attribute selection algorithm [22]. The core of the algorithm is to evaluate the importance of an attribute for a classification task by comparing the between-class differences of the attribute means with the within-class variances. For the $k$-th attribute, its mathematical expression is
$F_k = \frac{\sum_{j=1}^{c} n_j\,(\mu_{k,j} - \mu_k)^2}{\sum_{j=1}^{c} n_j\,\sigma_{k,j}^2},$
where $\mu_k$ represents the overall mean value of the attribute, $\mu_{k,j}$ represents the mean value of the attribute for samples in class $j$, $\sigma_{k,j}^2$ represents the variance of the attribute for samples in class $j$, and $n_j$ indicates the number of samples in class $j$. The class-wise mean and variance are formulated as
$\mu_{k,j} = \frac{1}{n_j}\sum_{x \in X_j} x_k, \qquad \sigma_{k,j}^2 = \frac{1}{n_j}\sum_{x \in X_j} (x_k - \mu_{k,j})^2,$
where $X_j$ denotes the set of samples in class $j$ and $x_k$ is the value of the $k$-th attribute of sample $x$. The larger the Fisher Score, the stronger the attribute’s ability to distinguish the target categories; the smaller the Fisher Score, the weaker its discriminative power. From the point of view of the Fisher Score, a desirable attribute has the smallest possible within-class variance and the largest possible between-class separation.
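For reference, the following short Python sketch (our own illustration, not code from [22]; the toy data are hypothetical) computes the Fisher Score of every attribute exactly as written above:

```python
import numpy as np

def fisher_score(X, y):
    """Fisher Score of each attribute (column of X) for class labels y,
    following Definition 12: between-class scatter of the attribute means
    divided by the summed within-class variances."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    overall_mean = X.mean(axis=0)
    numerator = np.zeros(X.shape[1])
    denominator = np.zeros(X.shape[1])
    for cls in np.unique(y):
        Xc = X[y == cls]
        n_c = Xc.shape[0]
        numerator += n_c * (Xc.mean(axis=0) - overall_mean) ** 2
        denominator += n_c * Xc.var(axis=0)
    return numerator / denominator

# Toy example with two attributes: the first separates the classes, the second does not.
X = [[1.0, 5.0], [1.1, 4.9], [3.0, 5.1], [3.2, 5.0]]
y = [0, 0, 1, 1]
print(fisher_score(X, y))   # the first score is much larger than the second
```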
3. Decision Fusion Based on Fisher Score with Permutation Information
In this section, we will first discuss the two motivations behind the work presented in this paper. Based on these discussions, we propose our method, which leverages Fisher Scores to obtain fusion order information and performs the final fusion and decision-making.
3.1. The Motivation of This Study
The primary motivation for this study stems from two considerations. First, in real-world scenarios, different information sources exhibit varying sensitivities to specific classes or decisions [23,24,25]. Thus, an important question is how to incorporate this differential sensitivity into the final decision-making process. Second, unlike Dempster–Shafer Theory (DST), Random Permutation Set Theory (RPST) introduces an asymmetry in fusion, which, if leveraged effectively, could enhance decision outcomes.
In evidence theory, the fusion process typically assumes that all information sources contribute equally to the final decision. However, this assumption is often unrealistic, as the reliability and sensitivity of each information source can vary significantly depending on the context of the decision. For example, both hyperspectral images [26] and LiDAR data [27] are commonly used to identify remote sensing objects, but their effectiveness differs due to their unique operational characteristics. LiDAR, for instance, is particularly sensitive to height information [28,29], making it more accurate for distinguishing topographical features like “mountains” and “grasslands”. On the other hand, hyperspectral imaging, which captures subtle variations in color, is more effective for tasks such as assessing plant maturity, where color distinctions are critical. In the field of computer science, it is well recognized that information sources are not always equally reliable or relevant to decision-making. This understanding has spurred the development of various feature selection algorithms [30] that leverage statistical measures [31], information theory [32], sparse-learning-based methods [33], and similarity metrics [34]. While these algorithms primarily focus on feature selection rather than direct decision-making, they underscore a crucial insight: the importance of each information source varies depending on the decision context, and thus sources should be weighted accordingly. This observation leads to a central motivation for our study: how to integrate the varying levels of reliability and sensitivity of different information sources into the final decision-making process. The challenge lies in appropriately accounting for these differences to improve the quality and accuracy of the final decision. Put directly, more sensitive information sources should be placed in more influential positions.
The second key consideration is to use Random Permutation Set Theory (RPST) to reflect this non-equivalence in the contributions of different information sources to the final decision. From the discussion above, it is clear that different information sources vary in their influence on decision-making. For each information source, we can attempt to identify an algorithm that quantifies its ability to differentiate outcomes. However, one drawback of purely quantitative methods is that different algorithms or evaluation frameworks often yield varied quantification results. Moreover, using purely quantitative methods may lead to overfitting in our decision-making model. Quantitative approaches may inadvertently compromise robustness, so converting this information into a qualitative format can mitigate overfitting risks and reduce computational complexity. We choose RPST to encapsulate such qualitative information, enabling us to align the fusion order with the influence each information source has on decision-making. As suggested in [4], more reliable information sources should ideally be fused earlier in the process. When using the LOS, more trusted sources should appear on the left side, with fusion proceeding sequentially to the right according to their level of reliability. To better understand this asymmetric fusion property, we will illustrate the asymmetry of the classic fusion rules in RPST with an example. This example is also derived from Reference [4].
Example 1. Consider two RPSs defined within the same permutation event space, whose specific permutation mass functions are taken from [4]. If the classic LOS fusion rule is directly applied to them in different orders, the results will differ; the specific outcomes are presented in Table 1. As shown by this example, the fusion order has a significant impact on the RPS fusion results. Specifically, when using LOS, RPSs that are positioned closer to the left contribute more to the final fusion result.
3.2. Fusion and Decision-Making Based on Fisher Score and Random Permutation Set Theory
In this subsection, we outline how to utilize the Fisher Score and known Random Permutation Sets for the final decision-making process. The reason for using the Fisher Score as our algorithm to distinguish the sensitivity of different information sources to various decisions is that each RPS originates from a distinct feature. By applying a feature selection algorithm, we can obtain the relevant support levels, which then provide ordering information that facilitates the fusion and decision-making process. The specific steps are as follows:
Step 1. Calculate the Fisher Score for each feature with respect to each category; the basic formula is given in Definition 12. Specifically, for a given class $c$, we compute the Fisher Score of feature $k$ by treating the task as a binary problem of “class $c$” vs. “non-class $c$”, which can be expressed as
$F_k^{(c)} = \frac{n_c\,(\mu_{k,c} - \mu_k)^2 + n_{\bar{c}}\,(\mu_{k,\bar{c}} - \mu_k)^2}{n_c\,\sigma_{k,c}^2 + n_{\bar{c}}\,\sigma_{k,\bar{c}}^2},$
where the subscript $c$ refers to the samples of class $c$ and $\bar{c}$ to all remaining samples.
Although in this step we calculate the specific values of the Fisher Score using an algorithm, we do not directly utilize these values. Instead, we convert the values into ranking information based on their relative magnitudes. If average-weighted methods were employed, the weights would be highly influenced by the training set and feature selection algorithm. Different feature selection algorithms yield varying relative numerical results due to their differing theoretical foundations; however, their core approach remains the same—using the magnitude of the values to determine which features are superior for obtaining the final result. In our approach, we do not utilize the exact numerical values, but instead rely on their relative relationships to aid in our final decision-making process.
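A minimal Python sketch of Step 1 under this reading follows (our own illustration; the function name and return structure are ours). It binarizes the labels for each class and computes the two-class Fisher Score per feature; only the resulting ranking is carried forward to Step 2.

```python
import numpy as np

def fisher_score_per_class(X, y):
    """One-vs-rest Fisher Score of every feature for every class.

    Returns a dict {class_label: per-feature scores}; for each class the
    samples are split into the "class" and "non-class" groups of Step 1.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    overall_mean = X.mean(axis=0)
    scores = {}
    for cls in np.unique(y):
        num = np.zeros(X.shape[1])
        den = np.zeros(X.shape[1])
        for group in (X[y == cls], X[y != cls]):   # "class c" and "non-class c"
            n = group.shape[0]
            num += n * (group.mean(axis=0) - overall_mean) ** 2
            den += n * group.var(axis=0)
        scores[cls] = num / den
    return scores

# Only the ranking matters for the next step:
# np.argsort(-scores[cls]) lists the feature indices in descending score order.
```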
Step 2. For each class, sort the features in descending order of their Fisher Scores; repeat this process individually for each class. This step can be expressed as
$\sigma_c = \operatorname{argsort}_{\downarrow}\bigl(F_1^{(c)}, F_2^{(c)}, \ldots, F_m^{(c)}\bigr),$
where $m$ is the number of features and $\sigma_c$ is the resulting feature ordering for class $c$.
Step 3. Based on the sorting in the previous step, calculate the RPS fusion result for each class individually. Specifically, employ the left orthogonal sum (LOS, Definition 8) to fuse the original RPSs from left to right in descending order of their Fisher Scores. This process yields an RPS optimized for identifying each class more effectively:
$\mathcal{M}^{(c)} = \mathcal{M}_{\sigma_c(1)} \overleftarrow{\oplus} \mathcal{M}_{\sigma_c(2)} \overleftarrow{\oplus} \cdots \overleftarrow{\oplus} \mathcal{M}_{\sigma_c(m)},$
where $\sigma_c$ is the feature index ordering corresponding to the descending Fisher Scores of class $c$. The RPS obtained in this step is theoretically the one best able to recognize its category. At this juncture, if there are originally $n$ categories, there will be $n$ fused RPSs; however, direct decision-making remains infeasible at this stage.
Step 4. Calculate the distance between each fused RPS and the ideal RPS that assigns a belief value of 1 to the respective category; the distance is computed as in Definition 11.
The intuition behind this step is that the fused RPS obtained for a class is the one that best distinguishes that class. Consequently, the distance between this fused RPS and the ideal RPS of that class most accurately indicates whether the sample belongs to the class. For example, if the fused RPS is the one that most effectively distinguishes Class A, then the smaller its distance to the RPS whose belief is fully assigned to Class A, the more likely the sample is to be Class A.
Step 5. By comparing the distances calculated for each category, select the class corresponding to the smallest distance as the final decision result:
$c^{*} = \arg\min_{c}\; d_{\mathrm{RPS}}\bigl(\mathcal{M}^{(c)}, \mathcal{M}^{(c)}_{\mathrm{ideal}}\bigr),$
where $c^{*}$ is the category that yields the smallest distance and $\mathcal{M}^{(c)}_{\mathrm{ideal}}$ is the RPS that assigns a belief of 1 to category $c$. Through the aforementioned steps, we have utilized the Fisher Score and the known RPSs to make the final decision.
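The following Python sketch pulls Steps 2–5 together (our own illustration; all function names are ours, and the distance used here is a plain Euclidean placeholder rather than the RPS distance of Definition 11, simply to keep the sketch self-contained):

```python
import numpy as np
from itertools import product

def left_intersection(a, b):
    return tuple(x for x in a if x in b)

def los(m1, m2):
    """Left orthogonal sum of two PMFs (see the sketch after Definition 9)."""
    fused, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = left_intersection(b, c)
        if inter:
            fused[inter] = fused.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc
    return {a: v / (1.0 - conflict) for a, v in fused.items()}

def pmf_distance(m1, m2, events):
    """Placeholder distance between two PMFs over a fixed event list.

    NOTE: a plain Euclidean distance between mass vectors, standing in for
    the RPS distance of Definition 11, which also weights event pairs."""
    v1 = np.array([m1.get(e, 0.0) for e in events])
    v2 = np.array([m2.get(e, 0.0) for e in events])
    return float(np.linalg.norm(v1 - v2))

def decide(rps_per_feature, fisher_scores, classes, events):
    """Steps 2-5: rank features per class, fuse with LOS, pick the nearest ideal RPS.

    rps_per_feature: list of PMFs (dict: permutation-event tuple -> mass), one per feature.
    fisher_scores:   dict {class: per-feature scores}, as produced in Step 1.
    classes:         class labels; the ideal RPS of class c puts all mass on (c,).
    events:          list of permutation events used by the placeholder distance.
    """
    best_class, best_dist = None, np.inf
    for c in classes:
        order = np.argsort(-np.asarray(fisher_scores[c]))    # Step 2: descending scores
        fused = rps_per_feature[order[0]]
        for idx in order[1:]:                                 # Step 3: left-to-right LOS
            fused = los(fused, rps_per_feature[idx])
        dist = pmf_distance(fused, {(c,): 1.0}, events)       # Step 4: distance to ideal RPS
        if dist < best_dist:                                  # Step 5: smallest distance wins
            best_class, best_dist = c, dist
    return best_class
```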
4. Experiment
In this section, we will employ the decision-making method mentioned above to complete the classification task. Specifically, we use the Iris dataset for the experiment. Under the condition that the sources of the initial RPS are completely identical, we compare our method with the results obtained by directly using the LOS method.
The Iris dataset is one of the most famous multivariate datasets in machine learning and statistics. It was introduced by British statistician and biologist Ronald Fisher in his 1936 paper and is commonly used as an example dataset for statistical learning methods. This dataset contains 150 samples, each from one of three different species of iris: Setosa, Versicolor, or Virginica. Four features were recorded for each sample: sepal length, sepal width, petal length, and petal width, which are continuous variables in centimeters. In addition, each sample has a corresponding category label, which is the type of flower. The purpose of using this dataset is to be able to easily present the implementation process of our method to the reader and demonstrate the effectiveness of our method. The details of the experiment are as follows.
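For readers who wish to reproduce this setting, the dataset can be loaded, for example, with scikit-learn (our suggestion; the paper does not prescribe a particular library):

```python
from sklearn.datasets import load_iris

iris = load_iris()
X, y = iris.data, iris.target     # 150 samples, 4 features, 3 classes
print(iris.feature_names)         # sepal/petal length and width, in cm
print(iris.target_names)          # ['setosa' 'versicolor' 'virginica']
```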
4.1. Generation of RPS
In this paper, the generation of RPS is not the main focus. However, to ensure the completeness of the experiment, we provide the specific steps for generating RPS here. The detailed content of the algorithm can be referenced in [18]; we will only present the corresponding calculation steps.
Step 1. Begin by calculating the mean and variance of each attribute within each category,
$m_{ij} = \frac{1}{N_i}\sum_{k=1}^{N_i} x^{(i)}_{k,j}, \qquad \sigma_{ij}^{2} = \frac{1}{N_i}\sum_{k=1}^{N_i}\bigl(x^{(i)}_{k,j} - m_{ij}\bigr)^{2},$
where $N_i$ indicates the total number of training samples in the $i$-th category, $x^{(i)}_{k}$ denotes the $k$-th sample of that category, and $x^{(i)}_{k,j}$ is its $j$-th attribute value. These are the class-wise mean and variance formulas of Definition 12.
Step 2. Construct a Gaussian discriminant model (GDM) for each attribute associated with its respective category, using the estimated parameters from Step 1:
$G_{ij}(x) = \frac{1}{\sqrt{2\pi}\,\sigma_{ij}} \exp\!\left(-\frac{(x - m_{ij})^{2}}{2\sigma_{ij}^{2}}\right).$
Step 3. Develop the membership vector for each attribute, where $G_{ij}$ refers to the Gaussian distribution model of the $j$-th attribute for the $i$-th class:
$\mathbf{MV}_j = \bigl(G_{1j}, G_{2j}, \ldots, G_{nj}\bigr).$
Step 4. Calculate the degrees of membership and normalize the membership vector. Here, $x_j$ represents the $j$-th attribute value of the test sample $x$:
$\mu_{ij} = G_{ij}(x_j), \qquad \widetilde{\mu}_{ij} = \frac{\mu_{ij}}{\sum_{u=1}^{n}\mu_{uj}}.$
The normalized membership vector (NMV) can be expressed as follows:
$\mathbf{NMV}_j = \bigl(\widetilde{\mu}_{1j}, \widetilde{\mu}_{2j}, \ldots, \widetilde{\mu}_{nj}\bigr).$
Step 5. Rank the elements of the $j$-th normalized membership vector in descending order to create the ordered normalized membership vector (ONMV):
$\mathbf{ONMV}_j = \bigl(\widetilde{\mu}_{(1)j}, \widetilde{\mu}_{(2)j}, \ldots, \widetilde{\mu}_{(n)j}\bigr), \qquad \widetilde{\mu}_{(1)j} \ge \widetilde{\mu}_{(2)j} \ge \cdots \ge \widetilde{\mu}_{(n)j}.$
Step 6. Compute the support degree of the $u$-th GDM for the $j$-th attribute; the specific formula is given in [18].
Step 7. Derive the weight vector $\mathbf{w}_j$ for the ordered elements of the $j$-th attribute. In this weight vector, a weight factor reflects the relative significance of the permutation events; the specific expressions for the weight vector and the weight factor are given in [18].
Step 8. Calculate the weighted PMF $\mathcal{M}_j$ based on the weight vector $\mathbf{w}_j$ and the ordered normalized membership vector $\mathbf{ONMV}_j$, following [18]. This concludes the generation of the weighted RPS based on the $j$-th weighted PMF,
$\mathrm{RPS}_j = \{\langle A, \mathcal{M}_j(A)\rangle \mid A \in \mathrm{PES}(\Theta)\}.$
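The Python sketch below is our own simplified illustration of this generation pipeline, not the algorithm of [18]: it implements Steps 1–5 directly and replaces the support-degree and weight computations of Steps 6–8 with a simple consonant-style mass assignment over the nested permutation events implied by the membership ordering. All names and the example data are ours.

```python
import numpy as np

def fit_gdm(X_train, y_train):
    """Steps 1-2: per-class, per-attribute Gaussian models (mean, std)."""
    models = {}
    for cls in np.unique(y_train):
        Xc = X_train[y_train == cls]
        models[cls] = (Xc.mean(axis=0), Xc.std(axis=0) + 1e-12)   # guard against zero std
    return models

def attribute_rps(models, x, j, classes):
    """Steps 3-5 for attribute j of test sample x, plus a simplified mass assignment.

    Returns a PMF over the nested permutation events ((c1,), (c1, c2), ...)
    given by the descending membership order; the masses are membership gaps
    (a consonant-style simplification, not the weighting of [18])."""
    mean_std = [(models[c][0][j], models[c][1][j]) for c in classes]
    mu = np.array([np.exp(-(x[j] - m) ** 2 / (2 * s ** 2)) / (np.sqrt(2 * np.pi) * s)
                   for m, s in mean_std])
    nmv = mu / mu.sum()                          # Step 4: normalized membership vector
    order = np.argsort(-nmv)                     # Step 5: descending order (ONMV)
    poss = nmv[order] / nmv[order][0]            # rescale so the largest value is 1
    ordered_classes = [classes[i] for i in order]
    pmf = {}
    for i in range(len(order)):                  # nested events (c1,), (c1, c2), ...
        event = tuple(ordered_classes[: i + 1])
        tail = poss[i + 1] if i + 1 < len(order) else 0.0
        pmf[event] = float(poss[i] - tail)       # gaps telescope, so the masses sum to 1
    return pmf

# Hypothetical data: two classes, two attributes.
X = np.array([[1.0, 5.0], [1.2, 5.1], [3.0, 4.0], [3.1, 4.2]])
y = np.array([0, 0, 1, 1])
models = fit_gdm(X, y)
print(attribute_rps(models, np.array([1.1, 5.0]), 0, [0, 1]))
```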
4.2. Specific Decision-Making Process
In the previous subsection, we presented the specific method for generating RPSs, and in Section 3.2 we provided the detailed decision-making steps. To clarify the experimental process, we will illustrate with examples and present the numerical results for each step, facilitating further understanding for readers.
Step 1. Determine the Fisher Score of each feature concerning every category, using the class-specific formula of Step 1 in Section 3.2. For the Iris dataset, the specific calculation results are shown in Table 2. As shown in Table 2, there is a significant disparity between the calculated Fisher Scores. In this case, using a weighted approach to incorporate this sensitivity information may lead to some information sources being assigned excessively high weights, while others receive disproportionately low weights. The final decision can then become overly dependent on certain information sources, while other sources are not effectively utilized. Such an imbalance in weighting can excessively amplify the importance of a single information source, and relying mainly on that source often causes the model to lose the advantages of multi-source decision-making, leading to overfitting to certain distributions.
Step 2. Sort the features in descending order of their Fisher Scores for each class. Repeat for all classes. The computed results can be sorted as shown in Table 3.
Step 3. Calculate the RPS fused in order; the relevant formulas are those of Definition 8 and Step 3 in Section 3.2. To better understand the calculation process intuitively, we randomly selected RPSs generated during the experiment: several specific RPSs are shown in Table 4, and our calculated fusion results are shown in Table 5. In Table 5, each fused RPS is obtained by sequentially applying the LOS, from left to right, to the original RPSs in the feature order determined in Step 2 for the corresponding category; two of the fused RPSs are obtained with the same feature ordering, while the third uses a different ordering.
Step 4. Calculate the distance between each fused RPS and the RPS that assigns a belief value of 1 to the respective category. Since the three fused RPSs are sensitive to category 1, category 2, and category 3, respectively, it is sufficient to calculate the distances three times. The specific calculation results are shown in Table 6.
Step 5. Determine the class corresponding to the minimum distance as the final decision result. Based on Table 6, the minimum distance corresponds to the class Virginica. Thus, we ultimately conclude that this set of RPSs is classified as Virginica, which matches the original sample label.
4.3. Experimental Results and Analysis
In this subsection, we present the specific experimental results. This experiment used the same dataset, and under the condition of generating identical RPS, we applied different decision-making methods to obtain the final classification outcomes.
In previous literature, the weighted RPS was commonly obtained by calculating the distances between RPSs and applying weights to achieve a fused result. Studies on such methods can be referenced in papers [11,14]. Our approach, however, emphasizes leveraging the asymmetrical properties of RPS fusion to integrate additional information more effectively. Under the condition of identical RPS generation, we obtained the experimental results shown in Figure 1 and Table 7. In these figures and tables, “Distance” and “Divergence” correspond to the methods discussed in papers [11] and [14], respectively.
By comparing the experimental results, it can be observed that our method outperforms the other two approaches on the Iris dataset, regardless of whether the training set proportion is small or large. This is because our approach fully exploits the asymmetric nature of RPS fusion, leveraging additional information to aid in the final decision. In contrast, the other two methods both apply weighting to the RPS to obtain a single RPS, which is then fused with itself. Since there is only one RPS, these methods lack the asymmetry inherent to fusion. Although such methods can partially address certain issues, such as conflicts within the RPS sources, their weighting approach also results in some loss of information. Moreover, for methods that enumerate all possible fusion scenarios, the computational cost is prohibitive. For the Iris dataset, there are 24 possible fusion outcomes, which is not only inefficient but also diminishes the interpretability of the model.
Another demonstration that using permutations to represent more detailed information can yield better decision results is the comparison between the experimental results of evidence theory and RPST. Since BPA and the permutation quality function are generated differently, direct comparison is inappropriate. However, as seen in paper [35], both use a similar Gaussian distribution model to obtain either a permutation quality function or a basic probability assignment function. The highest accuracy achieved using evidence theory is only 95.5%, whereas using RPST, accuracy can exceed 96%. This highlights the advantage of using permutation information for decision-making.
In summary, experimental evidence confirms that our method of utilizing additional information transformed into permutation information for decision-making is effective. It not only fully leverages the asymmetric fusion properties of the random permutation set but also enhances the interpretability of the decision model. Compared to the most commonly used method of weighting based on RPS differences, our approach retains more information. Furthermore, the experimental results demonstrate a clear advantage across all training set proportions, proving the effectiveness of our method for real-world decision-making.
5. Conclusions
In this paper, we proposed a method for converting additional information into ranking information for multi-source information fusion and decision-making under the framework of RPS theory. Specifically, we focused on the inequivalence of different information sources in decision-making and the asymmetric nature of fusion within RPS. By utilizing the Fisher Score, we calculated the sensitivity of RPS and decisions from various information sources. The final RPS fusion was sorted based on the computed Fisher Scores, followed by calculating the distances between RPS to facilitate the final decision.
Our experimental comparisons demonstrated that our approach offers higher interpretability compared to traditional methods that compute the differences between RPS and apply weighted averages for decision-making. To our knowledge, there has been no prior research addressing the fusion order of RPS, and our study contributes to filling this gap, providing insights for future information fusion and decision-making processes. Given the inherent imprecision and fuzziness of the real world, leveraging multi-source information fusion for decision-making holds significant practical value. Our research, grounded in RPST as a generalization of evidence theory, showcases wide applicability.
However, while our experiments validate the effectiveness of our method, the challenge of converting additional information into ranking information for decision-making remains an open question. Additionally, our method is contingent upon knowing the original distribution of information sources. The generation of RPS remains an area worthy of further exploration. When the generation of RPS is independent of data features, it becomes infeasible to use the Fisher Score to assess the sensitivity of different RPSs to various classes.
In future work, we will continue to explore more effective ways to utilize information for decision-making and aim to develop broader methods to assist multi-source information decision-making in uncertain conditions.
Author Contributions
Conceptualization, M.L., L.L. and Q.Z.; Methodology, M.L.; Formal analysis, M.L. and L.L.; Writing—original draft, M.L. and L.L.; Writing—review & editing, Q.Z.; Visualization, L.L.; Supervision, Q.Z.; Funding acquisition, M.L. and Q.Z. All authors have read and agreed to the published version of the manuscript.
Funding
This work is supported by the National Natural Science Foundation of China (Grant No. 62303198), the Research Initiation Fund for Senior Talents of Jiangsu University (No. 23JDG010) and the Scientific Research Funding of Jiangsu University of Science and Technology (No. 1052932204).
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Acknowledgments
The authors greatly appreciate the feedback from the Editor and the Reviewers.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Yue, Z. Extension of TOPSIS to determine weight of decision maker for group decision making problems with uncertain information. Expert Syst. Appl. 2012, 39, 6343–6350. [Google Scholar] [CrossRef]
- Dempster, A.P. Upper and lower probabilities induced by a multivalued mapping. In Classic Works of the Dempster-Shafer Theory of Belief Functions; Springer: Berlin/Heidelberg, Germany, 2008; pp. 57–72. [Google Scholar]
- Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976; Volume 42. [Google Scholar]
- Deng, Y. Random permutation set. Int. J. Comput. Commun. Control 2022, 17. [Google Scholar] [CrossRef]
- Chen, H.; He, W.; Zhou, G.; Cui, Y.; Gao, M.; Qian, J.; Liang, M. A novel game-based belief rule base. Expert Syst. Appl. 2024, 254, 124374. [Google Scholar] [CrossRef]
- Deng, X.; Li, X.; Jiang, W. Evidence representation of uncertain information on a frame of discernment with semantic association. Inf. Fusion 2024, 111, 102538. [Google Scholar] [CrossRef]
- He, Z.; Jiang, W. An evidential dynamical model to predict the interference effect of categorization on decision making results. Knowl.-Based Syst. 2018, 150, 139–149. [Google Scholar] [CrossRef]
- Chemweno, P.; Pintelon, L.; Muchiri, P.N.; Van Horenbeek, A. Risk assessment methodologies in maintenance decision making: A review of dependability modelling approaches. Reliab. Eng. Syst. Saf. 2018, 173, 64–77. [Google Scholar] [CrossRef]
- Lin, Y.; Li, Y.; Yin, X.; Dou, Z. Multisensor fault diagnosis modeling based on the evidence theory. IEEE Trans. Reliab. 2018, 67, 513–521. [Google Scholar] [CrossRef]
- Zhou, J.; Li, Z.; Cheong, K.H.; Deng, Y. Limit of the maximum random permutation set entropy. arXiv 2024, arXiv:2403.06206. [Google Scholar]
- Chen, L.; Deng, Y.; Cheong, K.H. The distance of random permutation set. Inf. Sci. 2023, 628, 226–239. [Google Scholar] [CrossRef]
- Chen, L.; Deng, Y. Entropy of random permutation set. Commun. Stat.-Theory Methods 2024, 53, 4127–4146. [Google Scholar] [CrossRef]
- Deng, J.; Deng, Y. Maximum entropy of random permutation set. Soft Comput. 2022, 26, 11265–11275. [Google Scholar] [CrossRef]
- Chen, L.; Deng, Y.; Cheong, K.H. Permutation Jensen–Shannon divergence for random permutation set. Eng. Appl. Artif. Intell. 2023, 119, 105701. [Google Scholar] [CrossRef]
- Chen, Z.; Cai, R. Symmetric Renyi-Permutation divergence and conflict management for random permutation set. Expert Syst. Appl. 2024, 238, 121784. [Google Scholar] [CrossRef]
- Yang, W.; Deng, Y. Matrix operations in random permutation set. Inf. Sci. 2023, 647, 119419. [Google Scholar] [CrossRef]
- Zhou, Q.; Cui, Y.; Li, Z.; Deng, Y. Marginalization in random permutation set theory: From the cooperative game perspective. Nonlinear Dyn. 2023, 111, 13125–13141. [Google Scholar] [CrossRef]
- Deng, J.; Deng, Y.; Yang, J.B. Random permutation set reasoning. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 10246–10258. [Google Scholar] [CrossRef]
- Zhao, T.; Li, Z.; Deng, Y. Linearity in Deng entropy. Chaos Solitons Fractals 2024, 178, 114388. [Google Scholar] [CrossRef]
- Li, M.; Li, L.; Zhang, Q. A new distance measure between two basic probability assignments based on penalty coefficient. Inf. Sci. 2024, 677, 120883. [Google Scholar] [CrossRef]
- Fan, W.; Xiao, F. A complex Jensen–Shannon divergence in complex evidence theory with its application in multi-source information fusion. Eng. Appl. Artif. Intell. 2022, 116, 105362. [Google Scholar] [CrossRef]
- Gu, Q.; Li, Z.; Han, J. Generalized fisher score for feature selection. arXiv 2012, arXiv:1202.3725. [Google Scholar]
- Jackson, C.; Presanis, A.; Conti, S.; De Angelis, D. Value of information: Sensitivity analysis and research design in Bayesian evidence synthesis. J. Am. Stat. Assoc. 2019, 114, 1436–1449. [Google Scholar] [CrossRef]
- Zhang, Q.; Li, M. A betweenness structural entropy of complex networks. Chaos Solitons Fractals 2022, 161, 112264. [Google Scholar] [CrossRef]
- Zhang, Q.; Garlaschelli, D. Strong ensemble nonequivalence in systems with local constraints. New J. Phys. 2022, 24, 043011. [Google Scholar] [CrossRef]
- Li, X.; Ding, M.; Pižurica, A. Deep feature fusion via two-stream convolutional neural network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2019, 58, 2615–2629. [Google Scholar] [CrossRef]
- Higuti, V.A.; Velasquez, A.E.; Magalhaes, D.V.; Becker, M.; Chowdhary, G. Under canopy light detection and ranging-based autonomous navigation. J. Field Robot. 2019, 36, 547–567. [Google Scholar] [CrossRef]
- Huang, S.; Zeng, H.; Chen, H.; Zhang, H. Spatial and Cluster Structural Prior Guided Subspace Clustering for Hyperspectral Image. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5511115. [Google Scholar] [CrossRef]
- Li, M.; Huang, S.; De Bock, J.; De Cooman, G.; Pižurica, A. A robust dynamic classifier selection approach for hyperspectral images with imprecise label information. Sensors 2020, 20, 5262. [Google Scholar] [CrossRef]
- Venkatesh, B.; Anuradha, J. A review of feature selection and its methods. Cybern. Inf. Technol. 2019, 19, 3–26. [Google Scholar] [CrossRef]
- Liu, H.; Setiono, R. Chi2: Feature selection and discretization of numeric attributes. In Proceedings of the 7th IEEE International Conference on Tools with Artificial Intelligence, Herndon, VA, USA, 5–8 November 1995; pp. 388–391. [Google Scholar]
- Brown, G.; Pocock, A.; Zhao, M.J.; Luján, M. Conditional likelihood maximisation: A unifying framework for information theoretic feature selection. J. Mach. Learn. Res. 2012, 13, 27–66. [Google Scholar]
- Hara, S.; Maehara, T. Enumerate lasso solutions for feature selection. Proc. AAAI Conf. Artif. Intell. 2017, 31. [Google Scholar] [CrossRef]
- Zhao, Z.; Liu, H. Spectral feature selection for supervised and unsupervised learning. In Proceedings of the 24th International Conference on Machine Learning, Corvalis, OR, USA, 20–24 June 2007; pp. 1151–1157. [Google Scholar]
- Xu, P.; Deng, Y.; Su, X.; Mahadevan, S. A new method to determine basic probability assignment from training data. Knowl.-Based Syst. 2013, 46, 69–80. [Google Scholar] [CrossRef]
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).