4. The Use of Generalized Distance, Divergence, and Similarity Parameters in Fuzzy Measure Identification Problems for Certain Classes of Fuzzy Measures
Let us briefly review fuzzy measure identification problems and results that are widely used in interactive MADM models.
In [41], the practical applicability of two probability representations of a finite fuzzy measure—the Campos–Bolaños representation (CBR), equivalent to the APC of a fuzzy measure, and the Murofushi–Sugeno representation (MSR)—is examined within the context of multi-criteria decision-making (MCDM) models. This work constructs a new MSR-type representation-interpreter specifically for a particular class of finite fuzzy measures. In [41], a universal interpreter for a capacity (fuzzy measure) in the probability MSR, under the Choquet integral framework and second-order dual capacities, is also explored. The next research focus is the analysis of fuzzy measure non-additivity indexes, which are relevant for interactive MCDM models in which attribute interactions are observed; the non-additivity index effectively measures the degree of interaction among attributes. In [42], this index is employed to evaluate the decision-maker's degree of preference among alternatives. Ref. [43] introduces the use of the non-additivity index to replace the Shapley simultaneous interaction index and to develop an updated MADM decision scheme; a method for calculating the non-additivity index and a decision-support algorithm that establishes dominance relationships for the optimal ranking of alternatives are also presented. Key properties of the non-additivity index are considered in [44], along with a capacity identification algorithm based on this index; the algorithm uses linear constraints to reflect the decision-maker's preferences over alternatives and formulates a linear programming problem to determine the optimal capacity. A capacity identification simulation algorithm based on the non-additivity index is developed in [45]. Another research direction involves the additivity defectiveness of capacities, as discussed in [46]; this paper introduces the concept of capacity defectiveness, representing the degree of capacity non-additivity, calculates or approximates the defectiveness coefficient for certain capacity classes, and develops an optimal approximation approach for fuzzy integrals that replaces fuzzy measures with classical measures. The identification of fuzzy measures, including interaction indexes and importance values (Shapley values), is further examined in [47], where various fuzzy measure representations, such as Möbius transformations and k-order additive measures, are considered. Ref. [48] shows that every discrete fuzzy measure can be represented as a k-order additive fuzzy measure and presents alternative representation methods using interaction indexes and Shapley values. A learning algorithm for identifying k-maxitive measures based on heuristic least mean squares is given in [49]. Ref. [50] explores the structure and properties of a specific type of fuzzy measure applicable to interactive MADM models, utilizing interaction coefficients, the Möbius representation, and dual fuzzy measures. Ref. [49] also introduces the generalized interaction index, or g-index, which requires significant computational resources, and presents algorithms for calculating the g-index for k-maxitive measures. Ref. [51] provides a new visualization scheme for understanding fuzzy measures, while Ref. [52] examines a joint Choquet integral-fuzzy measure operator that exploits attribute interactions. Ref. [53] utilizes a hesitant fuzzy linguistic term set to describe attribute interactivity in fuzzy measure identification. Finally, Ref. [47] reviews current approaches to fuzzy measure identification together with their advantages and limitations, and Ref. [54] discusses fuzzy measure representations for learning Choquet and Sugeno integrals.
In this section, we present a completely new approach to fuzzy measure identification problems. Conditional optimization problems will be constructed in which the similarity, distance, and divergence parameters defined in the previous sections for two fuzzy measures serve as objective functions, while the requirements and data of the interactive MADM problem enter the constraints. The APC of the fuzzy measure to be identified is treated as the set of unknown optimization variables, and the identification is carried out for specific classes of fuzzy measures.
As a first example, let us consider the class of two-additive fuzzy measures. Suppose we are given an interactive MADM model [55,56] with a set of possible alternatives and a set of attributes. Suppose that in this interactive MADM model the attribute importance values and the pairwise interactions, given in the form of a symmetric matrix, are known [57,58,59,60,61,62]. As shown in [26], if the fuzzy measure is two-additive, then its associated probabilities are calculated by Formula (76), in which the second addend vanishes when the attribute occupies the first position of the permutation and the third addend vanishes when it occupies the last position. Assume that the decision-making matrix contains the alternatives' ratings, taking values in [0, 1] (see Table 1).
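To make the construction of Formula (76) easier to reproduce, the following minimal Python sketch computes associated probabilities of a two-additive fuzzy measure from Shapley importance values and a symmetric interaction matrix. It assumes the standard two-additive representation μ(A) = Σ_{i∈A} φ_i − ½ Σ_{i∈A, j∉A} I_{ij}, which is consistent with the behaviour of the addends described above but is not copied from the paper's formula; the numerical data and the names `shapley` and `interaction` are illustrative, not those of Table 3.

```python
import numpy as np
from itertools import permutations

def assoc_prob_2additive(shapley, interaction, sigma):
    """Associated probability P_sigma of a two-additive fuzzy measure.

    Assumes mu(A) = sum_{i in A} phi_i - 0.5 * sum_{i in A, j not in A} I_ij,
    so successive differences along the chain induced by sigma give
    P_sigma(x_sigma(j)) = phi_sigma(j)
                          + 0.5 * sum_{i<j} I_{sigma(i),sigma(j)}
                          - 0.5 * sum_{i>j} I_{sigma(i),sigma(j)}.
    """
    n = len(sigma)
    p = np.empty(n)
    for j, attr in enumerate(sigma):
        before = sum(interaction[attr][sigma[i]] for i in range(j))        # vanishes at the first position
        after = sum(interaction[attr][sigma[i]] for i in range(j + 1, n))  # vanishes at the last position
        p[j] = shapley[attr] + 0.5 * before - 0.5 * after
    return p  # ordered along sigma: p[j] is the probability of attribute sigma[j]

# Illustrative data (not the paper's Table 3): three attributes.
phi = np.array([0.5, 0.3, 0.2])            # Shapley importance values, summing to 1
I = np.array([[0.00, 0.10, -0.05],
              [0.10, 0.00, 0.02],
              [-0.05, 0.02, 0.00]])        # symmetric pairwise interaction indexes

apc = {sigma: assoc_prob_2additive(phi, I, sigma) for sigma in permutations(range(3))}
```

Each vector in `apc` sums to one by construction and, for mildly interacting attributes, stays non-negative, as an associated probability should.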
Suppose that the uncertainty index of the expert evaluations over the attributes is represented by a fuzzy measure. If there is a dependence between the attributes in the form of interactions, it is entirely acceptable to use the finite Choquet integral as the aggregation tool [20]. Then, to obtain a scalar evaluation for ranking each alternative and to aggregate its ratings into a final decision, we use the value of the Choquet integral with respect to this fuzzy measure [20]. We highlight the following publications by the authors of this study on the use of associated probabilities in extensions of the Choquet and Sugeno integral operators [63,64]. This aggregation value is also known in fuzzy statistics as the monotone expectation [23]: the Choquet integral of an alternative's ratings equals their expectation under the associated probability corresponding to a permutation that orders the ratings in non-increasing order.
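As a companion sketch, the Choquet integral of an alternative's ratings can be computed directly from the APC as the monotone expectation: sort the ratings in non-increasing order and take the expectation under the associated probability of that permutation. The function below assumes the APC dictionary produced in the previous sketch; the variable names are ours, not the paper's.

```python
import numpy as np

def choquet_integral(alt_ratings, apc):
    """Choquet integral (monotone expectation) of a ratings vector with respect to a
    fuzzy measure represented by its associated probability class `apc`
    (dict: permutation tuple -> probability vector ordered along that permutation)."""
    alt_ratings = np.asarray(alt_ratings, dtype=float)
    sigma = tuple(int(i) for i in np.argsort(-alt_ratings))  # ratings sorted in non-increasing order
    p = apc[sigma]                                           # associated probability of that permutation
    return float(sum(alt_ratings[attr] * p[j] for j, attr in enumerate(sigma)))

# Example, reusing the `apc` dictionary built in the previous sketch:
value = choquet_integral([0.7, 0.4, 0.9], apc)
```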
Now let us consider the concept of the best approximation of a particular fuzzy measure by a fuzzy measure from a given class. The idea is that the approximation is achieved by the fuzzy measure from the given subclass whose distance or divergence to the particular fuzzy measure is smallest, or whose similarity to it is greatest. In this way, we determine, within the given class, the fuzzy measure that best approximates the particular one. Consider Definitions 16–19, which cover the different cases of approximation.
Definition 16. A fixed two-additive fuzzy measure is called the best approximation of the given fuzzy measure with respect to the similarity relation for the chosen distance generator if it maximizes that similarity over the class of two-additive fuzzy measures. In other words, the best approximation is the two-additive fuzzy measure most similar to the given fuzzy measure.
Analogous definitions can be given with respect to the distance and the divergence between fuzzy measures.
Definition 17. A fixed two-additive fuzzy measure is called the best approximation of the given fuzzy measure with respect to the distance relation for the chosen distance generator if it minimizes that distance over the class of two-additive fuzzy measures. Note that the distance in Definition 17 can be replaced by any of the other distances defined or generalized above.
Definition 18. A fixed two-additive fuzzy measure is called a best approximation of the given fuzzy measure with respect to the divergence for the chosen distance generator if it minimizes that divergence over the class of two-additive fuzzy measures. Note again that the divergence in Definition 18 can be replaced by any of the other divergences defined or generalized above.
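Schematically, writing $\mu^{*}$ for the approximating measure, $\nu$ for the given fuzzy measure, $\mathcal{M}_{2}$ for the class of two-additive fuzzy measures, and $S$, $D$, $\mathrm{Div}$ for the similarity, distance, and divergence parameters induced on the associated probability classes by a distance generator (generic symbols introduced here only for readability), Definitions 16–18 can be summarized as
$$\mu^{*}=\arg\max_{\mu\in\mathcal{M}_{2}}S(\mu,\nu),\qquad \mu^{*}=\arg\min_{\mu\in\mathcal{M}_{2}}D(\mu,\nu),\qquad \mu^{*}=\arg\min_{\mu\in\mathcal{M}_{2}}\mathrm{Div}(\mu,\nu).$$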
Note that the approximation domain in Definitions 16, 17, and 18 can be replaced by other practically important subclasses of fuzzy measures presented here: Sugeno λ-additive fuzzy measures, Choquet second-order capacities, possibility measures, fuzzy measures associated with a body of evidence, and other classes. Naturally, only those subclasses are considered for which associated probability classes have been derived whose representations involve only the basic data of the class; for example, Formula (31) for possibility measures, Formula (17) for the Sugeno λ-additive measure, and Formula (37) for the measure associated with a body of evidence. We therefore introduce analogous definitions for these classes of fuzzy measures.
Definition 19. A fixed Sugeno λ-additive fuzzy measure is called the best approximation of the given fuzzy measure with respect to the distance relation (or the divergence relation) if, for the chosen distance generator, it minimizes that distance (divergence) over the class of Sugeno λ-additive fuzzy measures.
Let us now introduce a completely new type of fuzzy measure identification problem in the MADM environment, based on the definitions given here. In general, the idea is as follows: a scalar conditional optimization problem is constructed whose objective function is one of the approximation criteria above. The constraints are built from the data and requirements of the particular MADM problem, and the unknown optimization variables are the associated probabilities of the APC to be identified. Specifically, Definitions 16–18 allow us, in the MADM environment, to construct the model's uncertainty index: a fuzzy measure that is the best approximation from the given class of fuzzy measures, while the other MADM data are taken into account as constraints. For example, from a practical point of view, experts may provide approximate interval evaluations of the monotone expectation for some alternatives (confidence intervals and the like). Consider a particular problem with respect to the DSI index for the class under consideration. To solve it, we formulate the conditional optimization problem (83), in which the associated probability class of the fuzzy measure to be identified is unknown, the monotone expected values of the alternatives are constrained to lie in the intervals provided by the experts, and, for each permutation, the corresponding associated probabilities are required to form a probability distribution.
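The following sketch illustrates, under simplifying assumptions, the structure of a conditional optimization problem of type (83) using `scipy.optimize.minimize`: the unknowns are the n!·n associated probability values, each permutation's block is constrained to be a probability distribution, and the alternatives' monotone expectations are constrained to lie in expert intervals. The paper's DSI similarity parameter is not reproduced here; the `similarity` function below is only a placeholder stand-in, and all numerical data are illustrative rather than taken from Tables 1–4.

```python
import numpy as np
from itertools import permutations
from scipy.optimize import minimize

n = 3                                      # number of attributes
perms = list(permutations(range(n)))       # the n! permutations indexing an APC
ratings = np.array([[0.7, 0.4, 0.9],       # illustrative decision matrix, one row per alternative
                    [0.5, 0.8, 0.3]])
me_bounds = [(0.55, 0.75), (0.45, 0.65)]   # illustrative expert intervals for the monotone expectations

def unpack(x):
    # Flat unknown vector -> APC: one probability block of length n per permutation.
    return {s: x[k * n:(k + 1) * n] for k, s in enumerate(perms)}

def monotone_expectation(f, apc):
    s = tuple(int(i) for i in np.argsort(-f))     # permutation sorting the ratings in non-increasing order
    return float(np.dot(f[list(s)], apc[s]))

def similarity(apc, target):
    # Placeholder stand-in for the paper's DSI similarity parameter (NOT its actual formula):
    # one minus the mean Euclidean distance between matching associated probabilities.
    return 1.0 - float(np.mean([np.linalg.norm(apc[s] - target[s]) for s in perms]))

target_apc = {s: np.full(n, 1.0 / n) for s in perms}   # illustrative target APC (stands in for Table 4)

def objective(x):
    return -similarity(unpack(x), target_apc)          # maximize similarity = minimize its negative

cons = []
for k in range(len(perms)):                            # each P_sigma must sum to one
    cons.append({'type': 'eq', 'fun': lambda x, k=k: x[k * n:(k + 1) * n].sum() - 1.0})
for f, (lo, hi) in zip(ratings, me_bounds):            # interval constraints on monotone expectations
    cons.append({'type': 'ineq', 'fun': lambda x, f=f, lo=lo: monotone_expectation(f, unpack(x)) - lo})
    cons.append({'type': 'ineq', 'fun': lambda x, f=f, hi=hi: hi - monotone_expectation(f, unpack(x))})

x0 = np.full(n * len(perms), 1.0 / n)                  # start from the uniform APC (feasible here)
res = minimize(objective, x0, bounds=[(0.0, 1.0)] * x0.size, constraints=cons, method='SLSQP')
identified_apc = unpack(res.x)
```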
The fuzzy measure solving the conditional optimization problem (83) is the fuzzy measure most similar to the given second-order additive capacity that also satisfies the constraints on the associated probabilities, expressed through the attribute interaction indexes, and the MADM data on the experts' evaluations of the expected ratings of the alternatives. Consider a numerical example of (83):
Example 1. Let and . Suppose that the decision-making matrix is given in Table 2. Assume that the importance values of the attributes and the indexes of their pairwise interaction are given in Table 3.
Using Table 3 and Formula (76), we calculate the associated probability class of the fuzzy measure (see Table 4). Then (83) takes the specific form (84) with respect to the unknown associated probability values. The numerical solution of the conditional optimization problem (84), given as the associated probability class of the identified fuzzy measure, is presented in Table 5.
The maximum similarity index is equal to .
Of course, any of the distance or divergence Formulas (47)–(52) or (53)–(64) could have been used as the objective function in problem (83)–(84); the distances listed above, as well as the divergences, could likewise serve as minimization criteria. Consider another example:
Example 2. We consider the conditional optimization problem (83)–(84), but choose Jeffreys' symmetric divergence as the objective function; the constraints remain the same. The MADM data are unchanged, that is, the class of associated probabilities is the same, and we obtain the conditional optimization problem (85). Table 6 presents the results of the numerical solution of (85) in the form of the associated probability class of the identified fuzzy measure.
The minimum divergence value is equal to .
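For completeness, here is a sketch of a Jeffreys-type objective as used in Example 2: the symmetric divergence J(p, q) = Σ_i (p_i − q_i) ln(p_i / q_i) between matching associated probabilities, aggregated here by a simple mean over the permutations (the paper's generator-based scaling may differ). Plugging it into the optimization sketch above replaces the similarity objective.

```python
import numpy as np

def jeffreys_divergence(p, q, eps=1e-12):
    """Jeffreys' symmetric divergence J(p, q) = sum_i (p_i - q_i) * ln(p_i / q_i)."""
    p = np.clip(np.asarray(p, dtype=float), eps, None)
    q = np.clip(np.asarray(q, dtype=float), eps, None)
    return float(np.sum((p - q) * np.log(p / q)))

def apc_jeffreys(apc, target):
    # Assumed aggregation over the APC: the mean of the permutation-wise divergences.
    return float(np.mean([jeffreys_divergence(apc[s], target[s]) for s in target]))

# In the optimization sketch above, the objective for Example 2 would become:
# def objective(x):
#     return apc_jeffreys(unpack(x), target_apc)
```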
Of course, for the identification of the associated probability class of a fuzzy measure, one could also consider multi-objective conditional optimization problems with several objective functions; the appropriate formulation should follow from the synthesis of the specific research problem.
Consider one last example:
Example 3. Let us choose as the approximation domain the class of fuzzy measures associated with a body of evidence defined on the attribute set, where for each such fuzzy measure the associated probabilities are calculated by Formula (38). As the index of difference between the approximating fuzzy measure and the given fuzzy measure, i.e., as the objective function of the corresponding conditional optimization problem, we again choose Jeffreys' symmetric divergence.
To construct a numerical example, let us take the same matrix from Table 2 as the decision-making matrix of the MADM model and use the Choquet integral with respect to the identified fuzzy measure as the aggregation operator. Consider a body of evidence together with a vector of allocation weights. Table 7 presents the fuzzy measure associated with this body of evidence and these weights, or, more precisely, its associated probability class, computed with Formula (38).
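Formula (38) is not reproduced above, so the following sketch only illustrates the general mechanics: it builds a belief function from a hypothetical body of evidence as a stand-in for the evidence-associated measure (the paper's construction additionally uses the allocation weights) and then computes its associated probability class chain-wise in the Campos-Bolaños sense, P_σ(x_σ(j)) = g(A_j) − g(A_{j−1}).

```python
from itertools import permutations

def apc_from_measure(g, elements):
    """Campos-Bolanos representation: for every permutation sigma,
    P_sigma(x_sigma(j)) = g({x_sigma(1), ..., x_sigma(j)}) - g({x_sigma(1), ..., x_sigma(j-1)})."""
    apc = {}
    for sigma in permutations(elements):
        chain, prev, probs = set(), 0.0, []
        for x in sigma:
            chain.add(x)
            cur = g(frozenset(chain))
            probs.append(cur - prev)
            prev = cur
        apc[sigma] = probs
    return apc

# Hypothetical body of evidence on {0, 1, 2} (not the paper's): focal elements and their masses.
evidence = [(frozenset({0}), 0.3), (frozenset({1, 2}), 0.5), (frozenset({0, 1, 2}), 0.2)]

def bel(A):
    # Belief function of the body of evidence, used here as a stand-in for the
    # evidence-associated measure of Formula (38) (which also involves allocation weights).
    return sum(m for B, m in evidence if B <= A)

apc_evidence = apc_from_measure(bel, (0, 1, 2))
```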
To identify the fuzzy measure serving as the MADM uncertainty index, consider the conditional optimization problem (88) with the constraints of the previous examples and the minimization of this divergence as the criterion. The associated probability class of the fuzzy measure solving problem (88) is given in Table 8.
The minimum value of the objective function is:.
For all three examples, at the last stage we complete the MADM problem presented in Table 2 and rank the alternatives using the identified fuzzy measure. Aggregation is again performed with the Choquet integral, where for each alternative there is a permutation that orders its ratings in non-increasing order. Table 9 summarizes the aggregation values for all three examples, and Table 10 presents the ranking of the alternatives.
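Closing the loop in code, the ranking step can reuse `choquet_integral` from the aggregation sketch together with the decision matrix `ratings` and the APC `identified_apc` returned by the optimization sketch (all names hypothetical):

```python
# Reusing `choquet_integral` from the aggregation sketch, and `ratings` (decision matrix)
# and `identified_apc` (optimization result) from the identification sketch.
scores = {a: choquet_integral(ratings[a], identified_apc) for a in range(len(ratings))}
ranking = sorted(scores, key=scores.get, reverse=True)   # indices of alternatives, best first
```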
It becomes clear that the same alternative is the best in all three problems.
Comparative Analysis: We note again that the examples given here are purely illustrative; their purpose is to make it easy for the reader to construct an analogous optimization model for their own needs. Nevertheless, a brief comparative analysis can be conducted. Examples 1 and 2 are practically indistinguishable in the sense that the MADM environment is the same and only the objective functions differ. In Example 1, the objective function is the similarity parameter and its maximization is considered; the maximum similarity value obtained is quite high. In Example 2, the objective function is the divergence parameter and its minimization is considered; here, too, the resulting minimum divergence value is quite high. Both examples consider an interactive MADM environment with pairwise interaction of attributes, where a two-additive fuzzy measure serves as the uncertainty index (Table 4). In both examples the approximation class is the same, and the similarity between the identified fuzzy measures is high (Table 5 and Table 6). Therefore, the rankings of the alternatives should be almost identical; in our case they are completely identical (Table 10).
As for the third example, its MADM environment is partially different from that of the previous two examples. Here the uncertainty index is a body of evidence and the class of fuzzy measures associated with it (86); a specific allocation weight vector and a specific body of evidence are considered, and the probabilistic representations of the associated fuzzy measure are given in Table 7. The constraints of the conditional optimization problems are the same in all three examples. In the third example, the objective function is the same divergence parameter as in the second example, and its minimization is considered; here, too, the minimum divergence value is quite high. As a result, the ranking of the alternatives in the third example partially coincides with the rankings obtained in the first and second examples.
We tested different divergence and similarity parameters in the role of the objective function for the same MADM environment, and we also changed the MADM environment itself. The rankings of the alternatives for a fixed MADM environment turned out to be almost identical when the finite Choquet integral is taken as the aggregation operator, which characterizes the sensitivity of the identification problem constructed in this article.
5. Conclusions
The presented paper discusses the binary relations of distance, divergence, and similarity defined on the space of finite fuzzy measures, which in a certain sense generalize the analogous binary relations defined on the space of finite probability distributions. The correctness of the generalizations is proved. The generalizations are based on the class of probabilities associated with a fuzzy measure: the distance, divergence, and similarity parameters between two fuzzy measures are determined by the corresponding parameters between their associated probability classes. To define these parameters between associated probability classes, the concept of a distance generator is introduced, which scales the values computed over the associated probability classes into the scalar value of the corresponding parameter.
The concepts of distance, divergence, and similarity between fuzzy measures are then used in the fuzzy measure identification problem for a given multi-attribute decision-making (MADM) environment. For this, a conditional optimization problem is formulated with a single objective function representing a distance, divergence, or similarity parameter. The constraints of the optimization problem reflect the MADM data as well as the requirements on the associated probabilities of the fuzzy measure being identified. At the extremum of the objective function, the identified fuzzy measure is the best approximation within the class of fuzzy measures admissible in the MADM environment.
The classes of second-order Choquet capacities, two-additive fuzzy measures, Sugeno λ-additive fuzzy measures, possibility measures, and fuzzy measures associated with a body of evidence are considered. Numerical examples are discussed, and a comparative analysis of the obtained results is presented. In the conditional optimization problems corresponding to a simple example of three parallel MADM formulations, one and the same set of constraints is used. The only differences lie in the admissible classes of the fuzzy measure being sought, which are determined by the nature of the specific problem, and in the choice of the objective function, which reflects the preferences of the decision-maker.
The results are obtained at the level of the ranking of the MADM alternatives. For the ranking, the alternatives' data are aggregated with the Choquet aggregation operator using the identified fuzzy measure. The results show some differences, which are due to the chosen approximation class of the fuzzy measure and to the decision-maker's preferences in selecting the objective function. We also considered the same examples with the finite Sugeno integral; the resulting ranking of the alternatives agrees closely with the Choquet integral case. It would also be interesting to consider other aggregation integral operators that use fuzzy measures in their calculations.
Future studies will consider other classes of fuzzy measure approximations related to specific, practically important problems, as well as other new divergence and distance generalizations for fuzzy measures that are beyond the scope of this article. The sensitivity of the results to a change of the aggregation operator will also be examined further.
In future studies, we will also consider the use of the metrics, similarity, and divergence relations defined on the space of fuzzy measures in aggregation problems with non-additive and interacting parameters or attributes, such as users' similarity relations in collaborative filtering systems, phase-space metrics of non-additive components in machine learning, and measures for the clustering and classification of complex objects. It would also be natural to describe the relationships between the divergences discussed here and entropy measures.