1. Introduction
In a group decision making (GDM) situation, the decision makers (DMs) are usually required to select the desirable alternative(s) from a collection of alternatives. To cope with this problem, the DMs compare the alternatives with each other and provide their preference information, from which a judgement matrix can be constructed [1,2,3].
In order to model DMs’ knowledge and preferences, preference relations have been introduced. To characterize fuzziness and uncertainty, several extended preference relations have been proposed, including the fuzzy preference relation (FPR) [4,5,6,7,8], the multiplicative preference relation (MPR) [9,10,11,12] and the linguistic preference relation (LPR) [13,14]. Experts describe their preference information with crisp numbers on a 0–1 scale in an FPR, and on a 1–9 scale in an MPR [15]. Note that the elements of MPRs are crisp values. However, considering the fuzziness and hesitation involved in practical decision-making problems, it may be difficult for DMs to express their evaluation information with crisp values. To describe this imprecision, the interval multiplicative preference relation [16] and the intuitionistic multiplicative preference relation (IMPR) [17] were introduced to express decision-making preference information. Xia et al. [18] first defined the IMPR and developed some aggregation techniques for IMPR information. Xia and Xu [18] introduced the concepts of the hesitant fuzzy preference relation (HFPR) and the hesitant multiplicative preference relation (HMPR) and studied their properties, followed by the construction of several GDM methods.
For the various forms of preference relations, the two most important issues are consistency analysis and consistency improvement [19]. Ma et al. [20] developed an approach to detect inconsistency and weak transitivity of an FPR and to repair its inconsistency so that weak transitivity is reached. Herrera-Viedma et al. [21] designed a method to construct consistent FPRs from a set of original preference data. For a given MPR, Xu and Wei [16] proposed a convergent model to improve its consistency. With the help of the Abelian linearly ordered group, Xia and Chen [22] established general consistency-improving and consensus-reaching methods for different types of preference relations. By using order consistency and multiplicative consistency, Jin et al. [23] proposed two new approaches for GDM with IFPRs that produce normalized intuitionistic fuzzy weights for the alternatives. Wang [24] proposed some linear programming models for deriving intuitionistic fuzzy weights. For unbalanced LPRs, Dong et al. [25] investigated an optimization model to increase the consistency level. Pei et al. [26] developed an iterative algorithm to adjust the additive consistency of IFLPRs and derive the intuitionistic fuzzy weights for IFLPRs. Based on β-normalization, Zhu et al. [27] utilized the optimized parameter to develop a novel approach for inconsistent HFPRs. Under the hesitant fuzzy preference information environment, Zhang et al. [28] constructed a decision support model to derive the most desirable alternative.
Similar to MPRs, studying the HMPR is an important research topic. However, few techniques in the existing literature have addressed it. Xia and Xu [18] directly used the proposed operators to aggregate HMPR information. However, it is generally known that preference relations of unacceptable consistency easily lead to unreasonable conclusions. Therefore, the decision-making results obtained by the method of Xia and Xu [18] may be unreasonable. Based on the β-normalization principle, Zhang and Wu [28] investigated a new decision-making model to generate the interval weights of alternatives from HMPRs. However, with the algorithm of Zhang and Wu [28], one must convert the normalized HMPR into several MPRs, so the decision-making process is an indirect computation process. Therefore, efficiently deriving the priority weight vector of an HMPR and improving the consistency of an HMPR are the two most important issues. This paper first introduces a new version of the HMPR, and then the consistency of the HMPR and a consistency index of the HMPR are presented. After that, two new algorithms are investigated to improve the consistency of HMPRs.
The remainder of this paper is organized as follows: Section 2 reviews some basic concepts. In Section 3, the definitions of the HMPR, the consistency of the HMPR and the consistency index of the HMPR are presented. Two algorithms to improve the consistency level of HMPRs are investigated in Section 4. Section 5 provides an illustrative example to show the effectiveness and rationality of the proposed methods. Concluding remarks are presented in Section 6.
2. Preliminaries
In this section, we review some related work on the MPR and the hesitant multiplicative set (HMS). Saaty [15] first introduced the concept of the MPR, which is a useful tool to express evaluation information. For convenience, let $X=\{x_1,x_2,\ldots,x_n\}$ be a finite set of alternatives and $N=\{1,2,\ldots,n\}$.
Definition 1. [15] An MPR on $X$ is represented by a matrix $A=(a_{ij})_{n\times n}$ with $a_{ij}>0$ and $a_{ij}\cdot a_{ji}=1$, where $a_{ij}$ denotes the ratio of the preferred degree of alternative $x_i$ with respect to $x_j$. In particular, as Saaty [29] showed with the 1–9 scale, $a_{ij}\in[1/9,9]$: $a_{ij}=1$ denotes that there is no difference between $x_i$ and $x_j$, $a_{ij}=9$ denotes that $x_i$ is absolutely preferred to $x_j$, and $a_{ij}\in\{2,3,\ldots,8\}$ (or $a_{ij}\in\{1/8,1/7,\ldots,1/2\}$) indicates the intermediate evaluations [30].
Definition 2. Let $A=(a_{ij})_{n\times n}$ be an MPR; then $A$ is consistent if it satisfies the transitivity condition, i.e., $a_{ij}=a_{ik}a_{kj}$ for all $i,j,k\in N$.
For an MPR $A=(a_{ij})_{n\times n}$, if there exists a crisp priority weight vector $w=(w_1,w_2,\ldots,w_n)^T$ such that $a_{ij}=w_i/w_j$ for all $i,j\in N$, then $A$ is consistent [15], where $w_i>0$, $i\in N$, and $\sum_{i=1}^{n}w_i=1$.
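To make Definition 2 concrete, the sketch below checks the multiplicative transitivity of a crisp MPR and derives a normalized priority vector from the row geometric means, which recovers the exact weights when the MPR is consistent. This is a minimal illustration in Python; the row geometric-mean prioritization is a standard technique rather than a method claimed by this paper, and the function names are illustrative.

```python
import numpy as np

def is_consistent_mpr(A, tol=1e-8):
    """Check the multiplicative transitivity a_ij = a_ik * a_kj for all i, j, k."""
    n = A.shape[0]
    return all(abs(A[i, j] - A[i, k] * A[k, j]) <= tol
               for i in range(n) for j in range(n) for k in range(n))

def geometric_mean_weights(A):
    """Normalized priority vector from the row geometric means of an MPR."""
    gm = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return gm / gm.sum()

# A consistent MPR built from w = (0.5, 0.3, 0.2), so that a_ij = w_i / w_j.
w = np.array([0.5, 0.3, 0.2])
A = np.outer(w, 1.0 / w)
print(is_consistent_mpr(A))        # True
print(geometric_mean_weights(A))   # recovers [0.5, 0.3, 0.2]
```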
Because of the complexity and uncertainty involved in practical GDM problems, it may be difficult for DMs to express their preference information with only one crisp number; instead, their preferences may be represented by several different possible crisp values.
Definition 3. An HMS on $X$ is defined as $A=\{\langle x, h_A(x)\rangle \mid x\in X\}$, where $h_A(x)$ is a hesitant multiplicative element (HME), which denotes all of the possible membership degrees of the element $x\in X$ for the set $A$, and $l_{h_A(x)}$ is the cardinality of $h_A(x)$.
Definition 4. [28] Let $h$ be an HME and let $l_h$ be the cardinality of $h$; the score function $s(h)$ of $h$ is defined as in [28]. Suppose that $h_1$ and $h_2$ are two HMEs; if $s(h_1)>s(h_2)$, then $h_1>h_2$.
3. Hesitant Multiplicative Preference Relations and Consistency Index
In what follows, inspired by the MPR and the score function of HMEs, we define a new version of the HMPR, and then the consistency of the HMPR and the consistency index of the HMPR are presented.
Definition 5. An HMPR on $X$ can be defined as a reciprocal matrix $H=(h_{ij})_{n\times n}$, where $h_{ij}$ is an HME indicating the possible preference degrees of alternative $x_i$ over $x_j$, and it satisfies the reciprocity condition in Equation (1), where $s(h_{ij})$ and $l_{ij}$ are the score function of $h_{ij}$ and the number of values in $h_{ij}$, respectively.
For an HMPR $H=(h_{ij})_{n\times n}$, it follows from Definition 5 that the scores of the HMEs inherit the reciprocal property required of an MPR. Therefore, by using the score function, we can transform the HMPR $H$ into an MPR $B=(b_{ij})_{n\times n}$.
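Because the explicit formula of the score function is not reproduced above, the short sketch below assumes, purely for illustration, that the score of an HME is the geometric mean of its possible 1/9–9 scale values, and uses that assumption to carry out the transformation from an HMPR to a crisp matrix described in the preceding paragraph. The function names `score` and `hmpr_to_mpr` and the toy data are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def score(hme):
    """Score of an HME, assumed here to be the geometric mean of its possible values."""
    values = np.asarray(hme, dtype=float)
    return float(np.prod(values) ** (1.0 / len(values)))

def hmpr_to_mpr(H):
    """Transform an HMPR (nested lists of HMEs) into a crisp matrix via the score function."""
    n = len(H)
    return np.array([[score(H[i][j]) for j in range(n)] for i in range(n)])

# A toy 3x3 HMPR on the 1/9-9 scale; reciprocal HMEs hold element-wise reciprocal values.
H = [[[1],        [2, 3],     [4]],
     [[1/3, 1/2], [1],        [2, 3]],
     [[1/4],      [1/3, 1/2], [1]]]
print(hmpr_to_mpr(H))   # the resulting crisp matrix is multiplicatively reciprocal
```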
Based on this transformation, the following definition of consistency for HMPRs is introduced.
Definition 6. Assume that $H=(h_{ij})_{n\times n}$ is an HMPR, where each $h_{ij}$ is an HME; then $H$ is called a consistent HMPR if there exists a normalized crisp weight vector $w=(w_1,w_2,\ldots,w_n)^T$ such that Equation (2) holds, where $w_i>0$, $i\in N$, and $\sum_{i=1}^{n}w_i=1$.
Equation (2) characterizes consistency. However, for an HMPR provided by DMs, it is usually difficult to satisfy consistency exactly, in which case Equation (2) cannot hold; that is, there exist $i,j\in N$ for which the equality in Equation (2) is violated, and the deviation between its two sides can be used to measure how far the HMPR is from consistency. Therefore, the values of these deviations can be used to measure the consistency level of the HMPR $H$.
Definition 7. Assume that $H=(h_{ij})_{n\times n}$ is an HMPR and $w=(w_1,w_2,\ldots,w_n)^T$ is the priority weight vector derived from $H$; then the consistency index of $H$ is defined by Equation (3). The smaller the value of the consistency index, the better the consistency of the HMPR $H$. If the consistency index is equal to zero, then $H$ is consistent. If we provide a threshold and the consistency index of $H$ does not exceed it, then $H$ is said to be of acceptable consistency.
4. Consistency Repairing Methods for an HMPR
Motivated by the logarithmic least squares model [31], the priority weight vector can be derived by using the following optimization model (M-1):
In fact, after some simplification, the developed optimization model (M-1) can be converted into the following optimization model (M-2):
According to Definition 6, if an HMPR $H$ is consistent, then there exists a normalized crisp weight vector $w=(w_1,w_2,\ldots,w_n)^T$ such that Equation (2) holds. Taking logarithms and carrying out some algebraic manipulation, one can obtain a closed-form expression for the priority weight vector, which is given as Equation (6).
Therefore, the following Algorithm 1 is designed to adjust the consistency of the HMPR $H$:
Algorithm 1: The consistency adjusting process of an HMPR based on the logarithmic least squares model |
Step 1. Input the HMPR, set the iteration counter to zero, and pre-set the threshold, the controlling parameter and the maximum number of iterations; |
Step 2. Derive the priority vector by Equation (6); |
Step 3. Compute the consistency index by using Equation (3); |
Step 4. If the consistency index does not exceed the threshold, then go to Step 7. Otherwise, go to Step 5; |
Step 5. Construct the adjusted HMPR, where |
 |
Step 6. Let the iteration counter increase by one and return to Step 2; |
Step 7. Output the adjusted HMPR, the priority vector and the consistency index; |
Step 8. End. |
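The sketch below mirrors the skeleton of Algorithm 1 under several stated assumptions, since Equations (3) and (6) and the Step 5 update are not reproduced here: the priority vector is taken as the normalized row geometric means of the crisp score matrix (the usual closed form of a logarithmic least squares model), the consistency index is taken as the mean absolute log-deviation between the matrix entries and the weight ratios, and the adjustment geometrically blends each entry with the consistent ratio through a controlling parameter theta. All three choices are stand-ins for the paper's exact formulas, not the authors' definitions.

```python
import numpy as np

def lls_weights(B):
    """Priority vector from the row geometric means (closed form of a log least squares model)."""
    gm = np.prod(B, axis=1) ** (1.0 / B.shape[0])
    return gm / gm.sum()

def consistency_index(B, w):
    """Mean absolute log-deviation between b_ij and w_i / w_j (a stand-in for Equation (3))."""
    n = B.shape[0]
    return np.abs(np.log(B) - np.log(np.outer(w, 1.0 / w))).sum() / (n * (n - 1))

def adjust_scores(B, theta=0.5, threshold=0.1, max_iter=50):
    """Iteratively pull the crisp score matrix towards consistency (skeleton of Algorithm 1)."""
    for t in range(max_iter):
        w = lls_weights(B)
        ci = consistency_index(B, w)
        if ci <= threshold:                      # acceptable consistency reached
            return B, w, ci, t
        ratio = np.outer(w, 1.0 / w)
        B = B ** (1 - theta) * ratio ** theta    # geometric blending towards w_i / w_j
        np.fill_diagonal(B, 1.0)
    w = lls_weights(B)
    return B, w, consistency_index(B, w), max_iter

# An inconsistent reciprocal score matrix obtained, e.g., from an HMPR via its score function.
B0 = np.array([[1.0, 3.0,  0.5],
               [1/3, 1.0,  4.0],
               [2.0, 0.25, 1.0]])
B_star, w_star, ci_star, iterations = adjust_scores(B0)
print(w_star, ci_star, iterations)
```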
In the following, we prove that the developed Algorithm 1 is convergent.
Theorem 1. Let $H$ be an HMPR, let the adjusting parameter be fixed, and let the sequence of HMPRs generated in Algorithm 1 be given; if the consistency index of each generated HMPR is computed by Equation (3), then the sequence of consistency indexes is convergent.
Proof. Suppose that the priority weight vector of each generated HMPR is derived as in Step 2; from the above analysis, we know that it is also the optimal weight vector obtained by solving model (M-2) for that HMPR. In addition, according to Step 5 in Algorithm 1, the entries of consecutive HMPRs in the sequence are related for every pair of indexes. Furthermore, on the one hand, the required inequality between consecutive consistency indexes follows from Equation (10); on the other hand, the consistency index is obviously bounded below by zero. Hence the sequence of consistency indexes is convergent. □
From Definition 6, if an HMPR $H$ is consistent, then Equation (2) holds, and it can be rewritten in an equivalent form. However, in many real situations, due to fuzziness and uncertainty, the HMPR provided by DMs is usually inconsistent, and thus Equation (2) may fail to hold; that is, there exist $i,j\in N$ for which the equality is violated. In this case, some non-negative deviation variables are introduced so that the relaxed equalities in Equation (11) hold. The smaller the values of the deviation variables, the better the consistency of the HMPR. Therefore, we develop a linear optimization model to derive the smallest deviation variables and the priority weight vector as follows:
From Definition 5 and Equation (11), one can derive further relations among the deviation variables, and it follows that the model can be simplified. Therefore, we obtain the following simplified optimization model (M-4):
By solving model (M-4) with MATLAB or LINGO, we obtain the priority vector and the optimal deviation values, from which the consistency of the HMPR can be assessed.
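Because models (M-3) and (M-4) are not reproduced above, the sketch below only illustrates the general idea of such a goal-programming linear model in the log domain: the logarithms of the weights are free variables, each pairwise comparison contributes a pair of non-negative deviation variables, and the total deviation is minimized with SciPy's linprog. The variable layout, the normalization constraint (log-weights summing to zero) and the test data are assumptions rather than the paper's model (M-4).

```python
import numpy as np
from scipy.optimize import linprog

def lp_weights(B):
    """Log-domain goal-programming LP (an illustrative stand-in for model (M-4)).

    Variables: u_i = ln w_i (free) and deviations d_plus, d_minus >= 0 for each pair i < j,
    with constraints u_i - u_j + d_plus - d_minus = ln b_ij and sum(u) = 0.
    """
    n = B.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    m = len(pairs)
    c = np.concatenate([np.zeros(n), np.ones(2 * m)])    # minimize total deviation
    A_eq = np.zeros((m + 1, n + 2 * m))
    b_eq = np.zeros(m + 1)
    for k, (i, j) in enumerate(pairs):
        A_eq[k, i], A_eq[k, j] = 1.0, -1.0               # u_i - u_j
        A_eq[k, n + k], A_eq[k, n + m + k] = 1.0, -1.0   # + d_plus - d_minus
        b_eq[k] = np.log(B[i, j])
    A_eq[m, :n] = 1.0                                    # normalization: sum of log-weights = 0
    bounds = [(None, None)] * n + [(0, None)] * (2 * m)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    w = np.exp(res.x[:n])
    return w / w.sum(), res.fun                          # priority vector, total deviation

B = np.array([[1.0, 3.0,  0.5],
              [1/3, 1.0,  4.0],
              [2.0, 0.25, 1.0]])
print(lp_weights(B))
```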
Therefore, the following Algorithm 2 is designed to improve the consistency of the HMPR $H$:
Algorithm 2: The consistency adjusting process of an HMPR based on the linear optimization model |
Step 1'. See Algorithm 1; |
Step 2'. Derive the priority vector and the optimal deviation values by solving model (M-4); |
Step 3'–4'. See Algorithm 1: compute the consistency index by Equation (3); if it does not exceed the threshold, go to Step 7'; otherwise, go to Step 5'; |
Step 5'. Construct the adjusted HMPR, where |
 |
Step 6'–8'. See Algorithm 1. |
Next, we will prove that the developed Algorithm 2 is convergent.
Theorem 2. Let $H$ be an HMPR, let the adjusting parameter be fixed, and let the sequence of HMPRs generated in Algorithm 2 be given; if the consistency index of each generated HMPR is computed by Equation (3), then the sequence of consistency indexes is convergent. Proof. The proof of Theorem 2 is similar to that of Theorem 1. □
5. Illustrative Example Results and Discussion
5.1. Numerical Example
There is a city that has been affected by fog-haze for a long time, and scientists found that there are four main influence factors for this city's fog-haze. In order to determine the most important influence factor and to rank these factors, a group of scientists compares the four factors with each other and then provides the following preference information in the form of an HMPR $H$ [23]:
Now, we apply the proposed Algorithms 1 and 2, whose computation processes are listed below as Algorithms 3 and 4, respectively, to select the most important factor for fog-haze.
Algorithm 3: The consistency adjusting process of the HMPR based on the logarithmic least squares model |
Step 1. Pre-set the threshold, the controlling parameter and the maximum number of iterations; |
Step 2. By Equation (6), we obtain the priority vector; |
Step 3. By Equation (3), we determine the consistency index; |
Step 4. As the consistency index exceeds the threshold, we utilize Step 5 in Algorithm 1 to adjust the HMPR:
 |
Step 5. By Equation (3), we determine the consistency index of the adjusted HMPR; as it still exceeds the threshold, by using Step 5 in Algorithm 1, we have
 |
Step 6. By Equation (3), we obtain the consistency index; as it still exceeds the threshold, according to Step 5 in Algorithm 1, we have
 |
Applying Equation (6), the priority vector of the adjusted HMPR is obtained. Step 7. Compute the consistency index by Equation (3). Step 8. As the consistency index does not exceed the threshold, the adjusted HMPR is an acceptably consistent HMPR. Step 9. Rank the four factors by their priority weights and output the most important factor. |
Algorithm 4: The consistency adjusting process of the HMPR based on the linear optimization model |
Step 1'. Pre-set the threshold, the controlling parameter and the maximum number of iterations. |
Step 2'. Using model (M-4), we get the optimal deviation values, and the priority weight vector can be obtained as follows: |
Step 3'. Utilize Equation (3) to get the consistency index. Step 4'. As the consistency index exceeds the threshold, we apply Step 5' in Algorithm 2 to adjust the consistency of the HMPR, and one can obtain a new HMPR as follows:
 |
Step 5'. Using model (M-4), we get the optimal deviation values. By using Equation (3), we compute the consistency index; as it still exceeds the threshold, by Step 5' in Algorithm 2, one can obtain |
Step 6'. Using model (M-4), we determine the priority weight vector. |
Step 7'. By using Equation (3), we compute the consistency index. As it does not exceed the threshold, the iteration stops, and the adjusted HMPR is an acceptably consistent HMPR. |
Step 8'. Output the priority weight vector. |
Step 9'. According to the priority weights, we obtain the ranking of the four factors, and the most important factor for fog-haze is identified. |
5.2. Discussions
In what follows, we utilize Algorithm I proposed by Zhang and Wu [28] to cope with the aforementioned problem; the following steps are involved:
Step 1''. Let the optimized parameter be fixed; then we obtain the normalized HMPR as follows:
Step 2''. Utilize Equation (21) in Zhang and Wu [28] to construct the MPRs from the normalized HMPR:
Step 3''. The acceptable consistency of the constructed MPRs is checked by Algorithm I in Xu and Wei [17]. Due to the vast amount of computation, we do not list the iterative calculation process of adjusting their consistency. After six iterations of Algorithm I in Xu and Wei [17], each of the MPRs is acceptably consistent.
Step 4''. Employ Equation (28) in Zhang and Wu [28] to obtain the weight vectors of the MPRs:
Step 5''. By Equation (31) in Zhang and Wu [28], we obtain the interval weight vector of the alternatives as follows:
Step 6''. Computing the degrees of possibility by Equation (32) in Zhang and Wu [28], we obtain the ranking of the four main influence factors, and the most important factor for fog-haze coincides with the one identified by the proposed algorithms.
From the above numerical example and the comparison with Algorithm I in Zhang and Wu [28], the proposed decision-making algorithms have the following characteristics:
(1) According to the above decision-making processes, our algorithms and Zhang and Wu's [28] approach produce the same ranking of the four influence factors for fog-haze, which means that our algorithms are reasonable.
(2) The decision-making process in our approaches is more straightforward and efficient than Algorithm I proposed by Zhang and Wu [28]. In fact, in the consistency-improving process, our approaches utilize the original HMPR information provided by the DMs, and all calculations operate directly on the HMEs, which preserves the DMs' original information. With Zhang and Wu's [28] method, however, one must transform the original HMPR given by the DMs into its corresponding MPRs, so it is an indirect computation process. Meanwhile, in the process of obtaining the interval weight vector, the same interval weights may be derived for different alternatives when the number of alternatives is large, which leads to a loss of original information.
(3) Our approaches provide effective methods to improve the consistency of HMPRs, so that the improved HMPRs are acceptably consistent. Furthermore, in order to obtain acceptably consistent HMPRs (or MPRs), the number of iterations required by our algorithms is smaller than that of Zhang and Wu's [28] approach.
6. Conclusions
In this paper, we have introduced a new version of the HMPR, together with the consistency of the HMPR and the consistency index of the HMPR. Then, we have constructed a logarithmic least squares model and a linear optimization model to obtain the priority weight vector of the alternatives. Furthermore, in order to improve the consistency of HMPRs, we have developed two algorithms to transform HMPRs with unacceptable consistency into acceptably consistent ones, and we have discussed the convergence of the developed algorithms. Finally, a numerical example of ranking the influence factors for fog-haze is provided, and a comparison with an existing approach is performed to validate the effectiveness of the proposed automatic iterative decision-making algorithms.
However, this paper does not discuss the situation where some DMs decide not to provide part of their evaluation information, that is, how to construct a decision-making method with incomplete HMPRs in GDM problems. Therefore, in the future, we will focus on investigating novel algorithms to improve the consistency of incomplete HMPRs, designing consensus-reaching models for incomplete HMPRs, and applying incomplete HMPRs to practical problems in other areas such as pattern recognition, information fusion systems, and image processing.