Article

Group Decision-Making Method Based on Expert Classification Consensus Information Integration

China Academy of Aerospace Systems Science and Engineering, Beijing 100035, China
*
Author to whom correspondence should be addressed.
Symmetry 2020, 12(7), 1180; https://doi.org/10.3390/sym12071180
Submission received: 29 June 2020 / Revised: 14 July 2020 / Accepted: 15 July 2020 / Published: 16 July 2020

Abstract

Existing decision-making methods mostly perform a simple aggregation of expert decision information when solving large group decision-making problems. These methods require expert weight information to be specified in advance, and it is difficult to avoid the loss of expert decision information during the decision-making process. Therefore, to avoid the disadvantages of subjectively assigning expert weights, this paper proposes a new approach to large group decision-making that combines an expert group clustering algorithm with a group consensus model. First, the expert group is classified by a clustering algorithm based on breadth-first search of neighbors. Next, the decision information of the experts within each class is corrected adaptively using the group consensus model, and the in-class decision information is then integrated using a probabilistic linguistic transformation. This procedure not only avoids the shortcomings of artificially assigned expert weights, but also reduces the loss of expert decision information. Finally, the method determines the weight of each expert class by jointly considering the class size and the differences between classes, and then weights and integrates the consensus information of all expert classes to obtain the final decision result. The effectiveness of the proposed method is verified through a case analysis of urban water resource sustainability evaluation, which also provides a scientific evaluation method for the sustainable development level of urban water resources.

1. Introduction

With the increasing complexity of social issues, decision-making is influenced by both subjective and objective factors. It is generally difficult for a small number of experts to comprehensively judge all decision objects because of the limitations of their knowledge structure and cognitive level. The rapid development of information technology has made the collection of decision opinions from expert groups more systematic, and large group decision-making has become a practical way to solve complex decision problems. Because it is usually hard for decision-makers to provide accurate quantitative judgments for complex problems, their opinions can often be expressed more faithfully with linguistic variables [1]. However, a single linguistic term still struggles to describe situations in which the evaluation lies between two linguistic levels, or in which decision-makers hesitate among several uncertain linguistic judgments. On this basis, a decision-making framework for hesitant fuzzy linguistics has been proposed [2]. Current research on hesitant fuzzy linguistic decision-making mainly focuses on the representation of linguistic terms, arithmetic rules, related measures, and decision-making methods. Liao et al. [3] redefined the hesitant fuzzy linguistic term set (HFLTS) and gave its mathematical form, because abnormal results can arise when calculating with asymmetric evaluation scales. Furthermore, Liao et al. [4] proposed a new score function for HFLTSs based on hesitancy degrees and linguistic scale functions. In terms of decision-making methods, Chen and Chin [5] transformed hesitant fuzzy linguistic information into probabilistic linguistic information for information aggregation, and Zafeiris and Koman [6] presented a quantitative aggregation algorithm that accounts for the ability of decision-makers in different decision scenarios. Wang et al. [7] systematically reviewed the research progress on HFLTS decision-making. In particular, hesitant fuzzy linguistic multi-attribute consensus decision-making has attracted the attention of many scholars. Wu and Xu [8] proposed a consensus measure and a consensus model for hesitant fuzzy decision matrices from the perspective of unifying expert opinions. Wu and Xu [9] defined a new degree of group consensus based on the possibility distribution of an HFLTS and established a consensus model with non-consensus opinion recognition and feedback adjustment rules. Zhang et al. [10] defined a distance measure for HFLTSs that considers the width and center of the HFLTS envelope, and Wei and Ma [11] defined the consensus level among hesitant fuzzy decision matrices based on the HFLTS envelope.
To a certain extent, these methods simply aggregate expert decision or consensus-adjustment information to solve complex group decision-making problems, and it is difficult to achieve consensus among all decision-making experts in this way. The expert decision information must be modified repeatedly, which inevitably leads to a large difference between the consensus decision information and the original expert decision information. In addition, a group decision-making method that directly integrates all expert decision information must determine the weights assigned to experts and the aggregation method during the assembly process, and it is difficult to avoid the loss of expert decision information caused by these subjective factors. The main reason for these shortcomings is that such decision-making methods do not consider the classification of the expert group under hesitant fuzzy linguistic information. Therefore, based on the similarity classification method [12] combined with a class-center distance-based classification accuracy test index [13], a hesitant fuzzy linguistic large group expert classification and decision information aggregation method based on cluster-consensus information integration is proposed. Through this method, the information loss in the aggregation of expert decision information can be minimized, and scientific and accurate decision results can be obtained.
In Section 2, the theoretical basis of the hesitant fuzzy linguistic group decision-making model is introduced on the basis of relevant literature. In Section 3, the proposed hesitant fuzzy linguistic group decision-making method is presented in two parts. Section 3.1 mainly introduces the classification method of expert groups, and Section 3.2 shows the aggregation process of expert evaluation information. Then, the detailed steps of the group decision-making model based on hesitant fuzzy linguistics are summarized. In Section 4, the effectiveness of the decision-making method is verified through its application to urban water resources sustainability evaluation. In Section 5, the decision-making method of this paper is compared with other methods to further discuss its advantages in the decision-making process of a large group of experts, and to show the innovation of this research. In Section 6, the research content of the paper is summarized and the research conclusion of the paper is discussed.

2. Basic Concepts

2.1. Hesitant Fuzzy Linguistic Terms

Let S = {s_0, s_1, ..., s_g} denote an ordered set containing an odd number of linguistic terms, where s_i denotes the (i+1)-th term of S; s_0 < s_1 < s_2 < ... < s_g, and g + 1 is the granularity of the linguistic term set S. The set S satisfies the following conditions: (1) it is ordered: s_i > s_j if i > j; and (2) there is a negation operator: Neg(s_i) = s_j, where j = g - i.
Based on the concept of a set of linguistic terms, the following set of hesitant fuzzy linguistic terms were defined.
Definition 1
([14]). Let S = {s_0, s_1, ..., s_g} be a linguistic term set; then, a hesitant fuzzy linguistic term set H_S on S can be defined as a set of a finite number of consecutive linguistic terms of S, that is, H_S = {s_i, s_{i+1}, ..., s_j | s_k ∈ S, k = i, i+1, ..., j}.
In the process of decision analysis with hesitant fuzzy linguistic term sets, the relevant literature has expanded the representation of the hesitant fuzzy linguistic term set in Definition 1 and enriched its arithmetic rules, thereby avoiding the information loss caused by aggregating evaluation results in applications. By setting the term subscripts to an odd number of consecutive integers with 0 as the center of symmetry, a subscript-symmetric hesitant fuzzy linguistic term set was proposed [3].
Definition 2
([3]). Let S = {s_i | i = -τ, ..., -1, 0, 1, ..., τ} be a linguistic term set with an odd number of terms, where τ is a positive integer and 2τ + 1 is the granularity of the set. The symmetric linguistic term set S satisfies the following conditions: (1) it is ordered: s_i ≤ s_j if and only if i ≤ j; and (2) there is a negation operator: neg(s_α) = s_{-α}, where neg(s_0) = s_0. If H_S is an ordered set of a finite number of consecutive linguistic terms of S, then H_S is a hesitant fuzzy linguistic term set on S; H_S = {s_i, s_{i+1}, ..., s_j | s_k ∈ S, k = i, i+1, ..., j}.
In addition, in order to retain as much of the given linguistic information as possible and to avoid the loss of linguistic information, a continuous linguistic term set S̄ = {s_α | α ∈ [-q, q]} is introduced, where q (q > τ) is a sufficiently large positive integer. This extended linguistic term set is called a virtual linguistic term set and is used only in the calculation process; experts still express their decisions with the linguistic term set S.
For any two terms s_α, s_β ∈ S̄ of the extended linguistic term set and λ, λ_1, λ_2 ∈ [0, 1], the following operational rules hold in the calculation process:
  • s_α ⊕ s_β = s_{α+β};
  • λ s_α = s_{λα};
  • (λ_1 + λ_2) s_α = λ_1 s_α ⊕ λ_2 s_α;
  • λ (s_α ⊕ s_β) = λ s_α ⊕ λ s_β.
Definition 3
([3]). Let H_S, H_S^1, and H_S^2 be three hesitant fuzzy linguistic term sets on S. The operations are defined as follows:
  • the max operator of H_S^1 and H_S^2: H_S^1 ∨ H_S^2 = {max{s_i, s_j} | s_i ∈ H_S^1, s_j ∈ H_S^2};
  • the min operator of H_S^1 and H_S^2: H_S^1 ∧ H_S^2 = {min{s_i, s_j} | s_i ∈ H_S^1, s_j ∈ H_S^2};
  • the upper bound H_S^+ and the lower bound H_S^- of H_S: H_S^+ = max{s_i | s_i ∈ H_S} and H_S^- = min{s_i | s_i ∈ H_S}, respectively;
  • the envelope of H_S: env(H_S) = [H_S^-, H_S^+].
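To make the operators in Definition 3 concrete, the short sketch below represents an HFLTS by the set of integer subscripts of its terms. This representation and the helper names (hflts_max, hflts_min, envelope) are illustrative assumptions, not notation from the paper.

```python
# Minimal sketch: an HFLTS is represented by the integer subscripts of its terms,
# e.g., {s_-1, s_0} -> {-1, 0}. Helper names are illustrative only.

def hflts_max(h1, h2):
    """'Take the big' operator: {max(s_i, s_j) | s_i in H1, s_j in H2}."""
    return {max(i, j) for i in h1 for j in h2}

def hflts_min(h1, h2):
    """'Take the small' operator: {min(s_i, s_j) | s_i in H1, s_j in H2}."""
    return {min(i, j) for i in h1 for j in h2}

def envelope(h):
    """Envelope env(H_S) = [H_S^-, H_S^+]: the lower and upper boundary subscripts."""
    return min(h), max(h)

if __name__ == "__main__":
    H1, H2 = {-1, 0}, {0, 1, 2}
    print(hflts_max(H1, H2))   # {0, 1, 2}
    print(hflts_min(H1, H2))   # {-1, 0}
    print(envelope(H2))        # (0, 2)
```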

2.2. Large Group Decision-Making

Based on the fuzzy linguistic approach, experts can evaluate decision alternatives or a given indicator with a single linguistic term. In practice, however, experts are limited by their level of knowledge and by the decision-making conditions, and they often hesitate among several linguistic terms when choosing an evaluation. For complex decision-making problems, the background knowledge of an individual decision-maker can rarely meet the requirements of the decision judgment; consequently, large group decision-making has become a new direction in the application of hesitant fuzzy linguistics.
Let H_S be a hesitant fuzzy linguistic term set on S = {s_i | i = -τ, ..., -1, 0, 1, ..., τ}; then, H_S is a continuously ordered subset of S, that is, H_S ∈ {∅, {s_{-τ}}, ..., {s_τ}, {s_{-τ}, s_{-τ+1}}, ..., S}. Let the expert set be E = {e_1, e_2, ..., e_K} (e_k ∈ E, k = 1, 2, ..., K, K ≥ 20), the set of decision objects be A = {a_1, a_2, ..., a_M} (a_i ∈ A, i = 1, 2, ..., M), and the attributes be f_1, f_2, ..., f_N with corresponding attribute weight vector W = {w_1, w_2, ..., w_N} (w_j ∈ [0, 1], Σ_{j=1}^{N} w_j = 1). Let φ = {s_l, s_{l+1}, ..., s_U} (l ≤ U; l, U ∈ {-τ, ..., 0, ..., τ}) be a hesitant fuzzy linguistic term set given by an expert, and let |φ| denote its cardinality; the larger |φ|, the greater the expert's degree of hesitation. H_{ij,S}^k denotes the decision information of expert e_k on attribute f_j of decision object a_i. For the above decision-making problem, the decision information of all experts must be aggregated effectively so that the decision objects a_1, a_2, ..., a_M can be ranked.
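The later sketches in this article assume a simple nested layout for the decision information H_{ij,S}^k, indexed by expert, object, and attribute. The layout and the placeholder values below are illustrative assumptions, not data from the case study.

```python
# Illustrative data layout (an assumption, not prescribed by the method): each cell
# decision[k][i][j] is the set of integer term subscripts that expert e_k gives for
# attribute f_j of decision object a_i on the scale s_-TAU ... s_TAU.
TAU = 3
attribute_weights = [0.35, 0.35, 0.30]      # w_1, w_2, w_3

decision = {
    "e1": [                                  # rows: objects a_1..a_3; columns: f_1..f_3
        [{-1}, {1, 2}, {0}],                 # placeholder values, not Table 1 entries
        [{0, 1}, {-2}, {2}],
        [{2}, {1}, {-1, 0}],
    ],
    # ... one such matrix per expert e_2, ..., e_K
}
```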

3. Group Decision-Making Method

Generally, decision-makers give their decision information according to their own background knowledge of the group decision-making problem, so the decision information inevitably differs from expert to expert. When the number of decision-makers is large, directly aggregating this heterogeneous information may cause the loss of effective decision information and lead to a large deviation between the decision results and the objective situation. This paper therefore aims to solve the problem of effectively assembling diverse decision information. The integration of expert decision information is divided into two stages. In the first stage, the expert group is classified based on the decision information matrices, the consensus model is used to ensure that the experts within each class reach consensus, and the in-class expert decision information is transformed into a probabilistic linguistic combination to realize information integration within the class. In the second stage, the expert class weights are computed from the size of each expert class and the degree of deviation of decision information between classes, and the in-class integration results are further weighted and aggregated to realize inter-class integration of the decision information.

3.1. Group Classification Method Based on the Similarity in Hesitant Fuzzy Linguistics

3.1.1. Group Similarity Calculation Based on the Similarity in Hesitant Fuzzy Linguistics

Most general classification methods measure the similarity between the individuals to be classified by the distance between their attribute vectors and then divide the group into several clusters according to different similarity thresholds. Unlike such numerical classification methods, the similarity between hesitant fuzzy linguistic terms cannot be computed numerically in a direct way; the similarity relationship between hesitant fuzzy linguistic terms therefore has to be constructed on the basis of a suitable quantification of hesitant fuzzy linguistic information. At present, the quantification methods for hesitant fuzzy linguistic information mainly include the hesitant fuzzy linguistic expansion method [3] and the two-tuple (binary semantic) method [15]. For a symmetric hesitant fuzzy linguistic term set, the symbol transfer method applies scale theory directly to process the linguistic information, which makes the calculation simple and easy to operate. Therefore, linguistic information is quantified by the symbol transfer method in this paper, and I(s_α) denotes the subscript value α of the linguistic term s_α.
When calculating the similarity of hesitant fuzzy linguistic information, most scholars consider the consistency of experts from the perspective of the distance between hesitant fuzzy sets. Liu et al. [16] studied the similarity measure of hesitant fuzzy linguistic sets with confidence intervals, and Wei and Ma [11] studied hesitant similarity based on the expected distance. However, these methods cause some problems in the similarity calculation. When similarity is judged only by the degree of hesitation, two experts with the same degree of uncertainty may still give contradictory information, such as {s_{-τ}} and {s_τ}. Moreover, the expected distance between {s_1, s_2, s_3} and {s_2} is 0, although the two are obviously not completely consistent. Therefore, the similarity of hesitant linguistic information cannot be measured accurately by distance alone, and the set similarity of hesitant fuzzy linguistic terms should be considered in combination with it. In this paper, the similarity of hesitant fuzzy linguistic term sets is defined from the two aspects of set similarity and distance similarity.
Definition 4.
Let H_{ij,S}^k and H_{ij,S}^l be the hesitant fuzzy linguistic term sets given by two decision-makers for attribute f_j of decision object a_i; then, their similarity sm_{ij,s}^{k,l} can be defined as:
sm_{ij,s}^{k,l} = Nsm_{ij,s}^{k,l} + (1 - Nsm_{ij,s}^{k,l}) × (1 - D_{ij,s}^{k,l}), (1)
where Nsm_{ij,s}^{k,l} = |H_{ij,S}^k ∩ H_{ij,S}^l| / |H_{ij,S}^k ∪ H_{ij,S}^l| represents the set similarity of H_{ij,S}^k and H_{ij,S}^l; D_{ij,s}^{k,l} = (1/(4τ))(|I(H_{ij,S}^{k+}) - I(H_{ij,S}^{l+})| + |I(H_{ij,S}^{k-}) - I(H_{ij,S}^{l-})|), with D_{ij,s}^{k,l} ∈ [0, 1], represents the distance between H_{ij,S}^k and H_{ij,S}^l; and H_{ij,S}^{k+}, H_{ij,S}^{k-}, H_{ij,S}^{l+}, and H_{ij,S}^{l-} denote the upper and lower bounds of H_{ij,S}^k and H_{ij,S}^l, respectively. According to Formula (1), the range of the similarity between H_{ij,S}^k and H_{ij,S}^l is sm_{ij,s}^{k,l} ∈ [0, 1].
The similarity in Formula (1) consists of two parts. The set similarity indicates the ratio of the number of terms at the intersection of the two to the number of union terms. The distance similarity indicates the similarity reflected by the difference between the two. Compared with the existing literature that considers the similarity degree of decision-making terminology and the hesitation of experts, this paper mainly focuses on the similarity of decision information given by experts. This means that the similarity between the two is mainly represented by the number of similar variables in the intersection.
According to Formula (1), the similarity between decision-makers e k and e l can be obtained:
sm_s^{k,l} = (1/M) Σ_{i=1}^{M} Σ_{j=1}^{N} w_j × sm_{ij,s}^{k,l}, (2)
In Formula (2), w_j represents the weight of attribute f_j in the decision-making process; these weights reflect the role of the key attributes, so a more reasonable expert classification result can be obtained.
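The following sketch computes Formulas (1) and (2) under the assumption that each hesitant fuzzy linguistic term set is stored as a Python set of integer term subscripts; the function names are illustrative.

```python
# Sketch of Formulas (1) and (2): cell-level similarity and expert-level similarity.

def term_similarity(h_k, h_l, tau):
    """sm_{ij,s}^{k,l}: set similarity blended with distance similarity (Formula (1))."""
    nsm = len(h_k & h_l) / len(h_k | h_l)                                  # set similarity
    dist = (abs(max(h_k) - max(h_l)) + abs(min(h_k) - min(h_l))) / (4 * tau)
    return nsm + (1 - nsm) * (1 - dist)

def expert_similarity(mat_k, mat_l, weights, tau):
    """sm_s^{k,l}: attribute-weighted average of the cell similarities (Formula (2))."""
    m = len(mat_k)                                                         # number of objects
    return sum(w * term_similarity(mat_k[i][j], mat_l[i][j], tau)
               for i in range(m) for j, w in enumerate(weights)) / m

if __name__ == "__main__":
    a = [[{1, 2}, {0}, {-1, 0}]]
    b = [[{2}, {0, 1}, {0}]]
    print(expert_similarity(a, b, [0.35, 0.35, 0.30], tau=3))
```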

3.1.2. Expert Group Clustering Method Based on Similarity

In large group decision-making problems, the complexity of the decision information aggregation algorithm increases sharply as the size of the expert group grows. Clustering the experts and aggregating the information of each class can therefore greatly reduce the complexity of the problem, transforming a large-scale complex group decision into a multi-stage expert information aggregation problem of low complexity. The most commonly used clustering approach is the partitioning method, which requires the number of clusters and the iteration conditions to be preset, and these settings strongly influence the final classification results. Therefore, a clustering algorithm based on breadth-first search of neighbors is used in this paper to classify the expert group; it avoids complex iterative calculations and estimates the required input parameters from the data, which makes the classification results more accurate.
Definition 5.
In the expert group set E = {e_1, e_2, ..., e_K}, given an expert e and a similarity determination coefficient r, if the similarity between expert e and an expert x satisfies sm^{e,x} ≥ r (0 ≤ r ≤ 1), then expert x is called a direct neighbor of expert e. If expert y is not a direct neighbor of expert e but is a direct neighbor of some expert x that is, then expert y is called an indirect neighbor of expert e.
Based on the relevant definitions in Definition 5, the expert group clustering algorithm flow based on breadth-first search neighbors is set as follows:
Step 1: Determine the cluster group E = { e 1 , e 2 , , e K } to be clustered; calculate the similarity between the two experts in the expert group; form the similarity matrix SM between the experts; and set the parameter r and the parameter λ , t = 1 .
Step 2: Create a new empty class set A t ; select any one of the experts to be clustered in set E to be the initial object; merge the expert e into the class set A t ; delete the expert e from the set E .
Step 3: According to the determination method of Definition 5, obtain the set D of all direct and indirect neighbors of expert e. Examine the direct and indirect neighbors x of expert e in turn, and merge a neighbor x into set A_t if (Σ_{i=1}^{m} X_i)/m ≥ λ, where m is the number of objects already in set A_t, X_i = 1 if sm^{e_i,x} ≥ r and X_i = 0 otherwise, and e_i ∈ A_t. Delete each merged neighbor from set E.
Step 4: If the set E is an empty set, the flow ends; otherwise, return to Step 2 and let t = t + 1.
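The sketch below is a minimal implementation of Steps 1–4, assuming the pairwise similarity matrix sm has already been computed with Formula (2); variable and function names are illustrative, and the parameters r and λ follow the thresholds defined above.

```python
# Sketch of the breadth-first search neighbor clustering (Steps 1-4).
from collections import deque

def bfs_neighbor_clustering(sm, r, lam):
    """Group expert indices into classes; sm is a symmetric similarity matrix."""
    remaining = set(range(len(sm)))
    classes = []
    while remaining:                              # Step 2: start a new class from any expert
        seed = next(iter(remaining))
        cls = [seed]
        remaining.discard(seed)
        queue = deque([seed])
        while queue:                              # Step 3: expand via direct/indirect neighbors
            e = queue.popleft()
            for x in list(remaining):
                if sm[e][x] < r:                  # x must be a direct neighbor of e
                    continue
                # admit x only if a fraction >= lam of the current members are similar to x
                hits = sum(1 for member in cls if sm[member][x] >= r)
                if hits / len(cls) >= lam:
                    cls.append(x)
                    remaining.discard(x)
                    queue.append(x)               # x's neighbors become indirect neighbors
        classes.append(cls)                       # Step 4: repeat until E is empty
    return classes
```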
In the clustering process, the final clustering result is affected by the settings of the parameters r and λ. Appropriate values of r and λ must therefore be determined in order to obtain the optimal clustering scheme, for which the statistical test R² is used: the ratio of the between-class sum of squared deviations to the total sum of squared deviations serves as the test standard for the clustering effect. The greater the proportion of the between-class sum of squared deviations, the smaller the proportion of the within-class sum of squared deviations, indicating a better classification.
To facilitate the calculation, the class center value is defined as follows. Let H_{ij,s}^k be the hesitant fuzzy linguistic term set given by expert e_k for attribute f_j of object a_i, and let N_{ij,v}^k denote the number of times the linguistic term s_v (v = -τ, ..., 0, ..., τ) appears in H_{ij,s}^k, that is, N_{ij,v}^k = 1 when s_v ∈ H_{ij,s}^k and N_{ij,v}^k = 0 otherwise. For an expert class E, N_{ij,v}^E = Σ_{k∈E} N_{ij,v}^k is the number of occurrences of the linguistic term s_v in the information given by the experts of class E for attribute f_j of object a_i. The center value of expert class E on attribute f_j of object a_i can then be expressed as o_{ij}^E = Σ_{v=-τ}^{τ} (N_{ij,v}^E / Σ_{v=-τ}^{τ} N_{ij,v}^E) I(s_v), where I(s_v) is the numerical value of the subscript v of the linguistic term s_v, and the class center value matrix is O^E = (o_{ij}^E)_{M×N}. The value given by expert e_k for attribute f_j of object a_i can be expressed as ρ_{ij}^k = Σ_{v=-τ}^{τ} (N_{ij,v}^k / Σ_{v=-τ}^{τ} N_{ij,v}^k) I(s_v), and the center value of all experts for attribute f_j of object a_i is o_{ij} = (1/K) Σ_{k=1}^{K} ρ_{ij}^k. The test index I_p is defined as follows:
I_p = [Σ_{r_s=1}^{R_s} N_{r_s} (Σ_{i=1}^{M} Σ_{j=1}^{N} (o_{ij}^{r_s} - o_{ij})^2)] / [Σ_{r_s=1}^{R_s} Σ_{k∈c_{r_s}} (Σ_{i=1}^{M} Σ_{j=1}^{N} (ρ_{ij}^k - o_{ij})^2)], (3)
where Σ_{r_s=1}^{R_s} N_{r_s} (Σ_{i=1}^{M} Σ_{j=1}^{N} (o_{ij}^{r_s} - o_{ij})^2) is the between-class sum of squared deviations; N_{r_s} is the total number of experts in class r_s; and Σ_{r_s=1}^{R_s} Σ_{k∈c_{r_s}} (Σ_{i=1}^{M} Σ_{j=1}^{N} (ρ_{ij}^k - o_{ij})^2) is the sum of squared deviations between the information of all experts in all classes and the center values of all expert information.
The value of the test index I_p is proportional to the between-class sum of squared deviations and inversely proportional to the total sum of squared deviations. The classification is meaningless when all experts fall into a single class or when each expert forms a class by itself. Moreover, I_p depends directly on the classification: the classification result is determined by the values of the parameters r and λ, and these parameters must be adjusted continuously according to the change of I_p to achieve the best classification effect. In the decision-making process, all possible values of the parameters r and λ can be traversed, and the values corresponding to the maximum of the test index I_p are selected to obtain the optimal expert classification result.
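The sketch below computes the test index I_p of Formula (3), assuming the per-expert cell values ρ_{ij}^k have been computed and approximating each class center o_{ij}^{r_s} by the mean of its members' values, which is a simplification of the count-based definition above.

```python
# Sketch of the classification test index I_p (Formula (3)).

def test_index(rho, classes):
    """rho[k][i][j]: value of expert k on cell (i, j); classes: lists of expert indices."""
    n_experts, m, n = len(rho), len(rho[0]), len(rho[0][0])
    # global center o_ij: mean over all experts
    o = [[sum(rho[k][i][j] for k in range(n_experts)) / n_experts
          for j in range(n)] for i in range(m)]
    between, total = 0.0, 0.0
    for cls in classes:
        # class center (approximated here as the mean of the member experts' values)
        oc = [[sum(rho[k][i][j] for k in cls) / len(cls) for j in range(n)] for i in range(m)]
        between += len(cls) * sum((oc[i][j] - o[i][j]) ** 2
                                  for i in range(m) for j in range(n))
        total += sum((rho[k][i][j] - o[i][j]) ** 2
                     for k in cls for i in range(m) for j in range(n))
    return between / total if total else 0.0
```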

3.2. Aggregation Method of Group Decision Information Based on Hesitant Fuzzy Linguistics

3.2.1. In-Class Information Aggregation Based on the Consensus Model

After the final expert group classification results are obtained in Section 3.1, aggregating the information within each expert class is a prerequisite for group decision-making. In the group decision-making process, because experts differ in their level of knowledge and external environment, it is generally necessary to determine the weight assigned to each expert before assembling the decision information of different experts, and the decision information is then integrated with these weights. However, most expert weights are determined subjectively, so the reliability of the weight information is hard to guarantee; moreover, the precise weight of each expert is difficult to determine in a large group decision-making process. In assembling the expert decision information within a class, it has been found that once the similarity of the in-class decision information reaches a certain level, adjusting the weights assigned to the experts makes almost no difference to the aggregated information. Besides, in actual group decision-making, an internal agreement is usually reached first and the consensus information is then integrated in order to improve decision efficiency. Therefore, in this paper, when the in-class information is assembled, a group consensus model is established first, and the in-class decision information is aggregated after a consensus has been reached.
In this paper, the similarity sm_s^{k,l} of Formulas (1) and (2) is used as the degree of consensus between decision experts e_k and e_l. If the decision information matrices of the two experts are denoted by R^k and R^l, their consensus level can be expressed as:
CL(R^k, R^l) = sm_s^{k,l} = (1/M) Σ_{i=1}^{M} Σ_{j=1}^{N} w_j × sm_{ij,s}^{k,l}, (4)
For the decision matrices {R^1, R^2, ..., R^t} of all experts in class C_{r_s}, the group consensus is:
CL(C_{r_s}) = CL{R^1, R^2, ..., R^t} = (1/t^2) Σ_{k=1}^{t} Σ_{l=1}^{t} CL(R^k, R^l), (5)
If the similarity sm_{ij,s}^{k,l} is used as the degree of consensus of the two decision-makers on attribute f_j of decision object a_i, that is, CL_{ij}^{k,l} = sm_{ij,s}^{k,l}, then the degree of consensus between the k-th expert and the other experts of the class on element (i, j) can be expressed as:
CL_{ij}^k = (1/t) Σ_{l=1}^{t} CL_{ij}^{k,l}, (6)
For element (i, j), the consensus of the expert class is:
CL_{ij} = (1/t) Σ_{k=1}^{t} CL_{ij}^k = (1/t^2) Σ_{k=1}^{t} Σ_{l=1}^{t} CL_{ij}^{k,l}, (7)
A consensus model is established to improve the consensus of the group information {R^1, R^2, ..., R^t}; group consensus is then achieved by adjusting the expert decision information as little as possible. The specific adjustment process is as follows [11]:
Step 1: Determine the group consensus level standard C̄L; set the number of adjustments a = 0 and the initial group decision matrices R_0^k = (H_{ij,0}^k)_{M×N} = (H_{ij}^k)_{M×N}, k = 1, 2, ..., t.
Step 2: Calculate the degree of consensus of {R_a^1, R_a^2, ..., R_a^t}. If CL_a ≥ C̄L, go to Step 4; otherwise, go to Step 3.
Step 3: Adjust the decision matrix information.
① Determine where the decision information needs to be adjusted. Let (p, q) be the position with CL_{pq} = min{CL_{ij}}, and let T be the expert with CL_{pq}^T = min_{1≤k≤t}{CL_{pq}^k}; the hesitant fuzzy linguistic term set H_{pq,a}^T at position (p, q) of expert e_T's decision matrix is the one to be adjusted.
② The adjusted hesitant fuzzy linguistic term set is H_{pq,a+1}^T = round(μ H_{pq,a}^T ⊕ (1 - μ) H_{pq,a}^c), where the adjustment coefficient μ satisfies 0 < μ < 1 and H_{pq,a}^c is the aggregate obtained with the hesitant fuzzy linguistic arithmetic mean (HFLA) operator, H_{pq,a}^c = HFLA(H_{pq,a}^1, H_{pq,a}^2, ..., H_{pq,a}^t); that is, round(μ H_{pq,a}^T ⊕ (1 - μ) H_{pq,a}^c) = {round(μ h_{pq,a}^T ⊕ (1 - μ) h_{pq,a}^c) | h_{pq,a}^T ∈ H_{pq,a}^T, h_{pq,a}^c ∈ H_{pq,a}^c}.
③ Keep the elements at the other positions unchanged. The decision matrix obtained after adjusting the element at position (p, q) becomes the new decision information matrix R_{a+1}^T, that is, R_{a+1}^k = R_{a+1}^T if k = T and R_{a+1}^k = R_a^k if k ≠ T; let a = a + 1 and go to Step 2.
Step 4: Let R^k = R_a^k, k = 1, 2, ..., t, and output the hesitant fuzzy linguistic decision matrices after a consensus has been reached.
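A minimal sketch of the in-class consensus loop is given below. It assumes each decision matrix is a list of rows of cells, each cell a set of integer term subscripts, and it replaces the element-wise HFLA adjustment of Step 3-② with a simplified bound-wise version; all names are illustrative.

```python
# Sketch of the consensus-reaching adjustment (Steps 1-4 above).

def cell_sim(h_k, h_l, tau):
    """Cell-level similarity sm_{ij,s}^{k,l} from Formula (1)."""
    nsm = len(h_k & h_l) / len(h_k | h_l)
    dist = (abs(max(h_k) - max(h_l)) + abs(min(h_k) - min(h_l))) / (4 * tau)
    return nsm + (1 - nsm) * (1 - dist)

def hfla(cells):
    """Simplified class mean: average the lower and upper bound subscripts of the cells."""
    lo = round(sum(min(c) for c in cells) / len(cells))
    hi = round(sum(max(c) for c in cells) / len(cells))
    return set(range(lo, hi + 1))

def reach_consensus(mats, weights, tau, cl_bar=0.9, mu=0.5, max_rounds=100):
    t, m, n = len(mats), len(mats[0]), len(mats[0][0])

    def cl_cell(i, j):   # CL_ij: mean pairwise cell consensus (Formula (7))
        return sum(cell_sim(mats[k][i][j], mats[l][i][j], tau)
                   for k in range(t) for l in range(t)) / (t * t)

    for _ in range(max_rounds):
        # group consensus of Formula (5), expressed through the cell consensus values
        cl = sum(weights[j] * cl_cell(i, j) for i in range(m) for j in range(n)) / m
        if cl >= cl_bar:
            break
        # Step 3-①: lowest-consensus cell (p, q), then the least-agreeing expert T there
        p, q = min(((i, j) for i in range(m) for j in range(n)),
                   key=lambda ij: cl_cell(*ij))
        T = min(range(t), key=lambda k: sum(cell_sim(mats[k][p][q], mats[l][p][q], tau)
                                            for l in range(t)) / t)
        # Step 3-②: move T's cell toward the class mean and round (bound-wise simplification)
        center = hfla([mats[l][p][q] for l in range(t)])
        lo = round(mu * min(mats[T][p][q]) + (1 - mu) * min(center))
        hi = round(mu * max(mats[T][p][q]) + (1 - mu) * max(center))
        mats[T][p][q] = set(range(lo, hi + 1))
    return mats
```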
For the same class of experts, once a consensus has been reached, the weights assigned to the experts in the information aggregation can be considered equal because their decision information is very similar. There are usually two ways to aggregate hesitant fuzzy linguistic terms. One expands the discrete hesitant fuzzy linguistic term sets into continuous linguistic term sets and then aggregates the information; however, this enlarges the uncertainty of the expert information during aggregation and leads to a loss of expert decision information. The other converts the hesitant fuzzy linguistic information into probabilistic linguistic combinations, treating each linguistic term in a hesitant fuzzy linguistic term set with equal probability: when an expert hesitates among multiple linguistic terms while making a decision, these terms are regarded as equally likely to occur. In this way, all possible information about the opinions of the expert group is retained. Therefore, the probabilistic linguistic transformation is used in this paper to gather the expert information within a class so as to minimize information loss during the assembly process. The specific assembly process is as follows:
The probabilistic linguistic combination PH_s = {(s_v, p_v) | v = -τ, ..., 0, ..., τ} was proposed in [17,18], where p_v is the probability corresponding to the linguistic term s_v, p_v ∈ [0, 1], and Σ_{v=-τ}^{τ} p_v = 1. If the experts in class C_{r_s} select the term s_v for attribute f_j of decision object a_i a total of N_{ij,v}^{r_s} times, then p_{ij,v}^{r_s} = N_{ij,v}^{r_s} / Σ_{v=-τ}^{τ} N_{ij,v}^{r_s}, where p_{ij,v}^{r_s} ∈ [0, 1] and Σ_{v=-τ}^{τ} p_{ij,v}^{r_s} = 1. In turn, the decision information of all of the experts in class C_{r_s} can be transformed into the probabilistic linguistic combination:
PH_s^{r_s} = {(s_{ij,v}^{r_s}, p_{ij,v}^{r_s}) | s_{ij,v}^{r_s} ∈ S; i = 1, ..., M; j = 1, ..., N; v = -τ, ..., τ; r_s = 1, ..., R_s}.
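A sketch of this transformation is shown below, again assuming each cell is a set of integer term subscripts; the probability of each term is simply its relative frequency among the experts of the class.

```python
# Sketch of the in-class probabilistic linguistic transformation.
from collections import Counter

def class_to_probabilistic(mats_in_class, tau):
    """Return prob[i][j] = {v: p_v} built from the hesitant sets of the class members."""
    m, n = len(mats_in_class[0]), len(mats_in_class[0][0])
    prob = [[None] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            counts = Counter(v for mat in mats_in_class for v in mat[i][j])
            total = sum(counts.values())
            prob[i][j] = {v: counts[v] / total for v in range(-tau, tau + 1)}
    return prob
```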

3.2.2. Inter-Class Information Aggregation Based on a Class Comprehensive Weight

Unlike the aggregation of expert decision information within a class, the aggregation results of different expert classes often differ considerably, so the influence of these inter-class differences must be considered in the inter-class aggregation. A first class weight is determined by the ratio of the number of experts in a class to the total number of experts: the larger the number of experts in the class, the greater its influence on the decision result and the greater the class weight. The weight calculation formula is as follows:
w_n^{r_s} = N_{r_s} / Σ_{r_s=1}^{R_s} N_{r_s}, (8)
where N r s is the total number of experts in class r s .
The proportion of experts in a class directly reflects the influence of that class's decision information on the final result, and the decision result should be close to the consensus of the majority of experts. In practice, however, the truth is sometimes grasped by only a few people, so some decision-making problems are prone to producing inconsistent opinions within the expert group, and determining the class weight only from the proportion of experts is then clearly inaccurate. In this paper, a class deviation weight w_sm^{r_s} is therefore determined from the degree of deviation between classes: the larger the mean deviation between a class and the other expert classes, the greater the deviation weight given to that class. The deviation is measured by the distance between the matrices of class center values; the larger the distance, the lower the similarity between the classes and the greater the class deviation weight. Thus, the average distance between the center values of a class and those of the other classes is proportional to the class deviation weight w_sm^{r_s}. The specific formulas are as follows:
D_{r_s} = [Σ_{l=1, l≠r_s}^{R_s} Σ_{i=1}^{M} Σ_{j=1}^{N} (o_{ij}^{r_s} - o_{ij}^{l})^2] / (|R_s| - 1), (9)
w_sm^{r_s} = D_{r_s} / Σ_{r_s∈R_s} D_{r_s}, (10)
where D_{r_s} represents the average distance between class r_s and the other expert classes, R_s is the set of expert classes, and |R_s| is the number of expert classes.
The scale of each expert class also needs to be considered when calculating the class weight. For a specific decision problem, a preference coefficient σ (σ ∈ [0, 1]) is set in the process of inter-class information aggregation [19]; the class weights w_n^{r_s} and w_sm^{r_s} are then combined to obtain the final expert class weight:
w^{r_s} = σ w_n^{r_s} + (1 - σ) w_sm^{r_s}, (11)
Under normal circumstances, σ > 0.5 is taken when the decision result should be biased toward the majority of expert opinions, and σ < 0.5 when the inconsistent opinions need more attention; unless otherwise specified, σ = 0.5.
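The sketch below combines Formulas (8)–(11) into one routine, assuming at least two expert classes and taking each class center matrix o_{ij}^{r_s} as input; the squared-difference distance follows the reconstruction of Formula (9) above.

```python
# Sketch of the class size weight, deviation weight, and their sigma blend (Formulas (8)-(11)).

def class_weights(class_centers, class_sizes, sigma=0.5):
    """class_centers[r]: the o_ij matrix of class r; class_sizes[r]: its number of experts."""
    R = len(class_centers)
    total_experts = sum(class_sizes)
    w_n = [size / total_experts for size in class_sizes]                 # Formula (8)

    def dist(a, b):      # squared-difference distance between two center matrices
        return sum((x - y) ** 2 for row_a, row_b in zip(a, b) for x, y in zip(row_a, row_b))

    d = [sum(dist(class_centers[r], class_centers[l]) for l in range(R) if l != r) / (R - 1)
         for r in range(R)]                                              # Formula (9)
    w_sm = [d_r / sum(d) for d_r in d]                                   # Formula (10)
    return [sigma * wn + (1 - sigma) * ws for wn, ws in zip(w_n, w_sm)]  # Formula (11)
```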
The final decision result can be obtained by further integration of the expert information. The probabilistic linguistic combination for each decision object is:
PH_s = {(s_v, p_{i,v}) | i = 1, ..., M; v = -τ, ..., τ},
where p_{i,v} = Σ_{j=1}^{N} w_j Σ_{r_s=1}^{R_s} w^{r_s} p_{ij,v}^{r_s}.
The expected value of each decision object's probabilistic linguistic combination is then calculated, and different decision objects are compared by their expected values:
E_i = Σ_{v=-τ}^{τ} I(s_v) p_{i,v}, (12)
For decision objects a_i and a_j (i, j = 1, ..., M), if E_i > E_j, the ranking result is a_i ≻ a_j; if E_i < E_j, the result is a_i ≺ a_j; if E_i = E_j, the mean squared deviation of each decision object's probabilistic linguistic combination needs to be calculated further; the calculation formula is as follows:
S_i^2 = (1/(2τ + 1)) Σ_{v=-τ}^{τ} (I(s_v) p_{i,v} - E_i)^2, (13)
If S_i^2 > S_j^2, the decision result is a_i ≺ a_j; if S_i^2 < S_j^2, the decision result is a_i ≻ a_j; and if S_i^2 = S_j^2, then a_i ∼ a_j.
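The ranking step can be sketched as follows, assuming each object's probabilistic linguistic combination is a dict mapping term subscripts to probabilities; the tie-breaking direction (smaller deviation preferred) follows the reconstruction of the comparison rule above.

```python
# Sketch of ranking by expected value (Formula (12)) with deviation tie-breaking (Formula (13)).

def expected_value(prob):
    """E_i = sum over v of I(s_v) * p_{i,v}; prob maps subscripts to probabilities."""
    return sum(v * p for v, p in prob.items())

def deviation(prob, tau):
    """S_i^2: mean squared deviation of the weighted subscripts from the expected value."""
    e = expected_value(prob)
    return sum((v * p - e) ** 2 for v, p in prob.items()) / (2 * tau + 1)

def rank_objects(probs, tau):
    """Indices of the objects sorted best-first: larger E_i, then smaller S_i^2."""
    return sorted(range(len(probs)),
                  key=lambda i: (-expected_value(probs[i]), deviation(probs[i], tau)))
```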
In summary, the specific steps of the hesitant fuzzy linguistic decision-making method based on clustered consensus information integration are summarized as follows [20]:
Step 1: Generate a classification based on the expert decision information. The similarity sm_s^{k,l} between the experts is calculated according to Formulas (1) and (2); then, the expert group is classified by the clustering algorithm based on breadth-first search of neighbors. The final expert classification result C = {c_1, ..., c_{R_s}} is obtained by continuously adjusting the parameters r and λ and checking the classification effect with the test index I_p.
Step 2: Assembly of expert decision information within the class. Determine the consensus level standard C L ¯ of the group; continuously adjust the expert decision information through the consensus achievement model; and reach a consensus on the decision-making opinions of experts in the class. Afterward, convert the decision information of all experts in the c r s class into the probabilistic language combination P H s r s .
Step 3: Assembly of expert decision information between classes. Set the preference coefficient σ and calculate the class comprehensive weight w r s through Formulas (8)–(11) according to the expert decision information before the model adjustment. After further integration of the expert information, the probabilistic language combination P H s of each decision object of the final decision result can be obtained.
Step 4: Sort the decision objects. Calculate the expected value of each decision object according to Formula (12); further calculate the sum of the mean squared deviations of the probability linguistic combinations of each decision object according to Formula (13). Then, sort all decision objects according to the comparison method in the text.

4. Example Analysis

In this paper, the water resource sustainability evaluation of three cities {a_1, a_2, a_3} is taken as an example. The three aspects of resource endowment, social economy, and environment are evaluated comprehensively. The evaluation indices are denoted {f_1, f_2, f_3}, and the index weights given by the experts are 0.35, 0.35, and 0.3. The predetermined set of evaluation linguistic terms is S = {s_{-3}, s_{-2}, s_{-1}, s_0, s_1, s_2, s_3}, where s_{-3} to s_3 represent extremely bad, bad, slightly bad, fair, slightly good, good, and excellent, respectively. According to this linguistic term set, the experts judge the evaluation indicators through linguistic information in the sustainability assessment of water resources in the three cities. In this case, 30 experts from relevant fields were selected to participate in the evaluation. The expert linguistic information was converted into hesitant fuzzy linguistic terms by a conversion function; the resulting hesitant fuzzy linguistic term set information is shown in Table 1. The water resource sustainability of the three cities is then ranked according to the hesitant fuzzy linguistic information given by the experts in Table 1 for the corresponding attributes of the three cities.
The specific evaluation process is described as follows.
Step 1: Generate a classification based on the expert decision information. The similarity sm_s^{k,l} between experts is calculated according to Formulas (1) and (2), where k, l = 1, ..., 30. On this basis, the expert group is classified by the clustering algorithm based on breadth-first search of neighbors. The classification parameters r and λ are adjusted, and the classification effect test index I_p is calculated for each resulting classification. Classification is meaningless when r < 0.8 and need not be considered, so the effective ranges of the parameters r and λ are [0.8, 1] and [0, 1]. To traverse all possible values of r and λ and to analyze visually how the test index I_p depends on them, a step of 0.01 was used. The parameter values were selected in turn within the ranges r ∈ [0.8, 1] and λ ∈ [0, 1], and I_p was calculated for each classification result, as shown in Figure 1.
The red marked point in Figure 1 is the maximum value of the index I_p. When r = 0.9 and λ ∈ [0.26, 0.60], I_p reaches its maximum value of 0.1332. The experts are then divided into four classes, and the classification results are:
c 1 = { e 1 , e 5 , e 8 , e 11 , e 16 , e 19 , e 24 , e 29 } ;   c 2 = { e 2 , e 7 , e 10 , e 17 , e 20 , e 21 , e 26 , e 30 } ;
c 3 = { e 3 , e 6 , e 12 , e 15 , e 18 , e 23 , e 25 } ;   c 4 = { e 4 , e 9 , e 13 , e 14 , e 22 , e 27 , e 28 } .
Step 2: Assembly of expert decision information within the class.
① Calculate the degree of consensus of the four expert classes: CL_1 = 0.9593, CL_2 = 0.9521, CL_3 = 0.9476, and CL_4 = 0.8964. Set the group consensus level to C̄L = 0.9. Since CL_4 < C̄L, the decision information of the fourth class of experts needs to be adjusted.
② Let (p, q) be the position with CL_{pq} = min{CL_{ij}} and CL_{pq}^T = min_{1≤k≤t}{CL_{pq}^k}; the calculation of the consensus degrees shows that the decision information to be adjusted is expert e_28's hesitant fuzzy term set H_{22,1}^{28} = {s_1, s_2} for attribute f_2 of decision object a_2.
③ Set the adjustment parameter μ = 0.5. The fourth class's aggregate hesitant fuzzy term set for attribute f_2 of decision object a_2, obtained with the arithmetic average operator, is H_{22,1}^c = HFLA(H_{22,1}^4, H_{22,1}^9, H_{22,1}^13, H_{22,1}^14, H_{22,1}^22, H_{22,1}^27, H_{22,1}^28) = {s_{-1}, s_0}, and the modified hesitant fuzzy term set is H_{22,2}^{28} = {s_0, s_1}; thus, the information of the fourth class is modified. The group consensus of this class becomes CL_4 = 0.9357, so all four expert classes have reached intra-class consensus (a numerical check of this adjustment follows this list).
④ Keep the other decision information unchanged; replace expert e_28's hesitant fuzzy term set for attribute f_2 of decision object a_2 in the expert decision matrix with the revised set H_{22,2}^{28} to form a new expert decision matrix. Then, convert the hesitant fuzzy linguistic decision matrices of each expert class into the probabilistic linguistic combination PH_s^{r_s} = {(s_{ij,v}^{r_s}, p_{ij,v}^{r_s}) | s_{ij,v}^{r_s} ∈ S}, i = 1, ..., 3; j = 1, ..., 3; v = -3, ..., 3; r_s = 1, ..., 4.
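As a quick numerical check of the adjustment in ③ (working on the term subscripts with μ = 0.5): round(0.5 × 1 + 0.5 × (-1)) = 0 and round(0.5 × 2 + 0.5 × 0) = 1, which reproduces the adjusted set H_{22,2}^{28} = {s_0, s_1}.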
Step 3: Assembly of expert decision information between classes. Set the preference coefficient σ = 0.5; according to the calculation method of the expert class weights in this paper, the class weights of the expert classes c_1, c_2, c_3, and c_4 are 0.2598, 0.2768, 0.2328, and 0.2306, respectively. On this basis, the inter-class information is assembled; the final decision results for the three cities are:
PH_{s,1} = {(s_{-3}, 0.3094), (s_{-2}, 0.4988), (s_{-1}, 0.1741), (s_0, 0.0177), (s_1, 0), (s_2, 0), (s_3, 0)};
PH_{s,2} = {(s_{-3}, 0.1770), (s_{-2}, 0.5153), (s_{-1}, 0.2522), (s_0, 0.0467), (s_1, 0.0231), (s_2, 0.0058), (s_3, 0)};
PH_{s,3} = {(s_{-3}, 0.1641), (s_{-2}, 0.5850), (s_{-1}, 0.2245), (s_0, 0.0217), (s_1, 0.0046), (s_2, 0), (s_3, 0)}.
Step 4: Sort the decision objects. According to Formula (12), the expected values of the three cities are E_1 = -2.0999, E_2 = -1.7190, and E_3 = -1.8823. The final ranking result is a_2 ≻ a_3 ≻ a_1; that is, the sustainable development levels of water resources in the three cities are ranked as a_2, a_3, and a_1, and the best alternative is a_2.
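As a check of Formula (12) against the reported values, E_1 = (-3)(0.3094) + (-2)(0.4988) + (-1)(0.1741) + (0)(0.0177) = -2.0999, which matches the expected value used in the ranking above.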

5. Discussion and Comparison

In order to further verify the effectiveness of the proposed method, the consensus model [11] and the classification aggregation model [19] were used to handle the decision-making problem of the example. The final decision results of the three methods are shown in Table 2.
When the consensus model is used to make a direct decision, all 30 experts must reach a consensus at the same time, and the differing expert information must be adjusted repeatedly during the decision-making process to ensure the required level of group consensus. Compared with the method that first classifies the experts and then promotes consensus within each class, the amount of expert information that has to be adjusted when using the consensus model directly is significantly larger. The expert decision information after the consensus adjustment then differs considerably from the original information, which changes the decision opinions of the expert group. Table 2 shows that the ranking result of the consensus model is slightly different from those of the other two methods, because the consensus model has to modify the expert decision information repeatedly during the decision-making process, which introduces a certain error into the final decision result.
In the example decision-making process with the classification aggregation model, although the differences between experts are reduced by the expert classification, the weights assigned to all experts still need to be calculated first during the aggregation of expert decision information, and these weights have a great impact on the final decision results. In addition, when the expert group is large, problems associated with averaging the expert weights easily appear. In this paper, the consensus model is used to reduce the amount of adjustment of the expert decision information so that the classified expert groups reach consensus within each class; in the actual decision-making process, the weight differences among experts of the same class then no longer need to be considered, which improves the efficiency of group decision-making. Table 2 shows that the ranking result of the classification aggregation method is the same as that of the method proposed in this paper, but the proposed method does not need to pre-specify the weights assigned to experts, which avoids the human error caused by subjective weight assignment.
Therefore, the decision-making method proposed in this paper not only avoids the errors that may occur during the integration of decision information, but also has better applicability.

6. Conclusions

In this paper, a hesitant fuzzy linguistic group decision-making method based on a cluster consensus model was proposed to solve large expert group decision-making problems. In this method, the subjective assignment of expert weights is avoided through the classification and assembly process, the reliability of the decision-making process is improved, and the calculation process becomes clear and simple. In the decision-making process, the similarity between expert information is calculated from the similarity of hesitant fuzzy linguistic information, and the expert group is then classified using a clustering algorithm based on breadth-first search of neighbors. Reasonable classification results are obtained by adjusting the classification parameters r and λ and using the classification effect test index. Within each class, the adaptive consensus decision model adjusts the relevant decision information until all of the experts in the class reach a consensus, after which the expert decision information is transformed into probabilistic linguistic information. This procedure not only conforms to the general pattern of group decision-making, but also reduces the loss of information during the assembly process. For the aggregation of decision information between classes, the differences in the size of the expert classes and in the group opinions are considered in the calculation of the expert class weights in order to obtain more reliable decision results. The effectiveness of the proposed method was further validated by the analysis of a case study. With the proposed method, the decision-making problems of large expert groups can be solved more scientifically and effectively.

Author Contributions

Conceptualization, H.X. and L.W.; methodology, H.X. and L.W.; validation, L.W.; formal analysis, L.W.; data curation, L.W.; writing—original draft preparation, L.W. and H.X.; writing—review and editing, L.W. All authors read and agreed to the published version of the manuscript.

Funding

This research was funded by “the National Natural Science Foundation of China” (grant number U1501253) and “the Guangdong Science and Technology Program” (grant number 2016B010127005).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zadeh, L.A. The concept of a linguistic variable and its application to approximate reasoning—I. Inf. Sci. 1975, 8, 199–249. [Google Scholar] [CrossRef]
  2. Torra, V. Hesitant fuzzy sets. Int. J. Intell. Syst. 2010, 25, 529. [Google Scholar] [CrossRef]
  3. Liao, H.; Xu, Z.; Zeng, X.J.; Merigó, J.M. Qualitative decision making with correlation coefficients of hesitant fuzzy linguistic term sets. Knowledge-Based Syst. 2015, 76, 127–138. [Google Scholar] [CrossRef]
  4. Liao, H.; Qin, R.; Gao, C.; Wu, X.; Hafezalkotob, A.; Herrera, F. Score-HeDLiSF: A score function of hesitant fuzzy linguistic term set based on hesitant degrees and linguistic scale functions: An application to unbalanced hesitant fuzzy linguistic MULTIMOORA. Inf. Fusion 2018, 48, 39–54. [Google Scholar] [CrossRef]
  5. Chen, Z.; Chin, K.S. Proportional hesitant fuzzy linguistic term set for multiple criteria group decision making. Inf. Sci. 2016, 357, 61–87. [Google Scholar] [CrossRef]
  6. Zafeiris, A.; Koman, Z. Phenomenological theory of collective decision-making. Phys. A-Stat. Appl. 2017, 479, 287–298. [Google Scholar]
  7. Wang, H.; Xu, Z.S.; Zeng, X.J. Hesitant fuzzy linguistic term sets for linguistic decision making: Current developments, issues and challenges. Inf. Fusion 2018, 43, 1–12. [Google Scholar] [CrossRef]
  8. Wu, Z.; Xu, J. A Consensus Process for Decision Making with Hesitant Fuzzy Linguistic Term Sets. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Kowloon, China, 9–12 October 2015. [Google Scholar] [CrossRef]
  9. Wu, Z.B.; Xu, J.P. Possibility distribution-based approach for MAGDM with hesitant fuzzy linguistic information. IEEE Trans. Cybern. 2016, 46, 694–705. [Google Scholar] [CrossRef] [PubMed]
  10. Zhang, B.W.; Liang, H.M.; Zhang, G.Q. Reaching a consensus with minimum adjustment in MAGDM with hesitant fuzzy linguistic term sets. Inf. Fusion 2018, 42, 12–23. [Google Scholar] [CrossRef]
  11. Wei, C.P.; Ma, J. Consensus model for hesitant fuzzy linguistic group decision making. Control Decis. 2018, 33, 275–281. [Google Scholar]
  12. Ma, Z.; Zhu, J.; Ponnambalam, K.; Zhang, S. A clustering method for large-scale group decision-making with multi-stage hesitant fuzzy linguistic terms. Inf. Fusion 2019, 50, 231–250. [Google Scholar] [CrossRef]
  13. Wang, Y.; Ma, X.L. A fuzzy-based customer clustering approach with hierarchical structure for logistics network optimization. Expert Syst. Appl. 2014, 41, 521–534. [Google Scholar] [CrossRef]
  14. Rodriguez, R.M.; Martinez, L. Hesitant Fuzzy Linguistic Term Sets for Decision Making. IEEE Trans. Fuzzy Syst. 2012, 20, 109–119. [Google Scholar] [CrossRef]
  15. Dong, Y.; Li, C.C.; Herrera, F. Connecting the linguistic hierarchy and the numerical scale for the 2-tuple linguistic model and its use to deal with hesitant unbalanced linguistic information. Inf. Sci. 2016, 367, 259–278. [Google Scholar] [CrossRef]
  16. Liu, Q.; Feng, X.Q.; Zhang, H. Multiattribute Decision-Making Method of Hesitant Fuzzy Language Based on Similarity. Stat. Decis. 2017, 19, 40–44. [Google Scholar]
  17. Zhang, G.Q.; Dong, Y.C. Consistency and consensus measures for linguistic preference relations based on distribution assessments. Inf. Fusion 2014, 17, 46–55. [Google Scholar] [CrossRef]
  18. Yan, H.B.; Ma, T. A group decision-making approach to uncertain quality function deployment based on fuzzy preference relation and fuzzy majority. Eur. J. Oper. Res. 2015, 241, 815–829. [Google Scholar] [CrossRef]
  19. Ma, Z.Z.; Zhu, J.J. Classification-based aggregation model on large scale group decision making with hesitant fuzzy linguistic information. Control Decis. 2019, 34, 167–179. [Google Scholar]
  20. Zhang, S.T.; Liu, X.D.; Zhu, J.J.; Wang, Z.Y. Adaptive consensus model with hesitant fuzzy linguistic information considering individual cumulative consensus contribution. Control Decis. 2019. [Google Scholar] [CrossRef]
Figure 1. Classification effect test indicator change trend.
Table 1. The hesitant fuzzy linguistic information given by 30 experts.
a 1 a 2 a 3
f 1 f 2 f 3 f 1 f 2 f 3 f 1 f 2 f 3
e 1 { s 1 } { s 2 , s 3 } { s 2 } { s 1 } { s 2 } { s 1 , s 2 } { s 2 } { s 1 } { s 1 }
e 2 { s 1 , s 0 } { s 0 } { s 0 } { s 0 , s 1 } { s 2 } { s 2 } { s 2 , s 3 } { s 2 } { s 1 }
e 3 { s 0 } { s 1 , s 2 } { s 0 } { s 1 } { s 0 , s 1 } { s 2 } { s 2 } { s 1 , s 2 } { s 2 }
e 4 { s 2 } { s 2 , s 3 } { s 1 } { s 0 , s 1 } { s 2 } { s 1 } { s 2 } { s 1 } { s 1 , s 0 }
e 5 { s 2 , s 1 } { s 2 } { s 1 , s 2 } { s 2 , s 1 } { s 3 , s 2 } { s 1 , s 2 } { s 1 , s 2 } { s 1 , s 2 } { s 0 , s 1 }
e 6 { s 0 , s 1 } { s 1 } { s 0 , s 1 } { s 1 } { s 1 , s 0 } { s 2 } { s 2 , s 3 } { s 1 , s 2 } { s 1 , s 2 }
e 7 { s 0 } { s 0 } { s 1 , s 0 } { s 0 , s 1 } { s 2 , s 1 } { s 2 , s 3 } { s 3 } { s 2 , s 3 } { s 1 }
e 8 { s 1 } { s 1 , s 2 , s 3 } { s 2 } { s 1 , s 0 } { s 2 , s 1 } { s 1 , s 2 } { s 2 } { s 1 , s 2 } { s 0 , s 1 }
e 9 { s 1 , s 2 } { s 2 , s 3 } { s 1 , s 2 } { s 1 } { s 2 , s 1 , s 0 } { s 1 } { s 2 , s 3 } { s 1 , s 2 } { s 1 , s 0 , s 1 }
e 10 { s 1 , s 0 } { s 0 , s 1 } { s 0 , s 1 } { s 1 , s 0 , s 1 } { s 1 } { s 2 } { s 2 } { s 1 , s 2 } { s 2 , s 1 }
e 11 { s 2 , s 1 } { s 2 , s 3 } { s 2 , s 3 } { s 1 } { s 2 , s 1 } { s 1 , s 2 } { s 1 , s 2 } { s 1 } { s 1 , s 2 }
e 12 { s 0 , s 1 } { s 1 } { s 0 , s 1 } { s 0 , s 1 } { s 2 } { s 1 , s 2 } { s 1 , s 2 } { s 1 , s 2 } { s 1 , s 2 }
e 13 { s 1 , s 2 } { s 1 , s 2 , s 3 } { s 1 , s 2 } { s 1 } { s 1 , s 0 , s 1 } { s 0 , s 1 } { s 2 , s 3 } { s 1 } { s 1 , s 0 , s 1 }
e 14 { s 1 , s 2 } { s 3 } { s 1 } { s 0 , s 1 } { s 3 , s 2 } { s 0 , s 1 } { s 2 } { s 0 , s 1 } { s 1 , s 2 , s 3 }
e 15 { s 0 , s 1 } { s 1 , s 2 , s 3 } { s 1 , s 2 } { s 1 , s 2 } { s 0 , s 1 } { s 2 } { s 2 } { s 1 , s 2 } { s 2 }
e 16 { s 1 } { s 1 , s 2 , s 3 } { s 2 } { s 1 , s 0 } { s 2 , s 1 } { s 1 , s 2 , s 3 } { s 2 } { s 1 } { s 0 , s 1 }
e 17 { s 2 , s 1 } { s 1 , s 0 } { s 1 , s 0 } { s 1 } { s 2 , s 1 } { s 3 } { s 3 } { s 1 , s 2 } { s 1 }
e 18 { s 1 } { s 0 , s 1 , s 2 } { s 1 , s 0 } { s 1 } { s 1 } { s 2 , s 3 } { s 2 } { s 1 , s 2 , s 3 } { s 2 , s 3 }
e 19 { s 2 } { s 1 , s 2 } { s 1 , s 2 , s 3 } { s 2 , s 1 } { s 3 } { s 1 , s 2 } { s 2 } { s 2 } { s 0 , s 1 , s 2 }
e 20 { s 1 , s 0 } { s 1 , s 0 } { s 0 } { s 0 , s 1 } { s 3 , s 2 } { s 2 , s 3 } { s 2 , s 3 } { s 2 , s 3 } { s 1 }
e 21 { s 1 , s 0 , s 1 } { s 0 , s 1 } { s 1 , s 0 } { s 0 } { s 2 } { s 1 , s 2 } { s 2 } { s 2 } { s 2 , s 1 }
e 22 { s 1 , s 2 } { s 2 , s 3 } { s 1 } { s 0 , s 1 } { s 2 , s 1 } { s 1 } { s 1 , s 2 } { s 0 , s 1 } { s 0 }
e 23 { s 0 } { s 1 , s 2 } { s 0 } { s 0 , s 1 } { s 1 , s 0 , s 1 } { s 2 , s 3 } { s 2 } { s 0 , s 1 , s 2 } { s 2 , s 3 }
e 24 { s 2 , s 1 } { s 3 } { s 1 , s 2 } { s 1 , s 0 } { s 2 } { s 1 , s 2 , s 3 } { s 2 } { s 0 , s 1 } { s 0 , s 1 }
e 25 { s 0 , s 1 } { s 2 , s 3 } { s 0 , s 1 } { s 1 , s 2 } { s 0 , s 1 , s 2 } { s 2 } { s 3 } { s 1 , s 2 , s 3 } { s 2 , s 3 }
e 26 { s 1 , s 0 } { s 1 , s 0 } { s 0 } { s 0 , s 1 , s 2 } { s 2 } { s 2 , s 3 } { s 2 , s 3 } { s 2 } { s 1 , s 0 }
e 27 { s 1 , s 2 } { s 2 } { s 1 , s 2 } { s 1 } { s 1 } { s 1 , s 2 } { s 2 } { s 1 , s 2 } { s 1 }
e 28 { s 2 } { s 2 , s 3 } { s 1 } { s 0 , s 1 , s 2 } { s 1 , s 2 } { s 1 } { s 1 , s 2 } { s 1 } { s 1 , s 0 }
e 29 { s 1 } { s 1 , s 2 , s 3 } { s 2 } { s 1 , s 0 } { s 2 , s 1 } { s 1 , s 2 } { s 2 } { s 1 , s 2 } { s 0 , s 1 }
e 30 { s 1 , s 0 } { s 0 } { s 1 , s 0 } { s 0 , s 1 } { s 2 } { s 2 , s 3 } { s 2 , s 3 } { s 2 } { s 1 }
Table 2. Comparison of the decision-making methods.
Decision-Making Method | Sort Results | Optimal Decision Object
Ref. [11] | a_2 ≻ a_1 ≻ a_3 | a_2
Ref. [19] | a_2 ≻ a_3 ≻ a_1 | a_2
This article | a_2 ≻ a_3 ≻ a_1 | a_2
