Article

Hesitant Probabilistic Multiplicative Preference Relations in Group Decision Making

1 Department of Mathematics, Quaid-i-Azam University, Islamabad 45320, Pakistan
2 Department of Mathematics, University of Management and Technology, Lahore 54770, Pakistan
3 Faculty of Economics and Management, University of Szczecin, 71-101 Szczecin, Poland
4 Department of Artificial Intelligence Method and Applied Mathematics, Faculty of Computer Science and Information Technology, West Pomeranian University of Technology, 71-210 Szczecin, Poland
5 Department of Computer Science, School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland 1010, New Zealand
* Author to whom correspondence should be addressed.
Appl. Sci. 2018, 8(3), 398; https://doi.org/10.3390/app8030398
Submission received: 30 January 2018 / Revised: 28 February 2018 / Accepted: 4 March 2018 / Published: 8 March 2018

Abstract: The preference of one alternative over another is a useful way to express the opinion of the decision-maker. In the process of group decision-making, preference relations are used for preference modeling of the alternatives under given criteria. Probability is an important tool for dealing with uncertainty, and in many decision-making scenarios the probabilities of different events affect the decision-making process directly. In order to deal with this issue, the hesitant probabilistic multiplicative preference relation (HPMPR) is defined in this paper. Furthermore, the consistency of HPMPRs and the consensus among decision-makers are studied. In this respect, several algorithms are developed to achieve consistency of HPMPRs and a reasonable consensus between decision-makers, and a final algorithm comprehending all the others is proposed, presenting a complete decision support model for group decision-making. Lastly, we present a case study with a complete illustration of the proposed model and discuss the effects of probabilities on decision-making, validating the importance of introducing probability into hesitant multiplicative preference relations.

1. Introduction

Fuzzy set theory was initially introduced by Zadeh [1] in 1965 as an extension of classical set theory. In classical set theory, an element either belongs or does not belong to a set. In fuzzy set theory, a gradual assessment of the elements of a set is described by a membership function taking values in [0, 1], and it has been successfully used in decision-making problems [2]. After this popular extension of set theory, several further extensions and generalizations of fuzzy sets have been introduced and successfully applied to practical problems of both real life and science. Various applications of these extensions can be found in artificial intelligence, computer science, medicine, control engineering, decision theory, expert systems, logic, management science, operations research, pattern recognition, and robotics [3,4,5,6]. Torra [7] noted the need for one more extension of fuzzy set theory: the hesitant fuzzy set (HFS), which permits several possible membership degrees of an element in the unit closed interval [0, 1]. The HFS provides a much better description than the other extensions of fuzzy sets when establishing a single membership degree is very difficult and a specific set of possible values, instead of a single value, is chosen as the possible membership values. Many studies on HFSs have been conducted; see [8,9,10,11,12,13,14].
In the quest to model the real world more accurately, it was noted that the occurrence probabilities of the different membership values in an HFS may differ. The HFS does not take occurrence probabilities into account, although they can play a crucial role in decision-making. To overcome this flaw, Xu and Zhou proposed the concept of a hesitant probabilistic fuzzy set [15], an extension of the HFS; they also investigated several aggregation operators with their properties for hesitant probabilistic fuzzy sets, and a novel algorithm was developed to handle multi-criteria group decision-making problems for the hesitant probabilistic fuzzy set [16]. This concept was further extended to the continuous form of the hesitant probabilistic fuzzy set [17]. Pang et al. introduced the idea of a probabilistic linguistic term set [18], allowing a decision-maker (DM) to provide not only an opinion in linguistic terms but also the occurrence probabilities, while Zhai et al. [16] proposed the notion of a probabilistic linguistic vector-term set together with a novel algorithm for group decision-making with multi-granular linguistic information.
This line of research confirms the vital role of occurrence probabilities. Apart from the interval [0, 1], Saaty's 1–9 scale [19,20] is quite important and useful for expressing the opinion of the decision-maker. Xia and Xu [21] defined the hesitant multiplicative set (HMS) in order to express membership degrees on Saaty's scale. However, the HMS has the same deficiency as the HFS: occurrence probabilities are not considered. Therefore, studying the HMS in a hesitant probabilistic environment is an important research problem. In this paper, we extend the HMS to a hesitant probabilistic multiplicative set (HPMS) by assigning probabilities to membership degrees expressed on Saaty's scale. For practical group decision-making with the proposed HPMS, our choice is preference relations, which provide an elegant means of preference modeling in practical decision-making. A number of preference relations have been developed in this manner and used in practical decision-making, such as multiplicative preference relations [19,20,22], fuzzy preference relations [23], multiplicative fuzzy preference relations [24], incomplete fuzzy preference relations [25,26], linguistic preference relations [27,28,29], intuitionistic fuzzy preference relations [30,31,32,33,34], interval-valued hesitant preference relations [10], hesitant fuzzy preference relations [21,35,36,37], and hesitant multiplicative preference relations [21,38,39].
The main goal of this paper is to develop an efficient group decision-making support system capable of handling the occurrence probabilities of multiplicative preference degrees. In this regard, the novel notion of hesitant probabilistic multiplicative preference relations is proposed, which models the vagueness and complexity of real-life scenarios more faithfully. In this line of research, Zhou and Xu [40] proposed and studied probabilistic hesitant fuzzy preference relations and dealt with their consistency by defining the expected consistency; however, they did not address group decision-making. Group decision-making [24,25,41,42,43,44] is generally more accurate than a single person's decision. Any group decision support system based on preference relations must address the following critical issues.
  • Consistency of preference relations: Inconsistent preference relations lead to unrealistic decisions. Consistency is closely related to transitivity [22,27,35,38,45,46,47]; therefore, a preference relation must satisfy a certain level of consistency.
  • Consensus among decision-makers: In group decision-making, the diversity in nature, social background, education level, expertise, and experience of the decision-makers may lead to differences of opinion. In many scenarios of group decision-making, it is important to develop a reasonable consensus among the decision-makers [22,25,35,38,41,47,48,49]; thus, some measures are needed to reduce the differences of opinion to some extent.
To deal with both issues, multiplicative consistency and weak stochastic transitivity are used in the HPMPR for the multiplicative preference degrees and their occurrence probabilities, respectively, in order to define consistency. A novel deviation degree between two HPMPRs is introduced and, based on it, a consistency measure of the HPMPR and a consensus measure among decision-makers are defined. A series of novel algorithms is designed to find a consistent HPMPR, to make an HPMPR acceptably consistent, and to develop an acceptable consensus among decision-makers.
The paper is structured in the following way. In Section 2, some preliminary concepts are discussed to clarify our proposed consistency and consensus measure. In Section 3, the HPMS and HPMPR are defined, and the consistency issue of the HPMPR is discussed. Section 4 is devoted to resolving the consensus issue, and an algorithm is presented for reaching an acceptable consensus among decision-makers. Section 5 is dedicated to presenting a complete group decision-making model dealing with both issues of consistency and consensus. In Section 6, a numerical analysis of the developed model is performed in a case study in order to demonstrate the relevance of our proposal. Section 7 ends the paper with some concluding remarks.

2. Preliminaries

For convenience, let $\#h$ denote the number of elements in a hesitant fuzzy element (HFE) h. Xia and Xu [21] also used Saaty's 1–9 scale to express the membership degrees in an HFS, and provided the definition of a hesitant multiplicative set (HMS).
Definition 1.
[21] Given a fixed set X, a hesitant multiplicative set on X is defined in terms of a function that returns a subset of [1/9, 9] when applied to X.
For easy understanding, Xia and Xu [21] represented the HMS as
$$M = \{ \langle x, b_M(x) \rangle \mid x \in X \},$$
where $b_M(x)$ is the set of values from [1/9, 9] according to Saaty's scale [19], indicating the possible multiplicative membership degrees of $x \in X$ in the set M. They also called $b = b_M(x)$ a hesitant multiplicative element (HME) and M the set of HMEs. In what follows, let $\#b$ denote the number of elements in the HME b.
Example 1.
Consider $X = \{x_1, x_2, x_3\}$, $E = \{\langle x_1, \{0.2, 0.4, 0.9\}\rangle, \langle x_2, \{0.5\}\rangle, \langle x_3, \{0.1, 0.6\}\rangle\}$, $M = \{\langle x_1, \{1/6, 2\}\rangle, \langle x_2, \{1, 4, 7\}\rangle, \langle x_3, \{1/3\}\rangle\}$, $h = \{0.2, 0.4, 0.9\}$, and $b = \{1, 4, 7\}$. Then, E is an HFS, M is an HMS, h is an HFE, and b is an HME.
A common problem when working in a hesitant fuzzy environment is that different HFEs or HMEs may contain different numbers of elements. Take a fixed set $X = \{x_1, x_2, x_3, \ldots, x_n\}$ of alternatives. The DMs can give their preferences as either HFEs or HMEs. Xia and Xu [21] introduced the hesitant fuzzy preference relation (HFPR) and the hesitant multiplicative preference relation (HMPR):
Definition 2.
[21] For a fixed set $X = \{x_1, x_2, \ldots, x_n\}$, an HMPR is expressed by a matrix $B = (b_{ij})_{n\times n} \subset X \times X$, where $b_{ij} = \{b_{ij}^{s} \mid s = 1, 2, 3, \ldots, \#b_{ij}\}$ is an HME giving all the possible preference degrees of the alternative $x_i$ over $x_j$ on Saaty's scale. Additionally, $b_{ij}$ satisfies the following conditions for all $i, j \in N$:
$$b_{ij}^{\sigma(s)} \times b_{ji}^{\sigma(\#b_{ij} - s + 1)} = 1, \qquad b_{ii} = \{1\}, \qquad \#b_{ij} = \#b_{ji},$$
where $b_{ij}^{\sigma(s)}$ is the sth smallest value in $b_{ij}$, the elements of $b_{ij}$ being arranged in increasing order.
To make hesitant fuzzy sets more compatible with reality, Xu and Zhou [15] defined the hesitant probabilistic fuzzy element (HPFE) and the hesitant probabilistic fuzzy set (HPFS).
Definition 3.
[15] Consider a fixed set R. The HPFS on R is defined as a mathematical expression of the form
$$H_P = \{ h(\gamma_i \mid p_i) \},$$
where $h(\gamma_i \mid p_i)$ is the HPFE, consisting of elements of the form $\gamma_i \mid p_i$ that express the hesitant fuzzy information together with its probabilities, $0 \le \gamma_i \le 1$ for $i = 1, 2, 3, \ldots, \#h$, where $\#h$ is the number of elements in $h(\gamma_i \mid p_i)$, $p_i \in [0, 1]$ is the hesitant probability of $\gamma_i$, and $\sum_{i=1}^{\#h} p_i = 1$.

3. Hesitant Probabilistic Multiplicative Sets and Relations

With the introduction of probability into hesitant fuzzy sets [15] and linguistic term sets [18], it is natural to enrich other set structures in the same way. The HME expresses the opinion of the decision-maker on Saaty's scale. The aim of this section is first to introduce the HPMS and then to define the HPMPR and build a basic framework on its basis.
Definition 4.
For a fixed set X, the HPMS is defined as
$$M = \{ \langle x, h(x) \rangle : x \in X \},$$
where $h(x) = \{ \gamma_i \mid p_i : i = 1, 2, 3, \ldots, \#h \}$ denotes the hesitant probabilistic multiplicative element (HPME) such that $\gamma_i \in [1/9, 9]$ and $p_i \in [0, 1]$ with $\sum_{i=1}^{\#h} p_i = 1$; that is, $p_i$ is the occurrence probability of $\gamma_i$.
For clarification, the following example is given.
Example 2.
Let X = { a , b , c } . Consider
$$M = \{ \langle a, \{1/2|0.3,\ 3|0.7\} \rangle,\ \langle b, \{0.45|0.6,\ 0.98|0.4\} \rangle,\ \langle c, \{6.5|0.4,\ 7|0.3,\ 8|0.3\} \rangle \}.$$
Then, M is an HPMS, and $\{1/2|0.3,\ 3|0.7\}$, $\{0.45|0.6,\ 0.98|0.4\}$, and $\{6.5|0.4,\ 7|0.3,\ 8|0.3\}$ are HPMEs.
For comparison between two HPMEs, the score and deviation of an HPME are defined below, based on the score function of the HME and the variance of a random variable, respectively.
Definition 5.
Take an HPME $h = \{\gamma_i \mid p_i : i = 1, 2, \ldots, \#h\}$. Then,
$$s(h) = \left( \prod_{i=1}^{\#h} \gamma_i\, p_i \right)^{\frac{1}{\#h}}$$
is defined as the score function. Additionally, the deviation is defined as
$$d(h) = \sum_{i=1}^{\#h} \left( \gamma_i\, p_i - s(h) \right)^2.$$
Consider two HPMEs, h 1 and h 2 . The order of the HPMEs is decided as follows:
  • If s ( h 1 ) > s ( h 2 ) , then h 1 > h 2 ( h 1 is superior to h 2 ).
  • For the case s ( h 1 ) = s ( h 2 ) , compare the deviations. If d ( h 1 ) > d ( h 2 ) , then h 2 > h 1 ; if d ( h 2 ) > d ( h 1 ) , then h 1 > h 2 ; if d ( h 1 ) = d ( h 2 ) , then h 1 = h 2 .
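As an illustration of Definition 5 and the ordering rules above, the following is a minimal Python sketch (ours, not the authors' code); it assumes the product form of the score reconstructed above and models an HPME as a list of (degree, probability) pairs.

```python
# Illustrative sketch only: an HPME is a list of (gamma, p) pairs.

def score(h):
    """s(h) = (prod_i gamma_i * p_i) ** (1 / #h), as in Definition 5."""
    prod = 1.0
    for gamma, p in h:
        prod *= gamma * p
    return prod ** (1.0 / len(h))

def deviation(h):
    """d(h) = sum_i (gamma_i * p_i - s(h)) ** 2."""
    s = score(h)
    return sum((gamma * p - s) ** 2 for gamma, p in h)

def is_superior(h1, h2):
    """h1 > h2: larger score wins; on equal scores, smaller deviation wins."""
    if score(h1) != score(h2):
        return score(h1) > score(h2)
    return deviation(h1) < deviation(h2)
```

For instance, `is_superior([(3, 0.7), (0.5, 0.3)], [(2, 0.5), (1, 0.5)])` compares two Example 2-style HPMEs under this ordering.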
The main aim of this study is to introduce the HPMPR and to build a decision-making model with it. For this purpose, the following operations are defined. Let $h_1$ and $h_2$ be HPMEs of the same length, i.e., $\#h_1 = \#h_2$. Then,
$$h_1 \otimes h_2 = \bigcup_{\gamma_1^{\sigma(s)}|p_1^{\sigma(s)} \in h_1,\ \gamma_2^{\sigma(s)}|p_2^{\sigma(s)} \in h_2} \left\{ \gamma_1^{\sigma(s)} \times \gamma_2^{\sigma(s)} \ \Big|\ p_1^{\sigma(s)} + p_2^{\sigma(s)} \right\},$$
$$h^{\omega} = \bigcup_{\gamma^{\sigma(s)}|p^{\sigma(s)} \in h} \left\{ \left( \gamma^{\sigma(s)} \right)^{\omega} \ \Big|\ \omega\, p^{\sigma(s)} \right\}, \qquad \omega > 0,$$
where $\gamma_1^{\sigma(s)}|p_1^{\sigma(s)}$ and $\gamma_2^{\sigma(s)}|p_2^{\sigma(s)}$ are the sth elements of $h_1$ and $h_2$, respectively.
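These element-wise operations translate directly into code; a short sketch under the same list-of-pairs representation (function names are ours):

```python
def hpme_mult(h1, h2):
    """h1 (x) h2: multiply the s-th degrees, add the s-th probabilities."""
    assert len(h1) == len(h2), "operands must have the same length"
    return [(g1 * g2, p1 + p2) for (g1, p1), (g2, p2) in zip(h1, h2)]

def hpme_pow(h, w):
    """h ** w for w > 0: raise degrees to w, scale probabilities by w."""
    assert w > 0
    return [(g ** w, w * p) for g, p in h]
```

Note that a weighted combination such as `hpme_mult(hpme_pow(h1, 0.6), hpme_pow(h2, 0.4))` keeps the probabilities summing to 1, which is exactly how the HPMWG operator of Section 4 is built.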
These operations satisfy the following properties.
Theorem 1.
Take three HPMEs $h_1$, $h_2$, and $h_3$ with $\#h_1 = \#h_2 = \#h_3$, and let $\omega, \omega_1, \omega_2 > 0$. Then,
1. $h_1 \otimes h_2 = h_2 \otimes h_1$;
2. $(h_1 \otimes h_2) \otimes h_3 = h_1 \otimes (h_2 \otimes h_3)$;
3. $(h_1 \otimes h_2)^{\omega} = h_1^{\omega} \otimes h_2^{\omega}$;
4. $h^{(\omega_1 + \omega_2)} = h^{\omega_1} \otimes h^{\omega_2}$.
(please see Appendix A for Proof 1).
To allow the decision-makers to provide their preferences in a hesitant probabilistic environment, we define the HPMPR.
Definition 6.
Let $X = \{x_1, x_2, x_3, \ldots, x_n\}$ be the set of alternatives. The HPMPR is a matrix $H = (h_{ij})_{n\times n}$, where $h_{ij} = \{h_{ij}^{s} \mid p_{ij}^{s} : s = 1, 2, 3, \ldots, \#h_{ij}\}$ is the HPME expressing the possible preference degrees of the alternative $x_i$ over $x_j$ with their probabilities, satisfying, for $j > i$, the conditions
$$h_{ij}^{\sigma(s)} \times h_{ji}^{\sigma(s)} = 1, \qquad p_{ij}^{\sigma(s)} = p_{ji}^{\sigma(s)}, \qquad \#h_{ij} = \#h_{ji}, \qquad h_{ii}^{\sigma(s)} = 1,$$
and
$$h_{ij}^{\sigma(s)} < h_{ij}^{\sigma(s+1)}, \qquad h_{ji}^{\sigma(s+1)} < h_{ji}^{\sigma(s)},$$
where $h_{ij}^{\sigma(s)} \mid p_{ij}^{\sigma(s)}$ and $h_{ji}^{\sigma(s)} \mid p_{ji}^{\sigma(s)}$ are the sth elements in $h_{ij}$ and $h_{ji}$, respectively.
Note that all the preference degrees on the diagonal are 1, and $\#h_{ii}$ can be greater than 1 with varying probabilities; as will be seen, this helps to maintain useful properties such as consistency and consensus. The diversity in the probabilities of the diagonal preference degrees does no harm, since their sum is 1 and the net impact remains the same. Often, the lengths of the HPMEs differ, but the above definitions require equal lengths; some elements must therefore be added to the HPME that has fewer elements, without changing the information provided.
Definition 7.
An HPMPR H = ( h i j ) n × n is called the normalized hesitant probabilistic multiplicative preference relation (NHPMPR) if the length of all h i j is the same for all i , j = 1 , 2 , , n .
This problem also arises when working with hesitant fuzzy elements; Xu and Xia [50] and Zhu et al. [37] add elements to the hesitant fuzzy elements of shorter length. Let $h = \{h_i \mid p_i : i = 1, 2, 3, \ldots, \#h\}$ be an HPME. For the preference degrees $h_i$, Zhang and Wu [38] give a way to add elements to the HME: for an optimized parameter $0 \le \zeta \le 1$, the preference degree that is added is $(h^+)^{\zeta} \times (h^-)^{(1-\zeta)}$, where $h^+$ is the largest and $h^-$ the smallest among the $h_i$. The decision-maker can choose the value of ζ according to his risk preferences: the added element equals $h^+$ for ζ = 1 and $h^-$ for ζ = 0, corresponding to the optimistic and pessimistic approaches of the decision-maker proposed by Xu and Xia [50]. In a hesitant probabilistic multiplicative environment, a method is needed whereby a probability is assigned to the added preference degree such that the information of the HPMPR is not changed. There are many ways to do this. One option is to assign probability 0 to the added preference degree $(h^+)^{\zeta} \times (h^-)^{(1-\zeta)}$; for the extreme cases, that is, the pessimistic approach ζ = 0 and the optimistic approach ζ = 1, the added element in the HPME is $h^- \big| \frac{p^-}{d - \#h + 1}$ and $h^+ \big| \frac{p^+}{d - \#h + 1}$, respectively, where d is the required length of the HPMEs, and $p^-$ and $p^+$ are the probabilities of $h^-$ and $h^+$, respectively.
For a given HPMPR $H = (h_{ij})_{n\times n}$, we normalize it as follows. Let $d = \max\{\#h_{ij}\}$, $i, j = 1, 2, \ldots, n$. For the optimized parameter ζ = 0,
$$\bar h_{ij} = \begin{cases} h_{ij}, & \#h_{ij} = d, \\ \Big\{ \underbrace{h_{ij}^{\sigma(1)} \Big| \tfrac{p_{ij}^{\sigma(1)}}{d-\#h_{ij}+1}, \ldots, h_{ij}^{\sigma(1)} \Big| \tfrac{p_{ij}^{\sigma(1)}}{d-\#h_{ij}+1}}_{d-\#h_{ij}+1},\ h_{ij}^{\sigma(2)} \big| p_{ij}^{\sigma(2)}, \ldots, h_{ij}^{\sigma(\#h_{ij})} \big| p_{ij}^{\sigma(\#h_{ij})} \Big\}, & \#h_{ij} < d. \end{cases} \tag{7}$$
For ζ = 1,
$$\bar h_{ij} = \begin{cases} h_{ij}, & \#h_{ij} = d, \\ \Big\{ h_{ij}^{\sigma(1)} \big| p_{ij}^{\sigma(1)}, \ldots, h_{ij}^{\sigma(\#h_{ij}-1)} \big| p_{ij}^{\sigma(\#h_{ij}-1)},\ \underbrace{h_{ij}^{\sigma(\#h_{ij})} \Big| \tfrac{p_{ij}^{\sigma(\#h_{ij})}}{d-\#h_{ij}+1}, \ldots, h_{ij}^{\sigma(\#h_{ij})} \Big| \tfrac{p_{ij}^{\sigma(\#h_{ij})}}{d-\#h_{ij}+1}}_{d-\#h_{ij}+1} \Big\}, & \#h_{ij} < d. \end{cases} \tag{8}$$
For 0 < ζ < 1 and i < j,
$$\bar h_{ij} = \begin{cases} h_{ij}, & \#h_{ij} = d, \\ \Big\{ \underbrace{h_{ij}^{\sigma(1)} | p_{ij}^{\sigma(1)}, \ldots, h_{ij}^{\sigma(t_{ij})} | p_{ij}^{\sigma(t_{ij})}}_{t_{ij}},\ \underbrace{\zeta h_{ij}^{\sigma(\#h_{ij})} + (1-\zeta) h_{ij}^{\sigma(1)} \big| 0, \ldots, \zeta h_{ij}^{\sigma(\#h_{ij})} + (1-\zeta) h_{ij}^{\sigma(1)} \big| 0}_{d-\#h_{ij}},\ \underbrace{h_{ij}^{\sigma(t_{ij}+1)} | p_{ij}^{\sigma(t_{ij}+1)}, \ldots, h_{ij}^{\sigma(\#h_{ij})} | p_{ij}^{\sigma(\#h_{ij})}}_{\#h_{ij} - t_{ij}} \Big\}, & \#h_{ij} < d, \end{cases}$$
and
$$\bar h_{ji} = \begin{cases} h_{ji}, & \#h_{ji} = d, \\ \Big\{ \underbrace{h_{ji}^{\sigma(1)} | p_{ji}^{\sigma(1)}, \ldots, h_{ji}^{\sigma(t_{ji})} | p_{ji}^{\sigma(t_{ji})}}_{t_{ji}},\ \underbrace{(1-\zeta) h_{ji}^{\sigma(1)} + \zeta h_{ji}^{\sigma(\#h_{ji})} \big| 0, \ldots, (1-\zeta) h_{ji}^{\sigma(1)} + \zeta h_{ji}^{\sigma(\#h_{ji})} \big| 0}_{d-\#h_{ji}},\ \underbrace{h_{ji}^{\sigma(t_{ji}+1)} | p_{ji}^{\sigma(t_{ji}+1)}, \ldots, h_{ji}^{\sigma(\#h_{ji})} | p_{ji}^{\sigma(\#h_{ji})}}_{\#h_{ji} - t_{ji}} \Big\}, & \#h_{ji} < d, \end{cases}$$
where $t_{ij} = t_{ji} = \max\left\{ s \in \{1, 2, \ldots, \#h_{ij}\} : h_{ij}^{\sigma(s)} \le \zeta h_{ij}^{\sigma(\#h_{ij})} + (1-\zeta) h_{ij}^{\sigma(1)} \right\}$.
Now H ¯ = ( h ¯ i j ) n × n is NHPMPR. Next, we deal with consistency.
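To make the padding concrete, here is a small Python sketch (ours) of the normalization above, with an HPME stored as a list of (degree, probability) pairs sorted by increasing degree; the 0 < ζ < 1 branch uses the additive convex combination shown in the displayed equations.

```python
def normalize_hpme(h, d, zeta):
    """Pad a sorted HPME h to length d (sketch of Equations (7) and (8))."""
    if len(h) == d:
        return list(h)
    k = d - len(h) + 1                 # copies of the repeated boundary element
    if zeta == 0:                      # pessimistic: repeat the smallest degree
        g, p = h[0]
        return [(g, p / k)] * k + h[1:]
    if zeta == 1:                      # optimistic: repeat the largest degree
        g, p = h[-1]
        return h[:-1] + [(g, p / k)] * k
    # 0 < zeta < 1: insert d - #h synthetic degrees with probability 0
    g_add = zeta * h[-1][0] + (1 - zeta) * h[0][0]
    t = max(s for s in range(len(h)) if h[s][0] <= g_add) + 1
    return h[:t] + [(g_add, 0.0)] * (d - len(h)) + h[t:]
```

For example, `normalize_hpme([(0.2, 0.2979), (8, 0.7021)], 3, 0)` yields `[(0.2, 0.149), (0.2, 0.149), (8, 0.7021)]` (up to rounding), matching the padding used in the case study of Section 6.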
Example 3.
Let
$$H_1 = \begin{pmatrix}
\{1|1\} & \{1.7371|0.6309,\, 2.2438|0.0204,\, 5.2375|0.3487\} & \{0.25|1\} & \{0.6411|0.9414,\, 4.2986|0.0586\} \\
\{0.5757|0.6309,\, 0.4457|0.0204,\, 0.1909|0.3487\} & \{1|1\} & \{4.748|1\} & \{1.4432|0.3066,\, 6.5328|0.2411,\, 7.3782|0.4523\} \\
\{4|1\} & \{0.2106|1\} & \{1|1\} & \{4.1449|0.3438,\, 7.2252|0.6562\} \\
\{1.5599|0.9414,\, 0.2326|0.0586\} & \{0.6929|0.3066,\, 0.1531|0.2411,\, 0.1355|0.4523\} & \{0.2413|0.3438,\, 0.1384|0.6562\} & \{1|1\}
\end{pmatrix}$$
$$H_2 = \begin{pmatrix}
\{1|1\} & \{8|1\} & \{0.1333|0.8685,\, 0.1379|0.1315\} & \{0.1681|0.3174,\, 0.2621|0.6826\} \\
\{0.125|1\} & \{1|1\} & \{0.1129|0.6115,\, 4.0197|0.3885\} & \{2|0.2629,\, 3|0.7371\} \\
\{7.5012|0.8685,\, 7.2521|0.1315\} & \{8.8583|0.6115,\, 0.2488|0.3885\} & \{1|1\} & \{0.1499|0.4927,\, 0.3995|0.2863,\, 1.7261|0.2210\} \\
\{5.9499|0.3174,\, 3.8160|0.6826\} & \{0.5|0.2629,\, 0.3333|0.7371\} & \{6.6698|0.4927,\, 2.5033|0.2863,\, 0.5794|0.2210\} & \{1|1\}
\end{pmatrix}.$$
Then, $H_1$ and $H_2$ are HPMPRs. Now, by taking the optimized parameters 0 and 1 for $H_1$ and $H_2$, respectively, the following NHPMPRs are obtained.
$$\bar H_1 = \begin{pmatrix}
\{1|0.3333,\, 1|0.3333,\, 1|0.3333\} & \{1.7371|0.6309,\, 2.2438|0.0204,\, 5.2375|0.3487\} & \{0.25|0.3333,\, 0.25|0.3333,\, 0.25|0.3333\} & \{0.6411|0.4707,\, 0.6411|0.4707,\, 4.2986|0.0586\} \\
\{0.5757|0.6309,\, 0.4457|0.0204,\, 0.1909|0.3487\} & \{1|0.3333,\, 1|0.3333,\, 1|0.3333\} & \{4.748|0.3333,\, 4.748|0.3333,\, 4.748|0.3333\} & \{1.4432|0.3066,\, 6.5328|0.2411,\, 7.3782|0.4523\} \\
\{4|0.3333,\, 4|0.3333,\, 4|0.3333\} & \{0.2106|0.3333,\, 0.2106|0.3333,\, 0.2106|0.3333\} & \{1|0.3333,\, 1|0.3333,\, 1|0.3333\} & \{4.1449|0.1719,\, 4.1449|0.1719,\, 7.2252|0.6562\} \\
\{1.5599|0.4707,\, 1.5599|0.4707,\, 0.2326|0.0586\} & \{0.6929|0.3066,\, 0.1531|0.2411,\, 0.1355|0.4523\} & \{0.2413|0.1719,\, 0.2413|0.1719,\, 0.1384|0.6562\} & \{1|0.3333,\, 1|0.3333,\, 1|0.3333\}
\end{pmatrix}$$
$$\bar H_2 = \begin{pmatrix}
\{1|0.3333,\, 1|0.3333,\, 1|0.3333\} & \{8|0.3333,\, 8|0.3333,\, 8|0.3333\} & \{0.1333|0.1315,\, 0.1379|0.4342,\, 0.1379|0.4342\} & \{0.1681|0.6826,\, 0.2621|0.1587,\, 0.2621|0.1587\} \\
\{0.125|0.3333,\, 0.125|0.3333,\, 0.125|0.3333\} & \{1|0.3333,\, 1|0.3333,\, 1|0.3333\} & \{0.1129|0.6115,\, 4.0197|0.1943,\, 4.0197|0.1943\} & \{2|0.2629,\, 3|0.3685,\, 3|0.3685\} \\
\{7.2521|0.1315,\, 7.5012|0.4342,\, 7.5012|0.4342\} & \{8.8583|0.6115,\, 0.2488|0.1943,\, 0.2488|0.1943\} & \{1|0.3333,\, 1|0.3333,\, 1|0.3333\} & \{0.1499|0.2210,\, 0.3995|0.2863,\, 1.7261|0.4927\} \\
\{5.9499|0.6826,\, 3.8160|0.1587,\, 3.8160|0.1587\} & \{0.5|0.2629,\, 0.3333|0.3685,\, 0.3333|0.3685\} & \{6.6698|0.2210,\, 2.5033|0.2863,\, 0.5794|0.4927\} & \{1|0.3333,\, 1|0.3333,\, 1|0.3333\}
\end{pmatrix}.$$

Consistency Measure of the Hesitant Probabilistic Multiplicative Preference Relation

In order to obtain a valuable decision from preference relations, they should be consistent in the sense that if, say, $x_1$ is preferable to $x_2$ and $x_2$ is preferable to $x_3$, then $x_1$ must be preferable to $x_3$. Several authors have pursued consistency issues for preference relations [22,28,35,37,38,46].
Consistency for HME is well known. Furthermore, many kinds of transitivity are proposed and studied for probabilities in comparing the preferences in the choice theory [51].
Consider an HPMPR $H = (h_{ij})_{n\times n}$ and calculate its NHPMPR $\bar H = (\bar h_{ij})_{n\times n}$ by Equations (7) and (8). The weak stochastic transitivity for probability [51] means
$$\bar p_{ik}^{\sigma(s)} \ge \tfrac{1}{2} \ \wedge\ \bar p_{kj}^{\sigma(s)} \ge \tfrac{1}{2} \;\Longrightarrow\; \bar p_{ij}^{\sigma(s)} \ge \tfrac{1}{2}.$$
This will provide a platform to define consistency for the HPMPR.
Definition 8.
For a given HPMPR $H = (h_{ij})_{n\times n}$ and its NHPMPR $\bar H = (\bar h_{ij})_{n\times n}$ with optimized parameter ζ, if
$$\bar h_{ij}^{\sigma(s)} = \bar h_{ik}^{\sigma(s)} \times \bar h_{kj}^{\sigma(s)} \tag{12}$$
and
$$\bar p_{ik}^{\sigma(s)} \ge \tfrac{1}{2} \ \wedge\ \bar p_{kj}^{\sigma(s)} \ge \tfrac{1}{2} \;\Longrightarrow\; \bar p_{ij}^{\sigma(s)} \ge \tfrac{1}{2}$$
for all i , j , k = 1 , 2 , 3 , , n , then H is called a consistent HPMPR with optimized parameter ζ .
However, preference relations are often not consistent and, for meaningful decision-making, at least some level of consistency is required if full consistency cannot be achieved. For the preference degrees, taking the product of Equation (12) over all k gives
$$\left( \bar h_{ij}^{\sigma(s)} \right)^{n} = \prod_{k=1}^{n} \bar h_{ik}^{\sigma(s)} \times \bar h_{kj}^{\sigma(s)}.$$
Therefore,
$$\bar h_{ij}^{\sigma(s)} = \left( \prod_{k=1}^{n} \bar h_{ik}^{\sigma(s)} \times \bar h_{kj}^{\sigma(s)} \right)^{\frac{1}{n}}. \tag{14}$$
Thus, Equation (14) is satisfied by a consistent HPMPR. If not, we set
$$\tilde h_{ij}^{\sigma(s)} = \left( \prod_{k=1}^{n} \bar h_{ik}^{\sigma(s)} \times \bar h_{kj}^{\sigma(s)} \right)^{\frac{1}{n}}. \tag{15}$$
One can see that the preference degrees $\tilde h_{ij}^{\sigma(s)}$ obtained from the above equation are consistent. For the probabilities, the matter is not that simple: some mechanism is needed to make the probabilities consistent under all the restrictions of the HPMPR, such as $\bar p_{ij}^{\sigma(s)} = \bar p_{ji}^{\sigma(s)}$ and $\sum_{s=1}^{\#\bar h_{ij}} \bar p_{ij}^{\sigma(s)} = 1$. Let $\bar p_{ik}^{\sigma(s)} \ge \tfrac12$ and $\bar p_{kj}^{\sigma(s)} \ge \tfrac12$. Then, we define
$$\tilde p_{ij}^{\sigma(s)} := \frac{\bar p_{ik}^{\sigma(s)} + \bar p_{kj}^{\sigma(s)}}{2} \ge \frac{1}{2}. \tag{16}$$
To account for all k = 1, 2, ..., n while keeping in mind $\sum_{s=1}^{\#\bar h_{ij}} \bar p_{ij}^{\sigma(s)} = 1$, we modify Equation (16) to
$$\tilde p_{ij}^{\sigma(s)} = \frac{ \frac{\bar p_{i1}^{\sigma(s)} + \bar p_{1j}^{\sigma(s)}}{2} + \frac{\bar p_{i2}^{\sigma(s)} + \bar p_{2j}^{\sigma(s)}}{2} + \cdots + \frac{\bar p_{in}^{\sigma(s)} + \bar p_{nj}^{\sigma(s)}}{2} }{n}. \tag{17}$$
Hence, $\sum_{s=1}^{\#\bar h_{ij}} \tilde p_{ij}^{\sigma(s)} = 1$ and $\tilde p_{ij}^{\sigma(s)} = \tilde p_{ji}^{\sigma(s)}$. If $\bar p_{ik}^{\sigma(s)} \ge \tfrac12$ and $\bar p_{kj}^{\sigma(s)} \ge \tfrac12$ for all k = 1, 2, ..., n, then surely $\tilde p_{ij}^{\sigma(s)} \ge \tfrac12$. However, it is possible that $\bar p_{ik}^{\sigma(s)} \ge \tfrac12$ and $\bar p_{kj}^{\sigma(s)} \ge \tfrac12$ fail for some k, which can lead to a situation where $\tilde p_{ik}^{\sigma(s)} \ge \tfrac12$ and $\tilde p_{kj}^{\sigma(s)} \ge \tfrac12$ but $\tilde p_{ij}^{\sigma(s)} < \tfrac12$. If another convex combination is then calculated by Equation (17), the obtained probability increases. These observations lead to Algorithm 1, which produces a sequence of HPMPRs converging to a fully consistent HPMPR.
Algorithm 1 Algorithmic description to determine NHPMPR H ¯ , consistent HPMPR H ˜ ( t ) , and number of iterations t .
  • Input: HPMPR H and optimized parameter ζ .  
  • Output: NHPMPR H ¯ , consistent HPMPR H ˜ , and number of iterations t .
  • Step 1: Compute NHPMPR $\bar H$ by Equation (7) or (8). Let t = 0, and define $\tilde H^{(0)} = \left( \tilde h_{ij}^{(0)} \right)_{n \times n}$ by
    $$\tilde h_{ij}^{(0)} = \left\{ \left( \prod_{k=1}^{n} \bar h_{ik}^{\sigma(s)} \times \bar h_{kj}^{\sigma(s)} \right)^{\frac{1}{n}} \Big|\ \bar p_{ij}^{\sigma(s)} \ :\ s = 1, 2, 3, \ldots, d \right\}.$$
    Step 2: If the following condition is true, then go to Step 4; otherwise, go to Step 3:
    $$\tilde p_{ik}^{(t)\sigma(s)} \ge \tfrac{1}{2} \ \wedge\ \tilde p_{kj}^{(t)\sigma(s)} \ge \tfrac{1}{2} \;\Longrightarrow\; \tilde p_{ij}^{(t)\sigma(s)} \ge \tfrac{1}{2} \quad \text{for all } i, j, k = 1, 2, \ldots, n.$$
    Step 3: Define $\tilde H^{(t+1)} = \left( \tilde h_{ij}^{(t+1)} \right)_{n \times n}$ by
    $$\tilde h_{ij}^{(t+1)} = \left\{ \tilde h_{ij}^{(t)\sigma(s)} \ \Big|\ \frac{1}{n} \sum_{k=1}^{n} \frac{\tilde p_{ik}^{(t)\sigma(s)} + \tilde p_{kj}^{(t)\sigma(s)}}{2} \ :\ s = 1, 2, 3, \ldots, d \right\}$$
    and t = t + 1 . Go to Step 2.
    Step 4: Output NHPMPR H ¯ , consistent HPMPR H ˜ ( t ) , and number of iterations t .
    Step 5: End.
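As an illustration of how Algorithm 1 can be realized, the following compact Python sketch (ours) uses the same representation as above: an HPMPR is an n × n nested list whose (i, j) cell is an HPME of common length d. The degree fix follows Equation (15) and the probability averaging follows Equation (17); the iteration cap is only a safeguard.

```python
from math import prod  # Python 3.8+

def weakly_transitive(G):
    """Weak stochastic transitivity check for all triples (i, k, j) and s."""
    n, d = len(G), len(G[0][0])
    return all(G[i][j][s][1] >= 0.5
               for i in range(n) for j in range(n) for k in range(n)
               for s in range(d)
               if G[i][k][s][1] >= 0.5 and G[k][j][s][1] >= 0.5)

def algorithm1(H, max_iter=1000):
    """Return (consistent HPMPR, number of iterations t); sketch only."""
    n, d = len(H), len(H[0][0])
    # Step 1: consistent degrees by Equation (15), probabilities kept
    G = [[[(prod(H[i][k][s][0] * H[k][j][s][0] for k in range(n)) ** (1 / n),
            H[i][j][s][1]) for s in range(d)]
          for j in range(n)] for i in range(n)]
    t = 0
    while t < max_iter and not weakly_transitive(G):   # Step 2
        # Step 3: average the mid-point probability combinations over k
        G = [[[(G[i][j][s][0],
                sum((G[i][k][s][1] + G[k][j][s][1]) / 2 for k in range(n)) / n)
               for s in range(d)] for j in range(n)] for i in range(n)]
        t += 1
    return G, t
```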
Theorem 2.
Let H = ( h i j ) n × n be an HPMPR with its NHPMPR H ¯ = ( h ¯ i j ) n × n with optimized parameter ζ . Then the H ˜ = ( h ˜ i j ) n × n output of Algorithm 1 is a consistent HPMPR (please see Appendix A for Proof 2).
This result also yields the following theorem.
Theorem 3.
Consider an HPMPR $H = (h_{ij})_{n\times n}$, its NHPMPR $\bar H = (\bar h_{ij})_{n\times n}$ with optimized parameter ζ, and the consistent HPMPR $\tilde H = (\tilde h_{ij})_{n\times n}$ with optimized parameter ζ. Then, H is consistent if and only if $\bar H = \tilde H$.
The above algorithm is quite efficient. To see this fact, we generate 1000 random HPMPRs with different values of n and d and apply Algorithm 1 to find their consistent HPMPRs. Table 1 shows the average value of the number of iterations in Algorithm 1.
Example 4.
Take $H_1$, $H_2$, and their optimized parameters, the same as those in Example 3. Then, by Algorithm 1, the following consistent HPMPRs $\tilde H_1$ and $\tilde H_2$ are obtained.
$$\tilde H_1 = \begin{pmatrix}
\{1|0.4421,\, 1|0.2894,\, 1|0.2685\} & \{0.5154|0.4216,\, 0.4016|0.2607,\, 0.9578|0.3177\} & \{0.5314|0.3675,\, 0.5665|0.2912,\, 0.9806|0.3413\} & \{1.0165|0.3814,\, 1.5806|0.2969,\, 5.9928|0.3218\} \\
\{1.9402|0.4216,\, 2.4900|0.2607,\, 1.0441|0.3177\} & \{1|0.4011,\, 1|0.232,\, 1|0.3669\} & \{1.031|0.347,\, 1.4106|0.2625,\, 1.0239|0.3905\} & \{1.9722|0.3608,\, 3.9358|0.2681,\, 6.257|0.371\} \\
\{1.8819|0.3675,\, 1.7653|0.2912,\, 1.0198|0.3413\} & \{0.961|0.347,\, 0.7089|0.2625,\, 0.9767|0.3905\} & \{1|0.293,\, 1|0.293,\, 1|0.4141\} & \{1.9129|0.3068,\, 2.7902|0.2986,\, 6.1112|0.3946\} \\
\{0.9838|0.3814,\, 0.6327|0.2969,\, 0.1669|0.3218\} & \{0.5071|0.3608,\, 0.2541|0.2681,\, 0.1598|0.371\} & \{0.5228|0.3068,\, 0.3584|0.2986,\, 0.1636|0.3946\} & \{1|0.3206,\, 1|0.3043,\, 1|0.3751\}
\end{pmatrix}$$
$$\tilde H_2 = \begin{pmatrix}
\{1|0.3702,\, 1|0.3149,\, 1|0.3149\} & \{0.3165|0.3777,\, 0.8526|0.3111,\, 0.8526|0.3111\} & \{0.3662|0.3473,\, 0.7958|0.3135,\, 0.552|0.3393\} & \{0.3083|0.3726,\, 0.5489|0.3008,\, 0.7914|0.3266\} \\
\{0.6299|0.3777,\, 1.5111|0.3111,\, 1.5111|0.3111\} & \{1|0.3853,\, 1|0.3074,\, 1|0.3074\} & \{0.2307|0.3548,\, 1.2026|0.3097,\, 0.8341|0.3355\} & \{0.1942|0.3801,\, 0.8295|0.297,\, 1.1959|0.3228\} \\
\{2.7304|0.3473,\, 1.2566|0.3135,\, 1.8117|0.3393\} & \{4.3346|0.3548,\, 0.8315|0.3097,\, 1.1989|0.3355\} & \{1|0.3243,\, 1|0.312,\, 1|0.3636\} & \{0.8418|0.3496,\, 0.6897|0.2994,\, 1.4338|0.351\} \\
\{3.2437|0.3726,\, 1.8218|0.3008,\, 1.2636|0.3266\} & \{5.1493|0.3801,\, 1.2056|0.297,\, 0.8362|0.3228\} & \{1.188|0.3496,\, 1.4498|0.2994,\, 0.6975|0.351\} & \{1|0.375,\, 1|0.2867,\, 1|0.3383\}
\end{pmatrix}.$$
Remark 1.
To see the consistency of an HPMPR geometrically, three area graphs were constructed: one of the multiplicative preference degrees $h_{ij}^{\sigma(s)}$, one of the probabilities of the multiplicative preference degrees $p_{ij}^{\sigma(s)}$, and one of the score values $h_{ij}^{\sigma(s)} p_{ij}^{\sigma(s)}$. The procedure by which these graphs were made is explained for $\bar H_1$ as follows. Three matrices M, P, and S of order 4 × 12, containing the multiplicative preference values, the probability values, and the score values, are built from the HPMPR $\bar H_1$:
$$M = \begin{pmatrix}
1 & 1 & 1 & 5.2375 & 2.2438 & 1.7371 & 0.25 & 0.25 & 0.25 & 4.2986 & 0.6411 & 0.6411 \\
0.5757 & 0.4457 & 0.1909 & 1 & 1 & 1 & 4.748 & 4.748 & 4.748 & 6.5328 & 6.5328 & 1.4432 \\
4 & 4 & 4 & 0.2106 & 0.2106 & 0.2106 & 1 & 1 & 1 & 7.225 & 4.1449 & 4.1449 \\
1.5599 & 1.5599 & 0.2326 & 0.6929 & 0.1531 & 0.1355 & 0.2413 & 0.2413 & 0.2413 & 1 & 1 & 1
\end{pmatrix}$$
$$P = \begin{pmatrix}
0.3333 & 0.3333 & 0.3333 & 0.6309 & 0.3487 & 0.0204 & 0.3333 & 0.3333 & 0.3333 & 0.4707 & 0.4707 & 0.0586 \\
0.6309 & 0.3487 & 0.0204 & 0.3333 & 0.3333 & 0.3333 & 0.3333 & 0.3333 & 0.3333 & 0.4523 & 0.3066 & 0.2411 \\
0.3333 & 0.3333 & 0.3333 & 0.3333 & 0.3333 & 0.3333 & 0.3333 & 0.3333 & 0.3333 & 0.6563 & 0.1719 & 0.1719 \\
0.4707 & 0.4707 & 0.0586 & 0.4523 & 0.3066 & 0.2411 & 0.6562 & 0.1719 & 0.1719 & 0.3333 & 0.3333 & 0.3333
\end{pmatrix}$$
$$S = \begin{pmatrix}
1 & 1 & 1 & 1.3089 & 1.1008 & 0.9944 & 1.8242 & 1.5254 & 1.423 & 2.3697 & 1.4922 & 1.4132 \\
1.0057 & 0.9084 & 0.764 & 1 & 1 & 1 & 1.429 & 1.3827 & 1.3766 & 1.9745 & 1.444 & 1.2777 \\
0.7024 & 0.6556 & 0.5482 & 0.7264 & 0.7232 & 0.6998 & 1 & 1 & 1 & 1.3474 & 1.0413 & 0.9285 \\
0.7076 & 0.6701 & 0.422 & 0.7823 & 0.6925 & 0.5065 & 1.077 & 0.9603 & 0.6957 & 1 & 1 & 1
\end{pmatrix}.$$
The area graphs are drawn from the above matrices using the MATLAB plotting tools. Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 compare the area graphs of the multiplicative preference degrees, probability values, and score values between $\bar H_1$ and $\tilde H_1$, and between $\bar H_2$ and $\tilde H_2$, respectively. The areas are smoother for the consistent HPMPRs $\tilde H_1$ and $\tilde H_2$.
Once the consistent HPMPR is found, we are in a position to make an HPMPR acceptably consistent by defining a consistency measure. To this end, a deviation degree between two HPMPRs is defined first. Let $H_1 = (h_{ij,1})_{n\times n}$ and $H_2 = (h_{ij,2})_{n\times n}$ be two HPMPRs, and let $\bar H_1 = (\bar h_{ij,1})_{n\times n}$ and $\bar H_2 = (\bar h_{ij,2})_{n\times n}$ be their NHPMPRs, where d is the common length. Then,
$$D(H_1, H_2) = \frac{1}{dn^2} \sum_{i=1}^{n} \sum_{j=1}^{n} \sum_{s=1}^{d} \bar h_{ij,1}^{\sigma(s)} \times \bar h_{ji,2}^{\sigma(s)} \ +\ \frac{2}{nd(n+1)} \sum_{i=1}^{n} \sum_{j \ge i}^{n} \sum_{s=1}^{d} \left| \bar p_{ij,1}^{\sigma(s)} - \bar p_{ij,2}^{\sigma(s)} \right|. \tag{21}$$
$D(H_1, H_2)$ measures the deviation between the HPMPRs in the sense that the first part equals 1 if $H_1 = H_2$, and the second part is then 0. The properties of the deviation degree are presented in the following theorem.
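Under the same list-of-pairs representation, the deviation degree of Equation (21) admits a direct sketch (ours), assuming both relations are already normalized to a common length d:

```python
def deviation_degree(H1, H2):
    """D(H1, H2) of Equation (21) for two NHPMPRs of equal shape."""
    n, d = len(H1), len(H1[0][0])
    part1 = sum(H1[i][j][s][0] * H2[j][i][s][0]
                for i in range(n) for j in range(n)
                for s in range(d)) / (d * n * n)
    part2 = 2 * sum(abs(H1[i][j][s][1] - H2[i][j][s][1])
                    for i in range(n) for j in range(i, n)
                    for s in range(d)) / (n * d * (n + 1))
    return part1 + part2
```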
Theorem 4.
Consider two HPMPRs $H_1 = (h_{ij,1})_{n\times n}$ and $H_2 = (h_{ij,2})_{n\times n}$ and their NHPMPRs $\bar H_1 = (\bar h_{ij,1})_{n\times n}$ and $\bar H_2 = (\bar h_{ij,2})_{n\times n}$. The deviation degree defined in Equation (21) satisfies the following properties:
1. $D(H_1, H_2) \ge 1$;
2. $D(H_1, H_2) = 1$ if and only if $H_1 = H_2$;
3. $D(H_1, H_2) = D(H_2, H_1)$.
(please see Appendix A for Proof 3).
Before proceeding, let us present and prove some lemmas that will be useful in providing the theoretical framework.
Lemma 1.
Take 0 < x < 1 and y > 0. Then,
$$y^{x} + \frac{1}{y^{x}} \le y + \frac{1}{y},$$
and equality holds if and only if y = 1 (please see Appendix A for Proof 4).
Lemma 2.
[52]: Consider $x_i > 0$ and $\omega_i > 0$ for $i = 1, 2, \ldots, n$, with $\sum_{i=1}^{n} \omega_i = 1$. Then,
$$\prod_{i=1}^{n} (x_i)^{\omega_i} \le \sum_{i=1}^{n} x_i\, \omega_i,$$
where equality holds iff $x_1 = x_2 = \cdots = x_n$.
While the decision-maker provides his preferences in the form of an HPMPR, it should be noted that the HPMPR can be used for decision-making with good results only if it has sufficient consistency.
Definition 9.
Suppose an HPMPR $H = (h_{ij})_{n\times n}$, its NHPMPR $\bar H = (\bar h_{ij})_{n\times n}$, and the consistent HPMPR $\tilde H = (\tilde h_{ij})_{n\times n}$ with optimized parameter $0 \le \zeta \le 1$ obtained from Algorithm 1. The consistency index of H is defined to be the deviation degree between $\bar H$ and $\tilde H$, denoted as
$$CI(H) = D(\bar H, \tilde H).$$
It is clear that C I ( H ) = 1 if and only if H is consistent. Ideally, the decision-maker should provide a consistent HPMPR so that it can be used for meaningful decision-making. However, some margin of error should be provided to the decision-maker, relative to the practical problems.
Definition 10.
Consider an HPMPR H = ( h i j ) n × n . For a given tolerance value C I r , HPMPR H is said to be an acceptably consistent HPMPR if
$$CI(H) \le CI_r.$$
If an HPMPR is not even acceptably consistent, then the decision-maker should revisit and modify it. Algorithm 2 is proposed to make an HPMPR acceptably consistent.
Algorithm 2 Algorithmic description to determine an acceptably consistent HPMPR.
  • Input: The HPMPR $H = (h_{ij})_{n\times n}$, the consistency tolerance value $CI_r$, and the parameter $\alpha \in (0, 1)$.
  • Output: The acceptably consistent HPMPR H ( t ) , consistency index C I ( H ( t ) ) , and number of iterations t .
  • Step 1: Compute $\bar H = (\bar h_{ij})_{n\times n}$ with optimized parameter ζ. Let t = 0 and $H^{(0)} = (h_{ij}^{(0)})_{n\times n} = \bar H = (\bar h_{ij})_{n\times n}$.
    Step 2: Calculate the consistent HPMPR $\tilde H^{(t)} = (\tilde h_{ij}^{(t)})_{n\times n}$ by applying Algorithm 1 to $H^{(t)}$, and the consistency index
    $$CI(H^{(t)}) = \frac{1}{dn^2} \sum_{i=1}^{n} \sum_{j=1}^{n} \sum_{s=1}^{d} \bar h_{ij}^{(t)\sigma(s)} \times \tilde h_{ji}^{(t)\sigma(s)} + \frac{2}{nd(n+1)} \sum_{i=1}^{n} \sum_{j \ge i}^{n} \sum_{s=1}^{d} \left| \bar p_{ij}^{(t)\sigma(s)} - \tilde p_{ij}^{(t)\sigma(s)} \right|.$$
    Step 3: If $CI(H^{(t)}) \le CI_r$, then go to Step 5; otherwise, go to Step 4.
    Step 4: Form the adjusted HPMPR $H^{(t+1)} = (h_{ij}^{(t+1)})_{n\times n}$, where
    $$h_{ij}^{(t+1)} = \left( h_{ij}^{(t)} \right)^{\alpha} \otimes \left( \tilde h_{ij}^{(t)} \right)^{(1-\alpha)}.$$
    Let t = t + 1 . Now return to Step 2.
    Step 5: Output t , H ( t ) , and C I ( H ( t ) ) .
    Step 6: End.
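By the HPME operations of Section 3, the Step 4 blend $(h^{(t)})^{\alpha} \otimes (\tilde h^{(t)})^{1-\alpha}$ is a weighted geometric mean of the degrees and a weighted arithmetic mean of the probabilities; a one-function Python sketch (ours):

```python
def adjust(H, H_target, alpha):
    """Blend every HPME of H toward H_target with weight alpha (Step 4)."""
    n, d = len(H), len(H[0][0])
    return [[[(H[i][j][s][0] ** alpha * H_target[i][j][s][0] ** (1 - alpha),
               alpha * H[i][j][s][1] + (1 - alpha) * H_target[i][j][s][1])
              for s in range(d)] for j in range(n)] for i in range(n)]
```

The same function also serves for the β-blend toward the group HPMPR in Algorithm 3 below.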
The proposed Algorithm 2 will be convergent as the following result shows.
Theorem 5.
Consider an HPMPR H, a consistency tolerance value $CI_r > 1$, and the sequence $\{H^{(t)}\}$ of HPMPRs generated by Algorithm 2. Then,
$$CI(H^{(t+1)}) < CI(H^{(t)}) \ \text{ for all } t, \qquad \text{and} \qquad \lim_{t\to\infty} CI(H^{(t)}) \le CI_r.$$
(please see Appendix A for Proof 5).
The choice of the parameters n, d, and α directly affects the performance of Algorithm 2. To measure this, 1000 random HPMPRs were generated and their acceptably consistent HPMPRs were computed by Algorithm 2; Table 2 shows the average number of iterations of Algorithm 2 for different parameter values. It is apparent that Algorithm 2 is quite efficient. As for the effects of the parameters, increasing α leads to more iterations of Algorithm 2, so a small α is suggested. Additionally, the number of iterations of Algorithm 2 is inversely proportional to the consistency tolerance value $CI_r$.

4. Consensus Measure in Group Decision Making

For group decision-making, let $X = \{x_1, x_2, \ldots, x_n\}$ be the set of alternatives, and let $E = \{e_1, e_2, e_3, \ldots, e_m\}$ be the set of decision-makers providing their preferences in the HPMPRs $H_1 = (h_{ij,1})$, $H_2 = (h_{ij,2})$, ..., $H_m = (h_{ij,m})$, respectively. Additionally, let $\omega = (\omega_1, \omega_2, \ldots, \omega_m)$ be the importance/weight vector of the decision-makers in normalized form, i.e., $\sum_{i=1}^{m} \omega_i = 1$. Algorithm 2 solves the problem of consistency. In order to make a group decision, the following aggregation operator is defined, which will be used to fuse all the respective NHPMPRs of the decision-makers.
Definition 11.
Take a collection of HPMEs $h_i$ (i = 1, 2, 3, ..., n) with equal lengths and a weight vector of HPMEs $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_n)$ in normalized form, that is, $\sum_{i=1}^{n} \lambda_i = 1$. The hesitant probabilistic multiplicative weighted geometric (HPMWG) operator is defined as
$$HPMWG(h_1, h_2, \ldots, h_n) = \bigotimes_{i=1}^{n} h_i^{\lambda_i} = \bigcup_{\gamma_1^{\sigma(s)}|p_1^{\sigma(s)} \in h_1,\ \ldots,\ \gamma_n^{\sigma(s)}|p_n^{\sigma(s)} \in h_n} \left\{ \prod_{i=1}^{n} \left( \gamma_i^{\sigma(s)} \right)^{\lambda_i} \ \Big|\ \sum_{i=1}^{n} \lambda_i\, p_i^{\sigma(s)} \right\}. \tag{28}$$
Additionally, if we take the weight vector $\lambda = (1/n, 1/n, \ldots, 1/n)$, then the above operator reduces to the hesitant probabilistic multiplicative geometric (HPMG) operator
$$HPMG(h_1, h_2, \ldots, h_n) = \bigotimes_{i=1}^{n} h_i^{\frac{1}{n}} = \bigcup_{\gamma_1^{\sigma(s)}|p_1^{\sigma(s)} \in h_1,\ \ldots,\ \gamma_n^{\sigma(s)}|p_n^{\sigma(s)} \in h_n} \left\{ \prod_{i=1}^{n} \left( \gamma_i^{\sigma(s)} \right)^{\frac{1}{n}} \ \Big|\ \frac{1}{n} \sum_{i=1}^{n} p_i^{\sigma(s)} \right\}. \tag{29}$$
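A minimal Python sketch (ours) of the HPMWG operator of Equation (28), with each HPME again a list of (degree, probability) pairs of common length:

```python
from math import prod  # Python 3.8+

def hpmwg(hs, weights):
    """Element-wise weighted geometric fusion of HPMEs (Equation (28))."""
    d = len(hs[0])
    return [(prod(h[s][0] ** w for h, w in zip(hs, weights)),
             sum(w * h[s][1] for h, w in zip(hs, weights)))
            for s in range(d)]
```

With `weights = [1 / len(hs)] * len(hs)`, this is the HPMG operator of Equation (29).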
Theorem 6.
Assume that $H_k = (h_{ij,k})_{n\times n}$ (k = 1, 2, ..., m) are HPMPRs given by the decision-makers, and let $\omega = (\omega_1, \omega_2, \ldots, \omega_m)$ be the weight vector of the decision-makers with $\sum_{k=1}^{m} \omega_k = 1$. The NHPMPRs $\bar H_k$ are obtained with optimized parameters $\zeta_k$ (k = 1, 2, ..., m) through Equations (7) and (8), where $\#(\bar h_{ij,k}) = d = \max_{1 \le i < j \le n,\ 1 \le k \le m} \{\#(h_{ij,k})\}$ for all i, j = 1, 2, ..., n and k = 1, 2, ..., m. Then, the group HPMPR $H_g = (h_{ij,g})_{n\times n}$ calculated as
$$H_g = \left( \bigotimes_{k=1}^{m} \bar h_{ij,k}^{\omega_k} \right)_{n\times n} \tag{30}$$
is an HPMPR (please see Appendix A for Proof 6).
The next result shows that the aggregated group HPMPR H g obtained by Equation (30) is consistent, provided all the individual HPMPRs are consistent.
Theorem 7.
Consider HPMPRs $H_k = (h_{ij,k})_{n\times n}$ (k = 1, 2, 3, ..., m) given by the decision-makers, and let the group HPMPR $H_g$ be computed by Equation (30). Then,
$$CI(H_g) \le \max_{k} \{ CI(H_k) \}.$$
(please see Appendix A for Proof 7).
Now, consensus is another very important aspect of group decision-making. To deal with the consensus issue among all the individual decision-makers, the following consensus index is defined.
Definition 12.
Let H k = ( h i j , k ) n × n ( k = 1 , 2 , 3 , , m ) be m HPMPRs provided by decision-makers, and their NHPMPRs H ¯ k are obtained with optimized parameter ζ k ( k = 1 , 2 , , m ) . The group HPMPR H g is computed by Equation (30). Then, the group consensus index (GCI) of HPMPR H k is defined to be the deviation measured between H ¯ k and H g , i.e.,
G C I ( H k ) = D ( H ¯ k , H g ) .
The agreement between an individual decision-maker and the group decision is measured by the deviation between the individual HPMPR $H_k$ and the group HPMPR $H_g$. Therefore, $GCI(H_k) = 1$ means that the kth decision-maker is in full agreement with the group decision; otherwise, the smaller the value of $GCI(H_k)$, the better the consensus. In many real-life scenarios it is important to have consensus among all decision-makers; however, we sometimes have to live with a difference of opinion, and it is often hard to reach complete consensus, so a threshold value can be decided based on the practical nature of the problem to allow a difference of opinion to some extent.
Definition 13.
Let $H_k = (h_{ij,k})_{n\times n}$ (k = 1, 2, 3, ..., m) be m HPMPRs provided by the decision-makers, and assume their NHPMPRs $\bar H_k$ are obtained with optimized parameters $\zeta_k$ (k = 1, 2, ..., m). Furthermore, compute the group HPMPR $H_g$ and $GCI(H_k)$ (k = 1, 2, ..., m). Consider $GCI_r$ as a tolerance value of the consensus measure. Then, the HPMPR $H_k$ is said to be in acceptable consensus with the group HPMPR $H_g$ if
$$GCI(H_k) \le GCI_r.$$
However, it is possible that a decision-maker has an unacceptable difference of opinion with respect to the group decision; in this regard, an algorithm is proposed to modify the dissenter's HPMPR so as to reach an acceptable consensus.
The convergence of Algorithm 3 follows from the result below.
Algorithm 3 Algorithmic description to modify the dissenter’s HPMPR to reach an acceptable consensus.
  • Input: The tolerance value $GCI_r$ of the consensus measure, the HPMPRs $H_k = (h_{ij,k})_{n\times n}$ (k = 1, 2, 3, ..., m), and the parameter $\beta \in (0, 1)$.
    Output: The acceptable consensus HPMPRs $H_k^{(t)}$, the group consensus indexes $GCI(H_k^{(t)})$ (k = 1, 2, ..., m), the group HPMPR $H_g^{(t)}$, and the number of iterations t.
    Step 1: Compute $\bar H_k = (\bar h_{ij,k})_{n\times n}$ with optimized parameters $\zeta_k$. Let t = 0 and $H_k^{(0)} = (h_{ij,k}^{(0)})_{n\times n} = \bar H_k = (\bar h_{ij,k})_{n\times n}$.
    Step 2: Compute the group HPMPR $H_g^{(t)} = (h_{ij,g}^{(t)})_{n\times n}$ by fusing all individual HPMPRs $H_k^{(t)}$ (k = 1, 2, ..., m) according to Equation (30), where
    $$h_{ij,g}^{(t)} = \bigotimes_{k=1}^{m} \left( h_{ij,k}^{(t)} \right)^{\omega_k}.$$
    Step 3: Calculate $GCI(H_k^{(t)}) = D(H_k^{(t)}, H_g^{(t)})$ (k = 1, 2, ..., m). If $GCI(H_k^{(t)}) \le GCI_r$ for all k = 1, 2, ..., m, then go to Step 5; otherwise, go to Step 4.
    Step 4: Let $H_k^{(t+1)} = (h_{ij,k}^{(t+1)})_{n\times n}$, where
    $$h_{ij,k}^{(t+1)} = \left( h_{ij,k}^{(t)} \right)^{\beta} \otimes \left( h_{ij,g}^{(t)} \right)^{1-\beta}.$$
    Additionally, let t = t + 1 , and go to Step 2.
    Step 5: Output the adjusted HPMPRs $H_k^{(t)} = (h_{ij,k}^{(t)})_{n\times n}$, the group consensus indexes $GCI(H_k^{(t)})$ (k = 1, 2, ..., m), the group HPMPR $H_g^{(t)}$, and the number of iterations t.
    Step 6: End.
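Putting the pieces together, the consensus loop of Algorithm 3 can be sketched as follows (ours), reusing the illustrative `hpmwg`, `deviation_degree`, and `adjust` functions from the earlier sketches; the β-blend toward the group HPMPR is exactly Step 4, applied here to every relation, and the iteration cap is only a safeguard.

```python
def algorithm3(Hs, weights, gci_r, beta, max_iter=1000):
    """Return (adjusted HPMPRs, group HPMPR, iterations); sketch only."""
    n = len(Hs[0])
    for t in range(max_iter + 1):
        # Step 2: fuse the individual relations into the group HPMPR
        Hg = [[hpmwg([H[i][j] for H in Hs], weights)
               for j in range(n)] for i in range(n)]
        # Step 3: stop once every decision-maker is within tolerance
        if all(deviation_degree(H, Hg) <= gci_r for H in Hs):
            return Hs, Hg, t
        # Step 4: move each relation toward the group decision
        Hs = [adjust(H, Hg, beta) for H in Hs]
    return Hs, Hg, max_iter
```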
Theorem 8.
Consider HPMPRs $H_k = (h_{ij,k})_{n\times n}$ (k = 1, 2, 3, ..., m) given by the decision-makers, and let $\{H_k^{(t)}\}$ be the sequence obtained from Algorithm 3. Then,
$$GCI\left(H_k^{(t+1)}\right) < GCI\left(H_k^{(t)}\right) \quad \text{for all } k = 1, 2, \ldots, m.$$
(please see Appendix A for Proof 8).
Now, 1000 random sets of m HPMPRs are generated, and Algorithm 3 is applied to develop consensus. The average number of iterations of Algorithm 3 is presented in Table 3 for different values of the parameters. The analysis of Table 3 suggests that increasing the value of the parameter β adversely affects the number of iterations of Algorithm 3; thus, the value of β should be small. More iterations are needed to develop consensus when $GCI_r$ is nearer to 1. Furthermore, Algorithm 3 does not disturb the consistency of the HPMPRs; i.e., if the individual HPMPRs $H_k = (h_{ij,k})_{n\times n}$ (k = 1, 2, 3, ..., m) are consistent, then the adjusted HPMPRs $H_k^{(t)} = (h_{ij,k}^{(t)})_{n\times n}$ output by Algorithm 3 are also consistent.
Theorem 9.
Consider HPMPRs $H_k = (h_{ij,k})_{n\times n}$ (k = 1, 2, 3, ..., m) given by the decision-makers, and let $\{H_k^{(t)}\}$ and $\{H_g^{(t)}\}$ be the sequences obtained from Algorithm 3. If $\max_k \{CI(H_k^{(t)})\} \le CI_r$, then
$$\max_k \{CI(H_k^{(t+1)})\} \le \max_k \{CI(H_k^{(t)})\} \le CI_r.$$
(please see Appendix A for Proof 9).

5. Decision Support Model for Group Decision Making with HPMPRs

Now, the issues of consistency and consensus have been addressed: Algorithms 1–3 provide consistent individual HPMPRs and a group HPMPR with agreement among the decision-makers. To obtain the final standing of the alternatives, the ith row of the group HPMPR is first aggregated for each alternative $x_i$ by the HPMG operator (Equation (29)), and the aggregated HPMEs are then ordered according to their scores and deviations. The complete decision model is presented as Algorithm 4; a flowchart of the decision-making model is shown in Figure 7.
Algorithm 4 Algorithmic description of the complete decision model.
  • Input: The HPMPRs H k = ( h i j , k ) n × n ( k = 1 , 2 , 3 , , m ) made by decision-makers, the weight vector ω = ( ω 1 , ω 2 , , ω m ) of decision-makers in a normalized form, consistency tolerance value C I r , group consensus tolerance value G C I r , the maximum number of iterations allowed t max , and the parameters 0 < α , β < 1 for modification.
    Output: The final standings of all the alternatives.
  • Step 1: Compute NHPMPRs H ¯ k = ( h ¯ i j , k ) n × n with optimized parameters ζ k . Let t = 0 and H k ( 0 ) = ( h i j , k ( 0 ) ) n × n = H ¯ k = ( h ¯ i j , k ) n × n .
    Step 2: Calculate the consistent HPMPRs $\tilde H_k^{(t)} = (\tilde h_{ij,k}^{(t)})_{n\times n}$ by Algorithm 1 and the consistency indexes $CI(H_k^{(t)})$, where
    $$CI(H_k^{(t)}) = \frac{1}{dn^2} \sum_{i=1}^{n} \sum_{j=1}^{n} \sum_{s=1}^{d} \bar h_{ij,k}^{(t)\sigma(s)} \times \tilde h_{ji,k}^{(t)\sigma(s)} + \frac{2}{nd(n+1)} \sum_{i=1}^{n} \sum_{j \ge i}^{n} \sum_{s=1}^{d} \left| \bar p_{ij,k}^{(t)\sigma(s)} - \tilde p_{ij,k}^{(t)\sigma(s)} \right|. \tag{37}$$
    Step 3: If $CI(H_k^{(t)}) \le CI_r$ for all k = 1, 2, ..., m, then go to Step 5; otherwise, go to Step 4.
    Step 4: Form the adjusted HPMPRs $H_k^{(t+1)} = (h_{ij,k}^{(t+1)})_{n\times n}$, where
    $$h_{ij,k}^{(t+1)} = \begin{cases} h_{ij,k}^{(t)}, & CI(H_k^{(t)}) \le CI_r, \\ \left( h_{ij,k}^{(t)} \right)^{\alpha} \otimes \left( \tilde h_{ij,k}^{(t)} \right)^{1-\alpha}, & CI(H_k^{(t)}) > CI_r. \end{cases}$$
    Let t = t + 1 . Now return to Step 2.
    Step 5: Apply the HPMWG operator of Equation (28) to the individual HPMPRs $H_k^{(t)} = (h_{ij,k}^{(t)})_{n\times n}$ (k = 1, 2, ..., m) to obtain the group HPMPR $H_g^{(t)} = (h_{ij,g}^{(t)})_{n\times n}$, where
    $$h_{ij,g}^{(t)} = \bigotimes_{k=1}^{m} \left( h_{ij,k}^{(t)} \right)^{\omega_k}.$$
    Step 6: Calculate $GCI(H_k^{(t)}) = D(H_k^{(t)}, H_g^{(t)})$ (k = 1, 2, ..., m). If $GCI(H_k^{(t)}) \le GCI_r$ for all k = 1, 2, ..., m, or $t > t_{\max}$, then go to Step 8; otherwise, go to Step 7.
    Step 7: Let $H_k^{(t+1)} = (h_{ij,k}^{(t+1)})_{n\times n}$, where
    $$h_{ij,k}^{(t+1)} = \left( h_{ij,k}^{(t)} \right)^{\beta} \otimes \left( h_{ij,g}^{(t)} \right)^{1-\beta}.$$
    Additionally, let t = t + 1 and go to Step 5.
    Step 8: Aggregate each ith row of the HPMPR $H_g^{(t)}$ by the HPMG operator of Equation (29),
    $$h_{i,g} = HPMG\left( h_{i1,g}^{(t)}, h_{i2,g}^{(t)}, \ldots, h_{in,g}^{(t)} \right) = \bigotimes_{j=1}^{n} \left( h_{ij,g}^{(t)} \right)^{\frac{1}{n}},$$
    so that the collective multiplicative preference degrees of alternative $x_i$ over all other alternatives (i = 1, 2, ..., n) are obtained.
    Step 9: Compute the scores $s(h_{i,g})$ and deviations $d(h_{i,g})$ as follows:
    $$s(h_{i,g}) = \left( \prod_{s=1}^{d} h_{i,g}^{\sigma(s)}\, p_{i,g}^{\sigma(s)} \right)^{\frac{1}{d}},$$
    $$d(h_{i,g}) = \sum_{s=1}^{d} \left( h_{i,g}^{\sigma(s)}\, p_{i,g}^{\sigma(s)} - s(h_{i,g}) \right)^2, \qquad i = 1, 2, \ldots, n.$$
    Step 10: Determine the final standings of all the alternatives by comparing their score and deviation values. Make and output the ranking vector.
    Step 11: End.
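Finally, Steps 8–10 of Algorithm 4 reduce to a row-wise HPMG aggregation followed by the score-and-deviation ordering; a short sketch (ours), reusing the illustrative `hpmwg`, `score`, and `deviation` helpers from the earlier sketches:

```python
def rank_alternatives(Hg):
    """Rank alternatives from the group HPMPR, best first (Steps 8-10)."""
    n = len(Hg)
    rows = [hpmwg(Hg[i], [1.0 / n] * n) for i in range(n)]   # Step 8: HPMG
    # Steps 9-10: larger score first; ties broken by smaller deviation
    return sorted(range(n), key=lambda i: (-score(rows[i]), deviation(rows[i])))
```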

6. Case Study

The proposed decision-making model will be applied to a practical problem of Forex investment.
Example 5.
The Flagship Investment Company (FIC) is known for investment plans in different commodities on the Forex market, and investors hire the FIC for profitable investment plans. An investor is interested in investing in four commodities (oil, gold, wheat, and copper), denoted as $x_1$, $x_2$, $x_3$, and $x_4$, respectively, and approached the FIC to indicate which commodity is the most profitable to invest in. The FIC summons a committee of four economic experts $e_1$, $e_2$, $e_3$, and $e_4$. Each expert provides his preferences as an HPMPR, depending upon many factors, such as previous market rates, market trends, possible future decisions of different regulatory bodies, economic stability, and the global peace situation. The complex nature of the Forex market requires tools that model the vagueness and produce realistic results. The HPMPR allows the economic experts to express their hesitancy and to provide the probabilities of the different preference degrees, which are bound to differ because of the factors discussed above. Based on experience and economic knowledge, the weight vector of the experts is $\omega = (0.1, 0.4, 0.2, 0.3)^{T}$. The economic expert $e_k$ provides his HPMPR $H_k$ (k = 1, 2, 3, 4) as follows.
$$H_1 = \begin{pmatrix}
\{1|1\} & \{0.2|0.2979,\, 8|0.7021\} & \{0.15|0.6622,\, 7.5|0.3378\} & \{0.25|0.0636,\, 5|0.9364\} \\
\{5|0.2979,\, 0.125|0.7021\} & \{1|1\} & \{0.31|0.2236,\, 5.09|0.7764\} & \{0.4|0.4313,\, 8.88|0.5687\} \\
\{6.6667|0.6622,\, 0.1333|0.3378\} & \{3.2258|0.2236,\, 0.1965|0.7764\} & \{1|1\} & \{0.22|0.7145,\, 3.68|0.2855\} \\
\{4|0.0636,\, 0.2|0.9364\} & \{2.5|0.4313,\, 0.1126|0.5687\} & \{4.5455|0.7145,\, 0.2717|0.2855\} & \{1|1\}
\end{pmatrix}$$
$$H_2 = \begin{pmatrix}
\{1|1\} & \{0.15|0.5018,\, 0.55|0.3181,\, 6.18|0.18\} & \{0.4|0.3339,\, 5.9|0.6322,\, 7.1|0.0338\} & \{0.231|0.2412,\, 7.35|0.7588\} \\
\{6.6667|0.5018,\, 1.8181|0.3181,\, 0.1618|0.18\} & \{1|1\} & \{0.18|0.3627,\, 3.65|0.1892,\, 5.01|0.4481\} & \{0.3|1\} \\
\{2.5|0.3339,\, 0.1695|0.6322,\, 0.1235|0.0338\} & \{5.5556|0.3627,\, 0.274|0.1892,\, 0.1996|0.4481\} & \{1|1\} & \{0.35|0.7060,\, 8.56|0.2940\} \\
\{4.329|0.2412,\, 0.1361|0.7588\} & \{0.3333|1\} & \{2.8571|0.7060,\, 0.1168|0.2940\} & \{1|1\}
\end{pmatrix}$$
$$H_3 = \begin{pmatrix}
\{1|1\} & \{0.33|0.1317,\, 0.65|0.4745,\, 6.18|0.3938\} & \{0.28|0.7453,\, 5.9|0.2547\} & \{0.16|0.2951,\, 7.35|0.7049\} \\
\{3.0303|0.1317,\, 1.5385|0.4745,\, 0.1618|0.3938\} & \{1|1\} & \{0.2010|0.1232,\, 3.65|0.8767\} & \{0.4|0.2398,\, 0.711|0.5387,\, 6.78|0.2214\} \\
\{3.5714|0.7453,\, 0.1695|0.2547\} & \{4.9751|0.1233,\, 0.274|0.8767\} & \{1|1\} & \{0.22|0.5612,\, 0.7|0.4388\} \\
\{6.25|0.2951,\, 0.1361|0.7049\} & \{2.5|0.2398,\, 1.4065|0.5387,\, 0.1475|0.2215\} & \{4.5455|0.5612,\, 1.4286|0.4388\} & \{1|1\}
\end{pmatrix}$$
$$H_4 = \begin{pmatrix}
\{1|1\} & \{0.29|0.629,\, 0.6|0.371\} & \{0.33|0.8924,\, 7.39|0.1076\} & \{0.27|0.2908,\, 1.35|0.5157,\, 6.1|0.1936\} \\
\{3.4483|0.629,\, 1.6667|0.371\} & \{1|1\} & \{0.19|0.7128,\, 0.64|0.2855,\, 3.65|0.0017\} & \{0.35|0.6610,\, 1.9|0.0808,\, 5.6|0.2582\} \\
\{3.0303|0.8924,\, 0.1353|0.1076\} & \{5.2632|0.7128,\, 1.5625|0.2855,\, 0.274|0.0017\} & \{1|1\} & \{0.27|0.4378,\, 0.66|0.163,\, 5.55|0.3992\} \\
\{3.7037|0.2908,\, 0.7407|0.5157,\, 0.1639|0.1936\} & \{2.8571|0.6610,\, 0.5263|0.0808,\, 0.1776|0.2582\} & \{3.7037|0.4378,\, 1.5151|0.163,\, 0.1801|0.3992\} & \{1|1\}
\end{pmatrix}.$$
For the illustration of our proposed decision support model (Algorithm 4), the step-by-step explanation and computations are given below.
Step 1 (Normalization): The NHPMPRs $H_k^{(0)} = \bar H_k$ (k = 1, 2, 3, 4) are calculated by Equation (7) as follows.
$$H_1^{(0)} = \begin{pmatrix}
\{1|0.3333,\, 1|0.3333,\, 1|0.3333\} & \{0.2|0.149,\, 0.2|0.149,\, 8|0.7021\} & \{0.15|0.3311,\, 0.15|0.3311,\, 7.5|0.3378\} & \{0.25|0.0318,\, 0.25|0.0318,\, 1.5|0.9364\} \\
\{5|0.149,\, 5|0.149,\, 0.125|0.7020\} & \{1|0.3333,\, 1|0.3333,\, 1|0.3333\} & \{0.31|0.1118,\, 0.31|0.1118,\, 5.09|0.7764\} & \{0.4|0.2156,\, 0.4|0.2156,\, 8.88|0.5687\} \\
\{6.6667|0.3311,\, 6.6667|0.3311,\, 0.1333|0.3378\} & \{3.2258|0.1118,\, 3.2258|0.1118,\, 0.1965|0.7764\} & \{1|0.3333,\, 1|0.3333,\, 1|0.3333\} & \{0.22|0.3573,\, 0.22|0.3573,\, 3.68|0.2855\} \\
\{4|0.0318,\, 4|0.0318,\, 0.6667|0.9364\} & \{2.5|0.2156,\, 2.5|0.2156,\, 0.1126|0.5687\} & \{4.5455|0.3573,\, 4.5455|0.3573,\, 0.2717|0.2855\} & \{1|0.3333,\, 1|0.3333,\, 1|0.3333\}
\end{pmatrix}$$
$$H_2^{(0)} = \begin{pmatrix}
\{1|0.3333,\, 1|0.3333,\, 1|0.3333\} & \{0.15|0.5018,\, 0.55|0.3181,\, 6.18|0.18\} & \{0.4|0.3339,\, 5.9|0.6322,\, 7.1|0.0338\} & \{0.231|0.1206,\, 0.231|0.1206,\, 7.35|0.7588\} \\
\{6.6667|0.5018,\, 1.8182|0.3181,\, 0.1618|0.18\} & \{1|0.3333,\, 1|0.3333,\, 1|0.3333\} & \{0.18|0.3627,\, 3.65|0.1892,\, 5.01|0.4481\} & \{0.3|0.3333,\, 0.3|0.3333,\, 0.3|0.3333\} \\
\{5.2632|0.3339,\, 0.1695|0.6322,\, 0.1408|0.0338\} & \{5.5556|0.3626,\, 0.274|0.1892,\, 0.1996|0.4481\} & \{1|0.3333,\, 1|0.3333,\, 1|0.3333\} & \{0.35|0.353,\, 0.35|0.353,\, 8.56|0.294\} \\
\{4.329|0.1206,\, 4.329|0.1206,\, 0.1361|0.7588\} & \{0.3333|0.3333,\, 0.3333|0.3333,\, 0.3333|0.3333\} & \{2.8571|0.353,\, 2.8571|0.353,\, 0.1168|0.294\} & \{1|0.3333,\, 1|0.3333,\, 1|0.3333\}
\end{pmatrix}$$
$$H_3^{(0)} = \begin{pmatrix}
\{1|0.3333,\, 1|0.3333,\, 1|0.3333\} & \{0.33|0.1317,\, 0.65|0.4744,\, 3.18|0.3938\} & \{0.28|0.3727,\, 0.28|0.3727,\, 5.9|0.2547\} & \{0.16|0.1476,\, 0.16|0.1476,\, 7.35|0.7049\} \\
\{3.0303|0.1317,\, 1.5385|0.4744,\, 0.3145|0.3938\} & \{1|0.3333,\, 1|0.3333,\, 1|0.3333\} & \{0.201|0.0616,\, 0.201|0.0616,\, 3.65|0.8767\} & \{0.14|0.2398,\, 0.711|0.5387,\, 2.78|0.2215\} \\
\{3.5714|0.3727,\, 3.5714|0.3727,\, 0.1695|0.2547\} & \{4.9751|0.0616,\, 4.9751|0.0616,\, 0.274|0.8767\} & \{1|0.3333,\, 1|0.3333,\, 1|0.3333\} & \{0.22|0.2806,\, 0.22|0.2806,\, 0.7|0.4388\} \\
\{6.25|0.1476,\, 6.25|0.1476,\, 0.1361|0.7049\} & \{2.5|0.239,\, 1.4065|0.5387,\, 0.3597|0.2214\} & \{4.5455|0.2807,\, 4.5455|0.2807,\, 1.4286|0.4388\} & \{1|0.3333,\, 1|0.3333,\, 1|0.3333\}
\end{pmatrix}$$
$$H_4^{(0)} = \begin{pmatrix}
\{1|0.3333,\, 1|0.3333,\, 1|0.3333\} & \{0.29|0.3145,\, 0.29|0.3145,\, 0.6|0.371\} & \{0.33|0.4462,\, 0.33|0.4462,\, 7.39|0.1076\} & \{0.27|0.2908,\, 1.35|0.5157,\, 6.1|0.1936\} \\
\{3.4483|0.3145,\, 3.4483|0.3145,\, 1.6667|0.371\} & \{1|0.3333,\, 1|0.3333,\, 1|0.3333\} & \{0.19|0.7128,\, 0.64|0.2855,\, 3.65|0.0017\} & \{0.35|0.661,\, 1.9|0.0808,\, 5.63|0.2581\} \\
\{3.0303|0.4462,\, 3.0303|0.4462,\, 0.1353|0.1076\} & \{5.2632|0.7128,\, 1.5625|0.2855,\, 0.274|0.0017\} & \{1|0.3333,\, 1|0.3333,\, 1|0.3333\} & \{0.27|0.4378,\, 0.66|0.163,\, 5.55|0.3992\} \\
\{3.7037|0.2908,\, 0.7407|0.5157,\, 0.1639|0.1936\} & \{2.8571|0.661,\, 0.5263|0.0808,\, 0.1776|0.2581\} & \{3.7037|0.4378,\, 1.5152|0.163,\, 0.1802|0.3992\} & \{1|0.3333,\, 1|0.3333,\, 1|0.3333\}
\end{pmatrix}.$$
Step 2: The consistent HPMPRs $\tilde H_k^{(0)} = (\tilde h_{ij,k}^{(0)})_{n\times n}$ are computed by Algorithm 1, and the consistency indexes are calculated by Equation (37) as follows:
$$CI(H_1^{(0)}) = 1.3166, \quad CI(H_2^{(0)}) = 1.3457, \quad CI(H_3^{(0)}) = 1.2718, \quad CI(H_4^{(0)}) = 1.2302.$$
Step 3: The consistency tolerance value $CI_r$ is set to 1.01, so all the HPMPRs $H_k^{(0)}$ (k = 1, 2, 3, 4) need to be adjusted. Therefore, we go to Step 4 and then return to Step 2.
Steps 2–4 are repeated to obtain the following acceptably consistent HPMPRs:
$$H_1^{(2)} = \begin{pmatrix}
\{1|0.2293,\, 1|0.2293,\, 1|0.5414\} & \{0.33|0.2265,\, 0.33|0.2265,\, 2.7289|0.547\} & \{0.199|0.237,\, 0.199|0.237,\, 5.5446|0.5261\} & \{0.1142|0.2288,\, 0.1142|0.2288,\, 8.0131|0.5425\} \\
\{3.0306|0.2265,\, 3.0306|0.2265,\, 0.4937|0.547\} & \{1|0.2274,\, 1|0.2274,\, 1|0.5452\} & \{0.5977|0.2338,\, 0.5977|0.2338,\, 2.7838|0.5323\} & \{0.3423|0.2296,\, 0.3423|0.2296,\, 4.1113|0.5407\} \\
\{5.0259|0.237,\, 5.0259|0.237,\, 0.1804|0.5261\} & \{1.6731|0.2338,\, 1.6731|0.2338,\, 0.3592|0.5323\} & \{1|0.2447,\, 1|0.2447,\, 1|0.5106\} & \{0.5626|0.2397,\, 0.5626|0.2397,\, 1.4879|0.5206\} \\
\{8.7537|0.2288,\, 8.7537|0.2288,\, 0.1248|0.5425\} & \{2.9215|0.2296,\, 2.9215|0.2296,\, 0.2432|0.5407\} & \{1.7773|0.2397,\, 1.7773|0.2397,\, 0.6721|0.5206\} & \{1|0.2343,\, 1|0.2343,\, 1|0.5315\}
\end{pmatrix}$$
$$H_2^{(2)} = \begin{pmatrix}
\{1|0.3272,\, 1|0.341,\, 1|0.3318\} & \{0.438|0.3466,\, 0.7806|0.324,\, 6.0359|0.3294\} & \{0.2323|0.3341,\, 2.6274|0.3516,\, 6.0602|0.3144\} & \{0.1361|0.3142,\, 0.3655|0.3196,\, 8.8167|0.3663\} \\
\{2.2821|0.3466,\, 1.2811|0.324,\, 0.1657|0.3294\} & \{1|0.3625,\, 1|0.3073,\, 1|0.3301\} & \{0.5161|0.352,\, 3.3299|0.3303,\, 1.0189|0.3177\} & \{0.3056|0.334,\, 0.4667|0.3048,\, 1.4407|0.3612\} \\
\{4.3039|0.3341,\, 0.3806|0.3516,\, 0.165|0.3144\} & \{1.9374|0.352,\, 0.3003|0.3303,\, 0.9814|0.3177\} & \{1|0.3409,\, 1|0.3561,\, 1|0.303\} & \{0.583|0.3233,\, 0.1423|0.3295,\, 1.4859|0.3472\} \\
\{7.3458|0.3142,\, 2.7359|0.3196,\, 0.1134|0.3663\} & \{3.2719|0.334,\, 2.1428|0.3049,\, 0.6941|0.3612\} & \{1.7153|0.3233,\, 7.0326|0.3295,\, 0.673|0.3472\} & \{1|0.3054,\, 1|0.3024,\, 1|0.3922\}
\end{pmatrix}$$
$$H_3^{(2)} = \begin{pmatrix}
\{1|0.2437,\, 1|0.3263,\, 1|0.4301\} & \{0.4943|0.2256,\, 0.6037|0.3335,\, 2.5695|0.4408\} & \{0.2483|0.2486,\, 0.2936|0.3062,\, 8.0453|0.4452\} & \{0.1205|0.243,\, 0.1643|0.3224,\, 6.6707|0.4347\} \\
\{2.0231|0.2256,\, 1.6564|0.3335,\, 0.3892|0.4408\} & \{1|0.2116,\, 1|0.338,\, 1|0.4504\} & \{0.4951|0.2295,\, 0.4827|0.3089,\, 3.1523|0.4615\} & \{0.2432|0.2279,\, 0.275|0.3321,\, 2.6009|0.44\} \\
\{4.0276|0.2486,\, 3.4055|0.3062,\, 0.1243|0.4452\} & \{2.0197|0.2295,\, 2.0717|0.3089,\, 0.3172|0.4615\} & \{1|0.2529,\, 1|0.2853,\, 1|0.4618\} & \{0.3706|0.2489,\, 0.5541|0.3032,\, 0.4806|0.4479\} \\
\{8.3015|0.243,\, 6.0879|0.3224,\, 0.1499|0.4347\} & \{4.1113|0.227,\, 3.6366|0.3321,\, 0.3845|0.44\} & \{2.0809|0.2489,\, 1.8048|0.3032,\, 1.213|0.4479\} & \{1|0.246,\, 1|0.3222,\, 1|0.4318\}
\end{pmatrix}$$
$$H_4^{(2)} = \begin{pmatrix}
\{1|0.3845,\, 1|0.3639,\, 1|0.2515\} & \{0.5754|0.4309,\, 0.4174|0.3202,\, 0.9384|0.2489\} & \{0.2788|0.4256,\, 0.4495|0.3371,\, 3.4126|0.2373\} & \{0.1611|0.4088,\, 0.6885|0.3279,\, 8.4457|0.2632\} \\
\{1.738|0.4309,\, 2.3956|0.3202,\, 1.0656|0.2489\} & \{1|0.4777,\, 1|0.2768,\, 1|0.2455\} & \{0.4759|0.4748,\, 1.0707|0.292,\, 3.5925|0.2332\} & \{0.2772|0.4591,\, 1.6347|0.28,\, 8.9467|0.2608\} \\
\{3.5869|0.4256,\, 2.2246|0.3371,\, 0.293|0.2373\} & \{2.1011|0.4748,\, 0.934|0.292,\, 0.2783|0.2332\} & \{1|0.4643,\, 1|0.3081,\, 1|0.2276\} & \{0.5714|0.4502,\, 1.5040|0.2965,\, 2.5225|0.2533\} \\
\{6.2082|0.4088,\, 1.4524|0.3279,\, 0.1184|0.2632\} & \{3.6072|0.4591,\, 0.6117|0.28,\, 0.1118|0.2608\} & \{1.7501|0.4502,\, 0.6649|0.2965,\, 0.3964|0.2533\} & \{1|0.434,\, 1|0.2883,\, 1|0.2777\}
\end{pmatrix}$$
as well as the following consistent HPMPRs:
H̃_1^{(2)} =
[ {1 | 0.2304, 1 | 0.2304, 1 | 0.5392}    {0.3316 | 0.2299, 0.3316 | 0.2299, 1.9978 | 0.5403}    {0.1995 | 0.2346, 0.1995 | 0.2346, 5.5277 | 0.5308}    {0.568 | 0.2317, 0.568 | 0.2317, 1.4744 | 0.5365} ]
[ {2.5034 | 0.2299, 2.5034 | 0.2299, 0.3705 | 0.5403}    {1 | 0.2293, 1 | 0.2293, 1 | 0.5413}    {0.6017 | 0.2341, 0.6017 | 0.2341, 2.7669 | 0.5319}    {0.3417 | 0.2312, 0.3417 | 0.2312, 4.0794 | 0.5376} ]
[ {5.0116 | 0.2346, 5.0116 | 0.2346, 0.1339 | 0.5308}    {1.6621 | 0.2341, 1.6621 | 0.2341, 0.3614 | 0.5318}    {1 | 0.2388, 1 | 0.2388, 1 | 0.5224}    {0.568 | 0.2359, 0.568 | 0.2359, 1.4744 | 0.5281} ]
[ {8.8233 | 0.2317, 8.8233 | 0.2317, 0.1227 | 0.5365}    {2.9261 | 0.2312, 2.9261 | 0.2312, 0.2451 | 0.5376}    {1.7606 | 0.2359, 1.7606 | 0.2359, 0.6783 | 0.5281}    {1 | 0.2331, 1 | 0.2331, 1 | 0.5338} ]
H̃_2^{(2)} =
[ {1 | 0.3305, 1 | 0.3340, 1 | 0.3355}    {0.443 | 0.3396, 0.7833 | 0.3253, 6.0345 | 0.335}    {0.2311 | 0.334, 2.606 | 0.338, 6.0505 | 0.328}    {0.1354 | 0.3249, 0.3672 | 0.3241, 8.8329 | 0.3511} ]
[ {2.2575 | 0.3396, 1.2766 | 0.3253, 0.1657 | 0.335}    {1 | 0.3488, 1 | 0.3166, 1 | 0.3346}    {0.5217 | 0.3432, 3.3268 | 0.3292, 1.0026 | 0.3276}    {0.3057 | 0.334, 0.4688 | 0.3153, 1.4637 | 0.3507} ]
[ {4.3276 | 0.334, 0.3837 | 0.338, 0.1653 | 0.328}    {1.9169 | 0.3432, 0.3006 | 0.3292, 0.3004 | 0.3276}    {1 | 0.3376, 1 | 0.3417, 1 | 0.3206}    {0.584 | 0.3284, 0.1409 | 0.328, 1.4598 | 0.3436} ]
[ {7.3851 | 0.3249, 2.7233 | 0.3241, 0.1132 | 0.3511}    {3.2713 | 0.33, 2.1332 | 0.3153, 0.6832 | 0.3507}    {1.7065 | 0.328, 7.0969 | 0.328, 0.685 | 0.3436}    {1 | 0.3192, 1 | 0.3141, 1 | 0.3667} ]
H̃_3^{(2)} =
[ {1 | 0.2402, 1 | 0.3221, 1 | 0.4377}    {0.4963 | 0.2319, 0.6033 | 0.3251, 2.564 | 0.4429}    {0.248 | 0.2426, 0.2938 | 0.3115, 8.0706 | 0.4459}    {0.1201 | 0.2408, 0.1643 | 0.321, 6.6642 | 0.4381} ]
[ {2.0148 | 0.2319, 1.6576 | 0.3251, 0.39 | 0.4429}    {1 | 0.2237, 1 | 0.3281, 1 | 0.4481}    {0.4997 | 0.2343, 0.487 | 0.3145, 3.1477 | 0.4511}    {0.2420 | 0.2326, 0.2724 | 0.3241, 2.5992 | 0.4434} ]
[ {4.0325 | 0.2426, 3.4038 | 0.3115, 0.1239 | 0.4459}    {2.0014 | 0.2343, 2.0534 | 0.3145, 0.3177 | 0.4511}    {1 | 0.245, 1 | 0.3009, 1 | 0.4541}    {0.4844 | 0.2432, 0.5593 | 0.3105, 0.8257 | 0.4463} ]
[ {8.3253 | 0.2408, 6.0862 | 0.321, 0.1501 | 0.4381}    {4.132 | 0.2326, 3.6716 | 0.3241, 0.3847 | 0.4434}    {2.0646 | 0.2432, 1.788 | 0.3105, 1.211 | 0.4463}    {1 | 0.2414, 1 | 0.32, 1 | 0.4386} ]
H̃_4^{(2)} =
[ {1 | 0.4125, 1 | 0.3373, 1 | 0.2502}    {0.5794 | 0.4365, 0.419 | 0.3148, 0.9427 | 0.2487}    {0.2783 | 0.4331, 0.4509 | 0.3229, 3.386 | 0.2441}    {0.1602 | 0.4253, 0.6839 | 0.3178, 8.4735 | 0.257} ]
[ {1.726 | 0.4365, 2.3868 | 0.3148, 1.0608 | 0.2487}    {1 | 0.4606, 1 | 0.2922, 1 | 0.2471}    {0.4804 | 0.4572, 1.0763 | 0.3003, 3.5919 | 0.2425}    {0.2766 | 0.4493, 1.6323 | 0.2952, 8.9887 | 0.2554} ]
[ {3.593 | 0.4331, 2.2177 | 0.3229, 0.2953 | 0.2441}    {2.0817 | 0.4572, 0.9291 | 0.3003, 0.2784 | 0.2425}    {1 | 0.4537, 1 | 0.3084, 1 | 0.237}    {0.5757 | 0.4459, 1.5166 | 0.3033, 2.5025 | 0.2508} ]
[ {6.2407 | 0.4253, 1.4623 | 0.3177, 0.118 | 0.257}    {3.6157 | 0.4493, 0.6126 | 0.2952, 0.1113 | 0.2554}    {1.7369 | 0.4459, 0.6594 | 0.3033, 0.3996 | 0.2508}    {1 | 0.438, 1 | 0.2982, 1 | 0.2638} ]
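For reference, the consistent counterparts above are obtained element-wise: each multiplicative degree is replaced by the geometric mean over the chains $i \to l \to j$ and each probability by the average of the corresponding chain probabilities, the two constructions used in the proofs of Theorems 5 and 7 in the Appendix. A minimal sketch of one such construction step (our illustration; the array layout and names are assumptions, and in Algorithm 1 the probability averaging is iterated):

```python
import numpy as np

def consistent_step(H, P):
    """One construction step of the consistent counterpart of an HPMPR,
    assuming the forms used in the Appendix proofs:
      h~_ij = (prod_l h_il * h_lj)^(1/n)        (geometric mean of chains)
      p~_ij = (1/(2n)) * sum_l (p_il + p_lj)    (average of chain probabilities)
    H, P: arrays of shape (n, n, d) with the ordered degrees/probabilities."""
    H, P = np.asarray(H, float), np.asarray(P, float)
    n = H.shape[0]
    logH = np.log(H)
    # row sums give sum_l log h_il, column sums give sum_l log h_lj
    Ht = np.exp((logH.sum(axis=1)[:, None, :] + logH.sum(axis=0)[None, :, :]) / n)
    Pt = (P.sum(axis=1)[:, None, :] + P.sum(axis=0)[None, :, :]) / (2 * n)
    return Ht, Pt
```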
Step 5: By applying the HPMWG operator of Equation (28) to the individual HPMPRs $H_k^{(t)} = (h_{ij,k}^{(t)})_{n \times n}$ (k = 1, 2, …, m), we obtain the group HPMPR $H_g^{(t)} = (h_{ij,g}^{(t)})_{n \times n}$ as follows:
H_g^{(2)} =
[ {1 | 0.3179, 1 | 0.3338, 1 | 0.3483}    {0.4735 | 0.3357, 0.5638 | 0.315, 2.61 | 0.3493}    {0.2449 | 0.3347, 0.771 | 0.3267, 5.3507 | 0.3386}    {0.1373 | 0.3198, 0.3353 | 0.3136, 8.1532 | 0.3666} ]
[ {2.112 | 0.3357, 1.7736 | 0.315, 0.3831 | 0.3493}    {1 | 0.3534, 1 | 0.2963, 1 | 0.3503}    {0.5069 | 0.3525, 1.356 | 0.3049, 2.0609 | 0.3426}    {0.2868 | 0.3399, 0.5929 | 0.2953, 3.1143 | 0.3648} ]
[ {4.084 | 0.3347, 1.297 | 0.3267, 0.1869 | 0.3386}    {1.9725 | 0.3525, 0.7375 | 0.3049, 0.4852 | 0.3426}    {1 | 0.3507, 1 | 0.3164, 1 | 0.3329}    {0.5555 | 0.3381, 0.4346 | 0.3054, 1.5482 | 0.3565} ]
[ {7.2838 | 0.3198, 2.437 | 0.3136, 0.1227 | 0.3666}    {3.4868 | 0.3399, 1.6867 | 0.2953, 0.3211 | 0.3648}    {1.8001 | 0.3381, 2.3011 | 0.3054, 0.646 | 0.3565}    {1 | 0.325, 1 | 0.2953, 1 | 0.3797} ]
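As the proofs of Theorems 7 and 8 make explicit, this fusion acts element-wise: the group degree is the weighted geometric mean $h_{ij,g}^{\sigma(s)} = \prod_k (\bar{h}_{ij,k}^{\sigma(s)})^{\omega_k}$ and the group probability is the weighted arithmetic mean $p_{ij,g}^{\sigma(s)} = \sum_k \omega_k \bar{p}_{ij,k}^{\sigma(s)}$. A short sketch under that reading (the array layout is our assumption; $\omega_k$ are the DM weights fixed earlier in the case study):

```python
import numpy as np

def hpmwg(H_all, P_all, w):
    """H_all, P_all: shape (m, n, n, d) -- the m normalized HPMPRs;
    w: DM weights summing to 1. Returns group degrees and probabilities:
      h_ij,g = prod_k h_ij,k ** w_k,   p_ij,g = sum_k w_k * p_ij,k."""
    H_all, P_all = np.asarray(H_all, float), np.asarray(P_all, float)
    w = np.asarray(w, float)[:, None, None, None]
    Hg = np.exp((w * np.log(H_all)).sum(axis=0))   # weighted geometric mean
    Pg = (w * P_all).sum(axis=0)                   # weighted arithmetic mean
    return Hg, Pg
```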
Step 6: The group consensus indexes are
GCI(H_1^{(t)}) = 1.0451, GCI(H_2^{(t)}) = 1.0813, GCI(H_3^{(t)}) = 1.071, and GCI(H_4^{(t)}) = 1.0098.
Now, $GCI_r$ is set to 1.1, so all of the HPMPRs are in acceptable consensus with the group HPMPR. Therefore, Step 7 is skipped and we move on to Step 8.
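The acceptance test in Step 6 is a plain threshold comparison; a tiny illustrative check with the values above (the names are ours):

```python
gci = {"H1": 1.0451, "H2": 1.0813, "H3": 1.071, "H4": 1.0098}
gci_r = 1.1                                   # predefined consensus threshold

needs_adjustment = [k for k, v in gci.items() if v > gci_r]
print(needs_adjustment or "acceptable consensus reached")
```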
Step 8: Now, all the rows of the group HPMPR $H_g$ are aggregated by the HPMG operator of Equation (29) as follows:
h_1,g = {0.3552 | 0.327, 0.61789 | 0.3223, 3.2666 | 0.3507},
h_2,g = {0.7444 | 0.3454, 1.0927 | 0.3029, 1.2523 | 0.3517},
h_3,g = {1.4545 | 0.344, 0.8029 | 0.3133, 0.6121 | 0.3426},
h_4,g = {2.6003 | 0.3307, 1.8446 | 0.3024, 0.3994 | 0.3669}.
Step 9: By Equations (42) and (43), the scores and deviations of $h_{i,g}$ (i = 1, 2, 3, 4) are calculated as follows:
s(h_1,g) = 0.9742, s(h_2,g) = 1.0013, s(h_3,g) = 0.9646, and s(h_4,g) = 1.0563;
d(h_1,g) = 0.6118, d(h_2,g) = 0.1299, d(h_3,g) = 0.2142, and d(h_4,g) = 0.4881.
Step 10: The comparison of scores and deviations computed in Step 9 provides us with the following final preference ranking:
x_4 > x_2 > x_1 > x_3.
Step 11: End.
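The final standing in Step 10 can be reproduced directly from the Step 9 values: sorting by score already separates all four commodities here, and we assume (as a hedged convention, since no ties occur) that a smaller deviation would break a score tie:

```python
scores = {"x1": 0.9742, "x2": 1.0013, "x3": 0.9646, "x4": 1.0563}
deviations = {"x1": 0.6118, "x2": 0.1299, "x3": 0.2142, "x4": 0.4881}

# higher score first; assumed tie-breaker: smaller deviation
ranking = sorted(scores, key=lambda x: (-scores[x], deviations[x]))
print(" > ".join(ranking))   # x4 > x2 > x1 > x3
```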
Remark 2.
The proposed decision support model is divided into three main parts: Steps 1–4 deal with the consistency of the data provided by the economic experts; Steps 5–7 generate a reasonable consensus among them; and Steps 8–10 produce the final standings of all the commodities. Figure 8, Figure 9, Figure 10 and Figure 11 present the comparison of the area graphs of score values for the normalized, acceptably consistent, and consistent HPMPRs made from H_1, H_2, H_3, and H_4, respectively, by the end of Step 4.
The results of the case study point to copper as the best commodity for investment, whereas wheat is the least favorable. The Forex market is not fully predictable because of its complexity, and numerous factors can alter the profit margin of any commodity; with the help of probability theory, the HPMPR is better equipped to model and handle this vagueness.

Effects of Probability on Decision Making

To show how important a role the probabilities of the preference degrees play, two new versions are generated from the HPMPRs $H_k = (h_{ij,k})_{n \times n}$ (k = 1, 2, 3, 4) by changing only the probabilities and preserving the preference degrees:
H_1^* =
[ {1 | 1}    {0.2 | 0.2, 8 | 0.8}    {0.15 | 0.1, 7.5 | 0.9}    {0.25 | 0.231, 1.5 | 0.769} ]
[ {5 | 0.2979, 0.125 | 0.7021}    {1 | 1}    {0.31 | 0.15, 5.09 | 0.85}    {0.4 | 0.925, 8.88 | 0.075} ]
[ {6.6667 | 0.6622, 0.1333 | 0.3378}    {3.2258 | 0.2236, 0.1965 | 0.7764}    {1 | 1}    {0.22 | 0.1, 3.68 | 0.9} ]
[ {4 | 0.0636, 0.6667 | 0.9364}    {2.5 | 0.4313, 0.1126 | 0.5687}    {4.5455 | 0.7145, 0.2717 | 0.2855}    {1 | 1} ]
H_2^* =
[ {1 | 1}    {0.15 | 0.1, 0.55 | 0.1, 6.18 | 0.8}    {0.4 | 0.2, 5.9 | 0.1, 7.1 | 0.7}    {0.231 | 0.35, 7.35 | 0.65} ]
[ {6.6667 | 0.5018, 1.8182 | 0.3181, 0.1618 | 0.18}    {1 | 1}    {0.18 | 0.8, 3.65 | 0.1, 5.01 | 0.1}    {0.3 | 1} ]
[ {2.5 | 0.3339, 0.1695 | 0.6322, 0.1408 | 0.0338}    {5.5556 | 0.3628, 0.274 | 0.1892, 0.1996 | 0.4481}    {1 | 1}    {0.35 | 0.11, 8.56 | 0.89} ]
[ {4.329 | 0.2412, 0.1361 | 0.7588}    {0.3333 | 1}    {2.8571 | 0.706, 0.1168 | 0.294}    {1 | 1} ]
H_3^* =
[ {1 | 1}    {0.33 | 0.01, 0.65 | 0.23, 3.18 | 0.76}    {0.28 | 0.289, 5.9 | 0.711}    {0.16 | 0.0586, 7.35 | 0.9414} ]
[ {3.0303 | 0.1317, 1.5385 | 0.4745, 0.1618 | 0.3938}    {1 | 1}    {0.201 | 0.7, 3.65 | 0.3}    {0.4 | 0.8, 0.711 | 0.1, 6.78 | 0.1} ]
[ {3.5714 | 0.7453, 0.1695 | 0.2547}    {4.9751 | 0.1233, 0.274 | 0.8767}    {1 | 1}    {0.22 | 0.15, 0.7 | 0.85} ]
[ {6.25 | 0.2951, 0.1361 | 0.7049}    {2.5 | 0.8, 1.4065 | 0.1, 0.1475 | 0.1}    {4.5455 | 0.5612, 1.4286 | 0.4388}    {1 | 1} ]
H_4^* =
[ {1 | 1}    {0.29 | 0.05, 0.6 | 0.95}    {0.33 | 0.15, 7.39 | 0.85}    {0.27 | 0.1, 1.35 | 0.05, 6.1 | 0.85} ]
[ {3.4483 | 0.629, 1.6667 | 0.371}    {1 | 1}    {0.19 | 0.75, 0.64 | 0.1, 3.65 | 0.15}    {0.35 | 0.75, 1.9 | 0.1, 5.63 | 0.15} ]
[ {3.0303 | 0.8924, 0.1353 | 0.1076}    {5.2632 | 0.7128, 1.5625 | 0.2855, 0.274 | 0.0017}    {1 | 1}    {0.27 | 0.15, 0.66 | 0.1, 5.55 | 0.75} ]
[ {3.7037 | 0.2908, 0.7407 | 0.5157, 0.1639 | 0.1936}    {2.8571 | 0.661, 0.5263 | 0.0808, 0.1776 | 0.2582}    {3.7037 | 0.4378, 1.5152 | 0.163, 0.1802 | 0.3992}    {1 | 1} ]
H_1^{**} =
[ {1 | 1}    {0.2 | 0.9, 8 | 0.1}    {0.15 | 0.6622, 7.5 | 0.3378}    {0.25 | 0.9, 1.5 | 0.1} ]
[ {5 | 0.2979, 0.125 | 0.7021}    {1 | 1}    {0.31 | 0.05, 5.09 | 0.95}    {0.4 | 0.1, 8.88 | 0.9} ]
[ {6.6667 | 0.6622, 0.1333 | 0.3378}    {3.2258 | 0.2236, 0.1965 | 0.7764}    {1 | 1}    {0.22 | 0.1, 3.68 | 0.9} ]
[ {4 | 0.0636, 0.6667 | 0.9364}    {2.5 | 0.4313, 0.1126 | 0.5687}    {4.5455 | 0.7145, 0.2717 | 0.2855}    {1 | 1} ]
H_2^{**} =
[ {1 | 1}    {0.15 | 0.8, 0.55 | 0.1, 6.18 | 0.1}    {0.4 | 0.3339, 5.9 | 0.6322, 7.1 | 0.0338}    {0.231 | 0.8, 7.35 | 0.2} ]
[ {6.6667 | 0.5018, 1.8182 | 0.3181, 0.1618 | 0.18}    {1 | 1}    {0.18 | 0.05, 3.65 | 0.05, 5.01 | 0.9}    {0.3 | 1} ]
[ {2.5 | 0.3339, 0.1695 | 0.6322, 0.1408 | 0.0338}    {5.5556 | 0.3628, 0.274 | 0.1892, 0.1996 | 0.4481}    {1 | 1}    {0.35 | 0.1, 8.56 | 0.9} ]
[ {4.329 | 0.2412, 0.1361 | 0.7588}    {0.3333 | 1}    {2.8571 | 0.706, 0.1168 | 0.294}    {1 | 1} ]
H_3^{**} =
[ {1 | 1}    {0.33 | 0.01, 0.65 | 0.23, 3.18 | 0.76}    {0.28 | 0.289, 5.9 | 0.711}    {0.16 | 0.0586, 7.35 | 0.9414} ]
[ {3.0303 | 0.1317, 1.5385 | 0.4745, 0.1618 | 0.3938}    {1 | 1}    {0.201 | 0.7, 3.65 | 0.3}    {0.4 | 0.8, 0.711 | 0.1, 6.78 | 0.1} ]
[ {3.5714 | 0.7453, 0.1695 | 0.2547}    {4.9751 | 0.1233, 0.274 | 0.8767}    {1 | 1}    {0.22 | 0.15, 0.7 | 0.85} ]
[ {6.25 | 0.2951, 0.1361 | 0.7049}    {2.5 | 0.2398, 1.4065 | 0.5387, 0.1475 | 0.2215}    {4.5455 | 0.5612, 1.4286 | 0.4388}    {1 | 1} ]
H_4^{**} =
[ {1 | 1}    {0.29 | 0.9, 0.6 | 0.1}    {0.33 | 0.9, 7.39 | 0.1}    {0.27 | 0.290, 1.35 | 0.5157, 6.1 | 0.1936} ]
[ {3.4483 | 0.629, 1.6667 | 0.371}    {1 | 1}    {0.19 | 0, 0.64 | 0.1, 3.65 | 0.9}    {0.35 | 0.1, 1.9 | 0.1, 5.63 | 0.8} ]
[ {3.0303 | 0.8924, 0.1353 | 0.1076}    {5.2632 | 0.7128, 1.5625 | 0.2855, 0.274 | 0.0017}    {1 | 1}    {0.27 | 0.05, 0.66 | 0.05, 5.55 | 0.9} ]
[ {3.7037 | 0.2908, 0.7407 | 0.5157, 0.1639 | 0.1936}    {2.8571 | 0.661, 0.5263 | 0.0808, 0.1776 | 0.2582}    {3.7037 | 0.4378, 1.5152 | 0.163, 0.1802 | 0.3992}    {1 | 1} ]
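The mechanism behind Table 4 is visible on a single entry. The element comparing $x_1$ with $x_2$ in $H_1^{*}$ and $H_1^{**}$ carries the same degrees, 0.2 and 8, with the probability mass shifted between them; summarizing it by a probability-weighted geometric mean (our illustrative summary, not the score of Equation (42)) flips the aggregate preference:

```python
import numpy as np

def pw_geomean(degrees, probs):
    # exp(sum_s p_s * ln h_s): probability-weighted geometric mean
    return float(np.exp(np.dot(probs, np.log(degrees))))

degrees = [0.2, 8.0]                         # identical preference degrees
print(pw_geomean(degrees, [0.2, 0.8]))       # H_1*:  ~3.83 -> x1 favored over x2
print(pw_geomean(degrees, [0.9, 0.1]))       # H_1**: ~0.29 -> x2 favored over x1
```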
Remark 3.
The inspection of Table 4 reveals why it is important to consider the probabilities of the different preference degrees. Decision-makers often take decisions about future scenarios, as in our case study. The future cannot be predicted completely, and probability is a useful tool for dealing with such uncertainty; in real-life scenarios, the probabilities of different events can alter the decisions we make today.

7. Conclusions

The proposed group decision support model based on HPMPRs consists of three parts: a consistency-improving process, a consensus-reaching process, and a selection process. A consistency measure of an HPMPR has been defined for the consistency-improving process. For HPMPRs with unacceptable consistency, an optimization method is proposed to improve the consistency until the HPMPRs reach an acceptable consistency value. A consensus index is defined to measure the consensus level in the group decision model. For HPMPRs with an unacceptable consensus, an optimization method is designed to assist DMs in achieving a predefined consensus level. The proposed model can be used to address GDM (group decision-making) problems with HPMPRs. The optimization methods help individual HPMPRs achieve a predefined consistency level and consensus level with fewer interactions with the DMs. As a consequence, our model is time-saving, efficient, and convenient for practical applications. The consistency-improving process ensures that the DMs are neither random nor illogical in their pairwise comparisons. The consensus-reaching procedure ensures that the adjusted HPMPRs not only achieve the predefined level of consensus but also maintain acceptable consistency. The proposed model also ensures that consistent HPMPRs do not change at any iteration; this property retains the DMs' original decision-making information to the greatest extent possible. We did not discuss the effects of different controlling parameters and distance functions in the developed model, and the model does not handle incomplete preference relations; we will address these problems in the future.

Author Contributions

This paper is the result of the joint work of all the authors in all aspects.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Theorem 1.
Immediate consequences of Equations (4) and (5). ☐
Proof of Theorem 2.
Definition 8 and Equations (18) and (19) directly imply this theorem. ☐
Proof of Theorem 3.
(1): Now, the deviation degree $D(H_1, H_2)$ in Equation (21) is simplified as
$$ D(H_1, H_2) = \frac{1}{dn^2}\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\sum_{s=1}^{d}\left(\bar{h}_{ij,1}^{\sigma(s)} \times \bar{h}_{ji,2}^{\sigma(s)} + \bar{h}_{ji,1}^{\sigma(s)} \times \bar{h}_{ij,2}^{\sigma(s)}\right) + \frac{1}{n} + \frac{2}{nd(n+1)}\sum_{i=1}^{n}\sum_{j \neq i}^{n}\sum_{s=1}^{d}\left|\bar{p}_{ij,1}^{\sigma(s)} - \bar{p}_{ij,2}^{\sigma(s)}\right|, $$
but
$$ \bar{h}_{ij,1}^{\sigma(s)} \times \bar{h}_{ji,2}^{\sigma(s)} + \bar{h}_{ji,1}^{\sigma(s)} \times \bar{h}_{ij,2}^{\sigma(s)} \geq 2\sqrt{\bar{h}_{ij,1}^{\sigma(s)} \times \bar{h}_{ji,2}^{\sigma(s)} \times \bar{h}_{ji,1}^{\sigma(s)} \times \bar{h}_{ij,2}^{\sigma(s)}} = 2, $$
therefore,
$$ \frac{1}{dn^2}\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\sum_{s=1}^{d}\left(\bar{h}_{ij,1}^{\sigma(s)} \times \bar{h}_{ji,2}^{\sigma(s)} + \bar{h}_{ji,1}^{\sigma(s)} \times \bar{h}_{ij,2}^{\sigma(s)}\right) + \frac{1}{n} \geq \frac{1}{dn^2} \times \frac{n(n-1)}{2} \times 2d + \frac{1}{n} = 1, $$
and
$$ \frac{2}{nd(n+1)}\sum_{i=1}^{n}\sum_{j \neq i}^{n}\sum_{s=1}^{d}\left|\bar{p}_{ij,1}^{\sigma(s)} - \bar{p}_{ij,2}^{\sigma(s)}\right| \geq 0. $$
Hence,
$$ D(H_1, H_2) \geq 1. $$
(2): Follows from (1) and Equation (21).
(3): Evident from Equation (21). ☐
Proof of Theorem 4.
Consider the function $f(x) = y^x + \frac{1}{y^x}$; differentiating with respect to $x$ gives
$$ f'(x) = \left(y^x - \frac{1}{y^x}\right)\ln y. $$
This derivative is positive for $x > 0$ whenever $0 < y < 1$ or $y > 1$; therefore, for these $y$-values, $f$ is an increasing function. Thus, for $0 < x < 1$,
$$ f(x) < f(1) = y + \frac{1}{y}, $$
and the inequality becomes an equality if and only if $y = 1$. ☐
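For instance, taking $y = 2$ (a quick numerical check of the statement, ours):
$$ f(0.5) = \sqrt{2} + \frac{1}{\sqrt{2}} \approx 2.1213 < f(1) = 2 + \frac{1}{2} = 2.5, $$
while $y = 1$ gives $f(x) = 2$ for every $x$.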
Proof of Theorem 5.
From Equations (18) and (26), it follows that
$$ \tilde{h}_{ij}^{(t+1)\sigma(s)} = \left(\prod_{k=1}^{n} h_{ik}^{(t+1)\sigma(s)} \times h_{jk}^{(t+1)\sigma(s)}\right)^{\frac{1}{n}} = \left(\prod_{k=1}^{n}\left(h_{ik}^{(t)\sigma(s)}\right)^{\alpha}\left(\tilde{h}_{ik}^{(t)\sigma(s)}\right)^{1-\alpha}\left(h_{jk}^{(t)\sigma(s)}\right)^{\alpha}\left(\tilde{h}_{jk}^{(t)\sigma(s)}\right)^{1-\alpha}\right)^{\frac{1}{n}} = \left(\left(\prod_{k=1}^{n} h_{ik}^{(t)\sigma(s)} \times h_{jk}^{(t)\sigma(s)}\right)^{\frac{1}{n}}\right)^{\alpha} \times \left(\left(\prod_{k=1}^{n} \tilde{h}_{ik}^{(t)\sigma(s)} \times \tilde{h}_{jk}^{(t)\sigma(s)}\right)^{\frac{1}{n}}\right)^{1-\alpha} = \tilde{h}_{ij}^{(t)\sigma(s)}. $$
Thus, since $\tilde{h}_{ij}^{(t)\sigma(s)} \times \tilde{h}_{ji}^{(t)\sigma(s)} = 1$,
$$ h_{ij}^{(t+1)\sigma(s)} \times \tilde{h}_{ji}^{(t+1)\sigma(s)} = \left(h_{ij}^{(t)\sigma(s)}\right)^{\alpha} \times \left(\tilde{h}_{ij}^{(t)\sigma(s)}\right)^{1-\alpha} \times \tilde{h}_{ji}^{(t)\sigma(s)} = \left(h_{ij}^{(t)\sigma(s)} \times \tilde{h}_{ji}^{(t)\sigma(s)}\right)^{\alpha}. $$
So,
$$ h_{ij}^{(t+1)\sigma(s)} \times \tilde{h}_{ji}^{(t+1)\sigma(s)} + h_{ji}^{(t+1)\sigma(s)} \times \tilde{h}_{ij}^{(t+1)\sigma(s)} = \left(h_{ij}^{(t)\sigma(s)} \times \tilde{h}_{ji}^{(t)\sigma(s)}\right)^{\alpha} + \left(h_{ji}^{(t)\sigma(s)} \times \tilde{h}_{ij}^{(t)\sigma(s)}\right)^{\alpha} \leq h_{ij}^{(t)\sigma(s)} \times \tilde{h}_{ji}^{(t)\sigma(s)} + h_{ji}^{(t)\sigma(s)} \times \tilde{h}_{ij}^{(t)\sigma(s)}, $$
where, by Lemma 1, equality holds only when $h_{ij}^{(t)\sigma(s)} \times \tilde{h}_{ji}^{(t)\sigma(s)} = 1$; but that would lead to $H^{(t)} = \tilde{H}^{(t)}$, which is not true. Hence,
$$ h_{ij}^{(t+1)\sigma(s)} \times \tilde{h}_{ji}^{(t+1)\sigma(s)} + h_{ji}^{(t+1)\sigma(s)} \times \tilde{h}_{ij}^{(t+1)\sigma(s)} < h_{ij}^{(t)\sigma(s)} \times \tilde{h}_{ji}^{(t)\sigma(s)} + h_{ji}^{(t)\sigma(s)} \times \tilde{h}_{ij}^{(t)\sigma(s)}, $$
and
$$ \sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\sum_{s=1}^{d}\left(h_{ij}^{(t+1)\sigma(s)} \times \tilde{h}_{ji}^{(t+1)\sigma(s)} + h_{ji}^{(t+1)\sigma(s)} \times \tilde{h}_{ij}^{(t+1)\sigma(s)}\right) < \sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\sum_{s=1}^{d}\left(h_{ij}^{(t)\sigma(s)} \times \tilde{h}_{ji}^{(t)\sigma(s)} + h_{ji}^{(t)\sigma(s)} \times \tilde{h}_{ij}^{(t)\sigma(s)}\right). $$
By Equation (26)
$$ p_{ij}^{(t)\sigma(s)} < p_{ij}^{(t+1)\sigma(s)} < \tilde{p}_{ij}^{(t)\sigma(s)}, $$
and with each iteration $p_{ij}^{(t)\sigma(s)}$ comes closer to $\tilde{p}_{ij}^{(t)\sigma(s)}$, depending on the value of $\alpha$. Table 2 shows that the smaller $\alpha$ is, the faster $p_{ij}^{(t)\sigma(s)}$ approaches $\tilde{p}_{ij}^{(t)\sigma(s)}$. As $\tilde{H}^{(t)}$ is consistent, each iteration makes $H^{(t)}$ more consistent; i.e., if $p_{ik}^{(t)\sigma(s)} \geq \frac{1}{2}$, $p_{kj}^{(t)\sigma(s)} \geq \frac{1}{2}$ and $p_{ij}^{(t)\sigma(s)} < \frac{1}{2}$, then by Equations (26) and (20),
$$ \left|p_{ij}^{(t+1)\sigma(s)} - \tilde{p}_{ij}^{(t+1)\sigma(s)}\right| < \left|p_{ij}^{(t)\sigma(s)} - \tilde{p}_{ij}^{(t)\sigma(s)}\right|. $$
Now, it is deduced from Equations (A2) and (A3) that
$$ CI\left(H^{(t+1)}\right) < CI\left(H^{(t)}\right); $$
thus, $\{CI(H^{(t)})\}$ is a monotonically decreasing sequence. Since $CI(H^{(t)}) \geq 1$, it is a convergent sequence; denote $\lim_{t \to \infty} H^{(t)} = H$; then
$$ CI(H) = \lim_{t \to \infty} CI\left(H^{(t)}\right) = \inf\left\{CI\left(H^{(t)}\right) : t = 1, 2, 3, \ldots\right\}. $$
If $CI(H) > CI_r$, then the implementation of Algorithm 2 results in an improved HPMPR with a smaller consistency index, which contradicts the fact that $CI(H)$ is the greatest lower bound. ☐
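A scalar sketch of this contraction (ours), assuming the blended update of Equation (26), $h^{(t+1)} = (h^{(t)})^{\alpha}(\tilde{h}^{(t)})^{1-\alpha}$ and $p^{(t+1)} = \alpha p^{(t)} + (1-\alpha)\tilde{p}^{(t)}$, with the consistent values held fixed, as the invariance shown above permits:

```python
alpha = 0.6
h, h_tilde = 5.0, 2.0      # one degree and its consistent value
p, p_tilde = 0.30, 0.45    # one probability and its consistent value

for t in range(1, 6):
    h = h ** alpha * h_tilde ** (1 - alpha)   # log-gap shrinks by factor alpha
    p = alpha * p + (1 - alpha) * p_tilde     # gap shrinks by factor alpha
    print(t, round(h, 4), round(p, 4))
# both entries converge geometrically, so CI(H^(t)) decreases to its infimum
```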
Proof of Theorem 6.
It follows from Equations (4), (5) and (28). ☐
Proof of Theorem 7.
Let $\tilde{H}_g^{(t)} = (\tilde{h}_{ij,g}^{(t)})_{n \times n}$ and $\tilde{H}_k^{(t)} = (\tilde{h}_{ij,k}^{(t)})_{n \times n}$ (k = 1, 2, …, m) be the sequences of HPMPRs generated by Algorithm 1 applied to $H_g$ and $H_k$ (k = 1, 2, …, m), respectively. By Equations (21) and (23),
$$ CI(H_g) = \frac{1}{dn^2}\sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{s=1}^{d}\left(h_{ij,g}^{\sigma(s)} \times \tilde{h}_{ji,g}^{\sigma(s)}\right) + \frac{2}{nd(n+1)}\sum_{i=1}^{n}\sum_{j \neq i}^{n}\sum_{s=1}^{d}\left|p_{ij,g}^{\sigma(s)} - \tilde{p}_{ij,g}^{\sigma(s)}\right|. $$
Now, we first deal with the multiplicative preference degrees:
$$ h_{ij,g}^{\sigma(s)} \times \tilde{h}_{ji,g}^{\sigma(s)} = \prod_{k=1}^{m}\left(\bar{h}_{ij,k}^{\sigma(s)}\right)^{\omega_k} \times \left(\prod_{l=1}^{n} h_{jl,g}^{\sigma(s)} \times h_{li,g}^{\sigma(s)}\right)^{\frac{1}{n}} = \prod_{k=1}^{m}\left(\bar{h}_{ij,k}^{\sigma(s)}\right)^{\omega_k} \times \left(\prod_{l=1}^{n}\prod_{k=1}^{m}\left(\bar{h}_{jl,k}^{\sigma(s)}\right)^{\omega_k} \times \prod_{k=1}^{m}\left(\bar{h}_{li,k}^{\sigma(s)}\right)^{\omega_k}\right)^{\frac{1}{n}} = \prod_{k=1}^{m}\left(\bar{h}_{ij,k}^{\sigma(s)} \times \left(\prod_{l=1}^{n}\bar{h}_{jl,k}^{\sigma(s)} \times \bar{h}_{li,k}^{\sigma(s)}\right)^{\frac{1}{n}}\right)^{\omega_k} = \prod_{k=1}^{m}\left(\bar{h}_{ij,k}^{\sigma(s)} \times \tilde{h}_{ji,k}^{\sigma(s)}\right)^{\omega_k}; $$
thus, by Lemma 2
$$ h_{ij,g}^{\sigma(s)} \times \tilde{h}_{ji,g}^{\sigma(s)} \leq \sum_{k=1}^{m}\omega_k\left(\bar{h}_{ij,k}^{\sigma(s)} \times \tilde{h}_{ji,k}^{\sigma(s)}\right); $$
therefore Equation (A4) implies
$$ \frac{1}{dn^2}\sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{s=1}^{d}\left(h_{ij,g}^{\sigma(s)} \times \tilde{h}_{ji,g}^{\sigma(s)}\right) \leq \max_{k}\left\{\frac{1}{dn^2}\sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{s=1}^{d}\left(\bar{h}_{ij,k}^{\sigma(s)} \times \tilde{h}_{ji,k}^{\sigma(s)}\right)\right\}. $$
Consider,
$$ \tilde{p}_{ij,g}^{(1)\sigma(s)} = \frac{1}{2n}\sum_{l=1}^{n}\left(p_{il,g}^{\sigma(s)} + p_{lj,g}^{\sigma(s)}\right) = \frac{1}{2n}\sum_{l=1}^{n}\left(\sum_{k=1}^{m}\omega_k\bar{p}_{il,k}^{\sigma(s)} + \sum_{k=1}^{m}\omega_k\bar{p}_{lj,k}^{\sigma(s)}\right) = \sum_{k=1}^{m}\omega_k\left(\frac{1}{2n}\sum_{l=1}^{n}\left(\bar{p}_{il,k}^{\sigma(s)} + \bar{p}_{lj,k}^{\sigma(s)}\right)\right) = \sum_{k=1}^{m}\omega_k\,\tilde{p}_{ij,k}^{(1)\sigma(s)}. $$
For the inductive step, suppose the following
$$ \tilde{p}_{ij,g}^{(r)\sigma(s)} = \sum_{k=1}^{m}\omega_k\,\tilde{p}_{ij,k}^{(r)\sigma(s)}, \quad (i, j = 1, 2, \ldots, n); $$
then
$$ \tilde{p}_{ij,g}^{(r+1)\sigma(s)} = \frac{1}{2n}\sum_{l=1}^{n}\left(\tilde{p}_{il,g}^{(r)\sigma(s)} + \tilde{p}_{lj,g}^{(r)\sigma(s)}\right), $$
by supposition,
$$ \tilde{p}_{ij,g}^{(r+1)\sigma(s)} = \frac{1}{2n}\sum_{l=1}^{n}\left(\sum_{k=1}^{m}\omega_k\tilde{p}_{il,k}^{(r)\sigma(s)} + \sum_{k=1}^{m}\omega_k\tilde{p}_{lj,k}^{(r)\sigma(s)}\right) = \sum_{k=1}^{m}\omega_k\left(\frac{1}{2n}\sum_{l=1}^{n}\left(\tilde{p}_{il,k}^{(r)\sigma(s)} + \tilde{p}_{lj,k}^{(r)\sigma(s)}\right)\right) = \sum_{k=1}^{m}\omega_k\,\tilde{p}_{ij,k}^{(r+1)\sigma(s)}. $$
Hence, by the principle of mathematical induction, for all $t \in \mathbb{N}$,
$$ \tilde{p}_{ij,g}^{(t)\sigma(s)} = \sum_{k=1}^{m}\omega_k\,\tilde{p}_{ij,k}^{(t)\sigma(s)}, \quad (i, j = 1, 2, \ldots, n). $$
Therefore,
$$ \left|p_{ij,g}^{\sigma(s)} - \tilde{p}_{ij,g}^{(t)\sigma(s)}\right| \leq \max_{k}\left\{\left|\bar{p}_{ij,k}^{\sigma(s)} - \tilde{p}_{ij,k}^{(t)\sigma(s)}\right|\right\}.
Equations (A5) and (A7) imply
$$ CI(H_g) \leq \max_{k}\left\{CI(H_k)\right\}. $$
 ☐
Proof of Theorem 8.
By Equations (4), (5) and (34),
$$ h_{ij,g}^{(t+1)\sigma(s)}\,\Big|\,p_{ij,g}^{(t+1)\sigma(s)} = \prod_{l=1}^{m}\left(h_{ij,l}^{(t+1)\sigma(s)}\right)^{\omega_l}\,\Big|\,\sum_{l=1}^{m}\omega_l\,p_{ij,l}^{(t+1)\sigma(s)}. $$
Consider,
$$ h_{ij,k}^{(t+1)\sigma(s)} \times h_{ji,g}^{(t+1)\sigma(s)} = \left(h_{ij,k}^{(t)\sigma(s)}\right)^{\beta} \times \left(h_{ij,g}^{(t)\sigma(s)}\right)^{1-\beta} \times \prod_{l=1}^{m}\left(h_{ji,l}^{(t+1)\sigma(s)}\right)^{\omega_l} = \left(h_{ij,k}^{(t)\sigma(s)}\right)^{\beta} \times \left(h_{ij,g}^{(t)\sigma(s)}\right)^{1-\beta} \times \prod_{l=1}^{m}\left(\left(h_{ji,l}^{(t)\sigma(s)}\right)^{\beta} \times \left(h_{ji,g}^{(t)\sigma(s)}\right)^{1-\beta}\right)^{\omega_l} = \left(h_{ij,k}^{(t)\sigma(s)}\right)^{\beta} \times \left(h_{ij,g}^{(t)\sigma(s)}\right)^{1-\beta} \times \left(\prod_{l=1}^{m}\left(h_{ji,l}^{(t)\sigma(s)}\right)^{\omega_l}\right)^{\beta} \times \left(h_{ji,g}^{(t)\sigma(s)}\right)^{1-\beta} = \left(h_{ij,k}^{(t)\sigma(s)} \times \prod_{l=1}^{m}\left(h_{ji,l}^{(t)\sigma(s)}\right)^{\omega_l}\right)^{\beta} \times \left(h_{ij,g}^{(t)\sigma(s)} \times h_{ji,g}^{(t)\sigma(s)}\right)^{1-\beta} = \left(h_{ij,k}^{(t)\sigma(s)} \times h_{ji,g}^{(t)\sigma(s)}\right)^{\beta} < h_{ij,k}^{(t)\sigma(s)} \times h_{ji,g}^{(t)\sigma(s)}. $$
Additionally,
$$ \left|p_{ij,k}^{(t+1)\sigma(s)} - p_{ij,g}^{(t+1)\sigma(s)}\right| = \left|\beta p_{ij,k}^{(t)\sigma(s)} + (1-\beta)p_{ij,g}^{(t)\sigma(s)} - \sum_{l=1}^{m}\omega_l p_{ij,l}^{(t+1)\sigma(s)}\right| = \left|\beta p_{ij,k}^{(t)\sigma(s)} + (1-\beta)p_{ij,g}^{(t)\sigma(s)} - \sum_{l=1}^{m}\omega_l\left(\beta p_{ij,l}^{(t)\sigma(s)} + (1-\beta)p_{ij,g}^{(t)\sigma(s)}\right)\right| = \left|\beta p_{ij,k}^{(t)\sigma(s)} - \beta\sum_{l=1}^{m}\omega_l p_{ij,l}^{(t)\sigma(s)}\right| = \beta\left|p_{ij,k}^{(t)\sigma(s)} - p_{ij,g}^{(t)\sigma(s)}\right| < \left|p_{ij,k}^{(t)\sigma(s)} - p_{ij,g}^{(t)\sigma(s)}\right|. $$
Now, by Equations (21) and (32)
$$ GCI\left(H_k^{(t+1)}\right) = \frac{1}{dn^2}\sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{s=1}^{d}\left(h_{ij,k}^{(t+1)\sigma(s)} \times h_{ji,g}^{(t+1)\sigma(s)}\right) + \frac{2}{nd(n+1)}\sum_{i=1}^{n}\sum_{j \neq i}^{n}\sum_{s=1}^{d}\left|p_{ij,k}^{(t+1)\sigma(s)} - p_{ij,g}^{(t+1)\sigma(s)}\right|; $$
by virtue of Equations (A8) and (A9), we conclude that
$$ GCI\left(H_k^{(t+1)}\right) < GCI\left(H_k^{(t)}\right), \quad (k = 1, 2, \ldots, m), $$
and with this, the proof is complete. ☐
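The computation above also shows that the group entry is a fixed point of the sweep ($\sum_l \omega_l p_{ij,l}^{(t+1)\sigma(s)} = p_{ij,g}^{(t)\sigma(s)}$), while every individual gap contracts by exactly $\beta$; a scalar sketch (ours) of that behaviour:

```python
beta = 0.7
p_k, p_g = 0.62, 0.40    # individual vs. group probability (p_g is a fixed point)

for t in range(1, 5):
    p_k = beta * p_k + (1 - beta) * p_g    # Equation (34), probability part
    print(t, round(abs(p_k - p_g), 4))     # 0.154, 0.1078, 0.0755, 0.0528
# the gap is multiplied by beta each sweep, so GCI(H_k^(t)) strictly decreases
```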
Proof of Theorem 9.
It follows from Theorems 5 and 7 and Equation (35). ☐

References

1. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353.
2. Bellman, R.E.; Zadeh, L.A. Decision-making in a fuzzy environment. Manag. Sci. 1970, 17, 141–164.
3. Faizi, S.; Rashid, T.; Sałabun, W.; Zafar, S.; Wątróbski, J. Decision Making with Uncertainty Using Hesitant Fuzzy Sets. Int. J. Fuzzy Syst. 2018, 20, 99–103.
4. Faizi, S.; Rashid, W.S.T.; Wątróbski, J.; Zafar, S. Group Decision-Making for Hesitant Fuzzy Sets Based on Characteristic Objects Method. Symmetry 2017, 9, 136.
5. Lu, N.; Liang, L. Correlation Coefficients of Extended Hesitant Fuzzy Sets and Their Applications to Decision Making. Symmetry 2017, 9, 47.
6. Sałabun, W.; Piegat, A. Comparative analysis of MCDM methods for the assessment of mortality in patients with acute coronary syndrome. Artif. Intell. Rev. 2017, 48, 557–571.
7. Torra, V. Hesitant fuzzy sets. Int. J. Intell. Syst. 2010, 25, 529–539.
8. Beg, I.; Rashid, T. Ideal solutions for hesitant fuzzy soft sets. J. Intell. Fuzzy Syst. 2016, 30, 143–150.
9. Beg, I.; Rashid, T. Hesitant 2-tuple linguistic information in multiple attributes group decision-making. J. Intell. Fuzzy Syst. 2016, 30, 109–116.
10. Chen, N.; Xu, Z.S.; Xia, M.M. Interval-valued hesitant preference relations and their applications to group decision-making. Knowl. Based Syst. 2013, 37, 528–540.
11. Qian, G.; Wang, H.; Feng, X. Generalized hesitant fuzzy sets and their application in decision support system. Knowl. Based Syst. 2013, 37, 357–365.
12. Rashid, T.; Beg, I. Convex hesitant fuzzy sets. J. Intell. Fuzzy Syst. 2016, 30, 2791–2796.
13. Xia, M.; Xu, Z.S. Hesitant fuzzy information aggregation in decision-making. Int. J. Approx. Reason. 2011, 52, 395–407.
14. Zhu, B.; Xu, Z.; Xia, M. Dual hesitant fuzzy sets. J. Appl. Math. 2012, 2012, 1–13.
15. Xu, Z.; Zhou, W. Consensus building with a group of decision makers under the probabilistic fuzzy environment. Fuzzy Optim. Decis. Mak. 2016.
16. Zhai, Y.; Xu, Z.; Liao, H. Probabilistic linguistic vector-term set and its application in group decision-making with multi-granular linguistic information. Appl. Soft Comput. 2016.
17. Zhang, S.; Xu, Z.; He, Y. Operations and Integrations of Probabilistic Hesitant Fuzzy Information in Decision Making. Inf. Fusion 2017.
18. Pang, Q.; Wang, H.; Xu, Z. Probabilistic linguistic term sets in multi-attribute group decision-making. Inf. Sci. 2016, 369, 128–143.
19. Saaty, T.L. The Analytical Hierarchy Process; McGraw-Hill: New York, NY, USA, 1980.
20. Saaty, T.L.; Vargas, L.G. Uncertainty and rank order in the analytic hierarchy process. Eur. J. Oper. Res. 1987, 32, 107–117.
21. Xia, M.M.; Xu, Z.S. Managing hesitant information in GDM problems under fuzzy and multiplicative preference relations. Int. J. Uncertain. Fuzziness Knowl. Based Syst. 2013, 21, 865–897.
22. Wu, Z.B.; Xu, J.P. A consistency and consensus based decision support model for group decision-making with multiplicative preference relations. Decis. Support Syst. 2012, 52, 757–767.
23. Orlovsky, S.A. Decision-making with a fuzzy preference relation. Fuzzy Sets Syst. 1978, 1, 155–167.
24. Chiclana, F.; Herrera, F.; Herrera-Viedma, E. Integrating multiplicative preference relations in a multipurpose decision-making model based on fuzzy preference relations. Fuzzy Sets Syst. 2001, 122, 277–291.
25. Herrera-Viedma, E.; Alonso, S.; Chiclana, F.; Herrera, F. A consensus model for group decision-making with incomplete fuzzy preference relations. IEEE Trans. Fuzzy Syst. 2007, 15, 863–877.
26. Xu, Z.S.; Chen, J. Group decision-making procedure based on incomplete reciprocal relations. Soft Comput. 2008, 12, 515–521.
27. Dong, Y.; Xu, Y.; Li, H. On consistency measures of linguistic preference relations. Eur. J. Oper. Res. 2008, 189, 430–444.
28. Herrera, F. A sequential selection process in group decision making with linguistic assessment. Inf. Sci. 1995, 85, 223–239.
29. Herrera, F.; Herrera-Viedma, E. Linguistic decision analysis: steps for solving decision problems under linguistic information. Fuzzy Sets Syst. 2000, 115, 67–82.
30. Xia, M.M.; Xu, Z.S.; Liao, H.C. Preference relations based on intuitionistic multiplicative information. IEEE Trans. Fuzzy Syst. 2013, 21, 113–133.
31. Xu, Z.S. Priority weight intervals derived from intuitionistic multiplicative preference relations. IEEE Trans. Fuzzy Syst. 2013, 21, 642–654.
32. Xu, Z.S. On compatibility of interval fuzzy preference matrices. Fuzzy Optim. Decis. Mak. 2004, 3, 217–225.
33. Xu, Z.S. Intuitionistic preference relations and their application in group decision-making. Inf. Sci. 2007, 177, 2363–2379.
34. Xu, Z.S. Generalized fuzzy consistency matrix and its priority method. J. PLA Univ. Sci. Technol. 2000, 1, 97–99.
35. Zhang, Z.; Wang, C.; Tian, X. A decision support model for group decision-making with hesitant fuzzy preference relations. Knowl. Based Syst. 2015, 86, 77–101.
36. Zhu, B.; Xu, Z.S. Regression methods for hesitant fuzzy preference relations. Technol. Econ. Dev. Econ. 2013, 19, 214–227.
37. Zhu, B.; Xu, Z.; Xu, J.P. Deriving a ranking from hesitant fuzzy preference relations under Group Decision Making. IEEE Trans. Cybern. 2014, 44, 1328–1337.
38. Zhang, Z.; Wang, C. A decision support model for group decision making with hesitant multiplicative preference relations. Inf. Sci. 2014, 282, 136–166.
39. Zhang, Z.; Wu, C. Deriving the priority weights from hesitant multiplicative preference relations in group decision-making. Appl. Soft Comput. 2014, 25, 107–117.
40. Zhou, W.; Xu, Z.S. Probability calculation and element optimization of probabilistic hesitant fuzzy preference relations based on expected consistency. IEEE Trans. Fuzzy Syst. 2017.
41. Herrera, F.; Herrera-Viedma, E.; Verdegay, J.L. A model of consensus in group decision-making under linguistic assessments. Fuzzy Sets Syst. 1996, 78, 73–87.
42. Herrera, F.; Herrera-Viedma, E.; Verdegay, J.L. Direct approach processes in group decision-making using linguistic OWA operators. Fuzzy Sets Syst. 1996, 79, 175–190.
43. Kacprzyk, J. Group decision-making with a fuzzy linguistic majority. Fuzzy Sets Syst. 1986, 18, 105–118.
44. Tanino, T. Fuzzy preference orderings in group decision-making. Fuzzy Sets Syst. 1984, 12, 117–131.
45. Herrera-Viedma, E.; Chiclana, F.; Herrera, F.; Alonso, S. Group decision-making model with incomplete fuzzy preference relations based on additive consistency. IEEE Trans. Syst. Man Cybern. Part B 2007, 37, 176–189.
46. Herrera-Viedma, E.; Herrera, F.; Chiclana, F.; Luque, M. Some issues on consistency of fuzzy preference relations. Eur. J. Oper. Res. 2004, 154, 98–109.
47. Xia, M.M.; Xu, Z.S.; Chen, J. Algorithms for improving consistency or consensus of reciprocal [0, 1]-valued preference relations. Fuzzy Sets Syst. 2013, 216, 108–133.
48. Dong, Y.; Zhang, G.; Hong, W.-H.; Xu, Y. Consensus models for AHP group decision-making under row geometric mean prioritization method. Decis. Support Syst. 2010, 49, 281–289.
49. Herrera-Viedma, E.; Herrera, F.; Chiclana, F.; Luque, M. A consensus model for multiperson decision-making with different preference structures. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2002, 34, 394–402.
50. Xu, Z.S.; Xia, M.M. Distance and similarity measures for hesitant fuzzy sets. Inf. Sci. 2011, 181, 2128–2138.
51. Bossuyt, P. A Comparison of Probabilistic Unfolding Theories for Paired Comparisons Data; Springer: Berlin, Germany, 1990.
52. Xu, Z.S. On consistency of the weighted geometric mean complex judgment matrix in AHP. Eur. J. Oper. Res. 2000, 126, 683–687.
Figure 1. Area graphs of multiplicative preference degrees for H_1.
Figure 2. Area graphs of probabilities for H_1.
Figure 3. Area graphs of score values for H_1.
Figure 4. Area graphs of multiplicative preference degrees for H_2.
Figure 5. Area graphs of probabilities for H_2.
Figure 6. Area graphs of score values for H_2.
Figure 7. Flowchart of the proposed decision model.
Figure 8. Area graphs of score values of H_1.
Figure 9. Area graphs of score values of H_2.
Figure 10. Area graphs of score values of H_3.
Figure 11. Area graphs of score values of H_4.
Table 1. Average value of iterations in Algorithm 1.

n     d     ζ = 0     ζ = 1
4     4     1.0380    1.036
      3     1.19      1.201
      2     2.228     2.222
5     5     1.002     1.004
      4     1.022     1.024
      3     1.188     1.162
6     6     1.002     1.002
      5     1.007     1.004
      4     1.025     1.013
7     7     1.001     1
      6     1         1.001
      5     1.003     1.001
8     8     1         1
      7     1         1
      6     1         1
9     9     1         1
      8     1         1
      7     1         1
10    10    1         1
      9     1         1
      8     1         1
Table 2. Average iterations values in Algorithm 2.

n     d     CI_r     α = 0.1   α = 0.3   α = 0.6   α = 0.8
4     4     1.05     1         1.237     2.53      5.177
            1.1      0.989     0.991     1.683     3.125
            1.15     0.931     0.945     1.164     2.044
      3     1.05     1.003     1.297     2.598     5.29
            1.1      0.99      0.993     1.687     3.306
            1.15     0.941     0.935     1.246     2.241
      2     1.05     0.996     1.344     2.597     5.343
            1.1      0.978     0.979     1.734     3.309
            1.15     0.922     0.907     1.198     2.241
5     5     1.05     1         1.333     2.786     5.625
            1.1      0.999     1         1.924     3.674
            1.15     0.998     0.999     1.385     2.573
      4     1.05     1         1.452     2.838     5.706
            1.1      1         1         1.93      3.789
            1.15     0.998     0.999     1.439     2.698
      3     1.05     1         1.546     2.86      5.847
            1.1      1         1.002     1.953     3.843
            1.15     0.991     0.995     1.502     2.81
6     6     1.05     1         1.456     2.928     5.805
            1.1      1         1         1.995     3.874
            1.15     1         1         1.563     2.827
      5     1.05     1         1.516     2.954     5.908
            1.1      1         1         1.997     4.012
            1.15     1         1         1.605     2.95
      4     1.05     1         1.632     2.962     6.053
            1.1      1         1         2.022     4.095
            1.15     1         1         1.692     3.002
7     7     1.05     1         1.524     2.987     5.889
            1.1      1         1         2.01      4.055
            1.15     1         1         1.708     3.016
      6     1.05     1         1.63      2.985     6.015
            1.1      1         1         2.007     4.095
            1.15     1         1         1.745     3.095
      4     1.05     1         1.83      2.999     6.275
            1.1      1         1         2.053     4.31
            1.15     1         1         1.862     3.249
8     8     1.05     1         1.619     2.996     5.935
            1.1      1         1         2.002     4.111
            1.15     1         1         1.806     3.09
      7     1.05     1         1.725     2.996     6.032
            1.1      1         1         2.007     4.168
            1.15     1         1         1.858     3.168
      6     1.05     1         1.788     2.998     6.13
            1.1      1         1         2.017     4.239
            1.15     1         1         1.897     3.223
9     9     1.05     1         1.648     2.999     5.991
            1.1      1         1         2.001     4.128
            1.15     1         1         1.885     3.16
      8     1.05     1         1.749     2.999     6.038
            1.1      1         1         2.006     4.199
            1.15     1         1         1.919     3.204
      7     1.05     1         1.845     3         6.116
            1.1      1         1         2.005     4.266
            1.15     1         1         1.946     3.257
10    10    1.05     1         1.682     3         6.007
            1.1      1         1         2         4.155
            1.15     1         1         1.951     3.179
      9     1.05     1         1.796     3         6.035
            1.1      1         1         2.003     4.192
            1.15     1         1         1.964     3.232
      8     1.05     1         1.882     3         6.1
            1.1      1         1         2.003     4.272
            1.15     1         1         1.969     3.269
Table 3. Average values of iterations in Algorithm 3.

n     m     d     GCI_r    β = 0.2   β = 0.4   β = 0.7   β = 0.9
4     4     4     1.01     1.967     2.715     5.969     18.828
                  1.05     1         1.271     2.747     8.101
                  1.1      0.97      0.998     1.692     4.61
      3     3     1.01     1.878     2.574     5.75      18.261
                  1.05     0.999     1.296     2.773     8.116
                  1.1      0.938     0.967     1.589     4.311
      2     2     1.01     1.759     2.525     5.756     18.47
                  1.05     0.884     0.927     1.552     4.394
                  1.1      0.793     0.771     1.248     3.292
5     4     5     1.01     1.892     2.393     5.471     17.564
                  1.05     1         1.165     2.616     7.644
                  1.1      0.99      0.998     1.714     4.569
      3     4     1.01     1.896     2.512     5.652     17.993
                  1.05     1         1.611     3.203     9.615
                  1.1      0.89      0.868     1.235     2.911
      3     3     1.01     1.964     2.759     6.044     19.299
                  1.05     0.999     1.057     2.278     6.513
                  1.1      0.86      0.871     1.133     2.655
6     5     6     1.01     1.946     2.342     5.529     17.46
                  1.05     1         1.121     2.563     7.507
                  1.1      0.995     0.994     1.5       3.896
      5     5     1.01     1.932     2.457     5.723     17.95
                  1.05     1         1.213     2.642     7.891
                  1.1      0.999     0.999     1.523     3.994
      4     4     1.01     1.959     2.548     5.897     18.231
                  1.05     1.01      1.254     2.742     7.982
                  1.1      0.891     1.212     2.745     4.121
7     7     7     1.01     1.891     2.21      5.123     16.213
                  1.05     0.997     1.012     2.213     7.211
                  1.1      0.995     0.985     1.213     3.456
      7     6     1.01     1.893     2.225     5.54      16.523
                  1.05     0.995     1.12      2.241     7.543
                  1.1      0.994     0.994     1.223     3.672
      8     4     1.01     1.674     2.123     5.112     16.254
                  1.05     0.992     1.111     2.213     7.654
                  1.1      0.992     0.991     1.211     3.254
8     9     8     1.01     1.259     2.004     4.951     15.319
                  1.05     0.991     1.101     2.121     7.545
                  1.1      0.991     0.989     1.221     3.224
      8     7     1.01     1.261     2.112     4.998     15.119
                  1.05     0.991     1.104     2.132     7.614
                  1.1      0.992     0.992     1.225     3.514
      8     6     1.01     1.271     2.151     4.997     15.211
                  1.05     0.993     1.112     2.135     7.664
                  1.1      0.994     0.995     1.231     3.612
9     10    9     1.01     1.101     1.121     4.121     10.123
                  1.05     0.998     1         1.857     6.345
                  1.1      0.992     0.995     1.235     2.986
      7     8     1.01     1.211     1.225     4.234     10.512
                  1.05     0.999     1         1.978     6.546
                  1.1      0.999     0.999     1.435     3.102
      9     7     1.01     1.112     1.235     4.225     10.562
                  1.05     0.999     0.999     1.898     6.658
                  1.1      0.999     1         1.452     3.21
10    9     10    1.01     1         1         2.1       7.152
                  1.05     1         1         1.21      4.231
                  1.1      1         1         1.102     2.147
      8     9     1.01     1         1         2.211     7.236
                  1.05     1         1         1.223     4.542
                  1.1      1         1         1.113     2.231
      8     8     1.01     1         1         2.321     7.325
                  1.05     1         1         1.341     4.653
                  1.1      1         1         1.231     2.251
Table 4. Effects of probability on decision-making.

HPMPRs                              Ranking of Alternatives    Best Commodity
H_1, H_2, H_3, H_4                  x_4 > x_2 > x_1 > x_3      Copper
H_1*, H_2*, H_3*, H_4*              x_1 > x_2 > x_4 > x_3      Oil
H_1**, H_2**, H_3**, H_4**          x_2 > x_1 > x_4 > x_3      Gold
