Article

Comparative Analysis of the Existence and Uniqueness Conditions of Parameter Estimation in Paired Comparison Models

by
László Gyarmati
*,
Éva Orbán-Mihálykó
and
Csaba Mihálykó
Department of Mathematics, University of Pannonia, Egyetem u. 10., H-8200 Veszprém, Hungary
*
Author to whom correspondence should be addressed.
Axioms 2023, 12(6), 575; https://doi.org/10.3390/axioms12060575
Submission received: 12 May 2023 / Revised: 3 June 2023 / Accepted: 6 June 2023 / Published: 9 June 2023

Abstract

In this paper, paired comparison models with a stochastic background are investigated. We focus on models that allow three options for choice, with the parameters estimated by the maximum likelihood method. The existence and uniqueness of the estimator are key issues of the evaluation. In the case of two options, a necessary and sufficient condition was given by Ford for the Bradley–Terry model. We generalize this statement to the set of distributions with strictly log-concave density functions. Although in the case of three options the necessary and sufficient condition is not known, two different sufficient conditions have been formulated in the literature. In this paper, we generalize them; moreover, we compare these conditions. Their capacities to indicate the existence of the maximum were analyzed using a large number of computer simulations. These simulations support that the new condition indicates the existence of the maximum much more frequently than the previously known ones.

1. Introduction

Comparisons in pairs are frequently used in ranking and rating problems. They are mainly applied when scaling is very uncertain, but comparing the objects to one another can guarantee more reliable statistical data. The area of possible applications is extremely large; some examples are the following: education [1,2], sports [3,4,5], information retrieval [6], energy supply [7], the financial sector [8], management [9], and the food industry [10].
The most popular method is the Analytic Hierarchy Process (AHP), elaborated by Saaty [11,12] and developed further by others; see, for example, the detailed literature review in [13]. The method has many advantages: more than two options, several methods for evaluation, the opportunity of incomplete comparisons, a simple condition for the uniqueness of the evaluation [14], the possibility of multi-level decisions [15], and the concept of consistency [16]. Nevertheless, although [17] proposed a very flexible and applicable composite indicator with an inferential approach consisting of multiple tests based on non-parametric techniques in a stochastic framework, in the case of AHP, due to the lack of a stochastic background, confidence intervals for the parameters and hypothesis tests concerning the parameters are not available.
Fundamentally different models of paired comparisons are the Thurstone-motivated stochastic models. The basic concept is the idea of latent random variables, presented in [18]. Thurstone assumed Gauss-distributed latent random variables and allowed two options in decisions, “worse” and “better”. The method was later modified: the Gauss distribution was replaced by the logistic distribution in [19], and the resulting model is called the Bradley–Terry model (BTM). One of its main advantages is its simple mathematical formulae. Thurstone applied the least squares method for parameter estimation, whereas BTM applies maximum likelihood estimation [20], and the uncomplicated formulae allow quick numerical methods for solving the optimization problems. The existence and uniqueness of the optimizer is a key issue in the case of ML estimation; the necessary and sufficient condition for it is proven in [21].
Ref. [22] handles the Bradley–Terry model, also allowing options such as a nonlinear logistic regression model, and finds the optimal regions of design. A detailed survey for paired comparison, including regression models, is contained in [23].
The model was generalized to three options (“worse”, “equal”, and “better”) in [24] for the Gauss distribution and in [25] for the logistic distribution. The latter paper applied maximum likelihood parameter estimation. Davidson made further modifications to the model concerning ties in [26]. For more than three options, generalizations can be found in [27] for the Bradley–Terry model and in [28] for the Gauss distribution. In [29], it was proven that the models require the same conditions to be able to evaluate the data uniquely for a broad set of cumulative distribution functions of the latent random variables: the strictly log-concave property of the probability density function is the crucial point of the uniqueness, while the assurance of the existence is hidden in the data structure. We mention that the Gauss and logistic distributions are included in the set of distributions having a strictly log-concave probability density function. Note that, due to the probabilistic background, the Thurstone-motivated models offer the opportunity of building in home-field or first-mover advantage [30], testing hypotheses [31], and making forecasts [32]; therefore, they are worth investigating.
In [33], the author analyzed the structure of the comparisons, allowing both two and three options in choice. The author emphasized that not only the structure of the graph made from the compared pairs but also the results of the comparisons affect the existence of the MLE. He made some data perturbations in the cases where comparisons existed but some results did not occur. By these perturbations, the zero data values became positive, and these positive values guaranteed the strongly connected property of the directed graph constructed from the wins. However, these perturbations modified the data structure; therefore, it would be better to avoid them.
In [34], the authors investigated BTM with two options and provided estimations for the probability of the existence of MLE. The authors turned to the condition of Ford to check whether MLE exists uniquely or not. As the condition of Ford is a necessary and sufficient condition, it indicates explicitly whether the MLE works or not. However, in the case of other distributions and/or more than two options, these investigations could not be performed due to the lack of a necessary and sufficient condition for the existence and uniqueness of MLE.
To continue this line of research, it would be conducive to have a (necessary and) sufficient condition for the existence and uniqueness. To the best of our knowledge, there is no such theorem in the research literature in the case of three options, and only two sufficient conditions are known. In this paper, we compare the known conditions, formulate their generalization, and prove it. Then, we compare the applicability of the different conditions from the following point of view: how often and for what type of parameters are they able to indicate the existence and uniqueness of the MLE. We performed a large number of computer simulation repetitions for many parameter settings and use them to answer these questions.
The paper is organized as follows: In Section 2, the investigated model is described. In Section 3, we present new conditions under which the existence and uniqueness are fulfilled. The proof can be found in Appendix A. Section 4 contains two real-life applications. In Section 5, simulation results concerning the applicability are presented. Finally, a short summary is given.

2. The Investigated Model

Let n denote the number of different objects to evaluate and let the numbers $1, 2, \ldots, n$ denote the objects themselves. We would like to evaluate (rank and rate) them with the help of the opinions of some persons called observers. We assume that behind every object there is a latent random variable $\xi_i$, $i = 1, 2, \ldots, n$. Let the number of options in a choice be $s = 3$, namely, “worse”, “equal”, and “better”, denoted by $C_1$, $C_2$, and $C_3$, respectively. The set of real numbers $\mathbb{R}$ is the union of three pairwise disjoint intervals, and each option in judgment corresponds to one of these intervals. If the judgment comparing i and j is the option $C_k$, then the difference $\xi_i - \xi_j$ lies in the interval $I_k$, $k = 1, 2, 3$. The intervals are determined by their endpoints $-\infty$, $-d$, $d$, and $\infty$, so $I_1 = (-\infty, -d)$, $I_2 = [-d, d]$, and $I_3 = (d, \infty)$. The above intervals, together with the corresponding options, are presented in Figure 1.
We can write
$\xi_i - \xi_j = m_i - m_j + \eta_{i,j}, \quad i = 1, \ldots, n, \ j = 1, \ldots, n, \ i \neq j,$ (1)
where:
$E(\xi_i) = m_i, \quad i = 1, \ldots, n,$ (2)
are the strengths of the objects, and $\eta_{i,j}$ are identically distributed random variables with expectation 0. The ranking of the expectations determines the ranking of the objects, and the differences in their values give information concerning the differences in strength. We want to estimate the expectations and the value of the boundary of “equal” (d) on the basis of the data. For that, we use maximum likelihood estimation.
The probabilities of the judgments $C_k$ can be determined from the distribution of $\eta_{i,j}$ as follows [35]:
$P(\xi_i - \xi_j \in I_1) = P(\xi_i - \xi_j < -d) = F(-d - (m_i - m_j)),$ (3)
$P(\xi_i - \xi_j \in I_2) = P(-d \leq \xi_i - \xi_j \leq d) = F(d - (m_i - m_j)) - F(-d - (m_i - m_j)),$ (4)
$P(\xi_i - \xi_j \in I_3) = P(d < \xi_i - \xi_j) = 1 - F(d - (m_i - m_j)),$ (5)
where F is the (common) cumulative distribution function (c.d.f.) of $\eta_{i,j}$.
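For concreteness, the three option probabilities can be computed as in the following minimal sketch. It assumes the logistic c.d.f. used later in the Bradley–Terry-type model (the standard normal c.d.f. could be substituted); the function names are illustrative and not part of the original model description.

```python
import math

def logistic_cdf(x: float) -> float:
    # Equals 1 / (1 + exp(-x)); the tanh form avoids overflow for large |x|.
    return 0.5 * (1.0 + math.tanh(x / 2.0))

def option_probabilities(m_i: float, m_j: float, d: float, F=logistic_cdf):
    """Return (P_worse, P_equal, P_better) for objects i and j, following (3)-(5)."""
    delta = m_i - m_j
    p_worse = F(-d - delta)                  # (3): xi_i - xi_j < -d
    p_equal = F(d - delta) - F(-d - delta)   # (4): -d <= xi_i - xi_j <= d
    p_better = 1.0 - F(d - delta)            # (5): d < xi_i - xi_j
    return p_worse, p_equal, p_better

# Example: equally strong objects with tie threshold d = 0.5.
print(option_probabilities(0.0, 0.0, 0.5))
```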
Let us denote the number of observers by r. The judgment produced by the uth observer ($u = 1, 2, \ldots, r$) concerning the comparison of i and j is encoded in the elements of a four-dimensional matrix X. The third index corresponds to the options in the choices, with k = 1, 2, 3 standing for the judgments “worse”, “equal”, and “better”, respectively. The matrix X has four dimensions, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, n$, $k = 1, 2, 3$, $u = 1, 2, \ldots, r$, and its elements are:
$X_{i,j,k,u} = \begin{cases} 1, & \text{if the judgment of the } u\text{th observer, comparing } i \text{ and } j, \text{ is } C_k, \\ 0, & \text{otherwise.} \end{cases}$
Let $X_{i,i,k,u} = 0$. Of course, due to symmetry, $X_{i,j,k,u} = X_{j,i,4-k,u}$. This expresses that if the ith object is “better” than the jth object, then the jth object is “worse” than the ith object, according to the judgment of the uth respondent.
Let $A_{i,j,k} = \sum_{u=1}^{r} X_{i,j,k,u}$ be the number of judgments $C_k$ when objects i and j are compared, and let A denote the three-dimensional matrix containing the elements $A_{i,j,k}$. Of course, $A_{i,j,k} = A_{j,i,4-k}$.
The likelihood function is the probability of the sample in the function of the parameters. If the judgments are independent, the likelihood function is expressed as follows:
$L(X \mid m_1, m_2, \ldots, m_n, d) = \prod_{k=1}^{3} \prod_{i=1}^{n-1} \prod_{j=i+1}^{n} P(\xi_i - \xi_j \in I_k)^{A_{i,j,k}},$ (6)
which has to be maximized in $\underline{m} = (m_1, \ldots, m_n)$ and $0 < d$.
We can notice that the function (6) depends only on the differences of the coordinates of the expectations; therefore, one of the coordinates, for example $m_1$, can be fixed.
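As an illustration of how (6) is evaluated in practice, the following sketch computes its logarithm for a given aggregated data matrix A (0-based indices, with A[i][j][k], k = 0, 1, 2 standing for “worse”, “equal”, “better”) and a logistic F; the names and the 0-based indexing are our own conventions, not the paper's notation.

```python
import math

def log_likelihood(A, m, d):
    """log of (6) with a logistic F; A[i][j][k] counts option k for the pair (i, j), i < j."""
    F = lambda x: 0.5 * (1.0 + math.tanh(x / 2.0))   # logistic c.d.f.
    n = len(m)
    total = 0.0
    for i in range(n - 1):
        for j in range(i + 1, n):
            delta = m[i] - m[j]
            probs = (F(-d - delta),                   # "worse", Eq. (3)
                     F(d - delta) - F(-d - delta),    # "equal", Eq. (4)
                     1.0 - F(d - delta))              # "better", Eq. (5)
            for k in range(3):
                if A[i][j][k] > 0:
                    total += A[i][j][k] * math.log(probs[k])
    return total
```

Fixing $m_1 = 0$ then amounts to maximizing this function over $(m_2, \ldots, m_n)$ and $0 < d$ only.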

3. Conditions for the Existence and Uniqueness

In [21], the author presents a necessary and sufficient condition for the existence and uniqueness of the MLE if there are only two options for choice and F, the c.d.f. of $\eta_{i,j}$, is the logistic c.d.f. The condition is the following: for an arbitrary non-empty partition of the objects into S and $\overline{S}$, there exists at least one element of S that is “better” than an element of $\overline{S}$, and vice versa. In [26], the author states that this condition, supplemented with the condition that there is at least one tie (“equal”), is enough for having a unique maximizer in a modified Bradley–Terry model. The theorem assumes the logistic distribution, and its proof uses this special form; therefore, the proof is valid only for the investigated special model. Now, we prove it for a broad set of cumulative distribution functions. We require the following properties: F is a c.d.f. with $0 < F(x) < 1$, F is three times continuously differentiable, its probability density function f is symmetric, and the logarithm of f is a strictly concave function on $\mathbb{R}$. The Gauss and logistic distributions belong to this set, together with many others. Let us denote the set of these c.d.f.'s by $\mathcal{F}$.
First, we state the following generalization of Ford’s theorem:
Theorem 1. 
Let $F \in \mathcal{F}$, and suppose that there are only two options in the choice. Fix the value of the parameter $m_1 = 0$. The necessary and sufficient condition for the existence and uniqueness of the MLE is the following: for an arbitrary non-empty partition of the objects into S and $\overline{S}$, there exists at least one element of S that is “better” than an element of $\overline{S}$, and vice versa.
The proof of sufficiency relies on the argumentation of Theorem 4, omitting the variable d. The steps used are (ST3), (ST5), and (ST6) in Appendix A. In the last step, the strictly concave property of $\log L$ can be concluded from the theory of logarithmic concave measures [36]. The necessity is obvious: if there were a partition without a “better” judgment from one subset to the other, then each element of this subset would be “worse” than the elements of the complement, and the measure of “worse” could not be estimated. The likelihood function would be monotone increasing, and consequently, the maximum would not be attained.
Returning to the case of three options, we formulate the conditions of Davidson in the following:
DC 1. 
There exists an index pair $(i_1, j_1)$ for which $0 < A_{i_1, j_1, 2}$.
DC 2. 
For any non-empty partition of the objects into S and $\overline{S}$, there exist at least two index pairs $(i_2, j_2)$ and $(i_3, j_3)$, $i_2, i_3 \in S$, $j_2, j_3 \in \overline{S}$, for which $0 < A_{i_2, j_2, 3}$ and $0 < A_{i_3, j_3, 1}$.
We shall refer to them as the set of conditions DC. Condition DC 1 expresses that there is a judgment “equal”. Condition DC 2 coincides with the condition of Ford in [21] in the case of two options. It expresses that in both subsets there is at least one object that is “better” than an object in the complement.
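Checking DC 2 directly over all non-empty partitions is exponential in n; in practice one uses the standard equivalence that DC 2 (Ford's condition) holds exactly when the directed graph with an edge from i to j for every “better” judgment ($0 < A_{i,j,3}$) is strongly connected, since strong connectivity means that every non-empty proper subset has “better” judgments crossing the cut in both directions. The following is a minimal sketch of such a check, assuming a 0-based matrix A in which the symmetry $A_{i,j,k} = A_{j,i,4-k}$ of Section 2 has already been filled in; the function names are illustrative.

```python
from collections import deque

def dc_holds(A):
    """DC 1 and DC 2 for a 0-based matrix A[i][j][k], k = 0, 1, 2 <-> 'worse', 'equal', 'better'."""
    n = len(A)
    dc1 = any(A[i][j][1] > 0 for i in range(n) for j in range(n) if i != j)

    # DC 2 <=> the "better" digraph (edge i -> j when A[i][j][2] > 0) is strongly connected.
    fwd = [[] for _ in range(n)]
    rev = [[] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and A[i][j][2] > 0:
                fwd[i].append(j)
                rev[j].append(i)

    def reaches_all(adj):
        seen, queue = {0}, deque([0])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return len(seen) == n

    dc2 = reaches_all(fwd) and reaches_all(rev)
    return dc1 and dc2
```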
Theorem 2. 
Let $F \in \mathcal{F}$. If conditions DC 1 and DC 2 hold, then, fixing $m_1 = 0$, the likelihood function (6) attains its maximal value and its argument is unique.
Theorem 2 follows from a more general statement, Theorem 4, which will be proven in Appendix A.
Now, we turn to another set of conditions that guarantees the existence and uniqueness of MLE. These conditions will be abbreviated by the initial letters MC.
MC 1. 
There is at least one index pair $(i_1, j_1)$ for which $0 < A_{i_1, j_1, 2}$ holds.
MC 2. 
There is at least one index pair $(i_2, j_2)$ for which $0 < A_{i_2, j_2, 1}$ and $0 < A_{i_2, j_2, 3}$ hold.
Let us define the graph $G(M)$ as follows: the nodes are the objects to be compared. There is an edge between two nodes i and j if $0 < A_{i,j,2}$ or ($0 < A_{i,j,1}$ and $0 < A_{i,j,3}$) holds.
MC 3. 
Graph  G ( M )  is connected.
Theorem 3. 
(Ref. [29]) Let $F \in \mathcal{F}$. If conditions MC 1, MC 2, and MC 3 hold, then, after fixing $m_1 = 0$, the likelihood function (6) attains its maximal value and the argument of the maximum is unique.
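Conditions MC 1–MC 3 are also easy to verify mechanically: MC 3 only requires an ordinary (undirected) connectivity check on $G(M)$. A minimal sketch, under the same 0-based storage convention and illustrative naming as in the DC sketch above:

```python
from collections import deque

def mc_holds(A):
    """MC 1-MC 3 for a 0-based matrix A[i][j][k], k = 0, 1, 2 <-> 'worse', 'equal', 'better'."""
    n = len(A)
    mc1 = any(A[i][j][1] > 0 for i in range(n) for j in range(n) if i != j)
    mc2 = any(A[i][j][0] > 0 and A[i][j][2] > 0
              for i in range(n) for j in range(n) if i != j)

    # G(M): edge {i, j} if there is an "equal" judgment or "better" judgments in both directions.
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if A[i][j][1] > 0 or (A[i][j][0] > 0 and A[i][j][2] > 0):
                adj[i].append(j)
                adj[j].append(i)

    seen, queue = {0}, deque([0])          # BFS for MC 3: is G(M) connected?
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    mc3 = (len(seen) == n)
    return mc1 and mc2 and mc3
```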
To clarify the relationship between conditions DC 1 and DC 2 on the one hand and MC 1, MC 2, and MC 3 on the other, we present two examples. In Example 1, DC 1 and DC 2 are satisfied but MC 2 and MC 3 are not. In Example 2, DC 2 is not satisfied but MC 1, MC 2, and MC 3 are. These examples show that the sets of conditions DC and MC do not cover each other. Moreover, they indicate that the MLE may exist uniquely even if DC 1 and DC 2, or MC 1, MC 2, and MC 3, do not hold. Therefore, we can see that neither the conditions DC nor the conditions MC are necessary.
Example 1. 
Let n = 3 and $A_{1,2,2} = 1$, $A_{1,2,3} = 1$, $A_{2,3,3} = 1$, and $A_{1,3,1} = 1$ (see Figure 2). Now, both DC 1 and DC 2 hold, but MC 3 does not.
Example 2. 
Let n = 3 and $A_{1,2,1} = 1$, $A_{1,2,3} = 1$, and $A_{2,3,2} = 1$ (see Figure 3). Now, one can easily check that MC 1, MC 2, and MC 3 hold but DC 2 does not.
As a short explanation, the graph in Figure 2 represents the following comparison results. There are 4 comparisons among the objects “1”, “2”, and “3”. There is an opinion according to which “1” and “2” are “equal” (denoted by 1- - -2; $A_{1,2,2} = 1$); moreover, there is an opinion according to which “1” is better than “2” (denoted by 1->2; $A_{1,2,3} = 1$). Furthermore, there is an opinion according to which “2” is better than “3” (denoted by 2->3; $A_{2,3,3} = 1$); finally, there is an opinion according to which “3” is better than “1” (denoted by 3->1; $A_{1,3,1} = 1$). Similarly, the graph of Figure 3 visualizes the following comparison results among the objects “1”, “2”, and “3”: there is an opinion according to which “2” is “equal” to “3” (2- - -3; $A_{2,3,2} = 1$), and there is an opinion according to which “1” is better than “2”, and vice versa (1->2; 2->1; $A_{1,2,1} = A_{1,2,3} = 1$).
Theorems 2 and 3 can be generalized. Let us introduce the following set of conditions, denoted by SC:
SC 1. 
There is at least one index pair $(i_1, j_1)$ for which $0 < A_{i_1, j_1, 2}$ holds.
Let us introduce a graph belonging to the results of the comparisons as follows: let $DG(SC)$ be a directed graph, let the nodes be the objects, and let there be a directed edge from i to j if there is an opinion according to which i is “better” than j, that is, $0 < A_{i,j,3}$. Now, we can formulate the following conditions:
SC 2. 
There is a cycle in the directed graph  D G ( S C ) .
SC 3. 
For any non-empty partition of the objects into S and $\overline{S}$, there exist at least two (not necessarily different) index pairs $(i_2, j_2)$ and $(i_3, j_3)$, $i_2, i_3 \in S$, $j_2, j_3 \in \overline{S}$, for which
$0 < A_{i_2, j_2, 3}$ and $0 < A_{i_3, j_3, 1}$,
or there exists an index pair $(i_4, j_4)$, $i_4 \in S$ and $j_4 \in \overline{S}$, for which $0 < A_{i_4, j_4, 2}$.
It is easy to see that condition SC 2 is more general than condition MC 2, and condition SC 3 is more general than conditions DC 2 and MC 3. Condition SC 3 expresses that any subset and its complement are interconnected either by an opinion “better” from one to the other and vice versa, or by an opinion “equal”. Here, condition DC 2 is replaced by a more general one: besides the “better” opinions, the opinion “equal” is also an appropriate judgment for connection.
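SC 1 and SC 2 are straightforward to check (SC 2 is an ordinary cycle test on $DG(SC)$). SC 3 again quantifies over all partitions, but, as in the argument of Appendix A, it can be read as requiring directed paths in both directions between any two objects in the graph $G(SC)$ built from the “better” edges plus both orientations of every “equal” judgment, i.e., strong connectivity of $G(SC)$. The following sketch checks SC 1–SC 3 under this reading, with the same 0-based storage convention and illustrative names as before; applied to the data of Example 3 below (with the symmetry filled in), it returns True, while the DC 2 and MC 2 checks fail.

```python
from collections import deque

def sc_holds(A):
    """SC 1-SC 3 for a 0-based matrix A[i][j][k], k = 0, 1, 2 <-> 'worse', 'equal', 'better'."""
    n = len(A)
    sc1 = any(A[i][j][1] > 0 for i in range(n) for j in range(n) if i != j)

    # SC 2: a directed cycle in DG(SC); edge i -> j when i was judged "better" than j at least once.
    better = [(i, j) for i in range(n) for j in range(n) if i != j and A[i][j][2] > 0]
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in better:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(u for u in range(n) if indeg[u] == 0)
    removed = 0
    while queue:                            # Kahn's algorithm: leftover nodes <=> a cycle exists
        u = queue.popleft()
        removed += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    sc2 = removed < n

    # SC 3 (read as strong connectivity of G(SC)): "better" edges plus both directions of "equal".
    edges = better + [(i, j) for i in range(n) for j in range(n) if i != j and A[i][j][1] > 0]
    fwd = [[] for _ in range(n)]
    rev = [[] for _ in range(n)]
    for u, v in edges:
        fwd[u].append(v)
        rev[v].append(u)

    def reaches_all(g):
        seen, q = {0}, deque([0])
        while q:
            u = q.popleft()
            for v in g[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        return len(seen) == n

    sc3 = reaches_all(fwd) and reaches_all(rev)
    return sc1 and sc2 and sc3
```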
To analyze the relationships between the sets of conditions DC, MC, and SC, we can recognize that:
(A) DC 1, MC 1, and SC 1 coincide.
(B) If DC 2 holds, then so do SC 2 and SC 3.
(C) If MC 2 holds, so does SC 2.
(D) If MC 3 holds, so does SC 3.
These statements together show that conditions SC 1, SC 2, and SC 3 are a generalization of the conditions DC and MC. To show that SC is really a more general set of conditions, we present Example 3.
Example 3. 
Let n = 4, $A_{1,2,3} = 1$, $A_{2,3,3} = 1$, $A_{1,3,1} = 1$, and $A_{1,4,2} = 1$ (see Figure 4). In this case, neither condition DC 2 nor MC 2 holds, but SC 1, SC 2, and SC 3 do.
Now, we state the following theorem.
Theorem 4. 
Let $F \in \mathcal{F}$. If conditions SC 1, SC 2, and SC 3 hold, then, after fixing $m_1 = 0$, the likelihood function (6) attains its maximum value and its argument is unique.
The proof of Theorem 4 can be found in Appendix A.
We note that Theorem 2 is a straightforward consequence of Theorem 4.
Unfortunately, conditions SC 1, SC 2, and SC 3 are not necessary. One can prove that in the case of Example 4 there exists a unique maximizer of function (6), even though SC 2 does not hold.
Example 4. 
Let n = 3, $A_{1,2,3} = 1$, $A_{2,3,3} = 1$, and $A_{1,3,2} = 1$ (see Figure 5).

4. Illustrative Examples of the Applicability of Theorem 4

In this section, we present two real examples of the applicability of Theorem 4. The first is a very simple example; with its help, it is easy to follow the steps of the evaluation method, which are detailed at every point.
The second example is much more complex. It demonstrates that Theorem 4 is useful even in more complicated real-life cases, as neither the set of conditions DC nor the set of conditions MC covers these real-life problems.

4.1. Group Stage of FIFA World Cup 2022

During the FIFA World Cup 2022, the teams were first divided into separate groups. Group D contained the teams of Australia (AUS: “1”), Denmark (DEN: “2”), France (FRA: “3”), and Tunisia (TUN: “4”). There were six matches between the teams, and each team played every other team exactly once. The comparisons are the matches, and the result of a comparison can be the following: tie (“equal”), win (“better”), or defeat (“worse”). The results of the matches were the following: the match between Denmark and Tunisia was a tie, Australia beat Tunisia and Denmark, France beat Australia and Denmark, and Tunisia beat France. The data matrix A is: $A_{2,4,2} = A_{4,2,2} = 1$, $A_{1,4,3} = A_{4,1,1} = 1$, $A_{1,2,3} = A_{2,1,1} = 1$, $A_{3,1,3} = A_{1,3,1} = 1$, $A_{3,2,3} = A_{2,3,1} = 1$, $A_{4,3,3} = A_{3,4,1} = 1$. The data are available on the website [37]. The graph of the results of Group D can be seen in Figure 6.
Denote the expectations by $m_i$, $i = 1, 2, 3, 4$ (representing the strengths of the teams) and use the parameter d for the boundary of a tie. According to (4), the probability of a tie between Tunisia and Denmark is:
$F(d - (m_2 - m_4)) - F(-d - (m_2 - m_4)).$ (7)
According to (5), the probability that Australia beats Tunisia is:
$1 - F(d - (m_1 - m_4)).$ (8)
According to (3), the probability that France beats Australia, that is, Australia is defeated by France, is:
$F(-d - (m_1 - m_3)).$ (9)
One can express similarly the probabilities belonging to the other match results with the help of the parameters.
Aggregating them in the likelihood function, we obtain:
$L(X \mid m_1, m_2, m_3, m_4, d) = (F(d - (m_2 - m_4)) - F(-d - (m_2 - m_4))) \cdot (1 - F(d - (m_1 - m_4))) \cdot (1 - F(d - (m_1 - m_2))) \cdot F(-d - (m_1 - m_3)) \cdot F(-d - (m_2 - m_3)) \cdot F(-d - (m_3 - m_4)).$ (10)
If we use the Bradley–Terry model, we substitute:
$F(x) = \frac{1}{1 + \exp(-x)}.$ (11)
In the case of the Thurstone model, $F(x)$ is the standard normal (Gauss) c.d.f. The likelihood function (10) has to be maximized in $\underline{m} = (m_1, m_2, m_3, m_4)$ and $0 < d$. Remember that one parameter can be fixed.
The likelihood function (10) is a complicated function, and it cannot be optimized analytically. Moreover, it might happen that the function does not attain its maximal value, or that the argument of the maximal value is not unique. Let us ascertain whether this is the case or not.
1. The set of conditions DC does not hold, as DC 2 is not satisfied; consider S = {DEN} and $\overline{S}$ = {AUS, FRA, TUN}.
2. The set of conditions MC does not hold. MC 2 is not satisfied, because only one match was played between any two teams. One can easily check that MC 3 does not hold either.
3. The set of conditions SC is fulfilled, as:
   SC 1 holds; there is a tie between TUN and DEN.
   SC 2 holds; see the directed cycle FRA, AUS, TUN, FRA in Figure 6.
   SC 3 is satisfied; checking this is left to the reader.
The previously known theorems (Theorems 2 and 3) could not guarantee the existence and uniqueness of the MLE, but Theorem 4 can. The existence and uniqueness of the MLE means that the results of Group D can be evaluated using Thurstone-motivated paired comparison methods.
If we perform the evaluation applying BTM, we obtain the results shown in Table 1. The official evaluation method is the usual point-based method.
We can see that, although the official points of France and Australia are equal, the estimated strength of France is larger than that of Australia.
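As a sketch of how the maximization in this subsection can be carried out numerically, the snippet below maximizes the logarithm of (10) under the BTM choice (11) with a general-purpose optimizer. It fixes $m_1$ (AUS) at 0, enforces $0 < d$ through a log parameterization, and afterwards shifts the strengths so that the smallest is zero, as the table appears to do; the optimizer, the parameterization, and all names are our illustrative choices, not the authors' implementation.

```python
import math
import numpy as np
from scipy.optimize import minimize

def F(x):
    return 0.5 * (1.0 + np.tanh(x / 2.0))       # logistic c.d.f., Eq. (11)

def neg_log_likelihood(theta):
    # theta = (m2, m3, m4, log d); m1 (AUS) is fixed at 0, d > 0 via exp().
    m = [0.0, theta[0], theta[1], theta[2]]      # AUS, DEN, FRA, TUN
    d = math.exp(theta[3])
    terms = [
        F(d - (m[1] - m[3])) - F(-d - (m[1] - m[3])),   # DEN-TUN tie
        1.0 - F(d - (m[0] - m[3])),                     # AUS beat TUN
        1.0 - F(d - (m[0] - m[1])),                     # AUS beat DEN
        F(-d - (m[0] - m[2])),                          # FRA beat AUS
        F(-d - (m[1] - m[2])),                          # FRA beat DEN
        F(-d - (m[2] - m[3])),                          # TUN beat FRA
    ]
    return -sum(math.log(max(t, 1e-300)) for t in terms)   # guard against underflow

res = minimize(neg_log_likelihood, x0=np.zeros(4), method="Nelder-Mead")
m_hat = np.array([0.0, *res.x[:3]])
print(m_hat - m_hat.min())                       # strengths shifted so the weakest team is 0
print("d =", math.exp(res.x[3]))
```

The resulting shifted strengths can then be compared with the values reported in Table 1.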

4.2. Group Stage of UEFA Nations League 2022–2023

We would like to demonstrate the usability of the new set of conditions through a more complicated example.
In the 2022–2023 UEFA Nations League, Group A3 contained four teams (England (ENG), Germany (GER), Hungary (HUN), and Italy (ITA)), which played a round-robin with home and away matches. In this example, there were twice as many comparisons between the teams as in the previous one. The data of the matches can be found on the website [38]. Based on this, one can easily fill in the data matrix A, and the likelihood function can be set up accordingly. The graph of the match results can be seen in Figure 7.
Now, we turn to the possibility of evaluating these results using Thurstone-motivated methods.
1. The set of conditions DC does not hold, as DC 2 is not satisfied; consider S = {ENG} and $\overline{S}$ = {GER, HUN, ITA}.
2. The set of conditions MC does not hold, as MC 2 is not satisfied: no pair of teams beat each other in the two matches they played.
3. The set of conditions SC is fulfilled:
   SC 1 holds; there was a tie between GER and HUN.
   SC 2 holds; see the directed cycle GER, ITA, HUN, GER in Figure 7.
   SC 3 is satisfied; checking this is left to the reader.
The results of the evaluation applying the logistic distribution can be found in Table 2. One can see that, although the official points of ITA and HUN are close to each other, the difference between their strengths is quite large. The reason is that the wins of HUN were against the weaker teams in the group, GER and ENG, while the wins of ITA were against a strong team, namely HUN. This discrepancy is reflected in the estimated values of the expectations.

5. Comparisons of the Efficiency of the Conditions

In this section, we investigate, in some special situations, which sets of conditions (DC 1, DC 2; MC 1, MC 2, MC 3; SC 1, SC 2, SC 3) are fulfilled, i.e., are able to detect the existence and uniqueness of the maximizer.
From the applications' perspective, there are cases in which the strengths of the objects to rank are close to each other, and cases in which they differ greatly. On the other hand, there are cases in which the judgment “equal” is frequent, and cases in which it is rare. Referring to sports: in football and in chess a draw comes up often, but in handball rarely.
The most general set of conditions is SC; these conditions are fulfilled most frequently of the three sets. Nevertheless, it is interesting to see to what extent it is more applicable than the other two. For that, we performed a large number of computer simulation repetitions for different parameter settings, and we investigated how frequently the conditions are satisfied and how frequently the maximum turns out to exist.
Monte Carlo simulation is a widespread tool for demonstrating the efficiency of a method or finding optimal solutions [39,40,41]. Due to the wide variety of possible outcomes, we applied Monte Carlo simulation for the investigations. The steps of the simulations are detailed as follows:
1. Fix the expectations ($\underline{m}$) and the value of the parameter d. We used arithmetic sequences, i.e., $\underline{m} = (0, h, 2h, \ldots, (n-1)h)$.
2. Fix the number of comparisons.
3. Generate randomly the pairs between which the comparisons exist.
4. Generate randomly the result of each comparison, according to the probabilities (3), (4), and (5) (a sketch of steps 3 and 4 is given after this list).
5. Check whether the sets of conditions DC, MC, and SC are satisfied or not in the case of the above generated random graph.
6. Optimize numerically the likelihood function (6) applying the data corresponding to the generated graph. Decide whether the optimization process is convergent or not.
7. Repeat steps 3–6 N times.
8. Compute the relative frequencies with which the sets of conditions DC, MC, and SC are fulfilled; moreover, compute the relative frequency of the iterations in which the numerical optimization is convergent.
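The following is a minimal sketch of steps 3 and 4 (and of the construction of the data matrix used in step 5), assuming a logistic F; the random pairing, the seed, and the function names are our illustrative choices rather than the exact generator used for the reported simulations.

```python
import math
import random

def simulate_comparisons(m, d, n_comparisons, seed=0):
    """Steps 3-4: draw random pairs and random outcomes according to (3)-(5) with logistic F."""
    rng = random.Random(seed)
    F = lambda x: 0.5 * (1.0 + math.tanh(x / 2.0))   # logistic c.d.f.
    n = len(m)
    A = [[[0, 0, 0] for _ in range(n)] for _ in range(n)]
    for _ in range(n_comparisons):
        i, j = rng.sample(range(n), 2)               # step 3: a random pair of objects
        delta = m[i] - m[j]
        p_worse = F(-d - delta)
        p_equal = F(d - delta) - F(-d - delta)
        u = rng.random()                             # step 4: outcome drawn from (3)-(5)
        k = 0 if u < p_worse else (1 if u < p_worse + p_equal else 2)
        A[i][j][k] += 1
        A[j][i][2 - k] += 1                          # symmetry A[i][j][k] = A[j][i][2-k]
    return A

# Situation I of Table 3: eight objects, h = 0.05, d = 0.5, e.g. 16 comparisons.
A = simulate_comparisons([k * 0.05 for k in range(8)], 0.5, 16)
```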
During the simulations, the logistic distribution was used in the likelihood function. The numerical optimization could be performed by statistical program packages (for example, MATLAB [42] or R), but for the sake of very quick optimization we developed a program in C#, applying a modified fixed-point iteration to the partial derivatives of the logarithm of the likelihood function (6). After every 25 iterations, we checked the changes in the variables rounded to five decimal places. If the change was zero, we stopped the iteration. If the iteration process had not stopped after 1000 iterations, this indicated the absence of a clear maximum position. Using this method, we calculated the objects' strengths and decided about the convergence of the iteration.
In the presented cases, $\underline{m} = (0, h, 2h, \ldots, (n-1)h)$. We present the case of eight objects. The number of comparisons could be any integer from 8 to 64; in the presented cases it is 8, 16, 32, or 64. We present four parameter ensembles, called situations, which are shown in Table 3. The number of repetitions was high, $N = 10^8$.
In the presented situations, if the value of h is small, then the strengths of the objects are close to each other. This implies that many “better–worse” pairs could be formed during the simulations. On the other hand, if the value of h is large, the strengths of the objects are far from each other, and we can expect only a few “better–worse” pairs, but a great number of “better” judgments. In terms of the number of “equal” judgments, if d is large then many “equal” judgments could be formed during the simulations, while only a few when d is small. The set of conditions DC can make good use of the “better” judgments, but it requires only a single “equal” judgment. The set of conditions MC, in contrast, can use the “equal” judgments for connections, together with the “better–worse” pairs. The conditions SC do not require “better–worse” pairs, only “better” judgments forming one cycle; we recall that a single “better–worse” pair already forms a cycle. The “equal” judgments are well applicable for this set of conditions, too.
Table 3 summarizes the situations with the presumable rates of the “equal” judgments and “better–worse” pairs. In addition, Table 4, Table 5, Table 6 and Table 7 contain the numerical results of the simulations. The situations are ordered by a decreasing number of cases in which the maximal value exists. Column MAX contains the number of cases in which the maximum exists. Columns DC/MAX, MC/MAX, and SC/MAX present the ratios of the cases in which the sets of conditions DC, MC, and SC hold, respectively. We can see that increasing the number of comparisons increases both the number of cases in which the maximal value exists and the ratios. We can also see that the values in the column SC/MAX are less than 1 on several occasions. This shows again that SC is not a necessary condition.
We performed $10^8$ simulations per situation. Table 4 presents the results in Situation I. In this case, we can see that the DC/MAX rate is lower than the MC/MAX rate. We could predict this because there are many “equal” judgments. The SC/MAX rate is high even for 16 comparisons. In the case of 16 comparisons, SC is 3.5 times better than MC and over 100 times better than DC.
Table 5 presents the results of Situation II. In this case, the rate of “equal” is low, which does not favor the set of conditions MC. This is also reflected in the ratio MC/MAX, which is much worse than the ratio DC/MAX. The set of conditions SC still stands out among the other conditions.
Table 6 shows the results of Situation III. Here, the maximum exists more rarely than in the previous two cases. In this case, the number of “equal” decisions is high, while the rate of the “better–worse” pairs is low, which is more favorable for MC than for DC, as we can see in Table 6. It can also be seen that the set of conditions DC is not as good as in the previous tables in terms of detecting the existence of the maximum. SC stands out again from the other two sets of conditions. Nevertheless, SC is able to show the existence of the maximum in only 73% of cases for 32 comparisons, compared to 99% in the previous situations. The set of conditions DC is almost useless: it indicates the maximum in only 3.3% of cases even when the number of comparisons equals 64. The set of conditions MC slowly catches up and gets better, but for small numbers of comparisons (8, 16, 32) it is far from the much better SC criteria.
Table 7 presents the results in Situation IV. In this case, the numbers of “equal” choices and “better–worse” pairs are small, which is unfavorable principally to MC. In this situation, SC detects the existence of the maximal value exceptionally well. DC detects it less well, but it still works better than MC.
In all situations, we found that when few comparisons are made, SC is superior to the other conditions. As more and more comparisons were made, the other two sets of conditions improved, but they always remained worse than SC. The clear conclusion from the four tables is that the set of conditions SC is much more effective than the others, especially for small numbers of comparisons.

6. Summary

In this paper, conditions guaranteeing the existence and uniqueness of the MLE were investigated. The case of distributions with a general strictly log-concave probability density function was studied. If two options are allowed, the usually applied condition of Ford is generalized from the logistic distribution to a wide set of distributions. This condition is necessary and sufficient.
In the case of three options in judgment, a necessary and sufficient condition has not been proven, but two different sufficient conditions were known. We generalized them. It has been proven that the new set of conditions guarantees the existence and uniqueness of the maximizer. Two real-life sports examples illustrate the applicability of the generalization.
Moreover, we compared the conditions with the help of computer simulations, and we found that the new set of conditions indicates the existence and uniqueness much more frequently than the previously known ones. Consequently, it provides a more effective tool for research such as that performed by Yan [33] and Bong and Rinaldo [34]. The new set of conditions may offer effective help in the design of decision support systems.
The research includes the possibility of further developments. It would be desirable to set up a necessary and sufficient condition for the existence and uniqueness of the maximizer in the case of three options in choices, and simulations may help these findings. Further research is necessary to investigate the case of more than three options. These will be the subject of a future paper.

Author Contributions

Conceptualization, C.M.; methodology, É.O.-M. and C.M.; software, L.G.; formal analysis, É.O.-M. and C.M.; investigation, L.G.; data curation, L.G.; writing—original draft, É.O.-M.; writing—review and editing, L.G. and C.M.; visualization, É.O.-M.; supervision, C.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Theorem 4
Proof. 
First, we mention that instead of (6), its logarithm, the log-likelihood function
$\log L(X \mid m_1, m_2, \ldots, m_n, d) = \sum_{k=1}^{3} \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} A_{i,j,k} \cdot \log(P(\xi_i - \xi_j \in I_k)),$ (A1)
that is,
$\log L(X \mid m_1, m_2, \ldots, m_n, d) = 0.5 \cdot \sum_{k=1}^{3} \sum_{i=1}^{n} \sum_{j=1, j \neq i}^{n} A_{i,j,k} \cdot \log(P(\xi_i - \xi_j \in I_k)),$ (A2)
is maximized under the conditions $0 < d$ and $m_1 = 0$. We prove that (A2) attains its maximal value under these conditions and that the argument of the maximal value is unique.
The steps of the proof are denoted by (ST1), (ST2), (ST3), (ST4), (ST5), and (ST6).
Computing the value of the log-likelihood function at $\underline{m} = (0, 0, \ldots, 0)$, $d = 1$, and denoting this value by $\log L_0$, the maximum has to be sought in regions where the value of (A2) is at least $\log L_0$. Moreover, we note that every term of the sum in (A2) is negative (or zero if $A_{i,j,k} = 0$); consequently, the maximum cannot be attained in regions where any single term is below $\log L_0$. By investigating the limits of the terms, we will check which parameters can be restricted to closed bounded regions. The proof of the existence relies on the Weierstrass theorem: we restrict the range of d and $m_2, \ldots, m_n$ to bounded closed sets on which the continuous function (A2) attains its maximal value. For that, we prove some lemmas.
(ST1) The first step is to find a positive lower bound for the variable d.
Lemma A1. 
Condition SC 1 guarantees that the maximum can only be attained if $\varepsilon \leq d$ with an appropriate value $0 < \varepsilon$.
Proof. 
SC 1 guarantees that there exists an index pair $(i, j)$ for which $0 < A_{i,j,2}$. Now,
$A_{i,j,2} \cdot \log(F(d - (m_i - m_j)) - F(-d - (m_i - m_j))) \to -\infty \quad \text{if } d \to 0.$ (A3)
If $d \to 0$, the arguments of the two c.d.f. values tend to the same value, so their difference tends to zero. Consequently, its logarithm, multiplied by a positive factor, tends to minus infinity.
As $0.5 \cdot A_{i,j,2} \cdot \log(F(d - (m_i - m_j)) - F(-d - (m_i - m_j))) < \log L_0$ if $d < \varepsilon$, we can restrict the region of d to the subset $\varepsilon \leq d$, with an appropriate value $0 < \varepsilon$, while seeking the maximum. □
(ST2) The next step is to find an upper bound for the variable d.
Lemma A2. 
If $0 < A_{i,j,3}$, then there exists an upper bound $K_{i,j}$ such that the maximum can only be attained in the region $d - (m_i - m_j) \leq K_{i,j}$.
Proof. 
It is easy to see that if $0 < A_{i,j,3}$, then:
$A_{i,j,3} \cdot \log(1 - F(d - (m_i - m_j))) \to -\infty \quad \text{supposing } d - (m_i - m_j) \to \infty.$ (A4)
Consequently, there exists a value $K_{i,j}$ with the following property: if $K_{i,j} < d - (m_i - m_j)$, then $0.5 \cdot A_{i,j,3} \cdot \log(1 - F(d - (m_i - m_j))) < \log L_0$, and the maximum has to be sought in the region $d - (m_i - m_j) \leq K_{i,j}$. This means that the maximum can be attained only in regions where $d - (m_i - m_j)$ has an upper bound. □
Condition SC 2 guarantees that there is a cycle $(i_1, i_2, \ldots, i_h, i_1)$ with directed edges from $i_k$ to $i_{k+1}$, $k = 1, 2, \ldots, h-1$, and from $i_h$ to $i_1$; these directed edges arise from $0 < A_{i_k, i_{k+1}, 3}$, $k = 1, \ldots, h-1$, and $0 < A_{i_h, i_1, 3}$. We can assume that $i_1 = 1$. Lemma A2 implies that:
$d - (m_{i_k} - m_{i_{k+1}}) \leq K_{k,k+1} \quad \text{and} \quad d - (m_{i_h} - m_{i_1}) \leq K_{h,1}.$ (A5)
Using a common upper bound K ($K_{k,k+1} \leq K$ and $K_{h,1} \leq K$) and summing the inequalities in (A5), we obtain that $h \cdot d \leq \sum_{k=1}^{h} K = h \cdot K$, that is, $d \leq K$. This proves that it is enough to seek the maximum on a closed bounded set of values of d.
(ST3) Now, let us turn to the upper and lower bounds of the parameters  m i .
Let us define a graph $G(SC)$ as follows: the vertices are the objects. There is a directed edge from i to j if $0 < A_{i,j,3}$ (i is “better” than j according to at least one opinion). There is a directed edge from i to j and also from j to i if $0 < A_{i,j,2}$ (they are “equal” according to at least one opinion). We will use the following well-known statement: condition SC 3 is equivalent to the condition that between any pair of objects i and j there is a directed path in $G(SC)$ from each one to the other.
Lemma A3. 
If $0 < A_{i,j,3}$ and $m_i \leq K_i$, then there exists an upper bound of $m_j$, denoted by $K_j$, with the following property: $0.5 \cdot A_{i,j,3} \cdot \log(1 - F(d - (m_i - m_j))) < \log L_0$ if $K_j < m_j$; that is, the maximum can be attained only if $m_j \leq K_j$.
Proof. 
Recalling (A4), we can conclude that $d - (m_i - m_j) \leq K_{i,j}$. As $m_i \leq K_i$ and $0 < d$, we obtain that $m_j \leq K_{i,j} + K_i$. □
We can interpret Lemma A3 as follows: the property “having an upper bound” spreads in the direction of the “better” edges defined by $0 < A_{i,j,3}$.
Lemma A4. 
If $0 < A_{i,j,1}$ (there is at least one opinion according to which i is “worse” than j) and the inequality $B_i \leq m_i$ holds, then there exists a lower bound of $m_j$, denoted by $B_j$, with the following property: $0.5 \cdot A_{i,j,1} \cdot \log F(-d - (m_i - m_j)) < \log L_0$ if $m_j < B_j$; that is, the maximum can be attained only if $B_j \leq m_j$.
Proof. 
The statement is a consequence of the following: if $0 < A_{i,j,1}$, then:
$\log F(-d - (m_i - m_j)) \to -\infty \quad \text{supposing } -d - (m_i - m_j) \to -\infty.$ (A6)
We can interpret Lemma A4 as follows: the property “having a lower bound” spreads along the opinion “worse”.
(ST4) Finally, we investigate the effect of the existence of an opinion “equal” on the boundedness properties.
Lemma A5. 
Suppose that the parameter d is bounded. If $0 < A_{i,j,2}$ and $m_i \leq U_i$, then there exists an upper bound $U_j$ such that if $U_j < m_j$ then (A2) $< \log L_0$. This means that the maximum has to be sought in the region $m_j \leq U_j$.
If $0 < A_{i,j,2}$ and $H_i \leq m_i$, then there exists a lower bound $H_j$ such that if $m_j < H_j$ then (A2) $< \log L_0$. This means that the maximum has to be sought in the region $H_j \leq m_j$.
Proof. 
It is easy to see that:
$\lim_{d - (m_i - m_j) \to -\infty} A_{i,j,2} \cdot \log(F(d - (m_i - m_j)) - F(-d - (m_i - m_j))) = -\infty,$ (A7)
and:
$\lim_{-d - (m_i - m_j) \to \infty} A_{i,j,2} \cdot \log(F(d - (m_i - m_j)) - F(-d - (m_i - m_j))) = -\infty.$ (A8)
Consequently, the maximum has to be in the following region:
$H_{i,j} \leq d - (m_i - m_j) \quad \text{and} \quad -d - (m_i - m_j) \leq B_{i,j},$ (A9)
respectively, with appropriate bounds $H_{i,j}$ and $B_{i,j}$. As $\varepsilon \leq d \leq K$, the bound $m_i \leq U_i$ implies that $m_j \leq B_{i,j} + K + U_i$, and $H_i \leq m_i$ implies that $H_{i,j} - K + H_i \leq m_j$. □
We can summarize Lemma A5 as follows: both properties, “having an upper bound” and “having a lower bound”, spread along the opinion “equal”. It behaves as a “better” and a “worse” opinion at the same time.
(ST5) Now, we can prove that it is enough to seek the maximum on a closed bounded set of every parameter  m i .
Starting from $m_1 = 0$, there exists a directed path from 1 to i in $G(SC)$, along the edges defined by $0 < A_{i,j,3}$ and $0 < A_{i,j,2}$. Walking along this path, and recalling that $m_1 = 0$, the property “having an upper bound” spreads from object 1 to object i. Considering a directed path from object i to object 1 and walking it backwards from 1 towards i, the property “having a lower bound” of object 1 spreads to object i, for every index i; consequently, the expectations can be restricted to a bounded closed set. The maximum of (A2) can only be in these regions. As (A2) is a continuous function, the Weierstrass theorem implies the existence of the maximal value.
(ST6) The uniqueness of the argument of the maximal value is a consequence of the strictly concave property of the logarithm of the p.d.f. Lemma 6 in [29] implies the strict concavity of the function (A2) in $d - (m_i - m_j)$ and $-d - (m_i - m_j)$ for every index pair $(i, j)$ for which $0 < A_{i,j,2}$, and in $d - (m_i - m_j)$ if $0 < A_{i,j,3}$. Walking along the cycle in $DG(SC)$ defined by SC 2 and summing the arguments, we obtain that the function (A2) is a strictly concave function of the parameter d.
Now, let us turn to the strictly concave property of (A2) in the parameters $m_i$, $i = 2, 3, \ldots, n$. There is a directed path from 1 to i, defined by $0 < A_{i,j,2}$ and $0 < A_{i,j,3}$, in the graph $G(SC)$. Walking along it, we can conclude the strictly concave property of (A2) in $d - (m_{i_k} - m_{i_{k+1}})$. Summing the arguments of the terms belonging to the path, we obtain that the log-likelihood function is strictly concave in $l \cdot d + m_i$, where $0 < l$ is the length of the path. This fact and the strictly concave property in d guarantee the strictly concave property in $m_i$. We obtain that the function (A2) is strictly concave in each of its variables $m_i$ and d; therefore, the argument of the maximum is unique. □

References

  1. Sahroni, T.R.; Ariff, H. Design of analytical hierarchy process (AHP) for teaching and learning. In Proceedings of the 2016 11th International Conference on Knowledge, Information and Creativity Support Systems (KICSS), IEEE, Yogyakarta, Indonesia, 10–12 November 2016; pp. 1–4.
  2. Kosztyán, Z.T.; Orbán-Mihálykó, É.; Mihálykó, C.; Csányi, V.V.; Telcs, A. Analyzing and clustering students’ application preferences in higher education. J. Appl. Stat. 2020, 47, 2961–2983.
  3. Cattelan, M.; Varin, C.; Firth, D. Dynamic Bradley–Terry modelling of sports tournaments. J. R. Stat. Soc. Ser. C (Appl. Stat.) 2013, 62, 135–150.
  4. Gyarmati, L.; Orbán-Mihálykó, É.; Mihálykó, C.; Vathy-Fogarassy, Á. Aggregated Rankings of Top Leagues’ Football Teams: Application and Comparison of Different Ranking Methods. Appl. Sci. 2023, 13, 4556.
  5. Orbán-Mihálykó, É.; Mihálykó, C.; Gyarmati, L. Application of the Generalized Thurstone Method for Evaluations of Sports Tournaments’ Results. Knowledge 2022, 2, 157–166.
  6. Jeon, J.J.; Kim, Y. Revisiting the Bradley-Terry model and its application to information retrieval. J. Korean Data Inf. Sci. Soc. 2013, 24, 1089–1099.
  7. Trojanowski, T.W.; Kazibudzki, P.T. Prospects and Constraints of Sustainable Marketing Mix Development for Poland’s High-Energy Consumer Goods. Energies 2021, 14, 8437.
  8. Montequín, V.R.; Balsera, J.M.V.; Piloñeta, M.D.; Pérez, C.Á. A Bradley-Terry model-based approach to prioritize the balance scorecard driving factors: The case study of a financial software factory. Mathematics 2020, 8, 107.
  9. Canco, I.; Kruja, D.; Iancu, T. AHP, a Reliable Method for Quality Decision Making: A Case Study in Business. Sustainability 2021, 13, 13932.
  10. Courcoux, P.; Semenou, M. Preference data analysis using a paired comparison model. Food Qual. Prefer. 1997, 8, 353–358.
  11. Saaty, T.L. A scaling method for priorities in hierarchical structures. J. Math. Psychol. 1977, 15, 234–281.
  12. Saaty, T.L. Decision making—The analytic hierarchy and network processes (AHP/ANP). J. Syst. Sci. Syst. Eng. 2004, 13, 1–35.
  13. Liu, Y.; Eckert, C.M.; Earl, C. A review of fuzzy AHP methods for decision-making with subjective judgements. Expert Syst. Appl. 2020, 161, 113738.
  14. Bozóki, S.; Fülöp, J.; Rónyai, L. On optimal completion of incomplete pairwise comparison matrices. Math. Comput. Model. 2010, 52, 318–333.
  15. Rahman, H.U.; Raza, M.; Afsar, P.; Alharbi, A.; Ahmad, S.; Alyami, H. Multi-criteria decision making model for application maintenance offshoring using analytic hierarchy process. Appl. Sci. 2021, 11, 8550.
  16. Brunelli, M. Introduction to the Analytic Hierarchy Process; Springer: Berlin/Heidelberg, Germany, 2014.
  17. Bonnini, S. Multivariate approach for comparative evaluations of customer satisfaction with application to transport services. Commun. Stat.-Simul. Comput. 2016, 45, 1554–1568.
  18. Thurstone, L. A law of comparative judgment. Psychol. Rev. 1927, 34, 273–286.
  19. Bradley, R.A.; Terry, M.E. Rank Analysis of Incomplete Block Designs: I. The Method of Paired Comparisons. Biometrika 1952, 39, 324–345.
  20. Eliason, S.R. Maximum Likelihood Estimation: Logic and Practice; Number 96; Sage: London, England, 1993.
  21. Ford, L.R., Jr. Solution of a ranking problem from binary comparisons. Am. Math. Mon. 1957, 64, 28–33.
  22. Graßhoff, U.; Großmann, H.; Holling, H.; Schwabe, R. Optimal designs for main effects in linear paired comparison models. J. Stat. Plan. Inference 2004, 126, 361–376.
  23. Cattelan, M. Models for paired comparison data: A review with emphasis on dependent data. Stat. Sci. 2012, 27, 412–433.
  24. Glenn, W.; David, H. Ties in paired-comparison experiments using a modified Thurstone-Mosteller model. Biometrics 1960, 16, 86–109.
  25. Rao, P.; Kupper, L.L. Ties in paired-comparison experiments: A generalization of the Bradley-Terry model. J. Am. Stat. Assoc. 1967, 62, 194–204.
  26. Davidson, R.R. On extending the Bradley-Terry model to accommodate ties in paired comparison experiments. J. Am. Stat. Assoc. 1970, 65, 317–328.
  27. Agresti, A. Analysis of ordinal paired comparison data. J. R. Stat. Soc. Ser. C (Appl. Stat.) 1992, 41, 287–297.
  28. Orbán-Mihálykó, É.; Mihálykó, C.; Koltay, L. A generalization of the Thurstone method for multiple choice and incomplete paired comparisons. Cent. Eur. J. Oper. Res. 2019, 27, 133–159.
  29. Orbán-Mihálykó, É.; Mihálykó, C.; Koltay, L. Incomplete paired comparisons in case of multiple choice and general log-concave probability density functions. Cent. Eur. J. Oper. Res. 2019, 27, 515–532.
  30. Hankin, R.K. A generalization of the Bradley–Terry model for draws in chess with an application to collusion. J. Econ. Behav. Organ. 2020, 180, 325–333.
  31. Szabó, F.; Kéri, R.; Schanda, J.; Csuti, P.; Mihálykó-Orbán, E. A study of preferred colour rendering of light sources: Home lighting. Light. Res. Technol. 2016, 48, 103–125.
  32. McHale, I.; Morton, A. A Bradley-Terry type model for forecasting tennis match results. Int. J. Forecast. 2011, 27, 619–630.
  33. Yan, T. Ranking in the generalized Bradley–Terry models when the strong connection condition fails. Commun. Stat.-Theory Methods 2016, 45, 340–353.
  34. Bong, H.; Rinaldo, A. Generalized results for the existence and consistency of the MLE in the Bradley-Terry-Luce model. In Proceedings of the International Conference on Machine Learning, PMLR, Baltimore, MD, USA, 17–23 July 2022; pp. 2160–2177.
  35. Karlin, S. A First Course in Stochastic Processes; Academic Press: Cambridge, MA, USA, 2014.
  36. Prékopa, A. On logarithmic concave measures and functions. Acta Sci. Math. 1973, 34, 335–343.
  37. Wikipedia. 2022 FIFA World Cup. 2022. Available online: https://en.wikipedia.org/wiki/2022_FIFA_World_Cup#Group_D (accessed on 1 June 2023).
  38. Wikipedia. 2022–23 UEFA Nations League. 2022. Available online: https://en.wikipedia.org/wiki/2022%E2%80%9323_UEFA_Nations_League#Group_A3 (accessed on 1 June 2023).
  39. Zamanzade, E.; Mahdizadeh, M.; Samawi, H.M. Efficient estimation of cumulative distribution function using moving extreme ranked set sampling with application to reliability. AStA Adv. Stat. Anal. 2020, 104, 485–502.
  40. Mahdizadeh, M.; Zamanzade, E. Smooth estimation of the area under the ROC curve in multistage ranked set sampling. Stat. Pap. 2021, 62, 1753–1776.
  41. Szádoczki, Z.; Bozóki, S.; Juhász, P.; Kadenko, S.V.; Tsyganok, V. Incomplete pairwise comparison matrices based on graphs with average degree approximately 3. Ann. Oper. Res. 2022, 1–25.
  42. Wickelmaier, F.; Schmid, C. A Matlab function to estimate choice model parameters from paired-comparison data. Behav. Res. Methods Instrum. Comput. 2004, 36, 29–40.
Figure 1. The options and the intervals belonging to them.
Figure 2. The graph of Example 1. - - - is for “equal”, -> is for “better”.
Figure 3. The graph of Example 2. - - - is for “equal”, -> is for “better”.
Figure 4. The graph of Example 3. - - - is for “equal”, -> is for “better”.
Figure 5. The graph of Example 4. - - - is for “equal”, -> is for “better”.
Figure 6. The graph of the match results of Group D. (- - - is for tie, -> is for win).
Figure 7. The graph of the match results of Group A3. (- - - is for tie, -> is for win).
Table 1. Evaluation of the match results in Group D.

Country   Official Points   m_i
FRA       6                 1.830
AUS       6                 1.791
TUN       4                 0.905
DEN       1                 0.000
Table 2. Evaluation of the match results in Group A3.

Country   Official Points   m_i
ITA       11                1.672
HUN       10                1.347
GER       7                 0.900
ENG       3                 0.000
Table 3. Situations investigated.

Situation   h      d      Rate of Judgments “Equal”   Rate of “Better–Worse” Pairs
I.          0.05   0.5    large                       large
II.         0.05   0.05   small                       large
III.        0.5    0.5    large                       small
IV.         0.5    0.05   small                       small
Table 4. Situation I (h = 0.05, d = 0.5).

Number of Comparisons   MAX          DC/MAX      MC/MAX      SC/MAX
8                       57,216       0           0.0921421   0.1941765
16                      38,664,325   0.0058568   0.2019802   0.7097257
32                      95,920,581   0.239853    0.8280385   0.9895364
64                      99,987,066   0.883599    0.9988596   0.9999986
Table 5. Situation II (h = 0.05, d = 0.05).

Number of Comparisons   MAX          DC/MAX      MC/MAX      SC/MAX
8                       371          0           0           0.4070081
16                      5,448,890    0.3228876   0.0009119   0.9937707
32                      58,963,802   0.8708119   0.1898881   0.9999976
64                      92,019,027   0.9963352   0.9506307   1
Table 6. Situation III (h = 0.5, d = 0.5).

Number of Comparisons   MAX          DC/MAX      MC/MAX      SC/MAX
8                       248          0           0.0282258   0.0604839
16                      1,025,064    0.0005717   0.0532279   0.4203006
32                      23,544,050   0.004597    0.2771048   0.7256062
64                      76,946,023   0.0333163   0.8141669   0.95373
Table 7. Situation IV (h = 0.5, d = 0.05).

Number of Comparisons   MAX          DC/MAX       MC/MAX       SC/MAX
8                       2            0            0            1
16                      44,246       0.1146209    0.00020355   0.9370956
32                      2,621,654    0.35173555   0.0184299    0.9965827
64                      25,579,173   0.6329823    0.37594615   0.99996685
