1. Introduction
Yager [1,2] introduced the concept of the ordered weighted averaging (OWA) operator. Determining the weights of the operators is an important issue in both the theory and application of OWA operators. Previous studies have proposed a number of approaches for obtaining the associated weights in different areas such as data mining, decision making, neural networks, approximate reasoning, expert systems, fuzzy systems and control [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]. A number of approaches have been proposed for the identification of associated weights, including exponential smoothing [6], quantifier-guided aggregation [19,20] and learning [20]. O’Hagan [9] proposed another approach that determines a special class of OWA operators having maximal entropy for the OWA weights; this approach is algorithmically based on the solution of a constrained optimization problem. Hong [10] provided a new method supporting the minimum variance problem. Fullér and Majlender [7,8] suggested a minimum variance approach to obtain the minimal variability OWA weights and proved that the maximum entropy model can be transformed into a polynomial equation that can be solved analytically. Liu and Chen [13] proposed a parametric geometric approach that can be used to obtain maximum entropy weights. Wang and Parkan [18] suggested a new method that generates the OWA operator weights by minimizing the maximum difference between any two adjacent weights. They transformed the minimax disparity problem into a linear programming problem, obtained weights for some special values of orness, and proved the dual property of OWA operators. Liu [12] proved that the minimax disparity OWA problem of Wang and Parkan [18] and the minimum variance problem of Fullér and Majlender [7] always produce the same weight vector. Emrouznejad and Amin [5] gave an alternative disparity problem that identifies the OWA operator weights by minimizing the sum of the deviations between distinct OWA weights. Amin and Emrouznejad [3,4] proposed an extended minimax disparity model, and Hong [11] proved this open problem mathematically. Recently, Wang et al. [18] suggested a least square deviation model for obtaining OWA operator weights, which is nonlinear and was solved by using the LINGO program for a given degree of orness. Sang and Liu [17] solved this constrained optimization problem analytically using the method of Lagrange multipliers. Liu [14] studied the general minimax disparity OWA operator optimization problem, which includes a minimax disparity OWA operator optimization model, and a general convex OWA operator optimization problem, which includes the maximum entropy [7] and minimum variance OWA problems [8,10,15]. Liu [15] suggested a general optimization model for determining ordered weighted averaging (OWA) operators and three specific models for generating monotonic and symmetric OWA operators.
In this paper, we propose a general least convex deviation model for OWA operators, which attempts to obtain the desired OWA weight vector under a given orness level by minimizing the convex deviation obtained after a monotone convex function transformation of the absolute deviations between adjacent weights. The model includes the least square deviation (LSD) OWA operator model suggested by Wang et al. [1] as a special case. We solve the optimization problem completely and mathematically, and we consider the same numerical examples that Wang et al. [1] and Sang and Liu [17] presented in their illustrations of the application of the least square deviation model. We also determine the solution OWA operator weights not only for some discrete values of α but for all orness levels α ∈ [0, 1], as a function of α.
2. The Least Convex Deviation Model
Yager [2] introduced an aggregation technique based on the ordered weighted averaging (OWA) operators. An OWA operator of dimension n is a mapping OWA: R^n → R that has an associated weighting vector W = (w_1, ..., w_n)^T with properties w_1 + ... + w_n = 1 and 0 ≤ w_i ≤ 1 for i = 1, ..., n, and
$$\mathrm{OWA}(a_1,\ldots,a_n)=\sum_{j=1}^{n} w_j b_j,$$
where b_j is the jth largest element of a collection of the aggregated objects a_1, ..., a_n. In [2], Yager introduced a measure of "orness" associated with the weighting vector W of an OWA operator, which is defined as
$$orness(W)=\frac{1}{n-1}\sum_{j=1}^{n}(n-j)\,w_j.$$
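For a computational reading of these definitions, the following Python sketch (our own illustration; the function names and the sample weights are not from the paper) evaluates an OWA aggregate and the orness of a weighting vector.

```python
# Minimal sketch (ours): OWA aggregation and the orness measure.
import numpy as np

def owa(weights, values):
    """OWA aggregate: weights applied to the values sorted in decreasing order."""
    b = np.sort(values)[::-1]                 # b_j = j-th largest argument
    return float(np.dot(weights, b))

def orness(weights):
    """orness(W) = (1/(n-1)) * sum_j (n-j) * w_j."""
    n = len(weights)
    j = np.arange(1, n + 1)
    return float(np.dot(n - j, weights) / (n - 1))

W = np.array([0.4, 0.3, 0.2, 0.1])            # illustrative weights, sum to 1
print(owa(W, [0.6, 0.9, 0.2, 0.7]))           # 0.71
print(orness(W))                              # 0.666..., leaning toward "or"
```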
Wang and Parkan [17] proposed the following minimax disparity OWA operator optimization problem:
$$\min\ \max_{1\le j\le n-1}|w_j-w_{j+1}|\quad \text{s.t.}\quad \frac{1}{n-1}\sum_{j=1}^{n}(n-j)\,w_j=\alpha,\ \ 0\le\alpha\le 1,\ \ \sum_{j=1}^{n}w_j=1,\ \ w_j\ge 0,\ j=1,\ldots,n.$$
The minimax disparity approach obtains OWA operator weights by minimizing the maximum difference between any two adjacent weights. Recently, Liu [14] considered the general minimax disparity OWA operator optimization problem, where F is a strictly convex function on [0, 1] that is at least twice differentiable.
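For completeness, the sketch below (our own code, assuming the standard reformulation of the minimax disparity problem as a linear program with an auxiliary bound variable) computes minimax disparity weights with scipy.optimize.linprog; it is an illustration, not the authors' implementation.

```python
# Sketch (ours): minimax disparity weights via the LP reformulation.
import numpy as np
from scipy.optimize import linprog

def minimax_disparity_weights(n, alpha):
    # Variables: w_1, ..., w_n and an auxiliary bound delta >= |w_j - w_{j+1}|.
    c = np.zeros(n + 1)
    c[-1] = 1.0                                   # minimize delta
    A_ub, b_ub = [], []
    for j in range(n - 1):
        row1 = np.zeros(n + 1)                    #  w_j - w_{j+1} - delta <= 0
        row1[j], row1[j + 1], row1[-1] = 1.0, -1.0, -1.0
        row2 = np.zeros(n + 1)                    # -w_j + w_{j+1} - delta <= 0
        row2[j], row2[j + 1], row2[-1] = -1.0, 1.0, -1.0
        A_ub += [row1, row2]
        b_ub += [0.0, 0.0]
    j = np.arange(1, n + 1)
    orness_row = np.append((n - j) / (n - 1), 0.0)   # orness(W) = alpha
    sum_row = np.append(np.ones(n), 0.0)             # sum of weights = 1
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.vstack([orness_row, sum_row]), b_eq=[alpha, 1.0],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[:n]

print(np.round(minimax_disparity_weights(5, 0.7), 4))
```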
Liu [14] also considered a general convex OWA operator optimization problem with a given orness level:
$$\min\ \sum_{i=1}^{n}F(w_i)\quad \text{s.t.}\quad \frac{1}{n-1}\sum_{j=1}^{n}(n-j)\,w_j=\alpha,\ \ 0\le\alpha\le 1,\ \ \sum_{j=1}^{n}w_j=1,\ \ w_j\ge 0,\ j=1,\ldots,n, \qquad (1)$$
where F is a strictly convex function on [0, 1] that is at least twice differentiable. When F(x) = x ln x, (1) becomes the maximum entropy OWA operator problem that was discussed in [7,12]. Taking F(x) = x^2 in (1) corresponds to the minimum variance OWA operator problem [8,10]. When F(x) = x^λ (for suitable λ), (1) becomes the OWA problem of Rényi entropy [15].
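As a rough numerical companion to problem (1) (our own sketch, written under the assumption that (1) minimizes the sum of F over the individual weights subject to the orness and normalization constraints), the code below compares the maximum entropy choice F(x) = x ln x with the minimum variance choice F(x) = x^2.

```python
# Sketch (ours): problem (1) solved numerically for two choices of F.
import numpy as np
from scipy.optimize import minimize

def convex_owa_weights(n, alpha, F):
    j = np.arange(1, n + 1)
    cons = [{"type": "eq", "fun": lambda w: np.dot(n - j, w) / (n - 1) - alpha},
            {"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    w0 = np.full(n, 1.0 / n)                      # start from equal weights
    res = minimize(lambda w: np.sum(F(w)), w0, method="SLSQP",
                   bounds=[(1e-9, 1.0)] * n, constraints=cons)
    return res.x

n, alpha = 5, 0.7
max_entropy = convex_owa_weights(n, alpha, lambda w: w * np.log(w))   # F(x) = x ln x
min_variance = convex_owa_weights(n, alpha, lambda w: w ** 2)         # F(x) = x^2
print(np.round(max_entropy, 4))
print(np.round(min_variance, 4))
```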
Wang et al. [1] introduced the following least squares deviation (LSD) method as an alternative approach to determine the OWA operator weights:
$$\min\ \sum_{j=1}^{n-1}(w_j-w_{j+1})^{2}\quad \text{s.t.}\quad \frac{1}{n-1}\sum_{j=1}^{n}(n-j)\,w_j=\alpha,\ \ 0\le\alpha\le 1,\ \ \sum_{j=1}^{n}w_j=1,\ \ w_j\ge 0,\ j=1,\ldots,n.$$
They solved this problem by using the LINGO or MATLAB software packages. Recently, Sang and Liu [17] solved this constrained optimization problem analytically by using the method of Lagrange multipliers. The general least convex deviation model for OWA operators attempts to obtain the desired OWA weight vector under a given orness level by minimizing the convex deviation obtained after a monotone convex function transformation of the absolute deviations between adjacent weights; it includes the least square deviation (LSD) problem as a special case.
We now propose the general least convex deviation model with a given orness level as follows:
$$\min\ \sum_{j=1}^{n-1}F\left(|w_j-w_{j+1}|\right)\quad \text{s.t.}\quad \frac{1}{n-1}\sum_{j=1}^{n}(n-j)\,w_j=\alpha,\ \ 0\le\alpha\le 1,\ \ \sum_{j=1}^{n}w_j=1,\ \ w_j\ge 0,\ j=1,\ldots,n, \qquad (3)$$
where F is a strictly convex function on [0, 1), and F' is continuous on [0, 1) such that F'(0) = 0.
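The sketch below (ours) reflects our reading of model (3): it minimizes the sum of F applied to the absolute differences of adjacent weights under the same constraints, and taking F(x) = x^2 recovers the LSD weights. It is a numerical aid only and is no substitute for the analytic solution developed in the next section.

```python
# Sketch (ours): model (3) with a user-supplied convex deviation function F.
import numpy as np
from scipy.optimize import minimize

def least_convex_deviation_weights(n, alpha, F):
    j = np.arange(1, n + 1)
    cons = [{"type": "eq", "fun": lambda w: np.dot(n - j, w) / (n - 1) - alpha},
            {"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    objective = lambda w: np.sum(F(np.abs(np.diff(w))))   # sum_j F(|w_j - w_{j+1}|)
    res = minimize(objective, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x

# F(x) = x^2 gives the least square deviation (LSD) weights.
print(np.round(least_convex_deviation_weights(5, 0.7, lambda x: x ** 2), 4))
```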
The following are well-known propositions, which can easily be checked.
Proposition 1. If α = 1, then W* = (1, 0, ..., 0)^T is the only feasible solution of the model (3). For α = 0, W* = (0, ..., 0, 1)^T is the only feasible solution of the model (3). Since orness(W) = 1 if and only if W = (1, 0, ..., 0)^T, we have that if α = 1, then W* = (1, 0, ..., 0)^T is the only optimal solution of the model (3).
Proposition 2. If W* = (w_1*, ..., w_n*)^T is an optimal solution of the model (3) for a given level of orness(W) = α, then its reverse (w_n*, ..., w_1*)^T is an optimal solution of the model (3) for orness(W) = 1 − α, and vice versa. Hence, for any α ∈ (0, 1/2), we can consider the model (3) for orness degree 1 − α and then take the reverse of that optimal solution.
By Propositions 1 and 2, without loss of generality, we may assume that α ∈ [1/2, 1).
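The duality in Proposition 2 rests on the elementary fact that reversing a weighting vector sends its orness level α to 1 − α; the small check below (ours) illustrates this.

```python
# Check (ours): orness(W) + orness(reverse of W) = 1.
import numpy as np

def orness(w):
    n = len(w)
    j = np.arange(1, n + 1)
    return float(np.dot(n - j, w) / (n - 1))

w = np.array([0.35, 0.25, 0.20, 0.12, 0.08])   # any weights summing to 1
print(orness(w), orness(w[::-1]))              # the two values add up to 1
```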
3. Optimal Solution of the Least Convex Deviation Problem
In this section, we consider the mathematical proof of the optimization problem (3). We need the following lemmas to find the optimal solution of the model (3).
Lemma 1. Let be the set of nonnegative weighting vectors where such that If then there exists the set of nonnegative weighting vectors such that and Proof. Consider
and
(
depends on
) such that
and define a function
on
by
Then
is continuous and
Let
for some
and
. Then we have
so that
Now since
and then there exist
and
such that
and
and
and, by (4),
Then since
and
F is strictly increasing, we have
This completes the proof. □
Lemma 2. Let be the set of nonnegative weighting vectors such that If then there exists the set of nonnegative weighting vectors such that and Proof. Let
be the i-th smallest weighting vector of
. Then we have
Hence there exists some
such that
and
where
Since
we consider two possible cases;
or
First we suppose that
and let
Since
and
we have that
and then
and
Since
F is nondecreasing on
,
We note that for
there exists
such that
Then h is an increasing continuous function of and we have three possible cases as ; (Case 1) for some , (Case 2) , and (Case 3) for some .
We define a function
on
by
such that
Then
H is continuous and, then by (6), we have
(Case 1) for some ;
There are two possible cases, that is,
or
Then, from (8) and (9), there exist
and
such that
Then we have
and
And since
F is nondecreasing on
, by construction of
for
Second, suppose that
and let
,
and
. Then
and from Lemma 1, we obtain
such that
,
and
.
(Case 2) ;
Since
and
from (8) and (11), there exist
such that
Hence we obtain
by putting
such that
and
And, just like (Case 1), we have
(Case 3) for some ;
There are two possible cases, that is,
or
But if
, then it is easy to obtain desired
by the similar arguments to the above. Hence we consider the case
Now (12) and (13) are exactly the same as (5) and (6) regarding
as
and
as
in (5) and (6). If we use the same arguments as above a finite number of times, then we finally arrive at the following situation: there exist
such that
and
If we put and in Lemma 1, then we obtain the desired result of by using Lemma 1 again. We complete the proof. □
The following result follows immediately from Lemma 2.
Lemma 3. The model (3) is equivalent to the following model (14), where F is a strictly convex function on [0, 1) and F' is continuous on [0, 1) such that F'(0) = 0.
Lemma 4. If we put , then the model (14) is transformed into the following model (15), where F is a strictly convex function on [0, 1) with continuous first derivative F' such that F'(0) = 0.
We now prove the optimization problem of model (3). We note that F is strictly convex if and only if F' is strictly increasing.
Theorem 1. Let F be a strictly convex function on [0, 1) and let F' be continuous on [0, 1) such that F'(0) = 0. Then the optimal solution for the model (3) with given orness level α is as follows:
In case of , it is the weighting function with where are determined by the constraints: and . In case of , it is the weighting function with and where is determined by the constraints such that
Proof. By Lemma 4, we consider the following model (15) to get
for
.
There are two possible cases: (Case 1) or (Case 2) .
(Case 1)
Let
such that
and let
for
be a vector such that
We also note that
and we put
for
. Then, noting that
we have
from (22) and (24) because
We also have, from (21) and (23)
because
Since
(the equality holds if and only if
), we have that
where the second equality comes from the fact that
, the third equality comes from (25), the fifth equality comes from (26) and (27) and the second inequality comes from the fact that
for
. The equality holds if and only if
This completes the Case 1.
(Case 2)
Let
and
where
is determined by the constraints such that
Since
we have
and then
for
satisfies constraints of the model (15). We now show that
for
is the optimal solution of the model (15). Let
for
be a vector such that
If we put
,
then we have
because
where the first equality comes from (35) and the last equality comes from (30). Hence we have
where the second equality comes from (28) and the fourth equality comes from (36). The equality holds if and only if
for
This completes the proof. □
Note 1. Observe that
is either
or
for some
. By Lemma 2, the solution OWA operator weights for
has the form
Then and by, . We also note that ⇔ and ⇔ for some
As a special case of model (3), we consider the following model for
Note 2. Let
be a subset of
on which the optimal solution for the model (37) with given orness level
has the form of
If
is a linear function of
with positive slope, then we define
by
. We also have
From now on, we have the closed form of the exact optimal solutions of the LSD OWA model, specifically as a function of n and α.
Corollary 1 ([17]). The optimal solution for the model (37) with given orness level when and is the weighting function , where and on
Proof. By Equation (20) in Theorem 1 with and , we have
Then, by Equation (18),
for
and hence by Equation (
19)
Since
, noting that
is a linear function of
with positive slope,
So that is the optimal solution for the model (37) for □
Corollary 2 ([17]). The optimal solution for the model (38) with given orness level when and for is the weighting function on
Proof. Let
be given for
and
,
in the Equation (
16) of Theorem 1. If
and
then we have
where
Since
is the linear function of
with positive slope, we have
so that
This completes the proof. □
From Corollary 1, each optimal weight is a linear function of α on each interval. It is also easy to check that each weight is continuous as a function of α. Hence we have the following property.
Proposition 3. Let W*(α), as a function of α, be the optimal solution for the model (37) with given orness level α when α ∈ [1/2, 1). Then W*(α) is continuous and piecewise linear.
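Proposition 3 can also be observed numerically: solving the LSD model on a uniform grid of orness levels shows each weight varying continuously in α and linearly between breakpoints. The sketch below (ours; it uses a general-purpose solver rather than the closed-form expressions) illustrates this behaviour.

```python
# Sketch (ours): LSD weights on a grid of orness levels; near-zero second
# differences indicate piecewise linearity in alpha away from breakpoints.
import numpy as np
from scipy.optimize import minimize

def lsd_weights(n, alpha):
    j = np.arange(1, n + 1)
    cons = [{"type": "eq", "fun": lambda w: np.dot(n - j, w) / (n - 1) - alpha},
            {"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    res = minimize(lambda w: np.sum(np.diff(w) ** 2), np.full(n, 1.0 / n),
                   method="SLSQP", bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x

alphas = np.linspace(0.5, 0.95, 10)
W = np.array([lsd_weights(5, a) for a in alphas])    # one row of weights per alpha
second_diff = W[:-2] - 2 * W[1:-1] + W[2:]           # ~0 wherever w_j is linear in alpha
print(np.round(W, 4))
print(np.round(np.abs(second_diff).max(axis=0), 4))
```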
4. Numerical Examples
We consider the same numerical example that Wang et al. [1] presented in their illustration of the application of the least square deviation model for n = 5. Wang et al. [18] determined the OWA operator weights satisfying discrete degrees of orness. In this example, however, we determine the solution OWA operator weights as a continuous function of α for all orness levels α ∈ [0, 1] using our results.
Example 1 ([3]). Suppose that and Then, from Theorem 1 and Equation (39) of Corollary 2, In case of , we substitute n with 5 and k with in the equations of Theorem 1. Then Thus the optimal solution of the problem is In case of , we substitute n with 5 and k with in Equation (38) of Corollary 2. Then Thus the optimal solution of the problem is Similarly, we can obtain optimal solutions as a linear function of α on each of the intervals and ; on , the optimal solution is and on , the optimal solution is In terms of Proposition 2, if the orness level , the optimal solution is the dual of the optimal solution with and
Table 1 shows the OWA operator weights determined by model (37) with n = 5 and α ∈ [0, 1], as a continuous piecewise linear function of α.
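As a hedged numerical cross-check of Example 1 and Table 1 (our own code; it solves the LSD model directly instead of using the closed-form expressions), one can compute the weights for n = 5 at a sample orness level and at its dual level and observe the reversal predicted by Proposition 2.

```python
# Cross-check (ours): n = 5 LSD weights at alpha = 0.7 and at the dual level 0.3.
import numpy as np
from scipy.optimize import minimize

def lsd_weights(n, alpha):
    j = np.arange(1, n + 1)
    cons = [{"type": "eq", "fun": lambda w: np.dot(n - j, w) / (n - 1) - alpha},
            {"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    res = minimize(lambda w: np.sum(np.diff(w) ** 2), np.full(n, 1.0 / n),
                   method="SLSQP", bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x

w_high = lsd_weights(5, 0.7)
w_low = lsd_weights(5, 0.3)
print(np.round(w_high, 4))
print(np.round(w_low[::-1], 4))   # should match w_high up to solver tolerance
```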
We next consider the same numerical example that Sang and Liu [17] presented in their illustration of the application of the least square deviation model. Sang and Liu [17] determined the OWA operator weights satisfying discrete degrees of orness. In this example, however, we determine the solution OWA operator weights as a function of α for all orness levels α ∈ [0, 1].
Example 2 ([17]). Suppose that and Then, from Corollary 1 and Equation (39) of Corollary 2, we have In case of , we substitute k with in the equations of Corollary 1. Then Thus the optimal solution of the problem is In case of , we substitute k with in Equation (38) of Corollary 2. Then Thus the optimal solution of the problem is Similarly, we can obtain optimal solutions as a linear function of α on each of the intervals , , , , , and .
Example 3. In this example, we consider a different type of the model (37) when and We determine the solution OWA operator weights as a function of α on If then and then . By Equation (20) in Theorem 1 with and , we have Since we have for in Equation (18) of Theorem 1, and Since , Thus the optimal solution of the problem (40) in case of is By a similar method to the proof of Corollary 2, we have Since , and from Equation (16) in Theorem 1, that is so that the optimal solution is Similarly, for , we have and from Equation (16) in Theorem 1, so that the optimal solution is