Article

The Porcupine Measure for Comparing the Performance of Multi-Objective Optimization Algorithms

by
Christiaan Scheepers
1 and
Andries Engelbrecht
2,3,*
1
Independent Researcher, Pretoria 0001, South Africa
2
Department of Industrial Engineering, Computer Science Division, Stellenbosch University, Stellenbosch 7600, South Africa
3
Center for Applied Mathematics and Bioinformatics, Gulf University of Science and Technology, Hawally 32093, Kuwait
*
Author to whom correspondence should be addressed.
Algorithms 2023, 16(6), 283; https://doi.org/10.3390/a16060283
Submission received: 19 April 2023 / Revised: 26 May 2023 / Accepted: 28 May 2023 / Published: 31 May 2023

Abstract:
In spite of being introduced over twenty-five years ago, Fonseca and Fleming’s attainment surfaces have not been widely used. This article investigates some of the shortcomings that may have led to the lack of adoption of this performance measure. The quantitative measure based on attainment surfaces, introduced by Knowles and Corne, is analyzed. The analysis shows that the results obtained by the Knowles and Corne approach are influenced (biased) by the shape of the attainment surface. Improvements to the Knowles and Corne approach for bi-objective Pareto-optimal front (POF) comparisons are proposed. Furthermore, assuming M objective functions, an M-dimensional attainment-surface-based quantitative measure, named the porcupine measure, is proposed for comparing the performance of multi-objective optimization algorithms. A computationally optimized version of the porcupine measure is presented and empirically analyzed.

1. Introduction

First introduced by Fonseca and Fleming [1], attainment surfaces provide researchers in multi-objective optimization with a means to accurately visualize the region dominated by a Pareto-optimal front (POF). In many studies, approximated POFs are shown by joining the non-dominated solutions using a curve. Fonseca and Fleming reasoned that it is not correct to use a curve to join these non-dominated solutions. The use of a curve creates a false impression that intermediate solutions exist between any two non-dominated solutions. In reality, there is no guarantee that any intermediate solutions exist. Fonseca and Fleming suggested that, instead of a curve, the non-dominated solutions can be used to create an envelope that separates the dominated and non-dominated spaces. The envelope formed by the non-dominated solutions is referred to as an attainment surface.
Despite being proposed in 1995, attainment surfaces have not seen wide use in the comparison of multi-objective algorithms (MOAs). Instead, the well-known hypervolume [2,3], inverted generational distance [4,5] and its improvements [6], and spread [7] measures are frequently used to quantify and to compare the quality of approximated POFs. This study provides an analysis of the shortcomings of attainment surfaces as a multi-objective performance measure. Specifically, the attainment-surface-based measure proposed by Knowles and Corne [8] is analyzed. Improvements to Knowles and Corne’s approach for bi-objective optimization problems are developed and analyzed in this paper. Additionally, an M-dimensional (where M is the number of objectives) attainment-surface-based quantitative measure, named the porcupine measure, is proposed and analyzed.
The porcupine measure quantifies the proportion of the Pareto front over which one algorithm performs statistically significantly better than another algorithm. The objective of this paper is to introduce this new attainment-surface-based measure and to illustrate its applicability. For this purpose, the measure is applied to compare the performance of arbitrarily selected MOAs on a set of multi-objective optimization benchmark problems. Note that the focus is not on an extensive comparison of multi-objective algorithms but rather on validating the use of the porcupine measure as a statistically sound mechanism to compare MOAs.
The remainder of this paper is organized as follows. Section 2 introduces multi-objective optimization along with the definitions used throughout this paper. Section 3 presents the background and related work. Next, 2-dimensional attainment surfaces are introduced in Section 4, followed by a weighted approach to produce attainment surfaces in Section 5. The generalization to M dimensions is provided in Section 6. Finally, the conclusions are given in Section 7.

2. Definitions

Without loss of generality, assuming minimization, a multi-objective optimization problem (MOP) with M objectives is of the form
$$\text{minimize } \mathbf{f}(\mathbf{x}) = (f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_M(\mathbf{x}))$$
with $\mathbf{x} \in F$, $f_m : \mathbb{R}^n \to \mathbb{R}$ for all $m \in \{1, \ldots, M\}$, and where $F \subseteq \mathbb{R}^n$ is the feasible space as determined by constraints; n is the dimension of the search space, and M is the number of objective functions.
The following definitions are used throughout this paper.
Definition 1.
(Domination): A decision vector $\mathbf{x}_1 \in F$ dominates a decision vector $\mathbf{x}_2 \in F$ (denoted by $\mathbf{x}_1 \prec \mathbf{x}_2$) if and only if $f_m(\mathbf{x}_1) \le f_m(\mathbf{x}_2)$ $\forall m \in \{1, \ldots, M\}$ and $\exists m \in \{1, \ldots, M\}$ such that $f_m(\mathbf{x}_1) < f_m(\mathbf{x}_2)$.
Definition 2.
(Pareto optimal): A decision vector $\mathbf{x}_1 \in F$ is said to be Pareto optimal if no decision vector $\mathbf{x}_2 \in F$ exists such that $\mathbf{x}_2 \prec \mathbf{x}_1$.
Definition 3.
(Pareto-optimal set): A set $P = \{\mathbf{x}_1 \in F \mid \nexists\, \mathbf{x}_2 \in F : \mathbf{x}_2 \prec \mathbf{x}_1\}$, where $P \subseteq \mathbb{R}^n$, is referred to as the Pareto-optimal solutions (POS).
Definition 4.
(Approximated Pareto-optimal front): A set $Q = \{\mathbf{f} = (f_1(\mathbf{x}^*), f_2(\mathbf{x}^*), \ldots, f_M(\mathbf{x}^*)) : \mathbf{x}^* \in P\}$, where $Q \subseteq \mathbb{R}^M$, is referred to as an approximation of the true POF.
Definition 5.
(Nadir objective vector): A vector that represents the upper bound of each objective in the entire POF is referred to as a nadir point.
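Definitions 1, 3, and 5 can be sketched in Python as follows. This is a minimal illustration, assuming objective vectors are given as tuples and minimization throughout; the function names are illustrative, not part of the paper.

```python
def dominates(f1, f2):
    """Definition 1 (minimization): f1 dominates f2 iff it is no worse
    in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def non_dominated(front):
    """Keep only the objective vectors that no other vector dominates
    (Definition 3 applied in objective space)."""
    return [p for p in front if not any(dominates(q, p) for q in front)]

def nadir(front):
    """Definition 5: the upper bound of each objective over the front."""
    return tuple(max(p[m] for p in front) for m in range(len(front[0])))

front = non_dominated([(1, 4), (2, 2), (3, 3), (4, 1)])
assert front == [(1, 4), (2, 2), (4, 1)]   # (3, 3) is dominated by (2, 2)
assert nadir(front) == (4, 4)
```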

3. Background and Related Work

Fonseca and Fleming [1] suggested that the non-dominated solutions that make up the approximated POF be used to construct an attainment surface. The attainment surface's envelope is defined as the boundary in the objective space that separates the points that are dominated by, or equal to, at least one of the non-dominated solutions that make up the approximated POF from the points that are dominated by, or equal to, none of them. Figure 1 depicts an attainment surface and the corresponding approximated POF.
The attainment surface envelope is identical to the envelope used during the calculation of the hypervolume metric [2,3]. In contrast to the hypervolume calculation, in the case of an attainment surface, the envelope is not used directly in the calculation of a performance metric. Instead, the attainment surface can be used to visually compare algorithms’ performance by plotting the attainment surfaces for both algorithms.
For stochastic algorithms, variations in the performance over multiple runs (also referred to as samples) are expected. Fonseca and Fleming [1] described a procedure to generate an attainment surface that represents a given algorithm's performance over multiple independent runs. The attainment surface for multiple independent runs is computed by first determining the attainment surface for each run's approximated POF. Next, a number of random imaginary lines is chosen, pointing in the direction of improvement for all the objectives. For each line, the points of intersection between that line and each of the attainment surfaces are calculated. Figure 2a,b depict three attainment surfaces with intersection lines and intersection points.
For each line, the intersection points can be seen as a sample distribution that is uni-dimensional and can thus be strictly ordered. By calculating the median for each of the sample distributions, the objective vectors that are likely to be attained in exactly 50% of the runs can be identified. The envelope formed by the median points is known as the 50% grand attainment surface. Similar to how the median is used to construct the 50% grand attainment surface, the lower and upper quartiles (25th and 75th percentiles) are used to construct the 25% and 75% grand attainment surfaces.
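The reduction of the per-line sample distributions to grand attainment points can be sketched as follows. This is a minimal sketch, assuming each line's intersection points have already been reduced to scalar distances along that line; the function name is illustrative.

```python
from statistics import quantiles

def grand_attainment_points(samples_per_line):
    """Reduce each intersection line's per-run intersection distances
    to the 25%, 50%, and 75% grand attainment points."""
    surfaces = {25: [], 50: [], 75: []}
    for samples in samples_per_line:
        q1, median, q3 = quantiles(samples, n=4, method='inclusive')
        surfaces[25].append(q1)
        surfaces[50].append(median)
        surfaces[75].append(q3)
    return surfaces

# five runs intersected by two lines
s = grand_attainment_points([[1, 2, 3, 4, 5], [10, 20, 30, 40, 50]])
assert s[50] == [3, 30]   # the medians form the 50% grand attainment surface
assert s[25] == [2, 20]
assert s[75] == [4, 40]
```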
The sample distribution approach can also be used to compare performance between algorithms. In order to compare two algorithms, two sample distributions—one for each of the algorithms—are calculated per intersection line. Standard non-parametric statistical test procedures can then be used to determine if there is a statistically significant difference between the two sample distributions. Using the statistical test results, a combined grand attainment surface, as depicted in Figure 2c, can be constructed, showing the regions where each of the algorithms outperforms the other. Fonseca and Fleming [1] suggested that suitable test procedures include the median test, its extensions to other quantiles, and tests of the Kolmogorov–Smirnov type [9].
Knowles and Corne [8] extended the work carried out by Fonseca and Fleming and used attainment surfaces to quantify the performance of their Pareto archived evolution strategy (PAES) algorithm. Knowles and Corne identified four variables in the approach proposed by Fonseca and Fleming, namely:
  • How many comparison lines should be used;
  • Where the comparison lines should go;
  • Which statistical test should be used to compare the univariate distribution;
  • In what form the results should be presented.
From their empirical analysis, Knowles and Corne found that at least 1000 lines should be used. In order to generate the intersection lines, the minimum and maximum values for each objective over the non-dominated solutions were found. The objective values were then normalized according to the minimum and maximum values into the range [0, 1]. Intersection lines were then generated as equally spread lines from the origin, rotated from (0, 1) to (1, 0), effectively rotating 90° and covering the complete approximated POF.
For M-dimensional problems, where the number of objectives is M > 2, Knowles and Corne suggested using a grid-based approach where points are spread equally on the M (M − 1)-dimensional hyperplanes. Each hyperplane corresponds to one objective value fixed at 1.0. The intersection lines are drawn from the origin to these equally distributed points. In the case of 3-dimensional problems, a 6 × 6 grid would result in 108 (3 × 6 × 6) points and, thus, 108 intersection lines. Similarly, using a 16 × 16 grid on a 3-dimensional problem would result in 768 intersection lines, and so forth.
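The grid-based generation above can be sketched as follows. This is a minimal sketch in which grid points are taken at cell centres, which is one reasonable reading of "spread equally"; the function name is illustrative.

```python
from itertools import product

def grid_intersection_lines(M, g):
    """Grid-based intersection lines for M objectives: points are
    spread on the M hyperplanes where one objective is fixed at 1.0,
    and each line runs from the origin through one grid point."""
    lines = []
    for fixed in range(M):
        # a g x ... x g grid ((M-1) dimensions) of cell-centre coordinates
        for cell in product(range(g), repeat=M - 1):
            free = iter(cell)
            point = tuple(1.0 if m == fixed else (next(free) + 0.5) / g
                          for m in range(M))
            lines.append(point)
    return lines

assert len(grid_intersection_lines(3, 6)) == 108    # 3 * 6 * 6
assert len(grid_intersection_lines(3, 16)) == 768   # 3 * 16 * 16
```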
For statistical significance testing, Knowles and Corne used the Mann–Whitney U test [9] with a significance level of α = 0.05 .
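A self-contained sketch of the Mann–Whitney U test, as it would be applied per intersection line, is given below. It uses the normal approximation with midranks for ties and does not tie-correct the variance, so it is only an illustration of the kind of test used, not a production implementation.

```python
from math import erf, sqrt

def mann_whitney_u(x, y, alpha=0.05):
    """Two-sided Mann-Whitney U test (normal approximation).
    Returns (p_value, significant_at_alpha)."""
    n1, n2 = len(x), len(y)
    combined = sorted((v, idx) for idx, v in enumerate(list(x) + list(y)))
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        midrank = (i + j) / 2 + 1          # average rank of the tied group
        for k in range(i, j + 1):
            ranks[combined[k][1]] = midrank
        i = j + 1
    r1 = sum(ranks[:n1])                   # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p, p < alpha

# well-separated samples differ significantly; identical samples do not
_, sig = mann_whitney_u(list(range(1, 11)), list(range(11, 21)))
assert sig
_, sig = mann_whitney_u(list(range(1, 11)), list(range(1, 11)))
assert not sig
```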
Finally, Knowles and Corne found that a convenient way to report the comparison results was to use simple value pairs $[a, b]$, hereafter referred to as the Knowles–Corne measure (KC), where a gives the percentage of space for which algorithm A was found to be statistically superior to algorithm B, and b gives the percentage where algorithm B was found to be statistically superior to algorithm A. It can be noted that $100 - (a + b)$ gives the percentage where neither algorithm was found to be statistically superior to the other.
Knowles and Corne [8] generalized the definition of the comparison to compare more than two algorithms. For K algorithms, the above comparison is carried out for all $\binom{K}{2}$ algorithm pairs. For each algorithm k, two percentages are reported: $a_k$, which is the region where algorithm k was not worse than any other algorithm, and $b_k$, which is the region where algorithm k performed better than all the other $(K - 1)$ algorithms. Note that $a_k \ge b_k$ because the region described by $b_k$ is contained in the region described by $a_k$.
Knowles and Corne [10] found that visualization of attainment surfaces in three dimensions is difficult due to the intersection lines not being evenly spread. As an alternative, Knowles presented an algorithm, inspired by the work conducted by Smith et al. [11], to draw summary attainment surfaces using axis-aligned lines. The algorithm was found to be particularly well suited for drawing 3-dimensional attainment surfaces.
Fonseca et al. [12] continued work on attainment surfaces by introducing the empirical attainment function (EAF). The EAF is a mean-like, first-order moment measure of the solutions found by a multi-objective optimizer. The EAF allows for intuitive visual comparisons between bi-objective optimization algorithms by plotting the solution probabilities as a heat map [13]. Fonseca et al. [14] studied the use of the second-order EAF, which allows for the pairwise relationship between random Pareto-set approximations to be studied.
It should be noted that calculation of the EAF for three or more dimensions is not trivial [15]. Efficient algorithms to calculate the EAF for two and three dimensions have been proposed in [15]. Tušar and Filipič [16] developed approaches to visualize the EAFs in two and three dimensions.

4. Regarding 2-Dimensional Attainment Surfaces

The attainment surface calculation approach developed by Fonseca and Fleming [1] did not describe in detail how the intersection lines should be generated. Instead, it was only stated that a number of randomly placed intersection lines, each pointing in the direction of improvement for all the objectives, should be used. This approach worked well for constructing a visualization of the attainment surface.
When Knowles and Corne [8] extended the intersection line approach to develop a quantitative comparison measure, they needed the lines to be equally distributed. If the lines were not equally distributed, as depicted in Figure 2b, certain regions of the attainment surface would contribute more than others, leading to misleading results.
Figure 3 depicts two example attainment surfaces with rotation-based intersection lines. Figure 3a depicts a concave attainment surface, for which the rotation-based intersection lines appear visually to be equally distributed. Figure 3b, however, depicts a convex attainment surface. Visually, the length of the attainment surface between the intersection lines is larger in the regions closer to the objective axes than in the middle regions. Clearly, the rotation-based intersection lines are not equally spaced for convex-shaped fronts when comparing the length of the attainment surface represented by each intersection line.
In order to address the unequal spacing of the rotation-based intersection lines, a new approach to placing the intersection lines is proposed in this paper. To compensate for the shape of the front, the intersection lines can be positioned, either inwardly or outwardly, along a line running between the extreme values of the attainment surface, based on the shape of the attainment surfaces being compared. Figure 4 depicts the inward and outward intersection line approaches for a convex-shaped front. The regions are clearly more equally spread for the inward intersection line approach.
However, the direction of the intersection lines is less desirable for comparison purposes. At the edges, the intersection lines are parallel to the opposite objective's axis, whereas, intuitively, they should be parallel to the closest objective's axis. Another disadvantage of the inward and outward approaches is that the choice between them depends on the shape of the front, which is typically unknown. For attainment surfaces that are not fully convex or concave, neither approach is suitable.
An alternative approach, referred to as attainment-surface-shaped intersection lines (ASSIL) in this paper, is to generate the intersection lines along the shape of the attainment surface. In order to equally spread the intersection lines, the Manhattan distance is used to calculate equal spacings for the intersection lines along the attainment surface. Figure 5 depicts the Manhattan distance calculation between two points on the approximated POF, $q_3$ and $q_1$ in this case. ASSIL can be generated using Algorithm 1.
Algorithm 1 Attainment-surface-shaped intersection line (ASSIL) generation.

Input: The optimal POF, $Q = \{q_i : i \in \{1, \ldots, I\}\}$ with I solutions and $q_i = (q_{i1}, q_{i2})$
Output: An attainment surface
Sort Q in ascending order by $q_{i1}$
$i \leftarrow 1$
Let N be the desired number of intersection lines
for $i_n \in \{1, \ldots, N\}$ do
    $d \leftarrow (i_n - 1) \times \frac{(q_{I1} - q_{11}) + (q_{12} - q_{I2})}{N - 1}$
    while $(q_{(i+1)1} - q_{11}) + (q_{12} - q_{(i+1)2}) \le d$ do
        $i \leftarrow i + 1$
    end while
    if $d = (q_{i1} - q_{11}) + (q_{12} - q_{i2})$ then
        $\hat{q} \leftarrow q_i$
    else if $d \le (q_{(i+1)1} - q_{11}) + (q_{12} - q_{i2})$ then
        $\hat{q} \leftarrow (q_{11} + (d - (q_{12} - q_{i2})), q_{i2})$
    else
        $\hat{q} \leftarrow (q_{(i+1)1}, q_{12} - (d - (q_{(i+1)1} - q_{11})))$
    end if
    $\theta \leftarrow \frac{i_n - 1}{N - 1} \times \frac{\pi}{2}$
    $From \leftarrow (\hat{q}_1 - \sin\theta, \hat{q}_2 - \cos\theta)$
    $To \leftarrow (\hat{q}_1 + \sin\theta, \hat{q}_2 + \cos\theta)$
    // Finally, draw the generated intersection line
    drawIntersectionLine($From$, $To$)
end for
Intersection lines are spaced equally along the attainment surface. The intersection lines are rotated incrementally such that the intersection lines at the ends of the attainment surface are parallel to the objective axis.
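The placement step of Algorithm 1, walking a Manhattan distance d along the staircase-shaped attainment surface, can be sketched in Python as follows. This is a minimal sketch, assuming the front is already sorted ascending by the first objective; the rotation of the line directions is omitted.

```python
def point_on_surface(front, d):
    """Return the point at Manhattan distance d along the attainment
    surface of a bi-objective front (minimization), walking each
    horizontal segment and then the vertical drop to the next point."""
    for (x1, y1), (x2, y2) in zip(front, front[1:]):
        horiz = x2 - x1
        if d <= horiz:                      # on the horizontal segment
            return (x1 + d, y1)
        d -= horiz
        vert = y1 - y2
        if d <= vert:                       # on the vertical segment
            return (x2, y1 - d)
        d -= vert
    return front[-1]

front = [(0, 2), (1, 1), (2, 0)]            # total Manhattan length 4
assert point_on_surface(front, 1) == (1, 2)
assert point_on_surface(front, 2) == (1, 1)
assert point_on_surface(front, 3) == (2, 1)
```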
Figure 6 depicts the attainment-surface-shaped intersection line approach. The generation of the intersection lines along the shape of the attainment surface allows for an equal spacing of the intersection lines independent of the shape of the front. For all shapes that the attainment surface can assume, whether convex, concave, or mixed, the intersection lines are equally spread out.
The KC measure is calculated as shown in Algorithm 2.
Algorithm 2 Algorithm for the calculation of the KC measure.

Input: Intersection lines for algorithms A and B to be compared
Output: The KC measure
Let $total = 0$
Let $wins_A = 0$
Let $wins_B = 0$
for each intersection line l do
    Let O be the strict ordering of the intersection points for algorithms A and B on intersection line l
    Let $O_A \subseteq O$ be the ordering of the intersection points for algorithm A on intersection line l
    Let $O_B \subseteq O$ be the ordering of the intersection points for algorithm B on intersection line l
    if $O_A$ is statistically significantly less than $O_B$ then
        $wins_A = wins_A + 1$
    else if $O_B$ is statistically significantly less than $O_A$ then
        $wins_B = wins_B + 1$
    end if
    $total = total + 1$
end for
Return $[100 \frac{wins_A}{total}, 100 \frac{wins_B}{total}]$
An evaluation of the rotation-based and random intersection line approaches is presented using six artificially generated POF test cases based on those used by Knowles and Corne [8]. Figure 7 depicts the six artificially generated POF test cases. Each of these test cases was tested using five POF shape geometries, namely concave, convex, linear, mixed, and disconnected geometries. Figure 8 depicts the five POF shape geometries.
Table 1 summarizes the true KC, the KC with rotation-based and random intersection lines, and the KC with ASSIL results. Values in red indicate results more than 5% above the control method (i.e., the true KC measure), while values in blue indicate results more than 5% below the control method. For each of the approaches, 1000 intersection lines were used for the calculation.
As expected, the ASSIL generation approach produced results much closer to the true KC measure: the closer the POFs being compared are to the true POF, the more accurate the comparison using the ASSIL generation approach becomes.
Table 2 and Table 3 present a comparison of the varying results obtained from using the various intersection line generation approaches. Results comparing the vector evaluated particle swarm optimization (VEPSO) [17], optimized multi-objective particle swarm optimization (OMOPSO) [18], and speed-constrained multi-objective particle swarm optimization (SMPSO) [19] algorithms using the Zitzler-Deb-Thiele (ZDT) [20] and Walking Fish Group (WFG) [21] test sets are presented. The choice of algorithms was arbitrary and only for illustrative purposes. Results were obtained over 30 independent runs. For more details on the algorithms and parameters used, the interested reader is referred to [22]. The characteristics of the problems are summarized in Table 4.
Variations in the results between the different intersection line generation approaches can be seen. ZDT1, ZDT3, ZDT4, and ZDT6 all show variations in the results greater than 5% for each of the comparisons. WFG1, WFG2, WFG5, WFG6, and WFG9 all show variations in the results greater than 5% for at least one of the comparisons. The variations are indicative of the bias towards certain attainment surface shapes shown by the various intersection line generation approaches.

5. Weighted 2-Dimensional Attainment-Surface-Shaped Intersection Lines

As an alternative to the equally spread intersection lines used by ASSIL, intersection lines can be generated along the shape of the POF, with at least one intersection line per attainment surface line segment. Because the attainment surface segments are not all of equal length, a weight is associated with each intersection line to balance the KC measure result. The weighted attainment-surface-shaped intersection lines (WASSIL) generation algorithm is given in Algorithm 3.
Figure 9 depicts a convex POF with WASSIL-generated intersection lines. The figure clearly shows that the intersection lines are positioned along the attainment surface and, due to the positioning, are angled slightly differently from the intersection lines in Figure 3b. The WASSIL algorithm should, for the test cases, produce a weighted KC measure that matches the true KC measure.
Note that the weighted KC measure is calculated as shown in Algorithm 4.
Table 5 summarizes the true KC measure, the KC measure with rotation-based and random intersection lines, the KC measure with ASSIL, and the KC measure with WASSIL results. For each of the approaches, 1000 intersection lines were used for the calculation.
Algorithm 3 Weighted attainment-surface-shaped intersection line (WASSIL) generation.

Input: The optimal POF, $Q = \{q_i : i \in \{1, \ldots, I\}\}$ with I solutions and $q_i = (q_{i1}, q_{i2})$
Output: An attainment surface
for each attainment line segment $s_l$ do
    Let $w_l = length(s_l)$
    Let c be the center of $s_l$
    Let $\theta = \frac{\pi (c_1 + (\max_i(q_{i2}) - c_2))}{2d}$, for $i = 1, \ldots, I$
    Let $p = (\sin\theta, \cos\theta)$
    // Draw the generated intersection line
    drawIntersectionLineWithWeight(from = $c - p$, to = $c + p$, weight = $w_l$)
end for
For POF test cases 1 through 3, only 2 of the 15 measurements using the random intersection line generation approach had a deviation from the true KC of less than 5%. Overall, 50% of the measurements using the random intersection line generation approach had a deviation greater than 5%. This confirms that the random intersection line generation approach is not well suited for the KC calculation.
The rotation-based intersection line generation approach presented by Knowles and Corne fared better than the random intersection line generation approach. Only 7 of the 30 measurements using the rotation-based intersection line generation approach had a deviation greater than 5%. Case 1 with a convex POF fared the worst, with a deviation of almost 15% for both algorithms. Four of the five case 2 measurements using the rotation-based intersection line generation approach had a deviation greater than 5%. For the remaining case 2 measurement, a deviation of at least 3% was noted. The results indicate that the rotation-based intersection line generation approach outperformed the random intersection line generation approach with respect to accuracy. However, the results also indicate that the rotation-based intersection line generation approach is not well suited for the KC calculation and that the results vary based on the POF shape and the spread of the solutions.
As expected, the WASSIL generation approach produced results much closer to the true KC: the closer the approximated POFs being compared are to the true POF, the more accurate the comparison using the WASSIL generation approach becomes.
Algorithm 4 Weighted KC measure algorithm.

Input: Intersection lines for algorithms A and B to be compared
Output: The weighted KC measure
Let $w_{total} = 0$
Let $wins_A = 0$
Let $wins_B = 0$
for each intersection line l do
    Let $w_l$ be the weight associated with l
    Let O be the strict ordering of the intersection points for algorithms A and B on intersection line l
    Let $O_A \subseteq O$ be the ordering of the intersection points for algorithm A on intersection line l
    Let $O_B \subseteq O$ be the ordering of the intersection points for algorithm B on intersection line l
    if $O_A$ is statistically significantly less than $O_B$ then
        $wins_A = wins_A + w_l$
    else if $O_B$ is statistically significantly less than $O_A$ then
        $wins_B = wins_B + w_l$
    end if
    $w_{total} = w_{total} + w_l$
end for
Return $[100 \frac{wins_A}{w_{total}}, 100 \frac{wins_B}{w_{total}}]$
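The tallying in Algorithm 4 can be sketched as follows. This is a minimal sketch in which `significantly_less` is a placeholder for the statistical test (e.g., a one-sided Mann–Whitney U comparison at α = 0.05); the median-based stub used below is for illustration only.

```python
def weighted_kc(lines, significantly_less):
    """Weighted KC tally: `lines` is a list of
    (weight, samples_A, samples_B) triples, one per intersection line."""
    total = wins_a = wins_b = 0.0
    for weight, a, b in lines:
        if significantly_less(a, b):
            wins_a += weight          # A statistically better on this line
        elif significantly_less(b, a):
            wins_b += weight          # B statistically better on this line
        total += weight
    return 100 * wins_a / total, 100 * wins_b / total

# illustrative stub predicate: compare medians only
less = lambda a, b: sorted(a)[len(a) // 2] < sorted(b)[len(b) // 2]
kc = weighted_kc([(2.0, [1, 1, 1], [5, 5, 5]),
                  (1.0, [5, 5, 5], [1, 1, 1]),
                  (1.0, [3, 3, 3], [3, 3, 3])], less)
assert kc == (50.0, 25.0)
```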

6. M-Dimensional Attainment Surfaces

For M-dimensional problems, Knowles and Corne [8] recommended that a grid-based intersection line generation approach, as explained in Section 3, be used. Similar to the rotational approach for 2-dimensional problems, the grid-based approach would lead to unbalanced intersection lines when measuring irregularly shaped POFs. Figure 10 shows an example of an irregularly shaped 3-dimensional attainment surface.
Section 6.1 discusses the challenges that need to be addressed in order to generate intersection lines for M-dimensional attainment surfaces. Section 6.2 and Section 6.3 introduce two algorithms to generate M-dimensional attainment surface intersection lines. The first uses a naive (and computationally expensive) approach that produces all possible intersection lines. The second is computationally more efficient, considering a minimal set of intersection lines. Section 6.4 presents a stability analysis of the two proposed algorithms to show that the computationally efficient approach performs similarly to the naive approach with respect to comparison accuracy.

6.1. Generalizing Attainment Surface Intersection Line Generation to M Dimensions

For 2-dimensional problems, the ASSIL approach generates equally spread intersection lines. Intuitively, generalization of the ASSIL approach to M dimensions requires the calculation of equally spread points over the M-dimensional attainment surface.
The calculation of equally spread points requires that the surface is divided into equally sized (M − 1)-dimensional hypercubes. For the 3-dimensional case, this would require dividing the attainment surface into equally sized squares. The intersection vectors would be positioned from the middle of each square. The length of the edges would need to be set to the greatest common divisor of the lengths of the edges that make up the attainment surface. Even for simple cases, this would lead to an excessive number of squares. The more squares, the higher the computational cost of the measure.
In order to lower the computational cost of the measure, the number of squares needs to be reduced. Because the square edge lengths are based on the greatest common divisor, there is no way to reduce the number of squares as long as the squares must be equal in size. If the square sizes differ, the measure will be biased towards areas with smaller squares, because those areas will contain more squares and thus carry more weight in the calculation.
In contrast to the ASSIL approach, the WASSIL approach does not require the intersection lines to be equally spread; instead, only a weight factor must be known for each intersection line. For the 3-dimensional case, the weight factor for each intersection line is calculated as the area of the square that makes up the attainment surface, i.e., the product of its edge lengths. The weight factor also allows the use of rectangles in the 3-dimensional case (hyper-rectangles in the M-dimensional case) instead of equally sized squares.
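The weighting of unequal hyper-rectangles can be sketched as follows. This is a minimal sketch for a projected surface subdivided at given coordinate values; the function name is illustrative.

```python
from itertools import product
from math import prod

def rectangle_weights(coords_per_dim):
    """Weight each hyper-rectangle of a subdivided projection by the
    product of its edge lengths (its area in the 3-dimensional case)."""
    edges_per_dim = []
    for coords in coords_per_dim:
        cs = sorted(coords)
        edges_per_dim.append([b - a for a, b in zip(cs, cs[1:])])
    # one weight per hyper-rectangle, in row-major order
    return [prod(edges) for edges in product(*edges_per_dim)]

# a projected surface split at x in {0, 1, 3} and y in {0, 2}
weights = rectangle_weights([[0, 1, 3], [0, 2]])
assert weights == [2, 4]        # areas 1*2 and 2*2
assert sum(weights) == 3 * 2    # the rectangles tile the bounding box
```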

6.2. Porcupine Measure (Naive Implementation)

This section presents the naive implementation of the M-dimensional attainment-surface-based quantitative measure named the porcupine measure. The naive implementation uses each of the M coordinate values of each Pareto-optimal point to subdivide the attainment surface in each of the dimensions. Figure 11 depicts an example of the subdivision approach for each of the three dimensions, considering all intersection lines. Figure 12 depicts the attainment surface, in 3-dimensional space, with the subdivisions visible.
In addition to the calculation of the hyper-rectangles, the center point and intersection vector need to be calculated. The naive implementation of the porcupine measure is summarized in Algorithm 5. For a more detailed algorithm listing, please refer to Appendix A.
Using the intersection lines generated by the above algorithm, two algorithms can now be compared using a nonparametric statistical test, such as the Mann–Whitney U test [9]. The porcupine measure is defined, similar to the weighted KC measure, as the weighted sum of the intersection lines where a statistically significant difference exists over the sum of all the weights (i.e., the percentage of the surface area of the attainment surface, as determined by the weights, where one algorithm performs statistically significantly better than another).
Algorithm 5 Porcupine measure (naive implementation).

Input: The found POF
Output: The porcupine attainment surface
for each of the objective space basis vectors, $m_c$, do
    Project the attainment surface parallel to the basis vector, $m_c$, onto the orthogonal (M − 1)-dimensional subspace
    Subdivide the projected attainment surface, in each dimension at every Pareto-optimal point's dimensional value, into hyper-rectangles
    for each hyper-rectangle do
        Let c be the center point of the hyper-rectangle
        Let w be the weight of the hyper-rectangle, equal to the product of the hyper-rectangle's edge lengths
        for each dimension in objective space $m \in \{1, \ldots, M\}$ do
            Let $\min_m$ be the smallest of the mth-dimensional values of all the Pareto-optimal points where at least one other dimensional value is less than or equal to that of the corresponding center vector's dimensional value
            Let $\max_m$ be the largest of the mth-dimensional values of all the Pareto-optimal points
        end for
        Let p be the intersection vector, calculated as $p_m = \frac{c_m - \min_m}{\max_m - \min_m}$, where $p_m$ is the mth component of the vector p, corresponding to the mth objective function
        drawIntersectionLineWithWeight(from = $c - p$, to = $c + p$, weight = w)
    end for
end for
Figure 13 depicts an attainment surface with subdivisions and intersection vectors generated using the naive approach. The porcupine measure's name is derived from the fact that the intersection vectors resemble the spikes of a porcupine.
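The intersection vector computation from Algorithm 5 can be sketched as follows. This is a minimal sketch of $p_m = (c_m - \min_m)/(\max_m - \min_m)$ only; the surrounding projection and subdivision steps are omitted, and the function name is illustrative.

```python
def intersection_vector(c, mins, maxs):
    """Normalize the hyper-rectangle centre per objective to obtain the
    direction vector of a porcupine intersection line."""
    return tuple((cm - lo) / (hi - lo)
                 for cm, lo, hi in zip(c, mins, maxs))

p = intersection_vector(c=(5, 2, 3), mins=(0, 0, 0), maxs=(10, 4, 6))
assert p == (0.5, 0.5, 0.5)
```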

6.3. Porcupine Measure (Optimized Implementation)

The large number of subdivisions produced by the naive implementation makes the statistical calculations required by the porcupine measure computationally expensive. To reduce this cost, the naive implementation can be optimized by subdividing the attainment surface only where necessary to accommodate its shape. Figure 14 depicts an attainment surface with the subdivision lines (dashed) as generated by the optimized implementation.
Note that the algorithm yields the minimum number of subdivisions such that the results are independent of the dimension ordering of the Pareto-optimal points. This is by design to allow for the reproducibility and increased stability of the results.
The optimized implementation of the porcupine measure is summarized in Algorithm 6. For a more detailed algorithm listing, please refer to Appendix B.
As with the naive implementation, the porcupine measure is defined as the weighted sum of the intersection lines for which a statistically significant difference exists, divided by the sum of all the weights (the percentage of the surface area of the attainment surface, as determined by the weights, on which one algorithm performs statistically better than the other).
Figure 15 depicts an attainment surface with subdivisions and intersection vectors generated using the optimized implementation. As can be seen in the figure, the optimized implementation resulted in notably fewer subdivisions and intersection vectors. The lower number of intersection vectors considerably reduces the computational complexity of the measure.

6.4. Stability Analysis

In order to show that the optimized implementation provides results similar to the naive implementation, 30 independent runs of each measure were executed. Each measurement run calculated the porcupine measure using the approximated POFs as calculated by 30 independent runs of each of the algorithms being compared. A total of 30 × 30 = 900 runs were executed for each algorithm being compared.
Table 6 lists the results for the algorithm pairs that were compared. The algorithm pairs are listed without a separator line between them. The results should thus be interpreted by looking at both lines of the comparison. For each algorithm, the mean, standard deviation ( σ ), minimum, and maximum for the naive and optimized implementations of the porcupine measure with a maximum side length of 0.1 are listed.
The experimental results in [22] show that a maximum side length of 0.1 for the optimized implementation yielded a good trade-off between accuracy of the results and performance when compared with the naive implementation.
Statistical testing was performed to determine whether there were any statistically significant differences between the results of the naive and optimized implementations. The Mann–Whitney U test was used at a significance level of α = 0.05. The purpose of this testing was to determine whether any information was lost by using the optimized implementation instead of the naive implementation. For 52 out of the 54 measurements, or 96%, there were no statistically significant differences. Only in two cases, namely the OMOPSO in the WFG3 OMOPSO vs. VEPSO comparison and the SMPSO in the WFG3 VEPSO vs. SMPSO comparison, was a statistically significant difference noted; even in these cases, the ranking of the algorithms did not change. It can therefore be concluded that the optimized implementation yielded the same results as the naive implementation, with no loss of information, and that the computationally cheaper optimized implementation can be used when comparing multi-objective algorithms.
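The comparison test itself is standard. A minimal sketch of the Mann–Whitney U statistic is given below, illustrative only; in practice the p-value and the α = 0.05 decision would be obtained from a statistics library rather than computed by hand:

```python
def mann_whitney_u(xs, ys):
    """U statistic for samples xs and ys: the number of pairs (x, y)
    with x < y, with ties counted as one half.  Only the statistic is
    shown here; the significance decision requires the corresponding
    p-value from the U distribution."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x < y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u
```

For samples [1, 2, 3] and [4, 5], every one of the six pairs has x < y, so U = 6.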
Algorithm 6 Porcupine measure (optimized implementation).
1: Input: The found POF
2: Output: The porcupine attainment surface
3: Let q_max be the nadir vector, defined as (max{q_s1}, max{q_s2}, …, max{q_sM}) over all the known Pareto-optimal points, q_s, of dimension M
4: for each of the objective space basis vectors, m_c, do
5:    Project the attainment surface parallel to the basis vector, m_c, onto the orthogonal (M − 1)-dimensional subspace
6:    for each of the Pareto-optimal points, q_s, do
7:       Let q̂ be the vector with dimensional values set to the minimum dimensional values that are dominated, or the corresponding q_max value if no minimum exists
8:       Let Q̂ be the set of all points that dominate q̂ but not q_s, filtered for non-dominance
9:       for each dimension m do
10:         Let Q_m be the set of all the mth-dimensional values of the points in Q̂ that fall inside the range [q_sm, q̂_m]
11:      end for
12:      Let Q_min be the set of all the minimum points that will affect the calculation of p
13:      Let Q_max be the set of all the maximum points that will affect the calculation of p
14:      Add all dimensional values of the points in Q_min and Q_max that fall inside the range [q_sm, q̂_m] to the corresponding Q_m set
15:      Add additional values to the Q_m sets to limit the side lengths to Δ_max
16:      Subdivide the projected attainment surface, in each of the m dimensions at every value in the Q_m set, into hyper-rectangles
17:      for each hyper-rectangle do
18:         Let c be the center point of the hyper-rectangle
19:         Let w be the weight of the hyper-rectangle, equal to the product of the hyper-rectangle's edge lengths
20:         for each dimension m do
21:            Let min_m be the smallest of the mth-dimensional values of all the Pareto-optimal points where at least one other dimensional value is less than or equal to that of the corresponding center vector's dimensional value
22:            Let max_m be the largest of the mth-dimensional values of all the Pareto-optimal points
23:         end for
24:         Let p be the intersection vector, calculated as p_m = (c_m − min_m)/(max_m − min_m), where p_m is the mth component of the vector p
25:         drawIntersectionLineWithWeight(from = c − p, to = c + p, weight = w)
26:      end for
27:   end for
28: end for
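Step 15 of Algorithm 6 caps the hyper-rectangle side lengths at Δmax. A minimal sketch of one way to do this is shown below; the evenly spaced placement of the extra subdivision values is an assumption of this sketch, not a detail stated in the listing:

```python
import math

def limit_side_lengths(values, delta_max):
    """Given the sorted subdivision values of one dimension, insert
    evenly spaced extra values so that no interval between consecutive
    values is longer than delta_max (step 15 of Algorithm 6; the even
    spacing is our assumption)."""
    out = [values[0]]
    for lo, hi in zip(values, values[1:]):
        # Smallest number of equal parts whose length is <= delta_max.
        parts = math.ceil((hi - lo) / delta_max)
        step = (hi - lo) / parts
        out.extend(lo + i * step for i in range(1, parts + 1))
    return out
```

For example, an interval [0, 0.35] with Δmax = 0.1 is split into four sub-intervals of length 0.0875.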
The mean standard deviation was 2.743, and the maximum standard deviation was 7.022. From these values, it can be concluded that the optimized implementation of the porcupine measure is very robust, because the measured values for each of the samples were close to the average.
For the experiments carried out in this study, the runtime of the optimized implementation was notably faster than that of the naive implementation, by several orders of magnitude.
The computational complexity of the naive implementation is directly proportional to the size of the Q_m sets. For the tested algorithms, with an approximated POF size of 50 points sampled over 30 independent runs, the optimal POF typically contained 1250 points, so each Q_m set also contained approximately 1250 values. For three dimensions, this results in a minimum computational complexity of at least 1250³, or 1,953,125,000 (almost two billion), hyper-rectangles. The optimized implementation has a much lower computational complexity because only the necessary subdivisions are made, and the Q_m sets are much smaller. For the three-dimensional case, the maximum edge length of 0.1 leads to a minimum complexity of at least (1/0.1)³ = 1000, several orders of magnitude lower than that of the naive version.
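The quoted counts can be reproduced with straightforward arithmetic; the values below are taken from the text of this section, not recomputed from the experimental data:

```python
# Rough minimum subdivision counts for the three-dimensional case.
pof_size = 1250                        # typical optimal POF size reported
naive_rects = pof_size ** 3            # naive: ~|Q_m|^3 hyper-rectangles
optimized_rects = round(1 / 0.1) ** 3  # optimized: bounded by max side 0.1
```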

7. Conclusions

This article investigated shortcomings that may have led to the lack of adoption of attainment-surface-based quantitative performance measures for multi-objective optimization algorithms. It was shown that the quantitative measure proposed by Knowles and Corne is biased against convex Pareto-optimal fronts (POFs) when using rotational intersection lines. The attainment-surface-shaped intersection lines (ASSIL) generation approach was proposed and shown to be unbiased with respect to the shape of the attainment surface.
An algorithm for an M-dimensional attainment-surface-based quantitative measure, named the porcupine measure, was presented. Additionally, a computationally optimized implementation of the porcupine measure was introduced and analyzed. The results indicated that the optimized implementation performed as well as the naive implementation.
The porcupine measure allows for a quantitative comparison between M-dimensional approximated POFs through the use of attainment surfaces. It provides additional information on an algorithm's performance relative to another algorithm that was previously not quantifiable. A thorough comparison of state-of-the-art multi-objective optimization algorithms using the porcupine measure is left as future work.

Author Contributions

Conceptualization, C.S. and A.E.; methodology, C.S. and A.E.; formal analysis, C.S.; investigation, C.S.; resources, C.S.; data curation, C.S.; writing–original draft preparation, C.S.; writing–review and editing, A.E.; visualization, C.S.; supervision, A.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to further research being conducted on the data.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Naive Porcupine Measure Implementation

This appendix provides a more detailed, step-by-step pseudocode implementation of the naive porcupine measure.
Algorithm A1 Naive porcupine measure intersection line generation.
First, determine the optimal POF using the POFs for all the samples of all the algorithms being compared. Then, create sorted value sets containing the dimensional values of all the Pareto-optimal points.
1: Let Q = {q_i : i ∈ {1, …, I}} be the optimal POF, with q_i = (q_i1, q_i2, …, q_im, …, q_iM)
2: Let Q_m = sort{q_im : i ∈ {1, …, I}}, such that Q_m1 is the first element in the set Q_m and Q_mI is the last
Loop m_c over the dimensions 1 through M, and loop d over all the index value combinations for the sorted value sets.
3: n = 1
4: for all m_c ∈ {1, …, M} do
5:    for all combinations of d = (d_1, d_2, …, d_{m_c−1}, 0, d_{m_c+1}, …, d_m, …, d_M), where d_m ∈ {1, …, |Q_m| − 1} ∀ m ∈ {1, …, M}, m ≠ m_c, do
Next, determine the value of the m_c dimension (hereafter referred to as the “locked” dimension). If no such value exists, skip to the next combination of d, as no hyper-rectangle exists for the current combination of d.
6:       Let z = min{q_i,mc : q_im ≥ Q_m,dm ∀ m ∈ {1, …, M}, m ≠ m_c}
7:       if z exists then
Determine the center and the weight factor of the hyper-rectangle formed from Q_m,dm to Q_m,(dm+1) for all dimensions m, except for the dimension m_c, which is already calculated as z.
8:          Let c = (c_1, …, c_m, …, c_M), with c_m = (Q_m,dm + Q_m,(dm+1))/2 if m ≠ m_c, and c_m = z otherwise
9:          Let w = ∏_{m=1, m≠m_c}^{M} (Q_m,(dm+1) − Q_m,dm)
Determine the intersection vector, p, that is projected from the center point, c. The result is saved as a tuple, f_n, containing the center, intersection vector, and weight factor.
10:         Let p = (p_1, …, p_m, …, p_M)
11:         for all m_p ∈ {1, …, M} do
12:            p_min = min{q_i,mp : ∃ m ∈ {1, …, M}, m ≠ m_p : q_im ≤ c_m}
13:            p_max = max{q_i,mp : ∃ m ∈ {1, …, M}, m ≠ m_p : q_im ≤ c_m}
14:            p_mp = (c_mp − p_min)/(p_max − p_min)
15:         end for
16:         Let f_n = (c, p, w)
17:         n = n + 1
18:      end if
19:   end for
20: end for
21: N = n
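Steps 12–14 of Algorithm A1 can be sketched in code for a minimization problem. The function below is a hypothetical illustration of the intersection vector calculation, not the authors' implementation; the eligibility condition follows the set-builder expressions in steps 12 and 13:

```python
def intersection_vector(center, pof):
    """For each dimension m_p, p_min and p_max are the extreme
    m_p-values over all Pareto-optimal points having at least one
    *other* coordinate less than or equal to the center's coordinate;
    p_mp is the center's normalized position between them
    (steps 12-14 of Algorithm A1, as we read them)."""
    M = len(center)
    p = []
    for mp in range(M):
        eligible = [q[mp] for q in pof
                    if any(q[m] <= center[m] for m in range(M) if m != mp)]
        pmin, pmax = min(eligible), max(eligible)
        p.append((center[mp] - pmin) / (pmax - pmin))
    return p
```

For the linear front {(0, 3), (1, 2), (2, 1), (3, 0)} and center (1.5, 1.5), both components evaluate to −0.5.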

Appendix B. Optimized Porcupine Measure Implementation

This appendix provides a more detailed, step-by-step pseudocode implementation of the optimized, computationally more efficient, porcupine measure.
Algorithm A2 Optimized porcupine measure intersection line generation.
First, the optimal Pareto-optimal front and nadir vector, q_max, are calculated.
1: Let q_i ∈ Q, where Q is the optimal POF and q_i = (q_i1, q_i2, …, q_iM)
2: Let q_max = (max{q_i1}, …, max{q_iM})
3: n = 1
Each point, q_s, in the optimal POF is processed separately for each dimension m_c.
4: for all q_s ∈ Q do                       ▹ Hyper-rectangle subdivision based on non-dominance
5:    for all m_c ∈ {1, …, M} do
q̂ is set equal to the minimum value for each dimension that is dominated. If no such value exists for the dimension, the value is set to the nadir vector’s value. The hyper-rectangles formed by q_s are all contained between q_s and q̂.
6:       let q̂ = (q̂_1, …, q̂_M)
7:       for all m_b ∈ {1, …, M}, m_b ≠ m_c do
8:          let q̂_mb = min{{q_max,mb} ∪ {q_i,mb : q_i,mb > q_s,mb ∧ q_im ≤ q_sm ∀ m ∈ {1, …, M}, m ≠ m_b}}
9:       end for
The set Q̂_s is defined as the set of all points that dominate parts of the area dominated by q_s when all the m_c-dimensional values are ignored. The set is filtered by removing all the points dominated by other points in the set, again ignoring the dimension m_c. Q̂_s effectively forms a mini-POF for q_s. For example, for point q_1 with m_c = 3, the set Q̂_s will contain q_2, q_3, and q_4, and for m_c = 2, the set Q̂_s will contain q_2 and q_6.
10:      let Q̂*_s = {q_i : q_im ≤ q̂_m ∧ q_i,mc ≤ q_s,mc ∧ ∃ m* ∈ {1, …, M}, m* ≠ m_c : q_i,m* > q_s,m*, ∀ m ∈ {1, …, M}, m ≠ m_c}
11:      let Q̂_s = {q : q ∈ Q̂*_s, ∄ q̃ ∈ Q̂*_s : q̃_m ≤ q_m ∀ m ∈ {1, …, M}, m ≠ m_c ∧ q̃_m < q_m for some m ∈ {1, …, M}, m ≠ m_c}
The set Q_m is defined as the set of all the dimension values that are dominated by q_s, along with the maximum boundary value, q̂_m, and the minimum boundary value, q_sm.
12:      for all m ∈ {1, …, M}, m ≠ m_c do
13:         let Q_m = {q_im : q_sm < q_im < q̂_m, q_i ∈ Q̂_s} ∪ {q̂_m, q_sm}
14:      end for
15:      let z = q_s,mc
The Q_m set can now be used similarly to how it is used in the naive implementation to calculate the hyper-rectangle subdivisions. However, additional subdivisions are necessary to allow for accurate calculation of the intersection vector, p.
The set Q_min is iteratively constructed as the set that contains all the minimum points that will influence the intersection vectors for hyper-rectangles that lie between the selected point q_s and the maximum boundary point q̂.
16:      let Q_min = ∅
17:      for all m_p ∈ {1, …, M} do
18:         let q̂*_1 = q̂
19:         t = 1
20:         repeat
21:            find q_i such that q_i,mp = min{q_i,mp : q_im < q̂*_tm ∀ m ∈ {1, …, M}, m ≠ m_p}
22:            add q_i to Q_min iff q_i ∉ Q_min
23:            q̂*_{(t+1)m} = q_im if q_im > q_sm ∧ q_im < q̂*_tm, and q̂*_tm otherwise
24:            t = t + 1
25:         until q_im ≤ q_sm ∀ m ∈ {1, …, M}, m ≠ m_p
26:      end for
Dimensional values that fall within the range q_sm to q̂_m are added to the corresponding Q_m set.
27:      for all m ∈ {1, …, M}, m ≠ m_c do
28:         add q_im to Q_m ∀ q_i ∈ Q_min, q_sm < q_im < q̂_m
29:      end for
Similar to Q_min, the set Q_max is iteratively constructed as the set that contains all the maximum points that will influence the intersection vectors for hyper-rectangles that lie between the selected point q_s and the maximum boundary point q̂.
30:      let Q_max = ∅
31:      for all m_p ∈ {1, …, M} do
32:         let q̂*_1 = q̂
33:         t = 1
34:         repeat
35:            find q_i such that q_i,mp = max{q_i,mp : q_im < q̂*_tm ∀ m ∈ {1, …, M}, m ≠ m_p}
36:            add q_i to Q_max iff q_i ∉ Q_max
37:            q̂*_{(t+1)m} = q_im if q_im > q_sm ∧ q_im < q̂*_tm, and q̂*_tm otherwise
38:            t = t + 1
39:         until q_im ≤ q_sm ∀ m ∈ {1, …, M}, m ≠ m_p
40:      end for
Dimensional values that fall within the range q_sm to q̂_m are added to the corresponding Q_m set.
41:      for all m ∈ {1, …, M}, m ≠ m_c do
42:         add q_im to Q_m ∀ q_i ∈ Q_max, q_sm < q_im < q̂_m
43:      end for
Additional subdivision values can be added to the corresponding Q_m set to control the maximum side length, Δ_max, of the hyper-rectangles.
44:      for all m ∈ {1, …, M}, m ≠ m_c do
45:         for all q_m ∈ Q_m do
46:            if ∃ q*_m = min{q*_m ∈ Q_m : q*_m > q_m} then
47:               δ_c = ⌈(q*_m − q_m)/Δ_max⌉
48:               δ_l = (q*_m − q_m)/δ_c
49:               for δ = 1 … δ_c do
50:                  add (δ × δ_l + q_m) to Q_m
51:               end for
52:            end if
53:         end for
54:      end for
Similar to the naive implementation, the Q_m sets can now be used to calculate the hyper-rectangles, the intersection vectors, the center points, and the weight factors. Hyper-rectangles only exist if the Q_m sets contain at least two values for all dimensions, except the “locked” dimension, m_c.
55:      if |Q_m| ≥ 2 ∀ m ∈ {1, …, M}, m ≠ m_c then
56:         for all m ∈ {1, …, M}, m ≠ m_c do
57:            let Q_m = sort{Q_m}
58:         end for
59:         for all combinations of d = (d_1, d_2, …, d_{m_c−1}, 0, d_{m_c+1}, …, d_m, …, d_M), where d_m ∈ {1, …, |Q_m| − 1} ∀ m ∈ {1, …, M}, m ≠ m_c, do
Only process hyper-rectangles that lie in the area that is exclusively dominated by the point q_s and no other point in the set Q̂_s, ignoring the “locked” dimension, m_c. If Q̂_s is empty (|Q̂_s| = 0), then there exists a single hyper-rectangle between q_s and q̂ with the “locked” dimension, m_c, equal to q_s,mc.
60:            if |Q̂_s| = 0, or ∄ q̃ ∈ Q̂_s : q̃_m ≤ q̌_m ∀ m ∈ {1, …, M}, m ≠ m_c ∧ q̃_m < q̌_m for some m ∈ {1, …, M}, m ≠ m_c, with q̌_m = Q_m,dm ∀ m ∈ {1, …, M}, m ≠ m_c, then
Determine the center point and weight factor of the hyper-rectangle formed from Q_m,dm to Q_m,(dm+1) for all dimensions m, except for the dimension m_c, which is set to the value z.
61:               let c = (c_1, …, c_m, …, c_M), with c_m = (Q_m,dm + Q_m,(dm+1))/2 if m ≠ m_c, and c_m = z otherwise
62:               let w = ∏_{m=1, m≠m_c}^{M} (Q_m,(dm+1) − Q_m,dm)
Finally, determine the intersection vector, p, that is projected from the center point, c. The result is saved as a tuple, f_n, containing the center, intersection vector, and weight factor.
63:               let p = (p_1, …, p_m, …, p_M)
64:               for all m_p ∈ {1, …, M} do
65:                  p_min = min{q_i,mp : q_i ∈ Q_min, ∃ m ∈ {1, …, M}, m ≠ m_p : q_im ≤ c_m}
66:                  p_max = max{q_i,mp : q_i ∈ Q_max, ∃ m ∈ {1, …, M}, m ≠ m_p : q_im ≤ c_m}
67:                  p_mp = (c_mp − p_min)/(p_max − p_min)
68:               end for
69:               Let f_n = (c, p, w)
70:               n = n + 1
71:            end if
72:         end for
73:      end if
74:   end for
75: end for
76: N = n
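The non-dominance filtering used to build the mini-POF Q̂_s (step 11 of Algorithm A2) can be sketched as follows. The helper is illustrative only (names are ours, not the authors'), assumes minimization, and supports ignoring the “locked” dimension m_c as the listing does:

```python
def nondominated(points, ignore=None):
    """Filter a point set for non-dominance under minimization,
    optionally ignoring one dimension index (as done when building the
    mini-POF, where the locked dimension m_c is excluded)."""
    dims = [m for m in range(len(points[0])) if m != ignore]

    def dominates(a, b):
        # a dominates b: no worse in all considered dims, better in one.
        return (all(a[m] <= b[m] for m in dims)
                and any(a[m] < b[m] for m in dims))

    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

For example, ignoring the third dimension, the point (1, 2, 1) is dominated by (1, 1, 0) and is filtered out, even though it would survive if all three dimensions were considered.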

References

  1. Fonseca, C.M.; Fleming, P.J. On the Performance Assessment and Comparison of Stochastic Multiobjective Optimisers. In Parallel Problem Solving from Nature—PPSN IV; Springer: Berlin/Heidelberg, Germany, 1995; Volume 1141, pp. 584–593. [Google Scholar]
  2. Zitzler, E.; Thiele, L. Multiobjective Optimization Using Evolutionary Algorithms—A Comparative Case Study. In Parallel Problem Solving from Nature—PPSN V; Springer: Berlin/Heidelberg, Germany, 1998; pp. 292–301. [Google Scholar] [CrossRef]
  3. Van Veldhuizen, D.A. Multiobjective Evolutionary Algorithms: Classifications, Analyses and New Innovations. Ph.D. Thesis, Air Force Institute of Technology, Dayton, OH, USA, 1999. [Google Scholar] [CrossRef]
  4. Coello Coello, C.A.; Reyes-Sierra, M. A Study of the Parallelization of a Coevolutionary Multi-Objective Evolutionary Algorithm. In Proceedings of the Third Mexican International Conference on Artificial Intelligence, Mexico City, Mexico, 26–30 April 2004; pp. 688–697. [Google Scholar] [CrossRef]
  5. Reyes-Sierra, M.; Coello Coello, C.A. A New Multi-Objective Particle Swarm Optimizer with Improved Selection and Diversity Mechanisms; Technical report; Evolutionary Computation Group at CINVESTAV-IPN: Mexico City, México, 2004. [Google Scholar]
  6. Ishibuchi, H.; Masuda, H.; Tanigaki, Y.; Nojima, Y. An Analysis of Quality Indicators Using Approximated Optimal Distributions in a Three-dimensional Objective Space. In Proceedings of the International Conference on Evolutionary Multi-Criterion Optimization, Guimaraes, Portugal, 29 March–1 April 2015; pp. 110–125. [Google Scholar]
  7. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  8. Knowles, J.D.; Corne, D.W. Approximating the nondominated front using the Pareto Archived Evolution Strategy. Evol. Comput. 2000, 8, 149–172. [Google Scholar] [CrossRef] [PubMed]
  9. Gibbons, J.D.; Chakraborti, S. Nonparametric Statistical Inference, 5th ed.; Chapman and Hall: Strand, UK; CRC Press: Boca Raton, FL, USA, 2010; p. 630. [Google Scholar]
  10. Knowles, J.D. A summary-attainment-surface plotting method for visualizing the performance of stochastic multiobjective optimizers. In Proceedings of the 5th International Conference on Intelligent Systems Design and Applications 2005, ISDA ’05, Warsaw, Poland, 8–10 September 2005; Volume 2005, pp. 552–557. [Google Scholar] [CrossRef]
  11. Smith, K.I.; Everson, R.M.; Fieldsend, J.E. Dominance measures for multi-objective simulated annealing. In Proceedings of the IEEE Congress on Evolutionary Computation, Portland, OR, USA, 19–23 June 2004; Volume 1, pp. 23–30. [Google Scholar] [CrossRef]
  12. Da Fonseca, V.G.; Fonseca, C.M.; Hall, A.O. Inferential Performance Assessment of Stochastic Optimisers and the Attainment Function. In Proceedings of the Evolutionary Multi-Criterion Optimization, Zurich, Switzerland, 7–9 March 2001; Volume 1993, pp. 213–225. [Google Scholar] [CrossRef]
  13. López-Ibáñez, M.; Paquete, L.; Thomas, S. Exploratory Analysis of Stochastic Local Search Algorithms in Biobjective Optimization. In Experimental Methods for the Analysis of Optimization Algorithms; Bartz-Beielstein, T., Chiarandini, M., Paquete, L., Preuss, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 209–222. [Google Scholar] [CrossRef]
  14. Fonseca, C.M.; Da Fonseca, V.G.; Paquete, L. Exploring the Performance of Stochastic Multiobjective Optimisers with the Second-Order Attainment Function. In Proceedings of the Evolutionary Multi-Criterion Optimization, Guanajuato, Mexico, 9–11 March 2005; Volume 3410, pp. 250–264. [Google Scholar] [CrossRef]
  15. Fonseca, C.M.; Guerreiro, A.P.; López-Ibáñez, M.; Paquete, L. On the Computation of the Empirical Attainment Function. In Proceedings of the Evolutionary Multi-Criterion Optimization: 6th International Conference, EMO 2011, Ouro Preto, Brazil, 5–8 April 2011; pp. 106–120. [Google Scholar] [CrossRef]
  16. Tušar, T.; Filipič, B. Visualizing Exact and Approximated 3D Empirical Attainment Functions. Math. Probl. Eng. 2014, 2014, 569346. [Google Scholar] [CrossRef]
  17. Parsopoulos, K.E.; Vrahatis, M.N. Particle swarm optimization method in multiobjective problems. In Proceedings of the ACM Symposium on Applied Computing, Madrid, Spain, 10–14 March 2002; pp. 603–607. [Google Scholar] [CrossRef]
  18. Reyes-Sierra, M.; Coello Coello, C.A. Improving PSO-Based Multi-objective Optimization Using Crowding, Mutation and ϵ-Dominance. In Proceedings of the Third International Conference on Evolutionary Multi-Criterion Optimization, Guanajuato, Mexico, 9–11 March 2005; Volume 3410, pp. 505–519. [Google Scholar] [CrossRef]
  19. Nebro, A.J.; Durillo, J.J.; García-Nieto, J.; Coello Coello, C.A.; Luna, F.; Alba, E. SMPSO: A New PSO-based Metaheuristic for Multi-objective Optimization. In Proceedings of the IEEE Symposium on Multi-Criteria Decision-Making, Nashville, TN, USA, 30 March–2 April 2009; Volume 2, pp. 66–73. [Google Scholar] [CrossRef]
  20. Zitzler, E.; Deb, K.; Thiele, L. Comparison of Multiobjective Evolutionary Algorithms: Empirical Results. Evol. Comput. 2000, 8, 173–195. [Google Scholar] [CrossRef] [PubMed]
  21. Huband, S.; Barone, L.; While, L.; Hingston, P. A Scalable Multi-objective Test Problem Toolkit. In Proceedings of the Third International Conference on Evolutionary Multi-Criterion Optimization, Guanajuato, Mexico, 9–11 March 2005; pp. 280–295. [Google Scholar] [CrossRef]
  22. Scheepers, C. Multi-guided Particle Swarm Optimization: A Multi-Objective Particle Swarm Optimizer. Ph.D. Thesis, University of Pretoria, Pretoria, South Africa, 2018. [Google Scholar]
Figure 1. Example Pareto-optimal front and attainment surface. (a) An approximated Pareto-optimal front. (b) Attainment surface.
Figure 2. Attainment surfaces. (a) Example attainment surfaces with intersection lines. (b) Example attainment surfaces with unequally spread intersection lines. (c) Grand attainment surface.
Figure 3. Attainment surfaces with rotation-based intersection lines. (a) Concave POF. (b) Convex POF.
Figure 4. Attainment surfaces with outward/inward intersection lines. (a) Inward. (b) Outward.
Figure 5. Attainment surface with Manhattan distance calculations.
Figure 6. Attainment surfaces with unbiased ASSIL. (a) Convex POF. (b) Concave POF.
Figure 7. Test case Pareto-optimal fronts. Dots represent algorithm A, and triangles represent algorithm B. (a) Case 1. (b) Case 2. (c) Case 3. (d) Case 4. (e) Case 5. (f) Case 6.
Figure 8. Test case Pareto-optimal front geometries. (a) Concave. (b) Convex. (c) Linear. (d) Mixed. (e) Disconnected.
Figure 9. Convex POF and attainment surface with WASSILs.
Figure 10. 3-dimensional attainment surface.
Figure 11. Top, front, and side view of attainment surface with naive subdivisions. (a) Top. (b) Front. (c) Side.
Figure 12. A 3-dimensional attainment surface with naive subdivisions.
Figure 13. A 3-dimensional attainment surface with naive subdivisions and intersection vectors.
Figure 14. A 3-dimensional attainment surface with optimized subdivisions.
Figure 15. A 3-dimensional attainment surface with optimized subdivisions and intersection vectors.
Table 1. Comparison of the results of KC measure with ASSIL; blue indicates performance 5% worse than the competing algorithm, and red indicates performance 5% better than the competing algorithm. The last three columns correspond to the intersection line generation approach used.

| Case | Geometry | True | Rotation-Based | Random | ASSIL |
|---|---|---|---|---|---|
| Case 1 | Concave | (73.27, 26.73) | (71.00, 29.00) | (76.80, 23.20) | (73.20, 26.80) |
| | Convex | (70.37, 29.63) | (85.10, 14.90) | (85.60, 14.40) | (70.30, 29.70) |
| | Line | (70.00, 30.00) | (74.60, 25.40) | (79.30, 20.70) | (70.00, 30.00) |
| | Mixed | (69.67, 30.33) | (73.20, 26.80) | (82.10, 17.90) | (69.80, 30.20) |
| | Disconnected | (77.50, 22.50) | (82.70, 17.30) | (86.90, 13.10) | (77.50, 22.50) |
| Case 2 | Concave | (50.00, 50.00) | (41.00, 59.00) | (67.60, 32.40) | (50.00, 50.00) |
| | Convex | (86.60, 13.40) | (83.00, 17.00) | (96.40, 3.60) | (86.60, 13.40) |
| | Line | (66.67, 33.33) | (59.00, 41.00) | (81.30, 18.70) | (66.60, 33.40) |
| | Mixed | (66.99, 33.01) | (59.60, 40.40) | (81.60, 18.40) | (66.90, 33.10) |
| | Disconnected | (60.00, 40.00) | (51.60, 48.40) | (75.80, 24.20) | (60.00, 40.00) |
| Case 3 | Concave | (79.21, 20.79) | (73.80, 26.20) | (92.60, 7.40) | (79.20, 20.80) |
| | Convex | (97.81, 2.19) | (97.20, 2.80) | (99.90, 0.10) | (97.80, 2.20) |
| | Line | (86.67, 13.33) | (83.00, 17.00) | (96.50, 3.50) | (86.60, 13.40) |
| | Mixed | (88.01, 11.99) | (84.80, 15.20) | (95.60, 4.40) | (87.90, 12.10) |
| | Disconnected | (90.00, 10.00) | (87.30, 12.70) | (97.60, 2.40) | (90.00, 10.00) |
| Case 4 | Concave | (50.00, 50.00) | (50.00, 50.00) | (49.00, 51.00) | (50.00, 50.00) |
| | Convex | (50.00, 50.00) | (50.00, 50.00) | (50.60, 49.40) | (50.00, 50.00) |
| | Line | (50.00, 50.00) | (50.00, 50.00) | (54.00, 46.00) | (50.00, 49.90) |
| | Mixed | (55.71, 44.29) | (54.30, 45.70) | (50.70, 49.30) | (55.70, 44.30) |
| | Disconnected | (69.14, 30.86) | (73.00, 27.00) | (77.80, 22.20) | (69.10, 30.90) |
| Case 5 | Concave | (50.00, 50.00) | (50.00, 50.00) | (49.50, 50.50) | (50.00, 50.00) |
| | Convex | (50.00, 50.00) | (50.00, 50.00) | (50.40, 49.60) | (50.00, 50.00) |
| | Line | (50.00, 50.00) | (50.00, 50.00) | (49.20, 50.80) | (50.00, 50.00) |
| | Mixed | (45.56, 54.44) | (45.30, 54.70) | (43.40, 56.60) | (45.60, 54.40) |
| | Disconnected | (45.43, 54.57) | (45.00, 55.00) | (45.40, 54.60) | (45.40, 54.60) |
| Case 6 | Concave | (50.00, 50.00) | (50.00, 50.00) | (48.00, 52.00) | (50.00, 50.00) |
| | Convex | (50.00, 50.00) | (50.00, 50.00) | (49.50, 50.50) | (50.00, 50.00) |
| | Line | (50.00, 50.00) | (50.00, 50.00) | (49.30, 50.70) | (50.00, 50.00) |
| | Mixed | (42.47, 57.53) | (42.90, 57.10) | (37.70, 62.30) | (42.50, 57.50) |
| | Disconnected | (45.00, 55.00) | (44.70, 55.30) | (44.10, 55.90) | (45.00, 55.00) |
Table 2. Intersection line comparison between VEPSO (V), SMPSO (S), and OMOPSO (O); blue indicates performance of 5% worse than the competing algorithm, and red indicates performance of 5% better than the competing algorithm.

| Problem | Intersections | V vs. O | V vs. S | S vs. O |
|---|---|---|---|---|
| ZDT1 | Rotational | (14.9, 66.4) | (6.1, 80.6) | (79.4, 1.1) |
| | Inward | (25.6, 55.5) | (13.8, 65.4) | (70.2, 3.9) |
| | Outward | (19.8, 62.7) | (8.7, 75.6) | (77.3, 1.6) |
| | ASSIL | (22.5, 60.1) | (11.4, 72.2) | (72.5, 4.3) |
| ZDT2 | Rotational | (8.4, 64.3) | (2.9, 73.6) | (58.9, 3.9) |
| | Inward | (8.0, 63.4) | (2.0, 74.3) | (56.3, 0.9) |
| | Outward | (10.4, 62.7) | (4.2, 70.5) | (60.1, 5.7) |
| | ASSIL | (9.0, 63.9) | (3.4, 72.6) | (60.0, 3.8) |
| ZDT3 | Rotational | (13.4, 77.9) | (3.9, 90.8) | (81.4, 7.3) |
| | Inward | (7.1, 61.6) | (2.1, 83.2) | (72.4, 10.5) |
| | Outward | (16.4, 73.2) | (4.9, 87.7) | (78.4, 8.7) |
| | ASSIL | (12.5, 72.9) | (3.2, 89.3) | (74.8, 9.5) |
| ZDT4 | Rotational | (0.0, 99.9) | (0.0, 99.9) | (99.9, 0.0) |
| | Inward | (0.0, 87.9) | (0.0, 88.1) | (80.9, 0.0) |
| | Outward | (0.0, 100.0) | (0.0, 100.0) | (100.0, 0.0) |
| | ASSIL | (0.0, 100.0) | (0.0, 97.7) | (93.7, 0.0) |
| ZDT6 | Rotational | (40.1, 13.3) | (15.2, 25.9) | (58.9, 11.1) |
| | Inward | (64.6, 2.8) | (15.6, 35.3) | (73.3, 0.7) |
| | Outward | (50.6, 7.8) | (6.1, 52.9) | (64.5, 5.0) |
| | ASSIL | (59.4, 4.3) | (10.9, 41.8) | (52.3, 14.1) |
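Each entry above is a pair (a, b): the percentage of intersection lines on which the first algorithm outperforms the second, and vice versa. The caption's colour coding follows a 5% rule; a minimal sketch of how such a flag could be derived, assuming the rule compares the difference between the two percentages against the threshold (the function name and this interpretation are illustrative, not taken from the paper):

```python
def flag_comparison(a_pct, b_pct, threshold=5.0):
    """Classify a pairwise result (a_pct, b_pct) from the first
    algorithm's point of view: 'better' (red in the tables) if it
    leads by more than the threshold, 'worse' (blue) if it trails
    by more than the threshold, and 'neutral' otherwise."""
    diff = a_pct - b_pct
    if diff > threshold:
        return "better"
    if diff < -threshold:
        return "worse"
    return "neutral"

# Example: VEPSO vs. OMOPSO on ZDT1 with rotational intersection lines
print(flag_comparison(14.9, 66.4))  # VEPSO trails by more than 5%
```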
Table 3. Intersection line comparison between VEPSO (V), SMPSO (S), and OMOPSO (O); blue indicates performance of 5% worse than the competing algorithm, and red indicates performance of 5% better than the competing algorithm.

| Problem | Intersections | V vs. O | V vs. S | S vs. O |
|---|---|---|---|---|
| WFG1 | Rotational | (0.0, 99.9) | (0.2, 96.9) | (28.6, 65.8) |
| | Inward | (0.0, 100.0) | (0.2, 97.6) | (38.4, 55.6) |
| | Outward | (0.0, 99.9) | (0.2, 97.5) | (26.6, 68.0) |
| | ASSIL | (0.0, 99.9) | (0.2, 97.5) | (27.2, 67.2) |
| WFG2 | Rotational | (0.0, 99.9) | (0.0, 99.9) | (0.0, 65.5) |
| | Inward | (0.0, 100.0) | (0.0, 100.0) | (0.0, 86.6) |
| | Outward | (0.0, 99.9) | (0.0, 99.9) | (0.0, 63.9) |
| | ASSIL | (0.0, 99.9) | (0.0, 99.9) | (0.0, 73.1) |
| WFG3 | Rotational | (0.0, 99.9) | (0.0, 99.9) | (0.0, 88.8) |
| | Inward | (0.0, 100.0) | (0.0, 100.0) | (0.0, 87.3) |
| | Outward | (0.0, 99.9) | (0.0, 99.9) | (0.0, 89.2) |
| | ASSIL | (0.0, 99.9) | (0.0, 99.9) | (0.0, 88.8) |
| WFG4 | Rotational | (0.0, 99.9) | (0.0, 99.9) | (99.9, 0.0) |
| | Inward | (0.0, 100.0) | (0.0, 100.0) | (100.0, 0.0) |
| | Outward | (0.0, 99.9) | (0.0, 99.9) | (99.9, 0.0) |
| | ASSIL | (0.0, 99.9) | (0.0, 99.9) | (99.9, 0.0) |
| WFG5 | Rotational | (16.9, 20.0) | (0.1, 62.7) | (52.4, 2.0) |
| | Inward | (10.1, 21.0) | (0.2, 59.8) | (39.9, 0.5) |
| | Outward | (18.6, 23.4) | (0.2, 64.4) | (55.0, 2.9) |
| | ASSIL | (16.2, 19.1) | (0.2, 61.1) | (50.3, 1.7) |
| WFG6 | Rotational | (25.6, 13.5) | (0.0, 99.9) | (99.9, 0.0) |
| | Inward | (9.9, 13.5) | (0.0, 100.0) | (100.0, 0.0) |
| | Outward | (29.6, 13.7) | (0.0, 99.9) | (99.9, 0.0) |
| | ASSIL | (23.3, 13.7) | (0.0, 99.9) | (99.9, 0.0) |
| WFG7 | Rotational | (0.0, 99.9) | (0.0, 99.9) | (0.0, 92.7) |
| | Inward | (0.0, 100.0) | (0.0, 100.0) | (0.0, 95.4) |
| | Outward | (0.0, 99.9) | (0.0, 99.9) | (0.0, 92.6) |
| | ASSIL | (0.0, 99.9) | (0.0, 99.9) | (0.0, 93.2) |
| WFG8 | Rotational | (0.0, 99.9) | (0.0, 99.9) | (0.0, 94.6) |
| | Inward | (0.0, 100.0) | (0.0, 100.0) | (0.0, 95.8) |
| | Outward | (0.0, 99.9) | (0.0, 99.9) | (0.0, 94.4) |
| | ASSIL | (0.0, 99.9) | (0.0, 99.9) | (0.0, 95.1) |
| WFG9 | Rotational | (8.0, 30.3) | (0.0, 99.9) | (99.9, 0.0) |
| | Inward | (2.6, 41.1) | (0.0, 100.0) | (100.0, 0.0) |
| | Outward | (8.8, 27.9) | (0.0, 99.9) | (99.9, 0.0) |
| | ASSIL | (7.1, 31.5) | (0.0, 99.9) | (99.9, 0.0) |
Table 4. Properties of the ZDT and WFG problems.

| Name | Separability | Modality | Geometry |
|---|---|---|---|
| ZDT1 | Separable | Unimodal | Convex |
| ZDT2 | Separable | Unimodal | Concave |
| ZDT3 | Separable | Unimodal/multimodal | Disconnected |
| ZDT4 | Separable | Unimodal/multimodal | Convex |
| ZDT6 | Separable | Multimodal | Concave |
| WFG1 | Separable | Unimodal | Convex, mixed |
| WFG2 | Non-separable | Unimodal/multimodal | Convex, disconnected |
| WFG3 | Non-separable | Unimodal | Linear, degenerate |
| WFG4 | Separable | Multimodal | Concave |
| WFG5 | Separable | Multimodal | Concave |
| WFG6 | Non-separable | Unimodal | Concave |
| WFG7 | Separable | Unimodal | Concave |
| WFG8 | Non-separable | Unimodal | Concave |
| WFG9 | Non-separable | Multimodal, deceptive | Concave |
Table 5. Comparison of the results of the KC measure with WASSIL; blue indicates performance of 5% worse than the competing algorithm, and red indicates performance of 5% better than the competing algorithm.

| Case | Geometry | True | Rotation-Based | Random | ASSIL | WASSIL |
|---|---|---|---|---|---|---|
| Case 1 | Concave | (73.27, 26.73) | (71.00, 29.00) | (79.90, 20.10) | (73.20, 26.80) | (73.27, 26.73) |
| | Convex | (70.37, 29.63) | (85.10, 14.90) | (87.40, 12.60) | (70.30, 29.70) | (70.37, 29.63) |
| | Line | (70.00, 30.00) | (74.60, 25.40) | (78.80, 21.20) | (70.00, 30.00) | (70.00, 30.00) |
| | Mixed | (69.67, 30.33) | (73.20, 26.80) | (78.30, 21.70) | (69.80, 30.20) | (69.67, 30.33) |
| | Disconnected | (77.50, 22.50) | (82.70, 17.30) | (87.60, 12.40) | (77.50, 22.50) | (77.50, 22.50) |
| Case 2 | Concave | (50.00, 50.00) | (41.00, 59.00) | (67.00, 33.00) | (50.00, 50.00) | (50.00, 50.00) |
| | Convex | (86.60, 13.40) | (83.00, 17.00) | (97.20, 2.80) | (86.60, 13.40) | (86.60, 13.40) |
| | Line | (66.67, 33.33) | (59.00, 41.00) | (85.60, 14.40) | (66.60, 33.40) | (66.67, 33.33) |
| | Mixed | (66.99, 33.01) | (59.60, 40.40) | (82.10, 17.90) | (66.90, 33.10) | (66.99, 33.01) |
| | Disconnected | (60.00, 40.00) | (51.60, 48.40) | (77.40, 22.60) | (60.00, 40.00) | (60.00, 40.00) |
| Case 3 | Concave | (79.21, 20.79) | (73.80, 26.20) | (93.20, 6.80) | (79.20, 20.80) | (79.21, 20.79) |
| | Convex | (97.81, 2.19) | (97.20, 2.80) | (100.00, 0.00) | (97.80, 2.20) | (97.81, 2.19) |
| | Line | (86.67, 13.33) | (83.00, 17.00) | (96.60, 3.40) | (86.60, 13.40) | (86.67, 13.33) |
| | Mixed | (88.01, 11.99) | (84.80, 15.20) | (95.70, 4.30) | (87.90, 12.10) | (88.01, 11.99) |
| | Disconnected | (90.00, 10.00) | (87.30, 12.70) | (97.50, 2.50) | (90.00, 10.00) | (90.00, 10.00) |
| Case 4 | Concave | (50.00, 50.00) | (50.00, 50.00) | (49.40, 50.60) | (50.00, 50.00) | (50.00, 50.00) |
| | Convex | (50.00, 50.00) | (50.00, 50.00) | (48.70, 51.30) | (50.00, 50.00) | (50.00, 50.00) |
| | Line | (50.00, 50.00) | (50.00, 50.00) | (52.30, 47.70) | (50.00, 49.90) | (50.00, 50.00) |
| | Mixed | (55.71, 44.29) | (54.30, 45.70) | (51.00, 49.00) | (55.70, 44.30) | (55.71, 44.29) |
| | Disconnected | (69.14, 30.86) | (73.00, 27.00) | (76.70, 23.30) | (69.10, 30.90) | (69.14, 30.86) |
| Case 5 | Concave | (50.00, 50.00) | (50.00, 50.00) | (48.90, 51.10) | (50.00, 50.00) | (50.00, 50.00) |
| | Convex | (50.00, 50.00) | (50.00, 50.00) | (47.80, 52.20) | (50.00, 50.00) | (50.00, 50.00) |
| | Line | (50.00, 50.00) | (50.00, 50.00) | (51.00, 49.00) | (50.00, 50.00) | (50.00, 50.00) |
| | Mixed | (45.56, 54.44) | (45.30, 54.70) | (43.40, 56.60) | (45.60, 54.40) | (45.56, 54.44) |
| | Disconnected | (45.43, 54.57) | (45.00, 55.00) | (40.60, 59.40) | (45.40, 54.60) | (45.43, 54.57) |
| Case 6 | Concave | (50.00, 50.00) | (50.00, 50.00) | (51.70, 48.30) | (50.00, 50.00) | (50.00, 50.00) |
| | Convex | (50.00, 50.00) | (50.00, 50.00) | (48.60, 51.40) | (50.00, 50.00) | (50.00, 50.00) |
| | Line | (50.00, 50.00) | (50.00, 50.00) | (51.70, 48.30) | (50.00, 50.00) | (50.00, 50.00) |
| | Mixed | (42.47, 57.53) | (42.90, 57.10) | (38.50, 61.50) | (42.50, 57.50) | (42.47, 57.53) |
| | Disconnected | (45.00, 55.00) | (44.70, 55.30) | (42.60, 57.40) | (45.00, 55.00) | (45.00, 55.00) |
Table 6. Naive vs. optimized porcupine measure (3-objective WFG problem set).

| Problem | Algorithm | Naive: Mean | Naive: σ | Naive: Min | Naive: Max | Optimized: Mean | Optimized: σ | Optimized: Min | Optimized: Max |
|---|---|---|---|---|---|---|---|---|---|
| WFG1 | OMOPSO | 3.054 | 2.178 | 0.2596 | 9.4 | 3.092 | 2.245 | 0.5106 | 9.728 |
| | SMPSO | 32.34 | 6.054 | 17.29 | 43.92 | 32.38 | 6.132 | 16.57 | 42.92 |
| | OMOPSO | 1.308 | 1.11 | 0.2418 | 4.886 | 1.32 | 1.153 | 0.2445 | 5.071 |
| | VEPSO | 43.55 | 4.063 | 30.43 | 52.71 | 43.16 | 4.031 | 32.27 | 52.73 |
| | VEPSO | 19.62 | 6.619 | 8.857 | 47.63 | 18.73 | 4.149 | 8.679 | 28.34 |
| | SMPSO | 6.631 | 2.949 | 0.8017 | 17.61 | 6.701 | 2.782 | 3.491 | 17.05 |
| WFG2 | OMOPSO | 49.88 | 6.116 | 32.83 | 60.57 | 49.29 | 6.517 | 30.85 | 60.54 |
| | SMPSO | 0.01869 | 0.06869 | 0.0 | 0.3711 | 0.01736 | 0.06466 | 0.0 | 0.3442 |
| | OMOPSO | 59.43 | 4.403 | 47.63 | 67.35 | 59.64 | 4.397 | 47.61 | 66.87 |
| | VEPSO | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| | VEPSO | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| | SMPSO | 59.76 | 4.612 | 46.3 | 66.91 | 59.83 | 4.716 | 46.26 | 67.61 |
| WFG3 | OMOPSO | 26.98 | 4.088 | 18.44 | 37.32 | 28.99 | 4.524 | 20.43 | 43.72 |
| | SMPSO | 3.099 | 2.262 | 0.1569 | 8.927 | 3.078 | 2.097 | 0.2126 | 7.497 |
| | OMOPSO | 62.55 | 3.341 | 55.16 | 68.42 | 66.07 | 3.722 | 58.36 | 73.14 |
| | VEPSO | 0.5199 | 0.1456 | 0.2931 | 0.8233 | 0.5875 | 0.1695 | 0.2527 | 0.8897 |
| | VEPSO | 1.177 | 0.48 | 0.2131 | 2.315 | 1.243 | 0.5278 | 0.2095 | 2.5 |
| | SMPSO | 59.17 | 3.402 | 52.04 | 64.47 | 62.57 | 3.668 | 55.66 | 69.12 |
| WFG4 | OMOPSO | 26.78 | 2.433 | 22.7 | 30.65 | 26.37 | 2.919 | 20.3 | 31.05 |
| | SMPSO | 9.348 | 1.915 | 5.002 | 13.57 | 9.306 | 1.989 | 5.122 | 13.62 |
| | OMOPSO | 63.0 | 4.539 | 55.21 | 75.74 | 63.82 | 4.798 | 55.75 | 77.11 |
| | VEPSO | 0.02337 | 0.03589 | 0.0 | 0.1569 | 0.02468 | 0.04244 | 0.0 | 0.1929 |
| | VEPSO | 0.02615 | 0.09773 | 0.0 | 0.533 | 0.02406 | 0.08814 | 0.0 | 0.4763 |
| | SMPSO | 71.11 | 3.919 | 62.78 | 80.38 | 71.92 | 4.072 | 63.2 | 82.07 |
| WFG5 | OMOPSO | 25.67 | 4.393 | 17.47 | 34.0 | 25.79 | 4.472 | 17.55 | 34.4 |
| | SMPSO | 17.61 | 3.64 | 11.16 | 27.63 | 17.65 | 3.824 | 10.68 | 27.83 |
| | OMOPSO | 26.21 | 2.314 | 22.2 | 30.72 | 26.31 | 2.432 | 21.48 | 31.12 |
| | VEPSO | 17.94 | 2.65 | 10.6 | 22.42 | 17.54 | 2.691 | 10.56 | 21.72 |
| | VEPSO | 25.13 | 4.301 | 14.71 | 33.27 | 25.12 | 4.463 | 13.9 | 33.76 |
| | SMPSO | 5.143 | 1.277 | 2.736 | 8.069 | 5.187 | 1.338 | 2.539 | 8.049 |
| WFG6 | OMOPSO | 10.21 | 2.398 | 6.651 | 15.76 | 10.55 | 2.356 | 7.035 | 15.89 |
| | SMPSO | 45.3 | 3.61 | 37.82 | 53.57 | 45.62 | 3.754 | 38.12 | 54.17 |
| | OMOPSO | 11.93 | 4.704 | 6.098 | 22.65 | 12.29 | 4.854 | 5.991 | 23.65 |
| | VEPSO | 16.05 | 5.158 | 5.421 | 25.88 | 16.53 | 5.414 | 5.566 | 26.5 |
| | VEPSO | 1.02 | 0.8952 | 0.01929 | 3.795 | 1.046 | 0.9204 | 0.007599 | 3.983 |
| | SMPSO | 20.61 | 6.907 | 8.904 | 37.24 | 20.94 | 7.022 | 8.945 | 37.69 |
| WFG7 | OMOPSO | 38.86 | 3.64 | 31.77 | 44.27 | 39.28 | 3.516 | 32.12 | 44.38 |
| | SMPSO | 3.892 | 0.9721 | 1.684 | 5.522 | 3.888 | 1.018 | 1.641 | 5.507 |
| | OMOPSO | 74.41 | 2.681 | 67.75 | 78.68 | 74.72 | 2.743 | 67.8 | 79.15 |
| | VEPSO | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| | VEPSO | 0.001361 | 0.007456 | 0.0 | 0.04084 | 9.506 × 10⁻⁵ | 0.0005206 | 0.0 | 0.002852 |
| | SMPSO | 73.39 | 2.598 | 68.5 | 77.94 | 73.69 | 2.682 | 68.92 | 79.15 |
| WFG8 | OMOPSO | 41.21 | 4.789 | 31.08 | 49.28 | 41.55 | 4.954 | 30.95 | 49.24 |
| | SMPSO | 3.049 | 0.948 | 1.443 | 5.066 | 3.106 | 0.9568 | 1.54 | 5.088 |
| | OMOPSO | 70.18 | 3.918 | 63.04 | 77.36 | 70.49 | 3.847 | 63.4 | 77.46 |
| | VEPSO | 6.884 × 10⁻⁵ | 0.0002857 | 0.0 | 0.001467 | 0.0 | 0.0 | 0.0 | 0.0 |
| | VEPSO | 0.1489 | 0.2362 | 0.0 | 1.023 | 0.1458 | 0.223 | 0.0 | 0.869 |
| | SMPSO | 62.88 | 3.841 | 56.14 | 72.58 | 63.15 | 3.861 | 56.65 | 73.03 |
| WFG9 | OMOPSO | 15.89 | 2.462 | 11.12 | 21.29 | 15.87 | 2.47 | 11.48 | 21.66 |
| | SMPSO | 37.58 | 3.03 | 32.14 | 43.94 | 37.68 | 3.048 | 31.95 | 44.27 |
| | OMOPSO | 20.7 | 2.526 | 16.86 | 25.28 | 20.61 | 2.515 | 16.9 | 26.05 |
| | VEPSO | 23.59 | 1.801 | 19.21 | 26.88 | 24.0 | 1.795 | 19.61 | 27.51 |
| | VEPSO | 1.206 | 0.7576 | 0.2238 | 3.162 | 1.219 | 0.8411 | 0.2526 | 3.605 |
| | SMPSO | 21.91 | 5.093 | 12.24 | 31.96 | 21.8 | 5.307 | 11.12 | 32.25 |
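Each row of Table 6 summarizes one algorithm's porcupine-measure values over a set of independent runs with standard descriptive statistics. A minimal sketch of how such a summary row could be produced (the `summarize` helper and the run values are illustrative, not taken from the paper's experiments):

```python
import statistics

def summarize(values):
    """Return (mean, sample standard deviation, min, max) for one
    algorithm's porcupine-measure values over independent runs."""
    return (
        statistics.mean(values),
        statistics.stdev(values),  # sample standard deviation (sigma)
        min(values),
        max(values),
    )

# Illustrative run values only (not from the reported experiments)
runs = [3.1, 2.8, 3.5, 2.9, 3.3]
mean, sigma, lo, hi = summarize(runs)
print(round(mean, 2), round(sigma, 3), lo, hi)  # → 3.12 0.286 2.8 3.5
```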

Scheepers, C.; Engelbrecht, A. The Porcupine Measure for Comparing the Performance of Multi-Objective Optimization Algorithms. Algorithms 2023, 16, 283. https://doi.org/10.3390/a16060283
