In general, a constrained minimization problem cannot be directly solved by the Nelder-Mead algorithm. However, the constrained problem can be transformed into an unconstrained problem by applying coordinate transformations and penalty functions. Then, the unconstrained minimization problem is solved using the Nelder-Mead algorithm or MATLAB's built-in function fminsearch [34,35,36]. Since our optimization problem (4) is constrained with scalar bounds, it is sufficient to use the coordinate transformation techniques [37] detailed in Section 3.1. It should be noted that our goal is to minimize the maximum eigenvalue (Perron eigenvalue) function, which, in general, cannot be expressed as an analytic function of the variables.
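Since the Perron eigenvalue cannot be written analytically in the missing comparisons, it has to be evaluated numerically at each candidate point. The following Python sketch illustrates one way such an objective can be assembled; the helper name perron_eigenvalue, the 4×4 example matrix, and the missing positions are illustrative assumptions, not the paper's data.

```python
import numpy as np

def perron_eigenvalue(x, A_partial, missing_positions):
    """Objective f(x): largest (Perron) eigenvalue of the PCM completed with
    the candidate values x.  A_partial is an n-by-n positive matrix whose
    missing entries hold placeholders; missing_positions lists the (i, j)
    indices (i < j) of the k missing comparisons."""
    A = A_partial.copy()
    for value, (i, j) in zip(x, missing_positions):
        A[i, j] = value          # candidate comparison value
        A[j, i] = 1.0 / value    # reciprocity constraint
    # For a positive matrix, the dominant eigenvalue is real (Perron-Frobenius),
    # so the largest real part equals the Perron eigenvalue.
    return np.max(np.linalg.eigvals(A).real)

# Hypothetical 4x4 example with k = 2 missing comparisons at (0, 2) and (1, 3).
A_partial = np.array([[1.0,  2.0, 1.0, 4.0],
                      [0.5,  1.0, 3.0, 1.0],
                      [1.0, 1/3., 1.0, 2.0],
                      [0.25, 1.0, 0.5, 1.0]])
missing = [(0, 2), (1, 3)]
print(perron_eigenvalue(np.array([1.5, 0.8]), A_partial, missing))
```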
The procedure for specifying the objective function (the Perron eigenvalue) to be minimized numerically is given as follows:
3.1. The Coordinate Transformation Method
Here, we use a simple coordinate transformation technique ([38] (pp. 23–24); [34,36]) based on a trigonometric function for dual bound constraints (lower and upper bounds).
Let $\mathbf{x} = (x_1, \dots, x_k)^\top$ be the original $k$-dimensional variable vector (or, equivalently, the vector of the $k$ missing comparisons), and let $\mathbf{y} = (y_1, \dots, y_k)^\top$ be the new search vector.
Let $T$ be an invertible function such that $\mathbf{x} = T(\mathbf{y})$.
Again, define a function $F$ such that $F(\mathbf{y}) = f(T(\mathbf{y}))$, where $f$ denotes the objective (Perron eigenvalue) function. It is obvious that $F(\mathbf{y}) = f(\mathbf{x})$ for all $\mathbf{y}$.
From the fact that $-1 \le \sin(y_i) \le 1$, i.e., $0 \le (\sin(y_i)+1)/2 \le 1$, and hence $x_i$ is bounded by the lower bound ($l$) and the upper bound ($u$), the coordinate transformation function could be expressed component-wise by the sine function such that:
$x_i = l + (u - l)\,\dfrac{\sin(y_i) + 1}{2}, \quad i = 1, \dots, k.$ (6)
From the initial values $x_i^{(0)}$, where $l \le x_i^{(0)} \le u$, the initial search values $y_i^{(0)}$ are calculated by inverting (6), that is, $y_i^{(0)} = \arcsin\!\left(\dfrac{2\,(x_i^{(0)} - l)}{u - l} - 1\right)$, $i = 1, \dots, k$.
In addition, the diameter of the initial simplex region may be vanishingly small. To avoid this problem, it is recommended to shift the initial coordinate values by $2\pi$ [34], meaning that the shifted values $2\pi + y_i^{(0)}$ are used as the starting coordinates.
Note that $x_i$ is the $i$th component of $\mathbf{x}$, and $y_i$ is the $i$th component of $\mathbf{y}$.
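As an illustration of the technique, the following Python sketch implements a sine-based dual-bound transformation of this kind together with its inverse and the $2\pi$ shift; the function names and the example bounds are assumptions, and the expressions should be read as a sketch of transformation (6) rather than a verbatim reproduction.

```python
import numpy as np

def y_to_x(y, lower, upper):
    """Map an unbounded search variable y to a value x inside [lower, upper]
    via the sine transformation: since -1 <= sin(y) <= 1, x stays in the box."""
    return lower + (upper - lower) * (np.sin(y) + 1.0) / 2.0

def x_to_y(x, lower, upper):
    """Inverse map used to obtain initial search coordinates from initial x
    values, shifted by 2*pi so the initial simplex is not vanishingly small."""
    return 2.0 * np.pi + np.arcsin(2.0 * (x - lower) / (upper - lower) - 1.0)

# Example bounds [1/9, 9] (a Saaty-type scale, assumed here for illustration).
l, u = 1.0 / 9.0, 9.0
x0 = np.array([1.5, 0.8])                  # initial values of the missing comparisons
y0 = x_to_y(x0, l, u)                      # initial (shifted) search coordinates
print(np.allclose(y_to_x(y0, l, u), x0))   # True: the transformation is invertible
```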
3.2. Nelder-Mead Algorithm
The Nelder-Mead algorithm (also known as the simplex search algorithm) has been one of the most popular direct search methods for unconstrained optimization problems since it first appeared in 1965 [33], and it is suitable for the minimization of functions of several variables. It does not require derivative information, which makes it particularly appropriate for minimizing functions whose derivatives are unknown or discontinuous [39].
The Nelder-Mead algorithm uses a simplex with $k+1$ vertices (points) in $k$-dimensional space (for a function of $k$ variables) to find the optimum point. In each iteration of the algorithm, the $k+1$ vertices are updated and ordered according to the increasing values of the objective function at the vertices (see, e.g., [39,40]).
The algorithm has four possible steps in a single iteration: reflection, expansion, contraction and shrink. The associated scalar parameters are the coefficients of reflection ($\alpha$), expansion ($\beta$), contraction ($\gamma$) and shrink ($\delta$). They must satisfy the following constraints: $\alpha > 0$, $\beta > 1$, $\beta > \alpha$, $0 < \gamma < 1$, and $0 < \delta < 1$.
In most circumstances, the Nelder-Mead algorithm achieves substantial improvements in the objective value within the first few iterations. In addition, except for the shrink transformation, which is exceedingly rare in practice, the algorithm normally requires only one or two function evaluations per iteration. This is very useful in real applications where a function evaluation is time-consuming or costly. For such situations, the method is frequently faster than other derivative-free optimization methods [41,42].
The Nelder-Mead algorithm is popular and widely used in practice. The fundamental reason for its popularity, aside from being easy to understand and to code, is its ability to achieve a significant reduction in the function value with a small number of function evaluations [41]. The method has been extensively applied in practical problems, especially in chemistry, medicine and chemical engineering [42]. It has also been implemented and included in various libraries and software packages, for instance, Numerical Recipes in C [43], MATLAB's fminsearch [44], Python [45], and MATHEMATICA [46].
The practical performance of the Nelder-Mead algorithm is often reasonable, although it may, in some rare cases, stagnate at a non-stationary point. Restarting the algorithm several times can be used as a heuristic remedy when stagnation occurs [47]. The convergence properties of the algorithm for low and high dimensions were studied in [39,48,49,50]. Furthermore, the complexity analysis of a single iteration of the Nelder-Mead algorithm has been reported in the literature [51,52].
Another version of the standard Nelder-Mead algorithm [48] has been implemented using adaptive parameters; the authors found that it performs better than MATLAB's fminsearch on several benchmark test functions for higher-dimensional problems, but the convergence properties of the method were not clearly established.
A modified Nelder-Mead algorithm [35] has also been proposed for solving general nonlinear constrained optimization problems, handling linear and nonlinear (in)equality constraints. Several benchmark problems were examined and compared with various methods (the α constrained method with mutation [53], the genetic algorithm [54], and the bees algorithm [55], to mention a few) to evaluate the performance of the algorithm. The authors found it to be competitive with such algorithms in terms of effectiveness and efficiency. Nonetheless, our approach to handling the interval constraints is different.
To solve the constrained eigenvalue minimization problem (4), we first transform the coordinates $x_i$, $i = 1, \dots, k$, using transformation (6), and then we apply the standard Nelder-Mead algorithm to the resulting unconstrained problem $\min_{\mathbf{y} \in \mathbb{R}^k} F(\mathbf{y})$.
Here, we use the standard Nelder-Mead algorithm implementation with the standard values $\alpha = 1$, $\beta = 2$, $\gamma = 1/2$ and $\delta = 1/2$, which are often suggested in the literature [33,39,44,56,57].
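A hedged sketch of this two-step procedure is given below, reusing the hypothetical helpers from the earlier sketches; SciPy's Nelder-Mead implementation is used here purely for illustration (its default, non-adaptive coefficients match the standard values above), and this is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

# perron_eigenvalue, y_to_x, x_to_y, A_partial, missing, l, u as in the earlier sketches.

def F(y):
    """Unconstrained objective F(y) = f(x), with x obtained from y via the
    coordinate transformation, so Nelder-Mead can search over all of R^k."""
    x = y_to_x(y, l, u)
    return perron_eigenvalue(x, A_partial, missing)

y_init = x_to_y(np.array([1.0, 1.0]), l, u)       # initial point in the search space
result = minimize(F, y_init, method='Nelder-Mead')
x_opt = y_to_x(result.x, l, u)                    # map the minimizer back to the bounds
print(x_opt, result.fun)
```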
The algorithm starts with an initial simplex of $k+1$ non-degenerate vertices $\mathbf{y}^{(1)}, \dots, \mathbf{y}^{(k+1)}$ around a given initial point $\mathbf{y}^{(0)} \in \mathbb{R}^k$. Vertex $\mathbf{y}^{(1)}$ can be chosen arbitrarily; however, the most common choice in implementations is $\mathbf{y}^{(1)} = \mathbf{y}^{(0)}$, in order to make proper restarts of the algorithm possible [41]. The remaining $k$ vertices are then generated on the basis of step-size 0.05 in the direction of the unit vectors $\mathbf{e}_i$ [44]: $\mathbf{y}^{(i+1)} = \mathbf{y}^{(1)} + 0.05\,\mathbf{e}_i$, $i = 1, \dots, k$.
The initial simplex $S$ is the convex hull of the $k+1$ vertices $\mathbf{y}^{(1)}, \dots, \mathbf{y}^{(k+1)}$. The ordering of the vertices should satisfy increasing function values of the form $F(\mathbf{y}^{(1)}) \le F(\mathbf{y}^{(2)}) \le \cdots \le F(\mathbf{y}^{(k+1)})$ (10).
We consider $\mathbf{y}^{(1)}$ as the best vertex (the vertex that results in the minimum function value) and $\mathbf{y}^{(k+1)}$ as the worst vertex (the vertex that results in the maximum function value). The centroid is calculated as $\bar{\mathbf{y}} = \frac{1}{k}\sum_{i=1}^{k}\mathbf{y}^{(i)}$, which is the average of the non-worst $k$ vertices, i.e., all vertices (points) except for $\mathbf{y}^{(k+1)}$.
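The initialization just described can be sketched as follows; the step size of 0.05 follows the description above, while the function names and tie handling are illustrative assumptions.

```python
import numpy as np

def initial_simplex(y1, step=0.05):
    """Build k+1 vertices: the initial point itself plus k points obtained by
    stepping 0.05 along each unit vector e_i."""
    k = y1.size
    return np.vstack([y1] + [y1 + step * np.eye(k)[i] for i in range(k)])

def sort_and_centroid(vertices, F):
    """Order the vertices by increasing objective value and return the centroid
    of the k best (non-worst) vertices."""
    values = np.array([F(v) for v in vertices])
    order = np.argsort(values)
    vertices, values = vertices[order], values[order]
    centroid = vertices[:-1].mean(axis=0)   # average of all vertices except the worst
    return vertices, values, centroid
```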
According to [39,56], the description of one simplex iteration of the standard Nelder-Mead algorithm is presented in Algorithm 1. At each iteration, the simplex vertices are ordered as $F(\mathbf{y}^{(1)}) \le F(\mathbf{y}^{(2)}) \le \cdots \le F(\mathbf{y}^{(k+1)})$, i.e., according to the increasing values of the objective function.
Note that the new working simplex in a non-shrink iteration of the algorithm has only one new vertex; in this case, the new vertex replaces the worst vertex $\mathbf{y}^{(k+1)}$ of the former simplex $S$. In the case of a shrink step, the new simplex $S$ contains $k$ new vertices $\mathbf{y}^{(2)}, \dots, \mathbf{y}^{(k+1)}$; in this situation, only the best vertex $\mathbf{y}^{(1)}$ is retained from the former simplex $S$.
Now we can solve the constrained eigenvalue minimization problem (4) using the standard Nelder-Mead Algorithm 1, equivalent to MATLAB's fminsearch algorithm, in connection with coordinate transformation (6) applied at each simplex iteration; henceforth, we shall simply call it the Nelder-Mead algorithm.
For the respective given values of TolX, TolFun, MaxIter, and MaxFunEvals, the Nelder-Mead algorithm terminates when one of the following three conditions is satisfied (an illustrative mapping of these quantities to solver options is sketched after the list):
- (C1) $\max_{2 \le i \le k+1} \|\mathbf{y}^{(i)} - \mathbf{y}^{(1)}\|_{\infty} \le$ TolX and $\max_{2 \le i \le k+1} |F(\mathbf{y}^{(i)}) - F(\mathbf{y}^{(1)})| \le$ TolFun;
- (C2) The number of iterations reaches MaxIter;
- (C3) The number of function evaluations reaches MaxFunEvals.
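For illustration, in SciPy's Nelder-Mead solver the four quantities TolX, TolFun, MaxIter and MaxFunEvals correspond to the options xatol, fatol, maxiter and maxfev; the numerical values below are assumptions, and F and y_init reuse the earlier sketch.

```python
from scipy.optimize import minimize

result = minimize(
    F, y_init, method='Nelder-Mead',
    options={
        'xatol': 1e-8,      # TolX: tolerance on the simplex vertices (C1)
        'fatol': 1e-8,      # TolFun: tolerance on the objective values (C1)
        'maxiter': 10000,   # MaxIter: iteration limit (C2)
        'maxfev': 10000,    # MaxFunEvals: function-evaluation limit (C3)
    },
)
```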
It is important to note that problem (4) is a non-convex problem, but it can be transformed into a convex minimization problem by using exponential parameterization [15]. From now on, we consider our optimization problem to be convex and constrained with scalar bounds.
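A one-line sketch of this reparameterization (the variable names are assumptions): writing each missing comparison as the exponential of a new variable makes the Perron eigenvalue a convex function of the new variables [15].

```python
import numpy as np

def F_exp(t):
    """Convex reformulation: parameterize the missing comparisons as x = exp(t),
    so the Perron eigenvalue becomes a convex function of t (cf. [15]).
    perron_eigenvalue, A_partial and missing are as in the earlier sketches."""
    return perron_eigenvalue(np.exp(t), A_partial, missing)
```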
Algorithm 1 One iteration of the standard Nelder-Mead algorithm.
Compute an initial simplex $S$ with vertices $\mathbf{y}^{(1)}, \dots, \mathbf{y}^{(k+1)}$. Compute $F(\mathbf{y}^{(i)})$, $i = 1, \dots, k+1$. Sort the vertices so that (10) holds. ▹ The simplex is updated iteratively.
while the stopping condition (C1) on TolX and TolFun is not satisfied do
  Compute the centroid $\bar{\mathbf{y}} = \frac{1}{k}\sum_{i=1}^{k}\mathbf{y}^{(i)}$. ▹ Calculate centroid
  $\mathbf{y}_r = \bar{\mathbf{y}} + \alpha(\bar{\mathbf{y}} - \mathbf{y}^{(k+1)})$; compute $F(\mathbf{y}_r)$. ▹ Reflection
  if $F(\mathbf{y}_r) < F(\mathbf{y}^{(1)})$ then
    $\mathbf{y}_e = \bar{\mathbf{y}} + \beta(\mathbf{y}_r - \bar{\mathbf{y}})$; compute $F(\mathbf{y}_e)$. ▹ Expansion
    if $F(\mathbf{y}_e) < F(\mathbf{y}_r)$ then
      Replace the worst vertex $\mathbf{y}^{(k+1)}$ with $\mathbf{y}_e$. ▹ Accept $\mathbf{y}_e$
    else
      Replace the worst vertex $\mathbf{y}^{(k+1)}$ with $\mathbf{y}_r$. ▹ Accept $\mathbf{y}_r$
    end if
  else if $F(\mathbf{y}_r) < F(\mathbf{y}^{(k)})$ then
    Replace $\mathbf{y}^{(k+1)}$ with $\mathbf{y}_r$. ▹ Accept $\mathbf{y}_r$
  else if $F(\mathbf{y}_r) < F(\mathbf{y}^{(k+1)})$ then
    $\mathbf{y}_{oc} = \bar{\mathbf{y}} + \gamma(\mathbf{y}_r - \bar{\mathbf{y}})$; compute $F(\mathbf{y}_{oc})$. ▹ Outside contraction
    if $F(\mathbf{y}_{oc}) \le F(\mathbf{y}_r)$ then
      Replace $\mathbf{y}^{(k+1)}$ with $\mathbf{y}_{oc}$. ▹ Accept $\mathbf{y}_{oc}$
    else
      Compute $k$ new vertices $\mathbf{y}^{(i)} = \mathbf{y}^{(1)} + \delta(\mathbf{y}^{(i)} - \mathbf{y}^{(1)})$, $i = 2, \dots, k+1$. ▹ Shrink
      Compute $F(\mathbf{y}^{(i)})$, $i = 2, \dots, k+1$.
    end if
  else
    $\mathbf{y}_{ic} = \bar{\mathbf{y}} - \gamma(\bar{\mathbf{y}} - \mathbf{y}^{(k+1)})$; compute $F(\mathbf{y}_{ic})$. ▹ Inside contraction
    if $F(\mathbf{y}_{ic}) < F(\mathbf{y}^{(k+1)})$ then
      Replace $\mathbf{y}^{(k+1)}$ with $\mathbf{y}_{ic}$. ▹ Accept $\mathbf{y}_{ic}$
    else
      Compute $k$ new vertices $\mathbf{y}^{(i)} = \mathbf{y}^{(1)} + \delta(\mathbf{y}^{(i)} - \mathbf{y}^{(1)})$, $i = 2, \dots, k+1$. ▹ Shrink
    end if
  end if
  Update the vertices of $S$. Apply the coordinate transformation (6) for each new (accepted) vertex. Compute the corresponding objective values. Sort the vertices of the simplex $S$ by increasing objective function values.
end while
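The following Python sketch mirrors one iteration of Algorithm 1 with the standard coefficients; it is an illustrative re-implementation rather than the authors' code, and it assumes that the objective F already incorporates the coordinate transformation (6), as in the earlier sketches.

```python
import numpy as np

def nelder_mead_iteration(vertices, values, F, alpha=1.0, beta=2.0, gamma=0.5, delta=0.5):
    """One iteration of the standard Nelder-Mead algorithm.
    vertices: (k+1, k) array sorted so that values[0] <= ... <= values[k]."""
    vertices = vertices.copy()
    values = values.copy()
    best, worst = vertices[0], vertices[-1]
    centroid = vertices[:-1].mean(axis=0)            # centroid of the k best vertices

    yr = centroid + alpha * (centroid - worst)       # reflection
    fr = F(yr)
    if fr < values[0]:
        ye = centroid + beta * (yr - centroid)       # expansion
        fe = F(ye)
        new, fnew = (ye, fe) if fe < fr else (yr, fr)
    elif fr < values[-2]:
        new, fnew = yr, fr                           # accept the reflected vertex
    elif fr < values[-1]:
        yoc = centroid + gamma * (yr - centroid)     # outside contraction
        foc = F(yoc)
        new, fnew = (yoc, foc) if foc <= fr else (None, None)
    else:
        yic = centroid - gamma * (centroid - worst)  # inside contraction
        fic = F(yic)
        new, fnew = (yic, fic) if fic < values[-1] else (None, None)

    if new is None:                                  # shrink: keep only the best vertex
        vertices = best + delta * (vertices - best)
        values = np.array([F(v) for v in vertices])
    else:                                            # non-shrink: replace the worst vertex
        vertices[-1], values[-1] = new, fnew

    order = np.argsort(values)                       # re-sort for the next iteration
    return vertices[order], values[order]
```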
Here, we apply the proposed algorithm to solve the constrained eigenvalue minimization problem (4) for a given incomplete PCM. Moreover, we use Saaty's inconsistency ratio (CR) as our inconsistency measure hereinafter.
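For reference, Saaty's inconsistency ratio can be computed from the minimized Perron eigenvalue as sketched below; the random index values are the commonly tabulated ones, and the example eigenvalue is hypothetical.

```python
def consistency_ratio(lambda_max, n):
    """Saaty's CR = CI / RI with CI = (lambda_max - n) / (n - 1).
    RI holds Saaty's commonly tabulated random indices for n = 3..10."""
    RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32,
          8: 1.41, 9: 1.45, 10: 1.49}
    CI = (lambda_max - n) / (n - 1)
    return CI / RI[n]

# Example with a hypothetical minimized Perron eigenvalue of a 4x4 completed PCM.
print(consistency_ratio(4.06, n=4))
```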