Article

Meta-Optimization of Dimension Adaptive Parameter Schema for Nelder–Mead Algorithm in High-Dimensional Problems

Department of Electronics, Faculty of Electrical Engineering, University of Ljubljana, SI-1000 Ljubljana, Slovenia
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(13), 2288; https://doi.org/10.3390/math10132288
Submission received: 20 May 2022 / Revised: 21 June 2022 / Accepted: 27 June 2022 / Published: 30 June 2022
(This article belongs to the Special Issue Optimization Theory and Applications)

Abstract
Although proposed more than half a century ago, the Nelder–Mead simplex search algorithm is still widely used. Four numeric constants define the operations and behavior of the algorithm. The algorithm with the original constant values performs fine on most low-dimensional, but poorly on high-dimensional, problems. Therefore, to improve its behavior in high dimensions, several adaptive schemas setting the constants according to the problem dimension were proposed in the past. In this work, we present a novel adaptive schema obtained by a meta-optimization procedure. We describe a schema candidate with eight parameters subject to meta-optimization and define an objective function evaluating the candidate’s performance. The schema is optimized on up to 100-dimensional problems using the Parallel Simulated Annealing with Differential Evolution global method. The obtained global minimum represents the proposed schema. We compare the performance of the optimized schema with the existing adaptive schemas. The data profiles on the Gao–Han modified quadratic, Moré–Garbow–Hilstrom, and CUTEr (Constrained and Unconstrained Testing Environment, revisited) benchmark problem sets show that the obtained schema outperforms the existing adaptive schemas in terms of accuracy and convergence speed.

1. Introduction

Today, we can find optimization algorithms in almost every field of science and technology. A number of optimization algorithms exist, invented to fulfill various requirements regarding convergence rate, precision, robustness, and more. One of them (the Nelder–Mead simplex algorithm [1]), although more than half a century old, is still extensively used for solving a wide range of continuous optimization problems. The algorithm’s popularity is due to its simplicity and reasonably good performance observed in practical optimization cases.
Despite its popularity, the algorithm has proven convergence issues. McKinnon [2] presented a family of two-dimensional functions that cause the Nelder–Mead Algorithm (NMA) to converge to a non-stationary point. Galántai [3] provided a sufficient condition for repeated inside contractions in two dimensions, causing non-convergence. Only a relatively modest theoretical background on the algorithm's convergence is available, and it covers problems of at most two dimensions. Lagarias et al. [4] proved various limited convergence results for two-dimensional strictly convex objective functions. Further, Lagarias et al. [5] proved convergence for a restricted NMA, a version of the NMA without expansion steps, on two-dimensional strictly convex $C^2$ functions with bounded level sets.
Many modifications of the original NMA were proposed to avoid the algorithm's known deficiencies and provide convergence. Kelley [6] proposed an oriented restart in case of detected stagnation. Tseng's [7] and Nazareth and Tseng's [8] versions of the algorithm guarantee convergence with a sufficient descent approach. Price et al. [9] used simplex reshaping to achieve convergence on $C^1$ functions, again satisfying sufficient descent conditions. Bűrmen et al. [10], on the other hand, introduced a grid-restrained version of the NMA, thus making the algorithm a pattern search method. The approach was generalized to a successive approximation of the objective function by Bűrmen and Tuma [11].
However, convergence analysis of the unmodified, original NMA stays a hard mathematical problem. No theoretical background is available above two dimensions. Torczon [12] discovered that the NMA fails because search direction and downhill gradient become orthogonal when the problem dimension is large enough. Wright [13] reported that several scholars observed how the NMA deteriorates with dimensionality but without any explanation. Further, Han and Neumann [14] showed that the NMA makes less and less progress per iteration with an increasing problem dimension. Gao and Han [15] suggested that poor performance in high dimensions is due to an increasing fraction of reflection steps.
Researchers addressed the poor performance of the original NMA in high dimensions in two ways. First, they proposed various algorithm modifications to improve the convergence rate in high-dimensional parameter spaces. Fajfar et al. [16] used genetic programming to evolve a direct search procedure using reflection, expansion, and contraction steps. Musafer’s [17] modification adjusts simplex size and direction by performing different NMA steps on various axial combinations. Fajfar et al. [18] proposed random centroid perturbation for improving the search direction, to name a few.
The second approach deals with NMA parameters and does not modify the algorithm itself in any way. Gao and Han [15] proposed the first schema of dimension-dependent NMA parameters to maintain the algorithm’s descent property in high dimensions. Kumar and Suri [19] suggested another schema obtained from parameter sensitivity analysis on five test functions. Mehta [20], on the other hand, observed that two schemas based on Chebyshev spaced points outperform Gao–Han’s and Kumar–Suri’s schemas.
This paper presents an adaptive parameter schema for the NMA obtained as the global minimum of a meta-optimization procedure. We used the Parallel Simulated Annealing with Differential Evolution (PSADE) robust global optimization method [21], running on a cluster of personal computers, as the meta-optimization method. The subjects of meta-optimization are eight coefficients whose values define an individual adaptive parameter Schema Candidate (SC). The schema's mathematical formulation is set in advance and does not evolve during the procedure. In each iteration of the meta-optimization, we run the NMA using the SC on several test functions and evaluate the SC. The global minimum represents the best adaptive parameter schema corresponding to the predefined mathematical formulation of the schema and the used objective function. We compare the performance of the NMA using the optimized schema with the NMA using the existing schemas on the modified quadratic, i.e., Gao–Han (GH) [15], Moré–Garbow–Hilstrom (MGH) [22], and Constrained and Unconstrained Testing Environment, revisited (CUTEr) [23], sets of benchmark problems. Since the proposed adaptive parameter schema results from a meta-optimization procedure, we do not provide a mathematical background explaining the schema's performance. However, the proposed schema outperforms all the other schemas and thus, to the best of our knowledge, currently represents the best dimension adaptive parameter schema for the NMA.

2. Adaptive Parameter Schemas for NMA

Each mathematical symbol used in Section 2 and Section 3 is explained at first use. However, for clarification, all the symbols are also listed in the Abbreviations section at the end of the paper.
The original NMA [1,4] is well known; therefore, we provide only a brief introduction. The NMA is an unconstrained minimization algorithm operating on an objective function $f$ of $n_p$ variables, handling $(n_p + 1)$ points $P_i$, i.e., simplex vertices, with objective function values $f(P_i)$, $i = 0, 1, \ldots, n_p$. In one iteration of the NMA, the following steps are performed:
  • Order and relabel the simplex vertices to satisfy $f(P_0) \le f(P_1) \le \ldots \le f(P_{n_p})$. Calculate the centroid point $\bar{P} = \frac{1}{n_p} \sum_{i=0}^{n_p-1} P_i$, excluding the vertex with the highest objective function value.
  • Reflect $P_{n_p}$ over $\bar{P}$ to obtain the reflected point $P_r = \bar{P} + \alpha (\bar{P} - P_{n_p})$, $\alpha > 0$.
  • If $f(P_r) < f(P_0)$, expand $P_r$ to obtain the expanded point
    $P_e = \bar{P} + \frac{\beta}{\alpha} (P_r - \bar{P}) = \bar{P} + \beta (\bar{P} - P_{n_p})$, $\beta > \alpha$.
    If $f(P_e) < f(P_r)$, replace $P_{n_p}$ with $P_e$ and end the iteration.
  • If $f(P_r) < f(P_{n_p - 1})$, replace $P_{n_p}$ with $P_r$ and end the iteration.
  • If $f(P_r) < f(P_{n_p})$, contract $P_r$ towards $\bar{P}$ to obtain the contracted point
    $P_{rc} = \bar{P} + \frac{\gamma}{\alpha} (P_r - \bar{P}) = \bar{P} + \gamma (\bar{P} - P_{n_p})$, $\gamma < \alpha$.
    If $f(P_{rc}) < f(P_{n_p})$, replace $P_{n_p}$ with $P_{rc}$ and end the iteration.
  • If $f(P_r) \ge f(P_{n_p})$, contract $P_{n_p}$ towards $\bar{P}$ to obtain the contracted point
    $P_{nc} = \bar{P} + \gamma (P_{n_p} - \bar{P}) = \bar{P} - \gamma (\bar{P} - P_{n_p})$.
    If $f(P_{nc}) < f(P_{n_p})$, replace $P_{n_p}$ with $P_{nc}$ and end the iteration.
  • Shrink the entire simplex towards $P_0$: $P_i := P_0 + \delta (P_i - P_0)$, $\delta < 1$, $i = 1, 2, \ldots, n_p$.
NMA iterations are repeated until convergence is achieved. The algorithm's behavior depends on the values of the $\alpha$ (reflection), $\beta$ (expansion), $\gamma$ (contraction), and $\delta$ (shrink) parameters. The NMA default values are
$$\alpha = 1, \quad \beta = 2, \quad \gamma = \tfrac{1}{2}, \quad \delta = \tfrac{1}{2}. \qquad (1)$$
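For illustration, the listing below sketches one NMA iteration following the steps above, with the default parameter values (1) as defaults. It is a minimal Python/NumPy sketch with names of our choosing, not the PyOPUS implementation used later in the paper.

```python
import numpy as np

def nma_iteration(f, P, alpha=1.0, beta=2.0, gamma=0.5, delta=0.5):
    """Perform one Nelder-Mead iteration on the simplex P (rows are vertices).

    Illustrative sketch of the steps listed above; returns the updated simplex.
    """
    fvals = np.array([f(p) for p in P])
    order = np.argsort(fvals)
    P, fvals = P[order], fvals[order]              # f(P_0) <= ... <= f(P_np)
    centroid = P[:-1].mean(axis=0)                 # centroid excluding the worst vertex
    worst, f_worst = P[-1].copy(), fvals[-1]

    reflected = centroid + alpha * (centroid - worst)
    f_r = f(reflected)

    if f_r < fvals[0]:                             # expansion attempt
        expanded = centroid + beta * (centroid - worst)
        if f(expanded) < f_r:
            P[-1] = expanded
            return P
    if f_r < fvals[-2]:                            # accept the reflected point
        P[-1] = reflected
        return P
    if f_r < f_worst:                              # contract the reflected point
        contracted = centroid + gamma * (centroid - worst)
    else:                                          # contract the worst point
        contracted = centroid - gamma * (centroid - worst)
    if f(contracted) < f_worst:
        P[-1] = contracted
        return P
    return P[0] + delta * (P - P[0])               # shrink towards the best vertex
```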
Adaptive parameter schemas define the NMA parameter values as functions of the number of variables $n_p$. The existing schemas considered in this paper (Gao–Han Schema (GHS) [15], Kumar–Suri Schema (KSS) [19], Chebyshev Crude Schema (CCS) [20], and Chebyshev Refined Schema (CRS) [20]) are
$$\begin{aligned}
\mathrm{GHS}: \quad & \alpha = 1, \quad \beta = 1 + \frac{2}{n_p}, \quad \gamma = \frac{3}{4} - \frac{1}{2 n_p}, \quad \delta = 1 - \frac{1}{n_p} \\
\mathrm{KSS}: \quad & \alpha = 1 + \frac{3}{5 n_p}, \quad \beta = \frac{6}{5}, \quad \gamma = \frac{19}{20} - \frac{3 n_p - 3}{n_p^2}, \quad \delta = 1 - \frac{1}{n_p} \\
\mathrm{CCS}: \quad & \alpha = 1 + \cos\frac{(n_p - 1 - n_p \% 2)\,\pi}{2 n_p}, \quad \beta = 1 + \cos\frac{(n_p - 3 - n_p \% 2)\,\pi}{2 n_p}, \quad \gamma = 1 + \cos\frac{(n_p + 3 + n_p \% 2)\,\pi}{2 n_p}, \quad \delta = 1 + \cos\frac{(n_p + 1 + n_p \% 2)\,\pi}{2 n_p} \\
\mathrm{CRS}: \quad & \alpha = 1 + \cos\frac{(n_c - 1)\,\pi}{2 n_c}, \quad \beta = 1 + \cos\frac{(n_c - 3)\,\pi}{2 n_c}, \quad \gamma = 1 + \cos\frac{(n_c + 5)\,\pi}{2 n_c}, \quad \delta = 1 + \cos\frac{(n_c + 3)\,\pi}{2 n_c}, \quad n_c = 2\left(9 + \left\lfloor \frac{n_p - 1}{5} \right\rfloor\right)
\end{aligned} \qquad (2)$$
where $\%$ denotes the modulo operation and $\lfloor \cdot \rfloor$ the floor function.
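As an illustration of how such a schema is used in practice, the following minimal Python helpers return the parameter quadruple ($\alpha$, $\beta$, $\gamma$, $\delta$) for a given dimension; only the GHS and CCS variants are shown, and the remaining schemas can be implemented analogously (function names are ours).

```python
import math

def gao_han_schema(n_p):
    """Gao-Han adaptive NMA parameters (alpha, beta, gamma, delta) [15]."""
    return (1.0, 1.0 + 2.0 / n_p, 0.75 - 0.5 / n_p, 1.0 - 1.0 / n_p)

def chebyshev_crude_schema(n_p):
    """Chebyshev Crude Schema [20], following Eq. (2) above."""
    parity = n_p % 2
    c = lambda k: 1.0 + math.cos(k * math.pi / (2.0 * n_p))
    return (c(n_p - 1 - parity), c(n_p - 3 - parity),
            c(n_p + 3 + parity), c(n_p + 1 + parity))
```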
In general, the initial simplex vertices $P_i$ are random. In this paper, however, to ensure repeatability of the results, the initial simplex is generated from the starting point $x_0$ using Pfeffer's method [15]. The first vertex is the starting point, $P_0 = x_0$. The remaining vertices are generated by varying the $i$th component, $P_i = x_0 + \epsilon_i e_i$, where $e_i$ is the $i$th unit vector and $\epsilon_i$ is given by
$$\epsilon_i = \begin{cases} 0.05 \, x_0^T e_i & x_0^T e_i \neq 0 \\ 0.00025 & x_0^T e_i = 0 \end{cases}, \quad i = 1, 2, \ldots, n_p. \qquad (3)$$
The starting point $x_0$ is $[1, 1, \ldots, 1]^T$ for the GH benchmarks [15], and as given in [22] for the MGH benchmarks.
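A minimal sketch of Pfeffer's construction (3) in Python (illustrative, not the PyOPUS code):

```python
import numpy as np

def pfeffer_initial_simplex(x0):
    """Build the (n_p + 1)-vertex starting simplex from x0 by Pfeffer's method (3)."""
    x0 = np.asarray(x0, dtype=float)
    n_p = x0.size
    simplex = np.tile(x0, (n_p + 1, 1))          # row 0 is P_0 = x0
    for i in range(n_p):
        eps = 0.05 * x0[i] if x0[i] != 0.0 else 0.00025
        simplex[i + 1, i] += eps                 # P_{i+1} = x0 + eps_i * e_i
    return simplex

# Example: starting simplex for a 30-dimensional all-ones starting point
# P = pfeffer_initial_simplex(np.ones(30))
```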
A Nelder–Mead run terminates when the simplex becomes too flat or shrinks below a certain size. In this paper, we use the tolerance $Tol_f$ for simplex flatness and $Tol_X$ for simplex size. A Nelder–Mead run stops when both criteria (4) are met:
$$\max_{i = 1, \ldots, n_p} |f(P_i) - f(P_0)| < Tol_f, \qquad \max_{j = 0, \ldots, n_p - 1} \; \max_{i = 1, \ldots, n_p} |P_{ij} - P_{0j}| < Tol_X, \qquad (4)$$
where $P_i = [P_{i0}, P_{i1}, \ldots, P_{i(n_p - 1)}]^T$ is the $i$th vertex of the simplex.
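The stopping rule (4) translates directly into code; a small illustrative helper, assuming the simplex is ordered so that the best vertex comes first:

```python
import numpy as np

def nma_terminated(P, fvals, tol_f=1e-4, tol_x=1e-4):
    """Tolerance-based stopping rule (4).

    P     : (n_p + 1, n_p) simplex with the best vertex in row 0,
    fvals : corresponding objective function values.
    Both the flatness and the size criterion must hold simultaneously.
    """
    flat = np.max(np.abs(fvals[1:] - fvals[0])) < tol_f    # simplex flatness
    small = np.max(np.abs(P[1:] - P[0])) < tol_x           # simplex size
    return flat and small
```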

3. Optimization of the Adaptive Parameter Schema

The default parameter values (1) and the adaptive parameter schema functions (2), shown in Figure 1, exhibit, in general, similar behavior. By choosing appropriate values $c_{0p}$ and $c_{1p}$, an individual parameter $c$ from a particular schema can be closely fitted with the function $c = c_{0p} + c_{1p}/n$.
At this point, a question arises: can a better adaptive parameter schema of the formulation (5) be obtained by choosing the right values for $c_{0\alpha}$, $c_{1\alpha}$, $c_{0\beta}$, $c_{1\beta}$, etc.? Do such values exist, and what are they? A meta-optimization procedure could provide some answers.
$$\alpha = c_{0\alpha} + \frac{c_{1\alpha}}{n_p}, \quad \beta = c_{0\beta} + \frac{c_{1\beta}}{n_p}, \quad \gamma = c_{0\gamma} + \frac{c_{1\gamma}}{n_p}, \quad \delta = c_{0\delta} + \frac{c_{1\delta}}{n_p} \qquad (5)$$
Let us first define the meta-optimization procedure. As in any optimization, we need optimization parameters, an objective function, and an optimization method. The optimization parameters ($c_{0\alpha}, \ldots, c_{1\delta}$) follow from the mathematical formulation (5) describing an SC. Therefore, we have an eight-dimensional meta-optimization parameter space.
The meta-optimization objective function measures the weighted difference in data profiles [24] between the best reference schema and an SC. For a mathematical formulation of the objective function, some definitions are needed. A data profile function of a schema $s$ over a set of benchmark problems $\mathcal{P}$,
$$d_s^{\mathcal{P}}(\kappa) = \frac{\left| \left\{ p \in \mathcal{P} : \frac{t_{p,s}}{n_p + 1} \le \kappa \right\} \right|}{|\mathcal{P}|}, \qquad (6)$$
defines the fraction of problems in the set $\mathcal{P}$ that schema $s$ solves within $\kappa$ simplex gradient estimates. $s$ is a schema from the set of schemas $S$ ($s \in S = \{$NMA, GHS, KSS, CCS, CRS, SC$\}$). $|\cdot|$ denotes the cardinality of a set. $t_{p,s}$ is the number of objective function evaluations needed by schema $s$ to achieve convergence on problem $p$, and $n_p$ is the problem dimension. Since one simplex gradient estimate corresponds to $n_p + 1$ objective function evaluations, the fraction $\frac{t_{p,s}}{n_p + 1}$ is the number of simplex gradient estimates required for convergence. Convergence is achieved when
$$f(x) \le f_L + \tau \, (f(x_0) - f_L), \qquad (7)$$
where $x$ is an evaluated point in the $n_p$-dimensional parameter space, and $f_L$ is the lowest objective function value reached by any of the schemas $s \in S$ within $\kappa_{\max}$ simplex gradient estimates. The tolerance $\tau$ specifies the accuracy level. Moré and Wild [24] use tolerance values $10^{-1}$, $10^{-3}$, $10^{-5}$, and $10^{-7}$. We set the convergence condition tolerance $\tau$ to $10^{-7}$, as Mehta did in his work [20]. If a particular schema $s$ fails to satisfy condition (7) for problem $p$, then $t_{p,s}$ is set to infinity.
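Both the convergence test (7) and the data profile (6) are straightforward to compute once the evaluation counts $t_{p,s}$ are known; a minimal sketch with illustrative names:

```python
import numpy as np

def converged(f_x, f_x0, f_L, tau=1e-7):
    """Convergence test (7) for a single problem."""
    return f_x <= f_L + tau * (f_x0 - f_L)

def data_profile(t, dims, kappa):
    """Data profile (6) of one schema over a problem set.

    t    : evaluation counts t_{p,s} per problem (np.inf if (7) was never met),
    dims : problem dimensions n_p,
    kappa: number of simplex gradient estimates.
    Returns the fraction of problems solved within kappa estimates.
    """
    t = np.asarray(t, dtype=float)
    dims = np.asarray(dims, dtype=float)
    return np.mean(t / (dims + 1.0) <= kappa)
```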
The final meta-optimization objective function $h(\mathrm{SC})$ evaluates a particular SC, defined by the eight meta-optimization parameters $c_{0\alpha}, \ldots, c_{1\delta}$, as
$$h(\mathrm{SC}) = \sum_{\mathcal{P} \in X} \; \sum_{\kappa = 1}^{\kappa_{\max}} \left( \max_{r \in R} d_r^{\mathcal{P}}(\kappa) - d_{\mathrm{SC}}^{\mathcal{P}}(\kappa) \right) \times \begin{cases} w_{\mathcal{P}}^{+} & \max_{r \in R} d_r^{\mathcal{P}}(\kappa) > d_{\mathrm{SC}}^{\mathcal{P}}(\kappa) \\ w_{\mathcal{P}}^{-} & \max_{r \in R} d_r^{\mathcal{P}}(\kappa) \le d_{\mathrm{SC}}^{\mathcal{P}}(\kappa) \end{cases} \qquad (8)$$
The GH and MGH benchmark problems are treated separately, $X = \{\mathrm{GH}, \mathrm{MGH}\}$. For every number of simplex gradient estimates $\kappa$, the data profile of the SC is compared with the best of the reference profiles, $r \in R = S \setminus \{\mathrm{SC}\}$. The difference is weighted with $w_{\mathcal{P}}^{+}$ when the best reference is better, and with $w_{\mathcal{P}}^{-}$ otherwise. The weights $w_{\mathcal{P}}^{+}$ and $w_{\mathcal{P}}^{-}$ define a trade-off between under- and over-achieving the optimization goal (which in turn is the best schema's performance). Usually, under-achieving is penalized more than over-achieving is rewarded. In our case, the weight values were set to $w_{\mathrm{GH}}^{+} = w_{\mathrm{MGH}}^{+} = 10$ and $w_{\mathrm{GH}}^{-} = w_{\mathrm{MGH}}^{-} = 1$.
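A sketch of how (8) can be evaluated for one benchmark set, assuming the data profiles of the SC and of all reference schemas have already been computed on a common $\kappa$ grid (illustrative names):

```python
import numpy as np

def meta_objective(d_sc, d_refs, w_plus=10.0, w_minus=1.0):
    """Contribution of one benchmark problem set to h(SC), Eq. (8).

    d_sc  : data profile values d_SC(kappa) for kappa = 1..kappa_max,
    d_refs: 2-D array of reference data profiles, one row per schema in R.
    Under-achievement (the best reference beats the SC) is weighted by w_plus,
    over-achievement by w_minus.
    """
    best_ref = np.max(d_refs, axis=0)            # max over r in R for each kappa
    diff = best_ref - d_sc                       # positive where the SC lags behind
    weights = np.where(diff > 0.0, w_plus, w_minus)
    return float(np.sum(weights * diff))

# h(SC) is the sum of meta_objective(...) over the GH and MGH problem sets.
```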
Finally, we have to choose an optimization method to perform our meta-optimization procedure. We are searching for a global minimum in an eight-dimensional parameter space; therefore, a global optimization method with proven convergence can do the job. We chose PSADE [21], a parallel version of [25], since it is available in the PyOPUS Python package [26]. Among others, the PyOPUS package includes optimization algorithms (the original NMA included), parallel processing support, and benchmark problems (GH, MGH, and CUTEr problems included), i.e., all the ingredients needed in our meta-optimization procedure. It can be found in the Python Package Index (PyPI) software repository. The PSADE method exhibited good performance on global benchmark functions as well as on real optimization problems [27,28,29]. Further, PSADE is an asynchronous global method achieving speedups of up to the number of slave computational cores when run in parallel. We ran PSADE on a cluster of 25 personal computers. Instead of PSADE, one of a plethora of newer global methods, e.g., [30,31,32,33], could be used. However, besides faster convergence, we do not expect significantly better results.
A meta-optimization search for a better NMA parameter schema requires significant computational power. An individual SC, represented by eight meta-optimization parameter values $c_{0\alpha}, \ldots, c_{1\delta}$, has to be evaluated against the reference parameter schemas in every meta-optimization iteration. One SC evaluation requires as many Nelder–Mead runs as there are problems included in the objective function evaluation. In general, a single Nelder–Mead run stops when the termination criteria are met, e.g., when the simplex becomes flat or shrinks below the tolerance. However, additional improvement is possible if the procedure runs further. When driven beyond the tolerances, a non-convergent SC may become convergent, although rather slow. Gao and Han [15], Kumar and Suri [19], and Mehta [20] all set an absolute limit on the number of objective function evaluations to $10^6$, i.e., 9900–90,909 simplex gradient estimates. However, in their results, 1000–2000 estimates are needed on average for the NMA using an adaptive parameter schema to converge. They set the termination tolerances $Tol_f$, $Tol_X$ in the range $10^{-10}$–$10^{-4}$. Therefore, after some preliminary tests, we set $\kappa_{\max}$ to 5000; more would be better. After some preliminary experimental optimization runs, we also established that around $10^6$ meta-optimization iterations are required to achieve convergence in an unconstrained eight-dimensional parameter space using the robust PSADE global optimization method [21]. Again, the more, the better. Thus, the total number of required objective function evaluations can be estimated as the number of simplex gradient estimates per benchmark problem, times the number of simplex vertices, summed over all benchmark problems, times the number of meta-optimization iterations. For the GH and MGH benchmarks altogether, this is over $10^{13}$ evaluations.
Therefore, brute force is not very promising. Instead, we tuned the meta-optimization parameters by conducting a series of shorter meta-optimization runs. We varied the parameter space constraints, the NMA termination tolerances $Tol_f$ and $Tol_X$, the gradient estimates limit $\kappa_{\max}$ per Nelder–Mead run, and the meta-optimization iteration limit, and we also experimented with the objective function definition at the beginning. By observing the results, we gradually eliminated parts that were not essential; e.g., setting the NMA termination tolerances $Tol_f$ and $Tol_X$ too low can significantly extend the meta-optimization procedure without producing a significantly better result. The final meta-optimization parameter values are as follows: the parameter space is a discrete eight-dimensional box with a 0.01 grid and constraints set to $[c_{0\alpha}, c_{1\alpha}, c_{0\beta}, c_{1\beta}, c_{0\gamma}, c_{1\gamma}, c_{0\delta}, c_{1\delta}]^T \in [0.80, 1.20] \times [0.20, 0.60] \times [0.85, 1.25] \times [0.35, 0.75] \times [0.65, 1.05] \times [-0.50, -0.10] \times [0.05, 0.45] \times [-0.40, 0.00]$. The NMA termination tolerances are set to $Tol_f = Tol_X = 10^{-4}$. The simplex gradient estimates limit was set to $\kappa_{\max} = 5000$, enough to catch degenerated SCs. The final objective function formulation is given in (8). The meta-optimization iteration limit was set to $10^6$.
The final meta-optimized parameter values, i.e., the global minimum of (8), are
$$c_{0\alpha} = 1.02, \quad c_{1\alpha} = 0.31, \quad c_{0\beta} = 1.06, \quad c_{1\beta} = 0.53, \quad c_{0\gamma} = 0.82, \quad c_{1\gamma} = -0.27, \quad c_{0\delta} = 0.28, \quad c_{1\delta} = -0.19. \qquad (9)$$
This is the global minimum when (5) is used as the adaptive parameter schema; we chose formulation (5) because of its similarity to the existing schemas.
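In practice, the proposed schema is a drop-in replacement for the default NMA constants; a small helper of the following kind returns ($\alpha$, $\beta$, $\gamma$, $\delta$) for a given problem dimension (illustrative code using the coefficients (9)):

```python
def optimized_schema(n_p):
    """NMA parameters (alpha, beta, gamma, delta) from formulation (5)
    with the meta-optimized coefficients (9)."""
    return (1.02 + 0.31 / n_p,
            1.06 + 0.53 / n_p,
            0.82 - 0.27 / n_p,
            0.28 - 0.19 / n_p)

# Example: parameters for a 100-dimensional problem
# alpha, beta, gamma, delta = optimized_schema(100)
```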

4. Evaluation of the Optimized Schema with Discussion

In this section, we present and discuss the properties of the optimized schema (9). We analyze its performance and compare it with the other schemas, (1) and (2). Since the schema is a result of the meta-optimization procedure, we do not deal with the issue of why the schema performs well. A mathematical evaluation of an arbitrary schema is given with the objective function definition (8). However, an analytical solution to the defined meta-optimization problem exceeds the scope of this paper.
The optimized parameter schema (9) is compared with the original NMA (1) and the existing adaptive parameter schemas (2), GHS, KSS, CCS, and CRS, in Figure 2. We can observe that the optimized schema follows the same pattern as the other schemas. However, the optimized schema approaches its high-dimensional value faster, which is a consequence of smaller $|c_{1p}/c_{0p}|$ ratios. Otherwise, the optimized schema curves do not deviate significantly from the others. The shrink parameter $\delta$ is an exception. While $\delta$ in all the other adaptive schemas approaches 1, the optimized schema's high-dimensional $\delta$ value is $c_{0\delta} = 0.28$, which is far lower, even lower than the 0.5 of the original NMA. On the other hand, the shrink parameter turns out to be insignificant for the GH and MGH benchmarks. The fraction of shrink steps is at most 0.5% (for the Penalty I and Penalty II problems), and for the optimized schema it is 0.0% on all GH and MGH benchmarks.
To evaluate the obtained schema further, we compare it in terms of accuracy and speed. Note that the meta-optimization objective function (8) in combination with NMA termination tolerances rewards speed. Therefore, accuracy was not a subject of meta-optimization.
Table 1 shows accuracy results for the GH modified quadratic function (10) up to problem dimension $n_p = 100$. The parameter $\epsilon \ge 0$ defines the condition number of the matrix $D$, and $\sigma \ge 0$ specifies the deviation from the quadratic form. The minimum value of (10) is zero ($\min_{x \in \mathbb{R}^{n_p}} f(x) = 0$) for any $\epsilon$, $\sigma$, or $n_p$. An individual Nelder–Mead run was stopped after $\kappa_{\max} = 25{,}000$ simplex gradient estimates, that is, after $N_{\mathrm{eval}} = 25{,}000 \, (n_p + 1)$ objective function evaluations. We applied no tolerance-based termination criteria, i.e., $Tol_f = Tol_X = 0$. The table shows the lowest achieved objective function values. A schema is considered accurate if the achieved minimum value is correct to at least six decimal places, i.e., when $f(x) < 5 \times 10^{-7}$. The accurate schema values are shown in bold. If a schema does not converge according to condition (7) for $\tau = 10^{-7}$, the value is shown in italics.
$$f(x) = x^T D x + \sigma \, (x^T B x)^2, \quad D = \mathrm{diag}\left[(1+\epsilon), (1+\epsilon)^2, \ldots, (1+\epsilon)^{n_p}\right], \quad B = U^T U, \quad U = \begin{bmatrix} 1 & \cdots & 1 \\ \vdots & \ddots & \vdots \\ 1 & \cdots & 1 \end{bmatrix} \qquad (10)$$
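For completeness, (10) is easy to implement; a short sketch, assuming the all-ones matrix $U$ as written above:

```python
import numpy as np

def gh_modified_quadratic(x, eps=0.05, sigma=0.0001):
    """GH modified quadratic test function (10).

    D is diagonal with entries (1+eps)^i, i = 1..n_p, and B = U^T U with U
    the all-ones matrix, so x^T B x = n_p * (sum(x))^2. The minimum value is 0.
    """
    x = np.asarray(x, dtype=float)
    n_p = x.size
    d = (1.0 + eps) ** np.arange(1, n_p + 1)      # diagonal of D
    quad = np.dot(d * x, x)                       # x^T D x
    xBx = n_p * np.sum(x) ** 2                    # x^T (U^T U) x for an all-ones U
    return quad + sigma * xBx ** 2

# Example: value at the starting point [1, 1, ..., 1]^T for n_p = 10
# print(gh_modified_quadratic(np.ones(10)))
```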
Given 25,000 simplex gradient estimates, the optimized schema is accurate and convergent for all 40 GH benchmark problems, as all the existing schemas also are. As expected, the original NMA encounters accuracy and convergence problems in high dimensions.
Accuracy results for the MGH benchmark problems [22] are shown in Table 2. The same rules (25,000 simplex gradient estimates, $Tol_f = Tol_X = 0$, etc.) apply. The minima of the used MGH functions are all zero, with the exception of the Penalty I and Penalty II functions. For $n_p = 10$, the corresponding minima are 7.0876515…$\times 10^{-5}$ and 0.00029366054…, respectively. Thus, the six-digit accuracy criteria are given by $f_{\mathrm{Penalty\,I}}(x) < 7.087655 \times 10^{-5}$ and $f_{\mathrm{Penalty\,II}}(x) < 0.0002936615$.
Again, with 25,000 simplex gradient estimates available, the optimized schema is accurate and convergent for all MGH benchmarks, except for four trigonometric functions, where the result approaches the minimum without reaching it. We can observe similar behavior for the other schemas as well. Furthermore, for the trigonometric function benchmarks, the schemas achieved a relatively small objective function value reduction, which is reflected in the relation $\tau \, f(x_0) / f_L \ll 1$. Consequently, $f_L \gg \tau \, (f(x_0) - f_L)$, and the convergence condition (7) degenerates into $f(x) \le f_L$. Strictly following the convergence condition, only schemas reaching the lowest value $f_L$ are considered convergent. Nevertheless, the optimized schema managed to produce the lowest objective function value in all trigonometric function benchmarks except one.
The convergence speed of the obtained schema (9) is compared to the other schemas, (1) and (2), using data profiles (6) and (7) [24]. Figure 3 shows data profiles for the GH and MGH benchmark sets, separately and combined. The profiles are calculated at $\tau = 10^{-7}$ with $\kappa_{\max} = 25{,}000$ simplex gradient estimates and without tolerance-based algorithm termination ($Tol_f = Tol_X = 0$). The graphs are shown for up to 15,000 simplex gradient estimates.
Graphs in Figure 3 reveal a slight advantage in terms of speed for the optimized schema over the existing adaptive parameter schemas. The schemas perform similarly when only GH benchmarks are considered. Although the optimized schema can be declared as the fastest (solves the highest percentage of problems at almost any given κ ), the remaining schemas quickly follow. All the adaptive schemas solve 100% of GH problems after approximately 2500 simplex gradient estimates. The CCS is the first to achieve this goal at ~2100 simplex gradient estimates. The original NMA, on the other hand, manages to be competitive with the adaptive schemas for the first 10% of problems. With the lower-dimensional problems solved, it starts to lag, finally solving less than 30% of GH problems.
The advantage of the optimized schema becomes notable when observing the MGH benchmark set. At ~1600 simplex gradient estimates, the optimized schema solves more than 90% of the problems while the remaining adaptive schemas solve up to 83%, and the original NMA merely 50% of the problems. It is clearly the fastest schema and has the highest final percentage of solved problems (98%). Other adaptive schemas solve up to 93%, and the original NMA solves 59% of the problems.
The optimized schema remains the fastest when all GH and MGH benchmark problems are considered. Its advantage over the existing adaptive schemas is 6% at 1000, 5% at 2000, and 3% at 3000 simplex gradient estimates. The original NMA manages to keep up for less than 20% of the problems at ~150 simplex gradient estimates. The final percentage of solved problems is 99% for the optimized schema, up to 97% for the remaining adaptive schemas, and 44% for the original NMA.
In Figure 4, the same measurement of convergence speed is repeated with tolerance-based algorithm termination set to $Tol_f = Tol_X = 10^{-4}$. The profiles are once more calculated for $\tau = 10^{-7}$. The maximum number of simplex gradient estimates $\kappa_{\max} = 25{,}000$ does not play any role in this experiment because none of the individual Nelder–Mead runs ever uses the entire budget of available simplex gradient estimates; the algorithm is always terminated earlier by the tolerance-based criterion. The graphs are shown for up to 8000 simplex gradient estimates. No further progress is made beyond that point.
With tolerance-based termination enabled, the optimized schema performs even better than the other schemas. When we consider only GH benchmarks, the schemas again perform similarly, although, in general, the optimized schema is still slightly faster. The CCS is the first that solves 100% of GH problems in ~2100 estimates. Other adaptive schemas quickly follow. The only notable alteration can be observed for the original NMA which now performs worse and solves less than 20% of GH problems. The original NMA manages to converge in an additional ~10% of GH problems when it is allowed to run beyond the tolerance-based stopping criterion.
The advantage of the optimized schema becomes apparent in MGH data profiles. The KSS, CCS, and CRS keep pace for up to ~370 simplex gradient estimates where ~70% of the MGH problems are solved. The GHS starts to lag at ~240 estimates with ~40% of the problems solved. It catches up at 3000 estimates, finally solving 70% of the MGH problems. KSS, the best of the existing adaptive schemas, ends at 71%. The optimized schema is clearly better with 80% of the problems solved in 730 estimates, ending with 82% at 1660 estimates. The original NMA again performs worse compared to the run without tolerance-based termination, ending with 23% of solved problems.
For all-inclusive GH and MGH benchmark data profiles, the optimized schema starts to stand out from the rest of the existing adaptive schemas at ~60% of solved problems, achieved within ~300 simplex gradient estimates. The optimized schema reaches its final result, i.e., 90% of solved problems, in 2400 estimates. Its advantage over the best existing adaptive schema, i.e., KSS, is 6%. The KSS comes to 84% in 7020 estimates. As expected, the original NMA ends at a modest 21% of solved problems within 1200 estimates.
Figure 5 shows our last convergence speed measurement, on the GH, MGH, and CUTEr benchmark problems, 169 problems in total. The data profiles are shown for the CUTEr benchmark set, and for the GH, MGH, and CUTEr benchmark sets combined. They are calculated at the convergence condition tolerance $\tau = 10^{-7}$ with a $\kappa_{\max} = 25{,}000$ simplex gradient estimates limit. Cases without tolerance-based algorithm termination ($Tol_f = Tol_X = 0$) and with it ($Tol_f = Tol_X = 10^{-4}$) are shown. When the tolerance-based criterion is applied, the Nelder–Mead runs are always terminated before the limit of $\kappa_{\max}$ simplex gradient estimates is reached.
Data profiles in Figure 5 confirm our previous observations. Although not meta-optimized on CUTEr benchmark problems, the optimized schema solves the highest percentage of the problems in all shown cases at any given κ . It makes a difference of 4 to 6% compared to the first follower at ~400 simplex gradient estimates when tolerance-based algorithm termination is applied, and at ~1100 estimates when it is not. The optimized schema reaches or comes close to its final result in ~4000 estimates. It solves 91 to 97% of problems, GHS 84 to 93%, CRS 84 to 93%, KSS 86 to 91%, CCS 81 to 88%, and the original NMA manages 13 to 36%.
Besides the 40 GH and 46 MGH benchmarks, the following problems are included in data profiles in Figure 5: BrownAlmostLinear with dimensions n p = { 20 , 30 , 40 , 50 , 70 , 100 } from MGH set, HilbertQuadratic with dimensions n p = { 10 , 30 , 60 , 90 } , OrenPower [34] with dimensions n p = { 30 , 50 , 60 , 70 , 80 , 90 , 100 } , and ARWHEAD_100, DQDRTIC_50, DQDRTIC_100, SPARSINE_50, SPARSINE_100, CHNROSNB_25, CHNROSNB_50, SCOSINE_10, LIARWHD_100, FLETCHBV_100, DIXON3DQ_100, OSCIGRAD_25, OSCIGRAD_100, NONCVXUN_10, NONCVXUN_100, PENALTY1_50, PENALTY1_100, SINQUAD_50, SINQUAD_100, FLETCBV3_100, PENALTY2_100, TOINTGSS_50, TOINTGSS_100, ARGLINC_50, EXTROSNB_100, COSINE_100, TRIDIA_50, TRIDIA_100, NONDQUAR_100, QUARTC_25, QUARTC_100, FREUROTH_100, WATSON_31, ERRINROS_25, ERRINROS_50, NONDIA_20, NONDIA_30, NONDIA_50, NONDIA_90, NONDIA_100, MANCINO_20, MANCINO_30, DQRTIC_100, ENGVAL1_50, ENGVAL1_100, HILBERTA_10, FLETCBV2_100, TQUARTIC_10, EDENSCH_36, ARGLINA_50, ARGLINA_100, BOX_100, POWELLSG_36, POWELLSG_40, POWELLSG_60, POWELLSG_80, POWELLSG_100, POWER_75, POWER_100, HILBERTB_50, ARGLINB_50, MOREBV_50, BDQRTIC_100, SCURLY10_100, VAREIGVL_49, VAREIGVL_99 from CUTEr benchmark problem set.
The speed of convergence of a particular adaptive parameter schema is mirrored in the descent of the simplex's best value during an individual Nelder–Mead run. The descent rate can be expressed by $\cos\theta$, where $\theta$ is the angle between the search direction $d$ and the gradient of the objective function $\nabla f(x)$:
$$\cos\theta = \frac{d^T \nabla f(x)}{\|d\| \, \|\nabla f(x)\|}. \qquad (11)$$
The search direction $d$ is locally descending when $\cos\theta < 0$. The fastest descent is achieved at $\cos\theta = -1$. According to the NMA definition, the search direction is $d = c \, (\bar{P} - P_{n_p})$, where $c$ is the reflection ($\alpha$), expansion ($\beta$), or contraction ($\gamma$) NMA parameter. The descent rate in a non-shrinking NMA iteration is therefore calculated as
$$\cos\theta = \frac{(\bar{P} - P_{n_p})^T \nabla f(P_{n_p})}{\|\bar{P} - P_{n_p}\| \, \|\nabla f(P_{n_p})\|}. \qquad (12)$$
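When the gradient is available (analytically or numerically), the descent rate (12) can be monitored during a run; an illustrative helper:

```python
import numpy as np

def descent_rate(P, grad_f):
    """Descent rate cos(theta) of Eq. (12) for an ordered simplex P.

    P      : (n_p + 1, n_p) simplex with the worst vertex in the last row,
    grad_f : callable returning the gradient of the objective function.
    Values below 0 indicate a locally descending search direction.
    """
    centroid = P[:-1].mean(axis=0)
    direction = centroid - P[-1]                  # Pbar - P_np
    gradient = grad_f(P[-1])
    return float(np.dot(direction, gradient) /
                 (np.linalg.norm(direction) * np.linalg.norm(gradient)))
```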
The simplex's best value descents and the corresponding descent rates for $n_p = 100$-dimensional GH benchmark problems are shown in Figure 6. The figure shows that all the existing schemas, as well as the optimized adaptive schema, manage to maintain some level of descent during the entire Nelder–Mead run. A higher descent rate, i.e., a more negative $\cos\theta$, ensures faster objective function descent and fulfillment of the termination criteria. The optimized schema is the fastest or nearly the fastest in all shown cases except the $\epsilon = 0.0$, $\sigma = 0.0001$ case, which is partly reflected in its poorer descent rate for that particular case.
The tolerance boundary intersections shown in Figure 6 are the $t_{p,s}$ values from (6). The data profiles in Figure 3, Figure 4 and Figure 5 summarize the tolerance boundary intersections over the entire benchmark problem set.
The absence of a sufficient descent rate is fatal for the original NMA. The original NMA manages some slow descent only in the $\epsilon = 0.0$, $\sigma = 0.0$ case. In all remaining cases, $\cos\theta$ approaches 0: the search direction $d$ becomes orthogonal to the negative gradient, a behavior first observed by Torczon [12]. As a consequence, the original NMA stops descending and does not reach the convergence boundary.
In [4], the authors prove that the NMA does not perform shrink iterations when the objective function is strictly convex. Furthermore, for a uniformly convex objective function, the descent rate is provided by the expansion and contraction iterations [15], although the effect diminishes with the problem dimension $n_p$. In other words, to maintain a sufficient descent rate, an adequate share of expansion and contraction iterations is needed. Note that a uniformly convex function is also strictly convex. Since the modified quadratic function (10) is uniformly convex, the above applies to the GH benchmark set. The share of non-reflection iterations, i.e., expansion and contraction iterations combined, is shown in Figure 7. In general, it declines with the problem dimension for all schemas and ($\epsilon$, $\sigma$) pairs. Nevertheless, all the adaptive parameter schemas manage to keep the share above 5%, which provides a sufficient descent rate. The CCS stands out with its lowest non-reflection share above 35%, yet such a high share is not reflected in better performance. The non-reflection share alone, therefore, does not guarantee high convergence speed. The lowest non-reflection share of the optimized schema is 12%, at $n_p = 100$ in the $\epsilon = 0.0$, $\sigma = 0.0001$ case. The original NMA's share, on the other hand, is never greater than 26% (which in turn is achieved for lower-dimensional problems). With increasing problem dimension, it quickly drops, as low as 0.56% in the worst case, which confirms the poor performance and convergence problems of the original NMA schema (1).

5. Conclusions

Adaptive parameter schemas address the poor performance of the NMA on high-dimensional problems. We used a meta-optimization procedure to find the novel adaptive parameter schema presented in this paper. Although the meta-optimization problem seems simple, brute force optimization is not feasible due to the immense computing power required. To set up the problem, we defined a mathematical formulation of the adaptive parameter schema and an objective function evaluating a schema's performance. We tuned the meta-optimization parameters in a series of shorter meta-optimization runs. The final settings constrain the meta-optimization parameter space, define the termination criteria of a single NMA run used to evaluate an SC's performance, limit the number of NMA iterations to catch non-convergent SCs, and limit the number of meta-optimization iterations. We used PSADE, a robust global parallel asynchronous method.
The performance of the proposed adaptive parameter schema is discussed and compared with the existing schemas. The share of non-reflection iterations and the descent rate do not show any significant deviation of the proposed schema from the existing ones. However, data profiles on GH modified quadratic, MGH, and CUTEr benchmark problems show that the proposed schema outperforms the existing ones in both accuracy and convergence speed. We performed the evaluation with and without tolerance-based termination of the NMA.
The proposed schema is a result of a meta-optimization procedure. We evaluate its performance but, on the other hand, provide no mathematical explanation for why the schema performs so well. The proposed schema is the global minimum determined by the schema’s mathematical formulation and meta-optimization objective function definition.

Author Contributions

Conceptualization, Á.B. and J.P.; methodology, Ž.R. and J.P.; software, J.O. and J.P.; validation, Á.B.; formal analysis, J.P.; investigation, Ž.R., J.O. and J.P.; data curation, J.P.; writing—original draft preparation, J.P.; writing—review and editing, Á.B. and J.P.; supervision, Á.B. and T.T.; project administration, T.T.; funding acquisition, T.T. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge the financial support from the Slovenian Research Agency (research core funding No. P2-0246 ICT4QoL—Information and Communications Technologies for Quality of Life).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
NMA   Nelder–Mead Algorithm
PSADE   Parallel Simulated Annealing with Differential Evolution
SC   Schema Candidate
GH   Gao–Han
MGH   Moré–Garbow–Hilstrom
CUTEr   Constrained and Unconstrained Testing Environment, revisited
GHS   Gao–Han Schema
KSS   Kumar–Suri Schema
CCS   Chebyshev Crude Schema
CRS   Chebyshev Refined Schema
PyPI   Python Package Index
$f$   objective function
$n_p$   number of optimized variables or problem dimension
$P_i$, $\bar{P}$   $i$th simplex vertex and centroid of simplex vertices
$P_r$, $P_e$   reflected point and expanded point
$P_{rc}$, $P_{nc}$   reflected point and worst point contracted towards the centroid
$\alpha$, $\beta$, $\gamma$, $\delta$   NMA reflection, expansion, contraction, and shrink parameters
$n_c$   CRS constant calculated from $n_p$
$x_0$, $x$   starting point and an arbitrary point in the $n_p$-dimensional parameter space
$\epsilon_i$, $e_i$   $i$th component Pfeffer's constant and unit vector
$Tol_f$, $Tol_X$   simplex flatness and size tolerances
$P_{ij}$   $j$th component of the $i$th simplex vertex
$c_{0\alpha}$, $c_{1\alpha}$, $c_{0\beta}$, $c_{1\beta}$, etc.   meta-optimization variables defining an SC
$p$, $\mathcal{P}$   single benchmark problem and set of benchmark problems
$s$, $S$   single parameter schema and set of all parameter schemas
$r$, $R$   reference parameter schema and set of all reference parameter schemas
$X$   set of sets of GH and MGH benchmark problems
$\kappa$   number of simplex gradient estimates
$\kappa_{\max}$   $\kappa$ available for schema evaluation per single $p$
$t_{p,s}$   number of objective function evaluations needed on problem $p$ by schema $s$ to satisfy (7)
$d_s^{\mathcal{P}}(\kappa)$   share of problems from set $\mathcal{P}$ solved by schema $s$ in $\kappa$ simplex gradient estimates
$f_L$   lowest objective function value reached in $\kappa_{\max}$ simplex gradient estimates by any of the schemas $s \in S$ on a particular problem
$\tau$   convergence condition tolerance
$w_{\mathcal{P}}^{+}$   weight used in the meta-optimization objective function when at least one of the reference schemas outperforms the evaluated SC on the set of benchmark problems $\mathcal{P}$
$w_{\mathcal{P}}^{-}$   weight used in the meta-optimization objective function when the evaluated SC outperforms all the reference schemas on the set of benchmark problems $\mathcal{P}$

References

  1. Nelder, J.A.; Mead, R. A simplex method for function minimization. Comput. J. 1965, 7, 308–313.
  2. McKinnon, K.I.M. Convergence of the Nelder-Mead simplex method to a non-stationary point. SIAM J. Optim. 1998, 9, 148–158.
  3. Galántai, A. A convergence analysis of the Nelder-Mead simplex method. Acta Polytech. Hung. 2021, 18, 93–105.
  4. Lagarias, J.C.; Reeds, J.A.; Wright, M.H.; Wright, P.E. Convergence properties of the Nelder-Mead simplex method in low dimensions. SIAM J. Optim. 1998, 9, 112–147.
  5. Lagarias, J.C.; Poonen, B.; Wright, M.H. Convergence of the restricted Nelder-Mead algorithm in two dimensions. SIAM J. Optim. 2012, 22, 501–532.
  6. Kelley, C.T. Detection and remediation of stagnation in the Nelder-Mead algorithm using a sufficient decrease condition. SIAM J. Optim. 1999, 10, 43–55.
  7. Tseng, P. Fortified-descent simplicial search method: A general approach. SIAM J. Optim. 1999, 10, 269–288.
  8. Nazareth, L.; Tseng, P. Gilding the lily: A variant of the Nelder-Mead algorithm based on golden-section search. Comput. Optim. Appl. 2002, 22, 133–144.
  9. Price, C.J.; Coope, I.D.; Byatt, D. A convergent variant of the Nelder-Mead algorithm. J. Optim. Theory Appl. 2002, 113, 5–19.
  10. Bűrmen, Á.; Puhan, J.; Tuma, T. Grid restrained Nelder-Mead algorithm. Comput. Optim. Appl. 2006, 34, 359–375.
  11. Bűrmen, Á.; Tuma, T. Unconstrained derivative-free optimization by successive approximation. J. Comput. Appl. Math. 2009, 223, 62–74.
  12. Torczon, V.J. Multi-Directional Search: A Direct Search Algorithm for Parallel Machines. Ph.D. Thesis, Rice University, Houston, TX, USA, 1989.
  13. Wright, M. Direct search methods: Once scorned, now respectable. In Proceedings of the 16th Dundee Biennial Conference in Numerical Analysis, Dundee, Scotland, 27–30 June 1996; pp. 191–208.
  14. Han, L.; Neumann, M. Effect of dimensionality on the Nelder-Mead simplex method. Optim. Methods Softw. 2006, 21, 1–16.
  15. Gao, F.; Han, L. Implementing the Nelder-Mead simplex algorithm with adaptive parameters. Comput. Optim. Appl. 2012, 51, 259–277.
  16. Fajfar, I.; Puhan, J.; Bűrmen, Á. Evolving a Nelder-Mead algorithm for optimization with genetic programming. Evol. Comput. 2017, 25, 351–373.
  17. Musafer, H.A.; Mahmood, A. Dynamic Hassan–Nelder-Mead with simplex free selectivity for unconstrained optimization. IEEE Access 2018, 6, 39015–39026.
  18. Fajfar, I.; Bűrmen, Á.; Puhan, J. The Nelder-Mead simplex algorithm with perturbed centroid for high-dimensional function optimization. Optim. Lett. 2019, 13, 1011–1025.
  19. Kumar, G.N.S.; Suri, V.K. Multilevel Nelder-Mead's simplex method. In Proceedings of the 2014 9th International Conference on Industrial and Information Systems (ICIIS), Gwalior, India, 15–17 December 2014; Volume 9, pp. 1–6.
  20. Mehta, V.K. Improved Nelder-Mead algorithm in high dimensions with adaptive parameters based on Chebyshev spacing points. Eng. Optim. 2020, 52, 1814–1828.
  21. Olenšek, J.; Tuma, T.; Puhan, J.; Bűrmen, Á. A new asynchronous parallel global optimization method based on simulated annealing and differential evolution. Appl. Soft Comput. 2011, 11, 1481–1489.
  22. Moré, J.J.; Garbow, B.S.; Hilstrom, K.E. Testing unconstrained optimization software. ACM Trans. Math. Softw. 1981, 7, 17–41.
  23. Gould, N.I.M.; Orban, D.; Toint, P.L. CUTEr and SifDec: A constrained and unconstrained testing environment, revisited. ACM Trans. Math. Softw. 2003, 29, 373–394.
  24. Moré, J.J.; Wild, S.M. Benchmarking derivative-free optimization algorithms. SIAM J. Optim. 2009, 20, 172–191.
  25. Olenšek, J.; Bűrmen, Á.; Puhan, J.; Tuma, T. DESA: A new hybrid global optimization method and its application to analog integrated circuit sizing. J. Glob. Optim. 2009, 44, 53–77.
  26. Bűrmen, Á. PyOPUS-Simulation, Optimization, and Design. Available online: http://fides.fe.uni-lj.si/pyopus (accessed on 15 May 2022).
  27. Bűrmen, B.; Locatelli, I.; Bűrmen, Á.; Bogataj, M.; Mrhar, A. Mathematical modeling of individual gastric emptying of pellets in the fed state. J. Drug Deliv. Sci. Technol. 2014, 24, 418–424.
  28. Rojec, Ž.; Olenšek, J.; Fajfar, I. Analog circuit topology representation for automated synthesis and optimization. Inf. MIDEM 2018, 48, 29–40.
  29. Rojec, Ž.; Bűrmen, Á.; Fajfar, I. Analog circuit topology synthesis by means of evolutionary computation. Eng. Appl. Artif. Intell. 2019, 80, 48–65.
  30. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. CCSA: Conscious Neighborhood-based Crow Search Algorithm for Solving Global Optimization Problems. Appl. Soft Comput. 2019, 85, 105583.
  31. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. QANA: Quantum-based avian navigation optimizer algorithm. Eng. Appl. Artif. Intell. 2021, 104, 104314.
  32. Nadimi-Shahraki, M.H.; Zamani, H. DMDE: Diversity-maintained multi-trial vector differential evolution algorithm for non-decomposition large-scale global optimization. Expert Syst. Appl. 2022, 98, 116895.
  33. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. Starling murmuration optimizer: A novel bio-inspired algorithm for global and engineering optimization. Comput. Methods Appl. Mech. Eng. 2022, 392, 114616.
  34. Shanno, D.F.; Phua, K. Matrix conditioning and nonlinear optimization. Math. Program. 1978, 14, 149–160.
Figure 1. Parameter schema functions for the original Nelder–Mead Algorithm (NMA), Gao–Han Schema (GHS), Kumar–Suri Schema (KSS), Chebyshev Crude Schema (CCS), and Chebyshev Refined Schema (CRS).
Figure 2. Comparison of NMA adaptive schema parameters.
Figure 3. Data profiles for GH and MGH benchmark sets, separated and combined. No tolerance-based algorithm termination was applied ($Tol_f = Tol_X = 0$).
Figure 4. Data profiles for GH and MGH benchmark sets, separated and combined. Tolerance-based algorithm termination was applied ($Tol_f = Tol_X = 10^{-4}$).
Figure 5. Data profiles for the Constrained and Unconstrained Testing Environment, revisited (CUTEr) benchmark set and the GH, MGH, and CUTEr benchmark sets combined, without ($Tol_f = Tol_X = 0$) and with tolerance-based algorithm termination applied ($Tol_f = Tol_X = 10^{-4}$).
Figure 6. Best objective function values ($f(P_0)$) and corresponding descent rates ($\cos\theta$) during the Nelder–Mead run for $n_p = 100$-dimensional GH benchmarks. Tolerance-based algorithm termination applied ($Tol_f = Tol_X = 10^{-4}$). The black line represents the convergence boundary (7) for $\tau = 10^{-7}$.
Figure 7. Share of expansion and contraction iterations for GH benchmark problems.
Table 1. Accuracy of the original NMA, the existing adaptive schemas (GHS, KSS, CCS, CRS), and the optimized schema on Gao–Han (GH) modified quadratic benchmark problems.
$n_p$   $f(x)$ NMA   $f(x)$ GHS   $f(x)$ KSS   $f(x)$ CCS   $f(x)$ CRS   $f(x)$ opt.
ϵ = 0.0103.5 × 10−3230.00.03.5 × 10−3230.00.0
σ = 0.0202 × 10−3220.00.00.00.010−323
301.14 × 10−110.00.00.00.00.0
402.03 × 10−40.00.00.00.00.0
505.54 × 10−40.00.00.00.00.0
601.38 × 10−50.00.05 × 10−3240.05 × 10−324
705.76 × 10−50.00.00.00.00.0
804.87 × 10−65 × 10−3230.00.00.00.0
902.75 × 10−61.4 × 10−3220.05 × 10−3240.00.0
1003.19 × 10−66 × 10−3230.02 × 10−3230.00.0
ϵ = 0.05100.00.00.00.00.00.0
σ = 0.0206.23 × 10−3220.00.00.05 × 10−3240.0
305.31 × 10−30.00.00.00.05 × 10−324
401.32 × 10−20.00.00.00.00.0
501.62 × 10−10.00.00.00.00.0
6012.70.00.00.00.00.0
708.242 × 10−3230.00.00.00.0
8032.21.24 × 10−3220.05 × 10−3240.00.0
903.775.4 × 10−3235 × 10−3245 × 10−3240.00.0
1002786 × 10−3230.010−3230.00.0
ϵ = 0.0100.00.00.00.00.00.0
σ = 0.0001202.05 × 10−30.00.00.00.00.0
301.91 × 10−50.00.00.00.00.0
4016.70.00.00.00.00.0
502.630.00.00.00.00.0
6010.90.00.010−3230.00.0
702765 × 10−3240.00.00.00.0
802928 × 10−3230.00.00.00.0
9011.76 × 10−3235 × 10−3245 × 10−3240.00.0
10048.77.4 × 10−3230.010−3230.00.0
ϵ = 0.05101.5 × 10−3230.00.00.00.00.0
σ = 0.0001201.93 × 10−40.00.00.05 × 10−3245 × 10−324
301.12 × 10−20.00.00.00.05 × 10−324
407.31 × 10−10.00.00.00.00.0
5037.20.00.00.00.05 × 10−324
601790.00.00.00.00.0
7018.73 × 10−3230.00.00.00.0
8016.47.4 × 10−3230.05 × 10−3240.00.0
9014801.3 × 10−3221.5 × 10−3230.00.00.0
10038025 × 10−3230.010−3230.00.0
accurate   7/40   40/40   40/40   40/40   40/40   40/40
Table 2. Accuracy of the original NMA, the existing adaptive schemas (GHS, KSS, CCS, CRS), and the optimized schema on Moré–Garbow–Hilstrom (MGH) benchmark problems.
Function   $n_p$   $f(x)$ NMA   $f(x)$ GHS   $f(x)$ KSS   $f(x)$ CCS   $f(x)$ CRS   $f(x)$ opt.
Extended122.91 × 10−282.35 × 10−291.73 × 10−284.64 × 10−295.18 × 10−296.88 × 10−29
Rosenbrock1820.06.97 × 10−296.51 × 10−291.44 × 10−285.66 × 10−281.35 × 10−28
2412.51.72 × 10−282.58 × 10−281.86 × 10−283.72 × 10−286.21 × 10−28
3034.54.09 × 10−286.70 × 10−287.28 × 10−283.70 × 10−272.76 × 10−28
3649.18.72 × 10−286.64 × 10−286.81 × 10−284.77 × 10−281.11 × 10−27
Extended128.34 × 10−553.33 × 10−577.09 × 10−593.09 × 10−571.07 × 10−591.18 × 10−58
Powell241.33 × 10−91.83 × 10−543.45 × 10−565.37 × 10−561.67 × 10−533.39 × 10−53
singular401.69 × 10 6 1.06 × 10−502.34 × 10−521.46 × 10−525.22 × 10−522.33 × 10−53
604.16 × 10−49.71 × 10 6 3.43 × 10−502.88 × 10−521.28 × 10−371.16 × 10−46
Penalty I107.57 × 10−57.09 × 10−57.09 × 10−57.60 × 10 5 7.09 × 10−57.09 × 10−5
Penalty II102.98 × 10−42.94 × 10−42.94 × 10−42.98 × 10−42.95 × 10−42.94 × 10−4
Variably124.773.72 × 10−301.47 × 10−293.64 × 10−292.30 × 10−291.78 × 10−29
dimensioned184.228.96 × 10−302.06 × 10−291.52 × 10−294.74 × 10−294.25 × 10−29
2411.58.22 × 10−298.37 × 10−297.52 × 10−299.23 × 10−292.27 × 10−28
3040.58.08 × 10−291.08 × 10−281.06 × 10−283.38 × 10−284.49 × 10−28
3660.14.21 × 10−281.46 × 10−288.82 × 10−298.35 × 10−287.60 × 10−28
Trigonometric102.80 × 10 5 2.80 × 10−52.80 × 10−52.80 × 10−52.80 × 10−52.80 × 10−5
201.35 × 10−61.35 × 10−66.03 × 10−66.86 × 10−61.35 × 10−61.35 × 10−6
302.20 × 10−59.90 × 10−79.90 × 10−75.65 × 10−69.90 × 10−75.98 × 10−7
401.41 × 10−51.55 × 10−63.95 × 10−61.68 × 10−75.58 × 10−71.55 × 10−6
502.52 × 10−52.24 × 10−73.41 × 10−69.23 × 10−71.11 × 10−62.24 × 10−7
603.87 × 10−58.68 × 10−77.57 × 10−77.57 × 10−71.27 × 10−64.62 × 10−7
Discrete106.85 × 10−323.03 × 10−338.36 × 10−322.20 × 10−313.07 × 10−321.59 × 10−32
boundary204.69 × 10−307.24 × 10−322.51 × 10−322.39 × 10−321.05 × 10−313.92 × 10−32
value309.87 × 10−61.10 × 10−311.43 × 10−311.19 × 10−312.02 × 10−319.29 × 10−32
406.46 × 10−64.58 × 10−313.55 × 10−311.37 × 10−314.76 × 10−313.45 × 10−31
505.72 × 10−66.02 × 10−315.70 × 10−312.84 × 10−315.35 × 10−314.35 × 10−31
603.19 × 10−62.46 × 10−308.09 × 10−316.64 × 10−311.11 × 10−307.39 × 10−31
Discrete101.91 × 10−314.24 × 10−331.44 × 10−322.27 × 10−312.56 × 10−323.08 × 10−33
integral207.69 × 10−304.62 × 10−322.90 × 10−326.27 × 10−323.40 × 10−322.37 × 10−32
equation307.11 × 10−42.22 × 10−312.50 × 10−313.45 × 10−318.55 × 10−322.50 × 10−31
403.63 × 10−43.82 × 10−313.07 × 10−313.21 × 10−314.25 × 10−313.04 × 10−31
503.05 × 10−38.51 × 10−311.34 × 10−305.95 × 10−311.47 × 10−307.34 × 10−31
604.46 × 10−42.24 × 10−301.30 × 10−301.59 × 10−309.74 × 10−316.12 × 10−31
Broyden103.99 × 10−303.12 × 10−307.31 × 10−303.28 × 10−292.92 × 10−302.92 × 10−30
tridiagonal203.20 × 10−261.63 × 10−293.34 × 10−296.15 × 10−292.45 × 10−293.15 × 10−29
304.70 × 10−261.68 × 10−281.19 × 10−289.66 × 10−296.50 × 10−298.18 × 10−29
409.11 × 10−142.24 × 10−286.73 × 10−283.70 × 10−282.45 × 10−282.24 × 10−28
502.67 × 10−135.82 × 10−286.98 × 10−284.61 × 10−286.86 × 10−284.54 × 10−28
603.78 × 10−119.12 × 10−281.69 × 10−271.01 × 10−271.34 × 10−271.10 × 10−27
Broyden104.18 × 10−284.61 × 10−304.81 × 10−306.82 × 10−297.43 × 10−302.36 × 10−30
banded201.85 × 10−262.63 × 10−296.04 × 10−297.47 × 10−291.60 × 10−284.77 × 10−29
3012.22.25 × 10−281.34 × 10−282.08 × 10−283.15 × 10−282.89 × 10−28
402.02 × 10 6 3.48 × 10−287.41 × 10−289.32 × 10−281.34 × 10−283.25 × 10−28
509.33 × 10 5 6.08 × 10−281.38 × 10−276.78 × 10−285.98 × 10−281.04 × 10−27
605.13 × 10 6 2.93 × 10−273.44 × 10−274.09 × 10−277.98 × 10−287.59 × 10−28
accurate   15/46   40/46   40/46   39/46   39/46   42/46
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
