Article

An Efficient and Fast Hybrid GWO-JAYA Algorithm for Design Optimization

1 Department of Mechanics, Mathematics and Management, Polytechnic University of Bari, Via Edoardo Orabona, 4, 70125 Bari, Italy
2 School of Engineering and the Built Environment, Buckinghamshire New University, 59 Walton Street, Aylesbury HP21 7OG, UK
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(20), 9610; https://doi.org/10.3390/app14209610
Submission received: 5 September 2024 / Revised: 14 October 2024 / Accepted: 16 October 2024 / Published: 21 October 2024

Abstract

Metaheuristic algorithms (MHAs) are widely used in engineering applications in view of their global optimization capability. Researchers continuously develop new MHAs in an attempt to improve the computational efficiency of the optimization search. However, most newly proposed algorithms rapidly lose their attractiveness soon after their release. In the present study, two classical and powerful MHAs that still attract the attention of optimization experts, namely the grey wolf optimizer (GWO) and the JAYA algorithm, were combined into a new hybrid algorithm called FHGWJA (Fast Hybrid Grey Wolf JAYA). FHGWJA utilizes elitist strategies and repair schemes to generate high-quality trial solutions that have a high probability of improving the current best record or at least the old population. The proposed FHGWJA algorithm was successfully tested in seven engineering optimization problems formulated in the fields of robotics, hydraulics, and mechanical and civil engineering. Design examples included up to 29 optimization variables and 1200 nonlinear constraints. The optimization results proved that FHGWJA was always superior to, or highly competitive with, other state-of-the-art MHAs, including other GWO and JAYA variants. In fact, FHGWJA always converged to the global optimum and very often achieved a standard deviation of zero or nearly zero, with all optimization runs practically converging to the target design. Furthermore, FHGWJA always ranked 1st or 2nd in terms of average computational speed, and its fastest optimization runs were better than or highly competitive with those of the best MHA taken for comparison.

1. Introduction

Optimization searches for the minimum or the maximum of a function W(X) of NDV variables, subject to a set of NCON inequality/equality constraint functions G(X) or H(X). Optimization problems may be iteratively solved with gradient-based algorithms (GBAs) or metaheuristic algorithms (MHAs). GBAs formulate and solve a series of approximate sub-problems until the search process converges to the optimal solution; sub-problems are built by evaluating first-order and second-order derivatives of cost functions and constraints at the solution point found in the previous iteration. MHAs do not require gradients: the local information provided by gradients in GBAs is replaced in MHAs by global information gathered from a population of candidate designs. Trial solutions of MHAs are randomly generated using a mathematical model inspired by evolutionary theory/processes, biology, physics, chemistry, mathematics, astronomy, astrophysics, herd behavior of animals, human behavior and activities, social sciences, etc. MHAs evaluate the randomly generated trial solutions and then attempt to improve the design with respect to the previous iterations.
Metaheuristic algorithms can inherently deal with highly nonlinear and non-convex optimization problems. The random search allows MHAs to explore larger portions of the search space than GBAs without being trapped in local optima. This ability, together with their straightforward software implementation, made MHAs the standard approach to the solution of complex engineering optimization problems. Exploration and exploitation are the two typical phases carried out in metaheuristic optimization. Exploration is the leading search mechanism in the early optimization stages, where large perturbations are given to optimization variables to quickly find the best regions of the search space. Exploitation governs the search process as the optimizer converges toward the global optimum: local search is carried out in properly selected neighborhoods of the most promising solutions. Exploration and exploitation must be appropriately combined in metaheuristic optimization to fully search the design space and find the globally optimal solution with low computational effort.
MHAs can be categorized into four groups: (i) evolutionary algorithms; (ii) science-based algorithms; (iii) human-based algorithms; (iv) swarm intelligence-based algorithms. Evolutionary algorithms such as genetic algorithms (GAs) [1,2], differential evolution (DE) [3,4], evolutionary programming (EP) [5], evolution strategies (ESs) [6], and biogeography-based optimization (BBO) [7] reproduce evolution theory and evolutionary processes.
Science-based MHAs mimic laws of physics, chemistry, astronomy and astrophysics, and mathematics. Simulated annealing (SA) [8,9], charged system search (CSS) [10], magnetic charged system search (MCSS) [11], colliding bodies optimization (CBO) [12], water evaporation optimization (WAO) [13], thermal exchange optimization (TEO) [14], equilibrium optimizer (EO) [15], gases Brownian motion optimization (GBMO) [16], and Henry gas solubility optimization (HGSO) [17] are physics/chemistry-based MHAs that reproduce equilibrium conditions of mechanical, electro-magnetic, physical/chemical, or thermal systems subject to external perturbations. Ray optimization (RO) [18] and light spectrum optimizer (LSO) [19] reproduce optics laws to define search directions containing high-quality trial solutions. Big bang–big crunch optimization (BB-BC) [20] and the gravitational search algorithm (GSA) [21] are inspired by astronomy/astrophysics phenomena such as the expansion (big bang)—contraction (big crunch) cycles leading to the formation of new star–planetary systems and gravitational interactions between masses. Mathematics inspired the sine cosine algorithm (SCA) [22], the Runge–Kutta optimizer (RUN) [23], and the arithmetic optimization algorithm (AOA) [24].
Tabu search (TS) [25], harmony search optimization (HS) [26], teaching–learning-based optimization (TLBO) [27], JAYA [28], the group teaching optimization algorithm (GTOA) [29], the mother optimization algorithm (MOA) [30], the preschool education optimization algorithm (PEOA) [31], the learning cooking algorithm (LCA) [32], the imperialist competitive algorithm (ICA) [33], and the political optimizer (PO) [34] are representative algorithms reproducing human activities, behaviors, learning/education processes, social sciences, international strategies, and politics.
Swarm intelligence-based algorithms represent the largest category in metaheuristic optimization. These algorithms mimic the social/individual behavior of animals (insects, terrestrial animals, birds and other flying animals, and aquatic animals) in reproduction, food search, hunting, migration, etc. Particle swarm optimization (PSO) [35], reproducing interactions between individuals of bird/fish swarms, is the most cited metaheuristic algorithm according to the Scopus database; in PSO, a population of candidate designs (the particles) is generated, and their positions and velocities are updated in each iteration referring to the position of the leader(s) and the best positions of the individual particles. Insect behavior-inspired algorithms, for example, include ant colony optimization (ACO) [36], artificial bee colony (ABC) [37], the firefly algorithm (FFA) [38], and the ant lion optimizer (ALO) [39].
The grey wolf optimizer (GWO) [40], coyote optimization algorithm (COA) [41], snake optimizer (SO) [42], and snow leopard optimization algorithm (SLOA) [43] simulate the behavior of terrestrial animals. The GWO, reproducing the hunting behavior of grey wolves, is the 2nd most cited algorithm in terms of citations/year after PSO. However, the GWO is often preferred to PSO in view of its simpler formulation, which does not include internal parameters other than the population size and a limit on the number of iterations.
The bat algorithm (BA) [44], cuckoo search (CS) [45], crow search algorithm (CSA) [46], Harris hawks optimization (HHO) [47], and starling murmuration optimizer (SMO) [48] are inspired by flying animals like bats and birds. CS (reproducing the parasitic behavior of some cuckoo species that mix their eggs with those of other birds to guarantee the survival of their chicks) and HHO (reproducing the cooperative behavior of Harris hawks in nature, specifically their surprise attacks during the chase) are the most cited MHAs of this sub-category.
Last, the dolphin echolocation algorithm (DEA) [49], whale optimization algorithm (WOA) [50], salp swarm algorithm (SSA) [51], marine predators algorithm (MPA) [52], and giant trevally optimizer (GTO) [53] reproduce social behavior, hunting strategy, swarming, and foraging of aquatic animals; WOA, SSA, and MPA are the most cited MHAs in this sub-category.
Hybrid/improved/enhanced methods were also developed in the optimization literature by adding new equations to the original formulation or merging two or more MHAs. The goal has always been to find the best balance between exploration and exploitation so as to minimize computational cost (i.e., the number of structural analyses or function evaluations required by the optimizer), improve robustness, and limit the number of internal parameters of the metaheuristic formulation. Hybrid metaheuristic algorithms may have a parallel or a serial architecture [54]: in the former case, the component algorithms are run independently on parallel computers, while in the latter case, they are executed sequentially on the same machine.
Metaheuristic optimization algorithms are widely employed in various engineering fields such as, for example, the mechanical characterization of materials and structural identification [55], static and dynamic structural optimization [56,57], damage identification [58], vehicle routing optimization [59], 3D printing process optimization [60], cancer classification [61], and image processing [62]. However, despite the wide adoption of MHAs in engineering practice, some issues remain open in metaheuristic optimization: (i) no metaheuristic algorithm can always outperform all other MHAs in all optimization problems; (ii) MHAs may require a very large number of function evaluations (analyses) to complete the optimization process; (iii) sophisticated algorithmic variants and hybrid MHAs combining multiple methods often increase the computational complexity of the optimizer, also because of the presence of additional internal parameters that are difficult to tune. As a matter of fact, newly developed MHAs often add very little to the optimization field and stop attracting potential users within a rather short time of their release.
The main objective of this study was to develop an efficient and robust hybrid metaheuristic algorithm for engineering optimization. Generally speaking, a hybrid optimizer should combine high-performance MHAs that complement each other in terms of exploration and exploitation. Furthermore, component algorithms should be versatile so as to successfully deal with as many different optimization problems as possible. Lastly, component algorithms should be simple enough to keep the formulation of the hybrid optimizer simple and to limit the number of additional internal parameters governing the switch from one component algorithm to another. In view of this, the grey wolf optimizer (GWO) and the JAYA algorithm were selected in the present study and combined into the novel FHGWJA hybrid algorithm (the acronym stands for Fast Hybrid Grey Wolf JAYA), because they certainly satisfy the above-mentioned requirements.
The GWO mimics the leadership hierarchy and hunting mechanism of grey wolves [40]. The leadership hierarchy is simulated by defining four types of grey wolves (α, β, δ, and ω). The hunting mechanism is simulated with different algorithmic operators that represent searching for prey, encircling prey, and attacking prey. The α wolf is the group leader. The β wolf helps the α in decision-making or other pack activities. The lowest ranking is represented by ω wolves. The δ wolves rank between the α-β wolves and the ω wolves. Alpha, beta, and delta wolves estimate the position of the prey, and other wolves randomly update their positions around the prey.
As previously mentioned, the GWO is the 2nd most cited metaheuristic algorithm after PSO according to the Scopus database. It has been successfully applied to many fields because of its simple formulation and computational efficiency (see, for example, the surveys on GWO applications presented in [63,64,65]). However, the GWO may suffer from a lack of exploitation, a risk of stagnation especially in complex problems, and limited adaptability to variations in the problem landscape during the optimization process. To overcome these limitations, enhanced GWO formulations were proposed, including (i) new operators capturing other characteristic behaviors of wolves (e.g., gaze cues learning [64] and dimension learning-based hunting [66]); (ii) Cauchy–Gaussian mutation (increasing the search range of leader wolves when they tend to the local optimal solution) along with greedy selection (to maintain population diversity) and an improved search strategy considering the average position of all individuals [67]; (iii) chaotic grouping and dynamic regrouping of individuals to increase population diversity [68]; (iv) optimization of the initial population along with a nonlinear control parameter convergence strategy and a nonlinear parameter tuning strategy [69]; (v) update of wolves' positions with spiral movements [70]; and (vi) hybrid algorithms combining the GWO with other powerful MHAs (e.g., particle swarm optimization, biogeography-based optimization, differential evolution, Harris hawks optimization, and the whale optimization algorithm [71,72,73,74,75]).
JAYA [28] utilizes the most straightforward search scheme amongst all MHAs, consisting of a single equation that perturbs optimization variables so as to approach the best solution of the population and move away from the worst solution, thus achieving significant convergence capability. JAYA is a very versatile algorithm with a large number of applications documented in the literature [76,77]. JAYA has an inherently hybrid nature, combining basic features of evolutionary algorithms (the survival of the fittest individual) and of swarm-based algorithms, where the swarm normally follows the leader in the search for the optimal solution. These characteristics make JAYA a very good candidate component of new hybrid metaheuristic algorithms.
Similar to the GWO, JAYA may present a rather weak exploitation phase. Furthermore, JAYA uses only one equation to perturb optimization variables, involving only the best and the worst individuals of the population: this may produce stagnation and also limit the exploration phase. To overcome these issues, improved JAYA formulations were developed using sub-populations [78], or also involving the current average solution and historical solutions (a population of candidate solutions initially generated besides the standard population and permuted in the optimization process according to a probabilistic criterion) [79]. Fuzzy clustering competitive learning, experience learning, and Cauchy mutation mechanisms were also implemented [80] to effectively utilize population information, speed up the convergence rate, improve the balance between exploiting previously visited regions and exploring new regions of the search space, and reduce the risk of being trapped in a local optimum by fine-tuning the quality of the so-far best solution. High computational cost is another issue in JAYA optimization. In [81,82], it was attempted to reduce the number of analyses and increase convergence speed by directly rejecting trial designs heavier than the corresponding population individuals, but this strategy missed the global optimum in many structural optimization problems. Similarly, in [83], the population was updated not only if the new trial solutions improved the individuals currently stored in the population but also if they had the same values of the cost function or penalized cost function. In [84], the generation of trial designs also relied on two randomly selected individuals X_m and X_n, and the perturbation equation varied depending on whether m > n or m < n as well as whether W(X_m) < W(X_n) or W(X_m) > W(X_n). JAYA was also hybridized with other efficient MHAs such as, for example, harmony search optimization [56], genetic algorithms [85], crow search [86], the Rao-1 optimizer [87], teaching–learning-based optimization [88], and differential evolution [89].
The main features of the novel FHGWJA algorithm developed in this study and the advancements with respect to the state-of-the-art can be summarized as follows:
(1) GWO and JAYA were simply merged into FHGWJA without complicating the algorithmic structure resulting from the hybridization process: this is a significant step forward with respect to the above-mentioned studies, where the performance improvements achieved with GWO and JAYA variants often were problem-sensitive and entailed complicated optimization formulations that were not easy to reproduce;
(2) FHGWJA utilizes elitist and repair strategies to generate high-quality candidate solutions that always have a high probability of improving the current best record. The movements assigned to optimization variables in the perturbation process may be adjusted according to a rank-proportional variation strategy. This approach is more effective than those adopted in the GWO/JAYA variants proposed in the literature, which do not guarantee that each new trial solution is better than the current best record. Since elitist/repair strategies are utilized regardless of whether exploration or exploitation is performed, FHGWJA actually optimizes the balance between these two phases;
(3) The positions of the leading wolves are dynamically updated to avoid stagnation and convergence to local optima. For that purpose, FHGWJA utilizes a mirroring strategy based on the concept of descent direction, which is much simpler than the strategies documented in the literature;
(4) FHGWJA evaluates constraints only after having verified that the new trial design can effectively improve the current best record in the current or subsequent iterations; should this not occur, repair strategies are activated. This greatly reduces the computational cost of the optimization;
(5) FHGWJA does not require any internal parameters aside from the population size and a limit on the number of iterations. The elitist strategies and the convergence criterion implemented in FHGWJA actually make it unnecessary to specify the limit on the number of iterations.
The proposed FHGWJA algorithm was successfully tested in seven “real world” engineering problems. The selected test cases, including up to 29 optimization variables and 1200 nonlinear constraints, regarded, in particular, the following: (i–ii) 2D path planning (minimization of trajectory lengths, respectively, with 7 or 10 obstacles); (iii) shape optimization of a concrete gravity dam with additional earthquake load (volume minimization); (iv–v) calibration of nonlinear hydrologic models (the Muskingum problem solved with 3 or 25 unknown model parameters to be identified); (vi) optimal crashworthiness design of a vehicle subject to side impact; (vii) weight minimization of a planar 200-bar truss structure subject to three independent loading conditions. The present algorithm was compared with the best performing algorithms indicated in the literature for each test problem and advanced variants of state-of-the-art MHAs.
The rest of this manuscript is structured as follows. Section 2 describes the new hybrid optimization algorithm FHGWJA developed in this study. Test problems and optimization results are presented and discussed in Section 3. Section 4 summarizes the main findings of this study and outlines directions of future research.

2. The FHGWJA Algorithm

The new hybrid metaheuristic algorithm FHGWJA developed in this study is now described in detail. The algorithm is composed of seven steps: (i) initialization; (ii) preliminary definition of trial solutions with the classical GWO; (iii) rank-based refinement of preliminary trial solutions defined with the classical GWO; (iv) evaluation/repair of trial solutions with elitist strategies and JAYA-based schemes; (v) population reordering to define the new best and worst individuals and update of the α-β-δ wolves trying to avoid stagnation; (vi) convergence check; (vii) end of optimization search and output.
  • Step 1. Initialization
FHGWJA randomly generates a population of NPOP candidate designs (i.e., wolves) as:
$$x_j^i = x_j^L + \rho_j^i \left( x_j^U - x_j^L \right) \qquad j = 1, \ldots, NDV; \; i = 1, \ldots, NPOP \tag{1}$$
where NDV is the number of optimization variables; x_j^L and x_j^U, respectively, are the lower and upper bounds of the jth optimization variable; ρ_j^i is a random number in the (0,1) interval.
In the minimization problem of the W(X) function (depending on the NDV variables stored in the solution vector X) subjected to NCON inequality constraint functions G_k(X) ≤ 0, the penalized cost function W_p(X) is defined as follows:
$$W_p(X) = W(X) + p \cdot \psi \tag{2}$$
where p is a penalty coefficient. The penalty function ψ is defined as follows:
$$\psi = \sum_{k=1}^{NCON} \left[ \max\left( 0, g_k \right) \right]^2 \tag{3}$$
The penalized cost function W_p(X) coincides with the cost function W(X) if the trial solution X satisfies the optimization constraints. No penalty function is defined in the case of unconstrained optimization problems. Any equality constraint H(X) = 0 included in the optimization problem is transformed into the two inequality constraints H(X) ≤ 0 and −H(X) ≤ 0. Candidate solutions are sorted in terms of the penalized cost function: the current best solution X_best and the worst solution X_worst, respectively, correspond to the lowest and highest values of W_p(X).
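To make Step 1 concrete, the following is a minimal Python sketch of Equations (1)–(3); the function names and the penalty coefficient value are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def initialize_population(npop, ndv, x_low, x_up, rng):
    # Equation (1): x_j^i = x_j^L + rho_j^i * (x_j^U - x_j^L)
    rho = rng.random((npop, ndv))
    return x_low + rho * (x_up - x_low)

def penalized_cost(x, cost, constraints, p=1e6):
    # Equations (2)-(3): Wp(X) = W(X) + p * sum_k max(0, g_k)^2
    g = np.asarray(constraints(x))
    psi = np.sum(np.maximum(0.0, g) ** 2)
    return cost(x) + p * psi

# Example usage: 10 wolves in a 5-variable design space bounded by [0, 1]
rng = np.random.default_rng(0)
pop = initialize_population(10, 5, np.zeros(5), np.ones(5), rng)
```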
  • Step 2. GWO phase: preliminary generation of new trial designs
FHGWJA utilizes the classical GWO scheme to preliminarily generate the new trial designs. The best three individuals stored in the population are ranked as wolves α, β, and δ. The other individuals of the population are ranked as wolves ω. Wolves encircle the prey during the hunt. Such a behavior is mathematically described as follows:
$$D = \left| C \cdot X_p - X(it) \right| \tag{4}$$
$$X(it+1) = X_p - A \cdot D \tag{5}$$
In Equations (4) and (5), it denotes the current iteration, A and C are coefficient vectors, X_p is the position vector of the prey, and X(it) is the generic grey wolf (i.e., search agent) of the population, updated in the current iteration to the new solution X(it+1). The "·" notation denotes the term-by-term multiplication between vectors, which yields another vector.
Vectors A and C are defined as follows:
$$A = 2a \cdot r_1 - a \tag{6}$$
$$C = 2 r_2 \tag{7}$$
In Equations (6) and (7), the components of the vector a decrease linearly from 2 to 0 as the optimization progresses; r_1 and r_2 are random vectors in the [0,1] interval.
While grey wolves actually recognize the prey's location in the hunting phase, the optimum position (prey) usually is not known a priori in the optimization search. In order to reproduce the wolves' behavior in the GWO algorithm, it is assumed that the three best solutions of the population (i.e., the α, β, and δ wolves) have a better knowledge of the potential prey's location (i.e., the target optimal solution). The rest of the population must be updated using the positions X_α, X_β, and X_δ of the best three search agents. The generic solution X_i of the population is updated to the preliminary trial solution X_i,tr^prel as follows:
$$D_\alpha = \left| C_1 \cdot X_\alpha - X_i \right|, \quad D_\beta = \left| C_2 \cdot X_\beta - X_i \right|, \quad D_\delta = \left| C_3 \cdot X_\delta - X_i \right| \tag{8}$$
$$X_1 = X_\alpha - A_1 \cdot D_\alpha, \quad X_2 = X_\beta - A_2 \cdot D_\beta, \quad X_3 = X_\delta - A_3 \cdot D_\delta \tag{9}$$
$$X_{i,tr}^{prel} = \frac{X_1 + X_2 + X_3}{3} \tag{10}$$
In Equation (9), the vectors X_1, X_2, and X_3 are defined for each candidate design X_i of the population. Equations (8)–(10) update the positions of all search agents.
In the optimization process, the wolves α, β, and δ estimate the prey's position (i.e., the position of the optimal solution) and each candidate solution in the population updates its distance from the prey. The parameter a is reduced from 2 to 0 to emphasize exploration and exploitation, respectively. In the exploration phase, candidate solutions tend to search for another prey when |A| > 1. Conversely, in the exploitation phase, candidate solutions converge towards the prey when |A| < 1.
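A compact Python sketch of the classical GWO update of Equations (6)–(10) used in Step 2 may read as follows (names are illustrative; numpy arrays are assumed):

```python
import numpy as np

def gwo_preliminary_update(pop, x_alpha, x_beta, x_delta, a, rng):
    # Each individual is pulled toward the alpha, beta and delta wolves;
    # the scalar 'a' decreases linearly from 2 to 0 over the run.
    npop, ndv = pop.shape
    trial = np.empty_like(pop)
    for i in range(npop):
        x_new = np.zeros(ndv)
        for leader in (x_alpha, x_beta, x_delta):
            r1, r2 = rng.random(ndv), rng.random(ndv)
            A = 2.0 * a * r1 - a              # Equation (6)
            C = 2.0 * r2                      # Equation (7)
            D = np.abs(C * leader - pop[i])   # Equation (8)
            x_new += leader - A * D           # Equation (9)
        trial[i] = x_new / 3.0                # Equation (10)
    return trial
```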
  • Step 3. Rank-based refinement of preliminary trial designs
In the classical GWO, the new population formed by the trial solutions X_i,tr obtained by perturbing the individuals X_i stored in the population in the previous iteration is sorted in terms of penalized cost function values to update the positions X_α, X_β, and X_δ. The process ends upon reaching the limit on the number of iterations or function evaluations defined by the user. However, the classical GWO search scheme has two inherent drawbacks: (i) since the position of the prey (optimal solution) is not known a priori, the search is directly biased toward the α wolf, which coincides with the current best record X_best after population sorting and thus has null distance from it; (ii) there is no guarantee that each new trial solution X_i,tr improves the current best record X_best or at least the corresponding individual X_i stored in the previous population.
In order to solve these issues, FHGWJA refines the preliminary trial solutions X_i,tr^prel obtained in Step 2 by means of a rank-based criterion. The population defined in the previous iteration is sorted with respect to the penalized cost function values W_p(X_i). The best solution is assigned the rank 1 while the worst solution is assigned the rank 1/NPOP: let rank(i) = 1/i be the rank of the generic individual X_i. The average rate of variation of the penalized cost function with respect to the current best record, γ(i) = ΔW_p(i)/ΔS(i), is evaluated for each solution X_i. The numerator ΔW_p(i) = [W_p(X_i) − W_p(X_best)] represents the increase in the penalized cost function that occurs when the design is perturbed by ΔS(i) = ‖X_i − X_best‖ from the current best record to the generic candidate solution X_i. The average rate of variation γ(aver) = Σ_{i=1}^{NPOP} γ(i)/NPOP is hence defined. The refinement step size μ(i) is defined for each individual X_i as follows:
$$\mu(i) = \min\left\{ 1 - rank(i); \; \frac{\gamma(i)}{\gamma(aver)} \right\} \tag{11}$$
The preliminary trial solution X_i,tr^prel is adjusted to the trial solution X_i,tr using the following equation:
$$X_{i,tr} = X_{i,tr}^{prel} + \frac{X_{best} - X_i}{\left\| X_{best} - X_i \right\|}\,\mu(i) \tag{12}$$
The rationale behind Equations (11) and (12) is the following. All trial solutions generated via the GWO must be evaluated to see if they improve the design. Individuals ranked as high-quality solutions in the previous iteration are very likely to preserve their rank in the current iteration. However, the rank may decrease if the cost function changes sharply in the neighborhood of the currently perturbed solution: for example, this may occur if the perturbed solution turns infeasible and the penalty term increases. For this reason, Equation (11) tends to preserve the overall ranks of population individuals by setting small refinement step sizes μ(i). Since the design improves by moving from X_i to X_best, the direction S_i = X_best − X_i is a descent direction with respect to the individual X_i. Hence, FHGWJA attempts to perturb the design along the S_i direction to further reduce the cost function, at least with respect to X_i.
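The rank-based refinement of Equations (11) and (12) can be sketched in Python as follows (a hypothetical implementation assuming the population is already sorted by penalized cost; the epsilon guard for the α wolf is an added safeguard, not a detail from the paper):

```python
import numpy as np

def refine_trials(pop_sorted, wp_sorted, trial_prel, x_best, wp_best):
    # pop_sorted / wp_sorted / trial_prel share the same (sorted) ordering.
    npop = pop_sorted.shape[0]
    dist = np.linalg.norm(pop_sorted - x_best, axis=1)    # ||X_i - X_best||
    dist[dist == 0.0] = np.finfo(float).eps               # guard the alpha wolf
    gamma = (wp_sorted - wp_best) / dist                  # gamma(i)
    gamma_aver = gamma.mean()                             # gamma(aver)
    trial = np.empty_like(trial_prel)
    for i in range(npop):
        rank_i = 1.0 / (i + 1)                            # rank(i) = 1/i
        mu = min(1.0 - rank_i, gamma[i] / gamma_aver)     # Equation (11)
        direction = (x_best - pop_sorted[i]) / dist[i]    # descent direction S_i
        trial[i] = trial_prel[i] + direction * mu         # Equation (12)
    return trial
```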
  • Step 4. Evaluation and modification/repair of new trial designs: elitist strategies and JAYA scheme
The trial designs X_i,tr generated via FHGWJA in Steps 2 and 3 are then evaluated. In the classical GWO, if the new design X_i,tr improves the old design X_i, it replaces X_i in the new population. However, this task entails a new constraint evaluation to compute the penalized cost function value of each new trial design.
In order to reduce the number of constraint evaluations required in the optimization search, FHGWJA utilizes an elitist strategy retaining only the trial solutions that are very likely to improve the design. Hence, FHGWJA initially compares only the cost function W(X_i,tr) computed for the new trial design X_i,tr with the penalized cost function W_p(X_best) computed for the current best record. If W(X_i,tr) > W_p(X_best), the new trial design X_i,tr is directly rejected because it certainly cannot improve the current best record: the cost function value of X_i,tr is by itself larger than the penalized cost of X_best, which also accounts for any constraint violation.
If W(X_i,tr) < W_p(X_best), the cost function value W(X_i,tr) computed for the new trial solution X_i,tr is compared with 1.1 times the cost function value W(X_best) computed for the current best record. If it holds that W(X_i,tr) ≤ 1.1W(X_best), the new trial solution X_i,tr is provisionally included in the new population. The 1.1W(X_best) threshold was selected upon the following rationale. Exploration is characterized by a high probability of improving the design: hence, the condition W(X_i,tr) > 1.1W(X_best) is not likely to occur. In exploitation, local minima should be bypassed by the optimizer to converge to the global optimum. Similar to SA, FHGWJA is allowed to provisionally accept candidate solutions slightly worse than X_best. The threshold level of acceptance probability in state-of-the-art SA formulations such as [81,90] is 0.9 if the ratio between the cost function increment [W(X_i,tr) − W(X_best)] and the annealing temperature T is 0.1. This means that there is a 90% probability of provisionally accepting a design worse than X_best and making it improve in the next iterations. Since T is initially set equal to the expected optimum cost (or roughly corresponds to the cost function value W(X_best)), it is reasonable to assume that solutions up to 10% worse than X_best may improve X_best itself in the next few iterations.
Since each new trial solution X_i,tr provisionally included in the new population was generated by FHGWJA using the classical GWO scheme based on the positions of wolves α, β, and δ, FHGWJA adopts a JAYA-based scheme to avoid stagnation. For that purpose, when W(X_i,tr) ≤ 1.1W(X_best), a new trial solution (X_i,tr)′ is defined as follows:
$$(X_{i,tr})' = X_{i,tr} + \lambda_1 \cdot \left( X_{best} - \left| X_{i,tr} \right| \right) - \lambda_2 \cdot \left( X_\delta - \left| X_{i,tr} \right| \right) \tag{13}$$
where λ_1 and λ_2 are two vectors of NDV random numbers generated in the [0,1] interval. Equation (13) pertains to exploitation, as it perturbs the good design X_i,tr that was provisionally included in the new population for being close to the current best record or likely better than it. The δ wolf is temporarily selected as the worst individual of the population, and FHGWJA is forced to search locally in a region of the design space containing only high-quality solutions.
The new trial solution (X_i,tr)′ is compared with X_i,tr to select the solution that finally updates the population. If W((X_i,tr)′) < 1.1W(X_best) and W((X_i,tr)′) < W(X_i,tr), (X_i,tr)′ is finally retained in the new population and X_i,tr is rejected. If W((X_i,tr)′) < 1.1W(X_best) but it occurs that W((X_i,tr)′) > W(X_i,tr), (X_i,tr)′ is rejected and X_i,tr is finally retained in the new population. If W((X_i,tr)′) > 1.1W(X_best), X_i,tr is finally retained in the new population and (X_i,tr)′ is directly rejected.
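The acceptance cascade described above may be sketched as follows in Python (a simplified, hypothetical rendering; the cost function w and the return conventions are illustrative, while the 1.1 threshold follows the text):

```python
import numpy as np

def screen_and_exploit(x_tr, w, wp_best, w_best, x_best, x_delta, rng):
    # Elitist screening: compare the trial cost with the penalized best cost
    w_tr = w(x_tr)
    if w_tr > wp_best:
        return None                       # certainly cannot improve: reject
    if w_tr <= 1.1 * w_best:
        # Equation (13): JAYA-like exploitation step, with the delta wolf
        # temporarily playing the role of the worst individual
        lam1 = rng.random(x_tr.size)
        lam2 = rng.random(x_tr.size)
        x_pr = (x_tr + lam1 * (x_best - np.abs(x_tr))
                     - lam2 * (x_delta - np.abs(x_tr)))
        w_pr = w(x_pr)
        if w_pr < 1.1 * w_best and w_pr < w_tr:
            return x_pr                   # refined solution replaces the trial
        return x_tr                       # otherwise keep the original trial
    return "repair"                       # trigger the JAYA repair of Eq. (14)
```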
FHGWJA adopts a repair strategy if the new trial solution X_i,tr is such that W(X_i,tr) > 1.1W(X_best). In this case, the trial solution X_i,tr was sufficiently good to avoid immediate rejection (i.e., it holds that W(X_i,tr) < W_p(X_best)) but not good enough to be provisionally included in the new population. Another JAYA-based scheme is adopted in FHGWJA to define the new trial solution (X_i,tr)′ as follows:
$$(X_{i,tr})' = X_i + \lambda_1 \cdot \left( X_{best} - \left| X_i \right| \right) - \lambda_2 \cdot \left( X_{worst} - \left| X_i \right| \right) \tag{14}$$
where λ_1 and λ_2 are two vectors of NDV random numbers similar to those of Equation (13). Absolute values of the optimization variables are taken for each component of the vectors X_i,tr or X_i, respectively, in Equations (13) and (14).
Equation (14) is related to exploration and resembles the classical JAYA solution-updating equation, where each individual is perturbed trying to approach the current best solution X_best and to move away from the worst solution of the population X_worst. FHGWJA involves the whole population of NPOP search agents in the generation of the new trial design (X_i,tr)′ because the wolves α, β, and δ that drive the search in the GWO phase failed to move the other individuals (i.e., X_i) to good positions of the search space located near the prey (i.e., the current best record). Since population renewal aims to improve each individual X_i, FHGWJA searches along the descent direction X_best − X_i, which leads to a better design than X_i, and escapes from the worst design of the population X_worst, which certainly cannot improve X_i.
Similar to the process implemented for Equation (13), FHGWJA compares the cost function value W((X_i,tr)′) corresponding to the new trial design (X_i,tr)′ defined with Equation (14) with 1.1 times the cost function value of the current best record W(X_best). The modified trial design (X_i,tr)′ is included in the new population if it holds that W((X_i,tr)′) ≤ 1.1W(X_best).
However, if the JAYA-based repair strategy fails and it still holds that W((X_i,tr)′) > 1.1W(X_best), FHGWJA has to deal with three solutions, X_i, X_i,tr, and (X_i,tr)′, that do not improve the current best record X_best. Figure 1 shows the mutual positions of the cost function gradient ∇W_best evaluated at the current best record X_best and the non-descent directions S_i = X_i − X_best, S_i,tr = X_i,tr − X_best, and S′_i,tr = (X_i,tr)′ − X_best; P_best is the point of the search space corresponding to the current best record X_best, while the points P_i, P_i,tr, and P_i,tr′, respectively, correspond to the trial solutions X_i, X_i,tr, and (X_i,tr)′. It can be seen from Figure 1 that the non-descent directions make positive scalar products with the gradient ∇W_best. The design may be improved with respect to X_best by moving along the descent directions S_i^(mirr) = −(X_i − X_best), S_i,tr^(mirr) = −(X_i,tr − X_best), and S′_i,tr^(mirr) = −((X_i,tr)′ − X_best) that are, respectively, opposite to the non-descent directions S_i, S_i,tr, and S′_i,tr and hence make negative scalar products with ∇W_best (see Figure 1). Hence, FHGWJA uses a "mirroring" strategy to define the new trial solution X_i,tr^(mirr) as follows:
$$X_{i,tr}^{(mirr)} = X_{best} + \eta_{mirr,i}\, S_i^{(mirr)} + \eta_{mirr,i\text{-}tr}\, S_{i,tr}^{(mirr)} + \eta_{mirr,(i\text{-}tr)'}\, S'^{(mirr)}_{i,tr} \tag{15}$$
where η_mirr,i, η_mirr,i-tr, and η_mirr,(i-tr)′ are three random numbers in the [0,1] interval. Basically, FHGWJA takes the descent direction X_i,tr^(mirr) − X_best obtained by combining three other descent directions that may improve the design. Hence, the trial solution X_i,tr^(mirr) (corresponding to the point P_i,tr(mirr) of the search space shown in Figure 1) is very likely to improve the design. The new solution X_i,tr^(mirr) is evaluated as outlined above for the other trial solutions generated by FHGWJA.
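A minimal Python sketch of the mirroring step of Equation (15) follows (names are illustrative; the sign convention for the mirrored directions matches the reconstruction above):

```python
def mirror_trial(x_best, x_i, x_tr, x_pr, rng):
    # Equation (15): combine the three mirrored (descent) directions
    # -(X - X_best) of the three rejected solutions into one new trial.
    eta = rng.random(3)
    s_i = -(x_i - x_best)       # S_i^(mirr)
    s_tr = -(x_tr - x_best)     # S_i,tr^(mirr)
    s_pr = -(x_pr - x_best)     # S'_i,tr^(mirr)
    return x_best + eta[0] * s_i + eta[1] * s_tr + eta[2] * s_pr
```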
  • Step 5. Resort population, update α, β, and δ wolves, X b e s t , and X w o r s t
Once FHGWJA has updated all candidate solutions X_i, the population is resorted and the NPOP individuals are ranked with respect to their penalized cost function values. Should W_p(X_i,tr), W_p((X_i,tr)′), or W_p(X_i,tr^(mirr)) be greater than W_p(X_i), all new trial solutions generated/refined for X_i are rejected and the old design X_i is retained in the population. However, the elitist strategies based on the 1.1W(X_best) threshold, the JAYA-based schemes of Equations (13) and (14), and the "mirroring" strategy of Equation (15) greatly increase the probability of updating the whole population with better designs than those stored in the previous population.
The best three individuals of the new population are reset as wolves α, β, and δ with the corresponding design vectors X α , X β , and X δ . The best and worst solutions are set as X b e s t and X w o r s t , respectively.
In order to avoid stagnation, the ranking of wolves α, β, and δ is checked with another elitist criterion if FHGWJA did not update their positions in the current iteration. The criterion is again based on the concept of descent direction. The directions S_β = X_best − X_β and S_δ = X_best − X_δ obviously are descent directions with respect to the positions X_β and X_δ of wolves β and δ, as the design improves by moving towards the α wolf. Hence, FHGWJA perturbs X_β and X_δ along the descent directions S_β and S_δ. The positions of wolves β and δ are "mirrored" with respect to X_best as follows:
$$X_\beta^{mirr} = \left( 1 + \eta_{mirr,\beta} \right) X_{best} - \eta_{mirr,\beta} X_\beta, \qquad X_\delta^{mirr} = \left( 1 + \eta_{mirr,\delta} \right) X_{best} - \eta_{mirr,\delta} X_\delta \tag{16}$$
where η_mirr,β and η_mirr,δ are two random numbers in the interval (0,1) that limit the step sizes to reduce the probability of generating infeasible positions. The best three positions amongst X_α, X_β, X_δ, X_β^mirr, and X_δ^mirr are taken as wolves α, β, and δ for the next iteration, while the two worst positions are compared with the rest of the population and may replace X_worst and the 2nd worst design of the old population. The latter operation also covers the case in which X_β^mirr and X_δ^mirr did not improve any of the wolves α, β, and δ.
The mirror strategy used in FHGWJA is illustrated in Figure 2. In particular, the figure shows the following: (i) the original positions of wolves α, β, and δ, respectively, correspond to points P_opt ≡ P_α, P_β, and P_δ of the search space; (ii) the positions of the "mirror" wolves β_mirr and δ_mirr are defined with Equation (16) and, respectively, correspond to points P_β,mirr and P_δ,mirr of the search space; (iii) the cost function gradient vector ∇W_best is evaluated at X_best. It can be seen that the "mirror" wolves β_mirr and δ_mirr defined by Equation (16) lie on descent directions and may even improve X_best, the position of wolf α. In fact, it holds that ∇W_best × (X_β^mirr − X_opt) < 0 and ∇W_best × (X_δ^mirr − X_opt) < 0, where "×" denotes the scalar product between two vectors. Interestingly, since X_δ^mirr − X_opt is in principle a steeper descent direction than X_β^mirr − X_opt, as W_p(X_δ) > W_p(X_β) > W_p(X_best), wolf δ may have a higher probability than wolf β of replacing wolf α even though it originally occupied a worse position than wolf β in the search space. This elitist approach forces FHGWJA to carry out a new exploration of the search space rather than trying to exploit solutions that have not improved the design in the last iteration. Furthermore, by eliminating the two worst designs of the population, FHGWJA increases the average quality of candidate solutions and achieves a higher probability of generating high-quality trial solutions in the next iteration.
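Equation (16) admits an almost one-to-one Python sketch (names are illustrative):

```python
def mirror_leaders(x_best, x_beta, x_delta, rng):
    # Equation (16): reflect the beta and delta wolves about X_best along
    # the descent directions S_beta and S_delta to escape stagnation.
    eta_b, eta_d = rng.random(), rng.random()
    x_beta_mirr = (1.0 + eta_b) * x_best - eta_b * x_beta
    x_delta_mirr = (1.0 + eta_d) * x_best - eta_d * x_delta
    return x_beta_mirr, x_delta_mirr
```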
  • Step 6. Check for convergence
Standard deviations of the optimization variables and of the penalized cost function values of the candidate solutions included in the updated population should decrease as the optimization search approaches the global optimum. Hence, FHGWJA normalizes standard deviations with respect to the average design X_aver = Σ_{i=1}^{NPOP} X_i/NPOP and the average value of the penalty function W_p,aver = Σ_{i=1}^{NPOP} W_p(X_i)/NPOP. The termination criterion is as follows:
$$\max\left\{ \frac{STD\left( \left\| X_1 - X_{aver} \right\|, \left\| X_2 - X_{aver} \right\|, \ldots, \left\| X_{NPOP} - X_{aver} \right\| \right)}{\left\| X_{aver} \right\|}; \; \frac{STD\left( W_{p,1}, W_{p,2}, \ldots, W_{p,NPOP} \right)}{W_{p,aver}} \right\} \le \varepsilon_{conv} \tag{17}$$
where the convergence limit ε_conv is equal to 10⁻⁷. Since convergence occurs in the last stage of the optimization process, when exploitation dominates, Equation (17) requires that all search agents be actually located in the best region of the search space hosting the global optimum and contributing to exploitation. Population diversity must decrease to aggregate the search agents in the neighborhood of X_best. Hence, Equation (17) normalizes the standard deviation of the search agents' positions with respect to the average solution to quantify the level of population diversity. The same rationale is adopted for the penalty function values by normalizing their standard deviation with respect to the average penalty function value: competitive solutions correspond to the optimum solution only when they are effectively close to it, that is, when the search process is near its end. Penalty function values are considered in Equation (17) to account for the effect of solutions that may turn locally infeasible if the set of active constraints changes as the design is perturbed in the neighborhood of the global optimum. Penalty function values coincide with cost function values for feasible solutions or in the case of unconstrained optimization problems.
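The termination test of Equation (17) can be sketched as follows in Python (pop is the NPOP × NDV population matrix and wp the vector of penalized costs; names are illustrative):

```python
import numpy as np

def converged(pop, wp, eps=1e-7):
    # Equation (17): normalized standard deviations of positions and of
    # penalized costs must both drop below eps (10^-7 in the paper).
    x_aver = pop.mean(axis=0)
    wp_aver = wp.mean()
    std_x = np.std(np.linalg.norm(pop - x_aver, axis=1)) / np.linalg.norm(x_aver)
    std_w = np.std(wp) / wp_aver
    return max(std_x, std_w) <= eps
```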
Steps 2 through 6 are repeated until FHGWJA converges to the global optimum.
  • Step 7. Terminate optimization process
FHGWJA terminates the optimization process and writes the output data (i.e., optimum design and optimized cost function value) in the results file.
Figure 3 shows the flow chart of the novel FHGWJA algorithm developed in this study. The present algorithm actually is an advanced grey wolf optimizer, which updates the population while ensuring that the α, β, and δ wolves effectively lead to improving the current best record in each iteration. The JAYA schemes and the elitist strategies implemented in FHGWJA enhance exploration and exploitation, forcing the present algorithm to increase diversity and select high-quality trial designs without performing too many function evaluations. The definition of all descent directions utilized by FHGWJA in the optimization search does not entail any new constraint evaluation, and all decisions are made in FHGWJA by comparing only cost function values. Interestingly, FHGWJA does not need any new internal parameters with respect to the classical GWO and JAYA formulations, which require users to specify the population size NPOP and the maximum number of iterations Nitermax. This feature of FHGWJA is not very common, as even simple hybrid metaheuristic algorithms may adopt a set of new heuristic parameters to switch from one component optimizer to the other. However, FHGWJA implements a master–slave algorithmic architecture where the slave algorithm, JAYA, is utilized only to refine or correct the trial designs generated by the master algorithm, the GWO.
The computational cost of most GWO/JAYA implementations documented in the literature is NPOP × Nitermax function evaluations (also referred to as function calls, or analyses for general problems and structural analyses for structural optimization problems), including constraint evaluations. FHGWJA's ability to generate high-quality trial solutions throughout the optimization process significantly reduces the computational cost of the optimization. As far as population size NPOP is concerned, it is well known that increasing NPOP in metaheuristic optimization may enhance exploration capabilities but also results in a very large number of function evaluations. Indeed, it is not necessary to work with a large population if the search engine can always generate high-quality designs that improve the current best record or the currently perturbed search agents, as is accomplished with FHGWJA. It should be noted that grey wolves hunt in nature in groups comprising, at most, 10–20 individuals (the typical family pack is formed by 5–11 animals, but in some cases the pack can comprise 2–3 families). In view of this, the FHGWJA optimizations carried out in this study utilized a population of 10 individuals. Sensitivity analysis confirmed the validity of this setting.

3. Test Problems and Optimization Results

The FHGWJA algorithm proposed in this study was tested in seven engineering problems including up to 29 optimization variables and 1200 nonlinear constraints. Test cases regarded the following: (i–ii) 2D path planning (minimization of trajectory lengths with 7 or 10 obstacles); (iii) shape optimization of a concrete gravity dam with additional earthquake load (volume minimization); (iv–v) calibration of nonlinear hydrologic models (the Muskingum problem solved with 3 or 25 unknown model parameters to be identified); (vi) optimal crashworthiness design of a vehicle subject to side impact; (vii) weight minimization of a planar 200-bar truss structure under three independent loading conditions.
FHGWJA was implemented in MATLAB as a standalone optimization code. Optimization runs were carried out on a standard PC equipped with a 3.1 GHz AMD processor and 16 GB of RAM. The Mersenne–Twister (MT19937) MATLAB default algorithm was utilized to generate random numbers uniformly distributed between 0 and 1; this algorithm was selected in view of its ability to generate high-quality numbers with a very long period of (2^19937 − 1). In order to draw statistically significant conclusions on the computational efficiency of FHGWJA, 20 independent optimization runs were executed for each test problem starting from different initial populations. The population size and maximum number of iterations of FHGWJA were, respectively, set equal to 10 and 5000. Initial populations included some designs with up to 1000% constraint violation to check whether FHGWJA can quickly approach the best region of the search space containing the optimum solution. Remarkably, FHGWJA always completed all optimization runs within a much lower number of analyses than the theoretical computational budget of 10 × 5000 = 50,000 analyses of the classical GWO and JAYA. The actual computational cost of FHGWJA was almost entirely due (about 90%) to the analyses performed in the optimization search. For example, the optimization runs performed with FHGWJA for the largest test problem solved in this study (weight minimization of a planar 200-bar truss structure including 29 sizing variables) were completed on average within about 25 min of wall clock time.

3.1. 2D Path Planning

Test cases (i–ii) regarded a classical robotics problem: to determine the optimal 2D path connecting two points A and B with a piecewise cubic spline trajectory, defined by a certain number of control points (NCP), that passes through points A, B, and a series of base points (NBP) while avoiding a set of obstacles (NOB). The problem objective is to minimize the total length of the trajectory. In an obstacle-free environment, the shortest path between A and B obviously is a straight line. Cubic splines allow for building a smooth and continuous curved path avoiding obstacles. Path planning complexity increases with the number of obstacles and the number of base points; coordinates of base points are selected as optimization variables.
The 2D path planning problem searching for the minimum length of a 2D trajectory, passing through NBP base points so as to avoid NOB obstacles, and defined by NCP control points connected by (NCP−1) spline segments, can be formulated as follows:
$$\text{Minimize} \quad W(X) = \sum_{r=1}^{NCP-1} \sqrt{ \left( x_{c,r+1} - x_{c,r} \right)^2 + \left( y_{c,r+1} - y_{c,r} \right)^2 } + PENALTY \tag{18}$$
$$\text{Subject to} \quad G(X) = \frac{rad_{ob}(p)}{\sqrt{ \left( x_{c,r} - x_{ob}(p) \right)^2 + \left( y_{c,r} - y_{ob}(p) \right)^2 }} - 1 < 0, \quad G(X) = \frac{rad_{ob}(p)}{\sqrt{ \left( x_{b,i} - x_{ob}(p) \right)^2 + \left( y_{b,i} - y_{ob}(p) \right)^2 }} - 1 < 0 \qquad i = 1, \ldots, NBP; \; p = 1, \ldots, NOB; \; r = 1, \ldots, NCP \tag{19}$$
In Equations (18) and (19), (x_bi, y_bi) are the coordinates of the generic i-th base point of the trajectory, (x_cr, y_cr) are the coordinates of the generic r-th control point of the trajectory, (x_ob(p), y_ob(p)) are the coordinates of the center of the p-th obstacle, and rad_ob(p) is the radius of the circular area limited by the p-th obstacle. The coordinates of the base points are taken as optimization variables.
In general, the optimization problem stated above includes 2·NBP design variables and (NCP + NBP) × NOB nonlinear constraints. Coordinates of base points must also satisfy the relationships x_bi − x_b,i+1 ≤ 0 and y_bi − y_b,i+1 ≤ 0 (i = 1, …, NBP−1) to preserve the local convexity of the trajectory. The "PENALTY" term in Equation (18) aggregates the squared violations of the constraint inequalities occurring for G(X) > 0, multiplied by 1000 to amplify the effect of collision with obstacles; it is equal to 0 if the optimizer finds a feasible solution where the designed trajectory does not intersect the NOB obstacles. A sketch of the cost function is given below. More details on the formulation of path planning problems in robotics can be found in Refs. [91,92,93].
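For illustration, a hedged Python sketch of the cost function of Equations (18) and (19) follows; the spline construction via SciPy's CubicSpline and the obstacle data layout as (x_center, y_center, radius) tuples are assumptions, not details taken from the paper:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def path_length_cost(base_xy, A, B, obstacles, ncp=100, penalty_factor=1000.0):
    # Build a piecewise cubic trajectory through A, the NBP base points and B
    pts = np.vstack([A, base_xy.reshape(-1, 2), B])
    t = np.linspace(0.0, 1.0, len(pts))
    spline = CubicSpline(t, pts)
    ctrl = spline(np.linspace(0.0, 1.0, ncp))       # NCP control points
    # Equation (18): sum of the lengths of the (NCP - 1) segments
    length = np.sum(np.linalg.norm(np.diff(ctrl, axis=0), axis=1))
    # Equation (19): penalize control points falling inside any obstacle
    penalty = 0.0
    for xo, yo, rad in obstacles:
        d = np.hypot(ctrl[:, 0] - xo, ctrl[:, 1] - yo)
        g = rad / d - 1.0                           # > 0 inside the obstacle
        penalty += np.sum(np.maximum(0.0, g) ** 2)
    return length + penalty_factor * penalty
```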
Figure 4 shows the design space of the two path planning problems (all coordinates are expressed in mm) solved in this study considering the presence of 7 or 10 obstacles and the same number of base points. In the former case, there are the start/end points A(1,1) and B(29,29) mm to be connected and the seven obstacles, O1(6,6), O2(7,24), O3(9,15), O4(16,23), O5(20,11), O6(23,6), and O7(24,23) mm, limiting circular areas of radii 3, 1, 3, 2, 2, 1, and 2 mm, respectively. The optimization problem hence includes 14 variables corresponding to the coordinates of the base points (i.e., 2·NBP = 2 × 7 = 14) that can vary between 1 and 29 mm, NOB = 7 obstacles, NCP = 100 control points, and 749 nonlinear constraints (i.e., (NCP + NBP) × NOB = (100 + 7) × 7 = 749).
In the latter case, there are three additional obstacles O8(12,10), O9(14,16), and O10(19.5,17.5) mm, all limiting circular areas of radius 2 mm. The optimization problem hence includes 20 variables (i.e., 2·NBP = 2 × 10 = 20), NOB = 10 obstacles, NCP = 100 control points, and 1100 nonlinear constraints (i.e., (NCP + NBP) × NOB = (100 + 10) × 10 = 1100).
Besides the 20 independent optimization runs, 1 additional optimization run was carried out by setting the initial positions of the base points equal to the coordinates of the obstacle centers shifted by 0.001 mm in order to avoid constraint singularity. This is the worst-case scenario, in which the trajectory intersects all obstacles. The corresponding path lengths were 231.2 mm for 7 base points and 238.5 mm for 10 base points, while the initial constraint violation rose to 159,816% and 204,741%, respectively.
For both problem variants, the FHGWJA hybrid optimizer developed here was compared with the best optimizers quoted in the literature: namely, hybrid harmony search (hybrid HS), hybrid big bang–big crunch (hybrid BBBC), and hybrid fast simulated annealing (HFSA) [81]. Such a comparison should be considered highly indicative because the hybrid optimizers developed in [81] combined efficient metaheuristic search engines with approximate line search strategies and gradient information, while FHGWJA repeatedly utilizes elitist strategies based on the concept of descent direction. The classical grey wolf optimizer (GWO) and the classical and improved JAYA formulations [81] also were compared with FHGWJA to prove the advantages of the hybrid optimizer. Finally, modified harmony search optimization with adaptive parameter updating (mAHS) (derived from the AHS algorithm of Ref. [94]), modified big bang–big crunch with upper bound strategy (mBBBC–UBS) (derived from the BBBC–UBS algorithm of Ref. [95]), and modified sinusoidal differential evolution (MsinDE) (derived from the sinDE algorithm of Ref. [96]) were enhanced with the W(X_i,tr) ≤ 1.1W(X_best) elitist strategy of FHGWJA to gather specific information on the efficiency of this strategy.
The selected population size in Ref. [81] for the hybrid HS, hybrid BBBC, and improved JAYA algorithms was 20, while it was set equal to 10 in this study for the new optimization runs carried out for mAHS, mBBBC–UBS, and MsinDE to have a homogeneous basis of comparison with FHGWJA and its component algorithms, the standard GWO and standard JAYA. All other internal parameters required for FHGWJA's competitors were left the same as those indicated in the original references [81,94,95,96]. The same approach was adopted for the two problem variants with 14 and 20 optimization variables.
Table 1 presents the optimization results for the 2D path planning problem variant with 14 design variables (seven base points and seven obstacles). All data listed in the table refer to feasible solutions; the number of analyses refers to the number of times that nonlinear constraints were evaluated. FHGWJA was the best optimizer overall and designed the shortest path of 40.811 mm, outperforming the component algorithms, the standard GWO and standard JAYA, that designed longer trajectories of 41.104 mm and 41.050 mm, respectively. The optimal positions of base points found with FHGWJA were (3.6617;1.3944), (6.5927;2.7934), (16.562;11.705), (21.231;17.081), (24.186;19.826), (27.855;26.254), and (28.626;28.196) mm. The other algorithms ranked as follows: hybrid HS [81], MsinDE (derived from [96]), hybrid BBBC [81], HFSA [81], mAHS (derived from [94]), mBBBC–UBS (derived from [95]), and improved JAYA [81].
The success rate of FHGWJA was 100% because all independent optimization runs designed shorter trajectories than the best solutions obtained with its competitors. In particular, the worst solution of FHGWJA achieved a path length of 40.890 mm, while the shortest paths designed via all other optimizers ranged between 40.983 and 41.104 mm. Such behavior occurred because FHGWJA inherently balances the exploration and exploitation phases, enhancing the exploration capability of the α, β, and δ wolves without risk of stagnation and forcing the optimizer to search for very high-quality solutions regardless of whether it is performing exploration or exploitation. Interestingly, the W(X_i,tr) ≤ 1.1W(X_best) elitist strategy implemented in FHGWJA was effective also for mAHS, mBBBC–UBS, and MsinDE, which improved their optimized solutions with respect to the original AHS [94], BBBC–UBS [95], and sinDE [96] algorithms utilized in [81]: in particular, MsinDE enhanced its overall rank up to the 3rd position.
FHGWJA was the fastest optimizer overall in the 2D path planning problem with 14 variables, requiring on average only 4581 analyses vs. the 50,000 and 17,204 analyses, respectively, required by standard GWO and JAYA to converge to their optimized solutions. Furthermore, the present algorithm was about 42.3% faster than the improved JAYA. The computational speed of the fastest optimization run of FHGWJA was practically the same as those of the HS/BBBC/SA hybrid algorithms: only 3837 analyses vs. 4150, 3611, and 4127 analyses, respectively. However, the present algorithm generated a feasible intermediate design with a path length of 40.961 mm (hence, shorter than the optimized path lengths of hybrid HS/BBBC/SA ranging between 40.983 and 40.994 mm) within only 2875 analyses.
FHGWJA ranked 6th in terms of standard deviation on optimized path length after mBBBC–UBS, standard GWO, improved JAYA, mAHS, and MsinDE, which, however, completed their optimizations within more analyses than the present algorithm and converged to worse designs. The present algorithm was also robust enough with respect to computational cost, showing only 24.6% dispersion on the number of analyses required in the 20 independent optimization runs.
Figure 5 confirms the superiority of FHGWJA over its competitors in terms of convergence behavior. The figure compares the optimization histories of the best runs for the algorithms listed in Table 1; the average FHGWJA convergence curve is also shown in the figure. The plot is limited to the first 4500 analyses of the optimization history and to the 40.5–48.5 mm trajectory length interval for the sake of clarity. It can be seen from the figure that the best and average optimization runs' convergence curves of FHGWJA always lie below those of the other algorithms. FHGWJA started its best optimization run from the very large cost of 225.35 mm (i.e., about 5.5 times the global optimum length of 40.811 mm found with FHGWJA), while the initial cost for all other optimizers ranged between 59.62 and 184.214 mm. However, the present algorithm immediately recovered the initial gap in cost function with respect to its competitors. The hybrid simulated annealing algorithm (HFSA) of Ref. [81] and the modified big bang–big crunch algorithm (mBBBC–UBS) implementing the W(X_i,tr) ≤ 1.1W(X_best) elitist strategy of FHGWJA were the only optimizers to compete in convergence speed with FHGWJA, respectively, for the first 150 and 190 analyses.
Table 2 presents the optimization results for the 2D path planning problem variant solved with 20 design variables (10 base points and 10 obstacles). FHGWJA was again the best optimizer overall. In fact, it designed a very short trajectory of length 40.898 mm, while the best solutions found with the other algorithms correspond to trajectories longer than 40.898 mm, ranging between 40.997 mm (for hybrid HS) and 41.141 mm (for improved JAYA). Furthermore, the worst solution of FHGWJA corresponds to a path length of 40.924 mm, again shorter than the best designs obtained with all its competitors. The optimal positions of base points found with FHGWJA were (3.9165;1.7729), (7.9192;3.6484), (20.853;16.019), (25.143;21.071), (27.178;24.426), (27.185;24.444), (27.188;24.449), (27.967;26.268), (27.998;26.345), and (28.981;28.578) mm.
FHGWJA was also significantly faster than the other algorithms, completing optimization runs on average within only 5153 analyses, while the computational cost of its competitors ranged between 7408 (for hybrid HS) and 50,000 (for standard GWO) analyses. Standard JAYA was slightly more efficient than improved JAYA, while standard GWO was the worst optimizer overall. The present algorithm was the most robust optimizer, achieving the lowest standard deviation on path length and just 4.8% dispersion on the number of analyses.
The W(Xi,tr) ≤ 1.1W(Xbest) elitist strategy implemented in FHGWJA again significantly improved the performance of the mAHS, mBBBC–UBS, and MsinDE algorithms with respect to the original AHS, BBBC–UBS, and sinDE formulations documented in [94,95,96]: in particular, computational cost decreased on average by 36%. Interestingly, the high level of nonlinearity of the 2D path planning problem variants with 14 and 20 optimization variables was handled better by FHGWJA than by the hybrid HS/BBBC/SA algorithms of Ref. [81]. This occurred in spite of the fact that hybrid HS/BBBC/SA utilized gradient information and multiple line searches to minimize the number of analyses. However, this limited the approximation quality and hence the ability of those optimizers to generate trial solutions effectively located in regions of the search space where Xbest is likely to improve.
Figure 6 compares the optimization histories of the best runs for FHGWJA and its competitors in the 2D path planning problem solved with 20 design variables; the average FHGWJA convergence curve is also shown. The plot is limited to the first 9000 analyses of the optimization history for the sake of clarity. Similar to the 14-variable problem variant, the convergence curves of the best and average FHGWJA optimization runs practically always lie below those of the other algorithms. All algorithms started their best optimization runs from the very large cost of 238.54 mm (i.e., about 5.83 times the optimum trajectory length of 40.898 mm found with FHGWJA). However, the present algorithm recovered the gap in cost function with respect to the target optimum much faster than the other algorithms. The hybrid simulated annealing algorithm (HFSA) of Ref. [81] was the only optimizer to compete in convergence speed with FHGWJA, but only after 2000 analyses, that is, when the distance from the target optimum was less than 5%.
Figure 7a compares the optimized trajectories obtained with FHGWJA and its competitors in the 2D path planning problem variant with 14 design variables. For the sake of clarity, the figure shows only the best six solutions quoted in Table 1. It can be seen that the optimized paths involve only obstacles 1 and 7. Furthermore, the trajectory branch bounded by the start point A and obstacle 1 is practically the same for all optimizers. Optimal trajectories diverge between obstacles 1 and 7: the slope is larger for hybrid HS, HFSA, and standard JAYA than for the present algorithm, hybrid BBBC, and MsinDE. Hybrid HS, hybrid BBBC, HFSA, and MsinDE found very similar path lengths between 40.983 and 40.994 mm; these trajectories are practically symmetric about the line connecting the point near obstacle 1, where the trajectories diverge, and the end point B. Since the trajectory designed with standard JAYA between obstacles 1 and 7 is more curved than those designed with hybrid HS and HFSA, the length of standard JAYA's optimal path increased to 41.050 mm.
The detailed analysis of the optimized trajectory revealed that, in order to minimize length, the trajectory should be rectilinear between obstacles 1 and 7 as well as in the final branch from obstacle 7 to end point B. Interestingly, FHGWJA was the only optimizer to satisfy this requirement. In fact, the optimal path found with FHGWJA between obstacles 1 and 7 may be fitted in the XY plane via a linear regression with R² = 0.999 vs. only R² = 0.992 and R² = 0.994, respectively, for hybrid HS (HFSA's trajectory is very close to the one designed with hybrid HS) and hybrid BBBC (MsinDE's trajectory is very close to the one designed with hybrid BBBC). Furthermore, the final branch of the FHGWJA optimal path may be linearly fitted with R² = 0.998 vs. only R² = 0.991 and R² = 0.994 for hybrid HS and hybrid BBBC, respectively. Besides the higher curvature of the trajectories designed with the other algorithms between obstacles 1 and 7, the final branches of the optimal trajectories designed via hybrid HS, HFSA, hybrid BBBC, and MsinDE show changes in concavity that make the trajectory deviate from a straight line.
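For reference, straightness scores of this kind can be reproduced with an ordinary least-squares fit. The short NumPy sketch below computes the R² of a linear fit, using as example data the FHGWJA base points assumed here to span the branch between obstacles 1 and 7 (which waypoints delimit each branch is an assumption, so the printed value is only indicative of the procedure).

```python
import numpy as np

def linear_fit_r2(x, y):
    """R^2 of a first-degree least-squares fit y = a*x + b."""
    a, b = np.polyfit(x, y, deg=1)
    ss_res = np.sum((y - (a * x + b)) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# FHGWJA base points 3-7 from Table 1 (assumed to lie between obstacles 1 and 7):
x = np.array([16.562, 21.231, 24.186, 27.855, 28.626])
y = np.array([11.705, 17.081, 19.826, 26.254, 28.196])
print(round(linear_fit_r2(x, y), 3))  # close to 1 for a nearly rectilinear branch
```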
Figure 7b compares the optimized trajectories obtained with FHGWJA and its competitors in the 2D path planning problem variant with 20 optimization variables. The figure shows the trajectories of the best and worst optimization runs of FHGWJA as well as the other five best solutions quoted in Table 2. The new obstacles 8 and 10 introduced in the problem became active for all algorithms, while the optimized trajectory approached the new obstacle 9 only for mAHS. This explains why the optimized path lengths of hybrid HS, HFSA, MsinDE, and hybrid BBBC varied at most by 0.037 mm (i.e., between 40.997 mm and 41.034 mm) while the path length increased by 0.041 mm just passing from hybrid BBBC to mAHS (i.e., from 41.034 mm to 41.075 mm). Again, FHGWJA designed the straightest trajectory in the path branches between obstacles 1 and 7 and between obstacle 7 and end point B. This occurred for both the best and worst FHGWJA optimization runs. Conversely, trajectories designed with the hybrid HS/BBBC/SA and MsinDE algorithms exhibited significant changes in slope and curvature, especially in the final branch of the path or between obstacles 10 and 7. For example, the final branch of the FHGWJA optimal paths may be linearly fitted with R² = 0.997 vs. only R² = 0.965, R² = 0.940, and R² = 0.978 for hybrid HS, hybrid BBBC, and HFSA, respectively.
In summary, the FHGWJA algorithm developed in this study is an efficient and robust tool for solving highly nonlinear 2D path planning problems in robotics. The proposed algorithm consistently designed shorter paths while requiring fewer analyses to complete optimization runs than its competitors.

3.2. Shape Optimization of a Concrete Gravity Dam Under Multiple Earthquake Loads

The test case (iii) solved in this study was the shape optimization of a concrete gravity dam under multiple earthquake loads. The real structure is located in the Shafaroud region of Northern Iran: the dam height (Hdam) is 150 m, water elevation in the normal state in the dam upstream (hwater) is 145 m, and the sediment height (hse) is 7 m. The 3D schematic of the dam is shown in Figure 8a. Here, the concrete volume per unit width of the dam was minimized by optimizing the dam cross-section shown in Figure 8b with the nine shape variables (x1,x2,x3,x4,x5,x6,x7,x8,x9) indicated in the figure.
The optimization problem was stated as follows:
$$\text{Minimize} \quad W(\mathbf{X}) = H_{dam}\,x_1 + \frac{1}{2}x_2x_3 + x_3(x_4 + x_6) + \frac{1}{2}x_4x_5 + x_5x_6 + \frac{1}{2}x_6x_7 + \frac{1}{2}x_8x_9 \qquad (20)$$
$$\text{Subject to:} \quad \begin{cases} G_1(\mathbf{X}) = 4 - SFS \le 0 \\ G_2(\mathbf{X}) = 1.5 - SFO \le 0 \\ G_{3\text{–}4}(\mathbf{X}):\ 0 \le \sigma_U \le \sigma_{MAX} \\ G_{5\text{–}6}(\mathbf{X}):\ 0 \le \sigma_D \le \sigma_{MAX} \\ G_7(\mathbf{X}) = x_4 + x_6 - x_8 \le 0 \\ H_8(\mathbf{X}) = x_2 + x_4 + x_6 - H_{dam} = 0 \end{cases} \qquad (21)$$
where SFS is the safety factor against dam sliding; SFO is the safety factor against dam overturning; σU, σD, and σMAX are the vertical fatigue loads at the upstream and downstream faces and the corresponding fatigue limit, respectively. The side constraints on the shape variables are as follows: 1 ≤ x1,x3 ≤ 20 m; 1 ≤ x2 ≤ 50 m; 5 ≤ x4,x5,x6,x7 ≤ 100 m; 50 ≤ x8,x9 ≤ 150 m.
In Equation (21), the SFS factor (involved in the inequality constraint G1) is defined as (f·ΣFv + σ·b)/ΣFH, where f = 0.7 is the static friction coefficient, σ = 3.5 MPa is the allowable shear tension in the cutting surface, and b = (x1 + x3 + x5 + x7 + x9) is the dam base length. The dam is subject to the following vertical forces Fv: the concrete weight W, the vertical component of the water pressure Fhv, and the sediment pressure PSV, directed downward; and the uplift force Fuplift generated at the dam basement and the earthquake force FE, directed upward. The dam is also subject to horizontal forces FH acting in the positive X-direction: an additional earthquake force of magnitude 2FE, the horizontal component of the water pressure Fhh, and the sediment pressure PSO.
The SFO factor (involved in the inequality constraint G2) is defined as ΣMR/ΣMo where MR and Mo, respectively, are the torque of resisting forces (W, Fhv, PSV) and the torque of driving forces (Fuplift, 2FE, Fhh, PSO) on the dam.
The vertical fatigue loads σU and σD (involved in inequality constraints G3–G6) are defined as (ΣFv/b − 6ΣMo/b²) and (ΣFv/b + 6ΣMo/b²), respectively.
The last inequality constraint G7 in Equation (21) is relative to geometric proportions in the vertical direction, while the equality constraint H8 requires the dam height to be 150 m.
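To make the formulation concrete, the Python sketch below assembles the cost function of Equation (20) and the constraint values of Equation (21). The force and moment resultants ΣFv, ΣFH, ΣMR, and ΣMo depend on the hydrostatic, sediment, uplift, and earthquake load models, which are not reproduced here and are therefore passed in as placeholder arguments; only the algebra stated above is implemented, and consistent units are assumed.

```python
import numpy as np

H_DAM = 150.0        # dam height, m
F_FRICTION = 0.7     # static friction coefficient f
SIGMA_SHEAR = 3.5    # allowable shear tension in the cutting surface, MPa

def dam_volume(x):
    """Concrete volume per unit width, Eq. (20); x = (x1, ..., x9) in m.
    The term grouping follows the reconstruction given above."""
    x1, x2, x3, x4, x5, x6, x7, x8, x9 = x
    return (H_DAM * x1 + 0.5 * x2 * x3 + x3 * (x4 + x6)
            + 0.5 * x4 * x5 + x5 * x6 + 0.5 * x6 * x7 + 0.5 * x8 * x9)

def dam_constraints(x, Fv, FH, MR, Mo, sigma_max):
    """Constraint values of Eq. (21), written so that g <= 0 (and h = 0)
    means satisfied. Fv, FH, MR, Mo are the resultants of vertical and
    horizontal forces and resisting/driving moments from the load model."""
    x1, x2, x3, x4, x5, x6, x7, x8, x9 = x
    b = x1 + x3 + x5 + x7 + x9                     # dam base length
    SFS = (F_FRICTION * Fv + SIGMA_SHEAR * b) / FH # sliding safety factor
    SFO = MR / Mo                                  # overturning safety factor
    sigma_u = Fv / b - 6.0 * Mo / b**2             # upstream fatigue load
    sigma_d = Fv / b + 6.0 * Mo / b**2             # downstream fatigue load
    g = np.array([4.0 - SFS,                       # G1: SFS >= 4
                  1.5 - SFO,                       # G2: SFO >= 1.5
                  -sigma_u, sigma_u - sigma_max,   # G3-4: 0 <= sigma_u <= sigma_max
                  -sigma_d, sigma_d - sigma_max,   # G5-6: 0 <= sigma_d <= sigma_max
                  x4 + x6 - x8])                   # G7: vertical proportions
    h8 = x2 + x4 + x6 - H_DAM                      # H8: dam height equals 150 m
    return g, h8
```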
The concrete volume per unit width of 10,502.1 m³ of the real dam was significantly reduced to 7448.774 m³ via the multi-level cross entropy optimizer (MCEO) [97] and the flying squirrel optimizer (FSO) [98]. However, Refs. [97,98] considered neither the equality constraint H8 nor the presence of the horizontal earthquake force 2FE. These issues were instead included in [81], which presented the results of hybrid HS, hybrid BBBC, hybrid fast simulated annealing (HFSA), adaptive harmony search (AHS) [94], big bang–big crunch with upper bound strategy (BBBC–UBS) [95], and sinusoidal differential evolution (sinDE) [96]. For this reason, the FHGWJA algorithm developed in this study was compared with hybrid HS/BBBC/SA as well as with modified adaptive harmony search (mAHS), modified big bang–big crunch with upper bound strategy (mBBBC–UBS), and modified sinusoidal differential evolution (MsinDE). Similar to the 2D path planning problem, mAHS, mBBBC–UBS, and MsinDE included the W(Xi,tr) ≤ 1.1W(Xbest) elitist strategy implemented in FHGWJA to enhance the performance of the original algorithms.
Similar to the 2D path planning problem, population size was set equal to 20 in Ref. [81] for hybrid HS, hybrid BBBC, and improved JAYA, while mAHS, mBBBC–UBS, and MsinDE were run with NPOP = 10 in this study.
Table 3 compares the optimization results of FHGWJA and its competitors for the dam problem. The number of structural analyses corresponds to the number of evaluations of the constraints G1(X)–G7(X) and H8(X). The average number of structural analyses and the corresponding standard deviation, along with the number of structural analyses required by the fastest optimization run, are also reported in the table when available.
FHGWJA was the best optimizer overall also in this test problem. In fact, it converged to the lowest concrete volume per unit width of 7079.558 m³, which is 0.411% smaller than the 7108.772 m³ of the best design quoted in the literature so far, obtained with hybrid BBBC [81]. All algorithms performed significantly better than MCEO [97] and FSO [98]: in fact, optimized values of unit volume ranged from 7079.558 m³ to 7121.134 m³ vs. 7448.774 m³ of Refs. [97,98]. The solutions quoted in Table 3 fully satisfied the design constraints G1(X) through G7(X) and practically satisfied the geometric equality constraint H8(X) on dam height (violation never exceeded 6.067 × 10⁻³%). The algorithms ranked as follows in terms of optimized concrete volume: FHGWJA, hybrid BBBC, mBBBC–UBS, mAHS, hybrid HS, MsinDE, HFSA, improved JAYA, standard GWO, and standard JAYA.
FHGWJA was also the fastest and most robust optimizer. In fact, it converged to the global optimum of 7079.558 m³ in all independent optimization runs with zero standard deviation. Conversely, none of its competitors achieved zero standard deviation on the optimized concrete volume. The present algorithm was on average between 4.7- and 6.5-times faster than its competitors; the fastest optimization run converging to the global optimum was completed by FHGWJA within only 2207 structural analyses vs. the 16,924 structural analyses required by mBBBC–UBS, which, however, obtained a higher value of the dam's concrete volume.
The hybrid formulation implemented in FHGWJA was significantly more efficient than its component algorithms GWO and JAYA, which converged in their standard formulations to best solutions of 7117.441 m³ and 7121.134 m³, respectively. The improved JAYA algorithm of Ref. [81], implementing a simplified elitist strategy where each new trial solution Xi,tr is included in the new population only if it improves its counterpart individual Xi (i.e., if W(Xi,tr) ≤ W(Xi) holds), also could not reduce the unit volume of the dam below 7117.44 m³. The classical GWO was on average about 13-times more computationally expensive than the present algorithm, while classical JAYA and improved JAYA were on average about 6.5- and 5.9-times slower than FHGWJA.
The W(Xi,tr) ≤ 1.1W(Xbest) elitist strategy implemented in FHGWJA again significantly improved the performance of the mAHS, mBBBC–UBS, and MsinDE algorithms with respect to the original AHS, BBBC–UBS, and sinDE formulations documented in [94,95,96]: in particular, the best solutions of mAHS, mBBBC–UBS, and MsinDE became very close to those of hybrid HS/BBBC/SA, while computational cost decreased on average by 16.5%, 6.1%, and 35.2%, respectively.
Figure 9 clearly shows the superior convergence behavior of FHGWJA with respect to its competitors. The figure compares the optimization histories of the best runs for the algorithms listed in Table 3; the average FHGWJA convergence curve is also shown. The plot is limited to the first 21,000 structural analyses of the optimization history and to the 7000–14,000 m³ concrete volume interval for the sake of clarity. It can be seen from the figure that the convergence curves of the best and average FHGWJA optimization runs always lie below those of the other algorithms. FHGWJA started its best optimization run from the very large cost of 18,372.761 m³ (i.e., about 2.6 times the global optimum volume of 7079.558 m³ found with FHGWJA) while the initial cost for all other optimizers ranged between 13,265.001 and 14,209.219 m³. However, the present algorithm immediately recovered the initial gap in cost function with respect to its competitors. The hybrid simulated annealing algorithm (HFSA) of Ref. [81] was the only optimizer to compete in convergence speed with FHGWJA, and only for the first 100 structural analyses.
Table 4 lists the optimized designs obtained with FHGWJA and its main competitors; coordinate values are expressed in meters. The data relative to the standard GWO, standard JAYA, and improved JAYA are omitted for the sake of brevity as these algorithms converged to the worst solutions overall. The 7079.558 m³ design obtained with FHGWJA is very similar to the optimal solutions obtained with the other algorithms: all design variables changed by at most 5.6% except x3, which was fixed to its lower bound of 1 m by FHGWJA while ranging between 4.0001 m and 4.1358 m for the other algorithms.
Figure 10 compares the optimized shapes of the dam (coordinates are expressed in meters) obtained with FHGWJA and its competitors with the real structure and the optimum design of Refs. [97,98]. For the sake of clarity, the average of hybrid HS, hybrid BBBC, and HFSA configurations and the average of mAHS, mBBBC–UBS, and MsinDE configurations are shown in the figure. It can be seen that all designs resemble the aspect ratio of the real dam: in particular, the ratio between optimized base length and height is very close to 1. The optimized profiles are slightly shifted in the positive X-direction with respect to the real dam and the dam configurations optimized in [97,98] to counteract the effect of the additional horizontal earthquake force 2FE included in the design problem. The smaller inclination of the bottom segment of the dam’s upstream side increases the dam’s storage capacity with respect to the real structure.
The safety factors against sliding and overturning evaluated for the optimal solution of FHGWJA became 5.738 and 3.470, respectively, practically the same as those quoted in [81]. Hence, reducing the concrete volume did not affect at all the level of structural safety of the dam. This proves the suitability of the proposed FHGWJA algorithm for civil engineering design problems.

3.3. Calibration of Nonlinear Hydrologic Models

Test problems (iv–v) regarded the calibration of nonlinear hydrologic models including 3 or 25 unknown parameters. This is a typical inverse problem in the hydrology field [99,100,101]. The Muskingum model was selected in this study. The difference between observed river outflows and their counterparts computed with the Muskingum model over a period of 126 h, split into 21 intervals of Δt = 6 h, was minimized. The inverse problem was formulated as an unconstrained optimization problem:
$$\text{Minimize} \quad SSQ = \sum_{t=1}^{NCT}\left(O_{oro,t} - O_{cro,t}\right)^2 \qquad (22)$$
where Ooro,t and Ocro,t, respectively, are the observed and computed river outflows at time t; NCT = 22 is the number of control points defined by the 21 time intervals. The computed river outflow Ocro,t can be determined using the classical three-parameter nonlinear Muskingum model:
$$O_{cro,t} = \frac{1}{1-h}\left(\frac{S_t}{K}\right)^{1/r} - \frac{h}{1-h}\,I_t \qquad (23)$$
In Equation (23), K, h, and r are the parameters of the Muskingum model to be calibrated, taken as optimization variables x1, x2, and x3, respectively; St and It, respectively, are the channel storage and the measured river inflow at time t. The channel storage St is updated as follows:
$$S_{t+1} = S_t + \dot{S}_t\,\Delta t \qquad (24)$$
The storage rate Ṡt at time t is given by:
$$\dot{S}_t = -\frac{1}{1-h}\left(\frac{S_t}{K}\right)^{1/r} + \frac{1}{1-h}\,I_t \qquad (25)$$
Two variants of the hydrologic model calibration problem were solved in this study: (i) the classical problem including only 3 optimization variables, namely the unknown parameters K, h, and r [102,103,104,105,106]; (ii) the extended problem solved in [81], which also includes the 22 storage values St as additional design variables for a total of 25 optimization variables. The side constraints on the Muskingum model parameters and storage values are as follows: 0.01 ≤ K ≤ 0.2; 0.2 ≤ h ≤ 0.3; 1.5 ≤ r ≤ 2.5; and 1 ≤ St ≤ 1000 m³.
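A compact sketch of the resulting calibration loop is given below: it routes the measured inflow through Equations (23)–(25) with the explicit Euler update of Equation (24) and accumulates the SSQ objective of Equation (22). The inflow/outflow records, the initial storage, and the example parameter values are placeholders; the measured 126 h series is not reproduced here.

```python
import numpy as np

DT = 6.0  # routing time step, h (21 intervals over the 126 h period)

def muskingum_ssq(params, inflow, outflow_obs, s0=50.0):
    """SSQ objective of Eq. (22) for the three-parameter model.
    params = (K, h, r); inflow and outflow_obs hold the NCT = 22 records.
    The storage is assumed to stay within its 1-1000 m^3 side bounds."""
    K, h, r = params
    s = s0
    ssq = 0.0
    for i_t, o_t in zip(inflow, outflow_obs):
        o_computed = (s / K) ** (1.0 / r) / (1.0 - h) - h / (1.0 - h) * i_t  # Eq. (23)
        ssq += (o_t - o_computed) ** 2
        s_dot = -(s / K) ** (1.0 / r) / (1.0 - h) + i_t / (1.0 - h)          # Eq. (25)
        s += s_dot * DT                                                      # Eq. (24)
    return ssq

# Hypothetical call with parameters inside the stated side constraints:
# ssq = muskingum_ssq((0.09, 0.27, 2.0), inflow, outflow_obs)
```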
The FHGWJA algorithm developed in this study was compared with the best solutions in the literature obtained with the following: (i) metaheuristic methods such as the hybrid HS/BBBC/SA algorithms of Ref. [81], other HS variants [102], improved immune clonal selection (IICSA) [103], backtracking search with orthogonal initialization and chaotic map (COBSA) [104], genetic algorithms (GA) [104], differential evolution (DE) [104], and particle swarm optimization (PSO) [104]; (ii) modified adaptive harmony search (mAHS), modified big bang–big crunch with upper bound strategy (mBBBC–UBS), and modified sinusoidal differential evolution (MsinDE), all including the FHGWJA elitist strategy W(Xi,tr) ≤ 1.1W(Xbest) to improve the original AHS/BBBC–UBS/sinDE algorithms of Refs. [94,95,96]; (iii) gradient-based methods such as the Broyden–Fletcher–Goldfarb–Shanno algorithm (BFGS) [105]; (iv) zero-order heuristic methods such as the Nelder–Mead simplex algorithm (NMS) [106]. FHGWJA was also compared with the standard GWO as well as the standard and improved JAYA variants [81].
Similar to the previous test cases, population size was set equal to 20 in Ref. [81] for hybrid HS, hybrid BBBC, and improved JAYA, while mAHS, mBBBC–UBS, and MsinDE were run with NPOP = 10 in the present study.
Table 5 presents the optimization results of FHGWJA and its competitors for the three-variable problem variant. The table reports the average number of evaluations of the SSQ cost function (NFE) and the corresponding standard deviation, along with the number of function evaluations required by the fastest optimization run when available.
It can be seen from Table 5 that the present algorithm converged to the lowest value of SSQ = 36.760 in all independent optimization runs, thus reaching 0 standard deviation on the optimized cost. Hence, FHGWJA was the most robust optimizer overall. Standard and improved JAYA, modified AHS/BBBC–UBS/sinDE, COBSA, and NMS also found the global optimum solution. However, none of these algorithms converged to the same design in all optimization runs: standard deviations on optimized SSQ values ranged between 1.442 × 10⁻⁴ (improved JAYA) and 0.01 (COBSA). All other methods found optimized solutions ranging between SSQ = 36.761 (standard GWO and hybrid HS/BBBC/SA) and 36.77 (HS, IICSA, GA, DE, and PSO).
The hybrid FHGWJA algorithm was significantly more efficient than its components, the GWO and JAYA. In particular, the standard GWO converged to SSQ = 36.761 after 50,000 function evaluations (i.e., 25-times slower than FHGWJA) while standard JAYA converged to SSQ = 36.760 but it was on average 2.7-times slower than the present algorithm. Improved JAYA was on average 20.3% slower than FHGWJA while their fastest optimization runs practically required the same number of function evaluations (i.e., 1637 for FHGWJA vs. 1766 for improved JAYA). However, the fastest optimization run of improved JAYA converged to SSQ = 36.762 vs. the global optimum of 36.760 reached with the present algorithm in all runs.
The computational efficiency of the FHGWJA operators generating new trial solutions is confirmed by the fact that the W(Xi,tr) ≤ 1.1W(Xbest) elitist strategy allowed the mAHS, mBBBC–UBS, and MsinDE algorithms to converge to the global optimum value SSQ = 36.760 and to reduce computational cost by 9.1%, 11%, and 35.6% with respect to the original AHS, BBBC–UBS, and sinDE algorithms of Refs. [94,95,96]. Interestingly, mAHS and mBBBC–UBS became very competitive with FHGWJA in terms of average computational cost (respectively, 2368 and 2344 function evaluations vs. only 2005 for FHGWJA) and peak computational speed of the fastest optimization run (respectively, 1670 and 1711 vs. only 1637 for FHGWJA).
The hybrid HS/BBBC/SA algorithms of Ref. [81], which utilized gradient information and approximate line searches, could even complete their fastest optimization runs within fewer function evaluations than FHGWJA (i.e., between 1281 and 1331 vs. 1637 for FHGWJA) but converged to higher values of SSQ than the global optimum of 36.760 found with FHGWJA. Furthermore, hybrid HS/BBBC/SA required on average more function evaluations than FHGWJA (i.e., between 2468 and 2755 vs. only 2005 for FHGWJA). Again, in a highly nonlinear problem like the hydrologic model calibration, the elitist strategies implemented in FHGWJA were extremely efficient in generating high-quality trial solutions over the whole search process. This happened because the generation of trial solutions in FHGWJA is not biased to any extent by the quality of gradient information, which may instead drive hybrid HS/BBBC/SA towards poor trial solutions should the cost function gradient not be well approximated. However, looking at Table 5, it appears that computational cost was not an issue for this test problem: HS [102] and COBSA [104], respectively, required 20,000 and 100,000 function evaluations vs. only about 2000 for FHGWJA, while no information on the effective number of function evaluations was given in Refs. [103,104,105,106] for the metaheuristic (i.e., IICSA, GA, DE, and PSO), gradient-based (BFGS), and zero-order heuristic (NMS) methods.
Figure 11 confirms the superiority of FHGWJA over its competitors in terms of convergence behavior also in this test problem. The figure compares the optimization histories of the best runs for the algorithms listed in Table 5; the average FHGWJA convergence curve is also shown. Since FHGWJA always converged to the global optimum of 36.760, its best run corresponds to the fastest one. The plot is limited to the first 4500 function evaluations of the optimization history and to the 35–70 cost function interval for the sake of clarity. It can be seen from the figure that the convergence curves of the best and average FHGWJA optimization runs lie below those of the other algorithms over practically the whole search history. FHGWJA started its best optimization run from the very large cost of 1520.145 (i.e., about 41.3 times the target optimum of 36.760 quoted in Table 5) while the initial cost for all other optimizers ranged between 181.683 and 1122.263. However, the present algorithm immediately recovered the initial gap in cost function with respect to its competitors, reducing the cost to 37.5 (just 2% more than the target optimum) within only 280 function evaluations, while all other algorithms required at least 520 function evaluations to reach the same intermediate cost function value. The hybrid harmony search algorithm of Ref. [81] was competitive with FHGWJA only for the first 150 function evaluations, while the hybrid simulated annealing algorithm (HFSA) of Ref. [81] came close to the average FHGWJA convergence curve at about 325 function evaluations. The W(Xi,tr) ≤ 1.1W(Xbest) elitist strategy of FHGWJA allowed mAHS, mBBBC–UBS, and MsinDE to approach the cost function reduction rate of FHGWJA from 400 to 700 function evaluations, while the hybrid big bang–big crunch algorithm of Ref. [81] showed a convergence speed similar to FHGWJA after about 750 function evaluations.
Table 6 presents the results obtained with FHGWJA and its competitors in the problem variant with 25 variables; the same nomenclature as Table 5 is utilized. For the sake of clarity, the table reports only the optimal values of the channel storage St (included as optimization variables 4 through 25) found with FHGWJA. The present algorithm was the best optimizer also in this test problem variant. In fact, it converged to the lowest value overall of SSQ = 36.761, followed by the HFSA and hybrid HS/BBBC algorithms of Ref. [81], which obtained SSQ = 36.762 and 36.763, respectively.
Standard JAYA ranked 5th overall in terms of SSQ but required on average 5.1-times more function evaluations than FHGWJA to complete its optimization runs. Improved JAYA and standard GWO found the worst solutions amongst all optimizers (SSQ = 36.806 and 36.818, respectively), requiring on average about 1.64- and 11.3-times more function evaluations than the present algorithm, respectively. These results confirm the validity of the hybridization scheme implemented in FHGWJA.
The W(Xi,tr) ≤ 1.1W(Xbest) elitist strategy of FHGWJA significantly improved the performance of the AHS/BBBC–UBS/sinDE algorithms of Refs. [94,95,96] also in the 25-variable problem variant. In particular, in their best optimization runs, mAHS reduced the optimized cost from 37.078 of AHS [94] to 36.774 (i.e., the 6th best result amongst those quoted in Table 6), mBBBC–UBS from 36.826 of BBBC–UBS [95] to 36.786, and MsinDE from 37.101 of sinDE [96] to 36.786. Furthermore, mAHS, mBBBC–UBS, and MsinDE reduced the average computational cost by 32.7%, 34.4%, and 28.4% with respect to the original algorithms of Refs. [94,95,96], thus becoming competitive enough with FHGWJA in terms of average computational speed.
The superiority of FHGWJA over its competitors is confirmed by the fact that the present algorithm achieved the lowest standard deviation on the optimized cost (only 2.919 × 10⁻¹² vs. 3.333 × 10⁻¹¹ to 1.395 × 10⁻² for the other algorithms). Remarkably, the worst solution of FHGWJA practically coincided with its best solution, and it was always better than the best solutions found with its competitors. Furthermore, FHGWJA was faster than the other optimizers, completing the 20 independent optimization runs on average within only 4438 function evaluations vs. the 4744 to 5848 evaluations of hybrid HS/BBBC/SA and modified AHS/BBBC–UBS/sinDE. Interestingly, the hybrid HS/BBBC/SA algorithms of Ref. [81] completed their fastest optimization runs within fewer function evaluations than FHGWJA (respectively, only 3277, 3656, and 3887 vs. 4122). However, FHGWJA found intermediate solutions corresponding to SSQ = 36.763 (i.e., the best solutions of hybrid HS and hybrid BBBC) always within only 3200 function evaluations, and SSQ = 36.762 (i.e., the best solution of HFSA) always within only 3700 function evaluations.
Figure 12 compares the optimization histories of the best runs for the algorithms listed in Table 6; the average FHGWJA convergence curve is also shown. The plot is limited to the first 4500 function evaluations of the optimization history and to the 35–80 cost function interval for the sake of clarity. The convergence behavior depicted in the figure resembles that observed for the three-variable problem variant. The convergence curves of the best and average FHGWJA optimization runs again lie below those of the other algorithms over practically the whole search history. FHGWJA started its best optimization run from the very large cost of 3547.602 (i.e., about 96.5 times the target optimum of 36.761 quoted in Table 6) while the initial cost for all other optimizers ranged between 461.502 and 2063.123, except the improved JAYA variant of Ref. [81] that started from 75.965. However, the present algorithm immediately recovered the initial gap in cost function with respect to its competitors, reducing the cost to 37.2 (just 1.2% more than the target optimum) within only 320 function evaluations, while all other algorithms required at least 995 function evaluations to reach the same intermediate cost function value. The hybrid big bang–big crunch algorithm of Ref. [81] was competitive with FHGWJA only for the first 190 function evaluations. The W(Xi,tr) ≤ 1.1W(Xbest) elitist strategy of FHGWJA allowed mAHS and MsinDE to approach the cost function reduction rate of FHGWJA from 220 to 310 function evaluations, while mBBBC–UBS generated the closest intermediate designs to those of FHGWJA between 650 and 1000 function evaluations.
The excellent performance of FHGWJA in the two variants of the hydrologic model calibration problem confirms the suitability of the proposed algorithm for highly nonlinear optimization problems. Interestingly, the higher level of design freedom introduced in the optimization search by the 22 additional variables did not affect at all the robustness of FHGWJA, which converged to practically the same values (with at most 0.105% difference) of the Muskingum model parameters (i.e., K, h, and r) and channel storage values St found in the three-variable problem variant.

3.4. Optimal Crashworthiness Design of a Vehicle Subject to Side Impact

The test case (vi) solved in this study regarded the optimal crashworthiness design of a vehicle. The goal was to minimize the weight of the vehicle subject to side impact. Crashworthiness optimization must ensure that the vehicle effectively absorbs and dissipates impact energy, minimizing the forces transferred to the occupants. The vehicle structure must be optimized to provide maximum protection without significantly increasing weight or compromising other performance aspects [107,108,109]. Figure 13a shows the typical finite element schematization of a vehicle subject to side impact by the barrier [108], while Figure 13b illustrates the vehicle frame parts to be optimized. Since the B-pillar zone is typically the most affected in side impact tests, the design variables are associated with this zone.
The optimal crashworthiness design problem solved in this study included 11 optimization variables, of which 7 are related to the vehicle's structural geometry: B-pillar inner thickness (x1), B-pillar reinforcement thickness (x2), floor side inner thickness (x3), cross members thickness (x4), door beam thickness (x5), door beltline reinforcement thickness (x6), and roof rail thickness (x7). Two discrete variables refer to material selection and quantify the efficiency of the structure in absorbing energy during impact: the shape factor of the B-pillar inner (x8) and the shape factor of the floor side inner (x9). The last two variables are related to the presence of the obstacle: the barrier height (x10) and the hitting position of the barrier with respect to the center of mass of the vehicle (x11). In summary, the optimization problem includes nine continuous and two discrete variables.
Here, the optimal crashworthiness design problem was stated as follows:
$$\text{Minimize} \quad W(\mathbf{X}) = 1.98 + 4.90x_1 + 6.67x_2 + 6.98x_3 + 4.01x_4 + 1.78x_5 + 2.73x_7 \qquad (26)$$
Subject to:
$$\begin{aligned}
G_1(\mathbf{X}) &= 1.16 - 0.3717x_2x_4 - 0.00931x_2x_{10} - 0.484x_3x_9 + 0.01343x_6x_{10} \le 1 \\
G_2(\mathbf{X}) &= 0.261 - 0.0159x_1x_2 - 0.188x_1x_8 - 0.019x_2x_7 + 0.0144x_3x_5 + 0.0008757x_5x_{10} \\
&\quad + 0.08045x_6x_9 + 0.00139x_8x_{11} + 0.00001575x_{10}x_{11} \le 0.32 \\
G_3(\mathbf{X}) &= 0.214 + 0.00817x_5 - 0.131x_1x_8 - 0.0704x_1x_9 + 0.03099x_2x_6 - 0.018x_2x_7 + 0.0208x_3x_8 \\
&\quad + 0.121x_3x_9 - 0.00364x_5x_6 + 0.0007715x_5x_{10} - 0.0005354x_6x_{10} + 0.00121x_8x_{11} \\
&\quad + 0.00184x_9x_{10} - 0.02x_2^2 \le 0.32 \\
G_4(\mathbf{X}) &= 0.074 - 0.61x_2 - 0.163x_3x_8 + 0.001232x_3x_{10} - 0.166x_7x_9 + 0.227x_2^2 \le 0.32 \\
G_5(\mathbf{X}) &= 28.98 + 3.818x_3 - 4.2x_1x_2 + 0.0207x_5x_{10} + 6.63x_6x_9 - 7.7x_7x_8 + 0.32x_9x_{10} \le 32 \\
G_6(\mathbf{X}) &= 33.86 + 2.95x_3 + 0.1792x_{10} - 5.057x_1x_2 - 11.0x_2x_8 - 0.0215x_5x_{10} - 9.98x_7x_8 \\
&\quad + 22.0x_8x_9 \le 32 \\
G_7(\mathbf{X}) &= 46.36 - 9.9x_2 - 12.9x_1x_8 + 0.1107x_3x_{10} \le 32 \\
G_8(\mathbf{X}) &= 4.72 - 0.5x_4 - 0.19x_2x_3 - 0.0122x_4x_{10} + 0.009325x_6x_{10} + 0.000191x_{11}^2 \le 4 \\
G_9(\mathbf{X}) &= 10.58 - 0.674x_1x_2 - 1.95x_2x_8 + 0.02054x_3x_{10} - 0.0198x_4x_{10} + 0.028x_6x_{10} \le 9.9 \\
G_{10}(\mathbf{X}) &= 16.45 - 0.489x_3x_7 - 0.843x_5x_6 + 0.0432x_9x_{10} - 0.0556x_9x_{11} - 0.000786x_{11}^2 \le 15.7
\end{aligned} \qquad (27)$$
The expressions written above for the objective function W(X) (i.e., the structural weight of the vehicle parts to be optimized) and the constraints G(X) were fitted in [108] using response surface models. The 10 inequality constraint functions stated by Equation (27) define limitations on the force transferred to the dummy's abdomen (G1), the velocities of the dummy's upper/middle/lower chest (G2, G3, and G4, respectively), the deflections of the dummy's upper/middle/lower ribs (G5, G6, and G7, respectively), the force transferred to the dummy's pubic symphysis (G8), the velocity at the B-pillar middle point (G9), and the velocity at the B-pillar front door (G10). Forces are expressed in kN, velocities in mm/s, and deflections in mm.
The side constraints on the optimization variables are as follows: 0.5 ≤ x1,x2,x3,x4,x5,x6,x7 ≤ 1.5 mm (continuous variables); x8,x9 ∈ {0.192;0.345} (discrete variables); −30 ≤ x10,x11 ≤ 30 mm (continuous variables).
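Since the whole problem is expressed through closed-form response surfaces, its evaluation is straightforward to script. The sketch below implements the weight of Equation (26), a representative subset of the constraints of Equation (27), and one simple way to repair the discrete shape factors x8 and x9 onto their admissible set; the repair rule is illustrative only, not the FHGWJA repairing scheme.

```python
import numpy as np

DISCRETE_SET = np.array([0.192, 0.345])  # admissible values of x8 and x9

def repair_discrete(x):
    """Snap x8 and x9 (zero-based indices 7 and 8) to the nearest
    admissible discrete value."""
    x = np.asarray(x, dtype=float).copy()
    for i in (7, 8):
        x[i] = DISCRETE_SET[np.argmin(np.abs(DISCRETE_SET - x[i]))]
    return x

def vehicle_weight(x):
    """Structural weight of Eq. (26), kg; x holds x1..x11 zero-based."""
    return (1.98 + 4.90*x[0] + 6.67*x[1] + 6.98*x[2]
            + 4.01*x[3] + 1.78*x[4] + 2.73*x[6])

def some_constraints(x):
    """Three representative constraints of Eq. (27), rewritten as g <= 0."""
    g1 = (1.16 - 0.3717*x[1]*x[3] - 0.00931*x[1]*x[9]
          - 0.484*x[2]*x[8] + 0.01343*x[5]*x[9]) - 1.0
    g5 = (28.98 + 3.818*x[2] - 4.2*x[0]*x[1] + 0.0207*x[4]*x[9]
          + 6.63*x[5]*x[8] - 7.7*x[6]*x[7] + 0.32*x[8]*x[9]) - 32.0
    g8 = (4.72 - 0.5*x[3] - 0.19*x[1]*x[2] - 0.0122*x[3]*x[9]
          + 0.009325*x[5]*x[9] + 0.000191*x[10]**2) - 4.0
    return np.array([g1, g5, g8])
```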
The best solution available in the literature for the optimal crashworthiness problem is that obtained with the multi-strategy fusion improved gray wolf optimization (IGWO) algorithm with a structural weight of 21.39473 kg [69]. In particular, IGWO outperformed the chaotic GWO, CSO, and tunicate swarm algorithm (TSA) that converged to optimized structural weights between 21.46164 and 22.70296 kg. The adaptive dynamic self-learning grey wolf optimization algorithm (ASGWO) [70] obtained a slightly higher optimized weight than TSA, 22.87188 kg. The starling murmuration optimizer (SMO) of Ref. [47] converged to the optimized weight of 22.84298 kg, practically the same as the improved continuous ant colony optimization algorithm (LIACOR) that obtained 22.84299 kg. The other metaheuristic algorithms tested in [47] (i.e., the krill herd algorithm (KH), the best variant of artificial bee colony (ABC), the Harris hawks optimizer (HHO), the comprehensive learning particle swarm optimizer (CLPSO), and the whale optimization algorithm (WOA)) found optimized weights ranging between 22.88596 and 23.12717 kg.
Table 7 compares the optimal solution obtained with the FHGWJA algorithm developed in this study with those of the aforementioned optimizers. The results are grouped as follows: (i) FHGWJA and its component algorithms, the standard GWO and JAYA; (ii) IGWO and its three best competitors compared in [69]; (iii) ASGWO [70]; (iv) SMO and its six best competitors compared in [47]. All designs reported in the table are practically feasible as the maximum constraint violation was 0.001%. Details on computational cost and statistical performance for independent optimization runs are reported below the table when available.
It can be seen that FHGWJA was again the best optimizer also for this test case. In fact, the present algorithm converged to the very low structural weight of 21.38340 kg, while the optimized weights obtained with the other algorithms ranged between 21.39473 kg (IGWO, used in [69]) and 23.12717 kg (WOA, used in [47]). The chaotic GWO (used in [69]) ranked 3rd overall, finding a structural weight very close to those of FHGWJA and IGWO: 21.46164 kg vs. 21.38340 and 21.39473 kg, respectively. However, the optimized values of variables x8 and x9 listed in Table 7 for IGWO and chaotic GWO do not coincide with the admissible discrete values {0.192;0.345} originally set for this test problem: the optimized designs of IGWO and chaotic GWO would become infeasible as soon as x8 is set equal to 0.192 or 0.345, thus violating the constraint functions G6, G7, G8, and G9 by up to 13.9 and 23.7%, respectively.
Interestingly, the average optimized weight achieved with FHGWJA over the 20 independent runs was lower than the best solutions of IGWO and chaotic GWO: only 21.3905 kg vs. 21.39473 and 21.46164 kg, respectively. Furthermore, the present algorithm required on average only 1309 structural analyses vs. the 30,000 analyses indicated in [69] for the IGWO and chaotic GWO. The hybrid optimizer proposed here clearly outperformed the standard GWO and standard JAYA that obtained 22.84755 and 23.19155 kg, respectively. Furthermore, FHGWJA required on average only 1309 structural analyses vs. 50,000 of standard GWO and 7560 of standard JAYA for completing the search process. The adaptive dynamic self-learning grey wolf optimization algorithm (ASGWO) [70] ranked below the standard GWO in terms of optimized weight (22.87188 kg vs. 22.84755 kg) but it required only 15,000 structural analyses vs. 50,000 analyses of the standard GWO. In summary, using rather simple elitist strategies and properly refining/repairing trial solutions (so that they always lie on descent directions) allowed FHGWJA to outperform complex GWO schemes combining special perturbations of design variables or several strategies taken from various metaheuristic algorithms. This happened because only the present algorithm can always generate high-quality trial solutions throughout the optimization process.
The starling murmuration optimizer (SMO) and the other algorithms compared in Ref. [47] were also less efficient than FHGWJA. In fact, they required 30,000 structural analyses vs. only 1309 for the present algorithm, and found heavier optimized weights than FHGWJA (i.e., from 22.84298 to 23.12717 kg vs. only 21.38340 kg for FHGWJA).
The present algorithm was very robust: the standard deviation on the optimized weight was only 0.0333% of the average optimized weight, while the ratio between the standard deviation on the number of structural analyses and the average number of structural analyses was 26.2%. Statistical data relative to the other algorithms compared with FHGWJA usually were not available in the literature. The statistical dispersion observed for this test case can be explained in view of the formulation of the optimal crashworthiness design problem. The cost function W(X) stated by Equation (26) depends only on variables x1, x2, x3, x4, x5, and x7, while x10 is the only variable entering the expressions of all constraint functions (see Equation (27)). Hence, the search space of this problem hosts many competitive solutions corresponding to significantly different optimized values of the design variables, with obvious implications in terms of statistical dispersion of solutions. For example, the optimized values of the first seven variables for the solutions listed in Table 7 varied by at most 27.8%, while the optimized values of variables x8 through x11 varied from 36% to 259.5%, with the largest variation occurring for variable x10, which must be adjusted to satisfy all constraints. Remarkably, FHGWJA was always able to precisely approach and efficiently explore and exploit the best region of the search space in all optimization runs in view of its inherent ability to generate high-quality trial solutions over the whole search process. This greatly reduced the standard deviation of the optimized solution.
The excellent convergence behavior and robustness of FHGWJA is confirmed in Figure 14, which compares the optimization history of the proposed algorithm and its component algorithms GWO and JAYA. The convergence curves relative to the best and average optimization runs of FHGWJA are shown in the figure while those of the other algorithms listed in Table 7 were not reported in [47,69,70]. For the sake of clarity, the plot is limited to the first 1800 structural analyses. FHGWJA started its best optimization run from the structural weight of 40.032 kg, which is about 87.2% larger than the best optimized weight of 21.383 kg found by the present algorithm. In spite of this, in its best run, FHGWJA generated a feasible intermediate design just 1.48% worse than the global optimum after only 350 structural analyses, that is at approximately 26% of the optimization process. Furthermore, the best and average optimization runs’ convergence curves for FHGWJA practically coincided after only 830 structural analyses, that is at just 60.7% of the best run’s optimization history.
The results presented in this section confirm once again the suitability of FHGWJA for solving complex engineering optimization problems. This conclusion is supported by the fact that the present algorithm was compared with 14 other state-of-the-art metaheuristic algorithms (actually 28, considering all methods evaluated in Refs. [47,69,70]), including three advanced GWO variants developed just a few months before the present study.

3.5. Weight Minimization of a Planar 200-Bar Truss Structure

The last test case solved in this study regarded the weight minimization of the planar 200-bar truss structure shown in Figure 15. The structure, made of steel (Young's modulus of 206.91 GPa; mass density of 7833.413 kg/m³), is composed of 200 elements connected by 77 nodes. Because of structural symmetry, the 200 elements are categorized into the 29 groups indicated in Table 8: the elements of each group share the same cross-sectional area, which is taken as a sizing optimization variable. Hence, this test problem has 29 design variables.
The structure is subjected to three independent loading conditions:
(a)
Concentrated forces of 4.45 kN (i.e., 1000 lbf) acting in the positive X-direction at nodes 1, 6, 15, 20, 29, 34, 43, 48, 57, 62, and 71 (denoted by the green horizontal arrows in Figure 15);
(b)
Concentrated forces of 44.497 kN (i.e., 10,000 lbf) acting in the negative Y-direction at nodes 1, 2, 3, 4, 5, 6, 8, 10, 12, 14, 15, 16, 17, 18, 19, 20, 22, 24, 26, 28, 29, 30, 31, 32, 33, 34, 36, 38, 40, 42, 43, 44, 45, 46, 47, 48, 50, 52, 54, 56, 57, 58, 59, 60, 61, 62, 64, 66, 68, 70, 71, 72, 73, 74, and 75 (denoted by the vertical arrows in Figure 15);
(c)
Loading conditions (a) and (b) acting together.
The truss weight must be minimized under 1200 nonlinear constraints on element stresses, which must not exceed ±68.97 MPa (i.e., ±10,000 psi, the same limit stress in tension and compression). The cross-sectional areas taken as sizing variables can vary between 0.64516 cm² (i.e., 0.1 in²) and 645.16 cm² (i.e., 100 in²).
The optimization problem was stated as follows:
$$\text{Minimize} \quad W(\mathbf{X}) = \rho\,\gamma \sum_{j=1}^{NEL} A_j L_j \qquad (28)$$
$$\text{Subject to:} \quad \begin{cases} G(\mathbf{X}) = \dfrac{\sigma_{j,ilc}}{\sigma_{t,lim}} - 1 \le 0 \\[6pt] G(\mathbf{X}) = \dfrac{\sigma_{j,ilc}}{\sigma_{c,lim}} - 1 \le 0 \end{cases} \qquad j = 1,\dots,NEL;\quad ilc = 1,2,3 \qquad (29)$$
In Equation (28), ρ and γ, respectively, are the mass density and the specific weight of the material; NEL is the number of elements of the truss; Aj and Lj, respectively, denote the cross-sectional area and the length of the j-th element of the structure. The design vector X includes the values of the 29 sizing variables corresponding to the cross-sectional areas of the 200 truss elements.
In Equation (29), σj,ilc denotes the stress developed in the j-th element of the structure under the ilc-th loading condition, while σt,lim = 10,000 psi and σc,lim = −10,000 psi, respectively, denote the limit stresses in tension and compression. A total of 1200 nonlinear constraints on element stresses are included in the optimization problem.
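For illustration, the sketch below evaluates the weight of Equation (28) and the stress constraint values of Equation (29); the finite element stress analysis is abstracted behind the shape of the `stresses` array, and the element-to-group map of Table 8 is represented by a hypothetical index array.

```python
import numpy as np

SIGMA_LIM = 10_000.0  # psi, same magnitude of the limit stress in tension and compression

def truss_weight(areas, lengths, unit_weight):
    """Structural weight, Eq. (28); unit_weight stands for the rho*gamma
    material factor of the paper, with consistent units assumed."""
    return unit_weight * np.sum(areas * lengths)

def stress_constraints(stresses):
    """Eq. (29): stresses has shape (NEL, 3), one column per load case.
    Values <= 0 mean the corresponding constraint is satisfied; for
    NEL = 200 this returns the 1200 constraint values of the problem."""
    g_tension = stresses / SIGMA_LIM - 1.0          # sigma <= +10,000 psi
    g_compression = stresses / (-SIGMA_LIM) - 1.0   # sigma >= -10,000 psi
    return np.concatenate([g_tension.ravel(), g_compression.ravel()])

def element_areas(x, group_index):
    """Expand the 29 sizing variables onto the 200 elements; group_index
    is the element-to-group map of Table 8 (hypothetical placeholder)."""
    return np.asarray(x)[group_index]
```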
This design example is an average-scale structural optimization problem that has been extensively investigated in the literature. The target optimum weight for this problem is 11,542.4 kg, but there is also a local optimum corresponding to a structural weight of 11,544 kg. The non-convexity of the search space makes this test problem hard to solve with metaheuristic optimizers. The hybrid harmony search, big bang–big crunch, and simulated annealing algorithms developed in [81] converged to a structural weight of 11,542.409 kg, practically the same as the target optimum. Other very competitive solutions were obtained with the following: (i) the eigenvectors of covariance matrix (ECM) algorithm derived from the covariance matrix adaptation evolution strategy high-performance optimizer (11,543.521 kg, [110]); (ii) the variant of the success history-based adaptive differential evolution with linear population size reduction algorithm using an ensemble sinusoidal approach to automatically adapt the DE scaling factor (LSHADE-Epsin) (11,543.980 kg, [111]); (iii) the modified coyote optimization algorithm (MCOA), which uses chaotic sequences instead of random sequences to perturb solutions (11,544.007 kg, [112]); (iv) the cyclic neighborhood network topology particle swarm optimizer (CNNT-PSO), which handles population diversity based on the interactions between each particle and its neighboring individuals (11,545.330 kg, [113]); (v) the hybrid geometric mean optimizer (hGMO), which integrates GMO with variable neighborhood search to enhance exploitation (11,545.568 kg, [114]).
Other advanced MHAs converged to designs lighter than the 11,542.4 kg target weight, but their optimized solutions were infeasible. For example, the corrected multi-level and multi-point simulated annealing (CMLPSA) algorithm of Ref. [90], which generates new trial designs using a population of candidate designs lying on descent directions rather than evolving a single point as in classical SA, found an optimized weight of 11,542.137 kg (practically the same as the target optimum weight) violating the stress constraints by only 0.0709%. The hybrid HPSACO algorithm [115], combining harmony search, particle swarm, and ant colony optimization, found an optimum weight of 11,410.737 kg, but the stress constraints were violated by 9.97%. An improved Harris hawks optimization algorithm was used in [116]: the optimized design weighed 11,533.983 kg (i.e., 0.073% less than the target optimum weight) with 9.55% violation of the stress constraints. PSOHHO [117], combining the particle swarm optimization and Harris hawks optimization methods, obtained a structural weight of 11,374.477 kg (i.e., a 1.46% reduction), but the corresponding solution violated the stress constraints by about 27.6%. Finally, biogeography-based optimization [118] reduced the structural weight to 11,320.758 kg (i.e., a 1.92% reduction), but the corresponding design violated the stress constraints by 11.33%.
In view of the above arguments, the optimization results obtained with the FHGWJA algorithm developed in this study were compared with those of the hybrid HS/BBBC/SA [81], ECM [110], LSHADE-Epsin [111], MCOA [112], CNNT-PSO [113], and hGMO [114] algorithms providing the best feasible solutions available in the literature. Furthermore, FHGWJA was compared with the classical GWO and improved GWO [119] as well as with classical and improved JAYA [81]. The political optimizer (PO) algorithm implemented in [120] also was included in the comparison as its best optimized weight was close to those of the classical GWO and standard and improved JAYA. The comparisons presented in this section should be considered highly significant because each competitor selected from Refs. [81,110,111,112,113,114,119,120] was reported to outperform from 5 to 76 other state-of-the-art metaheuristic algorithms.
According to the literature, population size values for this test problem were as follows: 50 for hybrid HS [81], MCOA [112], and hGMO [114]; 100 for hybrid BBBC [81]; 20 for improved JAYA [81]; 14 for ECM [110]; 1000 for CNNT-PSO [113]; 121 for PO [120]. Such a large variation in the number of search agents confirms the difficulties encountered with metaheuristic optimizers in solving the 200-bar truss problem. Interestingly, FHGWJA could use the smallest population size overall. This was the logical consequence of the inherent ability of FHGWJA to select high-quality trial solutions throughout the search process.
Table 9 presents the optimization results obtained with FHGWJA and its competitors in the 200-bar truss problem. The table reports the optimized values of the cross-sections (expressed in in²) and the structural weight for the best run along with the corresponding number of structural analyses; the best, average, and worst values of the optimized weight with the corresponding standard deviation (ST Dev) over the independent runs; and the number of structural analyses (NSA) required in the search process. Statistical data on computational cost (i.e., the average number of structural analyses and the corresponding standard deviation) are reported when available. The detailed data reported for this test problem in Refs. [115,116,117,118], which present solutions violating the stress limits, are not replicated in Table 9 for the sake of brevity.
It can be seen from Table 9 that the FHGWJA was the best optimizer also for this test case. In fact, it converged to the overall lowest structural weight of 11,541.380 kg. Furthermore, all optimization runs of FHGWJA found practically feasible designs (in fact, the maximum element stress evaluated for the optimized solutions of FHGWJA was only 10,000.412 psi vs. the 10,000 psi stress limit set for this problem) that were lighter than the target optimum weight of 11,542.4 kg. The hybrid HS/BBBC/SA algorithms of Ref. [81] were very competitive with the present algorithm because they always converged to the target optimum with at most 0.00622 kg standard deviation vs. 0.4157 kg of FHGWJA. The average computational cost of FHGWJA (i.e., 3356 structural analyses) practically coincides with the mean of average computational costs recorded for hybrid HS (i.e., 2736 analyses), hybrid BBBC (i.e., 1676 analyses), and hybrid fast SA (i.e., 5806 analyses).
The other best competitors of FHGWJA, namely ECM [110], LSHADE-Epsin [111], MCOA [112], CNNT-PSO [113], and hGMO [114], obtained optimized weights ranging between 11,543.521 kg (ECM) and 11,545.568 kg (hGMO) and required between 20,000 (hGMO) and 290,000 (LSHADE-Epsin) structural analyses vs. only 3356 structural analyses required on average by the present algorithm. FHGWJA clearly outperformed its component algorithms: the best solutions of improved JAYA [81], standard JAYA, and standard GWO achieved optimized weights of 11,550.054 kg, 11,553.355 kg, and 11,558.650 kg, respectively, vs. only 11,541.380 kg for FHGWJA; furthermore, they required 31,580, 40,941, and 50,000 structural analyses, respectively, vs. only 3356 analyses for FHGWJA. The improved GWO variant (IGWO) of Ref. [119], which uses an exponential variation scheme for the vectors A1, A2, and A3 of Equations (6) and (9), prematurely converged to an optimized weight of 11,689.883 kg within 23,760 structural analyses. The political optimizer (PO) of Ref. [120] obtained a slightly better solution than the standard GWO (the 2nd worst solution overall), weighing 11,557.384 kg vs. 11,558.650 kg, within about 28,000 structural analyses vs. the 50,000 analyses required by the standard GWO.
The present algorithm was very robust. In fact, it ranked 4th overall in terms of standard deviation on optimized weight but it achieved a rate of success of 100% in finding slightly lighter designs than the target global optimum of 11,542.4 kg, and it never got trapped in the 11,544 kg local minimum of structural weight. Furthermore, the ratio between standard deviation of computational cost and average computational cost achieved with FHGWJA was only 12.8%.
The W(Xi,tr) ≤ 1.1W(Xbest) elitist strategy of FHGWJA was again effective in improving the performance of the AHS/BBBC–UBS/sinDE algorithms of Refs. [94,95,96], which were reported in [81] to obtain optimized structural weights of 11,542.410 kg (yet with 0.9998% violation of the stress constraints), 11,542.410 kg (with 1.076% constraint violation), and 11,555.006 kg (with 0.806% constraint violation), respectively. The modified mAHS, mBBBC–UBS, and MsinDE variants now converged to the fully feasible design listed in Table 9 for hybrid HS/BBBC/SA, weighing 11,542.409 kg. The computational cost of mAHS, mBBBC–UBS, and MsinDE decreased on average by about 30% with respect to the original algorithms, yet remained about 3.5- to 8.5-times higher than that of FHGWJA. These results are not reported in Table 9 for the sake of brevity.
Figure 16 compares the optimization histories of the best runs for the algorithms listed in Table 9; the average FHGWJA convergence curve is also shown. The plot is limited to the first 8000 structural analyses of the optimization history and to the 11,000–31,000 kg structural weight interval for the sake of clarity. FHGWJA started its best optimization run from the very large cost of 208,700.225 kg (i.e., about 18.1 times the target optimum of 11,541.380 kg quoted in Table 9), while the initial cost for all other optimizers ranged between 157,234.459 and 185,304.986 kg, except for the ECM (eigenvectors of covariance matrix) [110] and CNNT-PSO (cyclic neighborhood network topology particle swarm optimizer) [113] algorithms, which started from 20,437.901 and 11,754.998 kg, respectively. However, the present algorithm immediately recovered the initial gap in cost function with respect to its competitors: for example, within only 210 and 1330 structural analyses for ECM and CNNT-PSO, respectively. The convergence curves of the best and average FHGWJA optimization runs again lie below those of all other algorithms over practically the whole search history. The hybrid harmony search and big bang–big crunch algorithms of Ref. [81] were competitive enough with FHGWJA, approaching the best optimization run of the present algorithm after about 2000 and 1400 structural analyses, respectively, that is, towards the end of their optimization histories. However, while hybrid HS/BBBC stopped improving the design after reaching their best structural weight of 11,542.409 kg, the present algorithm continued to reduce the structural weight to 11,541.380 kg. The convergence curves of the best and average FHGWJA optimization runs practically coincided after about 1750 structural analyses, that is, at only 50.5% of the best run's optimization history.
The data presented in this section confirmed the tendency of metaheuristic optimizers to prematurely converge to sub-optimal designs in the 200-bar truss weight minimization problem. However, FHGWJA emerged once again as the best algorithm in view of its inherent ability to generate high-quality trial solutions throughout the optimization process. Remarkably, the present algorithm was superior to high-performance optimizers such as ECM and LSHADE-Epsin that are competitive even with CEC (IEEE Congress on Evolutionary Computing) competition winners.

4. Conclusions

The novel hybrid metaheuristic algorithm FHGWJA combining the grey wolf optimizer (GWO) and JAYA was presented in this study. FHGWJA is an advanced grey wolf optimizer using elitist strategies and JAYA-based schemes to improve the current best record in each iteration. Exploration and exploitation are enhanced by forcing FHGWJA to increase diversity and select high-quality trial designs lying on descent directions. All operations carried out in FHGWJA to generate new trial solutions are computationally cheap because decision making mostly relies on evaluation of cost function rather than optimization constraints.
The present algorithm was successfully tested in seven engineering problems including up to 29 optimization variables and 1200 nonlinear constraints. Test cases covered several fields: robotics (2D path planning), hydrology (calibration of a nonlinear hydrologic model), and hydraulic, mechanical, and civil engineering (shape optimization of a concrete gravity dam, optimal crashworthiness design, and weight minimization of a planar 200-bar truss structure).
The extensive comparison with the optimization literature carried out in this study proved the superiority of FHGWJA over a large number of competitors that were reported to have found the best available solutions for each test problem. In fact, the present algorithm always converged to or improved the target global optimum designs indicated in the literature. Remarkably, FHGWJA was the 1st or 2nd fastest algorithm in all test problems. The present algorithm was considerably faster than the standard GWO and JAYA formulations, as well as advanced variants of its component algorithms, thus confirming the efficiency of the proposed hybridization scheme. Finally, FHGWJA was very robust, achieving a high rate of success in all test problems with low standard deviations on the optimized cost and on the number of analyses required in the search process.
In summary, FHGWJA is a very efficient tool for engineering optimization. Further research will investigate the suitability of FHGWJA for large-scale optimization problems with thousands of constraints. Optimization with discrete variables will also be analyzed. In this regard, the optimal solution found with FHGWJA for the 200-bar truss problem was simply mapped onto a discrete set of available cross-sections, obtaining results competitive with the best solution available in the literature (a minimal sketch of this post-processing step is given below).
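The discrete post-processing mentioned above can be sketched as follows. The sketch assumes that rounding each optimized cross-sectional area up to the nearest catalog entry preserves feasibility; the catalog and function names are hypothetical, not taken from the paper.

```python
import numpy as np

def snap_to_catalog(areas_opt: np.ndarray, catalog: np.ndarray) -> np.ndarray:
    """Map continuous optimized areas onto a discrete set of available
    cross-sections by taking, for each variable, the smallest catalog
    entry not below the continuous optimum (rounding up is a conservative
    choice with respect to stress/displacement constraints)."""
    catalog = np.sort(np.asarray(catalog, dtype=float))
    idx = np.searchsorted(catalog, areas_opt, side="left")  # first entry >= area
    idx = np.clip(idx, 0, catalog.size - 1)  # areas above the largest section fall back to it
    return catalog[idx]
```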

Author Contributions

Conceptualization, methodology, validation, C.F. and L.L.; formal analysis, C.F. and C.I.P.; software, investigation, data curation, C.F.; writing—original draft preparation, C.F.; writing—review and editing, L.L. and C.I.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data relative to this study are available upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Goldberg, D.E. Genetic Algorithms in Search, Optimization, and Machine Learning; Addison-Wesley: Reading, MA, USA, 1989. [Google Scholar]
  2. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–72. [Google Scholar] [CrossRef]
  3. Storn, R.M.; Price, K.V. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  4. Price, K.V.; Storn, R.M.; Lampinen, J.A. Differential Evolution a Practical Approach to Global Optimization; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  5. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar]
  6. Beyer, H.G.; Schwefel, H.P. Evolution strategies—A comprehensive introduction. Nat. Comput. 2002, 1, 3–52. [Google Scholar] [CrossRef]
  7. Simon, D. Biogeography-based optimization. IEEE Trans. Evolution. Comp. 2008, 12, 702–713. [Google Scholar] [CrossRef]
  8. Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
  9. Van Laarhoven, P.J.M.; Aarts, E.H.L. Simulated Annealing: Theory and Applications; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1987. [Google Scholar]
  10. Kaveh, A.; Talatahari, S. A novel heuristic optimization method: Charged system search. Acta Mech. 2010, 213, 267–289. [Google Scholar] [CrossRef]
  11. Kaveh, A.; Motie Share, M.A.; Moslehi, M. Magnetic charged system search: A new meta-heuristic algorithm for optimization. Acta Mech. 2013, 224, 85–107. [Google Scholar] [CrossRef]
  12. Kaveh, A.; Mahdavi, V.R. Colliding bodies optimization: A novel meta-heuristic method. Comput. Struct. 2014, 139, 18–27. [Google Scholar] [CrossRef]
  13. Kaveh, A.; Bakhshpoori, T. A new metaheuristic for continuous structural optimization: Water evaporation optimization. Struct. Multidiscip. Optim. 2016, 54, 23–43. [Google Scholar] [CrossRef]
  14. Kaveh, A.; Dadras, A. A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Adv. Eng. Softw. 2017, 110, 69–84. [Google Scholar] [CrossRef]
  15. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl. Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  16. Abdechiri, M.; Meybodi, M.R.; Bahrami, H. Gases Brownian motion optimization: An algorithm for optimization (GBMO). Appl. Soft Comput. 2013, 13, 2932–2946. [Google Scholar] [CrossRef]
  17. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics based algorithm. Fut. Gen. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  18. Kaveh, A.; Khayat Azad, M. A new meta-heuristic method: Ray optimization. Comput. Struct. 2012, 112–113, 283–294. [Google Scholar] [CrossRef]
  19. Abdel-Basset, M.; Mohamed, R.; Sallam, K.M.; Chakrabortty, R.K. Light spectrum optimizer: A novel physics-inspired metaheuristic optimization algorithm. Mathematics 2022, 10, 3466. [Google Scholar] [CrossRef]
  20. Erol, O.K.; Eksin, I. A new optimization method: Big Bang-Big Crunch. Adv. Eng. Softw. 2006, 37, 106–111. [Google Scholar] [CrossRef]
  21. Rashedi, E.; Nezamabadipour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inform. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  22. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  23. Ahmadianfar, I.; Heidari, A.A.; Gandomi, A.H.; Chu, X.; Chen, H. RUN beyond the metaphor: An efficient optimization algorithm based on Runge Kutta method. Expert Syst. Appl. 2021, 181, 115079. [Google Scholar] [CrossRef]
  24. Abualigah, L.; Diabat, A.; Mirjalili, S.; Elaziz, M.A.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  25. Glover, F.; Laguna, M. Tabu Search; Kluwer Academic Publishers: Boston, MA, USA, 1997. [Google Scholar]
  26. Geem, Z.W.; Kim, J.H.; Loganathan, G. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  27. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems. CAD Comput. Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  28. Rao, R.V. Jaya: A simple and new optimization algorithm for solving constrained and unconstrained optimization problems. Int. J. Ind. Eng. Comp. 2016, 7, 19–34. [Google Scholar]
  29. Zhang, Y.; Jin, Z. Group teaching optimization algorithm: A novel metaheuristic method for solving global optimization problems. Expert Syst. Appl. 2020, 148, 113246. [Google Scholar] [CrossRef]
  30. Matoušová, I.; Trojovský, P.; Dehghani, M.; Trojovská, E.; Kostra, J. Mother optimization algorithm: A new human-based metaheuristic approach for solving engineering optimization. Sci. Rep. 2023, 13, 10312. [Google Scholar] [CrossRef] [PubMed]
  31. Trojovský, P. A new human-based metaheuristic algorithm for solving optimization problems based on preschool education. Sci. Rep. 2023, 13, 21472. [Google Scholar] [CrossRef] [PubMed]
  32. Gopi, S.; Mohapatra, P. Learning cooking algorithm for solving global optimization problems. Sci. Rep. 2024, 14, 13359. [Google Scholar] [CrossRef]
  33. Kaveh, A.; Talatahari, S. Optimum design of skeletal structures using imperialist competitive algorithm. Comput. Struct. 2010, 88, 1220–1229. [Google Scholar] [CrossRef]
  34. Askari, Q.; Younas, I.; Saeed, M. Political Optimizer: A novel socio-inspired meta-heuristic for global optimization. Knowl. Based Syst. 2020, 195, 105709. [Google Scholar] [CrossRef]
  35. Clerc, M. Particle Swarm Optimization; ISTE Publishing Company: London, UK, 2006. [Google Scholar]
  36. Dorigo, M.; Stutzle, T. Ant Colony Optimization; MIT Press: Cambridge, MA, USA, 2004. [Google Scholar]
  37. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  38. Yang, X.-S. Firefly algorithm, stochastic test functions and design optimization. Int. J. Bio-Inspir. Com. 2010, 2, 78–84. [Google Scholar] [CrossRef]
  39. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
  40. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  41. Pierezan, J.; Dos Santos Coelho, L. Coyote Optimization Algorithm: A new metaheuristic for global optimization problems. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation, Rio de Janeiro, Brazil, 8–13 July 2018; p. 8477769. [Google Scholar]
  42. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl. Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  43. Coufal, P.; Hubálovský, S.; Hubálovská, M.; Balogh, Z. Snow Leopard Optimization algorithm: A new nature-based optimization algorithm for solving optimization problems. Mathematics 2021, 9, 2832. [Google Scholar] [CrossRef]
  44. Yang, X.-S.; Deb, S. Engineering optimisation by cuckoo search. Int. J. Math. Model. Numer. Optim. 2010, 1, 330–343. [Google Scholar] [CrossRef]
  45. Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct. 2016, 169, 1–12. [Google Scholar] [CrossRef]
  46. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  47. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. Starling murmuration optimizer: A novel bio-inspired algorithm for global and engineering optimization. Comput. Methods Appl. Mech. Eng. 2022, 392, 114616. [Google Scholar] [CrossRef]
  48. Yang, X.-S.; Gandomi, A.H. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29, 464–483. [Google Scholar] [CrossRef]
  49. Kaveh, A.; Farhoudi, N. A new optimization method: Dolphin echolocation. Adv. Eng. Softw. 2013, 59, 53–70. [Google Scholar] [CrossRef]
  50. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  51. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  52. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  53. Sadee, H.T.; Abdulazeez, A.M. Giant trevally optimizer (GTO): A novel metaheuristic algorithm for global optimization and challenging engineering problems. IEEE Access 2022, 10, 121615–121640. [Google Scholar] [CrossRef]
  54. Knypinski, L.; Devarapalli, R.; Gillon, F. The hybrid algorithms in constrained optimization of the permanent magnet motors. IET Sci. Meas. Technol. 2024, 1–7. [Google Scholar] [CrossRef]
  55. Ficarella, E.; Lamberti, L.; Degertekin, S.O. Mechanical identification of materials and structures with optical methods and metaheuristic optimization. Materials 2019, 12, 2133. [Google Scholar] [CrossRef]
  56. Degertekin, S.O.; Minooei, S.M.; Santoro, L.; Trentadue, B.; Lamberti, L. Large-scale truss-sizing optimization with enhanced hybrid HS algorithm. Appl. Sci. 2021, 11, 3270. [Google Scholar] [CrossRef]
  57. Degertekin, S.O.; Yalcin Bayar, G.; Lamberti, L. Parameter free Jaya algorithm for truss sizing-layout optimization under natural frequency constraints. Comput. Struct. 2021, 245, 106461. [Google Scholar] [CrossRef]
  58. Guo, J.; Liu, C.; Cao, J.; Jiang, D. Damage identification of wind turbine blades with deep convolutional neural networks. Renew. Energy 2021, 174, 122–133. [Google Scholar] [CrossRef]
  59. Silvestrin, P.V.; Ritt, M. An iterated tabu search for the multi-compartment vehicle routing problem. Comput. Oper. Res. 2017, 81, 191–202. [Google Scholar] [CrossRef]
  60. Brion, D.A.J.; Pattinson, S.W. Generalisable 3D printing error detection and correction via multi-head neural networks. Nat. Commun. 2022, 13, 4654. [Google Scholar] [CrossRef]
  61. Meenachi, L.; Ramakrishnan, S. Metaheuristic search based feature selection methods for classification of cancer. Pattern Recognit. 2021, 119, 108079. [Google Scholar] [CrossRef]
  62. Oliva, D.; Ortega-Sanchez, N.; Hinojosa, S.; Perez-Cisneros, M. Modern Metaheuristics in Image Processing; CRC Press: Boca Raton, FL, USA, 2023. [Google Scholar]
  63. Sharma, I.; Kumar, V.; Sharma, S. A comprehensive survey on grey wolf optimization. Recent Adv. Comput. Sci. Commun. 2022, 15, 323–333. [Google Scholar]
  64. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Zamani, H.; Bahreininejad, A. GGWO: Gaze cues learning-based grey wolf optimizer and its applications for solving engineering problems. J. Comput. Sci. 2022, 61, 101636. [Google Scholar] [CrossRef]
  65. Tsai, H.-C.; Shi, J.-Y. Potential corrections to grey wolf optimizer. Appl. Soft. Comp. 2024, 161, 111776. [Google Scholar] [CrossRef]
  66. Nadimi-Shahraki, M.H.; Taghian, S.; Mirijalili, S. An improved grey wolf optimizer for solving engineering problems. Expert Syst. Appl. 2021, 166, 113917. [Google Scholar] [CrossRef]
  67. Li, K.; Li, S.; Huang, Z.; Zhang, M.; Xu, Z. Grey Wolf Optimization algorithm based on Cauchy-Gaussian mutation and improved search strategy. Sci. Rep. 2022, 12, 18961. [Google Scholar] [CrossRef]
  68. Ma, S.; Fanga, Y.; Zhao, X.; Liu, Z. Multi-swarm improved Grey Wolf Optimizer with double adaptive weights and dimension learning for global optimization problems. Math. Comput. Simul. 2023, 205, 619–641. [Google Scholar] [CrossRef]
  69. Qiu, Y.; Yang, X.; Chen, S. An improved gray wolf optimization algorithm solving to functional optimization and engineering design problems. Sci. Rep. 2024, 14, 14190. [Google Scholar] [CrossRef] [PubMed]
  70. Zhang, Y.; Cai, Y. Adaptive dynamic self-learning grey wolf optimization algorithm for solving global optimization problems and engineering problems. Math. Biosci. Eng. 2024, 21, 3910–3943. [Google Scholar] [CrossRef]
  71. Singh, N.; Singh, S.B. Hybrid algorithm of particle swarm optimization and grey wolf optimizer for improving convergence performance. J. Appl. Math. 2017, 2017, 2030489. [Google Scholar] [CrossRef]
  72. Zhang, X.; Kang, Q.; Cheng, J.; Wang, X. A novel hybrid algorithm based on Biogeography-Based Optimization and Grey Wolf Optimizer. Appl. Soft Comp. 2018, 67, 197–214. [Google Scholar] [CrossRef]
  73. Wang, J.-S.; Li, S.-X. An improved grey wolf optimizer based on differential evolution and elimination mechanism. Sci. Rep. 2019, 9, 7181. [Google Scholar] [CrossRef]
  74. Yang, Q.; Liu, J.; Wu, Z.; He, S. A fusion algorithm based on whale and grey wolf optimization algorithm for solving real-world optimization problems. Appl. Soft Comp. 2023, 146, 110701. [Google Scholar] [CrossRef]
  75. Tu, B.; Wang, F.; Huo, Y.; Wang, X. A hybrid algorithm of grey wolf optimizer and Harris hawks optimization for solving global optimization problems with improved convergence performance. Sci. Rep. 2023, 13, 22909. [Google Scholar] [CrossRef]
  76. Zitar, R.A.; Al-Betar, M.A.; Awadallah, M.A.; Doush, I.A.; Assaleh, K. An intensive and comprehensive overview of JAYA algorithm, its versions and applications. Arch. Comput. Methods Eng. 2022, 29, 763–792. [Google Scholar] [CrossRef]
  77. Da Silva, L.S.A.; Lucio, Y.L.S.; Coelho, L.D.; Mariani, V.C.; Rao, R.V. A comprehensive review on Jaya optimization algorithm. Artif. Intell. Rev. 2023, 56, 4329–4361. [Google Scholar] [CrossRef]
  78. Rao, R.V.; Saroj, A. A self-adaptive multi-population based Jaya algorithm for engineering optimization. Swarm Evol. Comput. 2017, 37, 1–26. [Google Scholar]
  79. Zhang, Y.; Chi, A.; Mirjalili, S. Enhanced Jaya algorithm: A simple but efficient optimization method for constrained engineering design problems. Knowl.-Based Syst. 2021, 233, 107555. [Google Scholar] [CrossRef]
  80. Zhang, G.; Wan, C.; Xue, S.; Xie, L. A global-local hybrid strategy with adaptive space reduction search method for structural health monitoring. Appl. Math. Model. 2023, 121, 231–251. [Google Scholar] [CrossRef]
  81. Ficarella, E.; Lamberti, L.; Degertekin, S.O. Comparison of three novel hybrid metaheuristic algorithms for structural optimization problems. Comput. Struct. 2021, 244, 106395. [Google Scholar] [CrossRef]
  82. Degertekin, S.O.; Lamberti, L.; Ugur, I.B. Discrete sizing/layout/topology optimization of truss structures with an advanced Jaya algorithm. Appl. Soft Comp. 2019, 79, 363–390. [Google Scholar] [CrossRef]
  83. Chakraborty, U.K. Semi-steady-state Jaya algorithm for optimization. Appl. Sci. 2020, 10, 5388. [Google Scholar] [CrossRef]
  84. Cheng, Y.; Lyu, X.; Mao, S. Optimization design of brushless DC motor based on improved JAYA algorithm. Sci. Rep. 2024, 14, 5427. [Google Scholar] [CrossRef]
  85. Zhang, H.L.; Liu, Q.; Zhang, C. Hybrid Jaya algorithm with genetic algorithm for solving non-linear optimization problems. Math. Probl. Eng. 2019, 2019, 6397802. [Google Scholar]
  86. Gholami, K.; Olfat, H.; Gholami, J. An intelligent hybrid JAYA and crow search algorithms for optimizing constrained and unconstrained problems. Soft Comput. 2021, 25, 14393–14411. [Google Scholar] [CrossRef]
  87. Zhang, Y.J.; Wang, Y.F.; Tao, L.W.; Yan, Y.X.; Zhao, J.; Gao, Z.M. Self-adaptive classification learning hybrid JAYA and Rao-1 algorithm for large-scale numerical and engineering problems. Eng. Appl. Artif. Intell. 2022, 114, 105069. [Google Scholar] [CrossRef]
  88. Gholami, J.; Nia, F.A.; Sanatifar, M.; Zawbaa, H.M. Effective hybridization of JAYA and teaching–learning-based optimization algorithms for numerical function optimization. Soft Comput. 2023, 27, 9673–9691. [Google Scholar] [CrossRef]
  89. Mlaouhi, I.; Ben Guedria, N.; Bouraoui, C. An efficient hybrid differential evolution–Jaya algorithm for enhancing vibration behaviour in automotive turbocharger systems. Eng. Opt. 2023, 56, 1493–1515. [Google Scholar] [CrossRef]
  90. Lamberti, L. An efficient simulated annealing algorithm for design optimization of truss structures. Comput. Struct. 2008, 86, 1936–1953. [Google Scholar] [CrossRef]
  91. Ghafil, H.N.; Jármai, K. Dynamic differential annealed optimization: New metaheuristic optimization algorithm for engineering applications. Appl. Soft Comp. 2020, 93, 106392. [Google Scholar] [CrossRef]
  92. Gasparetto, A.; Zanotto, V. Optimal trajectory planning for industrial robots. Adv. Eng. Softw. 2010, 41, 548–556. [Google Scholar] [CrossRef]
  93. Li, B.; Shao, Z. Simultaneous dynamic optimization: A trajectory planning method for nonholonomic car-like robots. Adv. Eng. Softw. 2020, 87, 30–42. [Google Scholar] [CrossRef]
  94. Hasancebi, O.; Erdal, F.; Saka, M.P. Adaptive harmony search method for structural optimization. ASCE J. Struct. Eng. 2010, 136, 419–431. [Google Scholar] [CrossRef]
  95. Kazemzadeh Azad, S.; Hasancebi, O.; Kazemzadeh Azad, S.; Erol, O.K. Upper bound strategy in optimum design of truss structures: A big bang-big crunch algorithm based application. Adv. Struct. Eng. 2013, 16, 1035–1046. [Google Scholar] [CrossRef]
  96. Draa, A.; Bouzoubia, S.; Boukhalfa, I. A sinusoidal differential evolution algorithm for numerical optimisation. Appl. Soft Comp. 2015, 27, 99–126. [Google Scholar] [CrossRef]
  97. MiarNaeimi, F.; Azizyan, G.; Rashki, M. Multi-level cross entropy optimizer (MCEO): An evolutionary optimization algorithm for engineering problems. Eng. Comput. 2018, 34, 719–739. [Google Scholar] [CrossRef]
  98. Azizyan, G.; MiarNaeimi, F.; Rashki, M.; Shabakhty, S. Flying Squirrel Optimizer (FSO): A novel SI-based optimization algorithm for engineering problems. Iran. J. Optim. 2019, 11, 177–205. [Google Scholar]
  99. Wilson, E.M. Engineering Hydrology; MacMillan: London, UK, 1974. [Google Scholar]
  100. Tung, Y.K. River flood routing by nonlinear Muskingum method. ASCE J. Hydraul. Eng. 1985, 111, 1447–1460. [Google Scholar] [CrossRef]
  101. Wang, W.-C.; Tian, W.-C.; Xu, D.-M.; Chau, K.-W.; Ma, Q.; Liu, C.-J. Muskingum models’ development and their parameter estimation: A state-of-the-art review. Water Resour. Manag. 2023, 37, 3129–3150. [Google Scholar] [CrossRef]
  102. Lee, K.S.; Geem, Z.W. A new meta-heuristic algorithm for continuous engineering optimization: Harmony search theory and practice. Comput. Methods Appl. Mech. Eng. 2005, 194, 3902–3933. [Google Scholar] [CrossRef]
  103. Luo, J.; Yang, X.; Xie, J. Evaluation and improvement of routing procedure for nonlinear Muskingum models. Int. J. Civ. Eng. 2016, 14, 47–59. [Google Scholar] [CrossRef]
  104. Yua, X.; Wu, X.; Tian, H.; Yuan, Y.; Adnan, R.M. Parameter identification of nonlinear Muskingum model with backtracking search algorithm. Water. Resour. Manag. 2016, 30, 2767–2783. [Google Scholar]
  105. Geem, Z.W. Issues in optimal parameter estimation for the nonlinear Muskingum flood routing model. Eng. Optimiz. 2014, 46, 328–339. [Google Scholar] [CrossRef]
  106. Barati, R. Parameter estimation of nonlinear Muskingum models using Nelder-Mead Simplex algorithm. ASCE J. Hydrol. Eng. 2014, 16, 946–954. [Google Scholar] [CrossRef]
  107. Gu, L.; Yang, R.; Tho, C.-H.; Makowskit, M.; Faruquet, O.; Li, Y.L.Y. Optimisation and robustness for crashworthiness of side impact. Int. J. Veh. Des. 2001, 26, 348–360. [Google Scholar] [CrossRef]
  108. Youn, B.D.; Choi, K.K. A new response surface methodology for reliability-based design optimization. Comput. Struct. 2004, 82, 241–256. [Google Scholar] [CrossRef]
  109. Youn, B.D.; Choi, K.; Yang, R.-J.; Gu, L. Reliability-based design optimization for crashworthiness of vehicle side impact. Struct. Multidiscip. Optim. 2004, 26, 272–283. [Google Scholar] [CrossRef]
  110. Pouriyanezhad, E.; Rahami, H.; Mirhosseini, S.M. Truss optimization using eigenvectors of the covariance matrix. Eng. Comput. 2021, 37, 2207–2224. [Google Scholar] [CrossRef]
  111. Ozturk, H.T.; Kahraman, H.T. Meta-heuristic search algorithms in truss optimization: Research on stability and complexity analyses. Appl. Soft Comp. 2023, 145, 110573. [Google Scholar] [CrossRef]
  112. Pierezan, J.; Dos Santos Coelho, L.; Cocco Mariani, V.; Vasconcelos Segundo, E.H.; Prayogo, D. Chaotic coyote algorithm applied to truss optimization problems. Comput. Struct. 2021, 242, 106353. [Google Scholar] [CrossRef]
  113. Kim, T.-H.; Byun, J.-I. Truss sizing optimization with a diversity-enhanced cyclic neighborhood network topology particle swarm optimizer. Mathematics 2020, 8, 1087. [Google Scholar] [CrossRef]
  114. Pham, V.H.S.; Nguyen Dang, N.T.; Nguyen, V.N. Efficient truss design: A hybrid geometric mean optimizer for better performance. Appl. Comput. Intell. Soft Comput. 2024, 2024, 4216718. [Google Scholar] [CrossRef]
  115. Kaveh, A.; Talatahari, S. Particle swarm optimizer, ant colony strategy and harmony search scheme hybridized for optimization of truss structures. Comput. Struct. 2009, 87, 267–283. [Google Scholar] [CrossRef]
  116. Yassami, M.; Ashtari, P. PSOHHO hybrid optimization algorithm for truss optimization. AUT J. Civil Eng. 2022, 6, 295–318. [Google Scholar]
  117. Khajeh, A.; Kiani, A.; Seraji, M.; Dashti, H. Weight minimization of truss structures using an improved Harris hawks optimization algorithm. Innov. Infrastruct. Solut. 2023, 8, 112. [Google Scholar] [CrossRef]
  118. Massah, S.R.; Ahmadi, H. Weight optimization of truss structures by the biogeography-based optimization algorithm. Civil Eng. Infrastruct. J. 2021, 54, 129–144. [Google Scholar]
  119. Kaveh, A.; Zakian, P. Improved GWO algorithm for optimal design of truss structures. Eng. Comput. 2018, 34, 685–707. [Google Scholar] [CrossRef]
  120. Awad, R. Sizing optimization of truss structures using the political optimizer (PO) algorithm. Structures 2021, 33, 4871–4894. [Google Scholar] [CrossRef]
Figure 1. Schematic of the “mirroring” strategy used in FHGWJA if trial solutions Xi, Xi,tr, and X′i,tr do not improve the current best record.
Figure 2. Schematic of the elitist mirror strategy used in FHGWJA to avoid stagnation of wolves α, β, and δ.
Figure 3. Flowchart of the FHGWJA algorithm developed in this study.
Figure 4. Trajectory spaces of the 2D path planning problem variants including 7 and 10 obstacles.
Figure 5. Comparison of convergence curves of FHGWJA and its competitors for the 2D path planning problem solved with 14 design variables. On the vertical axis, commas are used as decimal separators.
Figure 6. Comparison of convergence curves of FHGWJA and its competitors for the 2D path planning problem solved with 20 design variables.
Figure 7. Comparison of optimized trajectories found with FHGWJA and its competitors in the 2D path planning problem: (a) 7 base points and 14 variables; (b) 10 base points and 20 variables.
Figure 8. (a) 3D schematic of the gravity dam; (b) cross-section of the concrete gravity dam indicating shape variables and main geometric dimensions.
Figure 9. Comparison of convergence curves of FHGWJA and its competitors for the concrete dam optimization problem.
Figure 10. Comparison of concrete dam optimized shapes obtained with FHGWJA and its competitors.
Figure 11. Comparison of convergence curves of FHGWJA and its competitors for the 3-variable hydrologic model calibration problem.
Figure 12. Comparison of convergence curves of FHGWJA and its competitors for the 25-variable hydrologic model calibration problem.
Figure 13. (a) Finite element model of vehicle subject to side impact (taken from [108]); (b) schematic of vehicle parts to be optimized.
Figure 14. Comparison of convergence curves of FHGWJA and its competitors for the optimal crashworthiness problem.
Figure 15. Schematic of the planar 200-bar truss including applied loads and kinematic constraints.
Figure 16. Comparison of convergence curves of FHGWJA and its competitors for the 200-bar optimization problem.
Table 1. Optimization results for the 2D path planning problem solved with 14 design variables.

| Algorithm | Best (mm) | Average (mm) | Worst (mm) | ST Dev (mm) | Number of Analyses |
|---|---|---|---|---|---|
| FHGWJA (present) | 40.811 | 40.859 | 40.890 | 3.331 × 10−2 | 4581 ± 1127 |
| Standard GWO | 41.104 | 41.112 | 41.129 | 1.340 × 10−2 | 50,000 |
| Standard JAYA | 41.050 | 41.092 | 41.162 | 6.080 × 10−2 | 17,204 |
| Improved JAYA [81] | 41.082 | 41.104 | 41.119 | 1.850 × 10−2 | 6523 |
| Hybrid HS [81] | 40.983 | 41.040 | 41.105 | 6.133 × 10−2 | 4150 |
| Hybrid BBBC [81] | 40.986 | 41.078 | 41.101 | 6.110 × 10−2 | 3611 |
| HFSA [81] | 40.994 | 41.048 | 41.113 | 6.035 × 10−2 | 4127 |
| mAHS (der. from [94]) | 41.059 | 41.067 | 41.095 | 1.890 × 10−2 | 7237 |
| mBBBC–UBS (der. from [95]) | 41.081 | 41.099 | 41.112 | 1.136 × 10−2 | 5724 |
| MsinDE (der. from [96]) | 40.985 | 41.002 | 41.026 | 2.122 × 10−2 | 6291 |

The fastest optimization run of FHGWJA required only 3837 analyses.
Table 3. Optimization results of the concrete gravity dam design problem. The number of structural analyses is reported as average ± standard deviation, with the number of analyses of the best optimization run in parentheses.

| Algorithm | Best (m³) | Average (m³) | Worst (m³) | ST Dev (m³) | Number of Structural Analyses | Constraint Violation (%) |
|---|---|---|---|---|---|---|
| FHGWJA (present) | 7079.558 | 7079.558 | 7079.558 | 0 | 3834 ± 1597 (2207) | 1.441 × 10−3 |
| Standard GWO | 7117.441 | 7119.288 | 7125.568 | 3.4995 | 50,000 | 2.423 × 10−3 |
| Standard JAYA | 7121.134 | 7125.213 | 7129.822 | 5.3946 | 25,000 | 1.338 × 10−3 |
| Improved JAYA [81] | 7117.437 | 7119.141 | 7120.890 | 1.7269 | 22,537 ± 1996 (19,917) | 4.076 × 10−4 |
| Hybrid HS [81] | 7109.608 | 7112.340 | 7113.474 | 2.5882 | 18,487 ± 3226 (17,708) | 3.641 × 10−5 |
| Hybrid BBBC [81] | 7108.772 | 7111.175 | 7112.275 | 1.8606 | 18,074 ± 2692 (18,128) | 3.029 × 10−5 |
| HFSA [81] | 7113.456 | 7114.225 | 7114.533 | 1.5385 | 19,390 ± 1324 (18,232) | 1.343 × 10−3 |
| mAHS (derived from [94]) | 7109.419 | 7111.476 | 7113.292 | 2.0719 | 24,687 ± 2916 (22,172) | 3.333 × 10−3 |
| mBBBC–UBS (derived from [95]) | 7109.122 | 7110.306 | 7111.004 | 0.7912 | 18,376 ± 1737 (16,924) | 6.067 × 10−3 |
| MsinDE (derived from [96]) | 7113.123 | 7114.350 | 7115.267 | 0.8983 | 24,196 ± 2048 (22,809) | 4.145 × 10−3 |
| MCEO [97] | 7448.74 | N/A | N/A | 3.274 × 10−2 | 35,000 | 32 |
| FSO [98] | 7448.74 | N/A | N/A | N/A | N/A | 32 |
Table 5. Optimization results of the hydrologic model calibration problem solved with 3 unknown parameters. NFE is reported as average ± standard deviation, with the number of evaluations of the best optimization run in parentheses.

| Algorithm | K | h | m | Best | Average | Worst | ST Dev | NFE |
|---|---|---|---|---|---|---|---|---|
| FHGWJA (present) | 0.08620 | 0.2869 | 1.8681 | 36.760 | 36.760 | 36.760 | 0 | 2005 ± 386 (1637) |
| Standard GWO | 0.08621 | 0.2869 | 1.8681 | 36.761 | 36.762 | 36.764 | 1.258 × 10−3 | 50,000 |
| Standard JAYA | 0.08620 | 0.2869 | 1.8681 | 36.760 | 36.761 | 36.763 | 1.500 × 10−3 | 5438 ± 756 (4331) |
| Improved JAYA [81] | 0.08620 | 0.2869 | 1.8681 | 36.760 | 36.761 | 36.762 | 1.442 × 10−4 | 2412 ± 684 (1766) |
| Hybrid HS [81] | 0.08633 | 0.2869 | 1.8677 | 36.761 | 36.762 | 36.762 | 4.892 × 10−4 | 2547 ± 1035 (1331) |
| Hybrid BBBC [81] | 0.08618 | 0.2869 | 1.8682 | 36.761 | 36.761 | 36.762 | 1.769 × 10−5 | 2468 ± 1102 (1281) |
| HFSA [81] | 0.08632 | 0.2869 | 1.8678 | 36.761 | 36.762 | 36.762 | 3.891 × 10−4 | 2755 ± 1344 (1326) |
| mAHS (der. [94]) | 0.08620 | 0.2869 | 1.8681 | 36.760 | 36.761 | 36.762 | 1.543 × 10−4 | 2368 ± 967 (1670) |
| mBBBC–UBS (der. [95]) | 0.08620 | 0.2869 | 1.8681 | 36.760 | 36.761 | 36.762 | 2.044 × 10−4 | 2344 ± 835 (1711) |
| MsinDE (der. [96]) | 0.08620 | 0.2869 | 1.8681 | 36.760 | 36.762 | 36.763 | 2.119 × 10−3 | 2966 ± 1293 (2213) |
| HS [102] | 0.0870 | 0.2870 | 1.8661 | 36.77 | N/A | N/A | N/A | 20,000 |
| IICSA [103] | 0.0865 | 0.2870 | 1.8675 | 36.77 | N/A | N/A | N/A | N/A |
| COBSA [104] | 0.0864 | 0.2869 | 1.8678 | 36.76 | 36.77 | 36.77 | 0.01 | 100,000 |
| GA [104] | 0.0865 | 0.2869 | 1.8674 | 36.77 | 36.85 | 37.24 | 0.13 | N/A |
| DE [104] | 0.0863 | 0.2869 | 1.8680 | 36.77 | 37.09 | 38.04 | 0.36 | N/A |
| PSO [104] | 0.0867 | 0.2867 | 1.8668 | 36.77 | 36.84 | 37.25 | 0.14 | N/A |
| BFGS [105] | 0.0863 | 0.2869 | 1.8679 | 36.768 | N/A | N/A | N/A | N/A |
| NMS [106] | 0.0862 | 0.2869 | 1.8681 | 36.76 | N/A | N/A | N/A | 221 * |

* Number of optimization iterations.
Table 6. Optimization results of the hydrologic model calibration problem solved with 25 variables. NFE is reported as average ± standard deviation, with the number of evaluations of the best optimization run in parentheses.

| Algorithm | K | h | m | Best | Average | Worst | ST Dev | NFE |
|---|---|---|---|---|---|---|---|---|
| FHGWJA (present) * | 0.08611 | 0.2869 | 1.8681 | 36.761 | 36.761 | 36.761 | 2.919 × 10−12 | 4438 ± 544 (4122) |
| Standard GWO | 0.08576 | 0.2870 | 1.8694 | 36.818 | 36.870 | 36.934 | 1.395 × 10−2 | 50,000 |
| Standard JAYA | 0.08620 | 0.2868 | 1.8677 | 36.765 | 36.766 | 36.769 | 1.435 × 10−3 | 22,663 ± 8359 (13,820) |
| Improved JAYA [81] | 0.08591 | 0.2866 | 1.8689 | 36.806 | 36.806 | 36.807 | 1.078 × 10−7 | 7276 ± 2367 (5476) |
| Hybrid HS [81] | 0.08614 | 0.2869 | 1.8705 | 36.763 | 36.763 | 36.763 | 3.333 × 10−11 | 4744 ± 1954 (3277) |
| Hybrid BBBC [81] | 0.08567 | 0.2868 | 1.8718 | 36.763 | 36.763 | 36.763 | 8.416 × 10−11 | 4863 ± 1607 (3656) |
| HFSA [81] | 0.08599 | 0.2867 | 1.8670 | 36.762 | 36.762 | 36.762 | 1.032 × 10−10 | 5144 ± 1733 (3887) |
| mAHS (der. [94]) | 0.08624 | 0.2862 | 1.8678 | 36.774 | 36.774 | 36.775 | 1.263 × 10−8 | 5385 ± 760 (4610) |
| mBBBC–UBS (der. [95]) | 0.08621 | 0.2872 | 1.8693 | 36.786 | 36.786 | 36.787 | 1.149 × 10−8 | 5146 ± 797 (4488) |
| MsinDE (der. [96]) | 0.08604 | 0.2864 | 1.8676 | 36.786 | 36.786 | 36.787 | 1.456 × 10−8 | 5848 ± 1016 (5261) |

* The optimal channel storage values St found with FHGWJA are {27.723; 26.729; 35.245; 80.441; 147.329; 199.195; 244.112; 279.252; 296.016; 300.904; 291.677; 263.154; 229.724; 187.670; 146.693; 107.274; 75.785; 54.842; 41.014; 30.810; 25.732; 20.491} m³.
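For reference, the calibration objective minimized in Tables 5 and 6 can be sketched as follows. The sketch assumes the usual nonlinear Muskingum storage equation S = K[h·I + (1 − h)·O]^m with a sum-of-squared-deviations objective between observed and routed outflows [100,105]; the time-stepping details and names below are our assumptions rather than the exact routing procedure used in the paper.

```python
import numpy as np

def muskingum_ssq(params, inflow, outflow_obs, dt=1.0):
    """Sum of squared deviations between observed and routed outflows for
    the nonlinear Muskingum model S = K*[h*I + (1 - h)*O]**m.
    `inflow` and `outflow_obs` are the observed hydrograph ordinates."""
    K, h, m = params
    n = len(inflow)
    O = np.empty(n)
    O[0] = outflow_obs[0]  # routing starts from the observed initial outflow
    S = K * (h * inflow[0] + (1.0 - h) * O[0]) ** m  # initial channel storage
    for t in range(n - 1):
        S += (inflow[t] - O[t]) * dt  # continuity: dS/dt = I - O
        # invert the storage equation for the outflow at the next time step
        O[t + 1] = ((S / K) ** (1.0 / m) - h * inflow[t + 1]) / (1.0 - h)
    return float(np.sum((O - outflow_obs) ** 2))

# A calibration run would minimize muskingum_ssq over (K, h, m); no safeguard
# is included here against negative storages, which a real implementation
# should penalize as infeasible parameter sets.
```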
Table 7. Comparison of optimized designs obtained with FHGWJA and its competitors in the optimal crashworthiness problem.

| Algorithm | x1 (mm) | x2 (mm) | x3 (mm) | x4 (mm) | x5 (mm) | x6 (mm) | x7 (mm) | x8 | x9 | x10 (mm) | x11 (mm) | Structural Weight (kg) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FHGWJA (present) * | 0.50000 | 1.21204 | 0.50000 | 0.77908 | 0.50000 | 1.49004 | 0.50000 | 0.345 | 0.345 | −28.9781 | 0.00010 | 21.38340 |
| Standard GWO * | 0.50002 | 1.11343 | 0.50000 | 1.30794 | 0.50041 | 1.5 | 0.50011 | 0.345 | 0.192 | −20.0960 | −0.86260 | 22.84755 |
| Standard JAYA * | 0.50000 | 1.22573 | 0.50000 | 1.20711 | 0.50000 | 1.49194 | 0.50000 | 0.345 | 0.345 | 0.00000 | 0.00000 | 23.19115 |
| IGWO [69] | 0.50000 | 0.88641 | 0.50000 | 1.25781 | 0.64809 | 0.91372 | 0.50000 | 1.00000 | 0.52483 | 1.94790 | 15.3719 | 21.39473 |
| Chaotic GWO | 0.84252 | 0.50000 | 0.50000 | 1.36186 | 0.83852 | 0.86445 | 0.57679 | 0.94066 | 0.24622 | 4.55062 | 12.3084 | 21.46164 |
| CSO | 0.78544 | 0.56882 | 0.50000 | 1.34514 | 0.82107 | 0.86975 | 0.74586 | 0.89302 | 0.39281 | 0.89510 | 1.17997 | 22.00444 |
| TSA | 0.50315 | 0.89471 | 0.50000 | 1.40533 | 0.86396 | 0.84245 | 0.59580 | 0.86533 | 0.06579 | 0.12364 | 2.06338 | 22.70296 |
| ASGWO [70] | 0.50004 | 1.13454 | 0.50009 | 1.27905 | 0.50020 | 1.49996 | 0.50005 | 0.34496 | 0.33248 | −16.3332 | −2.14912 | 22.87188 |
| SMO [47] ** | 0.5000 | 1.11634 | 0.5000 | 1.30224 | 0.5000 | 1.50000 | 0.5000 | 0.345 | 0.345 | −19.566 | 0.000001 | 22.84298 |
| LIACOR ** | 0.5000 | 1.11593 | 0.5000 | 1.30293 | 0.5000 | 1.50000 | 0.5000 | 0.192 | 0.345 | −19.640 | −0.000003 | 22.84299 |
| KH ** | 0.5000 | 1.14747 | 0.5000 | 1.26118 | 0.5000 | 1.5000 | 0.5000 | 0.345 | 0.345 | −13.998 | −0.8984 | 22.88596 |
| Best ABC ** | 0.5000 | 1.30539 | 0.5000 | 1.10312 | 0.5000 | 0.50000 | 0.5000 | 0.345 | 0.345 | 14.2132 | 0.3306 | 22.88605 |
| HHO ** | 0.5000 | 1.15627 | 0.5000 | 1.27133 | 0.5000 | 1.47770 | 0.5000 | 0.345 | 0.192 | −14.592 | −2.4898 | 22.98537 |
| CLPSO ** | 0.5061 | 1.17379 | 0.5013 | 1.24706 | 0.5037 | 1.49560 | 0.5000 | 0.345 | 0.345 | −9.5985 | 3.3627 | 23.06244 |
| WOA ** | 0.5000 | 1.09276 | 0.5000 | 1.41233 | 0.5000 | 1.45497 | 0.5000 | 0.345 | 0.192 | −24.038 | −3.1789 | 23.12717 |

* Optimized weight: 21.3905 ± 0.007125 kg for FHGWJA; 22.9318 ± 0.1423 kg for standard GWO; 23.2067 ± 0.02431 kg for standard JAYA. Computational cost: 1309 ± 343 structural analyses for FHGWJA; 50,000 for standard GWO; 7560 ± 998 for standard JAYA; the best optimization run of FHGWJA was completed within 1367 structural analyses. For IGWO [69], the computational cost of 30,000 structural analyses was determined as the product between the population size NPOP = 30 and the limit number of iterations Nitermax = 1000; no statistical information on optimized weight and computational cost was given in [69]. For ASGWO [70], the computational cost of 15,000 structural analyses was determined as the product between NPOP = 30 and Nitermax = 500; no statistical information was given in [70]. ** Computational cost: 30,000 structural analyses, determined as the product between NPOP = 20 and Nitermax = 1500; no statistical information on optimized weight and computational cost was given in [47].
Table 8. Groups of elements defined in the 200-bar truss design problem.

| Group | Elements | Group | Elements |
|---|---|---|---|
| 1 | 1, 2, 3, 4 | 16 | 82, 83, 85, 86, 88, 89, 91, 92, 103, 104, 106, 107, 109, 110, 112, 113 |
| 2 | 5, 8, 11, 14, 17 | 17 | 115, 116, 117, 118 |
| 3 | 19, 20, 21, 22, 23, 24 | 18 | 119, 122, 125, 128, 131 |
| 4 | 18, 25, 56, 63, 94, 101, 132, 139, 170, 177 | 19 | 133, 134, 135, 136, 137, 138 |
| 5 | 26, 29, 32, 35, 38 | 20 | 140, 143, 146, 149, 152 |
| 6 | 6, 7, 9, 10, 12, 13, 15, 16, 27, 28, 30, 31, 33, 34, 36, 37 | 21 | 120, 121, 123, 124, 126, 127, 129, 130, 141, 142, 144, 145, 147, 148, 150, 151 |
| 7 | 39, 40, 41, 42 | 22 | 153, 154, 155, 156 |
| 8 | 43, 46, 49, 52, 55 | 23 | 157, 160, 163, 166, 169 |
| 9 | 57, 58, 59, 60, 61, 62 | 24 | 171, 172, 173, 174, 175, 176 |
| 10 | 64, 67, 70, 73, 76 | 25 | 178, 181, 184, 187, 190 |
| 11 | 44, 45, 47, 48, 50, 51, 53, 54, 65, 66, 68, 69, 71, 72, 74, 75 | 26 | 158, 159, 161, 162, 164, 165, 167, 168, 179, 180, 182, 183, 185, 186, 188, 189 |
| 12 | 77, 78, 79, 80 | 27 | 191, 192, 193, 194 |
| 13 | 81, 84, 87, 90, 93 | 28 | 195, 197, 198, 200 |
| 14 | 95, 96, 97, 98, 99, 100 | 29 | 196, 199 |
| 15 | 102, 105, 108, 111, 114 | | |
Table 9. Comparison of optimized designs obtained with FHGWJA and its competitors in the 200-bar truss problem. In the statistics rows, the Hybrid HS/BBBC/SA [81] column lists three values in the order Hybrid HS / Hybrid BBBC / HFSA; for FHGWJA, the number of structural analyses (NSA) of the best run is given in parentheses.

| Design Variable | FHGWJA (Present) | Standard GWO | Standard JAYA | Improved JAYA [81] | Hybrid HS/BBBC/SA [81] | ECM [110] | LSHADE-Epsin [111] | MCOA [112] | CNNT-PSO [113] | hGMO [114] | PO [120] |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.1484 | 0.1467 | 0.1473 | 0.1473 | 0.1484 | 0.1471 | 0.14838 | 0.1390 | 0.1482 | 0.1484 | 0.13911 |
| 2 | 0.9447 | 0.9415 | 0.9417 | 0.9404 | 0.9445 | 0.9399 | 0.94448 | 0.9355 | 0.9405 | 0.9408 | 0.96277 |
| 3 | 0.1000 | 0.1001 | 0.1002 | 0.1001 | 0.1000 | 0.1000 | 0.10000 | 0.1000 | 0.1000 | 0.1000 | 0.10996 |
| 4 | 0.1000 | 0.1006 | 0.1001 | 0.1001 | 0.1000 | 0.1000 | 0.10000 | 0.1000 | 0.1000 | 0.1000 | 0.10000 |
| 5 | 1.9434 | 1.9419 | 1.9423 | 1.9412 | 1.9445 | 1.9399 | 1.94448 | 1.9355 | 1.9408 | 1.9409 | 1.94300 |
| 6 | 0.2976 | 0.2965 | 0.2970 | 0.2968 | 0.2980 | 0.2965 | 0.29796 | 0.2909 | 0.2975 | 0.2975 | 0.29526 |
| 7 | 0.1000 | 0.1001 | 0.1001 | 0.1001 | 0.1000 | 0.1000 | 0.10000 | 0.1000 | 0.1000 | 0.1000 | 0.10006 |
| 8 | 3.1177 | 3.1147 | 3.1076 | 3.1067 | 3.1227 | 3.1049 | 3.12260 | 3.0816 | 3.1067 | 3.1097 | 3.09870 |
| 9 | 0.1000 | 0.1001 | 0.1001 | 0.1001 | 0.1000 | 0.1000 | 0.10000 | 0.1000 | 0.1000 | 0.1000 | 0.13761 |
| 10 | 4.1149 | 4.1416 | 4.1093 | 4.1081 | 4.1227 | 4.1049 | 4.12260 | 4.0816 | 4.1067 | 4.1097 | 4.09920 |
| 11 | 0.3989 | 0.4022 | 0.4034 | 0.4040 | 0.3990 | 0.4037 | 0.39897 | 0.3967 | 0.4057 | 0.4047 | 0.41926 |
| 12 | 0.1000 | 0.1871 | 0.1931 | 0.1931 | 0.1000 | 0.1906 | 0.10044 | 0.2959 | 0.1897 | 0.1722 | 0.15592 |
| 13 | 5.3783 | 5.4324 | 5.4406 | 5.4342 | 5.3934 | 5.4298 | 5.39336 | 5.3854 | 5.4343 | 5.4303 | 5.43440 |
| 14 | 0.1000 | 0.1001 | 0.1001 | 0.1001 | 0.1000 | 0.1006 | 0.10000 | 0.1000 | 0.1000 | 0.1179 | 0.10437 |
| 15 | 6.3731 | 6.4298 | 6.4360 | 6.4342 | 6.3934 | 6.4298 | 6.39336 | 6.3853 | 6.4340 | 6.4303 | 6.43550 |
| 16 | 0.5262 | 0.5728 | 0.5745 | 0.5753 | 0.5264 | 0.5739 | 0.52663 | 0.6332 | 0.5745 | 0.5745 | 0.56508 |
| 17 | 0.4521 | 0.1456 | 0.1352 | 0.1355 | 0.4353 | 0.1332 | 0.43485 | 0.1842 | 0.1366 | 0.1332 | 0.15753 |
| 18 | 7.9242 | 7.9712 | 7.9825 | 7.9802 | 7.5904 | 7.9744 | 7.95063 | 8.0396 | 7.9803 | 7.9762 | 7.96720 |
| 19 | 0.1000 | 0.1001 | 0.1002 | 0.1002 | 0.1000 | 0.1000 | 0.10000 | 0.1000 | 0.1000 | 0.1001 | 0.10012 |
| 20 | 8.9151 | 8.9715 | 8.9829 | 8.9804 | 8.5904 | 8.9744 | 8.95063 | 9.0395 | 8.9802 | 8.9762 | 8.96750 |
| 21 | 0.8691 | 0.7147 | 0.7092 | 0.7090 | 0.8592 | 0.7064 | 0.85901 | 0.7460 | 0.7109 | 0.7096 | 0.72223 |
| 22 | 0.1555 | 0.4572 | 0.4363 | 0.4372 | 0.1500 | 0.4339 | 0.14995 | 0.1306 | 0.4659 | 0.4441 | 0.48433 |
| 23 | 10.9621 | 10.8968 | 10.8940 | 10.8912 | 10.9977 | 10.8790 | 10.9977 | 10.9114 | 10.9110 | 10.8931 | 10.9130 |
| 24 | 0.1219 | 0.1001 | 0.1001 | 0.1002 | 0.1000 | 0.1000 | 0.100074 | 0.1000 | 0.1000 | 0.1000 | 0.10472 |
| 25 | 11.9512 | 11.8973 | 11.8948 | 11.8914 | 11.9977 | 11.8790 | 11.9977 | 11.9114 | 11.9112 | 11.8931 | 11.9140 |
| 26 | 0.9359 | 1.0598 | 1.0394 | 1.0491 | 0.9125 | 1.0453 | 0.91252 | 0.8627 | 1.0712 | 1.0548 | 1.08420 |
| 27 | 6.5048 | 6.5394 | 6.6153 | 6.6106 | 6.6620 | 6.6300 | 6.66179 | 6.9169 | 6.5030 | 6.5761 | 6.50930 |
| 28 | 10.8708 | 10.8368 | 10.8721 | 10.7791 | 10.8061 | 10.7827 | 10.8061 | 10.9674 | 10.7210 | 10.7574 | 10.7000 |
| 29 | 13.8713 | 13.9062 | 13.8872 | 13.8783 | 13.8236 | 13.8691 | 13.8237 | 13.6742 | 13.9310 | 13.8942 | 13.9520 |
| Best (kg) | 11,541.380 | 11,558.650 | 11,553.355 | 11,550.054 | 11,542.409 / 11,542.409 / 11,542.409 | 11,543.521 | 11,543.980 | 11,544.007 | 11,545.330 | 11,545.568 | 11,557.384 |
| Average (kg) | 11,541.716 | 11,572.101 | 11,564.050 | 11,556.377 | 11,542.410 / 11,542.410 / 11,542.411 | 11,580.980 | 11,545.681 | 11,576.616 | 11,548.048 | 11,609.697 | 11,618.419 |
| Worst (kg) | 11,542.390 | 11,590.931 | 11,579.700 | 11,569.059 | 11,542.411 / 11,542.410 / 11,542.415 | N/A | 11,548.448 | 11,592.701 | 11,551.227 | N/A | N/A |
| ST Dev (kg) | 0.4157 | 37.109 | 25.403 | 10.941 | 0.00141 / 0.00010 / 0.00622 | 19.125 | 0.9888 | 21.600 | 1.431 | 39.694 | 170.601 |
| NSA | 3356 ± 431 (3464) | 50,000 | 40,941 | 31,580 | 2736 ± 283 * / 1676 ± 137 * / 5806 ± 304 * | 96,600 | 290,000 | 27,720 | 150,000 | 20,000 | 27,984 |

* Best optimization run completed by Hybrid HS within 2912 structural analyses, by Hybrid BBBC within 1669 analyses, and by HFSA within 5600 analyses.
Table 2. Optimization results for the 2D path planning problem solved with 20 design variables.

| Algorithm | Best (mm) | Average (mm) | Worst (mm) | ST Dev (mm) | Number of Analyses |
|---|---|---|---|---|---|
| FHGWJA (present) | 40.898 | 40.901 | 40.924 | 1.161 × 10−2 | 5153 ± 248 |
| Standard GWO | 41.134 | 41.146 | 41.165 | 1.276 × 10−2 | 50,000 |
| Standard JAYA | 41.112 | 41.160 | 41.187 | 3.102 × 10−2 | 11,778 |
| Improved JAYA [81] | 41.141 | 41.184 | 41.209 | 2.808 × 10−2 | 11,393 |
| Hybrid HS [81] | 40.997 | 41.053 | 41.116 | 4.861 × 10−2 | 7408 |
| Hybrid BBBC [81] | 41.034 | 41.091 | 41.108 | 3.165 × 10−2 | 7542 |
| HFSA [81] | 41.004 | 41.060 | 41.127 | 5.028 × 10−3 | 8245 |
| mAHS (der. from [94]) | 41.075 | 41.092 | 41.116 | 1.682 × 10−2 | 8541 |
| mBBBC–UBS (der. from [95]) | 41.091 | 41.123 | 41.150 | 2.412 × 10−2 | 7492 |
| MsinDE (der. from [96]) | 41.030 | 41.077 | 41.092 | 2.641 × 10−2 | 8762 |

The fastest optimization run of FHGWJA required only 5065 function evaluations.
Table 4. Comparison of optimized designs obtained with FHGWJA and its competitors in the concrete gravity dam problem.

| Variables (m) | FHGWJA | Hybrid HS [81] | Hybrid BBBC [81] | HFSA [81] | mAHS (der. [94]) | mBBBC–UBS (der. [95]) | MsinDE (der. [96]) | MCEO/FSO [97,98] | Existing Dam |
|---|---|---|---|---|---|---|---|---|---|
| X1 | 5.0000 | 5.0000 | 5.0006 | 5.0004 | 5.0394 | 5.0175 | 5.0256 | 5.1678 | 4 |
| X2 | 50.000 | 49.670 | 49.554 | 49.924 | 49.211 | 49.333 | 49.497 | 25.1978 | 42 |
| X3 | 1.0000 | 4.0132 | 4.0043 | 4.0001 | 4.1358 | 4.0846 | 4.0866 | 4.9556 | 4.2 |
| X4 | 69.784 | 70.084 | 70.309 | 68.244 | 70.006 | 70.053 | 69.483 | 55.7811 | 50 |
| X5 | 24.835 | 24.499 | 24.539 | 23.955 | 23.973 | 24.175 | 24.066 | 16.0874 | 12.5 |
| X6 | 30.000 | 30.239 | 30.131 | 31.673 | 30.696 | 30.924 | 31.090 | 21.0009 | 58 |
| X7 | 40.000 | 39.997 | 40.025 | 39.984 | 39.974 | 39.968 | 39.919 | 29.9036 | 23.2 |
| X8 | 99.770 | 100.39 | 100.49 | 99.241 | 100.51 | 100.75 | 100.26 | 120.169 | 140 |
| X9 | 80.000 | 80.261 | 80.164 | 81.874 | 80.566 | 81.348 | 80.947 | 85.382 | 105 |
