Article

AOBLMOA: A Hybrid Biomimetic Optimization Algorithm for Numerical Optimization and Engineering Design Problems

1 School of Economics and Management, China University of Petroleum (East China), Qingdao 266580, China
2 Dareway Software Co., Ltd., Jinan 250200, China
* Author to whom correspondence should be addressed.
Biomimetics 2023, 8(4), 381; https://doi.org/10.3390/biomimetics8040381
Submission received: 8 July 2023 / Revised: 10 August 2023 / Accepted: 15 August 2023 / Published: 21 August 2023
(This article belongs to the Special Issue Nature-Inspired Computer Algorithms: 2nd Edition)

Abstract: The Mayfly Optimization Algorithm (MOA), a recent biomimetic metaheuristic with a strong algorithmic framework and optimization methods, performs remarkably well on optimization problems. However, it still suffers from slow convergence and a tendency to stagnate in local optima. This paper proposes a metaheuristic algorithm for continuous and constrained global optimization problems, called AOBLMOA, which combines the MOA, the Aquila Optimizer (AO), and the opposition-based learning (OBL) strategy to overcome these shortcomings. The proposed algorithm first fuses the contour flight with short glide attack and the walk and grab prey methods of the AO into the position update of the male mayfly population in the MOA. It then incorporates the AO's high soar with vertical stoop and low flight with slow descent attack methods into the position update of the female mayfly population. Finally, it replaces the gene mutation behavior of the offspring mayfly population in the MOA with the OBL strategy. To verify the optimization ability of the new algorithm, we conduct three sets of experiments. In the first, we apply AOBLMOA to 19 benchmark functions to test whether it is the best among several combined strategies. In the second, we test AOBLMOA on 30 CEC2017 numerical optimization problems and compare it with state-of-the-art metaheuristic algorithms. In the third, 10 CEC2020 real-world constrained optimization problems are used to demonstrate the applicability of AOBLMOA to engineering design problems. The experimental results show that AOBLMOA is effective, competitive, and feasible for both numerical optimization and engineering design problems.

1. Introduction

There are a large number of optimization problems in the fields of mathematics and engineering, and these problems must be solved in a short time under highly complex constraints. Studies have shown that traditional exact and gradient-based methods fall into local optima when solving high-dimensional, multimodal, nondifferentiable, and discontinuous problems, resulting in low solution efficiency [1]. Compared with traditional methods, metaheuristic algorithms require fewer parameters and no gradient information, so they usually perform very well on such problems [2].
The Mayfly Optimization Algorithm (MOA) [3], proposed by Konstantinos Zervoudakis and Stelios Tsafarakis in 2020, is a metaheuristic algorithm inspired by the mating behavior of mayfly populations. Since it combines the main advantages of the particle swarm algorithm, the genetic algorithm, and the firefly algorithm, many researchers have improved it and applied it to various scenarios. For example, Zhao et al. [4] presented a chaos-based mayfly algorithm with opposition-based learning and Lévy flight for numerical and engineering design problems. Zhou et al. [5] used orthogonal learning and chaotic strategies to improve the diversity of the MOA. Li et al. [6] proposed an improved mayfly algorithm (IMA) with chaotic initialization, a gravity coefficient, and a mutation strategy and applied it to the dynamic economic environment scheduling problem. Zhang et al. [7] combined the Sparrow Search Algorithm (SSA) with the MOA and applied it to the RFID network planning problem. Muhammad et al. [8] combined the Marine Predator Algorithm (MPA) with the MOA and applied it to maximum power tracking in photovoltaic systems.
Based on the above, it can be found that using some strategies to optimize the MOA or combining other metaheuristic algorithms with the MOA can effectively improve the performance of the algorithm. However, although the above variants of the MOA improve the optimization ability of the MOA to varying degrees, they may present challenges when solving some non-convex optimization problems. Therefore, it is still necessary to find additional algorithms or methods to combine with the MOA to optimize it.
The Aquila Optimizer (AO), proposed by Abualigah et al. [9] in 2021, is a novel swarm intelligence algorithm that simulates the predation process of the Aquila. Owing to the variety of its search methods, the AO has received extensive attention from researchers. Mahajan et al. [10] combined the Arithmetic Optimization Algorithm (AOA) with the AO and applied it to global optimization tasks. Ma et al. [11] combined Gray Wolf Optimization (GWO) with the AO to construct a new hybrid algorithm. Ekinci et al. [12] proposed an enhanced AO as an effective control design method for automatic voltage regulators. Liu et al. [13] combined the Whale Optimization Algorithm (WOA) with the AO and applied it to the Cox proportional hazards model. These studies show that the AO, as a metaheuristic algorithm combining multiple optimization strategies, is mostly used in combination with other algorithms to improve their performance.
The opposition-based learning (OBL) strategy was proposed by Tizhoosh et al. [14]. Since the solution after OBL has a higher probability of being closer to the global optimal solution than the solution without OBL [15], it is often used by researchers to optimize the metaheuristic algorithm. For example, Zeng et al. [16] optimized the wild horse optimizer algorithm with OBL and applied the algorithm to the HWSN coverage problem. Muhammad et al. [17] combined OBL with teaching–learning-based optimization. Jia et al. [18] hybridized OBL with the Reptile Search Algorithm. Sarada et al. [19] mixed OBL with the Golden Jackal Optimization algorithm to optimize it. In these studies, OBL turned out to be effective at optimizing algorithms.
The no free lunch (NFL) theorem [20] states that no single optimization method can solve all optimization problems, and the MOA has been shown to degrade the quality of its final solution owing to premature convergence [21]. Therefore, to enhance the global optimization ability of the MOA, we propose a variant that combines the AO and the opposition-based learning (OBL) strategy, namely, AOBLMOA. The proposed algorithm integrates the AO's search strategies into the position update processes of the male and female mayflies in the MOA, without increasing the time complexity, and applies the OBL strategy to the offspring mayfly population to further optimize it. We also apply the hybrid algorithm to benchmark functions, CEC2017 numerical optimization problems, and CEC2020 real-world constrained optimization problems to show its feasibility on optimization problems.
The organizational structure of this paper is as follows: Section 2 briefly describes the basic MOA, the AO, OBL, and other popular metaheuristic algorithms; Section 3 introduces the specific content of the proposed hybrid algorithm in detail; Section 4 analyzes the performance of AOBLMOA on benchmark functions, CEC2017 numerical optimization problems, and CEC2020 real-world constrained optimization problems; Section 5 summarizes the full text and looks forward to future work. The MATLAB codes of AOBLMOA are available at https://github.com/SheldonYnuup/AOBLMOA (accessed on 1 August 2023).

2. Background Overview

2.1. Popular Metaheuristic Algorithms

Metaheuristic algorithms (MHs) can generally be divided into four categories: Swarm Intelligence Algorithms (SIAs), Evolutionary Algorithms (EAs), Physics-based Algorithms (PhAs), and Human-based Algorithms (HAs).
SIAs mainly simulate various group behaviors of diverse organisms, including foraging, attacking, mating, and other behaviors. Such algorithms include Particle Swarm Optimization (PSO) [22], the Salp Search Algorithm (SSA) [23], Gray Wolf Optimization (GWO) [24], the Monarch Butterfly Optimizer (MBO) [25], the Bald Eagle Search Optimization Algorithm (BES) [26], the Marine Predator Algorithm (MPA) [27], the Whale Optimization Algorithm (WOA) [28], the Moth Search Algorithm (MS) [29], etc.
EAs mainly simulate the evolutionary process in nature, and the most classic algorithm is the Genetic Algorithm (GA) [30]. In recent years, with the in-depth study of EAs, researchers have successively proposed Differential Evolution (DE) [31], Biogeography-Based Optimization (BBO) [32], Cuckoo Search (CS) [33], and so on.
PhAs are mainly inspired by the laws of physics, including Multi-Verse Optimization (MVO) [34], Equilibrium Optimization (EO) [35], the Gravity Search Algorithm (GSA) [36], the Lightning Search Algorithm (LSA) [37], the Archimedes Optimization Algorithm (AOA) [38], and so on.
HAs simulate human social behavior. Existing HAs include teaching–learning-based optimization (TLBO) [39], the Human Felicity Algorithm (HFA) [40], Political Optimization (PO) [41], etc. Table 1 provides an overview of the mentioned metaheuristic algorithms.

2.2. Mayfly Optimization Algorithm (MOA)

The working principle of the mayfly algorithm is as follows: at the initial stage of the algorithm, male and female mayfly populations are randomly generated in the problem space. Each mayfly in the two populations represents a candidate solution, expressed by the d-dimensional variable x = (x_1, …, x_d), and the performance of each mayfly is evaluated by the objective function f(x). The velocity of each mayfly is represented by v = (v_1, …, v_d), which describes the change in its position at each iteration. The movement direction of each mayfly is affected by both its individual optimal position pbest and the global optimal position gbest; that is, each mayfly adjusts its flight path toward the individual optimal position pbest and the global optimal position gbest found before the current iteration.

2.2.1. Movement of Male Mayflies

Male mayflies tend to gather at the center of the group, which means that each male adjusts its position based on its own experience and that of its neighbors. Let x_ij^t denote the position of individual i in dimension j at the t-th iteration, and v_ij^t the corresponding velocity. The position update of the male mayfly is shown in Equation (1):
x_ij^{t+1} = x_ij^t + v_ij^{t+1},    (1)
When the fitness of a male mayfly is worse than the global optimal fitness, the male approaches its individual optimal position and the global optimal position. Conversely, if its fitness is better than the global optimal fitness, the male performs a courtship dance above the water surface to attract mates. It is assumed that, although male mayflies keep moving, they cannot gain speed quickly. Taking the minimization problem as an example, the velocity update of the male mayfly is shown in Equation (2):
v_ij^{t+1} = g·v_ij^t + a_1·e^{−β·r_p²}·(pbest_ij − x_ij^t) + a_2·e^{−β·r_g²}·(gbest_j − x_ij^t),  if f(gbest_j) > f(x_i)
v_ij^{t+1} = v_ij^t + d·r,  if f(gbest_j) ≤ f(x_i)    (2)
where f: R^n → R is the objective function used to evaluate the quality of a solution; v_ij^t is the velocity of individual i in dimension j at the t-th iteration; x_ij^t is the corresponding position; a_1 and a_2 are the individual-best and global-best attraction coefficients, respectively; β is the visibility coefficient, which limits the visible range of an individual; d is the mating dance coefficient, used to attract the opposite sex; r is a random coefficient in the range [−1, 1]; and g is the gravity coefficient, which retains part of the velocity from the previous iteration and is expressed as
g = g_max − ((g_max − g_min) / iter_max) × iter,    (3)
where g_max and g_min are the maximum and minimum values of the gravity coefficient, respectively; iter_max is the maximum number of iterations of the algorithm, and iter is the current iteration number.
r_p and r_g in Equation (2) denote the Cartesian distances from x_ij^t to pbest_ij and to gbest_j, respectively. The Cartesian distance is computed as in Equation (4):
‖x_i − X_i‖ = √( Σ_{j=1}^{n} (x_ij − X_ij)² ),    (4)
where x_ij is the position of male mayfly i in dimension j; X_ij stands for pbest_ij or gbest_j; and pbest_ij is the individual optimal position of mayfly i in dimension j. Taking the minimization problem as an example, the update of the individual optimal position at iteration t + 1 is shown in Equation (5):
pbest_ij = x_ij^{t+1},  if f(x_ij^{t+1}) < f(pbest_ij)
pbest_ij = pbest_ij,  if f(x_ij^{t+1}) ≥ f(pbest_ij)    (5)
where gbest_j is the optimal position of the group in dimension j; its update is shown in Equation (6):
gbest ∈ {pbest_1, pbest_2, …, pbest_N} such that f(gbest) = min{ f(pbest_1), f(pbest_2), …, f(pbest_N) },    (6)
where N represents the number of individuals in the male mayfly population.
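For illustration, the male velocity update of Equation (2), together with the squared Cartesian distances of Equation (4), can be sketched in Python as follows. This is a minimal sketch, not the paper's MATLAB implementation: the coefficient values (g, a_1, a_2, β, d) are illustrative defaults, and the branch conditions follow Equation (2) as written.

```python
import numpy as np

def male_velocity_update(x, v, pbest, gbest, f_x, f_gbest,
                         g=0.8, a1=1.0, a2=1.5, beta=2.0, d=0.1, rng=None):
    """One male-mayfly velocity step, following Eq. (2) as written.

    x, v, pbest, gbest are 1-D numpy vectors; the coefficient values
    are illustrative defaults, not the paper's tuned settings.
    """
    rng = rng or np.random.default_rng()
    if f_gbest > f_x:
        # attraction toward pbest and gbest (first branch of Eq. (2))
        rp2 = np.sum((x - pbest) ** 2)   # squared Cartesian distance, Eq. (4)
        rg2 = np.sum((x - gbest) ** 2)
        return (g * v
                + a1 * np.exp(-beta * rp2) * (pbest - x)
                + a2 * np.exp(-beta * rg2) * (gbest - x))
    # mating-dance step (second branch): random r in [-1, 1]
    return v + d * rng.uniform(-1.0, 1.0)
```

The new position then follows from Equation (1) as x + male_velocity_update(...).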

2.2.2. Movement of Female Mayflies

The most striking difference between female and male mayflies is that females do not congregate in large groups; instead, they fly to males to mate. Let y_ij^t denote the position of female individual i in dimension j at iteration t; the position update of the female mayfly at iteration t + 1 is shown in Equation (7):
y_ij^{t+1} = y_ij^t + v_ij^{t+1},    (7)
Although the process of the mutual attraction of mayflies is stochastic, this stochastic process can also be modeled as a deterministic process, that is, according to the mayflies’ fitness. For example, the female mayfly with the best fitness should be attracted by the male mayfly with the best fitness, and the female mayfly with the second-best fitness should be attracted by the male mayfly with the second-best fitness. By analogy, when the fitness of the female mayfly is inferior to that of the corresponding male mayfly, the female mayfly is attracted by the corresponding male mayfly and approaches it; otherwise, the female mayfly randomly moves. Therefore, taking the minimization problem as an example, the velocity update formula of the female mayfly is shown in Equation (8):
v_ij^{t+1} = g·v_ij^t + a_3·e^{−β·r_mf²}·(x_ij^t − y_ij^t),  if f(y_i) > f(x_i)
v_ij^{t+1} = g·v_ij^t + fl·r,  if f(y_i) ≤ f(x_i)    (8)
where v_ij^t is the velocity of individual i in dimension j at the t-th iteration; a_3 is the attraction coefficient between the male and female mayflies; β is the visibility coefficient; r_mf is the Cartesian distance between the female mayfly and the male mayfly, calculated by Equation (4); fl is the random walk coefficient; and r is a random coefficient in the range [−1, 1].

2.2.3. Mating of Male and Female Mayflies

The mating process of the two sexes of mayflies is represented by a crossover operator, and a survival-of-the-fittest mechanism selects a male parent from the male mayfly population and a female parent from the female mayfly population. That is, the male with the best fitness mates with the female with the best fitness, the male with the second-best fitness mates with the female with the second-best fitness, and so on. Before the end of each iteration, the male, female, and offspring mayfly populations are merged into two populations, and the individuals with the poorest fitness in each merged population are eliminated. The individuals with better fitness enter the next iteration as the new male and female mayfly populations, respectively. The offspring produced after mating are expressed in Equations (9) and (10):
offspring_1 = L × male + (1 − L) × female,    (9)
offspring_2 = L × female + (1 − L) × male,    (10)
where male and female represent the male and female parents, respectively; L is a random value within a specific range; and the initial velocity of the offspring is set to zero.
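The crossover of Equations (9) and (10) is a convex combination of the two parents. A minimal Python sketch follows; drawing L uniformly from (0, 1) is one plausible choice, since the text only states that L is "a random value that conforms to a specific range".

```python
import numpy as np

def mate(male, female, rng=None):
    """Crossover of one male/female pair, Eqs. (9)-(10).

    L is drawn uniformly from (0, 1) here as an illustrative choice;
    male and female are 1-D position vectors of equal length.
    """
    rng = rng or np.random.default_rng()
    L = rng.uniform(0.0, 1.0)
    off1 = L * male + (1 - L) * female   # Eq. (9)
    off2 = L * female + (1 - L) * male   # Eq. (10)
    return off1, off2
```

Note that offspring_1 + offspring_2 = male + female for any L, so the crossover preserves the pair's centroid, and each offspring lies between the two parents componentwise.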

2.2.4. Mutation of Offspring Mayflies

To solve the problem that the algorithm may fall into a local optimum, the mutation operation is performed on the offspring mayflies, and, by adding random numbers that obey the normal distribution, the offspring mayflies can explore new areas that may not have been explored in the problem space. Among them, the number of mutant individuals is approximately 0.05 times that of the male mayflies. The expression of the offspring gene mutation is shown in Equation (11):
offspring_n = offspring_n + σ·N_n(0, 1),    (11)
where σ is the standard deviation of the normal distribution, and N_n(0, 1) is a standard normal distribution with mean 0 and variance 1.
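A sketch of the mutation step of Equation (11): roughly 0.05 N randomly chosen offspring receive Gaussian noise. The mutation rate of 0.05 follows the text; σ here is an illustrative value.

```python
import numpy as np

def mutate_offspring(offspring, sigma=0.1, rate=0.05, rng=None):
    """Gaussian mutation of the offspring population, Eq. (11).

    offspring is an (N, dim) array; about rate*N individuals receive
    sigma * N(0, 1) noise. sigma is an illustrative choice.
    """
    rng = rng or np.random.default_rng()
    out = offspring.copy()
    n_mut = max(1, round(rate * len(offspring)))     # ~0.05 N mutants
    idx = rng.choice(len(offspring), size=n_mut, replace=False)
    out[idx] += sigma * rng.standard_normal((n_mut, offspring.shape[1]))
    return out
```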

2.3. Aquila Optimizer (AO)

The AO simulates the Aquila's hunting behavior and optimizes the problem through the various methods it uses during hunting: the high soar with vertical stoop, contour flight with short glide attack, low flight with slow descent attack, and walk and grab prey. The AO decides between exploration and exploitation based on the iteration count: when the current iteration number is within the first two-thirds of the total iterations, an exploration method is used; otherwise, an exploitation method is used. The specific mathematical models of the AO's methods are as follows:

2.3.1. High Soar with Vertical Stoop (Expanded Exploration)

In this method, Aquila first identifies the location of the prey and selects the best prey area by the high soar with vertical stoop method. In this process, Aquila swoops from various locations within the problem space to ensure a wide area for the search space. The method can be represented as
x_ij^{t+1} = gbest_j × (1 − t/T_max) + (x_M^t − gbest_j × rand),    (12)
where (1 − t/T_max) controls the expanded search process through the iteration number; t is the current iteration; T_max is the maximum number of iterations; rand is a random value in the interval (0, 1); and x_M^t is the mean position of the population at the t-th iteration, expressed as
x_M^t = (1/N) Σ_{i=1}^{N} x_ij^t,  j = 1, 2, …, dim,    (13)
where N is the population size, and dim is the dimension of the problem to be solved.

2.3.2. Contour Flight with Short Glide Attack (Narrowed Exploration)

The contour flight with short glide attack method mainly simulates the behavior of Aquila hovering above the target prey, preparing to land, and attacking after finding the target prey at high altitude. In this method, the algorithm more precisely explores the area where the target prey is located and prepares for the next attack. The simulation method can be mathematically expressed as
x_ij^{t+1} = gbest_j × Levy(D) + x_R^t + (f_1 − f_2) × rand,    (14)
where x_R^t is a random solution drawn from the population at the t-th iteration, D is the problem dimension, and Levy(D) is the Lévy flight distribution function, calculated as
Levy(D) = s × (u × σ) / |v|^{1/β},    (15)
where s is a constant set to 0.01, u and v are random numbers in the (0, 1) interval, β is a constant set to 1.5, and σ is calculated as
σ = ( Γ(1 + β) × sin(πβ/2) ) / ( Γ((1 + β)/2) × β × 2^{(β−1)/2} ),    (16)
f_1 and f_2 in Equation (14) represent the spiral search used during the search process, and they are calculated as
f_1 = r × cos(θ),    (17)
f_2 = r × sin(θ),    (18)
where
r = r_1 + U × D_1,    (19)
θ = −ω × D_1 + θ_1,    (20)
θ_1 = 3π/2,    (21)
where r_1 takes a value in the interval [1, 20] according to the number of search cycles, U is a constant set to 0.00565, D_1 is an integer determined by the dimension of the problem search space, and ω is a constant set to 0.005.
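The Lévy flight distribution above can be sketched as follows. Note that u and v are drawn uniformly from (0, 1) as the text states; this is worth flagging, since some AO implementations instead draw u from a zero-mean normal distribution.

```python
import math
import numpy as np

def levy(dim, beta=1.5, s=0.01, rng=None):
    """Levy flight step used by the AO's narrowed exploration.

    u and v are drawn in (0, 1) following the text; sigma is the
    scale factor built from the gamma function.
    """
    rng = rng or np.random.default_rng()
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)))
    u = rng.random(dim)
    v = rng.random(dim)
    return s * (u * sigma) / np.abs(v) ** (1 / beta)
```

Because |v|^{1/β} can be very small, individual components of the step are occasionally large; this heavy-tailed behavior is what lets the search make long jumps out of local regions.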

2.3.3. Low Flight with Slow Descent Attack (Expanded Exploitation)

When the location of the prey is locked, Aquila needs to be ready to land and attack the prey. Therefore, Aquila tests the reaction of the prey by making an initial dive. In this method, the algorithm makes Aquila approach the prey and attack by exploiting the area where the target is located, which can be mathematically expressed as
x_ij^{t+1} = (gbest_j − x_M^t) × α − rand + ((UB − LB) × rand + LB) × δ,    (22)
where α and δ are exploitation adjustment parameters taking values in the interval (0, 1), and UB and LB denote the upper and lower bounds of the given problem, respectively.

2.3.4. Walk and Grab Prey (Narrowed Exploitation)

When Aquila is close to the prey, it lands and attacks the prey according to the movement of the prey. This is Aquila’s final attack on the prey’s final location, and the method can be expressed as
x_ij^{t+1} = QF × gbest_j − (G_1 × x_ij^t × rand) − G_2 × Levy(D) + rand × G_1,    (23)
where QF is a quality function used to balance the search strategy, given by Equation (24); G_1 represents the various motions the Aquila makes while tracking escaping prey, given by Equation (25); and G_2, which decreases from 2 to 0 over the iterations, represents the flight slope of the Aquila as it tracks escaping prey, given by Equation (26).
QF(t) = t^{(2×rand−1)/(1−T_max)²},    (24)
G_1 = 2 × rand − 1,    (25)
G_2 = 2 × (1 − t/T_max),    (26)
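Putting Equations (23)–(26) together, one step of the walk and grab prey move might look like the following sketch; levy_step stands for a precomputed Lévy flight vector, and each occurrence of rand is drawn independently, which is an assumption about how the equations are instantiated.

```python
import numpy as np

def walk_and_grab(x, gbest, t, T_max, levy_step, rng=None):
    """Narrowed-exploitation move of the AO, Eq. (23).

    x and gbest are 1-D vectors; levy_step is a precomputed Levy(D)
    vector; t must be >= 1 so the quality function is well defined.
    """
    rng = rng or np.random.default_rng()
    qf = t ** ((2 * rng.random() - 1) / (1 - T_max) ** 2)  # quality function, Eq. (24)
    g1 = 2 * rng.random() - 1                              # tracking motions, Eq. (25)
    g2 = 2 * (1 - t / T_max)                               # flight slope, 2 -> 0, Eq. (26)
    return qf * gbest - (g1 * x * rng.random()) - g2 * levy_step + rng.random() * g1
```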

2.4. Opposition-Based Learning (OBL)

The opposition-based learning strategy mainly compares the fitness of the current solution and its opposite solution and selects the better of the two to enter the next stage. As OBL is widely applied to the optimization of metaheuristic algorithms, researchers successively proposed variants of OBL, such as quasi OBL [42], binary student OBL [43], and specular reflection learning [44]. Basic OBL is defined as follows:
Definition 1. 
Opposite numbers. When x is a real number and x ∈ [lb, ub], the opposite number x̃ is defined as
x̃ = lb + ub − x,    (27)
Definition 2. 
Opposite points. When P(x_1, x_2, …, x_n) is a point in n-dimensional coordinates, where x_1, x_2, …, x_n are real numbers and x_i ∈ [lb_i, ub_i], the coordinates of the opposite point P̃ are given by
x̃_i = lb_i + ub_i − x_i,    (28)
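The opposite point of Definition 2 is a one-line reflection of each coordinate about the midpoint of its bounds, sketched here for illustration:

```python
import numpy as np

def opposite_point(x, lb, ub):
    """Opposite point of x in [lb, ub]: x~_i = lb_i + ub_i - x_i.

    Works for scalars or numpy arrays of matching shape.
    """
    return lb + ub - x
```

The map is an involution: applying it twice returns the original point, which is why OBL evaluates exactly one extra candidate per solution.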

3. Proposed Hybrid Algorithm (AOBLMOA)

The proposed hybrid algorithm (AOBLMOA) is based on the mayfly algorithm. It preserves the position update used when the fitness of a male mayfly is better than the global optimal fitness, the position update used when the fitness of a female mayfly is better than that of the corresponding male, and the mating method by which males and females produce offspring. The mating dance of the male mayflies and the random walk of the female mayflies are replaced with the corresponding methods of the AO, and the gene mutation behavior of the offspring mayflies is replaced with opposition-based learning behavior.

3.1. Movement of Male Mayflies

Since female mayflies move toward male mayflies, and the birth of their offspring is also directly influenced by male mayflies, it is crucial to increase the search efficiency of male mayflies. In the process of moving the position of the male mayfly, if the fitness of the male mayfly is equal to or worse than the global optimal fitness, it means that its optimization effect in the last iteration process is not good, and it has not reached a better position than the global optimal individual mayfly. In this situation, the algorithm should make the male mayfly approach the global optimal mayfly and look for a better position in the process. The basic MOA uses the mating dance behavior to move the male mayfly at this stage. However, in real experiments, the optimization of mating dance behavior is not particularly effective.
In the AO, when t ≤ (2/3)T_max, the algorithm is in the exploration phase, and when t > (2/3)T_max, it is in the exploitation phase. The contour flight with short glide attack and the walk and grab prey methods are the AO's narrowed search methods in the exploration and exploitation phases, respectively; both make the Aquila rush more directly at its prey. This matches the situation in the MOA where a male mayfly is equal to or worse than the global optimal solution. Therefore, we introduce the contour flight with short glide attack method and the walk and grab prey method of the AO to replace the mating dance of the original algorithm and drive the male mayfly toward the global optimal mayfly. The improved mathematical model of the movement of the male mayfly population is as follows:
x_ij^{t+1} = x_ij^t + g·v_ij^t + a_1·e^{−β·r_p²}·(pbest_ij − x_ij^t) + a_2·e^{−β·r_g²}·(gbest_j − x_ij^t),  if f(gbest_j) > f(x_i)
x_ij^{t+1} = gbest_j × Levy(D) + y_R^t + (f_1 − f_2) × rand,  if f(gbest_j) ≤ f(x_i) and t ≤ (2/3)T_max
x_ij^{t+1} = QF × gbest_j − (G_1 × y_ij^t × rand) − G_2 × Levy(D) + rand × G_1,  if f(gbest_j) ≤ f(x_i) and t > (2/3)T_max    (29)
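The three-way branch of the improved male movement above can be sketched as a dispatch. The two AO moves, Equations (14) and (23), are passed in as callables so the sketch focuses on the branching logic; the coefficient values are illustrative, and the branch conditions follow the equation as written.

```python
import numpy as np

def move_male(x, v, pbest, gbest, f_x, f_gbest, t, T_max,
              contour_flight, walk_and_grab,
              g=0.8, a1=1.0, a2=1.5, beta=2.0):
    """Branch structure of the hybrid male move.

    contour_flight and walk_and_grab stand for the two AO moves
    (Eqs. (14) and (23)), supplied as callables taking (x, gbest).
    """
    if f_gbest > f_x:
        # classic MOA attraction move, Eqs. (1)-(2)
        rp2 = np.sum((x - pbest) ** 2)
        rg2 = np.sum((x - gbest) ** 2)
        v_new = (g * v + a1 * np.exp(-beta * rp2) * (pbest - x)
                 + a2 * np.exp(-beta * rg2) * (gbest - x))
        return x + v_new
    if t <= 2 * T_max / 3:
        return contour_flight(x, gbest)   # narrowed exploration, Eq. (14)
    return walk_and_grab(x, gbest)        # narrowed exploitation, Eq. (23)
```

Because the AO moves replace only the mating-dance branch, the hybrid evaluates the same number of candidate positions per iteration as the basic MOA, which is why the time complexity is unchanged.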

3.2. Movement of Female Mayflies

In the basic MOA, when the fitness of a female mayfly is worse than that of the corresponding male, the female is attracted by the male and moves toward his position. When the fitness of a female is better than that of her male counterpart, she is not attracted and performs a random walk. Although the random walk helps the female expand the search range to some extent and avoid falling into a local optimum, in actual experiments this method does not have a beneficial effect on function optimization. We believe that female mayflies, as the attracted population, should be drawn to the globally optimal individual and move according to the global optimal position whenever they cannot be attracted to the corresponding male.
The high soar with vertical stoop method and the low flight with slow descent attack method in the AO are an expanded method in the exploration process and an expanded method in the exploitation process, respectively. Both methods are employed in the AO to expand the search range of Aquila. This not only has a similar effect as the random walk behavior performed by female mayflies but also more closely fits with the view that female mayflies are attracted to the global optimal individual when they are not attracted by male mayflies. Therefore, the improved mathematical model of the female mayfly’s positional movement process is as follows:
y_ij^{t+1} = y_ij^t + g·v_ij^t + a_3·e^{−β·r_mf²}·(x_ij^t − y_ij^t),  if f(y_i) > f(x_i)
y_ij^{t+1} = gbest_j × (1 − t/T_max) + (x_M^t − gbest_j × rand),  if f(y_i) ≤ f(x_i) and t ≤ (2/3)T_max
y_ij^{t+1} = (gbest_j − x_M^t) × α − rand + ((UB − LB) × rand + LB) × δ,  if f(y_i) ≤ f(x_i) and t > (2/3)T_max    (30)

3.3. Stochastic OBL of Offspring Mayflies

Although OBL can perform well for most functions, for symmetric functions, the feasible solution without OBL has the same fitness as the opposite solution using OBL, which leads to the poor optimization effect of OBL in this kind of function and problem. To solve this problem, we introduce stochastic OBL, which applies a random perturbation on the basis of the OBL strategy to increase its randomness. The stochastic OBL strategy is defined as follows:
Definition 3. 
Stochastic opposite point. On the basis of the opposite point, a random perturbation is applied to x_i:
x̃_i = lb_i + ub_i − x_i × r,    (31)
where r is a random value in the (0, 1) interval that conforms to a Gaussian distribution.
Introducing stochastic OBL into the optimization of the offspring mayflies helps improve the performance of the algorithm by diversifying the offspring population. However, if this method were simply added on top of the original algorithm, the overall efficiency would decrease owing to the increase in time complexity. Therefore, we replace the original gene mutation behavior of the offspring mayflies with stochastic OBL, which embeds the strategy in the algorithm without affecting the time complexity. The process of optimizing the offspring mayflies with stochastic OBL is as follows:
offspring_ij^t = offspring_ij^t,  if f(offspring_ij^t) ≤ f(offspring~_ij^t)
offspring_ij^t = offspring~_ij^t,  if f(offspring_ij^t) > f(offspring~_ij^t)    (32)
where offspring~_ij^t is the individual produced by the stochastic OBL strategy, and offspring_ij^t is the individual before the strategy is applied.
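One offspring's stochastic-OBL step, Equations (31) and (32), might be sketched as follows. Equation (31) is taken as written; drawing r from a Gaussian clipped to (0, 1] is an assumption, since the text only states that r is a Gaussian-distributed value in (0, 1).

```python
import numpy as np

def stochastic_obl_select(offspring, lb, ub, f, rng=None):
    """Stochastic-OBL replacement for one offspring, Eqs. (31)-(32).

    offspring, lb, ub are 1-D vectors; f is the (minimized) objective.
    The clipped-Gaussian r is one plausible reading of the text.
    """
    rng = rng or np.random.default_rng()
    r = float(np.clip(rng.normal(0.5, 0.15), 1e-6, 1.0))
    opposite = lb + ub - offspring * r          # Eq. (31), as written
    # keep whichever of the pair has the better (smaller) fitness, Eq. (32)
    return offspring if f(offspring) <= f(opposite) else offspring * 0 + opposite
```

By construction the returned individual is never worse than the original, so the step replaces mutation with a greedy one-shot diversification at the same cost of one extra fitness evaluation.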

3.4. Sensitivity Analysis

The parametric sensitivity analysis method of Xu et al. [45] is applied to AOBLMOA. The weight minimization of a speed reducer problem from the CEC2020 real-world problems is selected in this section for the sensitivity analysis of five core parameters of AOBLMOA: a_1, a_2, a_3, α, and δ. The details of the selected problem are given in Section 4.3.1. The parameter ranges are a_1 ∈ [0.8, 1.0], a_2 ∈ [1.3, 1.5], a_3 ∈ [1.3, 1.5], α ∈ [0.08, 0.1], and δ ∈ [0.08, 0.1]. The problem size is set to 7, and the number of iterations to 10,000. The average values obtained over 25 independent runs with different parameter combinations are shown in Table 2.
From Table 2, we can see that AOBLMOA obtains the best function value in Scenario 32, that is, a_1 = 1.0, a_2 = 1.5, a_3 = 1.5, α = 0.1, and δ = 0.1. Therefore, in the subsequent experiments, we use the same parameter settings as Scenario 32.

3.5. Pseudocode of AOBLMOA

The pseudo-code for AOBLMOA is shown in Algorithm 1.
Algorithm 1: Pseudo-code of AOBLMOA
Input the Initialization parameters of AOBLMOA
Objective function f(x), x = (x_1, …, x_d)^T
  •  Initialize the male mayfly population x i (i = 1,2,…,N)
  •  Initialize the male mayfly velocities v m i
  •  Initialize the female mayfly population y i (i = 1,2,…,N)
  •  Initialize the female mayfly velocities v f i
  •  Evaluate solutions
  •  Find global best gbest
  • while t m a x I t e r do
  •     for each female y i  do
  •         if  f ( y i ) f ( x i )  then
  •            Update the v f i using Equation (8)
  •            Update the location vector y i using Equation (7)
  •         else if  f ( y i ) > f ( x i ) | | t 2 3 T m a x  then
  •            Update the location vector y i using Equation (14)
  •         else if  f ( y i ) > f ( x i ) | | t > 2 3 T m a x  then
  •            Update the location vector y i using Equation (23)
  •         end if
  •     end for
  •     for each male x i  do
  •         if  f ( x i ) f ( g b e s t )  then
  •            Update the v m i using Equation (2)
  •            Update the location vector x i using Equation (1)
  •         else if  f ( x i ) > f ( g b e s t ) | | t 2 3 T m a x  then
  •            Update the location vector x i using Equation (12)
  •         else if  f ( x i ) > f ( g b e s t ) | | t > 2 3 T m a x  then
  •            Update the location vector x i using Equation (22)
  •         end if
  •     end for
  •     Generate the offspring mayfly population z i using Equations (9) and (10)
  •     for each offspring z i  do
  •         Calculate the opposite solution using Equation (31)
  •         Update the location vector z i using Equation (32)
  •     end for
  •     return gbest
  • end while
Output the Global optimal solution gbest
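For readers who prefer code to pseudocode, the control flow of Algorithm 1 can be sketched as follows. This is a deliberately simplified Python sketch: the AO/MOA velocity and position updates of Equations (1)–(23) and the mating operator of Equations (9)–(10) are replaced by generic placeholder moves, so only the structure (female moves, male moves, mating, and OBL instead of mutation) is faithful to the paper:

```python
import random

def aoblmoa_sketch(f, dim, lb, ub, n=15, max_iter=60, seed=1):
    """Control-flow sketch of Algorithm 1 (AOBLMOA). The AO/MOA updates of
    Equations (1)-(23) are replaced by a generic 'move toward a guide'
    step; only the structure of the algorithm is reproduced."""
    rng = random.Random(seed)
    rand_pt = lambda: [rng.uniform(lb, ub) for _ in range(dim)]
    clip = lambda x: [min(max(v, lb), ub) for v in x]

    def toward(x, guide):  # placeholder for Equations (1)-(23)
        return clip([xi + rng.random() * (gi - xi) for xi, gi in zip(x, guide)])

    males = [rand_pt() for _ in range(n)]
    females = [rand_pt() for _ in range(n)]
    gbest = min(males + females, key=f)
    for _ in range(max_iter):
        # Females move toward their paired male (or restart if already better).
        females = [toward(y, x) if f(x) <= f(y) else rand_pt()
                   for x, y in zip(males, females)]
        # Males move toward the global best.
        males = [toward(x, gbest) for x in males]
        # Mating: midpoint crossover (stand-in for Equations (9)-(10)).
        offspring = [[(a + b) / 2.0 for a, b in zip(x, y)]
                     for x, y in zip(males, females)]
        # Stochastic OBL replaces mutation: greedy choice vs. opposite point.
        opposite = [[lb + ub - v for v in z] for z in offspring]
        offspring = [z if f(z) <= f(o) else o
                     for z, o in zip(offspring, opposite)]
        gbest = min([gbest] + males + females + offspring, key=f)
    return gbest
```

Because `gbest` is updated with a `min` over itself and all current solutions, the best fitness found is monotonically non-increasing over iterations, mirroring the elitism implicit in the pseudocode.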

4. Experimental Results and Analysis

To demonstrate the effectiveness, stability, and superiority of AOBLMOA, we first test it on 19 benchmark functions. On these classical benchmarks, the basic MOA and the basic AO have been compared with PSO, GA, GWO, the SSA, SCA, and other traditional or state-of-the-art metaheuristic algorithms, and the results show that both are superior to these traditional algorithms. On this basis, we compare AOBLMOA with the basic MOA, the basic AO, AMOA (combining the MOA and the AO), OBLMOA (combining the MOA and the OBL strategy), and OBLAO (combining the AO and the OBL strategy) to show that AOBLMOA not only outperforms classical metaheuristic algorithms but is also the best of the algorithm combinations considered in this paper. We then use the CEC2017 bound-constrained numerical optimization problems to compare AOBLMOA with state-of-the-art algorithms, showing that it is both applicable to numerical optimization problems and competitive with these algorithms. Finally, we apply AOBLMOA to the CEC2020 real-world constrained optimization problems and compare it with the three top-performing algorithms of that competition to show that AOBLMOA is equally applicable to real-world engineering problems.
In the CEC2017BC test suite, the state-of-the-art algorithms compared with AOBLMOA are the Reptile Search Algorithm (RSA) [46], the annealing-behaved Grasshopper Optimization Algorithm (SGOA) [47], the multi-strategy enhanced Salp Swarm Algorithm (ESSA) [48], the improved Moth-Flame Optimization algorithm (LGCMFO) [49], and the chaos-based Mayfly Algorithm (COLMA) [4]. The comparison data are taken from the results published in the papers of the respective algorithms. In the CEC2020RW test suite, the comparison algorithms are Self-Adaptive Spherical Search (SASS) [50], the Modified Matrix Adaptation Evolution Strategy (sCMAgES) [51], and COLSHADE [52].
The experiments are conducted on a computer with an Intel Core i7-1165G7 CPU, 16 GB of RAM, and 64-bit Microsoft Windows 11. The source code is implemented in MATLAB (R2021b).

4.1. Benchmark Function

We use 19 benchmark functions to test the ability of AOBLMOA to search for the global optimum in the problem space and to escape local optima, and to demonstrate its superiority over the original MOA, the AO, and the variant algorithms. The 19 benchmark functions include unimodal functions ( f 1 – f 4 ), used to test the global search ability of the algorithm; multimodal functions ( f 5 – f 10 ), used to test the local search ability of the algorithm in more complex cases and its ability to escape local optima; and fixed-dimension functions ( f 11 – f 19 ), used to test the exploration ability of the algorithm in low-dimensional spaces. To evaluate the exploitation ability of AOBLMOA in different problem dimensions, we set the problem dimensions of the unimodal and multimodal functions to 10, 30, 50, and 100 and analyze the results in each dimension. In the literature [22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41], the number of iterations used for each function ranges from 200 to 100,000 and the population size from 20 to 50. Therefore, to ensure the stability of the results, each algorithm is run independently 30 times for each function, with a maximum of 1000 iterations per run. The mathematical expression of each benchmark function is shown in Table 3.

4.1.1. Convergence Analysis

Convergence analysis is the most basic step in the process of analyzing an algorithm. In this section, to verify the performance of AOBLMOA, we analyze its convergence behavior in the iterative process through a visualization method. Figure 1 lists the 2D image of the benchmark function (column 1), the search history of the algorithm in the problem space (column 2), the convergence trajectory of the algorithm (column 3), the change in the average fitness value (column 4), and the convergence curve (column 5).
The search history in the second column of Figure 1 shows the position distribution of the search factors in AOBLMOA over the iterations for the different functions. For the unimodal functions, most of the search factors converge so close to the optimal point that the points in the scatter plot appear sparse. For the multimodal and fixed-dimension functions, the search factors of AOBLMOA in f 5 , f 6 , f 7 , f 8 , f 9 , f 10 , f 14 , and f 16 can search around the optimal point, while in f 11 , f 12 , f 13 , f 15 , f 17 , f 18 , and f 19 we can see that, although a search factor occasionally falls into a local optimum during the search, it is not trapped there; it can jump out of the local optimum and exploit the region around the global optimum. The third column shows the trajectory of the male mayfly population in the first dimension of each problem. The trajectories show that AOBLMOA has a higher oscillation frequency early in the iterative process, indicating strong exploration ability in the early iterations, while the lower oscillation range later indicates strong exploitation ability in the late iterations.
Moreover, from the changes in the average fitness value of the solutions in the fourth column, we can see that most solutions have relatively high fitness values at the beginning of the iterations; however, within 20 iterations, the average fitness value drops to a very low interval, which shows that AOBLMOA can converge to the region of the optimal solution within few iterations. Similarly, the convergence curves in the fifth column show that, since a unimodal function has only one optimum and its convergence process is relatively simple, the corresponding convergence curve is relatively smooth, whereas a multimodal function may have multiple optima and a complicated convergence process, in which the global optimum may have to be found by jumping out of local optima. Therefore, the convergence curves of AOBLMOA for the multimodal functions resemble stepwise curves; even so, AOBLMOA finds the global optimum of both f 6 and f 8 within single-digit iteration counts. For the fixed-dimension functions, AOBLMOA can also find the global optimum within very few iterations.
To further analyze the convergence of AOBLMOA, we plot the convergence curves of the MOA, the AO, AMOA, OBLMOA, OBLAO, and AOBLMOA in the same graph. The parameter settings of each algorithm are shown in Table 4. Figure 2 compares the convergence curves of the algorithms for different functions, covering the best solution of each algorithm at each iteration. Comparing the MOA, the AO, and AMOA, which lack the OBL strategy, with OBLMOA, OBLAO, and AOBLMOA, which adopt it, we find that the algorithms with the OBL strategy show a clearly faster decay, especially for f 11 – f 17 . The AO easily falls into local optima for these functions, and OBLAO with the OBL strategy converges faster than the basic AO, which shows that the OBL strategy helps the algorithm converge in fewer iterations. At the same time, to confirm that fusing the AO and the MOA aids convergence, we compare the AO, the MOA, and AMOA, as well as OBLAO, OBLMOA, and AOBLMOA. The figure shows that AMOA, which adopts the fusion strategy, converges faster than the AO and the MOA for most functions; similarly, AOBLMOA converges better than OBLAO and OBLMOA. This shows that the fusion of the AO and the MOA also helps to improve convergence. Finally, comparing AOBLMOA with the other algorithms, we find that the proposed AOBLMOA converges faster than the MOA and the AO and does not fall into local optima. Among the hybrid algorithms, AOBLMOA also has the fastest convergence speed and can find the global optimum within only a small number of iterations.

4.1.2. Search Capability Analysis

The search capability analysis of the metaheuristic algorithm can be carried out by analyzing the development ability and the exploration ability of the algorithm. Table 5 and Table 6 show the calculation results of different algorithms for each benchmark function, including the best solution, median solution, worst solution, mean solution, and standard deviation after 30 independent runs, and the best results are shown in bold.
Since the unimodal function has only one global optimal solution, it is often used to test the exploitability of the algorithm. The optimization results of the unimodal function ( f 1 f 4 ) in Table 5 confirm the superiority of the proposed AOBLMOA exploitation performance because its best value, median value, worst value, and mean value for the unimodal function are superior to those of other algorithms, and all reach the theoretical optimal value of the function. Moreover, the minimum standard deviation obtained by AOBLMOA also proves its superior exploitation performance with high reliability.
The multimodal benchmark functions are often used to evaluate the exploration ability of an algorithm. We apply the high-dimensional multimodal functions ( f 5 – f 10 ) and the fixed-dimension multimodal functions ( f 11 – f 19 ) to test this ability. Table 5 and Table 6 list the best value, median value, worst value, mean value, and standard deviation of each algorithm for the high-dimensional and fixed-dimension multimodal functions, respectively. The results show that AOBLMOA provides the minimum metric values for all 15 multimodal functions and reaches the theoretical optimal values for all of them except f 5 , f 7 , f 9 , and f 10 . This shows that AOBLMOA also has excellent exploration capability.

4.1.3. Stability Analysis

To evaluate the ability and stability of AOBLMOA when dealing with problems of different dimensions, we set the problem dimensions of the 10 functions f 1 – f 10 in Table 3 to 30, 50, and 100. AOBLMOA and the comparison algorithms are independently run 30 times for each function, with the number of iterations set to 1000. As shown in Table 7, Table 8 and Table 9, the performance of each algorithm for each function is reported as its best value, median value, worst value, average value, and standard deviation. Except that the best value and median of AOBLMOA for the 30-dimensional f 5 are slightly worse than those of the AO and OBLAO, AOBLMOA achieves the best results among all algorithms for every indicator of every function in every dimension. At the same time, the proposed algorithm can jump out of multiple local optima for functions of different dimensions and accurately reach the global optimum once it finds the effective global search domain, which shows that its search ability does not decline as the problem dimension increases.

4.1.4. Time Complexity Analysis

The time complexity can describe the efficiency of the algorithm operation. Zhou et al. [5] analyzed the time complexity of the variant mayfly algorithm that they proposed, and we adopt the same idea to analyze and compare the time complexity of the MOA and AOBLMOA.
The time complexity of the basic MOA and AOBLMOA is mainly related to three parameters: the problem dimension ( $d$ ), the population size ( $N$ ), and the number of iterations ( $T_{max}$ ). In a single iteration, the time complexity of each algorithm can be summarized as O(MOA) = O(population initialization) + O(position updating) + O(mayflies mating) + O(offspring mutation) and O(AOBLMOA) = O(population initialization) + O(position updating) + O(mayflies mating) + O(offspring opposition-based learning). The detailed analysis of the time complexity of each algorithm is as follows:
In the basic MOA, the time complexity of the initial population is O ( d × N ) , the calculation cost of the process of mayfly position change is O ( 2 × T m a x × d × N ) , the calculation amount of the mating of the mayflies is O ( T m a x × d × N ) , and the calculation amount of the mutation of the offspring is O ( T m a x × d × N ) . Therefore, the time complexity of the basic MOA is O ( ( d × N ) × ( 1 + 4 × T m a x ) ) .
In AOBLMOA, the time complexity of the initial population is O ( d × N ) , the time complexity of the process of the mayfly position change adopted by combining AO is O ( 2 × T m a x × d × N ) , the time complexity of the mating process of mayflies is O ( T m a x × d × N ) , and the time consumption of the offspring mayflies in stochastic OBL processes is O ( T m a x × d × N ) . Thus, the time complexity of AOBLMOA is O ( ( d × N ) × ( 1 + 4 × T m a x ) ) .
To sum up, the time complexities of the MOA and AOBLMOA are both $O( ( d \times N ) \times ( 1 + 4 \times T_{max} ) )$, i.e., of order $O( T_{max} \times d \times N )$, which shows that AOBLMOA introduces no additional time complexity compared with the basic MOA.
At the same time, to confirm the accuracy of the time complexity analyses of the two algorithms, we list the computation times of the algorithms for f 1 – f 9 in Table 5 and Table 6. From these data, we can see that the computation times of the MOA and AOBLMOA are similar, which confirms that the time complexities of the two algorithms are basically the same.

4.1.5. Statistical Analysis

In the above analysis process, after the comprehensive consideration of the experimental results obtained after each algorithm is independently run 30 times for different functions, we conclude that the proposed AOBLMOA has superior convergence, search ability, and stability. To confirm the statistical validity of this conclusion, we use statistical methods to analyze the optimization results of each algorithm for each function. The adopted statistical methods include the Wilcoxon rank sum nonparametric statistical test [53] and the Friedman statistics test.
The Wilcoxon rank sum nonparametric statistical test judges whether there is a significant difference between two groups of data by comparing the p value: if $p < 0.05$, the two groups differ significantly; otherwise, they do not. On this basis, we compare the optimization results of AOBLMOA for each function with those of the other algorithms: if $p < 0.05$, there is a significant difference between AOBLMOA and the specific algorithm for the corresponding function; otherwise, there is no significant difference. The calculation results are shown in Table 10, where “+/−/=” indicates the number of functions with a “significant advantage/significant disadvantage/no significant difference” of the corresponding algorithm relative to AOBLMOA. As Table 10 shows, compared with the other algorithms, AOBLMOA not only has no significant disadvantage but also has a significant advantage for most functions.
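The decision rule behind Table 10 can be reproduced with a standard rank-sum computation. The sketch below implements the normal-approximation form of the test with midranks for ties (the function name is ours, and no tie correction of the variance is applied; in practice, `scipy.stats.ranksums` computes the same approximation):

```python
from math import erf, sqrt

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p value via the normal approximation
    (midranks for ties, no variance tie correction). Mirrors the
    p < 0.05 decision rule used for Table 10."""
    pooled = sorted((v, k) for k, v in enumerate(a + b))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        mid = (i + j) / 2.0 + 1.0  # average rank of the tied block
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = mid
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(ranks[:n1])  # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    # two-sided p value from the standard normal CDF
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
```

Two runs of 30 results each (as in the experiments) would be passed as `a` and `b`; a returned value below 0.05 counts toward the “+” or “−” column of Table 10 depending on which algorithm has the better mean rank.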
The Friedman statistical test mainly judges the superiority of the algorithm by ranking the mean value in the data, and the lower the ranking value is, the more superior the algorithm is. The ranking formula is
$$R_j = \frac{1}{N} \sum_{i=1}^{N} r_i^j,$$
where i represents the i -th function, j represents the j -th algorithm, N represents the number of test functions, and r i j represents the ranking of the j -th algorithm in the i -th function. The smaller the value of R j is, the higher the ranking of the algorithm is, and the better the performance is.
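The mean-rank computation of the formula above can be written directly. In the sketch below, `results[i][j]` holds the value of algorithm $j$ on function $i$; lower values rank better, and ties share the average rank:

```python
def friedman_mean_ranks(results):
    """Mean Friedman rank per algorithm: R_j = (1/N) * sum_i r_i^j.
    results[i][j] is the value of algorithm j on test function i;
    lower values rank better, and ties share the average rank."""
    n_fun, n_alg = len(results), len(results[0])
    totals = [0.0] * n_alg
    for row in results:
        order = sorted(range(n_alg), key=lambda a: row[a])
        pos = 0
        while pos < n_alg:
            end = pos
            while end + 1 < n_alg and row[order[end + 1]] == row[order[pos]]:
                end += 1
            avg_rank = (pos + end) / 2.0 + 1.0  # shared rank of tied block
            for k in range(pos, end + 1):
                totals[order[k]] += avg_rank
            pos = end + 1
    return [t / n_fun for t in totals]
```

The algorithm with the smallest returned value receives the best (lowest) Friedman ranking, which is how the final rankings in Table 11 are obtained.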
From the Friedman ranking in Table 11, we find that both the basic AO and the basic MOA are ranked in the last two positions for all algorithms, which shows that other hybrid algorithms are indeed optimized on the basis of the original algorithm. Not only that, the proposed AOBLMOA ranks first in all 49 functions, and its Friedman ranking and the final ranking are both first, which is enough to illustrate the strong performance of AOBLMOA compared with other algorithms.
Combining the two statistical analysis results, it can be seen that the conclusion that “AOBLMOA has superior convergence, search ability and stability compared with AO, MOA and other hybrid algorithms” is statistically valid.

4.2. CEC2017 Bound Constrained Numerical Optimization Problems

To demonstrate that the proposed AOBLMOA not only outperforms the original algorithms, the recently popular improved MOAs, and state-of-the-art algorithms but can also be applied to numerical optimization problems, we test it on the very challenging CEC2017BC function set. The CEC2017BC test functions include unimodal functions ( f 1 – f 3 ), multimodal functions ( f 4 – f 11 ), hybrid functions ( f 12 – f 20 ), and composite functions ( f 21 – f 30 ). The specific information is shown in Table 12.
In this experiment, we compare AOBLMOA with several state-of-the-art algorithms, such as the MOA, the AO, LGCMFO, RSA, the ESSA, the SGOA, and COLMA. All algorithms are independently run 30 times, the maximum number of iterations is 1000, and the population size is set to 30. Table 13 lists the average, standard deviation, Friedman ranking, and final ranking of each algorithm for each function. From the final ranking results in Table 13, we can see that the performance of AOBLMOA for the CEC2017 function set is not only better than the MOA and the AO but also than the other state-of-the-art algorithms, and the final ranking is first. This shows that AOBLMOA is feasible and superior when applied to numerical optimization problems.

4.3. CEC2020 Real-World Constrained Optimization Problems

To verify the feasibility of the proposed AOBLMOA for engineering optimization problems, we test it on the CEC2020 real-world constrained optimization problems and compare it with the three best-performing algorithms in the CEC2020RW competition: SASS, sCMAgES, and COLSHADE, whose results can be found in [54]. We use a static penalty function approach to handle the constraints of each problem. The standard deviation, mean value, median value, minimum value, and maximum value of each algorithm are compared after 25 independent runs for each problem. As the results in Table 14, Table 15, Table 16, Table 17, Table 18, Table 19, Table 20, Table 21, Table 22 and Table 23 show, the proposed AOBLMOA is not inferior to the other three algorithms, which shows that AOBLMOA is also feasible for real-world engineering problems.
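The constraint handling can be illustrated as follows. This is a generic static penalty transform (the penalty coefficient `rho` and equality tolerance `eps` are illustrative choices, not the values used in the experiments): each constraint violation is squared, scaled, and added to the objective, so that the penalized function can be minimized by the unconstrained AOBLMOA.

```python
def static_penalty(f, ineq=(), eq=(), rho=1e6, eps=1e-4):
    """Generic static penalty: F(x) = f(x) + rho * (sum of squared
    violations of g_i(x) <= 0 plus squared violations of |h_j(x)| <= eps).
    rho and eps are illustrative values, not those used in the paper."""
    def penalized(x):
        v = sum(max(0.0, g(x)) ** 2 for g in ineq)
        v += sum(max(0.0, abs(h(x)) - eps) ** 2 for h in eq)
        return f(x) + rho * v
    return penalized
```

Feasible points are left untouched by the transform, while infeasible points are made so expensive that the search is pushed back into the feasible region.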

4.3.1. Weight Minimization of a Speed Reducer (WMSR)

The WMSR problem mainly describes the design of a small aircraft engine reducer. The problem contains 11 constraints and seven design variables, and the mathematical model is as follows:
minimize
$$f(\bar{x}) = 0.7854 x_2^2 x_1 \left( 14.9334 x_3 - 43.0934 + 3.3333 x_3^2 \right) + 0.7854 \left( x_5 x_7^2 + x_4 x_6^2 \right) - 1.508 x_1 \left( x_7^2 + x_6^2 \right) + 7.477 \left( x_7^3 + x_6^3 \right),$$
subject to
$$\begin{aligned}
& g_1(\bar{x}) = -x_1 x_2^2 x_3 + 27 \le 0, \quad g_2(\bar{x}) = -x_1 x_2^2 x_3^2 + 397.5 \le 0, \\
& g_3(\bar{x}) = -x_2 x_6^4 x_3 x_4^{-3} + 1.93 \le 0, \quad g_4(\bar{x}) = -x_2 x_7^4 x_3 x_5^{-3} + 1.93 \le 0, \\
& g_5(\bar{x}) = 10 x_6^{-3} \sqrt{16.91 \times 10^6 + \left( 745 x_4 x_2^{-1} x_3^{-1} \right)^2} - 1100 \le 0, \\
& g_6(\bar{x}) = 10 x_7^{-3} \sqrt{157.5 \times 10^6 + \left( 745 x_5 x_2^{-1} x_3^{-1} \right)^2} - 850 \le 0, \\
& g_7(\bar{x}) = x_2 x_3 - 40 \le 0, \quad g_8(\bar{x}) = -x_1 x_2^{-1} + 5 \le 0, \quad g_9(\bar{x}) = x_1 x_2^{-1} - 12 \le 0, \\
& g_{10}(\bar{x}) = 1.5 x_6 - x_4 + 1.9 \le 0, \quad g_{11}(\bar{x}) = 1.1 x_7 - x_5 + 1.9 \le 0,
\end{aligned}$$
with bounds
$$0.7 \le x_2 \le 0.8, \quad 17 \le x_3 \le 28, \quad 2.6 \le x_1 \le 3.6, \quad 5 \le x_7 \le 5.5, \quad 7.3 \le x_5, x_4 \le 8.3, \quad 2.9 \le x_6 \le 3.9.$$
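For reference, the WMSR objective and constraints can be coded directly from the model above (a sketch following the standard speed reducer formulation, with the variable ordering implied by the bounds):

```python
from math import sqrt

def wmsr_objective(x):
    """Speed reducer weight; x = [x1, ..., x7] ordered as in the model."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x2**2 * x1 * (14.9334 * x3 - 43.0934 + 3.3333 * x3**2)
            + 0.7854 * (x5 * x7**2 + x4 * x6**2)
            - 1.508 * x1 * (x7**2 + x6**2)
            + 7.477 * (x7**3 + x6**3))

def wmsr_constraints(x):
    """Values of g1..g11; the design is feasible when every entry is <= 0."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return [
        -x1 * x2**2 * x3 + 27,
        -x1 * x2**2 * x3**2 + 397.5,
        -x2 * x6**4 * x3 / x4**3 + 1.93,
        -x2 * x7**4 * x3 / x5**3 + 1.93,
        10 / x6**3 * sqrt(16.91e6 + (745 * x4 / (x2 * x3))**2) - 1100,
        10 / x7**3 * sqrt(157.5e6 + (745 * x5 / (x2 * x3))**2) - 850,
        x2 * x3 - 40,
        -x1 / x2 + 5,
        x1 / x2 - 12,
        1.5 * x6 - x4 + 1.9,
        1.1 * x7 - x5 + 1.9,
    ]
```

A design frequently reported in the literature as near-optimal, roughly $(3.5, 0.7, 17, 7.3, 7.715, 3.351, 5.287)$, gives a weight of about 2994 and satisfies all eleven constraints to within numerical tolerance.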

4.3.2. Optimal Design of Industrial Refrigeration System (ODIRS)

The ODIRS problem is an optimization problem for an industrial refrigeration system. It contains 15 constraints and 14 design variables. Its specific mathematical model is as follows:
minimize
$$\begin{aligned} f(\bar{x}) = {} & 63098.88 x_2 x_4 x_{12} + 5441.5 x_2^2 x_{12} + 115055.5 x_2^{1.664} x_6 + 6172.27 x_2^2 x_6 \\ & + 63098.88 x_1 x_3 x_{11} + 5441.5 x_1^2 x_{11} + 115055.5 x_1^{1.664} x_5 + 6172.27 x_1^2 x_5 \\ & + 140.53 x_1 x_{11} + 281.29 x_3 x_{11} + 70.26 x_1^2 + 281.29 x_1 x_3 + 281.29 x_3^2 \\ & + 14437 x_8^{1.8812} x_{12}^{0.3424} x_{10} x_{14}^{-1} x_1^2 x_7 x_9^{-1} + 20470.2 x_7^{2.893} x_{11}^{0.316} x_1^2, \end{aligned}$$
subject to
$$\begin{aligned}
& g_1(\bar{x}) = 1.524 x_7^{-1} - 1 \le 0, \quad g_2(\bar{x}) = 1.524 x_8^{-1} - 1 \le 0, \\
& g_3(\bar{x}) = 0.07789 x_1^2 x_7^{-1} x_9 - 1 \le 0, \\
& g_4(\bar{x}) = 7.05305 x_9^{-1} x_1^2 x_{10} x_8^{-1} x_2^{-1} x_{14}^{-1} - 1 \le 0, \\
& g_5(\bar{x}) = 0.0833 x_{13}^{-1} x_{14} - 1 \le 0, \\
& g_6(\bar{x}) = 47.136 x_2^{0.333} x_{10}^{-1} x_{12} - 1.333 x_8 x_{13}^{2.1195} + 62.08 x_{13}^{2.1195} x_{12}^{-1} x_8^{0.2} x_{10}^{-1} - 1 \le 0, \\
& g_7(\bar{x}) = 0.04771 x_{10} x_8^{1.8812} x_{12}^{0.3424} - 1 \le 0, \quad g_8(\bar{x}) = 0.0488 x_9 x_7^{1.893} x_{11}^{0.316} - 1 \le 0, \\
& g_9(\bar{x}) = 0.0099 x_1 x_3^{-1} - 1 \le 0, \quad g_{10}(\bar{x}) = 0.0193 x_2 x_4^{-1} - 1 \le 0, \\
& g_{11}(\bar{x}) = 0.0298 x_1 x_5^{-1} - 1 \le 0, \quad g_{12}(\bar{x}) = 0.056 x_2 x_6^{-1} - 1 \le 0, \\
& g_{13}(\bar{x}) = 2 x_9^{-1} - 1 \le 0, \quad g_{14}(\bar{x}) = 2 x_{10}^{-1} - 1 \le 0, \quad g_{15}(\bar{x}) = x_{12} x_{11}^{-1} - 1 \le 0,
\end{aligned}$$
with bounds
$$0.001 \le x_i \le 5, \quad i = 1, \ldots, 14.$$

4.3.3. Tension/Compression Spring Design (TCSD Case 1)

The TCSD1 problem is a relatively classic engineering optimization problem, and many metaheuristic algorithms use this problem to prove its feasibility in engineering optimization problems. The main objective of this problem is to optimize the weight of a tension or compression spring. It consists of 4 constraints and 3 design variables: wire diameter ( x 1 ), mean coil diameter ( x 2 ), and number of coils ( x 3 ). The mathematical model of the problem appears as follows:
minimize
$$f(\bar{x}) = \left( x_3 + 2 \right) x_2 x_1^2,$$
subject to
$$\begin{aligned}
& g_1(\bar{x}) = 1 - \frac{x_2^3 x_3}{71785 x_1^4} \le 0, \quad g_2(\bar{x}) = \frac{4 x_2^2 - x_1 x_2}{12566 \left( x_2 x_1^3 - x_1^4 \right)} + \frac{1}{5108 x_1^2} - 1 \le 0, \\
& g_3(\bar{x}) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0, \quad g_4(\bar{x}) = \frac{x_1 + x_2}{1.5} - 1 \le 0,
\end{aligned}$$
with bounds
$$0.05 \le x_1 \le 2, \quad 0.25 \le x_2 \le 1.3, \quad 2 \le x_3 \le 15.$$
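The TCSD1 model above is compact enough to code in full (a sketch of the standard tension/compression spring formulation; the function names are ours):

```python
def tcsd1_objective(x):
    """Spring weight f = (x3 + 2) x2 x1^2, with x1 = wire diameter,
    x2 = mean coil diameter, x3 = number of active coils."""
    x1, x2, x3 = x
    return (x3 + 2.0) * x2 * x1**2

def tcsd1_constraints(x):
    """Values of g1..g4; the design is feasible when every entry is <= 0."""
    x1, x2, x3 = x
    return [
        1.0 - x2**3 * x3 / (71785.0 * x1**4),
        (4.0 * x2**2 - x1 * x2) / (12566.0 * (x2 * x1**3 - x1**4))
        + 1.0 / (5108.0 * x1**2) - 1.0,
        1.0 - 140.45 * x1 / (x2**2 * x3),
        (x1 + x2) / 1.5 - 1.0,
    ]
```

A design frequently reported as near-optimal, roughly $(0.05169, 0.35672, 11.289)$, yields a weight of about 0.012665 with all four constraints satisfied to within numerical tolerance, which is why this problem is such a common sanity check for new metaheuristics.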

4.3.4. Multiple Disk Clutch Brake Design Problem (MDCBDP)

The MDCBDP problem is described by nine constraints and five integer decision variables, and its main purpose is to minimize the mass of the multiplate clutch brake. The decision variables for this problem include the inner radius ( x 1 ), outer radius ( x 2 ), disc thickness ( x 3 ), actuator force ( x 4 ), and number of friction surfaces ( x 5 ). Its mathematical model is as follows:
minimize
$$f(\bar{x}) = \pi \left( x_2^2 - x_1^2 \right) x_3 \left( x_5 + 1 \right) \rho,$$
subject to
$$\begin{aligned}
& g_1(\bar{x}) = -p_{max} + p_{rz} \le 0, \quad g_2(\bar{x}) = p_{rz} V_{sr} - V_{sr,max} p_{max} \le 0, \\
& g_3(\bar{x}) = \Delta R + x_1 - x_2 \le 0, \quad g_4(\bar{x}) = -L_{max} + \left( x_5 + 1 \right) \left( x_3 + \delta \right) \le 0, \\
& g_5(\bar{x}) = s M_s - M_h \le 0, \quad g_6(\bar{x}) = -T \le 0, \\
& g_7(\bar{x}) = -V_{sr,max} + V_{sr} \le 0, \quad g_8(\bar{x}) = T - T_{max} \le 0,
\end{aligned}$$
where
$$\begin{aligned}
& M_h = \frac{2}{3} \mu x_4 x_5 \frac{x_2^3 - x_1^3}{x_2^2 - x_1^2} \ \mathrm{N{\cdot}mm}, \quad \omega = \frac{\pi n}{30} \ \mathrm{rad/s}, \quad A = \pi \left( x_2^2 - x_1^2 \right) \mathrm{mm^2}, \\
& p_{rz} = \frac{x_4}{A} \ \mathrm{N/mm^2}, \quad V_{sr} = \frac{\pi R_{sr} n}{30} \ \mathrm{mm/s}, \quad R_{sr} = \frac{2}{3} \frac{x_2^3 - x_1^3}{x_2^2 - x_1^2} \ \mathrm{mm}, \quad T = \frac{I_z \omega}{M_h + M_f}, \\
& \Delta R = 20 \ \mathrm{mm}, \quad L_{max} = 30 \ \mathrm{mm}, \quad \mu = 0.6, \quad V_{sr,max} = 10 \ \mathrm{m/s}, \quad \delta = 0.5 \ \mathrm{mm}, \quad s = 1.5, \\
& T_{max} = 15 \ \mathrm{s}, \quad n = 250 \ \mathrm{rpm}, \quad I_z = 55 \ \mathrm{kg{\cdot}m^2}, \quad M_s = 40 \ \mathrm{Nm}, \quad M_f = 3 \ \mathrm{Nm}, \ \text{and} \ p_{max} = 1,
\end{aligned}$$
with bounds
$$60 \le x_1 \le 80, \quad 90 \le x_2 \le 110, \quad 1 \le x_3 \le 3, \quad 0 \le x_4 \le 1000, \quad 2 \le x_5 \le 9.$$

4.3.5. Planetary Gear Train Design Optimization (PGTDO)

The main goal of PGTDO is to minimize the maximum error of the car transmission ratio by calculating the number of gear teeth in the automatic planetary transmission system. The mathematical model of the problem is shown below, with six integer variables and 11 constraints.
Minimize
$$f(\bar{x}) = \max_k \left| i_k - i_{0k} \right|, \quad k = 1, 2, \ldots, R,$$
where
$$\begin{aligned}
& i_1 = \frac{N_6}{N_4}, \quad i_{01} = 3.11, \quad i_2 = \frac{N_6 \left( N_1 N_3 + N_2 N_4 \right)}{N_1 N_3 \left( N_6 - N_4 \right)}, \quad i_{02} = 1.84, \\
& i_R = -\frac{N_2 N_6}{N_1 N_3}, \quad i_{0R} = -3.11, \quad \bar{x} = \left( p, N_6, N_5, N_4, N_3, N_2, N_1, m_2, m_1 \right),
\end{aligned}$$
subject to
$$\begin{aligned}
& g_1(\bar{x}) = m_3 \left( N_6 + 2.5 \right) - D_{max} \le 0, \quad g_2(\bar{x}) = m_1 \left( N_1 + N_2 \right) + m_1 \left( N_2 + 2 \right) - D_{max} \le 0, \\
& g_3(\bar{x}) = m_3 \left( N_4 + N_5 \right) + m_3 \left( N_5 + 2 \right) - D_{max} \le 0, \\
& g_4(\bar{x}) = \left| m_1 \left( N_1 + N_2 \right) - m_3 \left( N_6 - N_3 \right) \right| - m_1 - m_3 \le 0, \\
& g_5(\bar{x}) = -\left( N_1 + N_2 \right) \sin( \pi / p ) + N_2 + 2 + \delta_{22} \le 0, \\
& g_6(\bar{x}) = -\left( N_6 - N_3 \right) \sin( \pi / p ) + N_3 + 2 + \delta_{33} \le 0, \\
& g_7(\bar{x}) = -\left( N_4 + N_5 \right) \sin( \pi / p ) + N_5 + 2 + \delta_{55} \le 0, \\
& g_8(\bar{x}) = \left( N_3 + N_5 + 2 + \delta_{35} \right)^2 - \left( N_6 - N_3 \right)^2 - \left( N_4 + N_5 \right)^2 + 2 \left( N_6 - N_3 \right) \left( N_4 + N_5 \right) \cos\!\left( \frac{2\pi}{p} - \beta \right) \le 0, \\
& g_9(\bar{x}) = N_4 - N_6 + 2 N_5 + 2 \delta_{56} + 4 \le 0, \quad g_{10}(\bar{x}) = 2 N_3 - N_6 + N_4 + 2 \delta_{34} + 4 \le 0, \\
& h_1(\bar{x}) : \frac{N_6 - N_4}{p} = \mathrm{integer}, \quad \delta_{22} = \delta_{33} = \delta_{55} = \delta_{35} = \delta_{56} = 0.5, \\
& \beta = \cos^{-1} \frac{\left( N_4 + N_5 \right)^2 + \left( N_6 - N_3 \right)^2 - \left( N_3 + N_5 \right)^2}{2 \left( N_6 - N_3 \right) \left( N_4 + N_5 \right)}, \quad D_{max} = 220,
\end{aligned}$$
with bounds
$$p \in \{3, 4, 5\}, \quad m_1, m_3 \in \{1.75, 2.0, 2.25, 2.5, 2.75, 3.0\}, \quad 17 \le N_1 \le 96, \quad 14 \le N_2 \le 54, \quad 14 \le N_3 \le 51, \quad 17 \le N_4 \le 46, \quad 14 \le N_5 \le 51, \quad 48 \le N_6 \le 124, \ \text{and} \ N_i = \mathrm{integer}.$$

4.3.6. Hydro-Static Thrust-Bearing Design Problem (HTBDP)

The HTBDP problem is primarily about optimizing bearing power losses by using four design variables: oil viscosity, bearing radius, flow rate, and groove radius. In addition to the above four design variables, the problem also contains seven nonlinear constraints, and its mathematical model is as follows:
minimize
$$f(\bar{x}) = \frac{Q P_0}{0.7} + E_f,$$
subject to
$$\begin{aligned}
& g_1(\bar{x}) = 1000 - P_0 \ge 0, \quad g_2(\bar{x}) = W - 101000 \ge 0, \\
& g_3(\bar{x}) = 5000 - \frac{W}{\pi \left( R^2 - R_0^2 \right)} \ge 0, \quad g_4(\bar{x}) = 50 - P_0 \ge 0, \\
& g_5(\bar{x}) = 0.001 - \frac{0.0307}{386.4 P_0} \cdot \frac{Q}{2 \pi R h} \ge 0, \quad g_6(\bar{x}) = R - R_0 \ge 0, \quad g_7(\bar{x}) = h - 0.001 \ge 0,
\end{aligned}$$
where
$$\begin{aligned}
& W = \frac{\pi P_0}{2} \cdot \frac{R^2 - R_0^2}{\ln( R / R_0 )}, \quad P_0 = \frac{6 \mu Q}{\pi h^3} \ln \frac{R}{R_0}, \quad E_f = 9336 Q \times 0.0307 \times 0.5 \times \Delta T, \\
& \Delta T = 2 \left( 10^P - 559.7 \right), \quad P = \frac{\log_{10} \log_{10} \left( 8.122 \times 10^6 \mu + 0.8 \right) + 3.55}{10.04}, \\
& h = \left( \frac{2 \pi \times 750}{60} \right)^2 \frac{2 \pi \mu}{E_f} \left( \frac{R^4}{4} - \frac{R_0^4}{4} \right),
\end{aligned}$$
with bounds
$$1 \le R \le 16, \quad 1 \le R_0 \le 16, \quad 1 \times 10^{-6} \le \mu \le 16 \times 10^{-6}, \quad 1 \le Q \le 16.$$

4.3.7. Four-Stage Gear Box Problem (FGBP)

The FGBP aims at minimizing the weight of the gearbox and has 22 design variables, which are very discrete, including gear position, gear teeth number, blank thickness, etc. At the same time, the problem also has 86 nonlinear constraints. The comparison of the results of each algorithm in this problem is shown in Table 20.

4.3.8. Gas Transmission Compressor Design (GTCD)

The GTCD problem contains four design variables and one constraint condition, which is used to optimize the design of the gas transmission compressor. Its mathematical model is as follows:
minimize
$$f(\bar{x}) = 8.61 \times 10^5 x_1^{1/2} x_2 x_3^{-2/3} x_4^{-1/2} + 3.69 \times 10^4 x_3 + 7.72 \times 10^8 x_1^{-1} x_2^{0.219} - 765.43 \times 10^6 x_1^{-1},$$
subject to
$$x_4 x_2^{-2} + x_2^{-2} - 1 \le 0,$$
with bounds
$$20 \le x_1 \le 50, \quad 1 \le x_2 \le 10, \quad 20 \le x_3 \le 50, \quad 0.1 \le x_4 \le 60.$$
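With a single constraint and four variables, the GTCD model is easy to code in full (a sketch; the function names are ours):

```python
from math import sqrt

def gtcd_objective(x):
    """Gas transmission compressor design cost for x = [x1, x2, x3, x4]."""
    x1, x2, x3, x4 = x
    return (8.61e5 * sqrt(x1) * x2 * x3**(-2.0 / 3.0) * x4**-0.5
            + 3.69e4 * x3
            + 7.72e8 * x2**0.219 / x1
            - 765.43e6 / x1)

def gtcd_constraint(x):
    """Single constraint g1 = x4 x2^-2 + x2^-2 - 1 <= 0."""
    x1, x2, x3, x4 = x
    return x4 / x2**2 + 1.0 / x2**2 - 1.0
```

At a point reported in the literature as near-optimal, roughly $(50, 1.1783, 24.593, 0.3884)$, the constraint is active (very close to zero) and the objective is around $2.965 \times 10^6$.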

4.3.9. Tension/Compression Spring Design (TCSD Case 2)

The TCSD2 problem is mainly used to optimize the volume of steel wire required for manufacturing a helical compression spring. It includes three design variables, the number of spring coils ( $x_1$ ), the outside diameter of the spring ( $x_2$ ), and the spring steel wire diameter ( $x_3$ ), together with eight nonlinear constraints. Its mathematical model is as follows:
minimize
$$f(\bar{x}) = \frac{\pi^2 x_2 x_3^2 \left( x_1 + 2 \right)}{4},$$
subject to
$$\begin{aligned}
& g_1(\bar{x}) = \frac{8000 C_f x_2}{\pi x_3^3} - 189000 \le 0, \quad g_2(\bar{x}) = l_f - 14 \le 0, \quad g_3(\bar{x}) = 0.2 - x_3 \le 0, \\
& g_4(\bar{x}) = x_2 - 3 \le 0, \quad g_5(\bar{x}) = 3 - \frac{x_2}{x_3} \le 0, \quad g_6(\bar{x}) = \sigma_p - 6 \le 0, \\
& g_7(\bar{x}) = \sigma_p + \frac{700}{K} + 1.05 \left( x_1 + 2 \right) x_3 - l_f \le 0, \quad g_8(\bar{x}) = 1.25 - \frac{700}{K} \le 0,
\end{aligned}$$
where
$$C_f = \frac{4 \left( x_2 / x_3 \right) - 1}{4 \left( x_2 / x_3 \right) - 4} + 0.615 \frac{x_3}{x_2}, \quad K = \frac{11.5 \times 10^6 x_3^4}{8 x_1 x_2^3}, \quad \sigma_p = \frac{300}{K}, \quad l_f = \frac{1000}{K} + 1.05 \left( x_1 + 2 \right) x_3,$$
with bounds
$$1 \le x_1 \ (\mathrm{integer}) \le 70, \quad 0.6 \le x_2 \ (\mathrm{continuous}) \le 3,$$
$$x_3 \ (\mathrm{discrete}) \in \{ 0.009, 0.0095, 0.0104, 0.0118, 0.0128, 0.0132, 0.014, 0.015, 0.0162, 0.0173, 0.018, 0.020, 0.023, 0.025, 0.028, 0.032, 0.035, 0.041, 0.047, 0.054, 0.063, 0.072, 0.080, 0.092, 0.105, 0.120, 0.135, 0.148, 0.162, 0.177, 0.192, 0.207, 0.225, 0.244, 0.263, 0.283, 0.307, 0.331, 0.362, 0.394, 0.4375, 0.500 \}.$$

4.3.10. Topology Optimization (TO)

The main purpose of this problem is to optimize the material layout for a given load set given the design search space and constraints related to system performance. The mathematical model is as follows:
minimize
$$f(\bar{x}) = U^T K U = \sum_{e=1}^{N} \left( x_e \right)^p u_e^T k_0 u_e,$$
subject to
$$h_1(\bar{x}) = \frac{V(\bar{x})}{V_0} - f = 0, \quad h_2(\bar{x}) = K U - F = 0,$$
with bounds
$$0 < \bar{x}_{min} \le \bar{x} \le 1.$$

5. Conclusions and Future Directions

In this paper, we propose a metaheuristic algorithm that combines the MOA, the AO, and the OBL strategy, namely, AOBLMOA. The algorithm takes the MOA as its framework, assigns the search methods of the Aquila in the AO to the male and female mayfly populations of the MOA, and replaces the mutation strategy of the offspring mayfly population with the stochastic OBL strategy. To verify the effectiveness, superiority, and feasibility of the proposed algorithm for different types of problems, we successively apply AOBLMOA to 19 benchmark functions, 30 CEC2017 functions, and 10 CEC2020 real-world constrained optimization problems. The results and the statistical analyses show that the algorithm improves considerably on the original algorithms, is competitive with recently proposed algorithms, and is feasible for numerical optimization and practical engineering optimization problems. However, the algorithm is designed for continuous problems rather than binary or discrete ones; therefore, AOBLMOA cannot directly solve discrete problems such as the traveling salesman problem (TSP) and the vehicle routing problem (VRP).
In future work, we suggest that researchers interested in AOBLMOA can further optimize it and even design a binary, discrete, or multi-objective AOBLMOA. It is also interesting to apply AOBLMOA to large-scale applications such as neural network optimization, workshop task scheduling, robot path planning, text and data mining, image segmentation, signal denoising, oil and gas pipeline network transportation, feature selection, etc. The MATLAB codes for AOBLMOA are available at https://github.com/SheldonYnuup/AOBLMOA (accessed on 1 August 2023), to help researchers with further research.

Author Contributions

Conceptualization, Y.Z. and C.H.; methodology, Y.Z.; software, M.Z.; validation, Y.Z., M.Z. and Y.C.; formal analysis, Y.C.; investigation, Y.C.; data curation, Y.Z. and M.Z.; writing—original draft preparation, Y.Z.; writing—review and editing, Y.Z. and C.H.; visualization, Y.Z., M.Z. and Y.C.; supervision, C.H.; project administration, C.H.; funding acquisition, C.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  2. Huang, C.; Zhao, Y.; Zhang, M.; Yang, H. APSO: An A*-PSO Hybrid Algorithm for Mobile Robot Path Planning. IEEE Access 2023, 11, 43238–43256. [Google Scholar] [CrossRef]
  3. Zervoudakis, K.; Tsafarakis, S. A mayfly optimization algorithm. Comput. Ind. Eng. 2020, 145, 106559. [Google Scholar] [CrossRef]
  4. Zhao, Y.; Huang, C.; Zhang, M.; Lv, C. COLMA: A chaos-based mayfly algorithm with opposition-based learning and Levy flight for numerical optimization and engineering design. J. Supercomput. 2023, 2023, 1–47. [Google Scholar] [CrossRef]
  5. Zhou, D.; Kang, Z.; Su, X.; Yang, C. An enhanced Mayfly optimization algorithm based on orthogonal learning and chaotic exploitation strategy. Int. J. Mach. Learn. Cybern. 2022, 13, 3625–3643. [Google Scholar] [CrossRef]
  6. Li, L.-L.; Lou, J.-L.; Tseng, M.-L.; Lim, M.K.; Tan, R.R. A hybrid dynamic economic environmental dispatch model for balancing operating costs and pollutant emissions in renewable energy: A novel improved mayfly algorithm. Expert Syst. Appl. 2022, 203, 117411. [Google Scholar] [CrossRef]
  7. Zhang, J.; Zheng, J.; Xie, X.; Lin, Z.; Li, H. Mayfly Sparrow Search Hybrid Algorithm for RFID Network Planning. IEEE Sens. J. 2022, 22, 16673–16686. [Google Scholar] [CrossRef]
  8. Zafar, M.H.; Khan, N.M.; Mirza, A.F.; Mansoor, M. Bio-inspired optimization algorithms based maximum power point tracking technique for photovoltaic systems under partial shading and complex partial shading conditions. J. Clean. Prod. 2021, 309, 127279. [Google Scholar] [CrossRef]
  9. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila Optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  10. Mahajan, S.; Abualigah, L.; Pandit, A.K.; Altalhi, M. Hybrid Aquila optimizer with arithmetic optimization algorithm for global optimization tasks. Soft Comput. 2022, 26, 4863–4881. [Google Scholar] [CrossRef]
  11. Ma, C.; Huang, H.; Fan, Q.; Wei, J.; Du, Y.; Gao, W. Grey wolf optimizer based on Aquila exploration method. Expert Syst. Appl. 2022, 205, 117629. [Google Scholar] [CrossRef]
  12. Ekinci, S.; Izci, D.; Eker, E.; Abualigah, L. An effective control design approach based on novel enhanced aquila optimizer for automatic voltage regulator. Artif. Intell. Rev. 2023, 56, 1731–1762. [Google Scholar] [CrossRef]
  13. Ewees, A.A.; Algamal, Z.Y.; Abualigah, L.; Al-qaness, M.A.A.; Yousri, D.; Ghoniem, R.M.; Abd Elaziz, M. A Cox Proportional-Hazards Model Based on an Improved Aquila Optimizer with Whale Optimization Algorithm Operators. Mathematics 2022, 10, 1273. [Google Scholar] [CrossRef]
  14. Tizhoosh, H.R. Opposition-Based Learning: A New Scheme for Machine Intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC’06), Washington, DC, USA, 28–30 November 2005; pp. 695–701. [Google Scholar]
  15. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M.A. Opposition versus randomness in soft computing techniques. Appl. Soft Comput. 2008, 8, 906–918. [Google Scholar] [CrossRef]
  16. Zeng, C.; Qin, T.; Tan, W.; Lin, C.; Zhu, Z.; Yang, J.; Yuan, S. Coverage Optimization of Heterogeneous Wireless Sensor Network Based on Improved Wild Horse Optimizer. Biomimetics 2023, 8, 70. [Google Scholar] [CrossRef] [PubMed]
  17. Eirgash, M.A.; Toğan, V. A novel oppositional teaching learning strategy based on the golden ratio to solve the Time-Cost-Environmental impact Trade-off optimization problems. Expert Syst. Appl. 2023, 224, 119995. [Google Scholar] [CrossRef]
  18. Jia, H.; Lu, C.; Wu, D.; Wen, C.; Rao, H.; Abualigah, L. An Improved Reptile Search Algorithm with Ghost Opposition-based Learning for Global Optimization Problems. J. Comput. Des. Eng. 2023, 10, 1390–1422. [Google Scholar] [CrossRef]
  19. Mohapatra, S.; Mohapatra, P. Fast random opposition-based learning Golden Jackal Optimization algorithm. Knowl. Based Syst. 2023, 275, 110679. [Google Scholar] [CrossRef]
  20. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  21. Bhattacharyya, T.; Chatterjee, B.; Singh, P.K.; Yoon, J.H.; Geem, Z.W.; Sarkar, R. Mayfly in Harmony: A New Hybrid Meta-Heuristic Feature Selection Algorithm. IEEE Access 2020, 8, 195929–195945. [Google Scholar] [CrossRef]
  22. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  23. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  24. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  25. Feng, Y.; Deb, S.; Wang, G.-G.; Alavi, A.H. Monarch butterfly optimization: A comprehensive review. Expert Syst. Appl. 2021, 168, 114418. [Google Scholar] [CrossRef]
  26. Alsattar, H.A.; Zaidan, A.A.; Zaidan, B.B. Novel meta-heuristic bald eagle search optimisation algorithm. Artif. Intell. Rev. 2020, 53, 2237–2264. [Google Scholar] [CrossRef]
  27. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  28. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  29. Wang, G.G. Moth search algorithm: A bio-inspired metaheuristic algorithm for global optimization problems. Memetic Comput. 2018, 10, 151–164. [Google Scholar] [CrossRef]
  30. Holland, J.H. Adaptation in Natural and Artificial Systems; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1975. [Google Scholar]
  31. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  32. Simon, D. Biogeography-Based Optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef]
  33. Yang, X.-S.; Deb, S. Cuckoo Search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar] [CrossRef]
  34. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  35. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl. Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  36. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  37. Shareef, H.; Ibrahim, A.A.; Mutlag, A.H. Lightning search algorithm. Appl. Soft Comput. 2015, 36, 315–333. [Google Scholar] [CrossRef]
  38. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551. [Google Scholar] [CrossRef]
  39. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  40. Verij kazemi, M.; Veysari, E.F. A new optimization algorithm inspired by the quest for the evolution of human society: Human felicity algorithm. Expert Syst. Appl. 2022, 193, 116468. [Google Scholar] [CrossRef]
  41. Askari, Q.; Younas, I.; Saeed, M. Political Optimizer: A novel socio-inspired meta-heuristic for global optimization. Knowl. Based Syst. 2020, 195, 105709. [Google Scholar] [CrossRef]
  42. Nasir, A.N.K.; Razak, A.A.A. Opposition-based spiral dynamic algorithm with an application to optimize type-2 fuzzy control for an inverted pendulum system. Expert Syst. Appl. 2022, 195, 116661. [Google Scholar] [CrossRef]
  43. Khosravi, H.; Amiri, B.; Yazdanjue, N.; Babaiyan, V. An improved group teaching optimization algorithm based on local search and chaotic map for feature selection in high-dimensional data. Expert Syst. Appl. 2022, 204, 117493. [Google Scholar] [CrossRef]
  44. Zhang, Y. Backtracking search algorithm with specular reflection learning for global optimization. Knowl. Based Syst. 2021, 212, 106546. [Google Scholar] [CrossRef]
  45. Xu, Y.; Liu, H.; Xie, S.; Xi, L.; Lu, M. Competitive search algorithm: A new method for stochastic optimization. Appl. Intell. 2022, 52, 12131–12154. [Google Scholar] [CrossRef]
  46. Abualigah, L.; Elaziz, M.A.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 2022. [Google Scholar] [CrossRef]
  47. Yu, C.; Chen, M.; Cheng, K.; Zhao, X.; Ma, C.; Kuang, F.; Chen, H. SGOA: Annealing-behaved grasshopper optimizer for global tasks. Eng. Comput. 2022, 38, 3761–3788. [Google Scholar] [CrossRef]
  48. Zhang, H.; Cai, Z.; Ye, X.; Wang, M.; Kuang, F.; Chen, H.; Li, C.; Li, Y. A multi-strategy enhanced salp swarm algorithm for global optimization. Eng. Comput. 2022, 38, 1177–1203. [Google Scholar] [CrossRef]
  49. Xu, Y.; Chen, H.; Luo, J.; Zhang, Q.; Jiao, S.; Zhang, X. Enhanced Moth-flame optimizer with mutation strategy for global optimization. Inf. Sci. 2019, 492, 181–203. [Google Scholar] [CrossRef]
  50. Kumar, A.; Das, S.; Kong, L.; Snášel, V. Self-Adaptive Spherical Search With a Low-Precision Projection Matrix for Real-World Optimization. IEEE Trans. Cybern. 2023, 53, 4107–4121. [Google Scholar] [CrossRef] [PubMed]
  51. Kumar, A.; Das, S.; Zelinka, I. A modified covariance matrix adaptation evolution strategy for real-world constrained optimization problems. In Proceedings of the GECCO ‘20: 2020 Genetic and Evolutionary Computation Conference Companion, Cancún, Mexico, 8–12 July 2020. [Google Scholar] [CrossRef]
  52. Gurrola-Ramos, J.; Hernàndez-Aguirre, A.; Dalmau-Cedeño, O. COLSHADE for Real-World Single-Objective Constrained optimization Problems. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020. [Google Scholar] [CrossRef]
  53. Wilcoxon, F. Individual Comparisons by Ranking Methods. In Breakthroughs in Statistics; Kotz, S., Johnson, N.L., Eds.; Springer Series in Statistics; Springer: New York, NY, USA, 1992. [Google Scholar] [CrossRef]
  54. P-N-Suganthan. 2020-RW-Constrained-Optimisation GitHub. 2021. Available online: https://github.com/P-N-Suganthan/2020-RW-Constrained-Optimisation (accessed on 18 July 2022).
Figure 1. Benchmark functions.
Figure 2. Convergence comparison among algorithms.
Table 1. The metaheuristic algorithms mentioned in the Introduction.
Type | Algorithm | Ref. | Inspiration | Characteristic
SIA | PSO | [22] | Foraging behavior of birds | The bird swarm flies to the best birds and searches in the process
SIA | SSA | [23] | Navigation and foraging behavior of salps in the ocean | The lead salp searches first, and followers search after the leader
SIA | GWO | [24] | The leadership hierarchy and hunting process of gray wolves | The whole pack moves toward the three best wolves
SIA | MBO | [25] | Migration behavior of monarch butterflies | During population migration, old individuals are eliminated, and new individuals are created and adapt to the environment
SIA | BES | [26] | Bald eagles' attack on prey | Each individual can search three times through the select, search, and swoop methods and stays in the optimal position
SIA | MPA | [27] | The predation process of marine predators | MPA combines Brownian motion, Lévy flight, and other random generation strategies and uses different strategies in different stages
SIA | WOA | [28] | The behavior of humpback whales | Whale groups search the problem space by encircling prey and bubble-net attacking
SIA | MS | [29] | Navigation method of moths | Moths approach a flame by the spiral search method
EA | GA | [30] | Darwinian theory of evolution | Optimal solution candidates are filtered and continuously obtained through crossover, mutation, and selection operations
EA | DE | [31] | Evolutionary phenomena | On the basis of the GA, a difference operation is added to carry out the variation
EA | BBO | [32] | Biogeography associated with species migration | BBO incorporates the migration and mutation behavior of species
EA | CS | [33] | Evolution of the cuckoo | CS simulates the egg-laying behavior of cuckoos, combining Lévy flight and random selection methods
PhA | MVO | [34] | The concepts of black holes, white holes, and wormholes in the universe | Black holes and white holes are used for exploration and wormholes for exploitation
PhA | EO | [35] | Simple well-mixed dynamic mass balance on a control volume | Search agents randomly update their concentration with respect to some talented particles, called equilibrium candidates, to finally reach an equilibrium state as the optimal result
PhA | GSA | [36] | The law of gravity and mass interactions | The law of gravity and the law of motion are incorporated into the motion of particles
PhA | LSA | [37] | The natural phenomenon of lightning | Transition projectiles, space projectiles, and lead projectiles are used to optimize problems
PhA | AOA | [38] | The physical law of Archimedes' principle | Each individual has four attributes (position, density, volume, and acceleration) and adjusts its acceleration by changing density and volume; acceleration and current position determine the new position
HA | TLBO | [39] | The influence of a teacher on the output of learners | TLBO is divided into a "teacher phase" (learning from teachers) and a "learner phase" (learning through interaction between learners)
HA | HFA | [40] | The efforts of human society to attain felicity | The population is divided into elites, disciples, and ordinary people; people's minds then change in three ways: the influence of elites, personal experience, and drastic changes
HA | PO | [41] | The multi-phased process of politics | PO integrates party formation, constituency allocation, party elections, party switching, campaigning, and parliamentary affairs into the optimization process
Table 2. Sensitivity analysis to AOBLMOA.
Scenario | a1 | a2 | a3 | α | δ | Ave.
1 | 0.8 | 1.3 | 1.3 | 0.08 | 0.08 | 2994.424637
2 | 1 | 1.3 | 1.3 | 0.08 | 0.08 | 2994.424557
3 | 0.8 | 1.5 | 1.3 | 0.08 | 0.08 | 2994.424646
4 | 1 | 1.5 | 1.3 | 0.08 | 0.08 | 2994.42457
5 | 0.8 | 1.3 | 1.5 | 0.08 | 0.08 | 2994.424561
6 | 1 | 1.3 | 1.5 | 0.08 | 0.08 | 2994.42453
7 | 0.8 | 1.5 | 1.5 | 0.08 | 0.08 | 2994.424518
8 | 1 | 1.5 | 1.5 | 0.08 | 0.08 | 2994.424622
9 | 0.8 | 1.3 | 1.3 | 0.1 | 0.08 | 2994.42454
10 | 1 | 1.3 | 1.3 | 0.1 | 0.08 | 2994.424568
11 | 0.8 | 1.5 | 1.3 | 0.1 | 0.08 | 2994.424538
12 | 1 | 1.5 | 1.3 | 0.1 | 0.08 | 2994.424614
13 | 0.8 | 1.3 | 1.5 | 0.1 | 0.08 | 2994.424661
14 | 1 | 1.3 | 1.5 | 0.1 | 0.08 | 2994.424638
15 | 0.8 | 1.5 | 1.5 | 0.1 | 0.08 | 2994.424518
16 | 1 | 1.5 | 1.5 | 0.1 | 0.08 | 2994.424658
17 | 0.8 | 1.3 | 1.3 | 0.08 | 0.1 | 2994.424799
18 | 1 | 1.3 | 1.3 | 0.08 | 0.1 | 2994.424532
19 | 0.8 | 1.5 | 1.3 | 0.08 | 0.1 | 2994.424554
20 | 1 | 1.5 | 1.3 | 0.08 | 0.1 | 2994.424605
21 | 0.8 | 1.3 | 1.5 | 0.08 | 0.1 | 2994.424557
22 | 1 | 1.3 | 1.5 | 0.08 | 0.1 | 2994.42454
23 | 0.8 | 1.5 | 1.5 | 0.08 | 0.1 | 2994.424548
24 | 1 | 1.5 | 1.5 | 0.08 | 0.1 | 2994.424589
25 | 0.8 | 1.3 | 1.3 | 0.1 | 0.1 | 2994.42452
26 | 1 | 1.3 | 1.3 | 0.1 | 0.1 | 2994.424547
27 | 0.8 | 1.5 | 1.3 | 0.1 | 0.1 | 2994.42457
28 | 1 | 1.5 | 1.3 | 0.1 | 0.1 | 2994.424538
29 | 0.8 | 1.3 | 1.5 | 0.1 | 0.1 | 2994.424663
30 | 1 | 1.3 | 1.5 | 0.1 | 0.1 | 2994.424617
31 | 0.8 | 1.5 | 1.5 | 0.1 | 0.1 | 2994.424755
32 | 1 | 1.5 | 1.5 | 0.1 | 0.1 | 2994.42451
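The 32 scenarios in Table 2 are the full factorial combinations of two candidate values for each of the five parameters. As a minimal sketch (variable names are ours), such a grid can be enumerated as follows:

```python
from itertools import product

# Two candidate values per parameter, following Table 2 (a1, a2, a3, alpha, delta).
grid = {
    "a1": (0.8, 1.0),
    "a2": (1.3, 1.5),
    "a3": (1.3, 1.5),
    "alpha": (0.08, 0.1),
    "delta": (0.08, 0.1),
}

# Full factorial design: 2^5 = 32 parameter combinations.
scenarios = [dict(zip(grid, values)) for values in product(*grid.values())]
```

Each scenario dictionary can then be passed to one sensitivity run, producing one "Ave." entry of the table.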
Table 3. Benchmark functions.
Expression | Dimensions | Range | $f_{\min}$
$f_1(X)=\sum_{i=1}^{D} x_i^2$ | 10, 30, 50, 100 | [−100, 100] | 0
$f_2(X)=\sum_{i=1}^{D}|x_i|+\prod_{i=1}^{D}|x_i|$ | 10, 30, 50, 100 | [−10, 10] | 0
$f_3(X)=\sum_{i=1}^{D}\bigl(\sum_{j=1}^{i}x_j\bigr)^2$ | 10, 30, 50, 100 | [−100, 100] | 0
$f_4(X)=\max_i\{|x_i|,\,1\le i\le D\}$ | 10, 30, 50, 100 | [−100, 100] | 0
$f_5(X)=\sum_{i=1}^{D} i\,x_i^4+\mathrm{random}[0,1)$ | 10, 30, 50, 100 | [−128, 128] | 0
$f_6(X)=\sum_{i=1}^{D}[x_i^2-10\cos(2\pi x_i)+10]$ | 10, 30, 50, 100 | [−5.12, 5.12] | 0
$f_7(X)=-20\exp\bigl(-0.2\sqrt{\tfrac{1}{D}\sum_{i=1}^{D}x_i^2}\bigr)-\exp\bigl(\tfrac{1}{D}\sum_{i=1}^{D}\cos 2\pi x_i\bigr)+20+e$ | 10, 30, 50, 100 | [−32, 32] | 0
$f_8(X)=\tfrac{1}{4000}\sum_{i=1}^{D}x_i^2-\prod_{i=1}^{D}\cos\bigl(\tfrac{x_i}{\sqrt{i}}\bigr)+1$ | 10, 30, 50, 100 | [−600, 600] | 0
$f_9(X)=\tfrac{\pi}{n}\bigl\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2[1+10\sin^2(\pi y_{i+1})]+(y_n-1)^2\bigr\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, where $y_i=1+\tfrac{x_i+1}{4}$ and $u(x_i,a,K,m)=K(x_i-a)^m$ if $x_i>a$; $0$ if $-a\le x_i\le a$; $K(-x_i-a)^m$ if $x_i<-a$ | 10, 30, 50, 100 | [−50, 50] | 0
$f_{10}(X)=0.1\bigl\{\sin^2(3\pi x_1)+\sum_{i=1}^{D}(x_i-1)^2[1+\sin^2(3\pi x_{i+1})]+(x_D-1)^2[1+\sin^2(2\pi x_D)]\bigr\}+\sum_{i=1}^{D}u(x_i,5,100,4)$ | 10, 30, 50, 100 | [−50, 50] | 0
$f_{11}(X)=\sum_{i=1}^{11}\bigl[a_i-\tfrac{x_1(b_i^2+b_i x_2)}{b_i^2+b_i x_3+x_4}\bigr]^2$ | 4 | [−5, 5] | 0.00030
$f_{12}(X)=4x_1^2-2.1x_1^4+\tfrac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316
$f_{13}(X)=\bigl(x_2-\tfrac{5.1}{4\pi^2}x_1^2+\tfrac{5}{\pi}x_1-6\bigr)^2+10\bigl(1-\tfrac{1}{8\pi}\bigr)\cos x_1+10$ | 2 | [−5, 5] | 0.398
$f_{14}(X)=[1+(x_1+x_2+1)^2(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2)]\times[30+(2x_1-3x_2)^2(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2)]$ | 2 | [−2, 2] | 3
$f_{15}(X)=-\sum_{i=1}^{4}c_i\exp\bigl(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\bigr)$ | 3 | [0, 1] | −3.86
$f_{16}(X)=-\sum_{i=1}^{4}c_i\exp\bigl(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\bigr)$ | 6 | [0, 1] | −3.32
$f_{17}(X)=-\sum_{i=1}^{5}[(X-a_i)(X-a_i)^T+c_i]^{-1}$ | 4 | [0, 1] | −10.1532
$f_{18}(X)=-\sum_{i=1}^{7}[(X-a_i)(X-a_i)^T+c_i]^{-1}$ | 4 | [0, 1] | −10.4028
$f_{19}(X)=-\sum_{i=1}^{10}[(X-a_i)(X-a_i)^T+c_i]^{-1}$ | 4 | [0, 1] | −10.5363
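A few of the benchmark functions in Table 3 are simple enough to state directly in code. The sketch below is a Python illustration under the standard definitions of the sphere (f1), Rastrigin (f6), and Ackley (f7) functions, not the authors' test harness; it also confirms the listed global minimum of 0 at the origin:

```python
import math

def sphere(x):
    """f1 in Table 3: sum of squares, global minimum 0 at the origin."""
    return sum(xi ** 2 for xi in x)

def rastrigin(x):
    """f6 in Table 3: highly multimodal, global minimum 0 at the origin."""
    return sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def ackley(x):
    """f7 in Table 3: global minimum 0 at the origin."""
    d = len(x)
    term1 = -20 * math.exp(-0.2 * math.sqrt(sum(xi ** 2 for xi in x) / d))
    term2 = -math.exp(sum(math.cos(2 * math.pi * xi) for xi in x) / d)
    return term1 + term2 + 20 + math.e
```

Functions of this kind (unimodal f1–f5, multimodal f6–f10, fixed-dimension f11–f19) are the usual probes for exploitation and exploration ability, respectively.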
Table 4. Parameters of algorithms.
Parameters | MOA | AO | AMOA | OBLMOA | OBLAO | AOBLMOA
Population size | 30 | 30 | 30 | 30 | 30 | 30
a1, a2, a3 | 1.0, 1.5, 1.5 | - | 1.0, 1.5, 1.5 | 1.0, 1.5, 1.5 | - | 1.0, 1.5, 1.5
g | 0.9–0.4 | - | 0.9–0.4 | 0.9–0.4 | - | 0.9–0.4
α | - | 0.1 | 0.1 | - | 0.1 | 0.1
δ | - | 0.1 | 0.1 | - | 0.1 | 0.1
Table 5. Comparison between algorithms for f 1 f 10 ; dimension fixed to 10.
Function | Metric | AO | MOA | AMOA | OBLAO | OBLMOA | AOBLMOA
f1 | best | 8.04 × 10−303 | 7.15 × 10−171 | 0 | 0 | 0 | 0
f1 | median | 4.74 × 10−288 | 3.48 × 10−166 | 0 | 0 | 0 | 0
f1 | worst | 4.00 × 10−206 | 2.28 × 10−159 | 4.90 × 10−324 | 0 | 0 | 0
f1 | mean | 1.33 × 10−207 | 8.83 × 10−161 | 0 | 0 | 0 | 0
f1 | std | 0 | 0 | 0 | 0 | 0 | 0
f1 | time | 0.025520833 | 0.1942708 | 0.2098958 | 0.025521 | 0.173438 | 0.146875
f2 | best | 2.8314 × 10−153 | 4.334 × 10−93 | 1.92 × 10−195 | 0 | 0 | 0
f2 | median | 5.3136 × 10−145 | 2.295 × 10−88 | 3.42 × 10−183 | 0 | 0 | 0
f2 | worst | 9.4069 × 10−102 | 2.498 × 10−79 | 9.04 × 10−158 | 0 | 0 | 0
f2 | mean | 3.1364 × 10−103 | 1.156 × 10−80 | 3.01 × 10−159 | 0 | 0 | 0
f2 | std | 1.6886 × 10−102 | 4.731 × 10−80 | 0 | 0 | 0 | 0
f2 | time | 0.028125 | 0.2125 | 0.2171875 | 0.028646 | 0.186458 | 0.1989583
f3 | best | 2.4268 × 10−297 | 7.359 × 10−88 | 4.15 × 10−306 | 0 | 0 | 0
f3 | median | 4.2447 × 10−286 | 3.599 × 10−82 | 2.39 × 10−278 | 0 | 0 | 0
f3 | worst | 1.5943 × 10−198 | 4.746 × 10−73 | 1.66 × 10−248 | 0 | 0 | 0
f3 | mean | 9.7774 × 10−200 | 1.582 × 10−74 | 5.53 × 10−250 | 0 | 0 | 0
f3 | std | 0 | 8.52 × 10−74 | 0 | 0 | 0 | 0
f3 | time | 0.036458333 | 0.2098958 | 0.2328125 | 0.054688 | 0.210938 | 0.2255208
f4 | best | 2.5033 × 10−151 | 6.427 × 10−38 | 3.46 × 10−212 | 0 | 0 | 0
f4 | median | 1.6094 × 10−145 | 4.494 × 10−31 | 7.52 × 10−190 | 0 | 0 | 0
f4 | worst | 5.4551 × 10−99 | 2.419 × 10−26 | 9.09 × 10−164 | 0 | 0 | 0
f4 | mean | 2.2999 × 10−100 | 1.108 × 10−27 | 3.03 × 10−165 | 0 | 0 | 0
f4 | std | 9.9082 × 10−100 | 4.473 × 10−27 | 0 | 0 | 0 | 0
f4 | time | 0.022395833 | 0.2052083 | 0.2041667 | 0.025 | 0.180208 | 0.19375
f5 | best | 2.02018 × 10−6 | 0.0001994 | 4.293 × 10−6 | 1.7 × 10−6 | 1.18 × 10−6 | 5.63 × 10−7
f5 | median | 2.76003 × 10−5 | 0.000643 | 3.137 × 10−5 | 3.49 × 10−5 | 2.99 × 10−5 | 2.931 × 10−5
f5 | worst | 0.000249345 | 0.0020499 | 0.0002063 | 0.000197 | 0.000234 | 0.000106
f5 | mean | 5.98305 × 10−5 | 0.0008928 | 4.658 × 10−5 | 5.49 × 10−5 | 4.41 × 10−5 | 3.76 × 10−5
f5 | std | 6.27868 × 10−5 | 0.0005431 | 4.152 × 10−5 | 4.91 × 10−5 | 4.31 × 10−5 | 2.79 × 10−5
f5 | time | 0.031770833 | 0.2036458 | 0.215625 | 0.041146 | 0.196875 | 0.1895833
f6 | best | 0 | 0.9949591 | 0 | 0 | 0 | 0
f6 | median | 0 | 4.9747953 | 0 | 0 | 0 | 0
f6 | worst | 0.000745709 | 8.9546265 | 0 | 0 | 0 | 0
f6 | mean | 4.60805 × 10−5 | 4.8917741 | 0 | 0 | 0 | 0
f6 | std | 0.000172991 | 2.0874954 | 0 | 0 | 0 | 0
f6 | time | 0.0234375 | 0.2151042 | 0.2328125 | 0.029688 | 0.176563 | 0.1953125
f7 | best | 8.88178 × 10−16 | 4.441 × 10−15 | 8.882 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16
f7 | median | 8.88178 × 10−16 | 1.1551485 | 8.882 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16
f7 | worst | 8.88178 × 10−16 | 3.4041583 | 8.882 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16
f7 | mean | 8.88178 × 10−16 | 1.4393757 | 8.882 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16
f7 | std | 9.86076 × 10−32 | 1.0978283 | 9.861 × 10−32 | 9.86 × 10−32 | 9.86 × 10−32 | 9.86 × 10−32
f7 | time | 0.026041667 | 0.2114583 | 0.2223958 | 0.026042 | 0.182292 | 0.1958333
f8 | best | 0 | 0.0615239 | 0 | 0 | 0 | 0
f8 | median | 0 | 0.5017876 | 0 | 0 | 0 | 0
f8 | worst | 0 | 1.7593448 | 0 | 0 | 0 | 0
f8 | mean | 0 | 0.562363 | 0 | 0 | 0 | 0
f8 | std | 0 | 0.3632474 | 0 | 0 | 0 | 0
f8 | time | 0.028125 | 0.2192708 | 0.2213542 | 0.035417 | 0.198438 | 0.2182292
f9 | best | 3.80995 × 10−10 | 4.712 × 10−32 | 4.712 × 10−32 | 5.18 × 10−9 | 3.32 × 10−24 | 4.71 × 10−32
f9 | median | 1.75313 × 10−7 | 4.712 × 10−32 | 4.712 × 10−32 | 2.1 × 10−7 | 2.24 × 10−21 | 4.71 × 10−32
f9 | worst | 6.7531 × 10−6 | 0.9328919 | 4.695 × 10−30 | 1.46 × 10−5 | 4.1 × 10−19 | 4.71 × 10−32
f9 | mean | 1.27274 × 10−6 | 0.0621971 | 2.035 × 10−31 | 1.87 × 10−6 | 2.98 × 10−20 | 4.71 × 10−32
f9 | std | 1.86588 × 10−6 | 0.1865841 | 8.34 × 10−31 | 3.21 × 10−6 | 7.8 × 10−20 | 1.64 × 10−47
f9 | time | 0.057291667 | 0.2390625 | 0.2578125 | 0.069792 | 0.233854 | 0.2317708
f10 | best | 1.42853 × 10−8 | 1.35 × 10−32 | 1.35 × 10−32 | 3.3 × 10−10 | 7.06 × 10−23 | 1.35 × 10−32
f10 | median | 9.68122 × 10−7 | 1.35 × 10−32 | 1.35 × 10−32 | 1.68 × 10−6 | 4.71 × 10−20 | 1.35 × 10−32
f10 | worst | 1.27366 × 10−5 | 0.0109874 | 1.35 × 10−32 | 8.48 × 10−5 | 0.097371 | 1.35 × 10−32
f10 | mean | 2.47232 × 10−6 | 0.0032962 | 1.35 × 10−32 | 9.48 × 10−6 | 0.01083 | 1.35 × 10−32
f10 | std | 3.26171 × 10−6 | 0.005035 | 5.474 × 10−48 | 1.73 × 10−5 | 0.022187 | 5.47 × 10−48
f10 | time | 0.05625 | 0.2432292 | 0.2703125 | 0.076563 | 0.234375 | 0.2348958
Table 6. Comparison between algorithms for f 11 f 19 .
Function | Metric | AO | MOA | AMOA | OBLAO | OBLMOA | AOBLMOA
f11 | best | 0.0003132 | 0.0003075 | 0.0003075 | 0.000317 | 0.000307 | 0.000307
f11 | median | 0.0004223 | 0.0003075 | 0.0003075 | 0.000398 | 0.000307 | 0.000307
f11 | worst | 0.0006218 | 0.0003075 | 0.0012232 | 0.000712 | 0.000424 | 0.000307
f11 | mean | 0.0004356 | 0.0003075 | 0.000338 | 0.000438 | 0.000311 | 0.000307
f11 | std | 7.281 × 10−5 | 0 | 0.0001644 | 9.7 × 10−5 | 2.1 × 10−5 | 0
f11 | time | 0.0223958 | 0.1989583 | 0.2140625 | 0.025521 | 0.174479 | 0.188021
f12 | best | −1.031628 | −1.031628 | −1.031628 | −1.03163 | −1.03163 | −1.03163
f12 | median | −1.031486 | −1.031628 | −1.031628 | −1.0316 | −1.03163 | −1.03163
f12 | worst | −1.030616 | −1.031628 | −1.031628 | −1.03106 | −1.03163 | −1.03163
f12 | mean | −1.031424 | −1.031628 | −1.031628 | −1.03156 | −1.03163 | −1.03163
f12 | std | 0.0002147 | 0 | 0 | 0.000106 | 0 | 0
f12 | time | 0.01875 | 0.1932292 | 0.2088542 | 0.021354 | 0.171354 | 0.190104
f13 | best | 0.3978875 | 0.3978874 | 0.3978874 | 0.397887 | 0.397887 | 0.397887
f13 | median | 0.3979278 | 0.3978874 | 0.3978874 | 0.397921 | 0.397887 | 0.397887
f13 | worst | 0.3985984 | 0.3978874 | 0.3978874 | 0.398176 | 0.397887 | 0.397887
f13 | mean | 0.397974 | 0.3978874 | 0.3978874 | 0.397954 | 0.397887 | 0.397887
f13 | std | 0.0001446 | 1.11 × 10−16 | 1.11 × 10−16 | 7.31 × 10−5 | 1.11 × 10−16 | 1.11 × 10−16
f13 | time | 0.0239583 | 0.1916667 | 0.215625 | 0.025521 | 0.178125 | 0.183854
f14 | best | 3.0000982 | 3 | 3 | 3.0000823 | 3 | 3
f14 | median | 3.0085938 | 3 | 3 | 3.003795 | 3 | 3
f14 | worst | 3.0327635 | 3 | 3 | 3.0666983 | 3 | 3
f14 | mean | 3.0117942 | 3 | 3 | 3.01013 | 3 | 3
f14 | std | 0.0090679 | 2.979 × 10−15 | 2.446 × 10−15 | 0.013312 | 4.53 × 10−15 | 1.92 × 10−15
f14 | time | 0.0171875 | 0.1942708 | 0.2052083 | 0.01875 | 0.166146 | 0.174479
f15 | best | −3.862622 | −3.862782 | −3.862782 | −3.86252 | −3.86278 | −3.86278
f15 | median | −3.85822 | −3.862782 | −3.862782 | −3.86007 | −3.86278 | −3.86278
f15 | worst | −3.852219 | −3.862782 | −3.862782 | −3.85032 | −3.08976 | −3.86278
f15 | mean | −3.857847 | −3.862782 | −3.862782 | −3.85924 | −3.83701 | −3.86278
f15 | std | 0.0031974 | 2.665 × 10−15 | 2.665 × 10−15 | 0.002795 | 0.138761 | 2.66 × 10−15
f15 | time | 0.0244792 | 0.2072917 | 0.2140625 | 0.023438 | 0.186458 | 0.205208
f16 | best | −3.317657 | −3.321995 | −3.321995 | −3.31361 | −3.322 | −3.322
f16 | median | −3.187066 | −3.321995 | −3.321995 | −3.27522 | −3.322 | −3.322
f16 | worst | −2.983285 | −3.203102 | −3.203102 | −3.11183 | −3.2031 | −3.322
f16 | mean | −3.192639 | −3.270475 | −3.29029 | −3.24649 | −3.29425 | −3.322
f16 | std | 0.0862405 | 0.0589158 | 0.0525765 | 0.057575 | 0.050286 | 1.33 × 10−15
f16 | time | 0.0260417 | 0.2098958 | 0.2130208 | 0.025521 | 0.177604 | 0.197917
f17 | best | −10.15318 | −10.1532 | −10.1532 | −10.1532 | −10.1532 | −10.1532
f17 | median | −10.15134 | −10.1532 | −6.589102 | −10.1532 | −10.1532 | −10.1532
f17 | worst | −10.13038 | −2.630472 | −5.18484 | −10.1531 | −10.1532 | −10.1532
f17 | mean | −10.14889 | −6.74062 | −6.957913 | −10.1532 | −10.1532 | −10.1532
f17 | std | 0.0056397 | 3.6654023 | 1.6095003 | 3.47 × 10−5 | 1.78 × 10−15 | 1.78 × 10−15
f17 | time | 0.0260417 | 0.1984375 | 0.2125 | 0.028125 | 0.18125 | 0.200521
f18 | best | −10.4029 | −10.40294 | −10.40294 | −10.4029 | −10.4029 | −10.4029
f18 | median | −10.40193 | −10.40294 | −8.049912 | −10.4028 | −10.4029 | −10.4029
f18 | worst | −10.35991 | −2.751934 | −5.198423 | −10.4026 | −10.4029 | −10.4029
f18 | mean | −10.39792 | −8.938495 | −7.787828 | −10.4028 | −10.4029 | −10.4029
f18 | std | 0.0088619 | 2.9359557 | 1.8213969 | 8.36 × 10−5 | 0 | 0
f18 | time | 0.0302083 | 0.1994792 | 0.2197917 | 0.03125 | 0.185417 | 0.217708
f19 | best | −10.53628 | −10.53641 | −10.53641 | −10.5364 | −10.5364 | −10.5364
f19 | median | −10.53504 | −10.53641 | −8.006939 | −10.5363 | −10.5364 | −10.5364
f19 | worst | −10.50195 | −2.421734 | −5.206245 | −10.5356 | −10.0647 | −10.5364
f19 | mean | −10.53173 | −8.358936 | −7.621057 | −10.5362 | −10.5212 | −10.5364
f19 | std | 0.0074754 | 3.4193071 | 1.8240779 | 0.000144 | 0.083339 | 2.57 × 10−14
f19 | time | 0.040625 | 0.2125 | 0.21875 | 0.042708 | 0.198438 | 0.202604
Table 7. Comparison between algorithms for f 1 f 10 ; dimension fixed to 30.
Function | Metric | AO | MOA | AMOA | OBLAO | OBLMOA | AOBLMOA
f1 | best | 8.68 × 10−305 | 2.98 × 10−31 | 1.96 × 10−295 | 0 | 0 | 0
f1 | median | 1.80 × 10−289 | 1.49 × 10−27 | 4.26 × 10−253 | 0 | 0 | 0
f1 | worst | 2.56 × 10−200 | 9.59 × 10−24 | 2.23 × 10−218 | 0 | 0 | 0
f1 | mean | 8.55 × 10−202 | 7.99 × 10−25 | 7.45 × 10−220 | 0 | 0 | 0
f1 | std | 0 | 2.404 × 10−24 | 0 | 0 | 0 | 0
f2 | best | 1.62 × 10−149 | 1.031 × 10−16 | 1.95 × 10−132 | 0 | 0 | 0
f2 | median | 4.75 × 10−145 | 3.868 × 10−15 | 6.69 × 10−117 | 0 | 0 | 0
f2 | worst | 1.52 × 10−105 | 3.198 × 10−11 | 4.38 × 10−102 | 0 | 0 | 0
f2 | mean | 5.94 × 10−107 | 1.268 × 10−12 | 1.46 × 10−103 | 0 | 0 | 0
f2 | std | 2.76 × 10−106 | 5.725 × 10−12 | 7.87 × 10−103 | 0 | 0 | 0
f3 | best | 1.92 × 10−300 | 4.573 × 10−8 | 5.38 × 10−238 | 0 | 0 | 0
f3 | median | 5.85 × 10−287 | 2.922 × 10−7 | 2.01 × 10−198 | 0 | 0 | 0
f3 | worst | 4.65 × 10−198 | 9.657 × 10−7 | 4.14 × 10−182 | 0 | 0 | 0
f3 | mean | 1.55 × 10−199 | 3.267 × 10−7 | 1.38 × 10−183 | 0 | 0 | 0
f3 | std | 0 | 2.043 × 10−7 | 0 | 0 | 0 | 0
f4 | best | 1.93 × 10−150 | 0.0427667 | 2.23 × 10−234 | 0 | 0 | 0
f4 | median | 2.33 × 10−146 | 0.1220853 | 1.09 × 10−194 | 0 | 0 | 0
f4 | worst | 3.04 × 10−107 | 0.5067965 | 7.07 × 10−159 | 0 | 0 | 0
f4 | mean | 1.01 × 10−108 | 0.1604663 | 2.38 × 10−160 | 0 | 0 | 0
f4 | std | 5.46 × 10−108 | 0.1149414 | 0 | 0 | 0 | 0
f5 | best | 4.74 × 10−7 | 0.0035187 | 3.094 × 10−6 | 1.27 × 10−6 | 1.29 × 10−6 | 2.59 × 10−6
f5 | median | 4.165 × 10−5 | 0.0087836 | 3.066 × 10−5 | 1.5 × 10−5 | 2.41 × 10−5 | 1.68 × 10−5
f5 | worst | 0.0001667 | 0.0158472 | 0.0002258 | 0.000146 | 0.000153 | 0.000145
f5 | mean | 5.532 × 10−5 | 0.0092361 | 4.892 × 10−5 | 3.3 × 10−5 | 4 × 10−5 | 3.24 × 10−5
f5 | std | 4.37 × 10−5 | 0.0037155 | 4.926 × 10−5 | 3.39 × 10−5 | 3.62 × 10−5 | 3.1 × 10−5
f6 | best | 0 | 10.94455 | 0 | 0 | 0 | 0
f6 | median | 0 | 15.919345 | 0 | 0 | 0 | 0
f6 | worst | 0 | 25.868925 | 1.442 × 10−8 | 0 | 0 | 0
f6 | mean | 0 | 16.28416 | 4.929 × 10−10 | 0 | 0 | 0
f6 | std | 0 | 3.4416865 | 2.588 × 10−9 | 0 | 0 | 0
f7 | best | 8.882 × 10−16 | 1.5017466 | 8.882 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16
f7 | median | 8.882 × 10−16 | 4.424305 | 8.882 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16
f7 | worst | 8.882 × 10−16 | 6.6919503 | 8.882 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16
f7 | mean | 8.882 × 10−16 | 4.7257154 | 8.882 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16
f7 | std | 9.861 × 10−32 | 1.3101158 | 9.861 × 10−32 | 9.86 × 10−32 | 9.86 × 10−32 | 9.86 × 10−32
f8 | best | 0 | 0 | 0 | 0 | 0 | 0
f8 | median | 0 | 0.0098573 | 0 | 0 | 0 | 0
f8 | worst | 0 | 0.0442976 | 0 | 0 | 0 | 0
f8 | mean | 0 | 0.013043 | 0 | 0 | 0 | 0
f8 | std | 0 | 0.0125127 | 0 | 0 | 0 | 0
f9 | best | 6.255 × 10−10 | 4.122 × 10−28 | 7.453 × 10−11 | 1.1 × 10−9 | 9.93 × 10−7 | 1.57 × 10−32
f9 | median | 7.472 × 10−8 | 1.97 × 10−24 | 2.31 × 10−9 | 2.62 × 10−7 | 4.25 × 10−6 | 1.73 × 10−32
f9 | worst | 2.872 × 10−6 | 1.7645012 | 6.044 × 10−8 | 1.2 × 10−5 | 3.11 × 10−5 | 1.56 × 10−30
f9 | mean | 5.104 × 10−7 | 0.2455107 | 6.531 × 10−9 | 9.54 × 10−7 | 6.99 × 10−6 | 2.09 × 10−31
f9 | std | 7.581 × 10−7 | 0.3905505 | 1.339 × 10−8 | 2.18 × 10−6 | 6.67 × 10−6 | 4.54 × 10−31
f10 | best | 1.551 × 10−9 | 5.583 × 10−31 | 3.387 × 10−10 | 7.66 × 10−8 | 0.002584 | 1.35 × 10−32
f10 | median | 1.296 × 10−6 | 0.0988826 | 1.925 × 10−8 | 2.37 × 10−6 | 2.966079 | 1.84 × 10−32
f10 | worst | 8.757 × 10−5 | 3.866934 | 0.0210238 | 5.36 × 10−5 | 2.966171 | 1.09 × 10−29
f10 | mean | 1.378 × 10−5 | 0.8587114 | 0.0017997 | 1.02 × 10−5 | 2.801101 | 4.02 × 10−31
f10 | std | 2.075 × 10−5 | 1.2100084 | 0.0048546 | 1.44 × 10−5 | 0.584938 | 1.95 × 10−30
Table 8. Comparison between algorithms for f 1 f 10 ; dimension fixed to 50.
Function | Metric | AO | MOA | AMOA | OBLAO | OBLMOA | AOBLMOA
f1 | best | 4.68 × 10−303 | 4.80 × 10−13 | 2.01 × 10−241 | 0 | 0 | 0
f1 | median | 1.24 × 10−289 | 3.65 × 10−11 | 5.11 × 10−210 | 0 | 0 | 0
f1 | worst | 2.89 × 10−198 | 1.18 × 10−7 | 2.07 × 10−179 | 0 | 0 | 0
f1 | mean | 9.68 × 10−200 | 4.13 × 10−9 | 6.91 × 10−181 | 0 | 0 | 0
f1 | std | 0 | 2.107 × 10−8 | 0 | 0 | 0 | 0
f2 | best | 1.78 × 10−149 | 9.361 × 10−8 | 3 × 10−118 | 0 | 0 | 0
f2 | median | 1.77 × 10−142 | 5.794 × 10−6 | 4.83 × 10−105 | 0 | 0 | 0
f2 | worst | 2.07 × 10−98 | 0.0842874 | 1.581 × 10−92 | 0 | 0 | 0
f2 | mean | 7.71 × 10−100 | 0.0028957 | 5.567 × 10−94 | 0 | 0 | 0
f2 | std | 3.72 × 10−99 | 0.0151172 | 2.835 × 10−93 | 0 | 0 | 0
f3 | best | 5.69 × 10−296 | 0.0386125 | 3.44 × 10−222 | 0 | 0 | 0
f3 | median | 8.65 × 10−285 | 0.1829172 | 1.52 × 10−183 | 0 | 0 | 0
f3 | worst | 1.36 × 10−199 | 0.7213506 | 1.72 × 10−149 | 0 | 0 | 0
f3 | mean | 4.53 × 10−201 | 0.2411581 | 5.72 × 10−151 | 0 | 0 | 0
f3 | std | 0 | 0.1736175 | 3.08 × 10−150 | 0 | 0 | 0
f4 | best | 7.51 × 10−157 | 1.0774898 | 3.35 × 10−259 | 0 | 0 | 0
f4 | median | 8.13 × 10−147 | 3.8435003 | 4.38 × 10−199 | 0 | 0 | 0
f4 | worst | 1.98 × 10−99 | 6.5126095 | 1.32 × 10−147 | 0 | 0 | 0
f4 | mean | 6.64 × 10−101 | 3.8392257 | 4.39 × 10−149 | 0 | 0 | 0
f4 | std | 3.55 × 10−100 | 1.3039503 | 2.37 × 10−148 | 0 | 0 | 0
f5 | best | 1.432 × 10−6 | 0.0172656 | 5.276 × 10−6 | 1.09 × 10−6 | 1.3 × 10−6 | 3.63 × 10−7
f5 | median | 3.38 × 10−5 | 0.0311983 | 4.656 × 10−5 | 2.04 × 10−5 | 2.32 × 10−5 | 1.82 × 10−5
f5 | worst | 0.0001686 | 0.0554084 | 0.0001994 | 0.000189 | 0.000193 | 9.02 × 10−5
f5 | mean | 5.087 × 10−5 | 0.0356282 | 6.119 × 10−5 | 4.01 × 10−5 | 4.14 × 10−5 | 2.78 × 10−5
f5 | std | 4.552 × 10−5 | 0.0104557 | 4.796 × 10−5 | 4.26 × 10−5 | 4.41 × 10−5 | 2.45 × 10−5
f6 | best | 0 | 18.904222 | 0 | 0 | 0 | 0
f6 | median | 0 | 24.873976 | 0 | 0 | 0 | 0
f6 | worst | 0 | 41.788265 | 0 | 0 | 0 | 0
f6 | mean | 0 | 28.356325 | 0 | 0 | 0 | 0
f6 | std | 0 | 6.2280992 | 0 | 0 | 0 | 0
f7 | best | 8.882 × 10−16 | 3.4734862 | 8.882 × 10−16 | 8.882 × 10−16 | 8.882 × 10−16 | 8.882 × 10−16
f7 | median | 8.882 × 10−16 | 6.9923592 | 8.882 × 10−16 | 8.882 × 10−16 | 8.882 × 10−16 | 8.882 × 10−16
f7 | worst | 8.882 × 10−16 | 9.2227609 | 8.882 × 10−16 | 8.882 × 10−16 | 8.882 × 10−16 | 8.882 × 10−16
f7 | mean | 8.882 × 10−16 | 6.9560715 | 8.882 × 10−16 | 8.882 × 10−16 | 8.882 × 10−16 | 8.882 × 10−16
f7 | std | 9.861 × 10−32 | 1.305546 | 9.861 × 10−32 | 9.86 × 10−32 | 9.86 × 10−32 | 9.86 × 10−32
f8 | best | 0 | 2.946 × 10−11 | 0 | 0 | 0 | 0
f8 | median | 0 | 6.311 × 10−10 | 0 | 0 | 0 | 0
f8 | worst | 0 | 0.0270517 | 0 | 0 | 0 | 0
f8 | mean | 0 | 0.0052508 | 0 | 0 | 0 | 0
f8 | std | 0 | 0.0079486 | 0 | 0 | 0 | 0
f9 | best | 5.258 × 10−9 | 3.272 × 10−12 | 1.196 × 10−7 | 3.25 × 10−9 | 0.000216 | 3.9 × 10−23
f9 | median | 1.828 × 10−7 | 0.0622014 | 5.664 × 10−7 | 1.58 × 10−7 | 0.000472 | 6.89 × 10−19
f9 | worst | 1.855 × 10−6 | 1.372991 | 9.859 × 10−6 | 6.87 × 10−6 | 0.0018 | 2.08 × 10−16
f9 | mean | 3.418 × 10−7 | 0.2886864 | 1.568 × 10−6 | 1.11 × 10−6 | 0.000619 | 2.23 × 10−17
f9 | std | 4.312 × 10−7 | 0.3724752 | 2.328 × 10−6 | 1.75 × 10−6 | 0.000385 | 5.15 × 10−17
f10 | best | 4.392 × 10−8 | 0.0112602 | 1.616 × 10−6 | 1.58 × 10−8 | 4.943478 | 1.97 × 10−19
f10 | median | 1.771 × 10−6 | 3.0772429 | 1.662 × 10−5 | 2.4 × 10−6 | 4.944099 | 1.98 × 10−17
f10 | worst | 3.427 × 10−5 | 37.545497 | 0.0212338 | 7.5 × 10−5 | 4.946246 | 1.39 × 10−15
f10 | mean | 4.587 × 10−6 | 8.6367625 | 0.0028991 | 1.06 × 10−5 | 4.944325 | 1.92 × 10−16
f10 | std | 0.0001376 | 259.10287 | 0.086973 | 0.000319 | 148.3298 | 5.76 × 10−15
Table 9. Comparison between algorithms for f 1 f 10 ; dimension fixed to 100.
Function | Metric | AO | MOA | AMOA | OBLAO | OBLMOA | AOBLMOA
f1 | best | 1.60 × 10−306 | 1.56 × 10−7 | 7.15 × 10−246 | 0 | 0 | 0
f1 | median | 1.12 × 10−291 | 2.28 × 10−7 | 2.1 × 10−196 | 0 | 0 | 0
f1 | worst | 2.85 × 10−191 | 4.03 × 10−7 | 2.99 × 10−150 | 0 | 0 | 0
f1 | mean | 9.51 × 10−193 | 2.46 × 10−7 | 9.98 × 10−152 | 0 | 0 | 0
f1 | std | 0 | 6.112 × 10−8 | 5.37 × 10−151 | 0 | 0 | 0
f2 | best | 3.71 × 10−149 | 0.00655 | 4.42 × 10−114 | 0 | 0 | 0
f2 | median | 7.33 × 10−144 | 0.0232734 | 4.278 × 10−86 | 0 | 0 | 0
f2 | worst | 4.1 × 10−101 | 0.7581851 | 4.327 × 10−73 | 0 | 0 | 0
f2 | mean | 1.64 × 10−102 | 0.0683458 | 1.47 × 10−74 | 0 | 0 | 0
f2 | std | 7.38 × 10−102 | 0.1355721 | 7.763 × 10−74 | 0 | 0 | 0
f3 | best | 2.39 × 10−298 | 104.39766 | 1.28 × 10−190 | 0 | 0 | 0
f3 | median | 2.78 × 10−282 | 154.47363 | 2.64 × 10−167 | 0 | 0 | 0
f3 | worst | 8.55 × 10−197 | 486.07907 | 1.09 × 10−117 | 0 | 0 | 0
f3 | mean | 2.85 × 10−198 | 180.48085 | 3.63 × 10−119 | 0 | 0 | 0
f3 | std | 0 | 76.947176 | 1.96 × 10−118 | 0 | 0 | 0
f4 | best | 1.68 × 10−153 | 8.3007886 | 7.35 × 10−267 | 0 | 0 | 0
f4 | median | 3.91 × 10−146 | 11.790905 | 5.49 × 10−220 | 0 | 0 | 0
f4 | worst | 1.332 × 10−98 | 15.014142 | 8.14 × 10−146 | 0 | 0 | 0
f4 | mean | 6.4 × 10−100 | 11.827796 | 2.71 × 10−147 | 0 | 0 | 0
f4 | std | 2.58 × 10−99 | 1.5147406 | 1.46 × 10−146 | 0 | 0 | 0
f5 | best | 4.286 × 10−7 | 0.1149912 | 1.235 × 10−6 | 2.3 × 10−6 | 5.28 × 10−6 | 2.20 × 10−7
f5 | median | 2.655 × 10−5 | 0.1595987 | 3.08 × 10−5 | 1.53 × 10−5 | 4.28 × 10−5 | 2.18 × 10−5
f5 | worst | 0.0001672 | 0.2613939 | 0.0001912 | 0.00015 | 0.000225 | 0.000102
f5 | mean | 4.542 × 10−5 | 0.168212 | 4.66 × 10−5 | 3.33 × 10−5 | 6.83 × 10−5 | 3.04 × 10−5
f5 | std | 4.359 × 10−5 | 0.0317316 | 4.736 × 10−5 | 3.05 × 10−5 | 5.19 × 10−5 | 2.73 × 10−5
f6 | best | 0 | 39.798413 | 0 | 0 | 0 | 0
f6 | median | 0 | 57.707615 | 0 | 0 | 0 | 0
f6 | worst | 0 | 77.606781 | 0 | 0 | 0 | 0
f6 | mean | 0 | 58.702617 | 0 | 0 | 0 | 0
f6 | std | 0 | 8.4190155 | 0 | 0 | 0 | 0
f7 | best | 8.882 × 10−16 | 5.6149166 | 8.882 × 10−16 | 8.882 × 10−16 | 8.882 × 10−16 | 8.882 × 10−16
f7 | median | 8.882 × 10−16 | 8.3966259 | 8.882 × 10−16 | 8.882 × 10−16 | 8.882 × 10−16 | 8.882 × 10−16
f7 | worst | 8.882 × 10−16 | 10.024012 | 8.882 × 10−16 | 8.882 × 10−16 | 8.882 × 10−16 | 8.882 × 10−16
f7 | mean | 8.882 × 10−16 | 8.3048826 | 8.882 × 10−16 | 8.882 × 10−16 | 8.882 × 10−16 | 8.882 × 10−16
f7 | std | 9.861 × 10−32 | 1.0122703 | 9.861 × 10−32 | 9.861 × 10−32 | 9.861 × 10−32 | 9.861 × 10−32
f8 | best | 0 | 0.0011253 | 0 | 0 | 0 | 0
f8 | median | 0 | 0.010309 | 0 | 0 | 0 | 0
f8 | worst | 0 | 0.042127 | 0 | 0 | 0 | 0
f8 | mean | 0 | 0.0151088 | 0 | 0 | 0 | 0
f8 | std | 0 | 0.0104558 | 0 | 0 | 0 | 0
f9 | best | 3.442 × 10−10 | 0.4899941 | 8.979 × 10−6 | 1.38 × 10−9 | 0.006845 | 3.28 × 10−13
f9 | median | 8.43 × 10−8 | 1.4336532 | 6.114 × 10−5 | 2.17 × 10−7 | 0.011347 | 4.08 × 10−11
f9 | worst | 5.784 × 10−6 | 4.9763026 | 0.0013151 | 3.84 × 10−6 | 0.032085 | 1.56 × 10−9
f9 | mean | 5.383 × 10−7 | 1.8083967 | 0.0002638 | 6.05 × 10−7 | 0.013713 | 2.94 × 10−10
f9 | std | 1.153 × 10−6 | 0.9399669 | 0.0003789 | 8.51 × 10−7 | 0.005306 | 3.97 × 10−10
f10 | best | 2.305 × 10−8 | 66.782646 | 0.0001677 | 1.98 × 10−7 | 9.895722 | 7.46 × 10−13
f10 | median | 6.757 × 10−6 | 95.956055 | 0.003001 | 5.48 × 10−6 | 9.900694 | 1.53 × 10−9
f10 | worst | 0.0002711 | 149.9168 | 0.1092565 | 0.000202 | 9.912946 | 1.58 × 10−7
f10 | mean | 2.877 × 10−5 | 99.021078 | 0.0116608 | 2.27 × 10−5 | 9.90212 | 1.96 × 10−8
f10 | std | 5.867 × 10−5 | 20.666644 | 0.0206942 | 4.24 × 10−5 | 0.003752 | 3.75 × 10−8
Table 10. Wilcoxon rank sum test under benchmark functions (p-values, AOBLMOA vs. each algorithm).
AOBLMOA vs. | Dim | AO | MOA | AMOA | OBLAO | OBLMOA
f1 | 10 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1 | 1 | 1
f1 | 30 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1 | 1
f1 | 50 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1 | 1
f1 | 100 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1 | 1
f2 | 10 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1 | 1
f2 | 30 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1 | 1
f2 | 50 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1 | 1
f2 | 100 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1 | 1
f3 | 10 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1 | 1
f3 | 30 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1 | 1
f3 | 50 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1 | 1
f3 | 100 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1 | 1
f4 | 10 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1 | 1
f4 | 30 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1 | 1
f4 | 50 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1 | 1
f4 | 100 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1 | 1
f5 | 10 | 0.280214 | 1.73 × 10^−6 | 0.308615 | 0.22888 | 0.765519
f5 | 30 | 0.033269 | 1.73 × 10^−6 | 0.009842 | 0.349333 | 0.271155
f5 | 50 | 0.033269 | 1.73 × 10^−6 | 0.009842 | 0.349333 | 0.271155
f5 | 100 | 0.42843 | 1.73 × 10^−6 | 0.338843 | 0.557743 | 0.002585
f6 | 10 | 0.5 | 1.73 × 10^−6 | 1 | 1 | 1
f6 | 30 | 1 | 1.73 × 10^−6 | 1 | 1 | 1
f6 | 50 | 1 | 1.73 × 10^−6 | 1 | 1 | 1
f6 | 100 | 1 | 1.73 × 10^−6 | 1 | 1 | 1
f7 | 10 | 1 | 1.73 × 10^−6 | 1 | 1 | 1
f7 | 30 | 1 | 1.73 × 10^−6 | 1 | 1 | 1
f7 | 50 | 1 | 1.73 × 10^−6 | 1 | 1 | 1
f7 | 100 | 1 | 1.73 × 10^−6 | 1 | 1 | 1
f8 | 10 | 1 | 1.73 × 10^−6 | 1 | 1 | 1
f8 | 30 | 1 | 1.73 × 10^−6 | 1 | 1 | 1
f8 | 50 | 1 | 1.73 × 10^−6 | 1 | 1 | 1
f8 | 100 | 1 | 1.73 × 10^−6 | 1 | 1 | 1
f9 | 10 | 1.73 × 10^−6 | 0.0625 | 0.000122 | 1.73 × 10^−6 | 1.73 × 10^−6
f9 | 30 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
f9 | 50 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
f9 | 100 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
f10 | 10 | 1.73 × 10^−6 | 0.003906 | 1 | 1.73 × 10^−6 | 1.73 × 10^−6
f10 | 30 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
f10 | 50 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
f10 | 100 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
f11 | 4 | 1.73 × 10^−6 | 1 | 1 | 1.73 × 10^−6 | 1
f12 | 2 | 1.73 × 10^−6 | 1 | 1 | 1.73 × 10^−6 | 1
f13 | 2 | 0.000453 | 0.25 | 0.25 | 0.000241 | 0.25
f14 | 2 | 1.73 × 10^−6 | 1 | 1 | 1.73 × 10^−6 | 1
f15 | 3 | 1.73 × 10^−6 | 1 | 1 | 1.73 × 10^−6 | 1
f16 | 6 | 1.73 × 10^−6 | 0.000244 | 0.007813 | 1.73 × 10^−6 | 0.015625
f17 | 4 | 1.73 × 10^−6 | 0.000122 | 3.79 × 10^−6 | 1.73 × 10^−6 | 1
f18 | 4 | 1.73 × 10^−6 | 0.03125 | 3.79 × 10^−6 | 1.73 × 10^−6 | 1
f19 | 4 | 1.73 × 10^−6 | 0.003906 | 3.79 × 10^−6 | 1.73 × 10^−6 | 1
+/−/= | − | 0/35/14 | 0/43/6 | 0/28/21 | 0/17/32 | 0/10/39
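The p-values in Table 10 come from comparing the 30 independent runs of AOBLMOA against each variant on a given function; scipy.stats.wilcoxon is the standard tool for this. As an illustrative, stdlib-only sketch (not the authors' code), the normal approximation of the paired Wilcoxon test reproduces the floor value seen throughout the table: for 30 runs in which one algorithm wins every run it gives p ≈ 1.73 × 10^−6.

```python
import math


def wilcoxon_p(x, y):
    """Two-sided paired Wilcoxon test via the normal approximation.

    x, y: paired result lists (e.g. 30 runs of two algorithms on one
    function). Zero differences are dropped; tied |differences| share
    an averaged rank.
    """
    d = [a - b for a, b in zip(x, y) if a != b]
    n = len(d)
    if n == 0:
        return 1.0  # identical samples: no evidence of a difference
    idx = sorted(range(n), key=lambda i: abs(d[i]))
    rank = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(d[idx[j + 1]]) == abs(d[idx[i]]):
            j += 1
        for k in range(i, j + 1):
            rank[idx[k]] = (i + j + 2) / 2.0  # mean of 1-based positions
        i = j + 1
    w_plus = sum(r for r, v in zip(rank, d) if v > 0)
    mu = n * (n + 1) / 4.0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w_plus - mu) / sigma
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))


# 30 runs in which the first algorithm is better every single time:
p = wilcoxon_p([0.0] * 30, [1.0 + i for i in range(30)])  # ≈ 1.73e−6
```

With fewer runs or many ties an exact method is preferable; scipy's implementation switches to the exact distribution automatically for small samples.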
Table 11. Friedman rank sum test under benchmark functions.
Function | Dim | AO | MOA | AMOA | OBLAO | OBLMOA | AOBLMOA
f1 | 10 | 5 | 6 | 1 | 1 | 1 | 1
f1 | 30 | 5 | 6 | 4 | 1 | 1 | 1
f1 | 50 | 4 | 6 | 5 | 1 | 1 | 1
f1 | 100 | 4 | 6 | 5 | 1 | 1 | 1
f2 | 10 | 5 | 6 | 4 | 1 | 1 | 1
f2 | 30 | 4 | 6 | 5 | 1 | 1 | 1
f2 | 50 | 4 | 6 | 5 | 1 | 1 | 1
f2 | 100 | 4 | 6 | 5 | 1 | 1 | 1
f3 | 10 | 5 | 6 | 4 | 1 | 1 | 1
f3 | 30 | 4 | 6 | 5 | 1 | 1 | 1
f3 | 50 | 4 | 6 | 5 | 1 | 1 | 1
f3 | 100 | 4 | 6 | 5 | 1 | 1 | 1
f4 | 10 | 5 | 6 | 4 | 1 | 1 | 1
f4 | 30 | 5 | 6 | 4 | 1 | 1 | 1
f4 | 50 | 5 | 6 | 4 | 1 | 1 | 1
f4 | 100 | 5 | 6 | 4 | 1 | 1 | 1
f5 | 10 | 5 | 6 | 3 | 4 | 2 | 1
f5 | 30 | 5 | 6 | 4 | 2 | 3 | 1
f5 | 50 | 4 | 6 | 5 | 2 | 3 | 1
f5 | 100 | 3 | 6 | 4 | 2 | 5 | 1
f6 | 10 | 5 | 6 | 1 | 1 | 1 | 1
f6 | 30 | 1 | 6 | 5 | 1 | 1 | 1
f6 | 50 | 1 | 6 | 1 | 1 | 1 | 1
f6 | 100 | 1 | 6 | 1 | 1 | 1 | 1
f7 | 10 | 1 | 6 | 1 | 1 | 1 | 1
f7 | 30 | 1 | 6 | 1 | 1 | 1 | 1
f7 | 50 | 1 | 6 | 1 | 1 | 1 | 1
f7 | 100 | 1 | 6 | 1 | 1 | 1 | 1
f8 | 10 | 1 | 6 | 1 | 1 | 1 | 1
f8 | 30 | 1 | 6 | 1 | 1 | 1 | 1
f8 | 50 | 1 | 6 | 1 | 1 | 1 | 1
f8 | 100 | 1 | 6 | 1 | 1 | 1 | 1
f9 | 10 | 4 | 6 | 2 | 5 | 3 | 1
f9 | 30 | 3 | 6 | 2 | 4 | 5 | 1
f9 | 50 | 2 | 6 | 4 | 3 | 5 | 1
f9 | 100 | 2 | 6 | 4 | 3 | 5 | 1
f10 | 10 | 3 | 5 | 1 | 4 | 6 | 1
f10 | 30 | 3 | 5 | 4 | 2 | 6 | 1
f10 | 50 | 2 | 6 | 4 | 3 | 5 | 1
f10 | 100 | 3 | 6 | 4 | 2 | 5 | 1
f11 | 4 | 5 | 1 | 4 | 6 | 3 | 1
f12 | 2 | 6 | 1 | 1 | 5 | 1 | 1
f13 | 2 | 6 | 1 | 1 | 5 | 1 | 1
f14 | 2 | 6 | 1 | 1 | 5 | 4 | 1
f15 | 3 | 5 | 1 | 1 | 4 | 6 | 1
f16 | 6 | 6 | 4 | 3 | 5 | 2 | 1
f17 | 4 | 4 | 6 | 5 | 3 | 1 | 1
f18 | 4 | 4 | 5 | 6 | 3 | 1 | 1
f19 | 4 | 3 | 5 | 6 | 2 | 4 | 1
Friedman rank | − | 3.510204 | 5.367347 | 3.142857 | 2.081633 | 2.122449 | 1
Final rank | − | 5 | 6 | 4 | 2 | 3 | 1
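The last two rows of Table 11 come from ranking the algorithms on each of the 49 function/dimension cases and averaging those per-case ranks; ties share the smallest rank, as the table's rows show. A minimal sketch of that bookkeeping (illustrative, not the authors' code; scipy.stats.friedmanchisquare would give the associated test statistic):

```python
def friedman_ranks(results):
    """results[c][a]: score of algorithm a on test case c (lower is better).

    Each case assigns competition ranks (ties share the smallest rank);
    returns (mean rank per algorithm, final ordinal rank per algorithm).
    """
    n_alg = len(results[0])
    total = [0] * n_alg
    for row in results:
        for a in range(n_alg):
            # rank = 1 + number of strictly better scores in this case
            total[a] += 1 + sum(v < row[a] for v in row)
    mean = [t / len(results) for t in total]
    final = [1 + sum(m < x for m in mean) for x in mean]
    return mean, final


# Three toy cases for three algorithms (not the paper's data):
mean, final = friedman_ranks([[0.0, 0.0, 5.0],
                              [0.0, 1.0, 2.0],
                              [3.0, 0.0, 0.0]])  # final == [2, 1, 3]
```

Averaging the per-case ranks produces the "Friedman rank" row; ordering those averages produces the "Final rank" row.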
Table 12. Summary of the CEC2017 test functions.
Type | No. | Details | Fi
Unimodal Functions | f1 | Shifted and Rotated Bent Cigar Function | 100
Unimodal Functions | f2 | Shifted and Rotated Sum Diff Pow Function | 200
Unimodal Functions | f3 | Shifted and Rotated Zakharov Function | 300
Simple Multimodal Functions | f4 | Shifted and Rotated Rosenbrock's Function | 400
Simple Multimodal Functions | f5 | Shifted and Rotated Rastrigin's Function | 500
Simple Multimodal Functions | f6 | Shifted and Rotated Expanded Scaffer's F6 Function | 600
Simple Multimodal Functions | f7 | Shifted and Rotated Lunacek Bi_Rastrigin Function | 700
Simple Multimodal Functions | f8 | Shifted and Rotated Non-Continuous Rastrigin's Function | 800
Simple Multimodal Functions | f9 | Shifted and Rotated Lévy Function | 900
Simple Multimodal Functions | f10 | Shifted and Rotated Schwefel's Function | 1000
Hybrid Functions | f11 | Hybrid Function 1 (N = 3) | 1100
Hybrid Functions | f12 | Hybrid Function 2 (N = 3) | 1200
Hybrid Functions | f13 | Hybrid Function 3 (N = 3) | 1300
Hybrid Functions | f14 | Hybrid Function 4 (N = 4) | 1400
Hybrid Functions | f15 | Hybrid Function 5 (N = 4) | 1500
Hybrid Functions | f16 | Hybrid Function 6 (N = 4) | 1600
Hybrid Functions | f17 | Hybrid Function 6 (N = 5) | 1700
Hybrid Functions | f18 | Hybrid Function 6 (N = 5) | 1800
Hybrid Functions | f19 | Hybrid Function 6 (N = 5) | 1900
Hybrid Functions | f20 | Hybrid Function 6 (N = 6) | 2000
Composition Functions | f21 | Composition Function 1 (N = 3) | 2100
Composition Functions | f22 | Composition Function 2 (N = 3) | 2200
Composition Functions | f23 | Composition Function 3 (N = 4) | 2300
Composition Functions | f24 | Composition Function 4 (N = 4) | 2400
Composition Functions | f25 | Composition Function 5 (N = 5) | 2500
Composition Functions | f26 | Composition Function 6 (N = 5) | 2600
Composition Functions | f27 | Composition Function 7 (N = 6) | 2700
Composition Functions | f28 | Composition Function 8 (N = 6) | 2800
Composition Functions | f29 | Composition Function 9 (N = 3) | 2900
Composition Functions | f30 | Composition Function 10 (N = 3) | 3000
Table 13. Comparison of experimental results of CEC2017 test functions.
No. | Measure | AOBLMOA | MOA | AO | LGCMFO | RSA | ESSA | SGO | ACOLMA
f1 | Ave | 109 | 2721 | 296 | 9230 | 2470 | 3019 | 1.09 × 10^5 | 2380
f1 | STD | 112 | 492 | 275 | 7920 | 265 | 2362 | 73,218 | 1060
f1 | Rank | 1 | 5 | 2 | 7 | 4 | 6 | 8 | 3
f2 | Ave | 200 | 200 | − | 2.67 × 10^13 | − | 1.22 × 10^10 | 6.61 × 10^10 | 200
f2 | STD | 0 | 0 | − | 3.84 × 10^13 | − | 5.99 × 10^10 | 3.62 × 10^11 | 0
f2 | Rank | 1 | 2 | 8 | 6 | 8 | 4 | 5 | 3
f3 | Ave | 300 | 28,546 | 1142 | 13,500 | 1510 | 4281 | 303 | 422
f3 | STD | 0 | 19,842 | 235 | 3780 | 252 | 727 | 13 | 20
f3 | Rank | 1 | 8 | 4 | 7 | 5 | 6 | 2 | 3
f4 | Ave | 400 | 413 | 406 | 500 | 404 | 520 | 478 | 402
f4 | STD | 0 | 20 | 9 | 22 | 8 | 34 | 13 | 43
f4 | Rank | 1 | 5 | 4 | 7 | 3 | 8 | 6 | 2
f5 | Ave | 529 | 614 | 511 | 628 | 513 | 600 | 611 | 540
f5 | STD | 8 | 22 | 72 | 28 | 24 | 24 | 31 | 21
f5 | Rank | 3 | 7 | 1 | 8 | 2 | 5 | 6 | 4
f6 | Ave | 600 | 635 | 624 | 610 | 600 | 600 | 602 | 600
f6 | STD | 7 | 51 | 4 | 8 | 1 | 0 | 2 | 0
f6 | Rank | 1 | 8 | 7 | 6 | 1 | 3 | 5 | 4
f7 | Ave | 777 | 904 | 715 | 865 | 713 | 855 | 783 | 705
f7 | STD | 23 | 50 | 24 | 24 | 36 | 17 | 3 | 2
f7 | Rank | 4 | 8 | 3 | 7 | 2 | 6 | 5 | 1
f8 | Ave | 825 | 882 | 820 | 921 | 809 | 884 | 899 | 853
f8 | STD | 8 | 16 | 7 | 27 | 8 | 18 | 19 | 14
f8 | Rank | 3 | 5 | 2 | 8 | 1 | 6 | 7 | 4
f9 | Ave | 1063 | 2188 | 900 | 2690 | 910 | 1737 | 1448 | 1280
f9 | STD | 103 | 1192 | 0 | 854 | 205 | 711 | 440 | 979
f9 | Rank | 3 | 7 | 1 | 8 | 2 | 6 | 5 | 4
f10 | Ave | 1968 | 4604 | 1706 | 4920 | 1410 | 4094 | 4389 | 4450
f10 | STD | 191 | 535 | 362 | 608 | 356 | 344 | 56 | 407
f10 | Rank | 3 | 7 | 2 | 8 | 1 | 4 | 5 | 6
f11 | Ave | 1139 | 1230 | 1125 | 1240 | 1110 | 1248 | 1205 | 1177
f11 | STD | 16 | 40 | 23 | 67 | 11 | 52 | 36 | 20
f11 | Rank | 3 | 6 | 2 | 7 | 1 | 8 | 5 | 4
f12 | Ave | 3855 | 12,293 | 10,031 | 1.18 × 10^6 | 1.52 × 10^4 | 3.09 × 10^6 | 1.81 × 10^6 | 4.84 × 10^5
f12 | STD | 2176 | 7029 | 232 | 1.23 × 10^6 | 2.68 × 10^3 | 1.58 × 10^6 | 9.33 × 10^5 | 6.21 × 10^5
f12 | Rank | 1 | 3 | 2 | 6 | 4 | 8 | 7 | 5
f13 | Ave | 1564 | 11,592 | 8019 | 3.48 × 10^5 | 6820 | 17,963 | 1.03 × 10^5 | 7340
f13 | STD | 131 | 8175 | 5623 | 1.13 × 10^6 | 4260 | 12,592 | 45,182 | 6320
f13 | Rank | 1 | 5 | 4 | 8 | 2 | 6 | 7 | 3
f14 | Ave | 1442 | 4426 | 1449 | 39,100 | 1450 | 15,949 | 1851 | 2680
f14 | STD | 334 | 6584 | 654 | 45,000 | 22 | 14,048 | 214 | 2260
f14 | Rank | 1 | 6 | 2 | 8 | 3 | 7 | 4 | 5
f15 | Ave | 1681 | 2527 | 1710 | 7410 | 1580 | 5087 | 40,634 | 3780
f15 | STD | 118 | 555 | 276 | 6450 | 128 | 4047 | 22,098 | 1740
f15 | Rank | 2 | 4 | 3 | 7 | 1 | 6 | 8 | 5
f16 | Ave | 1781 | 2627 | 1624 | 2590 | 1730 | 2109 | 2077 | 2060
f16 | STD | 118 | 263 | 40 | 279 | 120 | 213 | 146 | 202
f16 | Rank | 3 | 8 | 1 | 7 | 2 | 6 | 5 | 4
f17 | Ave | 1772 | 2390 | 1742 | 2120 | 1730 | 1929 | 1974 | 1754
f17 | STD | 30 | 227 | 29 | 201 | 35 | 101 | 113 | 19
f17 | Rank | 4 | 8 | 2 | 7 | 1 | 5 | 6 | 3
f18 | Ave | 1877 | 94,944 | 8712 | 2.18 × 10^5 | 7440 | 1.67 × 10^5 | 37,190 | 3120
f18 | STD | 46 | 1.39 × 10^5 | 3251 | 1.80 × 10^5 | 4520 | 1.45 × 10^5 | 16,669 | 1170
f18 | Rank | 1 | 6 | 4 | 8 | 3 | 7 | 5 | 2
f19 | Ave | 1933 | 4897 | 1944 | 5200 | 1950 | 5731 | 15,554 | 5150
f19 | STD | 25 | 3585 | 30 | 2980 | 55 | 3910 | 8155 | 3440
f19 | Rank | 1 | 4 | 2 | 6 | 3 | 7 | 8 | 5
f20 | Ave | 2130 | 2457 | 2018 | 2360 | 2020 | 2316 | 2309 | 2343
f20 | STD | 76 | 154 | 21 | 179 | 25 | 108 | 160 | 99
f20 | Rank | 3 | 8 | 1 | 7 | 2 | 5 | 4 | 6
f21 | Ave | 2296 | 2406 | 2205 | 2410 | 2230 | 2395 | 2415 | 2132
f21 | STD | 64 | 32 | 40 | 29 | 44 | 20 | 28 | 23
f21 | Rank | 4 | 6 | 2 | 7 | 3 | 5 | 8 | 1
f22 | Ave | 2300 | 2300 | 2305 | 2300 | 2280 | 2301 | 5333 | 2300
f22 | STD | 52 | 16 | 8 | 22 | 11 | 33 | 1487 | 906
f22 | Rank | 2 | 4 | 7 | 2 | 1 | 6 | 8 | 4
f23 | Ave | 2635 | 3031 | 2620 | 2760 | 2610 | 2790 | 2752 | 2606
f23 | STD | 20 | 66 | 12 | 32 | 43 | 63 | 15 | 5
f23 | Rank | 4 | 8 | 3 | 6 | 2 | 7 | 5 | 1
f24 | Ave | 2760 | 3208 | 2686 | 2920 | 2620 | 3046 | 2977 | 2594
f24 | STD | 54 | 69 | 16 | 29 | 80 | 54 | 47 | 66
f24 | Rank | 4 | 8 | 3 | 5 | 2 | 7 | 6 | 1
f25 | Ave | 2918 | 2931 | 2919 | 2890 | 2920 | 2925 | 2888 | 2809
f25 | STD | 23 | 18 | 19 | 15 | 13 | 25 | 4 | 19
f25 | Rank | 4 | 8 | 5 | 3 | 6 | 7 | 2 | 1
f26 | Ave | 3251 | 6565 | 3006 | 3640 | 3110 | 4947 | 3732 | 2884
f26 | STD | 451 | 1314 | 145 | 1280 | 289 | 1318 | 1069 | 187
f26 | Rank | 4 | 8 | 2 | 5 | 3 | 7 | 6 | 1
f27 | Ave | 3101 | 3458 | 3090 | 3300 | 3110 | 3260 | 3200 | 3082
f27 | STD | 8 | 98 | 83 | 12 | 12 | 40 | 5 | 8
f27 | Rank | 3 | 8 | 2 | 7 | 4 | 6 | 5 | 1
f28 | Ave | 3100 | 3125 | 3211 | 3230 | 2300 | 3273 | 3300 | 3169
f28 | STD | 1 | 48 | 52 | 47 | 22 | 124 | 250 | 63
f28 | Rank | 2 | 3 | 5 | 6 | 1 | 7 | 8 | 4
f29 | Ave | 3663 | 3.74 × 10^7 | 3190 | 3870 | 3210 | 3713 | 3505 | 8.70 × 10^6
f29 | STD | 156 | 4.33 × 10^7 | 29 | 266 | 57 | 144 | 124 | 6.98 × 10^5
f29 | Rank | 4 | 8 | 1 | 6 | 2 | 5 | 3 | 7
f30 | Ave | 3724 | 7507 | 290,140 | 33,900 | 2.96 × 10^5 | 36,669 | 1.96 × 10^5 | 7720
f30 | STD | 151 | 796 | 52,314 | 51,600 | 21,400 | 26,129 | 1.44 × 10^5 | 1050
f30 | Rank | 1 | 2 | 7 | 4 | 8 | 5 | 6 | 3
Friedman rank | 2.433333333 | 6.1 | 3.133333333 | 6.466666667 | 2.766666667 | 6.033333333 | 5.666666667 | 3.333333333
Final rank | 1 | 7 | 3 | 8 | 2 | 6 | 5 | 4
Table 14. Comparison for the WMSR problem.
Algorithm | AOBLMOA | SASS | COLSHADE | sCMAgES
Ave | 2994.424 | 2994.424 | 2994.424 | 2994.424
STD | 4.54747 × 10^−13 | 4.54747 × 10^−13 | 4.54747 × 10^−13 | 2.45564 × 10^−12
Median | 2994.424 | 2994.424 | 2994.424 | 2994.424
Min | 2994.424 | 2994.424 | 2994.424 | 2994.424
Max | 2994.424 | 2994.424 | 2994.424 | 2994.424
Table 15. Comparison for the ODIRS problem.
Algorithm | AOBLMOA | SASS | COLSHADE | sCMAgES
Ave | 0.032213 | 0.032213 | 0.032213 | 0.036419
STD | 0.000982459 | 1.38778 × 10^−17 | 1.38778 × 10^−17 | 0.001726802
Median | 0.032213 | 0.032213 | 0.032213 | 0.036266
Min | 0.032213 | 0.032213 | 0.032213 | 0.03355
Max | 0.035048 | 0.032213 | 0.032213 | 0.03993
Table 16. Comparison for the TCSD1 problem.
Algorithm | AOBLMOA | SASS | COLSHADE | sCMAgES
Ave | 0.012665 | 0.012665 | 0.012665 | 0.012668
STD | 8.562 × 10^−8 | 0 | 1.06254 × 10^−7 | 4.54201 × 10^−6
Median | 0.012665 | 0.012665 | 0.012665 | 0.012666
Min | 0.012665 | 0.012665 | 0.012665 | 0.012665
Max | 0.012665 | 0.012665 | 0.012666 | 0.012669
Table 17. Comparison for the MDCBDP problem.
Algorithm | AOBLMOA | SASS | COLSHADE | sCMAgES
Ave | 0.235242458 | 0.23524246 | 0.23524246 | 0.235242458
STD | 5.55112 × 10^−17 | 2.77556 × 10^−17 | 2.77556 × 10^−17 | 1.11022 × 10^−16
Median | 0.235242458 | 0.23524246 | 0.23524246 | 0.235242458
Min | 0.235242458 | 0.23524246 | 0.23524246 | 0.235242458
Max | 0.235242458 | 0.23524246 | 0.23524246 | 0.235242458
Table 18. Comparison for the PGTDO problem.
Algorithm | AOBLMOA | SASS | COLSHADE | sCMAgES
Ave | 0.600480484 | 1.001524 | 0.541026 | 0.530809
STD | 0.114142329 | 0.710533 | 0.042573 | 0.004261
Median | 0.537058824 | 0.645573 | 0.53 | 0.53
Min | 0.5271875 | 0.525768 | 0.525768 | 0.525967
Max | 0.8795931 | 3.521656 | 0.746667 | 0.543846
Table 19. Comparison for the HTBDP problem.
Algorithm | AOBLMOA | SASS | COLSHADE | sCMAgES
Ave | 1775.864458 | 1616.1201 | 1639.037352 | 3022.135455
STD | 30.92850736 | 0.000923472 | 100.7282129 | 387.5627403
Median | 1771.961306 | 1616.1198 | 1616.1198 | 3174.538873
Min | 1708.852265 | 1616.1198 | 1616.1198 | 2284.476299
Max | 1821.920966 | 1616.1234 | 2129.1452 | 3530.085832
Table 20. Comparison for the FGBP problem.
Algorithm | AOBLMOA | SASS | COLSHADE | sCMAgES
Ave | 50.70603013 | 38.51409836 | 36.61097536 | 53.7101309
STD | 15.04272341 | 2.072094103 | 1.367708578 | 17.51522759
Median | 45.90641099 | 38.129703 | 36.249291 | 48.61282506
Min | 35.36066244 | 36.250401 | 35.359232 | 36.24853345
Max | 82.10915894 | 45.506407 | 40.931153 | 120.3173533
Table 21. Comparison for the GTCD problem.
Algorithm | AOBLMOA | SASS | COLSHADE | sCMAgES
Ave | 2,964,895.4 | 2,964,895.4 | 2,964,895.4 | 2,964,912.352
STD | 4.65661 × 10^−10 | 4.65661 × 10^−10 | 4.65661 × 10^−10 | 34.1029386
Median | 2,964,895.4 | 2,964,895.4 | 2,964,895.4 | 2,964,898.734
Min | 2,964,895.4 | 2,964,895.4 | 2,964,895.4 | 2,964,895.42
Max | 2,964,895.4 | 2,964,895.4 | 2,964,895.4 | 2,965,049.472
Table 22. Comparison for the TCSD2 problem.
Algorithm | AOBLMOA | SASS | COLSHADE | sCMAgES
Ave | 2.683119896 | 2.6585592 | 2.66183396 | 4.236906962
STD | 0.020053752 | 4.65661 × 10^−10 | 0.011105251 | 1.035279415
Median | 2.699493709 | 2.6585592 | 2.6585592 | 3.878629281
Min | 2.658559166 | 2.6585592 | 2.6585592 | 2.852343312
Max | 2.69949375 | 2.6585592 | 2.6994937 | 6.739677286
Table 23. Comparison for the TO problem.
Algorithm | AOBLMOA | SASS | COLSHADE | sCMAgES
Ave | 2.6393465 | 2.6393465 | 2.6393465 | 2.6393465
STD | 0 | 4.44089 × 10^−16 | 4.44089 × 10^−16 | 1.62806 × 10^−15
Median | 2.6393465 | 2.6393465 | 2.6393465 | 2.6393465
Min | 2.6393465 | 2.6393465 | 2.6393465 | 2.6393465
Max | 2.6393465 | 2.6393465 | 2.6393465 | 2.6393465
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Zhao, Y.; Huang, C.; Zhang, M.; Cui, Y. AOBLMOA: A Hybrid Biomimetic Optimization Algorithm for Numerical Optimization and Engineering Design Problems. Biomimetics 2023, 8, 381. https://doi.org/10.3390/biomimetics8040381