Article

A Novel Hybrid Firefly Algorithm with Double-Level Learning Strategy

1 School of Software and Computer Science, Nanyang Institute of Technology, Nanyang 473000, China
2 Electronic Information School, Wuhan University, Wuhan 430072, China
3 School of Information Engineering, Nanyang Institute of Technology, Nanyang 473000, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(16), 3569; https://doi.org/10.3390/math11163569
Submission received: 24 July 2023 / Revised: 13 August 2023 / Accepted: 15 August 2023 / Published: 17 August 2023
(This article belongs to the Section Mathematics and Computer Science)

Abstract: The firefly algorithm (FA) is a swarm intelligence algorithm for global optimization problems and has been used to solve many practical problems. However, traditional firefly algorithms tackle complex optimization problems with a simple update rule, which limits firefly diversity and leads to premature stagnation. To overcome these drawbacks, a novel hybrid firefly algorithm with a double-level learning strategy (HFA-DLL) is proposed. In HFA-DLL, the double-level learning strategy avoids premature convergence and enhances the algorithm's global search capability. At the same time, a competitive elimination mechanism is introduced to increase the accuracy of solving complex optimization problems and to improve the convergence rate of the algorithm. Moreover, a stochastic disturbance strategy is designed to help the best solution jump out of local optima and minimize the time cost in wrong search directions. To understand the advantages and disadvantages of HFA-DLL, experiments were conducted on the CEC 2017 benchmark suite. Experimental results show that HFA-DLL outperforms other state-of-the-art algorithms in terms of convergence rate and exploration efficiency.

1. Introduction

Global optimization problems play a crucial role in scientific research, engineering, and industrial design. Many problems are eventually transformed into global optimization problems, such as image compression, inventory prediction, genetic identification, mathematical planning, graph theory, and network analysis. As the complexity of optimization problems increases, traditional algorithms have difficulty solving them effectively. As a result, many swarm intelligence algorithms have emerged, including the artificial bee colony algorithm (ABC) [1], locust swarms (LS) [2], cuckoo search (CS) [3], particle swarm optimization (PSO) [4], the harmony search algorithm (HS) [5], the fruit fly optimizer algorithm (FOA) [6], the tree seed algorithm (TSA) [7], biogeography-based optimization (BBO) [8], and the differential evolution algorithm (DE) [9]. These swarm intelligence algorithms iteratively update the population through various strategies to find an approximate solution to the global optimization problems.
The firefly algorithm (FA) is a heuristic algorithm proposed by Professor Yang of Cambridge University in 2008 [10]. Its inspiration comes from the flickering behavior of fireflies, which attract other fireflies by shining. FA is easy to use and implement, and is widely used to solve complex optimization problems in the real world. An improved firefly algorithm (AD-IFA) [11] was proposed by Wu for global continuous optimization problems; AD-IFA improves local exploitation and accelerates convergence by adding adaptive Lévy logarithmic spirals. A new dynamic FA (NDFA) [12] was proposed by Hui Wang et al. to estimate water resource demand; NDFA improves the accuracy of water demand prediction through three different estimation models (linear, exponential, and mixed) combining historical water consumption and local economic structure. Due to the ability of FA to effectively and quickly solve constrained optimization problems, a novel particle swarm optimization based on a hybrid-learning model was proposed by Wang et al. [13]; it is used to solve global optimization problems in the continuous domain and has achieved excellent results. A GPU-based implementation of the firefly algorithm was put forward by Lauro et al. [14]; it uses a graphics processor to accelerate variable selection in multivariate calibration problems, finding more suitable choices and improving processing efficiency. A hybrid particle swarm optimization firefly algorithm (HPSOFF) [15,16] was proposed by Sivaranjani et al.; it is used to plan the location and shape of modules in the VLSI design cycle and to compute cost indicators such as optimized layout area and wiring length. A support vector machine-firefly algorithm (SVM-FFA) [17] was proposed by Sudheer et al.; it can predict the incidence of malaria so that appropriate actions are taken to control the epidemic.
Complex optimization problems necessitate the rapid identification of global optimal values, which requires firefly algorithms (FAs) to possess both high search accuracy and swift convergence speeds. Consequently, researchers have focused on improving these aspects of FAs. In 2020, Hui Wang et al. [11] proposed an adaptive logarithmic spiral Levy firefly algorithm (AD-IFA) for continuous global optimization problems, which incorporated an adaptive change mechanism and enhanced global search capabilities, effectively addressing single-peak problems. However, for more complex optimization problems, an overly rapid search speed may lead to entrapment in local optima.
AD-IFA may suffer from slow convergence and a lack of individual diversity on high-dimensional and complex problems. This is because, in the search process, the movement rules of fireflies are based mainly on light attraction and distance, and the light intensity has a limited guiding effect on the global search, which easily causes the search to fall into a local optimum. It also weakens global exploration ability and makes diversity difficult to maintain. To alleviate the issues of premature convergence and local optima stagnation in firefly algorithms, we propose a novel hybrid firefly algorithm with a double-level learning strategy (HFA-DLL). This approach utilizes an elite archive queue (EAQ) and the global best firefly to guide the fireflies' movement. The main framework of HFA-DLL is shown in Figure 1, and the main contributions of this study can be summarized as follows:
(1) HFA-DLL uses the EAQ to increase the diversity of the firefly population, and the size of the EAQ is set to 4m. In each generation, m elite fireflies are selected from the firefly population and enqueued at the rear of the EAQ, while m fireflies are dequeued at the front of the EAQ.
(2) The EAQ is capable of self-reproducing by utilizing a double-level (individual and dimensional) learning method for crossover to generate new offspring. It then compares the new offspring with their parents to retain the individuals with superior fitness values. This process can expand the search space of the elite fireflies, enhance population diversity, and effectively prevent premature convergence.
(3) A competitive elimination mechanism is introduced to eradicate the least effective fireflies in each generation by generating new fireflies from the elite individuals in the EAQ. This enhances the accuracy of HFA-DLL in solving complex optimization problems and accelerates convergence velocity. However, it can also lead to entrapment in local optima.
(4) To minimize the time wasted on incorrect search directions in intricate problem landscapes and address the issue of local optima entrapment when using the competitive elimination mechanism, a stochastic disturbance strategy is proposed. This assists elite individuals in escaping local optima.
In conclusion, the HFA-DLL method balances search accuracy and convergence speed; thus, providing an effective solution for complex optimization problems while mitigating the risk of stagnation in local optima.
The structure of this paper is as follows: firstly, the work on the FA, the Lévy-flight FA, and AD-IFA is reviewed in Section 2. Section 3 introduces the framework of HFA-DLL. Section 4 presents the experiments. Conclusions and an outlook on future work are given in Section 5.

2. Related Work

2.1. Standard Firefly Algorithm

The firefly algorithm (FA) is a swarm intelligence optimization algorithm that simulates the luminescence characteristics and attraction behavior of fireflies. It is a meta-heuristic algorithm proposed by Professor Yang after observing firefly behavior [10].
In nature, fireflies use their light as a signal to attract other fireflies. Each firefly has a light intensity I and an attractiveness β, which is determined by I. In the global optimization problem, the light intensity of a firefly is usually measured by its fitness value. The movement of each firefly depends on the brightness and attractiveness of its peers in its neighborhood structure. The higher the light intensity of a firefly, the greater its attraction. The perceived brightness of a firefly decreases with distance: the farther the distance, the lower the perceived brightness. The firefly algorithm searches for the global optimal solution through the movement of each firefly: in the process, high-brightness individuals continually attract low-brightness individuals. The FA rests on the following three basic principles:
(1) All fireflies are assumed to be hermaphrodites, so one firefly might be attracted to any other.
(2) The brightness of fireflies determines their attractiveness, and high-brightness fireflies attract low-brightness fireflies. If a firefly is brighter than other fireflies in the population, it will move randomly.
(3) Generally, the objective function value of the feasible solution of the problem to be solved is taken as a single brightness value.
When light is emitted from a source of a given intensity, it obeys the inverse square law: the light intensity I decreases as the distance r increases, and the air, acting as an absorbing medium, further weakens the light with distance. The relative brightness of a firefly at distance r is calculated as follows:
$$ I(r) = I_0 \cdot e^{-\gamma \, r_{ij}^2} $$
where r represents the distance between the fireflies, γ is the light absorption coefficient, and I_0 is the initial fluorescence brightness of the firefly (r = 0). The attraction between fireflies is proportional to their relative brightness, which can be expressed as follows:
$$ \beta(r) = \beta_0 \cdot e^{-\gamma \, r_{ij}^2} $$
where β_0 is the attraction at the light source (r = 0), γ is the light absorption coefficient (representing air with light-absorbing capacity), and r_ij is the Cartesian distance between fireflies i and j. The Cartesian distance between different fireflies can be expressed as follows:
$$ r_{ij} = \lVert x_i - x_j \rVert = \sqrt{ \sum_{k=1}^{D} \left( x_{i,k} - x_{j,k} \right)^2 } $$
where x_{i,k} represents the k-th component of the spatial coordinate of the i-th firefly, and D is the space dimension.
During the iteration process, each firefly will move to the firefly whose brightness is higher than its own. The position update formula is as follows:
$$ x_i^{t+1} = x_i^t + \beta_0 \cdot e^{-\gamma \, r_{ij}^2} \cdot \left( x_j^t - x_i^t \right) + \alpha \cdot \left( \mathit{Rnd} - 0.5 \right) $$
where x_i^t and x_j^t are the spatial positions of fireflies i and j, r_ij is the distance between the i-th and j-th fireflies, Rnd is a d-dimensional uniform random vector in [0, 1], and α is the step size factor of the disturbance.

2.2. Lévy-Flight Firefly Algorithm

The Lévy-flight firefly algorithm (LF-FA) [18] introduces Lévy flight into the FA to adjust the update step size, which enhances the local exploitation ability and improves the global exploration ability. Compared with the traditional FA, LF-FA has a faster convergence speed and better global exploration efficiency. The main idea of LF-FA is to replace the traditional uniform random distribution with the Lévy distribution. The position update formula of a firefly is as follows:
$$ x_i^{t+1} = x_i^t + \beta_0 \cdot e^{-\gamma \, r_{ij}^2} \cdot \left( x_j^t - x_i^t \right) + \alpha \cdot \mathrm{sign}\left( \mathit{Rnd} - 0.5 \right) \oplus \mathrm{levy} $$
where x_i^t is the position of the i-th firefly at the t-th generation, the second term is due to the attraction, α is the coefficient of the random term, the product ⊕ means entry-wise multiplication, sign(·) is a symbolic function that provides a random direction, and the random step size is drawn from a Lévy flight, with Rnd ∈ [0, 1]. The random step size is taken from the Lévy distribution, whose Fourier transform is as follows [19]:
$$ \mathrm{levy}(k) = \exp\left( -\alpha \, |k|^{\beta} \right), \quad 0 < \beta \le 2 $$
where α is a parameter within the [−1, 1] interval, known as the skewness or scale factor. The index of stability β ∈ (0, 2] is also referred to as the Lévy index. The analytic form of the integral is not known for general β except for a few special cases. For a random walk, the step length S can be calculated by Mantegna's algorithm as:
$$ S = \frac{u}{|v|^{1/\beta}} $$
where β = 1.5, and u and v follow normal distributions, that is:
$$ u \sim N\left( 0, \sigma_u^2 \right), \quad v \sim N\left( 0, \sigma_v^2 \right) $$
$$ \sigma_u = \left[ \frac{\Gamma(1+\beta) \cdot \sin(\pi \beta / 2)}{\Gamma\left( (1+\beta)/2 \right) \cdot \beta \cdot 2^{(\beta-1)/2}} \right]^{1/\beta}, \quad \sigma_v = 1 $$
where Γ(·) is the standard Gamma function.

3. A Novel Hybrid Firefly Algorithm with Double-Level Learning Strategy

3.1. The Framework of HFA-DLL

HFA-DLL employs an elite archive queue (EAQ) and the global best firefly to guide the movement of fireflies. In each generation, m elite fireflies are selected from the population and enqueued at the rear of the EAQ, while m fireflies are dequeued at the front. The EAQ uses a double-level (particle-level and dimension-level) learning strategy to update the positions of its elite fireflies. The EAQ stores the historical optimal solutions from the past search process. These solutions tend to have high fitness values but may be ignored by the current search; by retaining them, they are prevented from being lost during the search while providing valuable references and heuristics that help ordinary individuals better explore the search space. In addition, the solutions stored in the EAQ can serve as a reference or guide to improve the current search strategy: by comparing the fitness values, characteristics, and other information of the current solutions with the solutions in the EAQ, the search direction is adjusted and new elite solutions are generated. In this way, past experience and successes are applied to the current search process, which helps to improve the convergence and search quality of the algorithm. The details of the double-level learning strategy are given in Section 3.2. HFA-DLL uses a competitive elimination mechanism to replace the worst fireflies in the population with descendants of the EAQ, thereby accelerating convergence; this mechanism is introduced in Section 3.3. To prevent HFA-DLL from falling into local optima, a stochastic disturbance strategy is introduced to update the position of the global best firefly, as described in Section 3.4.
In order to balance the trade-off between exploration and exploitation, HFA-DLL uses an adaptive switch ratio to choose between two methods of updating a firefly's position: the exploration method is attracted by the EAQ and the global best firefly, while the exploitation method follows a logarithmic spiral path. The formula for updating the position of fireflies in the population is as follows:
$$ x_i^{t+1} = \begin{cases} x_i^t + \beta_0 \, e^{-\gamma \, r_{g,i}^2} \left( gbest - x_i^t \right) + \beta_1 \, e^{-\gamma \, r_{k,i}^2} \left( eaq_k^t - x_i^t \right) + \alpha \left( rand - 0.5 \right), & u > R_t \\ x_i^t + \beta_0 \, e^{-\gamma \, r_{i,j}^2} \left( x_j^t - x_i^t \right) \otimes e^{b \cdot I} \cos\left( 2\pi \cdot I \right), & u \le R_t \end{cases} $$
where u is a uniform random number in [0, 1]; R_t is the adaptive switch ratio at the t-th generation, with an initial value of 0.5 and its next value calculated by Equation (11); x_i^{t+1} is the position of the i-th firefly at the (t+1)-th generation; gbest is the global best firefly; r_{g,i} is the Euclidean distance between gbest and firefly i; eaq_k^t is the k-th elite firefly in the EAQ at the t-th generation, where k is a random integer in [1, n] and n is the size of the EAQ; and α is the step size factor in [0, 1]. β_0 and β_1 are the initial attraction coefficients, usually set to 1, and γ is the light absorption coefficient, generally set to the constant 1. I is a d-dimensional uniform random vector in [−1, 1]^d, b is a constant (default value 1) defining the shape of the logarithmic spiral, ⊗ is the Hadamard product, and the term cos(2π · I) together with the coefficient e^{b·I} provides a random-direction logarithmic spiral path.
The adaptive switch ratio R_t controls the search direction (global search or local search) of HFA-DLL. At different stages of iterative evolution, R_t adapts based on the fitness value. Following the idea of AD-IFA [11], it is calculated as follows:
$$ R_{t+1} = \begin{cases} \dfrac{1}{1 + \exp\left( - f_t^* / f_{t-1}^* \right)}, & \lfloor \lg f_t^* \rfloor = \lfloor \lg f_{t-1}^* \rfloor \\[1ex] \dfrac{1}{1 + \exp\left( - \dfrac{f_t^* - \theta \cdot \lfloor f_t^* / \theta \rfloor}{f_{t-1}^* - \theta \cdot \lfloor f_{t-1}^* / \theta \rfloor} \right)}, & \text{else} \end{cases} $$
where f_t^* is the best fitness value at the t-th generation, lg(·) = log₁₀(·), ⌊·⌋ is the floor function, and the adaptive scale parameter threshold θ is as follows:
$$ \theta = 10^{\left\lfloor \lg \left| f_t^* - f_{t-1}^* \right| \right\rfloor + 1} $$

3.2. Double-Level Learning Strategy

The EAQ is a queue that stores multiple generations of elite fireflies. In every generation, m elite fireflies are selected and inserted at the rear of the EAQ, while the m elite fireflies at the front of the EAQ are deleted. The EAQ is represented as follows:
$$ EAQ = \left\{ x_1, x_2, x_3, \ldots, x_n \right\} $$
where n is the size of the EAQ, equal to 4m; m represents the number of elite fireflies selected per generation, with a default value of 4, m ∈ [1, NP], and NP is the size of the population.
In order to increase the diversity of the population, we give the elite fireflies of the EAQ maximum freedom of movement to simulate their irregular manner in the natural environment. As a general rule, each firefly merely jostles against its neighbors, so it cannot fully explore the high-dimensional landscape of complex multi-modal problems. Therefore, a double-level (particle-level and dimension-level) learning strategy is proposed. It not only achieves irregular movement of fireflies at the macro level but also achieves mutation in a single dimension within a firefly at the micro level. In the particle-level learning strategy, an elite firefly moves in a random direction based on the Euclidean distance between two fireflies. In the dimension-level learning strategy, each elite firefly moves along a logarithmic spiral path in only one dimension. These two strategies are as follows:
$$ x_i^{t+1} = \psi\left( x_i^t \right), \quad r \le p_e $$
$$ x_{i,d}^{t+1} = \phi\left( x_{i,d}^t \right), \quad r > p_e $$
where r is a random number in [0, 1] and p_e ∈ [0, 1] is a control parameter called the rotation probability. ψ(·) is the particle-level learning strategy; fireflies attract each other through their Euclidean distance. ϕ(·) is the dimension-level learning strategy; it updates one dimension within a single firefly. When r is less than p_e, ψ(·) makes the i-th firefly move in a random direction relative to the j-th firefly. When r is greater than p_e, ϕ(·) makes the d-th dimension of the i-th firefly explore the landscape along a logarithmic spiral path. The particle-level learning strategy is as follows:
$$ \psi\left( x_i^t \right) = x_i^t + D_{i,j}^t \cdot \theta \cdot \cos(\theta) $$
$$ D_{i,j}^t = \sqrt{ \sum_{k=1}^{d} \left( x_{i,k}^t - x_{j,k}^t \right)^2 } $$
where θ defines the elite firefly's spiral shape and is a random number generated within the range [−π, π], and D_{i,j}^t represents the Euclidean distance between elite fireflies x_i^t and x_j^t.
The dimension-level learning strategy simulates the irregular logarithmic spiral movement behavior of the firefly; its update rule is as follows:
$$ \phi\left( x_{i,d}^t \right) = e^{ -\left| x_{i,k}^t - x_{i,d}^t \right| } \left( x_{j,k}^t - x_{i,d}^t \right) e^{ 2(rand - 0.5) } \cdot \sin\left( 2\pi \cdot (rand - 0.5) \right) + H \cdot x_{i,d}^t $$
where k and d are randomly chosen dimensions, k ≠ d, k ∈ [1, D], and d ∈ [1, D]. H is a random number obeying the standard normal distribution.

3.3. Competitive Elimination Mechanism

In order to accelerate the convergence speed of HFA-DLL, a competitive elimination mechanism is proposed. In each iteration, this mechanism eliminates the worst firefly and replaces it with a child of the elite fireflies in the EAQ. The worst firefly is defined as:
$$ w_x^t = \arg\max \left\{ f\left( x_1^t \right), f\left( x_2^t \right), \ldots, f\left( x_{NP}^t \right) \right\} $$
$$ n_x^t = gbest + r_1 \left( eaq_j^t - eaq_k^t \right) $$
where w_x^t is the worst firefly of the population at the t-th generation and NP is the population size. n_x^t is the child firefly of the EAQ at the t-th generation, r_1 is a scale factor with r_1 ∈ [0, 1], j and k are random numbers with j ≠ k, j, k ∈ [1, n], and n is the size of the EAQ.
$$ x_i^t = \begin{cases} n_x^t, & f\left( n_x^t \right) < f\left( w_x^t \right) \\ w_x^t, & \text{otherwise} \end{cases} $$
where i is the index of the worst firefly of the population.

3.4. Stochastic Disturbance Strategy

In the search process, all fireflies in the population learn from the global best firefly gbest, so gbest has an important influence on the population. For complex multi-modal problems, once gbest falls into a local optimum, the remaining fireflies easily converge to the sub-optimal region, leading to premature convergence. A stochastic disturbance strategy is designed to prevent HFA-DLL from stagnating: if the value of gbest does not change for ten iterations, the strategy is triggered. It is given below:
$$ nbest^t = r_2 \cdot gbest^t + \left( 1 - r_2 \right) \cdot \left( gbest^t - eaq_p^t \right) + D_{g,p} \cdot e^{b h} \cdot \cos\left( 2\pi h \right) $$
$$ gbest^t = \begin{cases} nbest^t, & f\left( nbest^t \right) < f\left( gbest^t \right) \\ gbest^t, & \text{otherwise} \end{cases} $$
where r 2 ( 0 , 1 ) is a random parameter and D g , p is the Cartesian distance between g b e s t and e a q p t . b is the logarithmic spiral shape constant, and the path coefficient h is a random number in [−1,1].

3.5. Complexity Analysis

Similar to other iteration-based meta-heuristics, the computational time is evaluated as the running time of each cycle multiplied by the number of iterations. According to the pseudo-code given in Algorithm 1, the time complexity of HFA-DLL is analyzed as follows. N is the population size, D is the problem dimension, and 4m represents the number of elite fireflies in the EAQ. In the position update stage, the positions of the elite fireflies in the EAQ are updated 4m times, and the remaining fireflies are updated N times. The complexity of a single HFA-DLL iteration is O(DN²). If the number of iterations of HFA-DLL is K, the asymptotic upper bound of the algorithm's complexity is O(KDN²).
Algorithm 1: The pseudo-code of HFA-DLL
All simulations reported here were executed on a workstation equipped with an Intel Core i7-8700K CPU (6 cores, 12 threads, running at 4.5 GHz) and 16 GB of DDR4 RAM at 3200 MHz, running the Microsoft Windows 10 operating system. The algorithms were coded on the MATLAB 2021a platform. Table 1 reports the average running time (in seconds) of the different algorithms on the 30-D functions; each test was conducted 30 times independently.

4. Experimental Studies

In order to test the performance of HFA-DLL, several experiments were conducted on the CEC 2017 global optimization benchmark function suite. Several state-of-the-art algorithms (PSO [20], FA [10], LF-FA [18], EE-FA [21], AD-IFA [11], and SHADE [22]) are compared with HFA-DLL. Further, comparisons of test results across algorithms, strategy effectiveness, and parameter sensitivity were conducted to test the robustness of HFA-DLL.

4.1. Experimental Settings and Benchmark Functions

For all algorithms, the population size NP = 30, and the maximum number of fitness evaluations MaxFEs = 1000 × D, where D is the dimension of the benchmark functions. In HFA-DLL, the random parameter α = 0.2, the fixed optical absorption coefficient γ = 1, and the attraction coefficient β_0 = 1. In PSO, c_1 = 0.2, c_2 = 0.2, and the maximum velocity v_max = 0.8. In SHADE, p_min = 2/NP, and H = NP = 30. The parameters of the other state-of-the-art algorithms (FA, LF-FA, EE-FA, and AD-IFA) were set according to their original publications.
Due to the stochastic nature of these algorithms, each algorithm was independently run 30 times for statistical comparisons. The mean value and standard deviation value are calculated to assess the algorithm’s performance. For each problem, the best result is bolded. The results of HFA-DLL are compared with those of PSO, FA, LF-FA, EE-FA, SHADE, and AD-IFA, respectively, by Wilcoxon rank sum test at the significance level of 0.05. The marker “−” means it is worse than the HFA-DLL result, “+” is better than the HFA-DLL result, and “≈” is equivalent to the results of HFA-DLL.
The performance of HFA-DLL is evaluated on the CEC 2017 global optimization benchmark function suite, which contains 30 benchmark functions. CEC 2017 can be divided into four categories: unimodal functions (F1–F3), multimodal functions (F4–F10), hybrid functions (F11–F20), and composition functions (F21–F30). The detailed information is shown in Table 2 and can also be found in [23]. Because our algorithm produces significantly different results each time it runs on the F2 function, F2 is unstable and was excluded from the comparative experiments. We thus use only 29 functions of CEC 2017.

4.2. Comparison with Other State-of-the-Art Algorithms on CEC2017 for 30D/50D Problems

From Figure 2, Table 3 and Table 4, HFA-DLL is compared with three firefly algorithm variants and three traditional swarm intelligence algorithms on the CEC 2017 benchmark function suite. Some observations and conclusions are drawn from the analysis of the experimental results.
Firstly, for the unimodal functions (F1–F3) at 30D, HFA-DLL outperforms almost all other algorithms and is only slightly worse than SHADE. In the case of 50D, SHADE can find the global optimal solutions of F1 and F3; HFA-DLL is inferior only to SHADE and better than the other five algorithms on the unimodal functions. For unimodal functions, HFA-DLL has a fast convergence ability and can quickly find the optimal solution, because the double-level learning strategy converges quickly on unimodal problems: through the exchange of information between fireflies across different dimensions, the search scope is expanded and the search efficiency is improved.
Secondly, for the multimodal functions (F4–F10) at 30D, the performance of HFA-DLL is second only to SHADE on F7–F9 and better than all other algorithms on F10. Furthermore, HFA-DLL can find the optimal solutions of F4 and F6. In the case of 50D, HFA-DLL is worse than SHADE on the multimodal functions F5 and F7–F9, and outperforms all other algorithms on F4, F6, and F10. This category of benchmark functions is shifted, rotated, non-separable, and extensible, and is continuous but differentiable only on a set of points. In addition, multimodal functions contain many local optima and become increasingly complex and difficult to optimize as the dimension increases, which makes PSO, FA, and LF-FA more likely to become trapped in local optima and fail to find the global optimal solution. HFA-DLL uses the EAQ to store information on the better positions of each generation, which helps the double-level learning strategy update positions. Each particle inherits from its elite fireflies, improving convergence accuracy; this helps a stagnant dimension jump out of a local minimum and enhances the global search ability of HFA-DLL.
Thirdly, for the hybrid functions (F11–F20) at 30D, HFA-DLL finds the global optimal solution on F11, is second only to SHADE on F12–F13, F15, and F17–F18, and maintains its advantage on F14, F16, and F19–F20. At 50D, HFA-DLL is weaker than SHADE on F12, F15, F17, and F19–F20, and better than all other algorithms on F11, F13–F14, F16, and F18. A hybrid function is a composition of several different types of functions; it is a complicated optimization problem containing a large number of locally optimal solutions, and the suboptimal local optima are far from the global optimum. Within this category, the component functions often exhibit different features and properties, so multiple function types must be considered simultaneously. HFA-DLL retains its adaptive switch ratio when facing a hybrid function; it leverages the fusion of multiple strategies to balance exploration and exploitation and can search over a wider space, improving both global and local search capabilities.
Finally, for the composition functions (F21–F30) at 30D, HFA-DLL shows significant advantages on F21, F23, F26, and F28–F30, matches SHADE on F27, and is second only to SHADE and EE-FA on F22 and F25, respectively. For the 50D problems, HFA-DLL, AD-IFA, EE-FA, and LF-FA perform identically on F27 and F28; HFA-DLL outperforms the other six algorithms on F21, F23, F25–F26, and F29–F30, and is only slightly worse than SHADE on F22, F24, and F28. This is because a composition function is composed of multiple functions whose components often have local properties; each component is nonlinear, so the overall complexity is high and such problems take a long time to solve. HFA-DLL not only expands the search range of the double-level learning strategy by learning from the elite fireflies but also enhances population diversity. Moreover, the competitive elimination mechanism improves the convergence speed and accuracy of the algorithm while ensuring that the optimal value is not lost. In addition, the stochastic disturbance strategy helps the elite fireflies jump out of local optima, minimizing the time wasted in wrong directions.
The effectiveness of the proposed HFA-DLL algorithm in terms of convergence accuracy, convergence speed, and reliability is verified on the CEC 2017 benchmark function suite. Experimental results show that HFA-DLL significantly outperforms the classical FA and PSO algorithms and various other metaheuristic evolutionary algorithms in statistical performance on most functions. Compared with the strong baseline SHADE, HFA-DLL is weaker on unimodal and multimodal functions, but shows more powerful processing ability than SHADE on complex optimization problems such as hybrid and composition functions.
According to the "no free lunch" theorem of optimization [24], "any elevated performance over one class of problems is offset by the performance over another class"; therefore, a general-purpose optimization algorithm is theoretically impossible. The reason HFA-DLL outperforms the other algorithms is that it uses a double-level learning strategy to learn and communicate between different elite firefly individuals and different dimensions, which effectively enhances population diversity and improves search efficiency. Competitive elimination replaces the worst fireflies to speed up convergence, improving the accuracy of solving complex optimization problems. The stochastic disturbance strategy helps the elite fireflies jump out of local optima, reducing the time wasted in wrong directions. Together, these strategies enable HFA-DLL to achieve optimal or suboptimal results on all 30-dimensional problems of the CEC 2017 benchmark function suite.
In order to evaluate the performance of all algorithms more rigorously, the Friedman test was conducted in this study. The mean values of all algorithms on the 29 test functions were used in the test, where a lower average ranking indicates better performance. As shown in Table 5, the seven algorithms in the 30-D scenario are ranked in the following order: HFA-DLL, SHADE, AD-IFA, PSO, EE-FA, LF-FA, and FA. HFA-DLL achieves the best average ranking, while FA exhibits the weakest performance.

4.3. Parameter Sensitivity

The influence of the parameters m and N P on the performance of HFA-DLL is analyzed. Different combinations of m and N P values are compared on the CEC 2017 benchmark function set, and the mean results are used to judge performance. The results of the sensitivity analysis for the parameters m and N P are shown in Table 6.
In the parameter sensitivity analysis, when one parameter is analyzed, the other is held at its standard value (i.e., m = 4 or N P = 30 ). HFA-DLL mainly relies on the elite archive queue (EAQ) to balance exploration and exploitation and to quickly find the global optimal solution. The EAQ stores excellent information from multiple generations of elite fireflies and, at the same time, helps the double-level learning strategy to find new optimal solutions. The EAQ has a capacity of 4 m , where m is the number of elite fireflies selected from the population at each generation. The parameter m therefore determines both the size of the EAQ and a firefly’s opportunity to learn from other good individuals, which is important for solution quality. Accordingly, different values of m were tested on the CEC 2017 benchmark function suite over 30 independent runs. The experimental data indicate that m = 4 is an appropriate setting: a smaller m gives a firefly fewer opportunities to learn from other elite fireflies, which reduces population diversity, whereas a larger m increases the exploration ability of HFA-DLL but also significantly increases the computational cost. The parameter N P also influences the performance of HFA-DLL: when N P = 20 , the small population size leads to poor global search ability and weaker overall performance on the multimodal and hybrid functions, while a larger population size wastes computational resources. Therefore, HFA-DLL achieves the best overall performance with N P = 30 and m = 4 .

4.4. Strategy Effectiveness

HFA-DLL combines three strategies: the double-level learning strategy, the competitive elimination mechanism, and the stochastic disturbance strategy (detailed in Section 3). To verify their effectiveness, HFA-DLL(A), HFA-DLL(A + B), HFA-DLL(B + C), HFA-DLL(A + C), and HFA-DLL were compared on the CEC 2017 benchmark function suite, where A denotes the double-level learning strategy, B the competitive elimination mechanism, and C the stochastic disturbance strategy. Thus, HFA-DLL(A) uses only the double-level learning strategy, HFA-DLL(A + B) combines the double-level learning strategy with the competitive elimination mechanism, HFA-DLL(B + C) combines the competitive elimination mechanism with the stochastic disturbance strategy, HFA-DLL(A + C) combines the double-level learning strategy with the stochastic disturbance strategy, and HFA-DLL is the complete algorithm with all three strategies. The averages over 30 runs are shown in Table 7 and Figure 3, with the best results obtained by the five algorithms shown in bold; the “best” row gives the number of times the corresponding algorithm finds the best solution.
From Table 7 and Figure 3, we can see that the double-level learning strategy in HFA-DLL(A) is effective on the unimodal function F1, the multimodal functions (F4–F9), and the hybrid functions (F11–F14, F16–F20), although its performance on the composition functions is less pronounced. Unimodal functions have no local optima, so ideal solutions become easy to obtain as the convergence rate increases; the double-level learning strategy speeds up the firefly search and thus helps HFA-DLL(A) find the global optimal solution quickly on these functions. The multimodal functions (F4–F9) have many local optima in the solution space and become increasingly complex as the dimensionality grows. While extending the firefly search range makes optimal solutions easier to find on unimodal functions, it is not beneficial on multimodal functions, so balancing the exploration and exploitation capabilities of fireflies is important; here, the double-level learning strategy together with the adaptive switching mechanism helps fireflies escape local optima. Hybrid and composition functions are composed of several base functions and therefore combine multiple properties. HFA-DLL uses the EAQ and the double-level learning strategy to widen the search horizon around excellent solutions, which increases the probability and accuracy of finding the global optimum, while the adaptive switching mechanism balances global and local search. It is not hard to conclude from Table 7 and Figure 3 that the double-level learning strategy is effective under the same number of iterations.
The experimental results show that adding the competitive elimination mechanism and the stochastic disturbance strategy further improves the results on the unimodal, multimodal, and hybrid functions, although the optimal solution still cannot be located on the composition functions (F25–F29). The competitive elimination mechanism increases the probability that the worst firefly jumps out of a local optimum and raises the convergence rate of the whole swarm: by learning from elite fireflies, it improves the likelihood of escaping local optima in the solution space, strengthens the exploitation power of the algorithm, and yields better solutions on most functions of the test set. For complex composition functions, where multiple local optima lie close to each other, elite-guided competitive elimination cannot find the optimal value when the elite fireflies themselves are trapped in a local optimum. Therefore, the stochastic disturbance strategy is used in the global search to help the elite fireflies jump out of local optima and minimize the time wasted in the wrong direction. The experimental results validate the design expectation that the stochastic disturbance strategy focuses on global exploration while the competitive elimination mechanism focuses on local exploitation.

5. Conclusions and Future Work

In this study, we integrated three strategies into AD-IFA to obtain the HFA-DLL algorithm. To better balance the exploration and exploitation behavior of the firefly algorithm, the adaptive search strategy of the AD-IFA algorithm is retained. At the same time, a double-level learning strategy is designed to effectively enhance population diversity and avoid premature convergence. Then, a competitive elimination mechanism is proposed to improve the accuracy of HFA-DLL in solving complex optimization problems. Finally, a stochastic disturbance strategy is designed to enhance the algorithm’s ability to jump out of local optima and reduce resources wasted in the wrong direction. The CEC 2017 test functions, including unimodal, multimodal, hybrid, and composition functions, were used to evaluate the algorithm. Experimental results show that the performance of HFA-DLL is significantly better than that of the other six algorithms on most test functions. It is worth noting that the performance of the proposed HFA-DLL algorithm is not satisfactory on some unimodal and multimodal functions; the reasons for this are complex, and the underlying mechanisms deserve further in-depth investigation. Combining other effective learning strategies with the firefly algorithm may address this issue. Our follow-up research will also include applying the proposed optimization algorithm to other complex practical engineering problems.

Author Contributions

Conceptualization, Y.W.; Methodology, Y.Z. (Yubo Zhao); Formal analysis, Y.Z. (Ying Zhan); Writing—original draft, K.C.; Writing—review & editing, C.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Key Research Projects of Henan Science and Technology Department under Grant 232102310427 and 232102211058, in part by the Research and Practice Project of Research Teaching Reform in Henan Undergraduate University under Grant 2022SYJXLX114, in part by the Henan Science and Technology Think Tank Research Project under Grant HNKJZK-2023-51B, and in part by the Special Research Project for the Construction of Provincial Demonstration Schools at Nanyang University of Technology under Grant SFX202314.

Institutional Review Board Statement

Not applicable for studies not involving humans or animals.

Informed Consent Statement

Not applicable for studies not involving humans.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Cui, L.; Li, G.; Wang, X.; Lin, Q.; Chen, J.; Lu, N.; Lu, J. A Ranking-Based Adaptive Artificial Bee Colony Algorithm for Global Numerical Optimization. Inf. Sci. 2017, 417, 169–185. [Google Scholar] [CrossRef]
  2. Kesemen, O.; Özkul, E.; Tezel, Ö.; Tiryaki, B.K. Artificial locust swarm optimization algorithm. Soft Comput. 2023, 27, 5663–5701. [Google Scholar] [CrossRef]
  3. Gandomi, A.H.; Yang, X.S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 17–35. [Google Scholar] [CrossRef]
  4. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  5. Zhu, Q.; Tang, X.; Li, Y.; Yeboah, M.O. An improved differential-based harmony search algorithm with linear dynamic domain. Knowl.-Based Syst. 2020, 187, 104809.1–104809.14. [Google Scholar] [CrossRef]
  6. Iscan, H.; Kiran, M.S.; Gunduz, M. A novel candidate solution generation strategy for fruit fly optimizer. IEEE Access 2019, 7, 130903–130921. [Google Scholar] [CrossRef]
  7. Kiran, M.S. TSA: Tree-seed algorithm for continuous optimization. Expert Syst. Appl. 2015, 42, 6686–6698. [Google Scholar] [CrossRef]
  8. Zhao, F.A.; Qin, S.A.; Zhang, Y.B.; Ma, W.C.; Zhang, C.D.; Song, H.A. A two-stage differential biogeography-based optimization algorithm and its performance analysis. Expert Syst. Appl. 2019, 115, 329–345. [Google Scholar] [CrossRef]
  9. Cui, L.; Li, G.; Lin, Q.; Chen, J.; Lu, N. Adaptive differential evolution algorithm with novel mutation strategies in multiple sub-populations. Comput. Oper. Res. 2016, 67, 155–173. [Google Scholar] [CrossRef]
  10. Yang, X. Firefly Algorithms for Multimodal Optimization; Springer: Berlin/Heidelberg, Germany, 2009; pp. 169–178. [Google Scholar]
  11. Wu, J.; Wang, Y.G.; Burrage, K.; Tian, Y.C.; Lawson, B.; Ding, Z. An improved firefly algorithm for global continuous optimization problems. Expert Syst. Appl. 2020, 149, 113340. [Google Scholar] [CrossRef]
  12. Wang, H.; Wang, W.; Cui, Z.; Zhou, X.; Zhao, J.; Li, Y. A new dynamic firefly algorithm for demand estimation of water resources. Inf. Sci. 2018, 438, 95–106. [Google Scholar] [CrossRef]
  13. Wang, Y.; Wang, B.; Li, Z.; Xu, C. A novel particle swarm optimization based on hybrid-learning model. Math. Biosci. Eng. 2023, 20, 7056–7087. [Google Scholar] [CrossRef] [PubMed]
  14. Paula, L.; Soares, A.S.; Soares, T.W.L.; Delbem, A.; Coelho, C.J.; Filho, A. Parallelization of a Modified Firefly Algorithm using GPU for Variable Selection in a Multivariate Calibration Problem. Int. J. Nat. Comput. Res. 2014, 4, 31–42. [Google Scholar] [CrossRef]
  15. Aydilek, I.B. A hybrid firefly and particle swarm optimization algorithm for computationally expensive numerical problems. Appl. Soft Comput. 2018, 66, 232–249. [Google Scholar] [CrossRef]
  16. Farshi, T.R.; Ardabili, A.K. A hybrid firefly and particle swarm optimization algorithm applied to multilevel image thresholding. Multimed. Syst. 2021, 27, 125–142. [Google Scholar] [CrossRef]
  17. Ch, S.; Sohani, S.K.; Kumar, D.; Malik, A.; Chahar, B.R.; Nema, A.K.; Panigrahi, B.K.; Dhiman, R.C. A Support Vector Machine-Firefly Algorithm based forecasting model to determine malaria transmission. Neurocomputing 2014, 129, 279–288. [Google Scholar] [CrossRef]
  18. Yang, X.S. Firefly Algorithm, Levy Flights and Global Optimization; Springer: Berlin/Heidelberg, Germany, 2010; pp. 209–218. [Google Scholar]
  19. Jensi, R.; Jiji, G.W. An enhanced particle swarm optimization with levy flight for global optimization. Appl. Soft Comput. 2016, 43, 248–261. [Google Scholar] [CrossRef]
  20. Wang, F.; Zhang, H.; Li, K.; Lin, Z.; Yang, J.; Shen, X.L. A hybrid particle swarm optimization algorithm using adaptive learning strategy. Inf. Sci. 2018, 436, 162–177. [Google Scholar] [CrossRef]
  21. Liu, J.; Shi, J.; Hao, F.; Dai, M.; Zhang, X. A novel enhanced exploration firefly algorithm for global continuous optimization problems. Eng. Comput. 2022, 38, 4479–4500. [Google Scholar] [CrossRef]
  22. Tanabe, R.; Fukunaga, A. Success-history based parameter adaptation for Differential Evolution. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 71–78. [Google Scholar]
  23. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-Parameter Optimization; Technical Report; National University of Defense Technology: Changsha, China; Kyungpook National University: Daegu, Republic of Korea; Nanyang Technological University: Singapore, 2017. [Google Scholar]
  24. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
Figure 1. The main framework of HFA-DLL.
Figure 2. Evolutionary curves of different algorithms on 30-D.
Figure 3. Evolutionary curves of different strategies.
Table 1. Running time(s) of different algorithms on 30-D functions.
Func.    PSO      FA       LF-FA    EE-FA    SHADE    AD-IFA   HFA-DLL
F1       4.721    0.653    3.114    1.681    5.048    3.251    3.714
F3       4.668    0.664    3.097    1.680    6.465    3.159    3.712
F4       4.596    0.639    3.109    1.675    5.352    3.145    3.072
F5       4.825    0.611    3.139    1.691    5.977    3.139    3.733
F6       7.754    0.857    3.332    1.853    7.005    3.354    3.909
F7       5.188    0.642    3.143    1.711    6.150    3.184    3.779
F8       4.951    0.624    3.118    1.756    5.871    3.172    3.725
F9       5.141    0.680    3.148    1.691    7.201    3.177    3.721
F10      6.578    0.752    3.234    1.746    7.489    3.281    3.078
F11      4.960    0.622    3.116    1.742    5.410    3.136    3.872
F12      5.777    0.737    3.179    1.748    6.226    3.195    3.715
F13      5.254    0.691    3.162    1.726    5.842    3.180    3.791
F14      5.862    0.678    3.199    1.750    6.061    3.241    3.751
F15      5.052    0.624    3.149    1.727    5.770    3.167    3.797
F16      5.790    0.686    3.164    1.735    6.678    3.196    3.739
F17      6.194    0.749    3.211    1.747    9.577    3.242    3.792
F18      5.360    0.625    3.157    1.738    6.010    3.203    3.774
F19      8.822    0.957    3.412    1.879    13.666   3.456    3.728
F20      11.069   0.849    3.270    1.791    10.000   3.316    3.882
F21      9.512    1.023    3.452    1.871    10.680   3.489    4.016
F22      11.069   1.157    3.565    1.911    10.684   3.582    4.125
F23      12.152   1.256    3.663    1.948    13.201   3.662    4.226
F24      11.593   1.099    3.593    1.931    12.578   3.619    4.190
F25      11.781   1.248    3.609    1.934    11.809   3.637    1.201
F26      14.164   1.431    3.779    2.020    15.476   3.804    1.394
F27      15.825   1.475    3.885    2.092    16.238   3.943    1.515
F28      11.769   1.478    3.757    2.012    14.030   3.779    1.351
F29      11.160   1.109    3.495    1.097    14.339   3.593    1.163
F30      13.701   1.350    3.723    1.992    10.458   3.755    1.311
Table 2. Details of CEC 2017 benchmark functions.
No.  Function                                                  Search Range     F* = F(x*)
Unimodal functions
  1  Shifted and Rotated Bent Cigar Function                   [−100, 100]^D     100
  3  Shifted and Rotated Zakharov Function                     [−100, 100]^D     300
Simple multimodal functions
  4  Shifted and Rotated Rosenbrock’s Function                 [−100, 100]^D     400
  5  Shifted and Rotated Rastrigin’s Function                  [−100, 100]^D     500
  6  Shifted and Rotated Expanded Scaffer’s F6 Function        [−100, 100]^D     600
  7  Shifted and Rotated Lunacek Bi-Rastrigin Function         [−100, 100]^D     700
  8  Shifted and Rotated Non-Continuous Rastrigin’s Function   [−100, 100]^D     800
  9  Shifted and Rotated Levy Function                         [−100, 100]^D     900
 10  Shifted and Rotated Schwefel’s Function                   [−100, 100]^D    1000
Hybrid functions
 11  Hybrid Function 1 (N = 3)                                 [−100, 100]^D    1100
 12  Hybrid Function 2 (N = 3)                                 [−100, 100]^D    1200
 13  Hybrid Function 3 (N = 3)                                 [−100, 100]^D    1300
 14  Hybrid Function 4 (N = 4)                                 [−100, 100]^D    1400
 15  Hybrid Function 5 (N = 4)                                 [−100, 100]^D    1500
 16  Hybrid Function 6 (N = 4)                                 [−100, 100]^D    1600
 17  Hybrid Function 6 (N = 5)                                 [−100, 100]^D    1700
 18  Hybrid Function 6 (N = 5)                                 [−100, 100]^D    1800
 19  Hybrid Function 6 (N = 5)                                 [−100, 100]^D    1900
 20  Hybrid Function 6 (N = 6)                                 [−100, 100]^D    2000
Composition functions
 21  Composition Function 1 (N = 3)                            [−100, 100]^D    2100
 22  Composition Function 2 (N = 3)                            [−100, 100]^D    2200
 23  Composition Function 3 (N = 4)                            [−100, 100]^D    2300
 24  Composition Function 4 (N = 4)                            [−100, 100]^D    2400
 25  Composition Function 5 (N = 5)                            [−100, 100]^D    2500
 26  Composition Function 6 (N = 5)                            [−100, 100]^D    2600
 27  Composition Function 7 (N = 6)                            [−100, 100]^D    2700
 28  Composition Function 8 (N = 6)                            [−100, 100]^D    2800
 29  Composition Function 9 (N = 3)                            [−100, 100]^D    2900
 30  Composition Function 10 (N = 3)                           [−100, 100]^D    3000
Note: x * stands for the global optimum, F ( · ) is the fitness value; F2 has been excluded because it shows unstable behavior.
Table 3. Comparison of different algorithms on 30-D functions.
Func.  Criteria  PSO  FA  LF-FA  EE-FA  SHADE  AD-IFA  HFA-DLL
F1Mean 1.57 × 10 + 8 - 6.85 × 10 + 10 - 3.31 × 10 + 6 - 8.13 × 10 + 5 - 1 . 00 × 10 + 2 + 6.62 × 10 + 5 - 6.30 × 10 + 3
Std 9.08 × 10 + 7 1.21 × 10 + 10 7.52 × 10 + 5 3.52 × 10 + 5 1.00 × 10 14 2.53 × 10 + 5 5.89 × 10 + 2
F3Mean 3.34 × 10 + 3 + 2.25 × 10 + 5 - 2.66 × 10 + 4 - 2.48 × 10 + 4 - 3 . 00 × 10 02 + 1.81 × 10 + 3 + 7.44 × 10 + 3
Std 5.74 × 10 + 2 2.37 × 10 + 4 1.16 × 10 + 4 1.52 × 10 + 4 2.38 × 10 13 7.15 × 10 + 2 1.70 × 10 + 2
F4Mean 5.77 × 10 + 2 - 3.38 × 10 + 4 - 5.91 × 10 + 2 - 4.86 × 10 + 2 - 4.60 × 10 + 2 - 4.82 × 10 + 2 - 4.00 × 10 + 2
Std 3.79 × 10 + 1 7.80 × 10 + 3 6.21 × 10 + 1 3.11 × 10 + 1 4.18 × 10 + 0 2.35 × 10 + 1 4.94 × 10 2
F5Mean 7.69 × 10 + 2 - 9.02 × 10 + 2 - 9.43 × 10 + 2 - 7.98 × 10 + 2 - 5.98 × 10 + 2 - 1.15 × 10 + 3 - 5.92 × 10 + 2
Std 3.58 × 10 + 1 8.03 × 10 + 1 5.17 × 10 + 1 3.10 × 10 + 1 7.93 × 10 + 0 7.79 × 10 + 1 3.60 × 10 + 1
F6Mean 6.58 × 10 + 2 - 6.73 × 10 + 2 - 6.66 × 10 + 2 - 6.62 × 10 + 2 - 6.00 × 10 + 2 6.41 × 10 + 2 - 6.00 × 10 + 2
Std 1.45 × 10 + 1 6.78 × 10 + 0 6.44 × 10 + 0 1.70 × 10 + 1 7.16 × 10 2 3.17 × 10 + 0 1.95 × 10 + 0
F7Mean 9.77 × 10 + 2 - 2.65 × 10 + 3 - 1.30 × 10 + 3 - 1.16 × 10 + 3 - 7 . 60 × 10 + 2 + 1.28 × 10 + 3 - 8.40 × 10 + 2
Std 2.64 × 10 + 2 1.49 × 10 + 2 1.21 × 10 + 2 9.30 × 10 + 1 1.03 × 10 + 1 1.73 × 10 + 2 6.03 × 10 + 1
F8Mean 9.84 × 10 + 2 - 1.16 × 10 + 3 - 1.11 × 10 + 3 - 9.78 × 10 + 3 - 8 . 40 × 10 + 2 + 1.04 × 10 + 3 - 9.02 × 10 + 2
Std 3.67 × 10 + 1 4.82 × 10 + 1 6.93 × 10 + 1 6.67 × 10 + 1 6.89 × 10 + 0 5.41 × 10 + 1 1.50 × 10 + 1
F9Mean 4.62 × 10 + 3 - 7.60 × 10 + 4 - 9.76 × 10 + 3 - 1.42 × 10 + 3 + 9 . 21 × 10 + 2 + 9.15 × 10 + 3 - 1.72 × 10 + 3
Std 1.44 × 10 + 3 1.45 × 10 + 3 1.79 × 10 + 3 1.01 × 10 + 3 1.98 × 10 + 1 2.75 × 10 + 3 1.39 × 10 + 3
F10Mean 6.84 × 10 + 3 - 5.97 × 10 + 3 - 6.13 × 10 + 3 - 6.04 × 10 + 3 - 3.40 × 10 + 3 - 5.86 × 10 + 3 - 3.34 × 10 + 3
Std 3.86 × 10 + 2 2.76 × 10 + 2 8.56 × 10 + 2 5.33 × 10 + 2 2.64 × 10 + 2 2.12 × 10 + 2 3.49 × 10 + 2
F11Mean 1.29 × 10 + 3 - 1.29 × 10 + 3 - 1.45 × 10 + 3 - 2.19 × 10 + 3 - 1.29 × 10 + 3 - 1.36 × 10 + 3 - 1.10 × 10 + 3
Std 5.16 × 10 + 1 1.16 × 10 + 3 1.04 × 10 + 2 5.87 × 10 + 1 7.41 × 10 + 1 5.15 × 10 + 1 1.67 × 10 1
F12Mean 2.05 × 10 + 7 - 1.23 × 10 + 10 - 6.54 × 10 + 6 - 1.97 × 10 + 6 - 2 . 76 × 10 + 3 + 6.12 × 10 + 6 - 7.73 × 10 + 5
Std 1.70 × 10 + 7 2.12 × 10 + 9 3.29 × 10 + 6 2.18 × 10 + 5 9.06 × 10 + 2 2.93 × 10 + 6 7.82 × 10 + 2
F13Mean 2.17 × 10 + 6 - 6.53 × 10 + 9 - 1.16 × 10 + 5 - 2.63 × 10 + 5 - 2 . 77 × 10 + 3 + 7.75 × 10 + 4 - 7.02 × 10 + 3
Std 4.99 × 10 + 5 3.41 × 10 + 9 9.51 × 10 + 4 7.84 × 10 + 4 1.21 × 10 + 1 5.50 × 10 + 4 3.71 × 10 + 2
F14Mean 4.31 × 10 + 3 - 2.74 × 10 + 6 - 1.67 × 10 + 4 - 4.24 × 10 + 5 - 1.53 × 10 + 3 - 3.35 × 10 + 4 - 1.51 × 10 + 3
Std 3.72 × 10 + 3 1.60 × 10 + 6 3.74 × 10 + 3 2.26 × 10 + 4 8.30 × 10 + 1 1.82 × 10 + 4 1.35 × 10 + 3
F15Mean 3.19 × 10 + 3 - 1.81 × 10 + 5 - 5.27 × 10 + 4 - 1.52 × 10 + 5 - 1 . 74 × 10 + 3 + 1.60 × 10 + 4 - 2.83 × 10 + 3
Std 1.07 × 10 + 3 3.02 × 10 + 4 3.13 × 10 + 4 4.21 × 10 + 4 4.19 × 10 + 1 7.95 × 10 + 3 7.11 × 10 + 2
F16Mean 2.53 × 10 + 3 - 4.00 × 10 + 3 - 3.52 × 10 + 3 - 2.75 × 10 + 3 - 2.18 × 10 + 3 - 2.89 × 10 + 3 - 2.11 × 10 + 3
Std 7.29 × 10 + 2 2.41 × 10 + 2 3.54 × 10 + 2 5.86 × 10 + 2 2.68 × 10 + 2 8.91 × 10 + 1 3.15 × 10 + 2
F17Mean 2.66 × 10 + 3 - 2.52 × 10 + 3 - 3.04 × 10 + 3 - 3.75 × 10 + 3 - 1 . 78 × 10 + 3 + 2.91 × 10 + 3 - 2.05 × 10 + 3
Std 6.06 × 10 + 2 1.04 × 10 + 3 3.01 × 10 + 2 2.07 × 10 + 2 5.38 × 10 + 1 1.63 × 10 + 2 3.22 × 10 + 2
F18Mean 3.99 × 10 + 5 - 4.45 × 10 + 5 - 2.00 × 10 + 5 - 1.59 × 10 + 6 - 2 . 02 × 10 + 3 + 3.89 × 10 + 5 - 1.20 × 10 + 5
Std 2.34 × 10 + 5 1.31 × 10 + 4 5.43 × 10 + 4 4.66 × 10 + 5 1.29 × 10 + 2 1.16 × 10 + 5 5.21 × 10 + 2
F19Mean 6.55 × 10 + 3 - 1.48 × 10 08 - 2.54 × 10 + 5 - 1.23 × 10 + 5 - 2.13 × 10 + 3 - 1.10 × 10 + 5 - 2.00 × 10 + 3
Std 1.11 × 10 + 3 1.24 × 10 07 7.53 × 10 + 4 6.73 × 10 + 4 5.32 × 10 + 1 4.55 × 10 + 4 7.77 × 10 + 2
F20Mean 2.67 × 10 + 3 - 3.07 × 10 + 3 - 2.87 × 10 + 3 - 2.60 × 10 + 3 - 2.11 × 10 + 3 - 2.88 × 10 + 3 - 2.03 × 10 + 3
Std 8.90 × 10 + 1 2.30 × 10 + 2 1.36 × 10 + 2 4.09 × 10 + 2 9.02 × 10 + 1 2.79 × 10 + 2 2.27 × 10 + 2
F21Mean 2.62 × 10 + 3 - 2.76 × 10 + 3 - 2.61 × 10 + 3 - 2.60 × 10 + 3 - 2.46 × 10 + 3 - 2.52 × 10 + 3 - 2.39 × 10 + 3
Std 6.24 × 10 + 1 1.50 × 10 + 1 1.21 × 10 + 2 7.14 × 10 + 1 5.72 × 10 + 1 3.72 × 10 + 1 4.98 × 10 + 1
F22Mean 7.33 × 10 + 3 - 9.12 × 10 + 3 - 7.22 × 10 + 3 - 6.15 × 10 + 3 - 2 . 80 × 10 + 3 + 6.84 × 10 + 3 - 4.91 × 10 + 3
Std 4.90 × 10 + 2 4.43 × 10 + 2 1.14 × 10 + 3 5.89 × 10 + 2 1.11 × 10 + 3 7.59 × 10 + 2 4.44 × 10 + 1
F23Mean 3.38 × 10 + 3 - 3.40 × 10 + 3 - 3.26 × 10 + 3 - 3.12 × 10 + 3 - 2.88 × 10 + 3 - 3.05 × 10 + 3 - 2.78 × 10 + 3
Std 1.20 × 10 + 2 2.15 × 10 + 2 7.48 × 10 + 1 1.16 × 10 + 2 7.24 × 10 + 1 1.51 × 10 + 2 1.93 × 10 + 1
F24Mean 3.39 × 10 + 3 - 3.85 × 10 + 3 - 3.58 × 10 + 3 - 3.94 × 10 + 3 - 2 . 86 × 10 + 3 + 3.58 × 10 + 3 - 3.05 × 10 + 3
Std 8.12 × 10 + 1 1.21 × 10 + 2 1.17 × 10 + 2 1.58 × 10 + 2 1.06 × 10 + 3 1.51 × 10 + 2 2.34 × 10 + 1
F25Mean 3.01 × 10 + 3 - 6.33 × 10 + 3 - 2.88 × 10 + 3 - 2 . 87 × 10 + 3 + 2.89 × 10 + 3 - 2.94 × 10 + 3 - 2.88 × 10 + 3
Std 7.00 × 10 + 2 8.12 × 10 + 2 2.67 × 10 + 1 7.04 × 10 1 2.53 × 10 + 1 1.07 × 10 + 1 2.47 × 10 + 1
F26Mean 8.22 × 10 + 3 - 1.27 × 10 + 4 - 6.71 × 10 + 3 - 9.70 × 10 + 3 - 4.02 × 10 + 3 - 7.09 × 10 + 3 - 2.90 × 10 + 3
Std 1.45 × 10 + 3 9.70 × 10 + 2 1.19 × 10 + 3 1.01 × 10 + 3 6.70 × 10 + 1 2.86 × 10 + 3 1.75 × 10 + 2
F27Mean 4.54 × 10 + 3 - 4.96 × 10 + 3 - 3 . 18 × 10 + 3 + 3.20 × 10 + 3 3.21 × 10 + 3 - 3.20 × 10 + 3 3.20 × 10 + 3
Std 1.40 × 10 4 6.00 × 10 + 2 1.20 × 10 + 1 3.87 × 10 + 2 1.78 × 10 4 1.04 × 10 4 2.48 × 10 4
F28Mean 3.32 × 10 + 3 - 9.90 × 10 + 3 - 3.30 × 10 + 3 3.40 × 10 + 3 - 3.35 × 10 + 3 - 3.30 × 10 + 3 3.30 × 10 + 3
Std 6.45 × 10 5 1.55 × 10 + 3 1.83 × 10 + 1 1.67 × 10 + 1 1.54 × 10 + 1 1.71 × 10 + 1 2.19 × 10 4
F29Mean 5.91 × 10 + 3 - 5.39 × 10 + 3 - 4.30 × 10 + 3 - 3.86 × 10 + 3 - 3.75 × 10 + 3 - 4.07 × 10 + 3 - 3.66 × 10 + 3
Std 5.82 × 10 + 2 1.77 × 10 + 3 2.27 × 10 + 2 1.44 × 10 + 2 1.02 × 10 + 1 2.32 × 10 + 2 3.85 × 10 + 2
F30Mean 6.57 × 10 + 4 - 1.47 × 10 + 8 - 4.13 × 10 + 5 - 8.87 × 10 + 5 - 5.60 × 10 + 3 - 2.57 × 10 + 5 - 5.39 × 10 + 3
Std 2.12 × 10 + 4 1.33 × 10 + 7 1.52 × 10 + 5 2.06 × 10 + 5 6.84 × 10 + 2 7.66 × 10 + 4 5.64 × 10 + 2
+/−/≈ (PSO, FA, LF-FA, EE-FA, SHADE, AD-IFA): 1/28/0, 0/29/0, 1/27/1, 2/26/1, 12/16/1, 1/26/2
Table 4. Comparison of different algorithms on 50-D functions.
Func.  Criteria  PSO  FA  LF-FA  EE-FA  SHADE  AD-IFA  HFA-DLL
F1Mean 1.30 × 10 + 9 - 1.42 × 10 + 11 - 1.24 × 10 + 7 - 7.32 × 10 + 6 - 1 . 00 × 10 + 2 + 9.72 × 10 + 10 - 2.79 × 10 + 4
Std 3.33 × 10 + 8 2.21 × 10 + 10 1.67 × 10 + 6 2.41 × 10 + 6 2.22 × 10 9 2.17 × 10 + 6 1.99 × 10 + 4
F3Mean 5.66 × 10 + 3 + 3.65 × 10 + 5 - 3.39 × 10 + 4 - 5.73 × 10 + 4 - 3 . 00 × 10 + 2 + 1.19 × 10 + 3 + 1.73 × 10 + 4
Std 9.93 × 10 + 2 9.79 × 10 + 4 2.24 × 10 + 4 9.22 × 10 + 4 3.73 × 10 13 1.27 × 10 + 2 2.68 × 10 + 2
F4Mean 7.83 × 10 + 2 - 4.03 × 10 + 4 - 3.84 × 10 + 6 - 5.09 × 10 + 2 - 4.67 × 10 + 2 - 6.17 × 10 + 2 - 4.46 × 10 + 2
Std 9.31 × 10 + 1 9.11 × 10 + 3 2.59 × 10 + 6 4.04 × 10 + 1 3.60 × 10 + 1 7.79 × 10 + 1 4.20 × 10 + 1
F5Mean 9.15 × 10 + 2 - 1.13 × 10 + 3 - 6.91 × 10 + 2 + 1.01 × 10 + 3 - 6 . 03 × 10 + 2 + 1.18 × 10 + 3 - 6.99 × 10 + 2
Std 2.81 × 10 + 1 3.94 × 10 + 1 7.66 × 10 + 1 8.06 × 10 + 1 2.78 × 10 + 1 5.24 × 10 + 1 3.30 × 10 + 1
F6Mean 6.76 × 10 + 2 - 6.88 × 10 + 2 - 6.91 × 10 + 2 - 6.70 × 10 + 2 - 6.01 × 10 + 2 6.80 × 10 + 2 - 6.01 × 10 + 2
Std 5.10 × 10 + 0 1.09 × 10 + 1 7.66 × 10 + 0 1.70 × 10 + 1 8.15 × 10 1 5.79 × 10 + 0 1.01 × 10 + 0
F7Mean 1.77 × 10 + 3 - 5.29 × 10 + 3 - 2.29 × 10 + 3 - 2.10 × 10 + 3 - 8 . 56 × 10 + 2 + 2.26 × 10 + 3 - 9.46 × 10 + 2
Std 1.38 × 10 + 2 3.30 × 10 + 2 1.72 × 10 + 2 1.16 × 10 + 2 1.01 × 10 + 1 2.23 × 10 + 2 5.18 × 10 + 1
F8Mean 1.29 × 10 + 3 - 1.57 × 10 + 3 - 1.62 × 10 + 3 - 1.31 × 10 + 3 - 8.83 × 10 + 3 - 1.58 × 10 + 3 - 1.00 × 10 + 3
Std 2.70 × 10 + 1 6.05 × 10 + 1 6.13 × 10 + 1 1.06 × 10 + 2 1.01 × 10 + 1 1.66 × 10 + 2 3.60 × 10 + 1
F9Mean 2.40 × 10 + 4 - 2.29 × 10 + 4 - 4.47 × 10 + 4 - 2.23 × 10 + 4 - 1 . 23 × 10 + 3 + 3.32 × 10 + 4 - 3.11 × 10 + 3
Std 4.02 × 10 + 3 4.63 × 10 + 3 4.48 × 10 + 3 2.97 × 10 + 3 1.66 × 10 + 2 3.84 × 10 + 3 2.88 × 10 + 3
F10Mean 1 . 95 × 10 + 3 + 9.38 × 10 + 3 - 8.64 × 10 + 3 - 8.99 × 10 + 3 - 5.27 × 10 + 3 - 8.99 × 10 + 3 - 5.23 × 10 + 3
Std 1.44 × 10 + 2 1.41 × 10 + 3 1.00 × 10 + 3 8.01 × 10 + 2 5.24 × 10 + 2 1.05 × 10 + 3 4.13 × 10 + 3
F11Mean 2.94 × 10 + 8 - 4.88 × 10 + 4 - 1.70 × 10 + 3 - 1.93 × 10 + 3 - 1.38 × 10 + 3 - 1.51 × 10 + 3 - 1.20 × 10 + 3
Std 2.78 × 10 + 8 1.16 × 10 + 4 1.51 × 10 + 2 2.34 × 10 + 2 9.03 × 10 + 1 5.68 × 10 + 1 4.49 × 10 + 1
F12Mean 3.78 × 10 + 6 - 6.26 × 10 + 10 - 4.19 × 10 + 7 - 1.00 × 10 + 8 - 7 . 42 × 10 + 3 + 3.18 × 10 + 7 - 3.47 × 10 + 6
Std 1.37 × 10 + 6 8.52 × 10 + 9 1.14 × 10 + 7 3.57 × 10 + 7 3.43 × 10 + 3 1.24 × 10 + 7 2.60 × 10 + 5
F13Mean 5.93 × 10 + 8 - 1.63 × 10 + 10 - 4.16 × 10 + 6 - 3.37 × 10 + 5 - 2.91 × 10 + 3 - 8.03 × 10 + 5 - 2.88 × 10 + 3
Std 2.55 × 10 + 8 8.41 × 10 + 9 2.80 × 10 + 5 1.52 × 10 + 5 1.08 × 10 + 3 7.20 × 10 + 4 1.01 × 10 + 3
F14Mean 9.12 × 10 + 4 - 3.20 × 10 + 7 - 2.94 × 10 + 4 - 7.40 × 10 + 5 - 5.21 × 10 + 3 - 3.39 × 10 + 5 - 4.29 × 10 + 3
Std 7.55 × 10 + 4 1.60 × 10 + 6 7.01 × 10 + 4 2.74 × 10 + 5 3.72 × 10 + 2 1.02 × 10 + 5 1.35 × 10 + 3
F15Mean 1.89 × 10 + 6 - 3.42 × 10 + 9 - 4.55 × 10 + 5 - 8.45 × 10 + 5 - 2 . 10 × 10 + 3 + 2.72 × 10 + 5 - 8.29 × 10 + 3
Std 1.37 × 10 + 6 3.02 × 10 + 9 1.20 × 10 + 5 3.34 × 10 + 5 2.36 × 10 + 2 7.95 × 10 + 3 1.34 × 10 + 3
F16Mean 5.18 × 10 + 3 - 6.45 × 10 + 3 - 4.93 × 10 + 3 - 4.41 × 10 + 3 - 2.66 × 10 + 3 - 4.70 × 10 + 3 - 2.47 × 10 + 3
Std 5.29 × 10 + 2 6.60 × 10 + 2 4.86 × 10 + 2 3.75 × 10 + 2 2.57 × 10 + 2 3.01 × 10 + 2 9.48 × 10 + 2
F17Mean 5.49 × 10 + 3 - 5.19 × 10 + 3 - 4.47 × 10 + 3 - 4.26 × 10 + 3 - 2 . 67 × 10 + 3 + 4.53 × 10 + 3 - 2.90 × 10 + 3
Std 7.06 × 10 + 2 2.41 × 10 + 3 3.80 × 10 + 2 4.38 × 10 + 2 2.04 × 10 + 2 4.85 × 10 + 2 2.97 × 10 + 2
F18Mean 2.49 × 10 + 6 - 9.22 × 10 + 5 - 4.79 × 10 + 5 - 2.54 × 10 + 6 - 2.08 × 10 + 3 - 1.15 × 10 + 6 - 2.03 × 10 + 3
Std 9.34 × 10 + 5 6.90 × 10 + 4 6.19 × 10 + 5 6.58 × 10 + 5 4.92 × 10 + 1 3.21 × 10 + 5 5.71 × 10 + 2
F19Mean 5.84 × 10 + 6 - 1.17 × 10 + 9 - 8.81 × 10 + 5 - 5.49 × 10 + 5 - 2 . 15 × 10 + 3 + 9.04 × 10 + 5 - 1.03 × 10 + 5
Std 1.92 × 10 + 6 1.06 × 10 + 9 3.28 × 10 + 5 2.44 × 10 + 5 9.71 × 10 + 1 2.32 × 10 + 5 1.51 × 10 + 3
F20 | Mean | 3.71×10^3 (−) | 3.75×10^3 (−) | 4.15×10^3 (−) | 4.37×10^3 (−) | 2.62×10^3 (+) | 4.04×10^3 (−) | 3.21×10^3
F20 | Std | 3.71×10^2 | 4.79×10^2 | 2.48×10^2 | 2.88×10^2 | 1.28×10^2 | 3.73×10^2 | 1.45×10^3
F21 | Mean | 3.00×10^3 (−) | 3.07×10^3 (−) | 3.13×10^3 (−) | 3.09×10^3 (−) | 2.48×10^3 (−) | 3.05×10^3 (−) | 2.45×10^3
F21 | Std | 7.24×10^1 | 5.76×10^1 | 1.03×10^2 | 1.27×10^2 | 2.81×10^1 | 5.61×10^1 | 2.37×10^1
F22 | Mean | 1.33×10^4 (−) | 1.09×10^4 (−) | 1.28×10^4 (−) | 1.13×10^4 (−) | 6.72×10^3 (+) | 1.13×10^4 (−) | 7.43×10^3
F22 | Std | 9.90×10^2 | 1.16×10^3 | 1.35×10^3 | 1.01×10^3 | 3.51×10^2 | 7.27×10^2 | 6.27×10^2
F23 | Mean | 4.78×10^3 (−) | 4.33×10^3 (−) | 3.83×10^3 (−) | 3.73×10^3 (−) | 2.93×10^3 (−) | 4.01×10^3 (−) | 2.91×10^3
F23 | Std | 2.20×10^2 | 2.74×10^2 | 2.32×10^2 | 1.41×10^2 | 2.18×10^1 | 1.56×10^2 | 9.38×10^1
F24 | Mean | 3.86×10^3 (−) | 4.70×10^3 (−) | 3.85×10^3 (−) | 4.81×10^3 (−) | 2.99×10^3 (+) | 5.14×10^3 (−) | 3.40×10^3
F24 | Std | 9.03×10^1 | 2.86×10^2 | 7.64×10^2 | 2.72×10^2 | 2.26×10^1 | 4.69×10^2 | 1.08×10^2
F25 | Mean | 3.80×10^3 (−) | 3.66×10^3 (−) | 3.04×10^3 (−) | 3.01×10^3 (−) | 3.03×10^3 (−) | 2.97×10^3 (≈) | 2.97×10^3
F25 | Std | 3.18×10^1 | 1.85×10^3 | 4.69×10^1 | 3.10×10^1 | 4.85×10^1 | 5.63×10^1 | 2.91×10^1
F26 | Mean | 1.54×10^4 (−) | 2.21×10^4 (−) | 9.77×10^3 (−) | 1.21×10^4 (−) | 4.87×10^3 (−) | 1.03×10^4 (−) | 4.42×10^3
F26 | Std | 7.63×10^2 | 3.31×10^3 | 3.76×10^3 | 1.19×10^3 | 2.11×10^2 | 1.10×10^3 | 1.32×10^2
F27 | Mean | 6.75×10^3 (−) | 6.92×10^3 (−) | 3.20×10^3 (≈) | 3.20×10^3 (≈) | 3.39×10^3 (−) | 3.20×10^3 (≈) | 3.20×10^3
F27 | Std | 5.49×10^2 | 6.05×10^2 | 1.10×10^1 | 3.87×10^−12 | 1.06×10^2 | 3.10×10^−2 | 2.52×10^−2
F28 | Mean | 3.63×10^3 (−) | 1.47×10^4 (−) | 3.30×10^3 (≈) | 3.30×10^3 (≈) | 3.29×10^3 (+) | 3.30×10^3 (≈) | 3.30×10^3
F28 | Std | 1.44×10^2 | 1.79×10^3 | 1.25×10^1 | 1.67×10^−14 | 2.04×10^1 | 1.19×10^1 | 2.19×10^−4
F29 | Mean | 8.91×10^3 (−) | 1.39×10^4 (−) | 5.10×10^3 (−) | 5.87×10^3 (−) | 3.78×10^3 (−) | 5.11×10^3 (−) | 3.68×10^3
F29 | Std | 7.82×10^2 | 4.02×10^3 | 4.95×10^2 | 6.22×10^2 | 2.74×10^2 | 2.16×10^2 | 1.49×10^2
F30 | Mean | 8.42×10^6 (−) | 1.85×10^9 (−) | 2.73×10^6 (−) | 2.83×10^6 (−) | 7.14×10^5 (−) | 3.08×10^6 (−) | 6.80×10^3
F30 | Std | 3.21×10^6 | 4.80×10^8 | 1.48×10^6 | 1.32×10^6 | 1.80×10^5 | 1.62×10^6 | 1.49×10^2
+/−/≈ | 2/27/0 | 0/29/0 | 1/26/2 | 0/27/2 | 13/15/1 | 1/25/3 | —
Table 5. Average rankings achieved by the Friedman test at 30-D.
No. | Algorithm | Average Ranking
1 | HFA-DLL | 1.59
2 | SHADE | 1.82
3 | AD-IFA | 3.97
4 | PSO | 4.45
5 | EE-FA | 4.66
6 | LF-FA | 4.80
7 | FA | 6.31
Table 6. Computational results of HFA-DLL with different m and NP settings over benchmark functions with 30 variables.
Func. | NP=60, m=3 | NP=60, m=4 | NP=60, m=5 | NP=20, m=3 | NP=20, m=4 | NP=20, m=5 | NP=30, m=3 | NP=30, m=5 | NP=30, m=4
F1 | 5.89×10^3 (+) | 5.64×10^3 (+) | 5.80×10^3 (+) | 6.30×10^3 (≈) | 6.91×10^3 (−) | 6.89×10^3 (−) | 6.41×10^3 (−) | 6.30×10^3 (≈) | 6.30×10^3
F3 | 7.68×10^3 (−) | 7.44×10^3 (≈) | 6.89×10^3 (+) | 7.87×10^3 (−) | 7.67×10^3 (−) | 7.01×10^3 (+) | 6.95×10^3 (+) | 7.41×10^3 (+) | 7.44×10^3
F4 | 4.00×10^2 (≈) | 4.01×10^2 (−) | 4.00×10^2 (≈) | 4.11×10^2 (−) | 4.13×10^2 (−) | 4.05×10^2 (−) | 4.00×10^2 (≈) | 4.03×10^2 (−) | 4.00×10^2
F5 | 6.41×10^2 (≈) | 6.41×10^2 (≈) | 6.15×10^2 (+) | 6.14×10^2 (+) | 6.41×10^2 (≈) | 6.45×10^2 (−) | 6.40×10^2 (+) | 6.41×10^2 (≈) | 6.41×10^2
F6 | 6.01×10^2 (−) | 6.00×10^2 (≈) | 6.00×10^2 (≈) | 6.00×10^2 (≈) | 6.13×10^2 (−) | 6.11×10^2 (−) | 6.00×10^2 (≈) | 6.01×10^2 (−) | 6.00×10^2
F7 | 8.50×10^2 (−) | 8.40×10^2 (≈) | 8.64×10^2 (−) | 8.51×10^2 (−) | 9.04×10^2 (−) | 8.82×10^2 (−) | 8.41×10^2 (−) | 8.40×10^2 (≈) | 8.40×10^2
F8 | 9.15×10^2 (−) | 9.02×10^2 (≈) | 9.36×10^2 (−) | 9.11×10^2 (−) | 9.20×10^2 (−) | 9.15×10^2 (−) | 9.09×10^2 (−) | 9.21×10^2 (−) | 9.02×10^2
F9 | 1.72×10^3 (≈) | 1.72×10^3 (≈) | 1.96×10^3 (−) | 1.68×10^3 (+) | 2.05×10^3 (−) | 1.72×10^3 (≈) | 1.73×10^3 (−) | 1.70×10^3 (+) | 1.72×10^3
F10 | 3.92×10^3 (−) | 3.34×10^3 (≈) | 3.35×10^3 (−) | 4.75×10^3 (−) | 4.56×10^3 (−) | 4.04×10^3 (−) | 3.34×10^3 (≈) | 3.44×10^3 (−) | 3.34×10^3
F11 | 1.18×10^3 (−) | 1.10×10^3 (≈) | 1.10×10^3 (≈) | 1.21×10^3 (−) | 1.22×10^3 (−) | 1.28×10^3 (−) | 1.18×10^3 (−) | 1.20×10^3 (−) | 1.10×10^3
F12 | 7.73×10^5 (≈) | 7.89×10^5 (−) | 7.59×10^5 (+) | 7.86×10^5 (−) | 7.80×10^5 (−) | 7.63×10^5 (+) | 7.73×10^5 (≈) | 7.89×10^5 (−) | 7.73×10^5
F13 | 6.98×10^3 (+) | 7.02×10^3 (≈) | 7.02×10^3 (≈) | 6.77×10^3 (+) | 6.71×10^3 (+) | 7.02×10^3 (≈) | 7.15×10^3 (−) | 6.67×10^3 (+) | 7.02×10^3
F14 | 1.72×10^3 (−) | 1.67×10^3 (−) | 1.51×10^3 (≈) | 1.85×10^3 (−) | 1.51×10^3 (≈) | 1.64×10^3 (−) | 1.51×10^3 (≈) | 1.49×10^3 (+) | 1.51×10^3
F15 | 2.83×10^3 (≈) | 2.64×10^3 (+) | 2.56×10^3 (+) | 2.80×10^3 (+) | 2.79×10^3 (+) | 2.84×10^3 (−) | 2.83×10^3 (≈) | 3.00×10^3 (−) | 2.83×10^3
F16 | 3.08×10^3 (−) | 2.84×10^3 (−) | 3.06×10^3 (−) | 2.60×10^3 (−) | 2.59×10^3 (−) | 2.55×10^3 (−) | 2.11×10^3 (≈) | 2.36×10^3 (−) | 2.11×10^3
F17 | 2.05×10^3 (≈) | 2.11×10^3 (−) | 2.29×10^3 (−) | 2.23×10^3 (−) | 2.18×10^3 (−) | 2.05×10^3 (≈) | 2.30×10^3 (−) | 2.22×10^3 (−) | 2.05×10^3
F18 | 3.69×10^5 (−) | 3.85×10^5 (−) | 3.61×10^5 (−) | 2.54×10^5 (−) | 2.54×10^5 (−) | 2.86×10^5 (−) | 1.43×10^5 (−) | 1.59×10^5 (−) | 1.20×10^5
F19 | 2.28×10^3 (−) | 2.00×10^3 (≈) | 2.08×10^3 (−) | 2.00×10^3 (≈) | 2.28×10^3 (−) | 2.43×10^3 (−) | 2.22×10^3 (−) | 2.00×10^3 (≈) | 2.00×10^3
F20 | 2.03×10^3 (≈) | 2.18×10^3 (−) | 2.24×10^3 (−) | 2.19×10^3 (−) | 2.18×10^3 (−) | 2.03×10^3 (≈) | 2.03×10^3 (≈) | 2.19×10^3 (−) | 2.03×10^3
F21 | 2.43×10^3 (−) | 2.42×10^3 (−) | 2.43×10^3 (−) | 2.39×10^3 (≈) | 2.39×10^3 (≈) | 2.40×10^3 (−) | 2.39×10^3 (≈) | 2.43×10^3 (−) | 2.39×10^3
F22 | 5.54×10^3 (−) | 5.02×10^3 (−) | 5.40×10^3 (−) | 4.98×10^3 (−) | 4.35×10^3 (+) | 4.34×10^3 (+) | 4.96×10^3 (−) | 4.18×10^3 (+) | 4.91×10^3
F23 | 2.80×10^3 (−) | 2.81×10^3 (−) | 2.83×10^3 (−) | 2.78×10^3 (≈) | 2.82×10^3 (−) | 2.82×10^3 (−) | 2.83×10^3 (−) | 2.78×10^3 (≈) | 2.78×10^3
F24 | 3.13×10^3 (−) | 3.10×10^3 (−) | 3.12×10^3 (−) | 3.11×10^3 (−) | 3.05×10^3 (≈) | 3.05×10^3 (≈) | 3.16×10^3 (−) | 3.13×10^3 (−) | 3.05×10^3
F25 | 2.88×10^3 (≈) | 2.88×10^3 (≈) | 2.88×10^3 (≈) | 2.88×10^3 (≈) | 2.91×10^3 (−) | 2.91×10^3 (−) | 2.88×10^3 (≈) | 2.88×10^3 (≈) | 2.88×10^3
F26 | 4.20×10^3 (−) | 4.64×10^3 (−) | 4.68×10^3 (−) | 4.88×10^3 (−) | 4.08×10^3 (−) | 4.88×10^3 (−) | 3.38×10^3 (−) | 2.98×10^3 (−) | 2.90×10^3
F27 | 3.20×10^3 (≈) | 3.20×10^3 (≈) | 3.20×10^3 (≈) | 3.20×10^3 (≈) | 3.20×10^3 (≈) | 3.20×10^3 (≈) | 3.20×10^3 (≈) | 3.20×10^3 (≈) | 3.20×10^3
F28 | 3.30×10^3 (≈) | 3.30×10^3 (≈) | 3.30×10^3 (≈) | 3.30×10^3 (≈) | 3.30×10^3 (≈) | 3.30×10^3 (≈) | 3.30×10^3 (≈) | 3.30×10^3 (≈) | 3.30×10^3
F29 | 3.86×10^3 (−) | 3.65×10^3 (+) | 3.70×10^3 (−) | 3.69×10^3 (−) | 3.68×10^3 (−) | 3.67×10^3 (−) | 3.68×10^3 (−) | 3.76×10^3 (−) | 3.66×10^3
F30 | 5.59×10^3 (−) | 5.46×10^3 (−) | 5.39×10^3 (≈) | 5.34×10^3 (+) | 5.75×10^3 (−) | 5.24×10^3 (+) | 5.39×10^3 (≈) | 5.91×10^3 (−) | 5.39×10^3
+/−/≈ | 2/17/10 | 3/13/13 | 5/15/9 | 5/16/8 | 3/20/6 | 4/18/7 | 2/14/13 | 5/16/8 | —
Table 7. Comparison of the different strategy combinations on the 30-D functions.
Func. | HFA-DLL(A) | HFA-DLL(A + B) | HFA-DLL(B + C) | HFA-DLL(A + C) | HFA-DLL
F1 | 1.03×10^6 | 1.71×10^4 | 5.58×10^5 | 4.80×10^5 | 6.30×10^3
F3 | 7.05×10^2 | 5.01×10^4 | 1.01×10^3 | 1.26×10^4 | 7.44×10^3
F4 | 4.78×10^2 | 4.83×10^2 | 4.76×10^2 | 4.79×10^2 | 4.00×10^2
F5 | 7.54×10^2 | 6.89×10^2 | 6.79×10^2 | 7.86×10^2 | 5.79×10^2
F6 | 6.66×10^2 | 6.02×10^2 | 6.53×10^2 | 6.62×10^2 | 6.00×10^2
F7 | 1.20×10^3 | 9.57×10^2 | 1.13×10^3 | 1.12×10^3 | 8.40×10^2
F8 | 1.05×10^3 | 9.48×10^2 | 9.98×10^2 | 9.69×10^2 | 8.90×10^2
F9 | 6.72×10^3 | 4.61×10^3 | 5.33×10^3 | 6.44×10^3 | 1.72×10^3
F10 | 5.36×10^3 | 5.34×10^3 | 4.36×10^3 | 5.76×10^3 | 3.34×10^3
F11 | 1.29×10^3 | 1.23×10^3 | 1.17×10^3 | 1.31×10^3 | 1.10×10^3
F12 | 3.70×10^6 | 4.93×10^6 | 1.08×10^7 | 5.19×10^6 | 7.73×10^5
F13 | 1.18×10^5 | 2.27×10^4 | 4.12×10^4 | 2.74×10^5 | 1.60×10^3
F14 | 6.64×10^4 | 1.11×10^6 | 2.85×10^5 | 4.56×10^4 | 1.51×10^3
F15 | 9.28×10^4 | 1.93×10^3 | 1.62×10^4 | 2.40×10^4 | 2.83×10^3
F16 | 3.25×10^3 | 2.70×10^3 | 3.01×10^3 | 2.60×10^3 | 2.11×10^3
F17 | 2.27×10^3 | 2.19×10^3 | 2.29×10^3 | 2.57×10^3 | 2.05×10^3
F18 | 3.62×10^5 | 3.32×10^5 | 3.19×10^5 | 1.74×10^5 | 2.82×10^4
F19 | 3.04×10^5 | 2.17×10^3 | 1.22×10^5 | 1.72×10^5 | 2.00×10^3
F20 | 2.83×10^3 | 2.44×10^3 | 2.66×10^3 | 2.40×10^3 | 2.03×10^3
F21 | 2.57×10^3 | 2.42×10^3 | 2.47×10^3 | 2.64×10^3 | 2.39×10^3
F22 | 7.18×10^3 | 5.02×10^3 | 5.23×10^3 | 5.78×10^3 | 4.91×10^3
F23 | 3.44×10^3 | 2.82×10^3 | 3.25×10^3 | 3.24×10^3 | 2.78×10^3
F24 | 4.08×10^3 | 3.36×10^3 | 3.56×10^3 | 3.50×10^3 | 3.08×10^3
F25 | 2.95×10^3 | 2.89×10^3 | 2.89×10^3 | 2.88×10^3 | 2.87×10^3
F26 | 8.18×10^3 | 2.90×10^3 | 2.83×10^3 | 2.90×10^3 | 2.90×10^3
F27 | 3.20×10^3 | 3.20×10^3 | 3.20×10^3 | 3.20×10^3 | 3.20×10^3
F28 | 3.30×10^3 | 3.30×10^3 | 3.30×10^3 | 3.30×10^3 | 3.30×10^3
F29 | 4.32×10^3 | 4.11×10^3 | 4.71×10^3 | 3.86×10^3 | 3.66×10^3
F30 | 6.74×10^5 | 1.32×10^4 | 2.11×10^5 | 6.89×10^4 | 5.39×10^3
Best | 1 | 1 | 1 | 0 | 26
Wang, Y.; Zhao, Y.; Xu, C.; Zhan, Y.; Chen, K. A Novel Hybrid Firefly Algorithm with Double-Level Learning Strategy. Mathematics 2023, 11, 3569. https://doi.org/10.3390/math11163569
