4.1. Experimental Settings and Benchmark Functions
For all algorithms, the population size and the maximum number of fitness evaluations , where D is the dimension of the benchmark functions. In HFA-DLL, the random parameters , the fixed light absorption coefficient , and the attraction coefficient . In PSO, , , and the maximum velocity . In SHADE, and . The parameters of the other state-of-the-art algorithms (FA, LF-FA, EE-FA, and AD-IFA) were set following the guidelines in their original publications.
Due to the stochastic nature of these algorithms, each algorithm was run independently 30 times for statistical comparison. The mean and standard deviation are calculated to assess performance, and for each problem the best result is shown in bold. The results of HFA-DLL are compared with those of PSO, FA, LF-FA, EE-FA, SHADE, and AD-IFA by the Wilcoxon rank-sum test at the 0.05 significance level. The marker “−” indicates a result worse than that of HFA-DLL, “+” indicates a better result, and “≈” indicates a statistically equivalent result.
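As a concrete illustration of this protocol, the following minimal sketch applies the Wilcoxon rank-sum test to the 30 per-run results of two algorithms and returns the corresponding marker; the function name `compare_runs` and the synthetic data are our own assumptions, not part of the original experiments.

```python
# Minimal sketch of the statistical protocol above (assumed, not the authors' code).
import numpy as np
from scipy.stats import ranksums

def compare_runs(other_runs, hfa_dll_runs, alpha=0.05):
    """Wilcoxon rank-sum test on 30 independent runs; returns '+', '-', or '≈'."""
    _, p = ranksums(other_runs, hfa_dll_runs)
    if p >= alpha:                      # no statistically significant difference
        return "≈"
    # Significant difference: decide direction by mean error (lower is better).
    return "+" if np.mean(other_runs) < np.mean(hfa_dll_runs) else "-"

# Example with synthetic errors for two algorithms over 30 runs:
rng = np.random.default_rng(0)
print(compare_runs(rng.normal(1.5, 0.1, 30), rng.normal(1.0, 0.1, 30)))  # "-"
```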
The performance of HFA-DLL is evaluated on the CEC 2017 global optimization benchmark suite, which contains 30 benchmark functions. The suite can be divided into four categories: unimodal functions (F1–F3), multimodal functions (F4–F10), hybrid functions (F11–F20), and composition functions (F21–F30). Detailed information is given in Table 2 and in [23]. Because our algorithm produces significantly different results on each run of the F2 function, F2 is unstable and was excluded from the comparative experiments; only the remaining 29 functions of CEC 2017 are used.
4.2. Comparison with Other State-of-the-Art Algorithms on CEC 2017 for 30D/50D Problems
In Figure 2, Table 3, and Table 4, HFA-DLL is compared with three firefly algorithm variants and three traditional swarm intelligence algorithms on the CEC 2017 benchmark suite. Several observations and conclusions can be drawn from the experimental results.
Firstly, for the unimodal functions (F1–F3) at 30D, HFA-DLL outperforms almost all other algorithms and is only slightly worse than SHADE. At 50D, SHADE can find the global optimal solution for F1 and F3; HFA-DLL is inferior only to SHADE and better than the other five algorithms on the unimodal functions. For unimodal functions, HFA-DLL converges quickly and can rapidly find the optimal solution, because the double-level learning strategy converges fast on unimodal problems: by exchanging information between fireflies across different dimensions, the search scope is expanded and search efficiency is improved.
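To make this dimension-level exchange concrete, the sketch below shows one plausible form of it, in which stagnant dimensions of a firefly copy the corresponding dimensions from a better-performing donor; the stagnation test, donor choice, and jitter scale are illustrative assumptions, not the paper's exact update rule.

```python
# Hypothetical sketch of dimension-level information exchange between fireflies.
import numpy as np

def dimension_learning(x, donor, stagnant_dims, rng=None):
    """Copy stuck dimensions of firefly x from a better-performing donor."""
    if rng is None:
        rng = np.random.default_rng()
    trial = x.copy()
    for d in stagnant_dims:                          # exchange only stagnant dimensions
        trial[d] = donor[d] + rng.normal(0.0, 0.01)  # small jitter around the donor
    return trial

# Example: dimensions 1 and 3 of x have stagnated.
x = np.array([0.5, 2.0, -0.3, 4.0])
donor = np.array([0.4, 0.1, -0.2, 0.2])
print(dimension_learning(x, donor, stagnant_dims=[1, 3]))
```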
Secondly, for the multimodal functions (F4–F10) at 30D, the performance of HFA-DLL is second only to SHADE on F7–F9 and better than all other algorithms on F10; moreover, HFA-DLL can find the optimal solution on F4 and F6. At 50D, HFA-DLL is worse than SHADE on F5 and F7–F9, and outperforms all other algorithms on F4, F6, and F10. The functions in this category are shifted, rotated, non-separable, and scalable, and some are continuous but differentiable only on a set of points. In addition, multimodal functions contain many local optima and become increasingly complex and difficult to optimize as the dimension grows, which makes PSO, FA, and LF-FA more likely to become trapped in local optima and fail to find the global optimal solution. HFA-DLL uses the EAQ to store excellent information about the better positions in each generation of the population, which helps the double-level learning strategy update positions. Each firefly learns from the elite fireflies, improving convergence accuracy; this helps stagnant dimensions escape local minima and enhances the global search ability of HFA-DLL.
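The sketch below shows one way such an elite archive queue (EAQ) could be organized, assuming it retains the top-m fireflies from each of the last few generations; the class name `EliteArchiveQueue`, the fixed capacity, and the uniform sampling are our assumptions for illustration.

```python
# Illustrative EAQ: top-m fireflies per generation, oldest generation evicted.
import random
from collections import deque

class EliteArchiveQueue:
    def __init__(self, m, generations=5):
        self.m = m
        self.archive = deque(maxlen=generations)   # bounded queue of elite sets

    def update(self, population, fitness):
        """Store the m best individuals (lowest fitness) of this generation."""
        order = sorted(range(len(population)), key=lambda i: fitness[i])
        self.archive.append([population[i] for i in order[:self.m]])

    def sample_elite(self):
        """Draw one elite firefly for the double-level learning update."""
        return random.choice(random.choice(self.archive))

# Usage: eaq = EliteArchiveQueue(m=5); eaq.update(pop, fit); elite = eaq.sample_elite()
```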
Thirdly, for the hybrid functions (F11–F20) at 30D, HFA-DLL finds the global optimal solution on F11, is second only to SHADE on F12–F13, F15, and F17–F18, and maintains its advantage on F14, F16, and F19–F20. At 50D, HFA-DLL performs worse than SHADE on F12, F15, F17, and F19–F20, and better than all other algorithms on F11, F13–F14, F16, and F18. Each hybrid function is composed of several functions of different types, making it a complicated optimization problem with a large number of locally optimal solutions; moreover, the suboptimal local optima are far from the global optimum. Within this category, the component functions often exhibit different features and properties, and multiple types of functions must be considered simultaneously. When faced with hybrid functions, HFA-DLL relies on its adaptive switching ratio: it leverages the fusion of multiple strategies to balance exploration and exploitation, and it can search a wider space to improve both global and local search capabilities.
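One simple way to realize such an adaptive switching ratio is sketched below: a probability p selects between elite-level learning (exploitation) and dimension-level learning (exploration), and p is nudged toward whichever branch last improved the solution. The update rule and bounds are assumptions made for this example.

```python
# Hypothetical adaptive switching between the two learning levels.
import random

def choose_branch(p):
    """With probability p learn from an elite (exploitation), else per-dimension (exploration)."""
    return "elite" if random.random() < p else "dimension"

def adapt(p, branch, improved, step=0.05, lo=0.1, hi=0.9):
    """Shift the switching ratio toward the branch that just improved the solution."""
    if not improved:
        return p
    return min(hi, p + step) if branch == "elite" else max(lo, p - step)

# Usage: branch = choose_branch(p); ...; p = adapt(p, branch, improved)
```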
Finally, for the composition functions (F21–F30) at 30D, HFA-DLL shows significant advantages on F21, F23, F26, and F28–F30, matches SHADE on F27, and is second only to SHADE and EE-FA on F22 and F25, respectively. For the 50D problems, HFA-DLL, AD-IFA, EE-FA, and LF-FA perform identically on F27 and F28; HFA-DLL outperforms the other six algorithms on F21, F23, F25–F26, and F29–F30, and is only slightly worse than SHADE on F22, F24, and F28. Composition functions are built from multiple component functions, each nonlinear and often with local properties, so their complexity is high and they take a long time to solve. By learning from the elite fireflies, HFA-DLL not only expands the search range of the double-level learning strategy but also enhances population diversity. Moreover, the competitive elimination mechanism improves the convergence speed and accuracy of the algorithm while ensuring that the optimal value is not lost. In addition, a stochastic disturbance strategy helps the elite fireflies jump out of local optima, minimizing the time wasted searching in the wrong direction.
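A minimal sketch of such a stochastic disturbance step is given below, assuming a Gaussian perturbation of an elite firefly with greedy acceptance; the perturbation scale and acceptance rule are our assumptions, not the paper's exact operator.

```python
# Hypothetical stochastic disturbance: Gaussian jitter with greedy acceptance.
import numpy as np

def disturb(elite, f, lower, upper, sigma=0.1, rng=None):
    """Perturb an elite firefly; keep the trial only if it improves the objective."""
    if rng is None:
        rng = np.random.default_rng()
    trial = np.clip(elite + rng.normal(0.0, sigma, elite.shape), lower, upper)
    return trial if f(trial) < f(elite) else elite

# Example on the sphere function:
sphere = lambda x: float(np.sum(x**2))
print(disturb(np.array([0.2, -0.1]), sphere, -5.0, 5.0))
```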
The effectiveness of the proposed HFA-DLL algorithm in terms of convergence accuracy, convergence speed, and reliability is verified on the CEC 2017 benchmark suite. The experimental results show that HFA-DLL significantly outperforms the classical FA and PSO algorithms, as well as various other metaheuristic algorithms, in statistical performance on most functions. Compared with SHADE, HFA-DLL is weaker on the unimodal and multimodal functions, but shows stronger performance on complex optimization problems such as the hybrid and composition functions.
According to the “no free lunch” theorem of optimization [24], “any elevated performance over one class of problems is offset by the performance over another class”; a general-purpose optimization algorithm is therefore theoretically impossible. HFA-DLL outperforms the other algorithms because its double-level learning strategy enables learning and communication between different elite firefly individuals and between different dimensions, which effectively enhances population diversity and improves search efficiency. Competitive elimination replaces the worst fireflies to speed up convergence, improving accuracy on complex optimization problems, and the stochastic disturbance strategy helps the elite fireflies jump out of local optima, reducing the time wasted in the wrong direction. Working together, these strategies enable HFA-DLL to achieve optimal or suboptimal results on all 30-dimensional problems of the CEC 2017 benchmark suite.
To evaluate the performance of all algorithms more rigorously, the Friedman test was conducted in this study. The mean values of all algorithms on the 29 test functions were used, where a lower average ranking indicates better performance. As shown in Table 5, the seven algorithms in the 30D scenario rank in the following order: HFA-DLL, SHADE, AD-IFA, PSO, EE-FA, LF-FA, and FA. HFA-DLL achieves the best average ranking, while FA exhibits the weakest performance.
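The ranking procedure can be reproduced along the following lines, assuming `means` is a 29 × 7 array of mean errors (one row per function, one column per algorithm); the placeholder data are random and stand in for the values in Table 5.

```python
# Sketch of the Friedman ranking over 29 functions and 7 algorithms.
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

means = np.random.rand(29, 7)                    # placeholder for Table 5 data
ranks = np.apply_along_axis(rankdata, 1, means)  # rank the algorithms per function
print("average ranks:", ranks.mean(axis=0))      # lower average rank = better
stat, p = friedmanchisquare(*means.T)            # significance across algorithms
print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")
```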
4.3. Parameter Sensitivity
The influence of the parameters m (the number of elite fireflies) and the population size on the performance of HFA-DLL is analyzed. Different combinations of their values are compared on the CEC 2017 benchmark function set, using the average value to judge the results. The results of the sensitivity analysis are shown in Table 6.
In the parameter sensitivity analysis, when one parameter is analyzed, the other parameters are fixed at their standard values. HFA-DLL relies mainly on the elite archive queue (EAQ) to balance exploration and exploitation and to find the global optimal solution quickly. The EAQ stores excellent information from multiple generations of elite fireflies and, at the same time, helps the double-level learning strategy to find new optimal solutions. The size of the EAQ is determined by m, the number of elite fireflies selected from the population at each generation; m therefore also governs how much a firefly can learn from other good individuals, which is important for solution quality. Accordingly, different values of m were tested on the CEC 2017 benchmark function suite over 30 independent runs.
The experimental data reveal a clear trade-off: a smaller m gives each firefly fewer opportunities to learn from other elite fireflies, which decreases population diversity, while a larger m increases the exploration ability of HFA-DLL but also significantly increases the computational cost. The population size shows a similar influence: a small population leads to poor global search ability, with weaker overall performance on the multimodal and hybrid functions, whereas an overly large population wastes computational resources. With the intermediate settings of m and the population size reported in Table 6, HFA-DLL achieves the best overall performance.
4.4. Strategy Effectiveness
HFA-DLL has three strategies: the double-level learning strategy, the competitive elimination mechanism, and the stochastic disturbance strategy; the details of these strategies are given in Section 3. To verify their effectiveness, HFA-DLL(A), HFA-DLL(A + B), HFA-DLL(B + C), HFA-DLL(A + C), and the full HFA-DLL were compared on the CEC 2017 benchmark suite. Here, A denotes the double-level learning strategy, B the competitive elimination mechanism, and C the stochastic disturbance strategy; thus HFA-DLL(A) uses only the double-level learning strategy, HFA-DLL(A + B) combines the double-level learning strategy with the competitive elimination mechanism, HFA-DLL(B + C) combines the competitive elimination mechanism with the stochastic disturbance strategy, HFA-DLL(A + C) combines the double-level learning strategy with the stochastic disturbance strategy, and HFA-DLL is the standard algorithm with all three strategies. The averages over 30 runs are shown in Table 7 and Figure 3. The best results obtained by the five algorithms are shown in bold, and the “best” row gives the number of times each algorithm finds the best solution.
From Table 7 and Figure 3, we can see that the double-level learning strategy in HFA-DLL(A) is effective for the unimodal function F1, the multimodal functions (F4–F9), and the hybrid functions (F11–F14, F16–F20). Unimodal functions have no local optima, so a higher convergence rate directly yields better solutions; the double-level learning strategy speeds up the firefly search and therefore helps HFA-DLL(A) find the global optimal solution quickly on these functions. On the composition functions, however, its contribution is smaller and its performance is not significant. The multimodal functions (F4–F9) have many local optima in the solution space and become more and more complex as the dimensionality increases. While extending the firefly search range makes it easier to find optimal solutions for unimodal functions, it is not beneficial for multimodal functions, so balancing the exploration and exploitation capabilities of fireflies is essential; here the double-level learning strategy and the adaptive switching mechanism help fireflies escape from local optima. Hybrid and composition functions are not built from a single function and exhibit multiple properties. HFA-DLL uses the EAQ and the double-level learning strategy to extend the search horizon around excellent solutions, which increases the probability and accuracy of finding the global optimum, while the adaptive switching mechanism balances global and local search. It is therefore clear from Table 7 and Figure 3 that the double-level learning strategy is effective under the same number of iterations.
The experimental results show that, after adding the competitive elimination mechanism and the stochastic disturbance strategy, better results are obtained on the unimodal, multimodal, and hybrid functions, although the optimal solution still cannot be located on the composition functions (F25–F29). The competitive elimination mechanism increases the probability that the worst firefly jumps out of a local optimum and increases the convergence rate of the whole swarm: by learning from elite fireflies, it improves the likelihood of escaping local optima in the solution space, enhances the exploitation power of the algorithm, and yields better solutions for most functions in the test set. For complex composition functions, where multiple local optima lie close to each other, a competitive elimination mechanism driven by elite fireflies cannot find the optimal value when the elite fireflies are themselves trapped in a local optimum. Therefore, a stochastic disturbance strategy is used in the search for the global optimum; it helps the elite fireflies jump out of local optima and minimizes the time wasted in the wrong direction. The experimental results validate the design expectation that the stochastic disturbance strategy focuses on global exploration while the competitive elimination mechanism focuses on local exploitation.
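For concreteness, the sketch below shows one plausible form of the competitive elimination step, in which the k worst fireflies move toward randomly chosen elites; the value of k and the step rule are assumptions made for this illustration.

```python
# Hypothetical competitive elimination: the worst fireflies learn from elites.
import numpy as np

def competitive_elimination(pop, fit, elites, k=3, rng=None):
    """Replace the k worst fireflies with moves toward randomly chosen elites."""
    if rng is None:
        rng = np.random.default_rng()
    worst = np.argsort(fit)[-k:]                       # indices of the k worst (largest error)
    for i in worst:
        e = elites[rng.integers(len(elites))]
        pop[i] = pop[i] + rng.random() * (e - pop[i])  # partial move toward the elite
    return pop
```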