1. Introduction
The swarm intelligence algorithm is a common approach in computational intelligence and an emerging evolutionary computing technique. Its basic idea is to emulate the behavior of groups of ants, birds, bees, wolves, bacteria, and other organisms in nature and to form group intelligence through the exchange of information and cooperation among individuals in order to solve complex problems. Numerous researchers have carried out much work on such intelligent algorithms and proposed many novel algorithms, such as the grey wolf optimizer (GWO) [1], lightning search algorithm (LSA) [2], marine predators algorithm (MPA) [3], sine cosine algorithm (SCA) [4], salp swarm algorithm (SSA) [5], water cycle algorithm (WCA) [6], whale optimization algorithm (WOA) [7], cuckoo search (CS) [8], artificial bee colony (ABC) [9], and moth flame optimization (MFO) [10].
The Harris hawks optimization (HHO) algorithm is a novel swarm intelligence algorithm proposed by Heidari et al. [11], who were inspired by the chasing and escaping behavior between Harris hawks and their prey. The algorithm is simple in principle and has few parameters while offering a powerful global search capability, so it has received widespread attention and has been adopted in many engineering fields since its introduction. However, similar to other intelligent optimization algorithms, the basic Harris hawks algorithm is susceptible to defects such as low convergence precision and a tendency to fall into local optima during the search process when solving complex optimization problems. Numerous scholars have proposed different improvement schemes to address these shortcomings. Qu et al. [12] introduced a method of information exchange to increase the diversity of populations and proposed a nonlinear escape energy factor perturbed by chaotic interference to balance the local exploitation and global exploration of the algorithm. Xiaolong Liu et al. [13] optimized the method by setting up a square neighborhood topology with multiple subgroups to lead the individuals in each subgroup to explore randomly in both directions. Andi Tang et al. [14] introduced an elite hierarchy strategy to make full use of the dominant population, enhancing the population diversity and improving the convergence speed and accuracy of the algorithm. Kaveh et al. [15] combined HHO with the imperialist competitive algorithm (ICA) [16] into imperialist competitive Harris hawks optimization (ICHHO), which performed well on structural optimization problems. Elgamal et al. [17] applied chaotic maps in the initialization phase of HHO and refined the current best solution with the simulated annealing (SA) [18] algorithm to improve the exploitation ability of HHO. Wenyu Li [19] developed a form of HHO incorporating a novice protection tournament (NpTHHO), which adds a novice protection mechanism to better reallocate resources and introduces a variation mechanism in the exploration phase to further enhance the global search efficiency of the HHO algorithm. Essam H. et al. [20] combined HHO with a support vector machine (SVM) for the selection of chemical descriptors and compound activities in drug design and discovery. Wunnava et al. [21] introduced an adaptive improvement performed by differencing to address the limited exploration ability of HHO when the escape energy equals zero, which otherwise results in invalid random behavior at that stage, and applied the algorithm to image segmentation.
In summary, the main aim of improving the HHO algorithm is to strengthen its local exploitation and global exploration through various optimization strategies, thereby improving the convergence accuracy and overall performance of the algorithm before applying it in practical engineering. Among the existing improved versions of HHO, some are achieved by incorporating other algorithms, such as those in [15,17], while others improve one or several stages of the underlying algorithm. Compared with the improved algorithms that have been proposed, the strategies proposed in this paper cover a more comprehensive scope, including the population initialization phase, the population update phase, the escape energy factor, and the variation strategy. To overcome the weaknesses of the basic Harris hawks algorithm, such as low accuracy, slow convergence, and being easily trapped in local optima, this paper presents improved multi-strategy Harris hawks optimization (MSHHO). The main contributions of this research are as follows:
We propose an improved multi-strategy Harris hawks optimization algorithm. To compensate for the shortcomings of the algorithm, four strategies are adopted in this work to improve the basic HHO algorithm. First, the population is initialized using Sobol sequences to increase the variety of the population. Second, we incorporate elite opposition-based learning to improve the population diversity and quality. Furthermore, the energy update strategy of the basic HHO algorithm is optimized to enhance the exploration and exploitation capability of the algorithm in a nonlinear update manner. Finally, Gaussian walk learning is introduced to avoid the algorithm being trapped in a stagnant state and falling into a local optimum.
The performance of the proposed algorithm on 33 global optimization benchmark functions in multiple dimensions is investigated by comparing it with other novel swarm intelligence algorithms. The results suggest that MSHHO performs well. A Wilcoxon rank sum test is used to validate the effectiveness of the scheme, and the advantages of the algorithm are demonstrated by comparing it with other improved HHO algorithms. The original HHO algorithm is selected for comparison tests on each benchmark function in 100 and 500 dimensions to inspect the utility of the algorithm on high-dimensional problems.
We apply MSHHO to two engineering application problems to examine the practicality of the introduced algorithm, providing a new scheme for applying swarm intelligence algorithms in practical engineering.
This paper is organized as follows. Section 1 describes the current state of development of swarm intelligence algorithms and some existing strategies for improving the Harris hawks algorithm. Section 2 describes the basic principles of the basic Harris hawks algorithm. Section 3 details the improvement strategies introduced in this paper and gives the time complexity of the algorithm. Section 4 presents the experiments conducted to demonstrate the effectiveness of the proposed algorithm. Section 5 shows the application of the proposed algorithm to engineering optimization problems. Section 6 concludes the paper and provides an outlook for future work.
3. Improved Multi-Strategy Harris Hawks Optimization (MSHHO)
The HHO algorithm has multiple exploitation modes and shifts between them, which gives it good local exploitation ability but also makes it prone to falling into local optima. To remedy this deficiency, we introduce four improvement strategies into the original algorithm in this section; they are described in detail in the following subsections.
3.1. Sobol Sequence Population Initialization
The distribution of the initial solutions in the solution space largely affects the convergence speed and precision of an intelligent algorithm. In the basic HHO algorithm, the initial population is generated randomly. However, the individuals generated in this way are not homogeneously distributed throughout the exploration space, which in turn affects the convergence speed and precision of the algorithm. The Sobol sequence [22] is a deterministic low-discrepancy sequence whose points are distributed in the space as uniformly as possible compared to a random sequence. The initial population generated by the Sobol sequence can be expressed as follows (Equation (14)):
$$X_i = lb + S_i \times (ub - lb)$$
where $lb$ and $ub$ are the lower and upper bounds of the exploration space, respectively, and $S_i \in [0, 1]$ is the random number generated by the Sobol sequence.
Assuming that the search space is two-dimensional, the population size is 100, and the upper and lower bounds are 1 and 0, respectively, Figure 1 compares the distribution of the initial population generated by random initialization with that generated by Sobol sequence initialization. As shown in Figure 1, the initial population generated by the Sobol sequence is more uniformly distributed, which enables the optimization algorithm to perform a better global exploration of the search space, increases the diversity of the population, and enhances the convergence speed of the algorithm.
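For concreteness, the following minimal sketch shows how Equation (14) could be realized with SciPy's quasi-Monte Carlo module; the function names and the use of scipy.stats.qmc are our own illustrative choices under stated assumptions, not the authors' implementation.

```python
# Illustrative Sobol-sequence population initialization (Equation (14)) vs. random
# initialization, assuming SciPy >= 1.7 for scipy.stats.qmc.
import numpy as np
from scipy.stats import qmc

def sobol_init(pop_size, dim, lb, ub, seed=0):
    """Generate an initial population with a Sobol low-discrepancy sequence."""
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    s = sampler.random(n=pop_size)          # Sobol numbers S_i in [0, 1]
    lb, ub = np.asarray(lb), np.asarray(ub)
    return lb + s * (ub - lb)               # X_i = lb + S_i * (ub - lb)

def random_init(pop_size, dim, lb, ub, seed=0):
    """Plain pseudo-random initialization, as used by the basic HHO, for comparison."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb), np.asarray(ub)
    return lb + rng.random((pop_size, dim)) * (ub - lb)

# Example matching the setting of Figure 1: 100 individuals in a 2-D space in [0, 1].
# (SciPy may warn that 100 is not a power of two; the sample is still valid here.)
if __name__ == "__main__":
    sobol_pop = sobol_init(100, 2, lb=0.0, ub=1.0)
    rand_pop = random_init(100, 2, lb=0.0, ub=1.0)
    print(sobol_pop.shape, rand_pop.shape)  # (100, 2) (100, 2)
```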
3.2. Elite Opposition-Based Learning
Opposition-based learning (OBL) [23] is an effective intelligent computing method proposed by Tizhoosh in 2005. In recent years, this strategy has been employed to improve various algorithms and has achieved outstanding optimization results [24,25]. Assuming that a feasible solution in a d-dimensional search space is $X = (x_1, x_2, \ldots, x_d)$, its opposition-based solution $\tilde{X} = (\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_d)$ is defined as $\tilde{x}_j = r(lb_j + ub_j) - x_j$, where $x_j \in [lb_j, ub_j]$ and r is a uniformly distributed coefficient inside $[0, 1]$.
The inverse solution generated by the opposition-based learning strategy does not necessarily make it easier to find the global optimal solution than the current exploration space. To address this problem, elite opposition-based learning (EOBL) is proposed. Assuming that the extreme point of the current population in the search space is the elite individual $X_e = (x_{e,1}, x_{e,2}, \ldots, x_{e,d})$, its inverse solution $\breve{X}_e = (\breve{x}_{e,1}, \breve{x}_{e,2}, \ldots, \breve{x}_{e,d})$ can be specified as follows:
$$\breve{x}_{e,j} = k(da_j + db_j) - x_{e,j}$$
where $x_{e,j} \in [da_j, db_j]$, k is a random value inside $[0, 1]$, $da_j$ and $db_j$ are the lower and upper bounds of the dynamic boundary, respectively, and $da_j = \min_i(x_{i,j})$, $db_j = \max_i(x_{i,j})$. Replacing the fixed boundary with a dynamic boundary helps the generated inverse solution gradually reduce the search space and speeds up the convergence of the algorithm. Since an elite inverse solution may jump out of the boundary and lose its feasibility, the following reset is applied when $\breve{x}_{e,j} < da_j$ or $\breve{x}_{e,j} > db_j$:
$$\breve{x}_{e,j} = da_j + \mathrm{rand}(0, 1) \times (db_j - da_j)$$
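A minimal sketch of how the elite opposition-based learning step (step 3 of Algorithm 2) might be realized for a minimization problem is given below. Here, the opposite solutions are generated for every individual relative to the dynamic boundary of the current population, which is one common reading of the strategy; the names elite_opposition_step and fitness_fn are illustrative assumptions, not the authors' code.

```python
import numpy as np

def elite_opposition_step(pop, fitness_fn, rng=None):
    """One EOBL step: build the opposite population inside the dynamic boundary
    [da, db] of the current population and keep the fittest individuals from the
    union of the original and opposite populations."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape
    da, db = pop.min(axis=0), pop.max(axis=0)      # dynamic boundary per dimension
    k = rng.random()                               # random coefficient in (0, 1)
    opposite = k * (da + db) - pop                 # elite opposite solutions
    # Reset components that leave the dynamic boundary to a random point inside it.
    out = (opposite < da) | (opposite > db)
    opposite[out] = (da + rng.random((n, d)) * (db - da))[out]
    union = np.vstack([pop, opposite])
    fitness = np.apply_along_axis(fitness_fn, 1, union)
    best = np.argsort(fitness)[:n]                 # keep the n fittest (minimization)
    return union[best], fitness[best]

# Example usage with the sphere function as a stand-in objective.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    population = rng.uniform(-10, 10, size=(30, 5))
    new_pop, new_fit = elite_opposition_step(population, lambda x: np.sum(x ** 2), rng)
    print(new_pop.shape, new_fit.min())
```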
3.3. Escape Energy Update Optimization
In the basic HHO, a Harris hawk relies on the escape energy factor E to manage the transition of the algorithm from the global search phase to the local search phase. However, as shown in Equation (3), the coefficient of the energy factor E decreases linearly from 2 to 0, which tends to trap the algorithm in a local optimum in the second half of the iterations. To overcome the deficiency of performing only local search in the later phase, a new nonlinear update of the energy factor is defined in Equation (18), where t is the current number of iterations, T is the maximum number of iterations, and r is a random number inside $[0, 1]$.
As can be seen from Figure 2, the decay rate is fast early in the iterations to control the global search capability of the algorithm; in the middle of the iterations, the decreasing rate slows down to balance the capability of local exploitation and global exploration; and in the later part of the iterations, the local search speeds up as the value decreases rapidly. Figure 3 shows that the escape energy E fluctuates throughout the iterative process, so the algorithm remains capable of both global and local search at every stage, with global exploration undertaken mainly in the early stage and more local exploitation, while still retaining the possibility of global exploration, in the later stage.
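The exact nonlinear schedule of Equation (18) is not reproduced above, so the sketch below uses a stand-in decay curve with the qualitative fast-slow-fast shape described for Figure 2, purely to illustrate how a nonlinear, fluctuating escape energy can be generated; it is a hypothetical illustration, not the authors' formula.

```python
import numpy as np

def decay_nonlinear(t, T):
    """Hypothetical fast-slow-fast decay from 1 to 0 (illustration only, NOT Equation (18))."""
    x = t / T
    return 0.5 - np.arcsin(2 * x - 1) / np.pi

def escape_energy(t, T, rng):
    """Fluctuating escape energy E = 2 * E0 * decay(t), with E0 uniform in (-1, 1)."""
    e0 = 2 * rng.random() - 1
    return 2 * e0 * decay_nonlinear(t, T)

rng = np.random.default_rng(0)
T = 500
for t in (1, 125, 250, 375, 500):
    print(t, round(decay_nonlinear(t, T), 3), round(escape_energy(t, T, rng), 3))
```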
3.4. Gaussian Walk Learning
Gaussian walk learning (GWL) is a classical random walk strategy with strong exploitation capability [26,27,28]. This paper therefore uses it to mutate population individuals, improving the diversity of the population while helping it leap out of local optimum traps. The Gaussian walk learning model is shown in Equation (19), where $X_i^t$ denotes the i-th individual of the population at generation t, $\mathrm{Gaussian}(\mu, \sigma)$ is a Gaussian distribution with $\mu$ as the expectation and $\sigma$ as the standard deviation, and $X_r^t$ is the location of a random individual of the population at generation t. The step size of the Gaussian walk is adjusted by the standard-deviation function $\sigma$, whose curve is shown in Figure 4. To balance the search ability of the algorithm, the perturbation applied in the early iterations is larger and decreases rapidly in the later stages to increase the algorithm's exploitation ability.
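A hedged sketch of one plausible realization of the Gaussian walk mutation follows. The step-size function used here, $\sigma = |\log(t)/t \cdot (X_i - X_{best})|$, is the common stochastic-fractal-search form and matches the "large early, rapidly shrinking" behavior described for Figure 4, but it is an assumption; the paper's exact Equation (19) may differ in detail.

```python
import numpy as np

def gaussian_walk(pop, best, t, rng):
    """Mutate the population by a Gaussian walk centred on the best solution (t >= 1)."""
    sigma = np.abs(np.log(t) / t * (pop - best))        # step size shrinks as t grows
    walk = rng.normal(loc=best, scale=sigma + 1e-12)    # Gaussian(best, sigma)
    r1 = rng.random(pop.shape)
    r2 = rng.random(pop.shape)
    return walk + (r1 * best - r2 * pop)                # small random drift term

# Example: apply the walk to a stagnant 30 x 5 population at iteration t = 50,
# as would be done in the stagnation branch of Algorithm 2.
rng = np.random.default_rng(0)
pop = rng.uniform(-10, 10, size=(30, 5))
best = pop[0]                                           # placeholder best individual
mutated = gaussian_walk(pop, best, t=50, rng=rng)
print(mutated.shape)
```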
3.5. Flow of the MSHHO Algorithm
In summary, the main steps of the improved multi-strategy Harris hawks optimization (MSHHO) algorithm are shown in Algorithm 2, and the flow chart of MSHHO is shown in Figure 5.
Algorithm 2 Main steps of the MSHHO algorithm
Input: Population size N and the maximum number of iterations T
1: According to Equation (14), initialize the population
2: while t < T do
3:   Generate the reverse population using the elite opposition-based learning mechanism and calculate the fitness of the original population and its reverse population individuals
4:   if the algorithm is stagnant then
5:     According to Equation (19), update the location
6:   else
7:     for i = 1 : N do
8:       According to Equation (18), update the escape energy E
9:       if |E| ≥ 1 then
10:        According to Equation (1), update the location
11:      else if |E| < 1 then
12:        if r ≥ 0.5 and |E| ≥ 0.5 then
13:          According to Equation (4), update the location
14:        else if r ≥ 0.5 and |E| < 0.5 then
15:          According to Equation (6), update the location
16:        else if r < 0.5 and |E| ≥ 0.5 then
17:          According to Equation (10), update the location
18:        else if r < 0.5 and |E| < 0.5 then
19:          According to Equation (11), update the location
20:        end if
21:      end if
22:    end for
23:  end if
24:  t = t + 1
25: end while
26: return the location of the best solution found
3.6. Time Complexity Analysis of the Algorithm
The time complexity of the basic HHO algorithm depends mainly on three stages: the initialization phase, the fitness calculation process, and the location update operation of the population. Assuming that the size of the Harris hawk population is N, the problem dimension is D, and the maximum number of iterations is T, the time complexity of the initialization phase is O(N × D), and the time complexity of finding the prey's location and updating the population's location vectors is O(T × N × D), so the time complexity of the basic HHO algorithm is O(N × D + T × N × D) = O(T × N × D). For the improved algorithm proposed in this paper, the time complexity of the Sobol sequence initialization is O(N × D), the time complexity of the elite opposition-based learning strategy is O(T × N × D), and the average time complexity of the Gaussian walk strategy is O(T × N × D). Thus, the time complexity of MSHHO is O(T × N × D), which is of the same order as that of the basic HHO algorithm.
4. Experiment and Results
To verify the performance of the proposed MSHHO algorithm, the GWO [1], SCA [4], SSA [5], and WOA [7] approaches were selected for comparison together with the original HHO [11] algorithm. The same experimental environment, platform, and parameters were used for all experiments. Table 1 shows the parameter settings of the comparison algorithms. The simulation environment was a 64-bit Windows 10 operating system with an Intel(R) Core(TM) i7-7700HQ CPU at 2.80 GHz, and the simulation software was MATLAB R2016b.
4.1. Benchmark Functions and Numerical Experiment
Twenty-three functions were selected from the CEC2005 benchmark [29,30], where $F_1$–$F_7$ are unimodal functions, $F_8$–$F_{13}$ are multimodal functions, and $F_{14}$–$F_{23}$ are fixed-dimension functions. The specific information on the benchmark functions is given in Table A1, Table A2 and Table A3. For this experiment, the selected dimension of the test functions $F_1$–$F_{13}$ was 30, while the other functions had their own fixed dimensions lower than 30. For each algorithm, the population size was set to 30, and the maximum number of iterations was 500. Each algorithm was run 30 times independently on each test function to prevent chance from biasing the experimental results, and the mean, best value, and standard deviation of each algorithm's runs are shown in Table 2 and Table 3.
The CEC2017 benchmark functions are characterized by a large problem size and more complex search landscapes, which can effectively reveal the differences in the searching ability of different algorithms. There are 30 single-objective benchmark functions in CEC2017, including unimodal functions ($F_1$–$F_3$), simple multimodal functions ($F_4$–$F_{10}$), hybrid functions ($F_{11}$–$F_{20}$), and composition functions ($F_{21}$–$F_{30}$). To further verify the improvement achieved by the MSHHO algorithm, 10 benchmark functions with different characteristics were selected from these groups for testing, and their characteristics are shown in Table 4. Each algorithm was again run 30 times, with a maximum number of iterations of 1000 and a population size of 100. The experimental results are shown in Table 5.
4.2. Results Analysis
In the experiments on the CEC2005 benchmark, the selected unimodal functions were used to test the exploitation capability of the algorithm.
From the experimental results, it can be seen that on several of the unimodal test functions, MSHHO directly found the best value of zero, with HHO showing the second-best performance, while the SCA, SSA, and WOA performed poorly to varying degrees. On two further functions, MSHHO performed best in terms of both average and best results, with much higher accuracy than the other algorithms. On the remaining unimodal function, MSHHO performed similarly to HHO but achieved a slightly better mean, best value, and stability and performed significantly better than the other comparison algorithms. Overall, among all unimodal test functions, MSHHO had the best performance, with stable results and significantly better optimization than the comparison algorithms.
The multimodal functions evaluate the exploration capability of the algorithm. The experimental results show that, compared with the other algorithms, MSHHO achieved the best optimization effect on most of the multimodal functions and found the best value on many of them; on two functions, the stability of MSHHO was slightly inferior to that of the SSA. Overall, the combined performance of MSHHO on the multimodal test functions was still the best.
In the experiments on the CEC2017 benchmark functions, MSHHO performed well on the hybrid functions and composition functions, obtaining the best optimal and mean values with good stability. On the unimodal and simple multimodal functions, although its performance was not the best, it showed a great improvement over the original HHO.
To visualize the convergence performance of MSHHO, the iterative convergence curves of the test functions were plotted, and the convergence plots of some of the test functions are shown in Figure 6, Figure 7, Figure 8 and Figure 9. As can be seen from the figures, MSHHO outperformed the other comparison algorithms in terms of both convergence speed and convergence accuracy. It showed good performance not only on the unimodal test functions but also on the multimodal functions. The box plots show that MSHHO also performed better in terms of stability compared with the other algorithms.
4.3. Nonparametric Statistical Analysis
In order to analyze the test results more precisely and avoid the influence of chance on the validation of the experimental results, the results of the 30 runs of the 6 algorithms on the 33 test functions were subjected to the Wilcoxon rank sum test at a significance level of 0.05 to identify significant differences between the results of the comparison algorithms and MSHHO. If the p-value of the rank sum test was greater than 0.05, there was no significant difference between the two sets of results; otherwise, the results of the two algorithms were significantly different overall. The results of the rank sum test are shown in Table 6, where NaN indicates that the two groups of samples were identical. For the CEC2005 benchmark functions, the results in the table show that MSHHO was significantly different from GWO, the SCA, and the SSA on all 23 functions, from HHO on 22 functions, and from the WOA on 19 functions. Among the 10 benchmark functions from CEC2017, MSHHO was significantly different from the SCA on all functions, from HHO and the WOA on 9 functions, from GWO on 8 functions, and from the SSA on 5 functions. Therefore, it can be concluded that there was a statistically significant difference in optimization performance between MSHHO and the other algorithms, and the MSHHO algorithm performed significantly better.
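For reference, the rank sum comparison for a single function can be reproduced in a few lines with SciPy; the arrays below are placeholder run results, not the values reported in Table 6.

```python
import numpy as np
from scipy.stats import ranksums

# Two sets of 30 independent run results (synthetic placeholders for illustration).
rng = np.random.default_rng(1)
mshho_results = rng.normal(loc=0.0, scale=1e-8, size=30)
other_results = rng.normal(loc=1e-3, scale=1e-4, size=30)

stat, p_value = ranksums(mshho_results, other_results)
# p < 0.05: the two algorithms differ significantly; p >= 0.05: no significant difference.
print(f"p-value = {p_value:.3e}:", "significant" if p_value < 0.05 else "not significant")
```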
4.4. Comparison with Other Improved Strategies of the HHO Algorithm
To better illustrate the improvement of the algorithm in terms of optimization performance, the experimental results of the chaotic elite Harris hawks optimization (CEHHO) algorithm in [14] were selected for comparison with the MSHHO algorithm proposed in this paper. The same parameters as in [14] were used, with a population size of 50 and a maximum number of iterations of 300, and 17 common test functions were selected for the experiments. The comparison results are shown in Table 7.
From the results in the table, it is apparent that on several functions, both optimization algorithms reached the optimal result with a standard deviation of zero, while on several others, both algorithms also found the optimal values; there, the standard deviation of MSHHO was smaller, making it the more stable algorithm. On the remaining functions, MSHHO had better performance in terms of both mean and standard deviation.
4.5. Experimental Analysis of Solving High-Dimensional Functions
Based on the experimental results above, we verified the optimization effectiveness of the improved MSHHO algorithm in low-dimensional functions. However, most algorithms would be much less effective or even fail when solving complex problems in high-dimensional functions.
In order to verify the practicality of MSHHO on high-dimensional problems, the original HHO algorithm and the improved MSHHO algorithm were compared on the scalable benchmark functions in 100 and 500 dimensions, and the experimental results are shown in Table 8.
From the results in Table 8, it can be seen that in both 100 and 500 dimensions, the improved MSHHO algorithm still obtained better result values than the original HHO algorithm on each test function, with good stability, and MSHHO still found the optimal results on several of the functions.
6. Conclusions
In this paper, a multi-strategy improvement method was presented to remedy the weaknesses of the original Harris hawks optimization algorithm, such as low accuracy, slow convergence, and being easily trapped in local optima. The improved algorithm first uses a Sobol sequence to initialize the population and improve the population diversity. During the population update iterations, elite opposition-based learning is used to increase the population diversity and quality. The energy update strategy of the original algorithm is improved to balance the exploration and exploitation capability of the algorithm in a nonlinear update manner. Finally, the Gaussian walk learning strategy is incorporated to prevent the algorithm from stagnating and falling into a local optimum.
To validate the performance of MSHHO, 23 benchmark functions with unimodal, multimodal, and fixed dimensions from CEC2005 and 10 benchmark functions covering unimodal, simple multimodal, hybrid, and composition functions from CEC2017 were selected for comparison experiments with other algorithms, including HHO and GWO as well as the SCA, SSA, and WOA. Subsequently, comparisons with the basic HHO algorithm were made in 100 and 500 dimensions to verify the practicality of the algorithm on high-dimensional problems. The results show that the improved algorithm had good performance in terms of search accuracy, convergence speed, and stability, effectively compensating for the defects of the original algorithm. In addition, the MSHHO algorithm was applied to solve two engineering application problems in this paper, and the experimental results show that MSHHO achieved the best results compared with the other algorithms.
In future work, our research will focus on applying the MSHHO algorithm to large-scale, complex multi-objective optimization and practical engineering applications, such as microgrid scheduling optimization problems.