Article

Improved Multi-Strategy Harris Hawks Optimization and Its Application in Engineering Problems

School of Computer Science and Engineering, Central South University, Changsha 410083, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(6), 1525; https://doi.org/10.3390/math11061525
Submission received: 19 February 2023 / Revised: 13 March 2023 / Accepted: 18 March 2023 / Published: 21 March 2023

Abstract

To compensate for the low convergence accuracy, slow convergence rate, and tendency to fall into local optima of the original Harris hawks optimization (HHO) algorithm, an improved multi-strategy Harris hawks optimization (MSHHO) algorithm is proposed. First, the population is initialized by Sobol sequences to increase the diversity of the population. Second, an elite opposition-based learning strategy is incorporated to improve the versatility and quality of the solution sets. Furthermore, the energy updating strategy of the original algorithm is optimized to enhance the exploration and exploitation capability of the algorithm in a nonlinear update manner. Finally, the Gaussian walk learning strategy is introduced to prevent the algorithm from stagnating and slipping into a local optimum. We perform experiments on 33 benchmark functions and 2 engineering application problems to verify the performance of the proposed algorithm. The experimental results show that the improved algorithm has good performance in terms of optimization accuracy, convergence speed, and stability, which effectively remedies the defects of the original algorithm.

1. Introduction

The swarm intelligence algorithm is a common approach in computational intelligence and an emerging evolutionary computing technique whose basic theory is to emulate the behavior of groups of ants, birds, bees, wolves, bacteria, and other organisms in nature and to use the mechanism of interaction to form group intelligence to solve complex problems through the exchange of information and cooperation between groups. Numerous researchers have carried out much work on such intelligent algorithms and proposed many novel algorithms, such as the grey wolf optimizer (GWO) [1], lightning search algorithm (LSA) [2], marine predators algorithm (MPA) [3], sine cosine algorithm (SCA) [4], salp swarm algorithm (SSA) [5], water cycle algorithm (WCA) [6], whale optimization algorithm (WOA) [7], cuckoo search (CS) [8], artificial bee colony (ABC) [9], and moth flame optimization (MFO) [10].
The Harris hawks optimization (HHO) algorithm is a novel swarm intelligence algorithm proposed by Heidari et al. [11], who were inspired by observing the chase-and-escape behavior between Harris hawks and their prey. The algorithm is simple in principle, has few parameters, and possesses a powerful global search capability, so it has received widespread attention and has been adopted in many engineering fields since its introduction. However, similar to other intelligent optimization algorithms, the basic Harris hawks algorithm suffers from defects such as low convergence precision and a tendency to fall into local optima when solving complex optimization problems. Numerous scholars have proposed different improvement schemes to address these shortcomings. Qu et al. [12] introduced a method of information exchange to increase the diversity of populations and proposed a nonlinear energy escape factor perturbed by chaotic interference to balance the local exploitation and global exploration of the algorithm. Liu et al. [13] optimized the method by setting up a square neighborhood topology with multiple subgroups to lead the individuals in each subgroup to explore randomly in both directions. Tang et al. [14] introduced an elite hierarchy strategy to make full use of the dominant population, enhancing the population diversity and improving the convergence speed and accuracy of the algorithm. Kaveh et al. [15] combined HHO and the imperialist competitive algorithm (ICA) [16] into imperialist competitive Harris hawks optimization (ICHHO), which performed well in structural optimization problems. Elgamal et al. [17] applied chaotic maps in the initialization phase of HHO and analyzed the current best solution with the simulated annealing (SA) [18] algorithm to improve the exploitation of HHO. Li et al. [19] developed a variant of HHO incorporating a novice protection tournament (NpTHHO), which adds a novice protection mechanism to better reallocate resources and introduces a variation mechanism in the exploration phase to further enhance the global search efficiency of the HHO algorithm. Houssein et al. [20] combined HHO with a support vector machine (SVM) for the selection of chemical descriptors and compound activities in drug design and discovery. Wunnava et al. [21] introduced an adaptive improvement performed by differencing to address the problem that the exploration ability of HHO is limited when the escape energy equals zero, which leads to invalid random behavior at that stage, and they applied the algorithm to image segmentation.
In summary, the main direction for improving the HHO algorithm is to strengthen its local exploitation and global exploration abilities through various optimization strategies, thereby improving the convergence accuracy and overall performance of the algorithm before applying it in practical engineering. Among the existing improved versions of HHO, some are obtained by hybridizing HHO with other algorithms, such as those in [15,17], while others improve one or several stages of the underlying algorithm. Compared with the improved algorithms that have been proposed, the strategies proposed in this paper cover a more comprehensive scope, including the population initialization phase, the population update phase, the energy escape factor, and the mutation strategy. To overcome the weaknesses of the basic Harris hawks algorithm, such as low accuracy, slow convergence, and a tendency to become trapped in local optima, this paper presents an improved multi-strategy Harris hawks optimization (MSHHO) algorithm. The main contributions of this research are as follows:
  • We propose an improved multi-strategy Harris hawks optimization algorithm. To compensate for the shortcomings of the algorithm, four strategies are adopted in this work to improve the basic HHO algorithm. First, the population is initialized using Sobol sequences to increase the variety of the population. Second, we incorporate elite opposition-based learning to improve the population diversity and quality. Furthermore, the energy update strategy of the basic HHO algorithm is optimized to enhance the exploration and exploitation capability of the algorithm in a nonlinear update manner. Finally, Gaussian walk learning is introduced to avoid the algorithm being trapped in a stagnant state and falling into a local optimum.
  • The performance of the proposed algorithm in solving 33 global optimization benchmark functions in multiple dimensions is investigated by comparing it with other novel swarm intelligence algorithms. The results suggest that MSHHO performs well, and the Wilcoxon rank sum test is used to validate the significance of the results. The advantages of this algorithm are also demonstrated by comparing it with another improved HHO algorithm. In addition, the original HHO algorithm is selected for comparison tests on each benchmark function in 100 and 500 dimensions to examine the utility of the algorithm in high-dimensional problems.
  • We apply it to two engineering application problems to inspect the practicality of the introduced algorithm. A new scheme is provided for the swarm intelligence algorithm in practical engineering applications.
This paper is organized as follows. Section 1 describes the current state of development of swarm intelligence algorithms and some existing strategies for improving the Harris hawks algorithm. Section 2 describes the basic principles of the basic Harris hawks algorithm. Section 3 details the improvement strategy introduced in this paper and gives the time complexity of the algorithm. Section 4 shows the experimental results conducted to demonstrate the effectiveness of the proposed algorithm. Section 5 shows the application of the algorithm presented in this paper to engineering optimization problems. Section 6 concludes the paper and provides an outlook for future work.

2. Harris Hawks Optimization (HHO)

The HHO algorithm models the Harris hawk’s strategy for capturing prey under different mechanisms in a mathematical formulation, where individual Harris hawks form candidate solutions and the optimal solution produced in each iteration is considered the prey. The algorithm comprises two main phases, namely exploration and exploitation, and transitions between the two phases are driven by the magnitude of the prey’s escape energy. The original Harris hawks optimization algorithm is described below.

2.1. Exploration Phase

The global search phase is mainly dictated by the location information of the Harris hawk population, and its update strategy is as follows:
$$X(t+1)=\begin{cases} X_{rand}(t)-r_1\left|X_{rand}(t)-2r_2X(t)\right|, & q\ge 0.5\\ \left(X_{prey}(t)-X_m(t)\right)-r_3\left(LB+r_4(UB-LB)\right), & q<0.5\end{cases}\tag{1}$$
where $X(t+1)$ represents the location of the hawks in iteration $t+1$, $X_{prey}(t)$ represents the location of the prey, $X(t)$ represents the position of the hawks in the current generation $t$, $r_1$, $r_2$, $r_3$, $r_4$, and $q$ are random numbers in (0,1) that are renewed in each iteration, $UB$ and $LB$ are the upper and lower bounds of the search space, respectively, $X_{rand}(t)$ represents a hawk chosen randomly from the current population, and $X_m(t)$ represents the average position of the individuals in the current population, which is obtained from Equation (2):
$$X_m(t)=\frac{1}{n}\sum_{k=1}^{n}X_k(t)\tag{2}$$
where $X_k(t)$ denotes the position of hawk $k$ in iteration $t$ and $n$ denotes the number of hawks.
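For concreteness, the exploration update in Equations (1) and (2) can be sketched in Python with NumPy as follows. The function name, array shapes, and the final clipping to the search bounds are illustrative assumptions of ours rather than part of the original HHO description.

```python
import numpy as np

def exploration_update(X, X_prey, lb, ub, rng):
    """One exploration-phase update (Equation (1)) applied to the whole population.

    X      : (n, d) array of current hawk positions
    X_prey : (d,) best position found so far (the prey)
    lb, ub : (d,) lower and upper bounds of the search space
    """
    n, _ = X.shape
    X_mean = X.mean(axis=0)                      # X_m(t), Equation (2)
    X_new = np.empty_like(X)
    for i in range(n):
        r1, r2, r3, r4, q = rng.random(5)        # fresh random numbers per hawk
        if q >= 0.5:
            # perch based on a randomly selected hawk
            X_rand = X[rng.integers(n)]
            X_new[i] = X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
        else:
            # perch relative to the prey and the population mean
            X_new[i] = (X_prey - X_mean) - r3 * (lb + r4 * (ub - lb))
    return np.clip(X_new, lb, ub)                # clipping is an added safeguard
```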

2.2. Transition from Exploration to Exploitation

The energy equation controlling the escape of prey is as follows:
$$E=2E_0\left(1-\frac{t}{T}\right)\tag{3}$$
where $t$ is the current number of iterations, $T$ denotes the maximum number of iterations, and $E_0$ is a random number within (−1,1) that indicates the initial state of the energy. When the escape energy $|E|\ge 1$, the Harris hawks search different areas to further explore the location of the prey, which corresponds to the global exploration phase, and when $|E|<1$, the Harris hawks exploit the neighborhood of the current solutions, which corresponds to the local exploitation phase.

2.3. Exploitation Phase

In this phase, the Harris hawk besieges the target prey after finding it, based on the exploration results of the previous phase, while the prey tries to escape from the pursuit. Based on the behavior of the Harris hawks and the prey, four possible strategies are proposed to simulate this phase. The choice between a hard besiege and a soft besiege is governed by the escape energy E, and the parameter r indicates whether the prey escapes successfully (r < 0.5) or not (r ≥ 0.5).

2.3.1. Soft Besiege

When $|E|\ge 0.5$ and $r\ge 0.5$, the prey tries to escape from the pursuit by jumping, and the Harris hawk uses a soft besiege to gradually consume the prey’s energy. The behavior is modeled as follows:
$$X(t+1)=\Delta X(t)-E\left|JX_{prey}(t)-X(t)\right|\tag{4}$$
$$\Delta X(t)=X_{prey}(t)-X(t)\tag{5}$$
where $J=2(1-r_5)$ is introduced to simulate the jump strength of the prey during the escape, $r_5$ is a randomly generated number within (0,1), and the value of $J$ varies randomly in each iteration to mimic the nature of prey movement.

2.3.2. Hard Besiege

When $|E|<0.5$ and $r\ge 0.5$, the prey does not have enough energy to escape, so the Harris hawk attacks in a hard besiege manner, using Equation (6) to update the current position:
$$X(t+1)=X_{prey}(t)-E\left|\Delta X(t)\right|\tag{6}$$

2.3.3. Soft Besiege with Progressive Rapid Dives

When $|E|\ge 0.5$ and $r<0.5$, the prey has enough energy to escape from the pursuit, and the Harris hawks update their positions according to Equations (7) and (8):
$$Y=X_{prey}(t)-E\left|JX_{prey}(t)-X(t)\right|\tag{7}$$
$$Z=Y+S\times LF(D)\tag{8}$$
where $D$ is the dimension of the problem, $S$ is a random vector of size $1\times D$, and $LF$ is the Lévy flight function, which can be described as in Equation (9):
$$LF(x)=0.01\times\frac{u\times\sigma}{\left|\upsilon\right|^{\frac{1}{\beta}}},\qquad \sigma=\left(\frac{\Gamma(1+\beta)\times\sin\left(\frac{\pi\beta}{2}\right)}{\Gamma\left(\frac{1+\beta}{2}\right)\times\beta\times2^{\left(\frac{\beta-1}{2}\right)}}\right)^{\frac{1}{\beta}}\tag{9}$$
where u and v are random values within (0,1) and  β is the default constant, set to 1.5.
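The Lévy flight term LF(D) in Equation (9) is usually sampled with Mantegna's method, in which u and υ are drawn from normal distributions; the sketch below follows that common practice as an assumption, since the text above only states that they are random values in (0,1).

```python
import numpy as np
from math import gamma, sin, pi

def levy_flight(dim, beta=1.5, rng=None):
    """Draw a Levy-flight step vector LF(D) following Equation (9) with beta = 1.5."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)   # assumed normal draws, per Mantegna's algorithm
    v = rng.normal(0.0, 1.0, dim)
    return 0.01 * u / np.abs(v) ** (1 / beta)
```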
Hence, the final strategy for updating the hawks’ positions during the soft besiege with progressive rapid dives can be executed by Equation (10):
$$X(t+1)=\begin{cases} Y, & \text{if } F(Y)<F(X(t))\\ Z, & \text{if } F(Z)<F(X(t))\end{cases}\tag{10}$$
where Y and Z are obtained using Equations (7) and (8), respectively.

2.3.4. Hard Besiege with Progressive Rapid Dives

When $|E|<0.5$ and $r<0.5$, the prey does not have enough energy to make an escape, and the following strategy is executed under these conditions:
$$X(t+1)=\begin{cases} Y, & \text{if } F(Y)<F(X(t))\\ Z, & \text{if } F(Z)<F(X(t))\end{cases}\tag{11}$$
$$Y=X_{prey}(t)-E\left|JX_{prey}(t)-X_m(t)\right|\tag{12}$$
$$Z=Y+S\times LF(D)\tag{13}$$
where  X m ( t ) is obtained using Equation (2).

2.4. The Main Steps of HHO

The main steps of the overall HHO algorithm are as shown in Algorithm 1.
Algorithm 1 Main steps of the HHO algorithm
Input: Population size N and the maximum number of iterations T
1: Initialize the population
2: while t < T do
3:    Calculate the fitness of each solution and obtain the optimal individual
4:    for i = 1:N do
5:       Update the escape energy E according to Equation (3)
6:       if |E| ≥ 1 then
7:          Update the location according to Equation (1)
8:       else if |E| < 1 then
9:          if |E| ≥ 0.5 and r ≥ 0.5 then
10:            Update the location according to Equation (4)
11:         else if |E| < 0.5 and r ≥ 0.5 then
12:            Update the location according to Equation (6)
13:         else if |E| ≥ 0.5 and r < 0.5 then
14:            Update the location according to Equation (10)
15:         else if |E| < 0.5 and r < 0.5 then
16:            Update the location according to Equation (11)
17:         end if
18:      end if
19:   end for
20:   t = t + 1
21: end while
22: return X_prey

3. Improved Multi-Strategy Harris Hawks Optimization (MSHHO)

The HHO algorithm has multiple exploitation modes and shifts between them, which gives it strong local exploitation ability but also makes it prone to falling into local optima. To remedy this deficiency, we introduce four strategies to improve the original algorithm in this section, which are described in detail in the following subsections.

3.1. Sobol Sequence Initialization Populations

The distribution of the initial solutions in the solution space largely affects the convergence speed and the convergence precision of an intelligent algorithm. In the basic HHO algorithm, the initial population is generated randomly. However, the individuals generated in this way are not homogeneously distributed throughout the exploration space, which in turn affects the speed of convergence and precision of the algorithm. The Sobol sequence [22] is a deterministic low-discrepancy sequence that distributes points in the space as uniformly as possible compared with a random sequence. The initial population generated by the Sobol sequence can be expressed as
$$X_i=L_b+S_n\times(U_b-L_b)\tag{14}$$
where $L_b$ and $U_b$ are the lower and upper bounds of the exploration space, respectively, and $S_n\in[0,1]$ is the random number generated by the Sobol sequence.
Assuming that the search space is two-dimensional, the population size is 100, and the upper and lower bounds are 1 and 0, respectively, Figure 1 compares the initial population distributions produced by random initialization and by Sobol sequence initialization.
As shown in Figure 1, the original population generated by the Sobol sequence is more uniformly distributed, thus enabling the optimization algorithm to perform a better global exploration in the exploration space, increasing the diversity of the population and enhancing the convergence speed of the algorithm.
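As an illustration of Equation (14), the sketch below generates a Sobol-initialized population with SciPy's quasi-Monte Carlo module; the use of scipy.stats.qmc, the scrambling option, and the function name are our own choices, not prescribed by the paper.

```python
import numpy as np
from scipy.stats import qmc

def sobol_init(pop_size, dim, lb, ub, seed=None):
    """Initialize a population with a Sobol low-discrepancy sequence (Equation (14))."""
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    s = sampler.random(n=pop_size)        # S_n in [0, 1]^dim (SciPy warns if n is not a power of 2)
    return qmc.scale(s, lb, ub)           # equivalent to lb + S_n * (ub - lb)

# Example matching the setting of Figure 1: 100 points in the 2-D unit square
population = sobol_init(100, 2, lb=[0.0, 0.0], ub=[1.0, 1.0])
```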

3.2. Elite Opposition-Based Learning

Opposition-based learning (OBL) [23] is an effective intelligent computing method proposed by Tizhoosh in 2005. In recent years, this strategy has been employed to improve various algorithms and has achieved outstanding optimization results [24,25]. Assuming that a feasible solution in a $d$-dimensional search space is $X=(x_1,x_2,\dots,x_d)$ with $x_j\in[a_j,b_j]$, its opposition-based solution is defined as $\bar{X}=(\bar{x}_1,\bar{x}_2,\dots,\bar{x}_d)$, where $\bar{x}_j=r(a_j+b_j)-x_j$ and $r$ is a uniformly distributed random coefficient in $[0,1]$.
The opposite solution generated by the opposition-based learning strategy is not necessarily closer to the global optimum than the current solution. To address this problem, elite opposition-based learning (EOBL) is proposed. Assuming that the extreme point of the current population in the search space is the elite individual $X^e=(x_1^e,x_2^e,\dots,x_d^e)$, its opposite solution $\bar{X}^e=(\bar{x}_1^e,\bar{x}_2^e,\dots,\bar{x}_d^e)$ can be specified as follows:
$$\bar{x}_j^e=k\cdot(a_j+b_j)-x_j^e\tag{15}$$
where $x_j^e\in[a_j,b_j]$, $k$ is a random value in $[0,1]$, $a_j$ and $b_j$ are the lower and upper bounds of the dynamic boundary, respectively, with $a_j=\min(x_j^e)$ and $b_j=\max(x_j^e)$ taken over the current population. Replacing the fixed boundary with a dynamic boundary helps the generated opposite solutions gradually narrow the search space and speeds up the convergence of the algorithm. Since an elite opposite solution may jump outside the boundary and lose feasibility, the following rule is used to reset its value:
$$\bar{x}_j^e=\mathrm{rand}(a_j,b_j)\tag{16}$$
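A minimal sketch of the elite opposition-based learning step in Equations (15) and (16) is given below. Keeping the better of each original/opposite pair is our own assumption about how the original and opposite populations are merged.

```python
import numpy as np

def elite_opposition(X, fitness, fobj, rng):
    """Elite opposition-based learning (Equations (15)-(16)) on a population X of shape (n, d)."""
    a = X.min(axis=0)                     # dynamic lower bound a_j
    b = X.max(axis=0)                     # dynamic upper bound b_j
    k = rng.random()                      # random coefficient in [0, 1]
    X_opp = k * (a + b) - X               # Equation (15)
    # Equation (16): reset infeasible components uniformly inside [a_j, b_j]
    out = (X_opp < a) | (X_opp > b)
    X_opp[out] = (a + (b - a) * rng.random(X.shape))[out]
    # greedy selection between each solution and its opposite (assumed merging rule)
    f_opp = np.array([fobj(x) for x in X_opp])
    better = f_opp < fitness
    X[better], fitness[better] = X_opp[better], f_opp[better]
    return X, fitness
```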

3.3. Escape Energy Update Optimization

In the basic HHO, a Harris hawk relies on the energy factor E to manage the transition of the algorithm from the global search phase to the local search phase. However, as shown in Equation (3), the envelope of the energy factor E decreases linearly from 2 to 0, which tends to trap the algorithm in a local optimum in the second half of the iterations. To overcome the deficiency of performing only local search in the later iterations, a new update rule for the energy factor is used:
$$E=\begin{cases}\cos\left(\pi\left(\dfrac{t}{T}+\dfrac{1}{2}\right)\right)+2, & t\le \dfrac{T}{2}\\[6pt] \left[\cos\left(\pi\left(\dfrac{t}{T}-\dfrac{1}{2}\right)\right)\right]^{1/3}, & t> \dfrac{T}{2}\end{cases}\tag{17}$$
$$E_1=E\times(2\times rand-1)\tag{18}$$
where $t$ is the current number of iterations, $T$ is the maximum number of iterations, and $rand$ is a random number in $[0,1]$.
From Figure 2, we can see that early in the iterations, E decreases quickly, which controls the global search capability of the algorithm. In the middle of the iterations, the rate of decrease slows down to balance local exploitation and global exploration. In the later iterations, the value drops rapidly again, which speeds up the local search. From Figure 3, it can be seen that $E_1$ fluctuates throughout the iterative process and is capable of both global and local search at any time, with global exploration dominating in the early stage and local exploitation dominating in the later stage while still retaining the possibility of global exploration.
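Assuming the piecewise-cosine form of Equation (17), the nonlinear escape-energy update can be sketched as follows; the helper function and its name are illustrative only.

```python
import numpy as np

def escape_energy(t, T, rng):
    """Nonlinear escape energy (Equations (17)-(18)): the envelope E decreases from 2 to 0,
    and E1 adds a random sign and scale so exploration remains possible late in the run."""
    if t <= T / 2:
        E = np.cos(np.pi * (t / T + 0.5)) + 2.0
    else:
        E = np.cos(np.pi * (t / T - 0.5)) ** (1.0 / 3.0)
    return E * (2.0 * rng.random() - 1.0)     # E1, Equation (18)
```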

3.4. Gaussian Walk Learning

Gaussian walk learning (GWL) is a classical stochastic walk strategy with strong exploitation capability [26,27,28]. Thus, this paper uses this strategy to mutate the population individuals to improve the diversity of the population while helping it leap out of the local optimum trap. The Gaussian walk learning model is shown in Equation (19):
$$X(t+1)=\mathrm{Gauss}\left(X(t),\tau\right)\tag{19}$$
$$\tau=\cos\left(\frac{\pi}{2}\times\left(\frac{t}{T}\right)^2\right)\times\left|X(t)-X_r(t)\right|\tag{20}$$
where $X(t)$ denotes an individual in generation $t$, $\mathrm{Gauss}(X(t),\tau)$ is a Gaussian distribution with $X(t)$ as the mean and $\tau$ as the standard deviation, and $X_r(t)$ is the position of a randomly selected individual in generation $t$. The step size of Gaussian walk learning is adjusted by the function $\cos\left(\frac{\pi}{2}\times\left(\frac{t}{T}\right)^2\right)$, which is plotted in Figure 4. To balance the search ability of the algorithm, the perturbation applied in the early iterations is large, and it decreases rapidly in the later stages to increase the algorithm’s exploitation ability.
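A minimal sketch of the Gaussian walk mutation in Equations (19) and (20) is shown below. Pairing each hawk with a random partner through a permutation, and taking the absolute difference so that τ is a valid standard deviation, are illustrative choices of ours.

```python
import numpy as np

def gaussian_walk(X, t, T, rng):
    """Gaussian walk learning (Equations (19)-(20)) applied to a stagnant population X of shape (n, d)."""
    n, _ = X.shape
    step = np.cos(np.pi / 2 * (t / T) ** 2)      # shrinking step-size control (Figure 4)
    X_rand = X[rng.permutation(n)]               # a random partner X_r(t) for each hawk
    tau = step * np.abs(X - X_rand)              # standard deviation tau, Equation (20)
    return rng.normal(loc=X, scale=tau)          # Gauss(X(t), tau), Equation (19)
```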

3.5. Flow of the MSHHO Algorithm

In summary, the main steps of the improved multi-strategy Harris hawks optimization (MSHHO) algorithm are shown in Algorithm 2, and the flow chart of MSHHO is shown in Figure 5.
Algorithm 2 Main steps of the MSHHO algorithm
Input: Population size N and the maximum number of iterations T
1: Initialize the population according to Equation (14)
2: while t < T do
3:    Generate the opposite population using the elite opposition-based learning mechanism and calculate the fitness of the original population and its opposite population individuals
4:    if the algorithm is stagnant then
5:       Update the location according to Equation (19)
6:    else
7:       for i = 1:N do
8:          Update the escape energy E according to Equation (18)
9:          if |E| ≥ 1 then
10:            Update the location according to Equation (1)
11:         else if |E| < 1 then
12:            if |E| ≥ 0.5 and r ≥ 0.5 then
13:               Update the location according to Equation (4)
14:            else if |E| < 0.5 and r ≥ 0.5 then
15:               Update the location according to Equation (6)
16:            else if |E| ≥ 0.5 and r < 0.5 then
17:               Update the location according to Equation (10)
18:            else if |E| < 0.5 and r < 0.5 then
19:               Update the location according to Equation (11)
20:            end if
21:         end if
22:      end for
23:   end if
24:   t = t + 1
25: end while
26: return X_prey

3.6. Time Complexity Analysis of the Algorithm

The time complexity of the basic HHO algorithm depends mainly on three stages: the initialization phase, the fitness calculation process, and the location update operation of the population. Assuming that the size of the Harris hawk population is $N$, the problem dimension is $D$, and the maximum number of iterations is $T$, the time complexity of the initialization phase is $O(N)$, and the time complexity of finding the prey’s location and updating the population’s location vectors is $O(NT)+O(NDT)$, so the time complexity of the basic HHO algorithm is $O(N(1+T+DT))$. For the improved algorithm proposed in this paper, the time complexity of the Sobol sequence population initialization is $O(ND)$, the time complexity of the elite opposition-based learning strategy is $O(NDT)$, and the average time complexity of the Gaussian walk learning strategy is $O((N/2)DT)$. Thus, the time complexity of MSHHO is $O(N(\frac{3}{2}DT+D+1))$.

4. Experiment and Results

To verify the performance of the proposed MSHHO algorithm, the GWO [1], SCA [4], SSA [5], and WOA [7] approaches were selected for running a comparison with the original HHO [11] algorithm. The same experimental environment, platform, and parameters were selected for the experiments. Table 1 shows the parameter settings of the comparison algorithms. The environment of the simulation test for the experiment was the 64 bit Windows 10 operating system, the CPU was an Intel(R) Core(TM) i7-7700HQ at 2.80 GHz, and the simulation software was MATLAB R2016b.

4.1. Benchmark Functions and Numerical Experiment

Twenty-three functions were selected from the CEC2005 benchmark [29,30], where  F 1 F 7 are unimodal functions,  F 8 F 13 are multimodal functions, and  F 14 F 23 are fixed-dimension functions. The specific information of the benchmark function is shown in Table A1, Table A2 and Table A3. For this experiment, the selected dimension of the test functions  F 1 F 13 was 30, while the other functions had different dimensions lower than 30. For each algorithm, the population size was set to 30, and the maximum number of iterations was 500. Each algorithm was run 30 times independently on each test function to prevent chance from bringing bias to the experimental results, and the mean, best value, and standard deviation of each algorithm run are shown in Table 2 and Table 3.
The CEC2017 benchmark functions are characterized by a large problem size and more complex optimization searching, which can effectively distinguish the direct differences in the searching ability of different algorithms. There are 30 single-objective benchmark functions in CEC2017, including unimodal functions ( F 1 F 3 ), simple multimodal functions ( F 4 F 10 ), hybrid functions ( F 11 F 20 ), and composition functions ( F 21 F 30 ). In order to further verify the improvement effect of the MSHHO algorithm, 10 benchmark functions ( F 1 F 3 F 5 F 7 F 14 F 15 F 18 F 21 F 24 , and  F 30 ) with different characteristics were selected for testing in the experiment, and their characteristics are shown in Table 4. Each algorithm was also run 30 times in the experiment, with a maximum number of iterations of 1000 and a population size of 100. The experimental results are shown in Table 5.

4.2. Results Analysis

In the experiment with CEC2005 as the test set, the unimodal functions $F_1$–$F_7$ were used to test the exploitation capability of the algorithm.
From the experimental results, it can be seen that for the test functions $F_1$–$F_4$, MSHHO could directly find the best value of zero, and HHO had the second-best performance, while the SCA, SSA, and WOA performed poorly to varying degrees. For $F_5$ and $F_6$, MSHHO performed best in terms of both average and best results and with much higher accuracy than the other algorithms. For $F_7$, MSHHO performed similarly to HHO, but numerically, MSHHO had a slightly better mean, optimal value, and stability and performed significantly better than the other comparison algorithms. Overall, among all unimodal test functions, MSHHO had the best performance, stable results, and significantly better optimization than the comparison algorithms.
$F_8$–$F_{23}$ are multimodal and fixed-dimension multimodal functions used to evaluate the exploration capability of the algorithm. The experimental results show that for functions $F_8$–$F_{23}$, compared with the other algorithms, MSHHO could achieve the best optimization effect on most functions and could find the optimal value on many of them, such as $F_9$, $F_{11}$, $F_{14}$, $F_{16}$, $F_{17}$, $F_{18}$, and $F_{19}$. On $F_{17}$ and $F_{19}$, the stability of MSHHO was slightly inferior to that of the SSA. Overall, the combined performance of MSHHO on the multimodal test functions was still the best.
In the experiments with CEC2017 as the test function, it can be seen that MSHHO performed well in the hybrid functions and composition functions, both of which could obtain the best optimal and mean values with good stability. In unimodal functions and simple multimodal functions, although the performance was not the best, it had a great improvement effect compared with the original HHO.
To visualize the convergence performance of MSHHO, the iterative convergence curves of the test functions were experimentally plotted, and the convergence plots of some of the test functions are shown in Figure 6, Figure 7, Figure 8 and Figure 9. As can be seen from the figures, both in terms of the speed of convergence and the accuracy of convergence, MSHHO outperformed the other comparison algorithms. It showed good performance not only in the unimodal test functions but also the multimodal functions. The box graph shows that MSHHO also performed better in terms of stability compared with the other algorithms.

4.3. Nonparametric Statistical Analysis

In order to analyze the test results of each experiment more precisely and to avoid chance influencing the validation of the experimental results, the results of 30 independent runs of the 6 algorithms on the 33 test functions were subjected to the Wilcoxon rank sum test at a significance level of 0.05 to identify significant discrepancies between the results of the comparison algorithms and MSHHO. If the p-value of the rank sum test was greater than 0.05, then there was no significant difference between the two results; otherwise, the results of the two algorithms were significantly different on the whole. The results of the rank sum test are shown in Table 6, where NaN indicates that the two groups of samples were the same. For the CEC2005 benchmark functions, the results in the table show that MSHHO was significantly different from GWO, the SCA, and the SSA in all 23 functions, from HHO in 22 functions, and from the WOA in 19 functions. Among the 10 benchmark functions in CEC2017, MSHHO was significantly different from the SCA in all functions, from HHO and the WOA in 9 functions, from GWO in 8 functions, and from the SSA in 5 functions. Therefore, it can be concluded that there was a statistically significant difference in the optimization performance of MSHHO compared with the other algorithms, and the MSHHO algorithm performed significantly better.
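For reproducibility, the pairwise comparison described above can be carried out with SciPy's rank-sum test; the helper below is a sketch, with the 0.05 significance level taken from this section.

```python
from scipy.stats import ranksums

def compare_runs(mshho_runs, rival_runs, alpha=0.05):
    """Wilcoxon rank-sum test between two samples of 30 independent run results.

    Returns the p-value and True if the two samples differ significantly at level alpha.
    """
    _, p = ranksums(mshho_runs, rival_runs)
    return p, p < alpha
```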

4.4. Comparison with Other Improved Strategies of the HHO Algorithm

To better illustrate the improvement of the algorithm in terms of optimization performance, the experimental results of the chaotic elite Harris hawks optimization (CEHHO) algorithm in [14] were selected for comparison with the MSHHO algorithm proposed in this paper. The same parameters as in [14] were used, namely a population size of 50 and a maximum of 300 iterations, and 17 common test functions were selected for the experiments. The comparison results are shown in Table 7.
From the results in the table, it is apparent that for functions  F 9 F 10 , and  F 11 , both optimization algorithms could reach the optimal results with a standard deviation of zero, while for  F 16 F 17 , and  F 18 , both algorithms found the optimal values as well. The standard deviation of MSHHO was smaller, and the algorithm was more stable in comparison. For the other functions, MSHHO had better performance in terms of both mean and standard deviation.

4.5. Experimental Analysis of Solving High-Dimensional Functions

Based on the experimental results above, we verified the optimization effectiveness of the improved MSHHO algorithm in low-dimensional functions. However, most algorithms would be much less effective or even fail when solving complex problems in high-dimensional functions.
In order to verify the practicality of MSHHO in high-dimensional problems, the original HHO algorithm and the improved MSHHO algorithm were selected for comparison experiments on 100-dimensional and 500-dimensional  F 1 F 13 functions, respectively, and the experimental results are shown in Table 8.
From the results in Table 8, it can be seen that the improved MSHHO algorithm still had better result values than the original HHO algorithm for each test function in 100 and 500 dimensions with good stability, and the MSHHO algorithm still found the optimal results in functions  F 1 F 4 .

5. Engineering Optimization Problems

5.1. Pressure Vessel Design Problem

The pressure vessel is an essential piece of equipment in industrial production whose main function is to store liquids or gases under a certain pressure. The pressure vessel design problem is a nonlinear programming problem with multiple constraints. The objective of the problem is to minimize the cost of manufacturing the pressure vessel. The design of the pressure vessel is shown in Figure 10.
The pressure vessel is composed of a cylindrical section capped by a hemispherical head at each end. L is the length of the cylindrical section, R is the radius of the inner wall of the cylindrical section, S is the wall thickness of the cylindrical section, and H is the wall thickness of the hemispherical heads; L, R, S, and H are the four variables to be optimized in the pressure vessel design problem. The objective function and constraints of the problem can be expressed as follows:
$$x=[x_1,x_2,x_3,x_4]=[S,H,R,L]$$
$$\min f(x)=0.6224x_1x_3x_4+1.7781x_2x_3^2+3.1661x_1^2x_4+19.84x_1^2x_3$$
subject to
$$\begin{aligned}
g_1(x)&=-x_1+0.0193x_3\le 0\\
g_2(x)&=-x_2+0.00954x_3\le 0\\
g_3(x)&=-\pi x_3^2x_4-\tfrac{4}{3}\pi x_3^3+1296000\le 0\\
g_4(x)&=x_4-240\le 0\\
&0\le x_i\le 100,\quad i=1,2\\
&10\le x_i\le 200,\quad i=3,4
\end{aligned}$$
The problem was solved by the MSHHO and HHO algorithms [11] as well as the SCA [4], SSA [5], and WOA [7] with the same relevant parameter settings for the algorithms, and the results are shown in Table 9. It can be seen that MSHHO demonstrated the best results in solving this problem.
The convergence curve and box plot of the problem are shown in Figure 11. It can be visually seen that the MSHHO performed well in terms of accuracy and stability.
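A sketch of a penalized objective that turns this constrained problem into the unconstrained form expected by MSHHO is given below; the static penalty factor is an assumption made for illustration, as the paper does not state its constraint-handling scheme.

```python
import numpy as np

def pressure_vessel_cost(x, penalty=1e6):
    """Penalized cost of the pressure vessel design, with x = [S, H, R, L] = [x1, x2, x3, x4]."""
    x1, x2, x3, x4 = x
    cost = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)
    g = [
        -x1 + 0.0193 * x3,                                                  # g1
        -x2 + 0.00954 * x3,                                                 # g2
        -np.pi * x3 ** 2 * x4 - (4.0 / 3.0) * np.pi * x3 ** 3 + 1296000.0,  # g3
        x4 - 240.0,                                                         # g4
    ]
    return cost + penalty * sum(max(0.0, gi) for gi in g)                   # add violation penalty
```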

5.2. Compression Spring Design Problem

The optimal design of the spring must achieve the minimum mass while satisfying constraints on the shear stress, surge frequency, minimum deflection, and other relevant design criteria. The design is shown in Figure 12 and includes three design variables, namely the spring wire diameter d, the mean spring coil diameter D, and the number of active spring coils N. The objective function and constraints are described as follows:
$$x=[x_1,x_2,x_3]=[d,D,N]$$
$$\min f(x)=(x_3+2)x_2x_1^2$$
subject to
$$\begin{aligned}
g_1(x)&=1-\frac{x_2^3x_3}{71785x_1^4}\le 0\\
g_2(x)&=\frac{4x_2^2-x_1x_2}{12566(x_2x_1^3-x_1^4)}+\frac{1}{5108x_1^2}-1\le 0\\
g_3(x)&=1-\frac{140.45x_1}{x_2^2x_3}\le 0\\
g_4(x)&=\frac{x_1+x_2}{1.5}-1\le 0\\
&0.05\le x_1\le 2.00,\quad 0.25\le x_2\le 1.30,\quad 2.00\le x_3\le 15.0
\end{aligned}$$
The problem was solved by the MSHHO and HHO algorithms [11], as well as the SCA [4], SSA [5], and WOA [7] with the same relevant parameter settings for the algorithms, and the results are shown in Table 10, while the convergence curve and box plot of the spring design problem are shown in Figure 13. It can be seen that MSHHO demonstrated the best results in solving this problem.
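The compression spring problem can be encoded in the same way; the sketch below again uses an assumed static penalty for the constraints.

```python
import numpy as np

def spring_mass(x, penalty=1e6):
    """Penalized mass of the compression spring design, with x = [d, D, N] = [x1, x2, x3]."""
    x1, x2, x3 = x
    mass = (x3 + 2.0) * x2 * x1 ** 2
    g = [
        1.0 - (x2 ** 3 * x3) / (71785.0 * x1 ** 4),                          # g1
        (4.0 * x2 ** 2 - x1 * x2) / (12566.0 * (x2 * x1 ** 3 - x1 ** 4))
            + 1.0 / (5108.0 * x1 ** 2) - 1.0,                                # g2
        1.0 - 140.45 * x1 / (x2 ** 2 * x3),                                  # g3
        (x1 + x2) / 1.5 - 1.0,                                               # g4
    ]
    return mass + penalty * sum(max(0.0, gi) for gi in g)                    # add violation penalty
```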

6. Conclusions

In this paper, a multi-strategy improvement method was presented to address the weaknesses of the original Harris hawks optimization algorithm, such as low accuracy, slow convergence, and a tendency to become trapped in local optima. The improved algorithm first uses a Sobol sequence to initialize the population and improve the population diversity. During the population update iterations, elite opposition-based learning is used to increase the population diversity and quality. The energy update strategy of the original algorithm is improved to balance the exploration and exploitation capability of the algorithm in a nonlinear update manner. Finally, the Gaussian walk learning strategy is incorporated to prevent the algorithm from stagnating and falling into a local optimum.
To validate the performance of MSHHO, 23 benchmark functions with unimodal, multimodal, and fixed-dimension characteristics from CEC2005 and 10 benchmark functions covering unimodal, simple multimodal, hybrid, and composition functions from CEC2017 were selected for comparison experiments with other algorithms, including HHO and GWO as well as the SCA, SSA, and WOA. Subsequently, comparisons with the basic HHO algorithm were made in 100 and 500 dimensions to verify the practicality of the algorithm on high-dimensional problems. The results show that the improved algorithm has good performance in terms of search accuracy, convergence speed, and stability, which effectively compensates for the defects of the original algorithm. In addition, the MSHHO algorithm was applied to solve two engineering application problems in this paper. The experimental results show that MSHHO achieved the best results compared with the other algorithms.
In future work, we will focus on applying the MSHHO algorithm to large-scale, complex multi-objective optimization problems and practical engineering applications, such as microgrid scheduling optimization.

Author Contributions

Conceptualization, F.T. and J.W.; methodology, F.T.; software, F.T. and F.C.; validation, F.T., J.W. and F.C.; formal analysis, F.T.; investigation, F.T.; resources, J.W.; data curation, F.T.; writing—original draft preparation, F.T.; writing—review and editing, F.T.; visualization, F.T.; supervision, J.W.; project administration, J.W.; funding acquisition, J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under grant number 61772031 and the Natural Science Foundation of Hunan Province under grant number 2020JJ4753.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Appendix A.1

Table A1, Table A2 and Table A3 provide specific information on the benchmark function.
Table A1. Information on unimodal benchmark functions.
Function | Dimension | Range | f_min
$F_1(x)=\sum_{i=1}^{n}x_i^2$ | 30 | [−100, 100] | 0
$F_2(x)=\sum_{i=1}^{n}|x_i|+\prod_{i=1}^{n}|x_i|$ | 30 | [−10, 10] | 0
$F_3(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{i}x_j\right)^2$ | 30 | [−100, 100] | 0
$F_4(x)=\max_i\{|x_i|,\ 1\le i\le n\}$ | 30 | [−100, 100] | 0
$F_5(x)=\sum_{i=1}^{n-1}\left[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\right]$ | 30 | [−30, 30] | 0
$F_6(x)=\sum_{i=1}^{n}\left([x_i+0.5]\right)^2$ | 30 | [−100, 100] | 0
$F_7(x)=\sum_{i=1}^{n}ix_i^4+\mathrm{random}[0,1)$ | 30 | [−1.28, 1.28] | 0
Table A2. Information on multimodal benchmark functions.
Function | Dimension | Range | f_min
$F_8(x)=\sum_{i=1}^{n}-x_i\sin\left(\sqrt{|x_i|}\right)$ | 30 | [−500, 500] | −418.9829n
$F_9(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | 30 | [−5.12, 5.12] | 0
$F_{10}(x)=-20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+e$ | 30 | [−32, 32] | 0
$F_{11}(x)=\frac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right)+1$ | 30 | [−600, 600] | 0
$F_{12}(x)=\frac{\pi}{n}\left\{10\sin(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_n-1)^2\right\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, where $y_i=1+\frac{x_i+1}{4}$ and $u(x_i,a,k,m)=\begin{cases}k(x_i-a)^m, & x_i>a\\ 0, & -a<x_i<a\\ k(-x_i-a)^m, & x_i<-a\end{cases}$ | 30 | [−50, 50] | 0
$F_{13}(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{n}(x_i-1)^2\left[1+\sin^2(3\pi x_i+1)\right]+(x_n-1)^2\left[1+\sin^2(2\pi x_n)\right]\right\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | 30 | [−50, 50] | 0
Table A3. Information on fixed-dimension benchmark functions.
Function | Dimension | Range | f_min
$F_{14}(x)=\left(\frac{1}{500}+\sum_{j=1}^{25}\frac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\right)^{-1}$ | 2 | [−65, 65] | 1
$F_{15}(x)=\sum_{i=1}^{11}\left[a_i-\frac{x_1(b_i^2+b_ix_2)}{b_i^2+b_ix_3+x_4}\right]^2$ | 4 | [−5, 5] | 0.00030
$F_{16}(x)=4x_1^2-2.1x_1^4+\frac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316
$F_{17}(x)=\left(x_2-\frac{5.1}{4\pi^2}x_1^2+\frac{5}{\pi}x_1-6\right)^2+10\left(1-\frac{1}{8\pi}\right)\cos x_1+10$ | 2 | [−5, 5] | 0.398
$F_{18}(x)=\left[1+(x_1+x_2+1)^2\left(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2\right)\right]\times\left[30+(2x_1-3x_2)^2\left(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2\right)\right]$ | 2 | [−2, 2] | 3
$F_{19}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\right)$ | 3 | [0, 1] | −3.86
$F_{20}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\right)$ | 6 | [0, 1] | −3.32
$F_{21}(x)=-\sum_{i=1}^{5}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.1532
$F_{22}(x)=-\sum_{i=1}^{7}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.4028
$F_{23}(x)=-\sum_{i=1}^{10}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.5363

References

  1. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  2. Shareef, H.; Ibrahim, A.A.; Mutlag, A.H. Lightning search algorithm. Appl. Soft Comput. 2015, 36, 315–333. [Google Scholar] [CrossRef]
  3. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  4. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  5. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  6. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm–A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110, 151–166. [Google Scholar] [CrossRef]
  7. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  8. Yang, X.S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar]
  9. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  10. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  11. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  12. Qu, C.; He, W.; Peng, X.; Peng, X. Harris hawks optimization with information exchange. Appl. Math. Model. 2020, 84, 52–75. [Google Scholar] [CrossRef]
  13. Liu, X.; Liang, T. Harris hawk optimization algorithm based on square neighborhood and random array. Control. Decis. 2021, 37, 2467–2476. [Google Scholar]
  14. Tang, A.; Han, T.; Xu, D. Chaotic elite Harris hawks optimization algorithm. J. Comput. Appl. 2021, 41, 2265. [Google Scholar]
  15. Kaveh, A.; Rahmani, P.; Eslamlou, A.D. An efficient hybrid approach based on Harris Hawks optimization and imperialist competitive algorithm for structural optimization. Eng. Comput. 2021, 38, 1555–1583. [Google Scholar] [CrossRef]
  16. Atashpaz-Gargari, E.; Lucas, C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 4661–4667. [Google Scholar]
  17. Elgamal, Z.M.; Yasin, N.B.M.; Tubishat, M.; Alswaitti, M.; Mirjalili, S. An improved harris hawks optimization algorithm with simulated annealing for feature selection in the medical field. IEEE Access 2020, 8, 186638–186652. [Google Scholar] [CrossRef]
  18. Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  19. Li, W.; Shi, R.; Dong, J. Harris hawks optimizer based on the novice protection tournament for numerical and engineering optimization problems. Appl. Intell. 2023, 53, 6133–6158. [Google Scholar] [CrossRef]
  20. Houssein, E.H.; Hosney, M.E.; Oliva, D.; Mohamed, W.M.; Hassaballah, M. A novel hybrid Harris hawks optimization and support vector machines for drug design and discovery. Comput. Chem. Eng. 2020, 133, 106656. [Google Scholar] [CrossRef]
  21. Wunnava, A.; Naik, M.K.; Panda, R.; Jena, B.; Abraham, A. A differential evolutionary adaptive Harris hawks optimization for two dimensional practical Masi entropy-based multilevel image thresholding. J. King Saud-Univ.-Comput. Inf. Sci. 2022, 34, 3011–3024. [Google Scholar] [CrossRef]
  22. Bratley, P.; Fox, B. Implementing Sobol’s quasirandom sequence generator (Algorithm 659). ACM Trans. Math. Softw. 2003, 29, 49–57. [Google Scholar]
  23. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC’06), Washington, DC, USA, 28–30 November 2005; Volume 1, pp. 695–701. [Google Scholar]
  24. Tubishat, M.; Idris, N.; Shuib, L.; Abushariah, M.A.; Mirjalili, S. Improved Salp Swarm Algorithm based on opposition based learning and novel local search algorithm for feature selection. Expert Syst. Appl. 2020, 145, 113122. [Google Scholar] [CrossRef]
  25. Ewees, A.A.; Abd Elaziz, M.; Houssein, E.H. Improved grasshopper optimization algorithm using opposition-based learning. Expert Syst. Appl. 2018, 112, 156–172. [Google Scholar] [CrossRef]
  26. Peng, H.; Zeng, Z.; Deng, C.; Wu, Z. Multi-strategy serial cuckoo search algorithm for global optimization. Knowl.-Based Syst. 2021, 214, 106729. [Google Scholar] [CrossRef]
  27. Zhu, X.; Ghahramani, Z.; Lafferty, J.D. Semi-supervised learning using gaussian fields and harmonic functions. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), Washington, DC, USA, 21–24 August 2003; pp. 912–919. [Google Scholar]
  28. Farahani, S.M.; Abshouri, A.A.; Nasiri, B.; Meybodi, M. A Gaussian firefly algorithm. Int. J. Mach. Learn. Comput. 2011, 1, 448. [Google Scholar] [CrossRef] [Green Version]
  29. Suganthan, P.N.; Hansen, N.; Liang, J.J.; Deb, K.; Chen, Y.P.; Auger, A.; Tiwari, S. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL Rep. 2005, 2005005, 2005. [Google Scholar]
  30. García, S.; Molina, D.; Lozano, M.; Herrera, F. A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behaviour: A case study on the CEC’2005 special session on real parameter optimization. J. Heuristics 2009, 15, 617–644. [Google Scholar] [CrossRef]
Figure 1. Comparison of Sobol population generation and random population generation.
Figure 2. Iterative change graph of E.
Figure 3. Iterative change graph of $E_1$.
Figure 4. Graph of wandering step length change control.
Figure 5. The flow chart of MSHHO.
Figure 6. Qualitative results of F5, F6, F7, and F8 (CEC2005).
Figure 7. Qualitative results of F10, F12, F13, and F14 (CEC2005).
Figure 8. Qualitative results of F15, F17, F19, and F20 (CEC2005).
Figure 9. Qualitative results of F18, F21, F24, and F30 (CEC2017).
Figure 10. Pressure vessel design problem.
Figure 11. Qualitative results of pressure vessel design problem.
Figure 12. Spring design problem.
Figure 13. Qualitative results of spring design problem.
Table 1. Parameter sets of the algorithms.
Algorithm | Parameters
GWO | a (variable) decreases linearly from 2 to 0; r1, r2 (random numbers) ∈ [0, 1]
SSA | c2, c3 (random numbers) ∈ [0, 1]
SCA | a (constant) = 2
WOA | a (variable) decreases linearly from 2 to 0; a2 (variable) decreases linearly from 2 to 0
Table 2. Results of CEC2005 benchmark functions.
Fun | Item | MSHHO | HHO | GWO | SCA | SSA | WOA
F1Ave0.00E+001.86E−991.12E−271.84E+012.01E−071.64E−73
Best0.00E+001.36E−1163.16E−294.46E−022.50E−087.96E−91
Std0.00E+005.80E−991.91E−274.25E+012.80E−076.89E−73
F2Ave0.00E+004.53E−499.67E−174.13E−022.30E+009.59E−50
Best0.00E+003.91E−581.51E−174.04E−058.17E−021.41E−58
Std0.00E+002.44E−486.31E−171.17E−011.56E+002.94E−49
F3Ave0.00E+001.00E−691.68E−058.64E+031.80E+034.34E+04
Best0.00E+001.32E−951.89E−085.50E+023.28E+021.69E+04
Std0.00E+005.50E−694.68E−054.43E+031.27E+031.45E+04
F4Ave0.00E+002.18E−475.89E−073.42E+011.14E+014.05E+01
Best0.00E+007.44E−573.16E−081.59E+014.69E+001.93E−03
Std0.00E+001.19E−463.23E−079.96E+004.13E+002.68E+01
F5Ave2.73E−061.22E−022.70E+013.57E+042.65E+022.80E+01
Best4.98E−096.77E−062.61E+013.36E+012.73E+012.75E+01
Std4.55E−061.76E−026.27E−016.43E+043.70E+023.98E−01
F6Ave9.27E−092.00E−047.53E−011.51E+013.40E−074.92E−01
Best6.52E−121.58E−078.51E−054.37E+002.77E−085.80E−02
Std1.55E−082.75E−044.13E−011.37E+018.01E−073.22E−01
F7Ave6.17E−051.28E−041.88E−038.26E−021.72E−012.97E−03
Best2.04E−061.13E−068.16E−047.69E−035.79E−026.12E−05
Std4.51E−051.73E−049.00E−048.56E−026.42E−023.80E−03
F8Ave−12,537.7−12,493.4−5872.01−3826.45−7321.88−10,872.7
Best−12,569.5−12,569.5−7039.92−4714.89−8719.91−12,563.3
Std1.74E+023.74E+027.59E+023.14E+028.06E+021.64E+03
F9Ave0.00E+000.00E+003.21E+003.15E+015.65E+011.89E−15
Best0.00E+000.00E+000.00E+003.53E−011.79E+010.00E+00
Std0.00E+000.00E+004.32E+003.42E+012.41E+011.04E−14
F10Ave8.88E−168.88E−169.65E−149.77E+002.69E+004.44E−15
Best8.88E−168.88E−167.55E−142.32E−021.65E+008.88E−16
Std0.00E+000.00E+001.85E−149.50E+006.90E−012.64E−15
F11Ave0.00E+000.00E+008.17E−039.35E−011.80E−021.18E−02
Best0.00E+000.00E+000.00E+001.17E−015.11E−040.00E+00
Std0.00E+000.00E+001.23E−022.69E−011.35E−024.53E−02
F12Ave2.58E−097.55E−065.32E−024.12E+047.10E+002.26E−02
Best3.05E−112.78E−081.32E−027.84E−012.63E+007.38E−03
Std4.14E−091.35E−052.45E−022.18E+054.09E+001.71E−02
Table 3. Results of CEC2005 benchmark functions.
Fun | Item | MSHHO | HHO | GWO | SCA | SSA | WOA
F13Ave2.86E−089.53E−056.28E−014.14E+041.24E+015.56E−01
Best3.40E−104.10E−093.62E−017.40E+006.43E−021.02E−01
Std4.29E−081.23E−042.06E−017.62E+041.26E+012.87E−01
F14Ave0.9980041.228635.850332.117661.031142.96061
Best0.9980040.9980040.9980040.9980040.9980040.998004
Std2.92E−169.23E−014.84E+001.90E+001.81E−013.31E+00
F15Ave0.00031060.00035400.00513310.00101040.00220310.0007264
Best0.00030750.00030930.00030750.00033300.00058010.0003134
Std1.42E−054.40E−058.55E−033.73E−044.94E−034.69E−04
F16Ave−1.03163−1.03163−1.03163−1.03156−1.03163−1.03163
Best−1.03163−1.03163−1.03163−1.03163−1.03163−1.03163
Std1.23E−131.34E−092.28E−087.53E−051.98E−149.76E−10
F17Ave0.3978870.3978950.3978880.3993360.3978870.397903
Best0.3978870.3978870.3978870.39790.3978870.397887
Std1.09E−111.37E−059.13E−070.001233551.20E−144.34E−05
F18Ave333.000043.0001433.00018
Best333333
Std5.99E−133.38E−075.20E−052.98E−041.74E−123.37E−04
F19Ave−3.86278−3.86063−3.86161−3.85458−3.86278−3.85393
Best−3.86278−3.86278−3.86278−3.86093−3.86278−3.86278
Std1.69E−074.26E−032.31E−032.45E−035.41E−101.82E−02
F20Ave−3.32199−3.05318−3.27542−2.89641−3.2164−3.22982
Best−3.32199−3.24908−3.32199−3.12557−3.322−3.32197
Std7.06E−069.88E−026.34E−023.92E−015.50E−021.03E−01
F21Ave−5.90486−5.21017−9.56373−3.03801−7.48769−10.1446
Best−10.1532−9.80978−10.1532−7.11165−10.1532−10.1531
Std1.76E+008.69E−011.83E+001.96E+003.59E+002.91E+00
F22Ave−6.3279−5.24959−10.0479−3.06053−8.91931−10.3984
Best−10.4029−10.1296−10.4026−6.68804−10.4029−10.4013
Std2.29E+009.22E−011.34E+001.68E+002.78E+002.96E+00
F23Ave−7.29165−5.46617−10.535−3.35072−8.67822−10.535
Best−10.5364−10.4401−10.536−6.52536−10.5364−10.535
Std2.69E+001.30E+006.93E−041.72E+003.20E+003.46E+00
Table 4. The CEC2017 benchmark functions selected for the experiment.
Type | Function | Dim | Range | f_min
Unimodal Functions | Shifted and Rotated Bent Cigar Function (CEC2017-01) | 10 | [−100,100] | 100
Unimodal Functions | Shifted and Rotated Zakharov Function (CEC2017-03) | 10 | [−100,100] | 300
Simple Multimodal Functions | Shifted and Rotated Rastrigin’s Function (CEC2017-05) | 10 | [−100,100] | 500
Simple Multimodal Functions | Shifted and Rotated Lunacek Bi_Rastrigin Function (CEC2017-07) | 10 | [−100,100] | 700
Hybrid Functions | Hybrid Function 4 (N = 4) (CEC2017-14) | 10 | [−100,100] | 1400
Hybrid Functions | Hybrid Function 5 (N = 4) (CEC2017-15) | 10 | [−100,100] | 1500
Hybrid Functions | Hybrid Function 6 (N = 5) (CEC2017-18) | 10 | [−100,100] | 1800
Composition Functions | Composition Function 1 (N = 3) (CEC2017-21) | 10 | [−100,100] | 2100
Composition Functions | Composition Function 4 (N = 4) (CEC2017-24) | 10 | [−100,100] | 2400
Composition Functions | Composition Function 10 (N = 3) (CEC2017-30) | 10 | [−100,100] | 3000
Table 5. Results of CEC2017 benchmark functions.
Fun | Item | MSHHO | HHO | GWO | SCA | SSA | WOA
CEC2017-01Ave49,440.6181,8772.48E+065.99E+083183.02338,954
Best3140.5963,447.73764.241.40E+08100.98423,441.5
Std108,718117,5948.64E+062.35E+083397.61755,780
CEC2017-03Ave300.329300.732625.499975.373300573.145
Best300.037300.165336.522495.131300308.754
Std0.2728030.270405496.478371.9534.94E−10289.931
CEC2017-05Ave519.841538.494511.584542.678524.008551.057
Best502.985511.036505.978527.153508.955526.905
Std10.155210.69043.681357.2972211.174615.8918
CEC2017-07Ave733.654772.701725.715767.856735.3781.678
Best718.843734.95709.702753.853718.987730.015
Std6.7090617.74229.620297.8237210.795827.0508
CEC2017-14Ave1478.581510.072341.041555.821484.171527.32
Best1433.71471.911433.951462.981442.561439.08
Std24.726625.67821573.3451.907628.759845.6609
CEC2017-15Ave1664.991934.252292.481867.081940.313336.1
Best1522.891558.061535.81556.091590.511644.25
Std98.0525510.4811068.21234.834434.4661660.81
CEC2017-18Ave14,985.716,253.925882.775,409.416,880.717,233.1
Best2206.42398.883143.4423,592.52254.642129.58
Std12612.110,061.817,274.939,931.811,704.213,153.9
CEC2017-21Ave2204.272315.22303.952219.692257.212299.47
Best2201.182202.12201.512204.122202.022203.47
Std1.7937358.799835.582932.628660.699661.1923
CEC2017-24Ave2503.622756.832743.722760.062738.662767.97
Best2500.032500.862729.32535.922501.052503.79
Std19.301106.4559.33964.015145.691152.9464
CEC2017-30Ave102,074953,406528643591,396198,622523,855
Best5427.236507.985859.8991465.36371.175601.07
Std238,4471.37E+06679,480463,886377,547590,372
Table 6. Results of Wilcoxon rank sum test for different algorithms.
Fun | HHO | GWO | SCA | SSA | WOA
CEC2005-F11.21E−121.21E−121.21E−121.21E−121.21E−12
CEC2005-F21.21E−121.21E−121.21E−121.21E−121.21E−12
CEC2005-F31.21E−121.21E−121.21E−121.21E−121.21E−12
CEC2005-F41.21E−121.21E−121.21E−121.21E−121.21E−12
CEC2005-F54.50E−113.02E−113.02E−113.02E−113.02E−11
CEC2005-F63.02E−113.02E−112.61E−103.02E−113.02E−11
CEC2005-F70.0933413.02E−113.02E−113.02E−111.33E−10
CEC2005-F81.85E−093.00E−113.00E−113.00E−111.46E−10
CEC2005-F9NaN4.53E−121.21E−121.21E−120.333711
CEC2005-F10NaN1.13E−121.21E−121.21E−121.22E−08
CEC2005-F11NaN6.61E−051.21E−121.21E−120.160802
CEC2005-F123.02E−113.02E−113.02E−113.02E−113.02E−11
CEC2005-F132.37E−103.02E−113.02E−113.02E−113.02E−11
CEC2005-F142.16E−112.15E−111.79E−052.16E−112.16E−11
CEC2005-F153.47E−108.84E−073.02E−113.34E−115.49E−11
CEC2005-F161.14E−092.92E−112.19E−032.92E−112.92E−11
CEC2005-F171.63E−053.01E−115.42E−073.01E−111.09E−10
CEC2005-F181.20E−083.01E−116.00E−083.01E−113.01E−11
CEC2005-F193.02E−113.69E−119.92E−113.02E−116.07E−11
CEC2005-F203.02E−112.20E−076.77E−053.02E−113.34E−11
CEC2005-F213.82E−104.08E−050.02812873.82E−100.304177
CEC2005-F223.02E−112.84E−049.53E−072.92E−090.53951
CEC2005-F232.03E−091.95E−031.11E−049.06E−081.56E−02
CEC2017-F13.69E−110.283783.02E−115.57E−103.50E−09
CEC2017-F32.38E−073.02E−113.02E−113.02E−113.02E−11
CEC2017-F51.36E−072.53E−046.12E−100.529783.65E−08
CEC2017-F72.37E−102.26E−031.46E−100.84181.78E−10
CEC2017-F142.13E−051.18E−045.97E−090.371082.49E−06
CEC2017-F153.03E−041.02E−045.27E−051.34E−052.61E−10
CEC2017-F180.363220.0241572.87E−100.403540.53951
CEC2017-F214.68E−084.31E−084.62E−100.728271.96E−10
CEC2017-F244.50E−113.02E−113.69E−115.57E−103.34E−11
CEC2017-F309.03E−040.706177.11E−090.008240.00111
Table 7. Comparison of the results of different improved algorithms.
Fun | CEHHO Ave | CEHHO Std | MSHHO Ave | MSHHO Std
CEC2005-F13.11E−829.82E−820.00E+000.00E+00
CEC2005-F24.57E−402.50E−390.00E+000.00E+00
CEC2005-F31.59E−598.70E−590.00E+000.00E+00
CEC2005-F43.03E−441.04E−430.00E+000.00E+00
CEC2005-F56.62E−048.52E−047.31E−062.73E−05
CEC2005-F66.04E−067.14E−061.73E−189.41E−18
CEC2005-F71.32E−041.18E−047.48E−059.25E−05
CEC2005-F90.00E+000.00E+000.00E+000.00E+00
CEC2005-F108.88E−160.00E+008.88E−160.00E+00
CEC2005-F110.00E+000.00E+000.00E+000.00E+00
CEC2005-F125.41E−076.13E−074.43E−122.00E−11
CEC2005-F141.16E+003.77E−019.98E−019.06E−14
CEC2005-F153.28E−041.49E−053.08E−041.39E−06
CEC2005-F16−1.03E+002.88E−10−1.03E+006.86E−12
CEC2005-F173.00E+003.59E−083.00E+001.37E−12
CEC2005-F18−3.86E+003.16E−04−3.86E+007.00E−07
CEC2005-F20−3.20E+008.75E−02−3.32E+007.60E−06
Table 8. Experimental analysis of solving high-dimensional functions.
Fun | Dim | HHO Ave | HHO Std | MSHHO Ave | MSHHO Std
CEC2005-F11002.40E−948.30E−940.00E+000.00E+00
5004.64E−942.53E−930.00E+000.00E+00
CEC2005-F21003.08E−501.21E−490.00E+000.00E+00
5004.11E−491.22E−480.00E+000.00E+00
CEC2005-F31002.23E−549.48E−540.00E+000.00E+00
5001.46E−307.38E−300.00E+000.00E+00
CEC2005-F41004.93E−472.68E−460.00E+000.00E+00
5003.30E−481.48E−470.00E+000.00E+00
CEC2005-F51005.06E−026.33E−021.75E−044.20E−04
5002.42E−012.62E−012.56E−023.82E−02
CEC2005-F61004.59E−046.12E−047.23E−057.42E−05
5001.82E−031.84E−032.70E−035.80E−03
CEC2005-F71001.55E−041.69E−048.85E−051.15E−04
5001.91E−041.89E−041.91E−041.89E−04
CEC2005-F8100−4.19E+041.47E+00−4.19E+041.33E−03
500−2.09E+052.65E+01−2.09E+056.97E−06
CEC2005-F91000.00E+000.00E+000.00E+000.00E+00
5000.00E+000.00E+000.00E+000.00E+00
CEC2005-F101008.88E−160.00E+008.88E−160.00E+00
5008.88E−160.00E+008.88E−160.00E+00
CEC2005-F111000.00E+000.00E+000.00E+000.00E+00
5000.00E+000.00E+000.00E+000.00E+00
CEC2005-F121005.74E−067.54E−069.23E−071.12E−06
5003.43E−063.78E−062.72E−062.30E−06
CEC2005-F131001.77E−041.63E−044.65E−053.73E−05
5007.29E−047.97E−043.95E−043.58E−04
Table 9. Results of pressure vessel design problem for different algorithms.
Algorithm | S | H | R | L | Ave | Best | Std
MSHHO1.089954.04e−1065.1354710.38712530.82302.55384.066
HHO0.48292041.29134186.9013033.732302.55427.569
SCA0040.391282005199.82310.761590.27
SSA0.55132043.5991158.8883495.342302.55379.247
WOA1.15242065.22523104074.132302.562367.06
Table 10. Results of spring design problem for different algorithms.
Algorithm | d | D | N | Ave | Best | Std
MSHHO0.139041.2957811.97153.673493.661890.022855
HHO0.138651.2842312.16013.690873.661890.029172
SCA0.135311.1921513.93613.73713.667880.058998
SSA0.135051.1847913.92853.714343.661920.123823
WOA0.136221.2171213.30523.700613.661890.028036
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
