Article

Improved Whale Optimization Algorithm Based on Fusion Gravity Balance

1 School of Information Engineering, Jiangxi University of Science and Technology, Ganzhou 341000, China
2 College of Mathematics and Computer Science, Zhejiang Normal University, Jinhua 321000, China
* Author to whom correspondence should be addressed.
Axioms 2023, 12(7), 664; https://doi.org/10.3390/axioms12070664
Submission received: 17 May 2023 / Revised: 27 June 2023 / Accepted: 28 June 2023 / Published: 4 July 2023

Abstract

To address the shortcomings of the whale optimization algorithm (WOA) in dealing with optimization problems, and to further improve its accuracy and stability, we propose an enhanced regenerative whale optimization algorithm based on gravity balance (GWOA). In the initial stage, a nonlinear time-varying factor and an inertia weight strategy are introduced to change the foraging trajectory and exploration range, which improves search efficiency and diversity. In the random walk stage and the encircling stage, excellent solutions are protected by the gravitational balance strategy to ensure a high quality of solutions. To prevent the algorithm from rapidly converging to a local extreme value and failing to jump out, a regeneration mechanism is introduced to help the whale population escape from local optima and find better solutions within the search interval through reasonable position updating. Comparisons with six algorithms on 16 benchmark functions, the contribution values of each strategy and the Wilcoxon rank sum test show that the GWOA performs well on 30-dimensional and 100-dimensional test functions and in practical applications. In general, the GWOA has better optimization ability. In the strategy contribution experiments, compared with the WOA, the indices of the strategies added at each stage were improved. Finally, the GWOA is applied to robot path planning and three classical engineering problems, and its stability and applicability are verified.

1. Introduction

Optimization problems can be found in industrial manufacturing [1], engineering optimization [2,3], image processing [4], path planning [5,6], aerospace [7], medicine [8], construction project risk aversion [9] and other research fields. However, with the emergence of more complex optimization problems, traditional optimization schemes such as the gradient descent method and Newton's method cannot meet high efficiency standards in terms of time cost and solution accuracy. Therefore, through exploration and research, researchers have proposed a number of excellent swarm intelligence optimization algorithms inspired by the habits of natural organisms, which provide research directions for many scholars. A novel Particle Swarm Optimization (PSO) algorithm was proposed by Eberhart R et al. [10]. In 1991, Dorigo M designed and proposed the Ant Colony Optimization (ACO) algorithm, which was inspired by the pheromones produced during ant communication [11]. In 2014, Seyedali Mirjalili et al. [12] proposed the Grey Wolf Optimizer (GWO), with reasonable logic and simple parameters, inspired by the hunting method of wolves. Fathollahi-Fard et al. [13] developed a simple and intelligent single-solution Social Engineering Optimizer (SEO) in 2018. Xue et al. [14] proposed the Sparrow Search Algorithm (SSA), with simple parameters and population classification, based on the division of labor of sparrows seeking food in 2020. Compared with traditional optimization methods, swarm intelligence algorithms have advantages such as simple parameters, clear logic and unique mechanisms, and have been explored and studied by many scholars.
Seyedali Mirjalili et al. [15] designed and proposed a novel metaheuristic optimization algorithm, the Whale Optimization Algorithm (WOA), by observing the three-stage hunting behavior of humpback whales. This algorithm has clear logic, a simple structure, fast convergence and strong global and local search performance. According to the "no free lunch" theorem [16], no single algorithm can achieve good results on all optimization problems. At present, the WOA has gradually shown its advantages in solving many complex optimization problems. However, the WOA is easily trapped by local optima in the iterative process and cannot jump out of local extreme values. Moreover, because of its fast convergence speed and low solution accuracy, the WOA cannot achieve satisfactory results on many complex optimization problems.
To address the shortcomings of the WOA mentioned above, many scholars have conducted research on WOA variants to give them a stronger ability to solve optimization problems. Xiong et al. [17] put forward two new prey search strategies, which effectively balanced local search and global exploration, better solved the problems of stagnation and premature convergence, and obtained obvious advantages in the parameter extraction problem of photovoltaic models. Li et al. [18] introduced nonlinear adjustment parameters and Gaussian perturbation operators, achieving good results in both convergence rate and optimization accuracy, as well as good application results in locating the critical sliding surface of a soil slope. Shahraki et al. [19] proposed a unique pooling mechanism and combined it with three effective search strategies to obtain a binary improved whale optimization algorithm, which effectively solved the problem of COVID-19 medical detection. Yang et al. [20] introduced adaptive nonlinear inertia weights into the WOA to control the convergence rate, and adopted a finite mutation mechanism to improve convergence efficiency, showing superior performance in economic load dispatch problems. Shen et al. [21] showed that, combined with a multi-population evolution strategy, the WOA gains a unique global search ability and an excellent ability to jump out of local optima, performing well in various engineering design problems. Guo et al. [22] proposed that the wavelet mutation strategy and the social learning principle greatly improve global search and escape from local optima, with good application results in water resource demand prediction. Oliva et al. [23] used chaotic mapping to greatly improve the ability to search for optimal solutions, achieving excellent results in the parameter estimation of solar cells. Ning et al. [24] added a Gaussian variation method and a non-fixed penalty function method to the initial population and convergence factors, which improved global search ability, stability, convergence rate, convergence accuracy and robustness, and also worked well on more complex constrained optimization problems. Zhang et al. [25] improved algorithm performance to a great extent by using nonlinear adaptive weights and golden sine operators, obtaining fast global convergence while avoiding local optima. Yang et al. [26] adopted a multi-strategy approach integrating chaotic mapping, dynamic convergence factors, Levy flight mechanisms and evolutionary population dynamics, with significant advantages and effectiveness in global optimization problems. Li et al. [27] used chaotic mapping and tournament selection strategies to enrich the diversity and randomness of the population in the initialization stage, making it difficult to fall into local optima and significantly improving the accuracy of the algorithm. Jiang et al. [28] used scheduling rules, nonlinear convergence factors and mutation operations to effectively improve the quality of the initial solution, balance the exploration and exploitation abilities of the algorithm and avoid stagnation, which has been proven to effectively solve the energy-saving scheduling problem. Although the above scholars put forward relatively complete and constructive attempts, a larger search scope was not realized in the exploration stage. In the local search stage, exploration of the surrounding area is still insufficient, which leads to insufficient precision or local optima. In addition, there is still no protection mechanism for better solutions in the random walk stage, resulting in insufficient solution accuracy and a slow convergence rate.
To this end, this study makes a new exploration and attempts to solve the above problems by proposing an enhanced regenerative whale optimization algorithm based on gravity balance (GWOA). The main work is as follows:
  • Using improved nonlinear time-varying factors and inertia weights;
  • By combining the idea of gravitation with the random walk strategy and shrink and surround strategy in the whale optimization algorithm, a new position update model is proposed;
  • Designing a novel strategy for rebirth;
  • Testing the improved algorithm on the benchmark test functions, CEC2013, robot path planning and three classical engineering optimization problems, and finally analyzing the related experiments.
The article has a clear logical structure. Section 1 introduces the research background in the field of optimization algorithm of recent years. Section 2 briefly introduces the details of the WOA. Section 3 introduces the improved strategy according to the search characteristics of different stages, and analyzes the time complexity of GWOA. Section 4 verifies the feasibility of the GWOA, which combines search strategies of different stages, and tests the GWOA and six other algorithms on 16 benchmark test functions. The experimental results are recorded in a table, and the experimental data are compared and analyzed to verify the overall performance of the GWOA. Then, a contribution value experiment is carried out to verify the contribution value of related strategies. Finally, the uniqueness of the GWOA is verified with the Wilcoxon rank sum test. Section 5 applies the GWOA to robot path planning and three classical engineering optimization problems to further verify the feasibility and effectiveness of the improved algorithm. Section 6 analyzes and summarizes the work of this paper, and makes a brief plan and outlook for the next step.

2. Whale Optimization Algorithm

Seyedali Mirjalili et al. [15] proposed an intelligent optimization algorithm with clear logic and a complete structure after studying the predation behavior of humpback whales. The predation behavior modeled by the WOA mainly includes three aspects: random walk search, in which whales randomly search for prey in the surrounding ocean; the surrounding prey stage, in which whales spiral around their prey from bottom to top; and bubble net predation, in which whales release a net of bubbles to trap the prey as they spiral upward and close in on it.

2.1. Random Walk Search

In the optimization process of the WOA, each whale updates its position differently according to the current stage, seeking prey through position changes; the prey position is the optimal location. At this stage, the whale's movement is relatively random and the exact location of the prey is not known, so a large-scale search of the search space is necessary, with the population gradually narrowing down the final prey location. The position update in the random walk stage of the WOA is controlled by parameter A. When |A| ≥ 1 and p < 0.5, the WOA randomly selects an individual whale and updates positions based on its location information. The position update formulas are as follows [15]:
X = (X_1, X_2, ..., X_d)    (1)
D = |C × X_rand − X(t)|    (2)
X(t + 1) = X_rand − A × D    (3)
A = 2a × r − a    (4)
C = 2r    (5)
X_rand is a random individual, X(t) is the individual at iteration t, X(t + 1) is the updated position, a decreases from 2 to 0 over the iterations, A and C are coefficient vectors, and r is a random number in (0, 1).
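As a minimal sketch, the random-walk update above can be written in a few lines of Python; the helper name, array shapes and RNG seed are our own illustrative choices, not from the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_walk_step(positions, i, a):
    """Move whale i toward a randomly chosen whale (exploration, |A| >= 1)."""
    n, dim = positions.shape
    r = rng.random(dim)
    A = 2.0 * a * r - a                 # Eq. (4): A = 2a*r - a
    C = 2.0 * rng.random(dim)           # Eq. (5): C = 2r
    x_rand = positions[rng.integers(n)]
    D = np.abs(C * x_rand - positions[i])   # Eq. (2)
    return x_rand - A * D                   # Eq. (3)
```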

2.2. Surrounding Prey Stage

When the WOA reaches the stage of surrounding prey, the position update is no longer a random walk but a move towards the current optimal solution. Based on the previous stage, the whales roughly grasp the general position of the prey and know the position of the individual whale closest to the target, so this nearest whale is used as a reference for approaching the target position. At this stage, when |A| < 1 and p < 0.5, the optimal location of the WOA population is close to or at the target position. Therefore, excellent individuals in the population serve as the driving force for progress: the other individuals update their positions using the current optimal individual's position, forming a spiraling encircling state and gradually approaching the target position until it is reached. The position update formulas at this stage are as follows [15]:
X(t + 1) = X_best(t) − A × D    (6)
D = |C × X_best(t) − X(t)|    (7)
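Equations (6) and (7) translate directly to Python; this is a minimal sketch, with the helper name and RNG handling being our own choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def encircle_step(x_i, x_best, a):
    """Move whale x_i toward the current best solution (|A| < 1, p < 0.5)."""
    dim = x_i.shape[0]
    r = rng.random(dim)
    A = 2.0 * a * r - a              # Eq. (4)
    C = 2.0 * rng.random(dim)        # Eq. (5)
    D = np.abs(C * x_best - x_i)     # Eq. (7)
    return x_best - A * D            # Eq. (6)
```

With a = 0 the coefficient A vanishes and the whale lands exactly on the best position, matching the shrinking-encirclement behavior late in a run.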

2.3. Bubble Net Predation

During the WOA iteration process, the algorithm continues to perform position transformations based on the optimal whale. At this stage, the optimal whale determines the final location of the target, drives the prey with the whale's unique spiral motion and bursts of bubbles, and finally herds the gathered prey in to be eaten; the hunt is then complete. The model is analyzed as follows: when |A| < 1 and probability p ≥ 0.5, the WOA meets the hunting requirements and uses the bubble net strategy to hunt prey. This phase can be modeled as follows [15]:
X(t + 1) = D′ × e^(bl) × cos(2πl) + X_best(t),  p ≥ 0.5    (8)
D′ = |X_best(t) − X(t)| is the distance between the whale and the prey, b is a constant defining the shape of the logarithmic spiral, and l is a random number in (−1, 1).
The WOA flowchart is shown in Figure 1:
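The spiral update of Formula (8) can be sketched as follows; the helper name and the default b = 1 are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def spiral_step(x_i, x_best, b=1.0):
    """Bubble-net spiral update (p >= 0.5): move along a logarithmic
    spiral around the current best whale."""
    l = rng.uniform(-1.0, 1.0)       # random number in (-1, 1)
    d_prime = np.abs(x_best - x_i)   # distance to the best whale
    return d_prime * np.exp(b * l) * np.cos(2 * np.pi * l) + x_best
```

Note that a whale already at the best position stays there, since d_prime is zero.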

3. Improvement Strategies

The WOA itself has three phases, so in a multi-strategy fusion it is worth considering which strategy to use at each stage. Therefore, in this paper, nonlinear time-varying factors and an inertia weight strategy are introduced in the random search phase of the WOA to make the population more diverse. In the wandering and encircling phases, the gravitational balance strategy is added to help poor individuals approach good ones faster. At the end of the three stages, a regeneration mechanism is added to help individual whales jump out of local optima. The strategies proposed above are an attempt to improve the WOA and are described in detail below.

3.1. Nonlinear Time-Varying Factors and Inertia Weights

Parameter A has important significance during the optimization period, and the whale optimization algorithm adjusts it through the step size factor a. In the original WOA, a decreases linearly. The magnitude of A therefore decreases significantly in the early exploration phase, when the population relies on a longer step size to search for surrounding food, while later in the iteration, because of this large descent, A lacks the magnitude needed to mine hidden optimal solutions around the current optimum. Therefore, inspired by the dynamic convergence factor in reference [20], this article proposes an improved nonlinear time-varying factor for a. This compensates for the insufficient exploration ability in the early exploration phase of the WOA and enhances its later exploitation. The formula for the nonlinear time-varying factor is as follows:
a = 1 + sin(π/2 − π × t / Maxiter),   t < Maxiter/2
a = 1 − sin(π × (t − Maxiter/2) / Maxiter),   t ≥ Maxiter/2    (9)
Moreover, an adaptive weighting strategy is added to weight the population, which delays the convergence of the GWOA, enhances its search ability, and makes the search of the region more thorough. At the beginning of the iteration, the weight is small and has little effect on the positions and exploration ability of the whales. As the number of iterations increases, the weight gradually grows, and the position of the optimal solution becomes more attractive to the population, drawing it closer to the optimal value by degrees. The search ability, exploitation ability and solution accuracy of the GWOA are thereby improved. The formulas are as follows:
w = tan(π/4 × (t / Maxiter))    (10)
X(t + 1) = w × X_rand − A × D    (11)
X(t + 1) = w × X_best − A × D    (12)
X_rand is the position of a random whale individual at iteration t, and X_best is the position of the best individual whale at iteration t.
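The time-varying factor and inertia weight can be sketched as below. The exact piecewise form is our reconstruction of Formulas (9) and (10) from the garbled extraction (reading the switch point as Maxiter/2), so treat it as an assumption: a falls nonlinearly from 2 to 0 while w grows from 0 to 1.

```python
import numpy as np

def time_varying_a(t, max_iter):
    """Nonlinear time-varying factor (reconstructed Eq. (9)): 2 -> 0."""
    if t < max_iter / 2:
        return 1 + np.sin(np.pi / 2 - np.pi * t / max_iter)
    return 1 - np.sin(np.pi * (t - max_iter / 2) / max_iter)

def inertia_weight(t, max_iter):
    """Inertia weight (reconstructed Eq. (10)): grows from 0 toward 1."""
    return np.tan(np.pi / 4 * t / max_iter)
```

Both branches of a agree at t = Maxiter/2 (value 1), so the factor decays continuously over the run.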

3.2. Gravitational Equilibrium Strategy

Because the adaptive weight strategy delays the convergence of the algorithm, and considering the randomness of the whale population in the random walk stage of the whale optimization algorithm, two situations can occur. First, individuals with poor fitness move closer to excellent individuals. Second, individuals with good fitness values are displaced towards those with poor fitness values. Whale populations need excellent individuals to help capture the final location of the prey, so the position update should encourage the first situation. Some of the best whales are, however, affected by the randomness of the random walk. We aimed to reduce the influence of the random walk strategy on excellent individuals while making up for the slow convergence caused by the inertia weight strategy. Therefore, inspired by the Virtual Force Algorithm (VFA) proposed by Howard A et al. [29], this study proposes a gravity balance strategy to reduce the influence of inferior solutions on excellent individuals in the random walk stage, while at the same time compensating for the slow convergence of the inertia weight strategy. The concept of this strategy comes from physics: celestial bodies exert a mutual attraction, and this state of mutual attraction exists even between two interacting atoms, in which case it is referred to as quantum entanglement. The idea of gravitational equilibrium is that, for two objects A and B that attract each other, there is a hypothetical equilibrium point on the line between them at which the attraction towards A equals the attraction towards B. The model diagram is shown in Figure 2. Under the condition that the positions of the two celestial bodies remain unchanged, and excluding the influence of other objects on the two reference objects, there are three cases for the equilibrium position:
  • If the mass of celestial body A is greater than that of celestial body B , the equilibrium point is on the line between the two celestial bodies and is close to celestial body B ;
  • If the mass of object A is less than that of object B , the equilibrium point is on the line between the two bodies and is close to object A ;
  • If the mass of object A is equal to that of object B , the equilibrium point is at the midpoint of the line between the two bodies;
  • Its position expression is:
    G × M_t × M_o / R_to² = G × M_rand × M_o / R_rando²    (13)
    R_to = L_trand / (M_t / M_rand + 1)    (14)
    R_to + R_rando = L_trand    (15)
If M_t is equal to M_rand, the formula is:
R_to = R_rando = L_trand / 2    (16)
M_t is the fitness value of the whale individual at iteration t, L_trand is the distance between the two whale individuals, M_o is the mass at the equilibrium point o, and G is the gravitational constant.
Finally, the final position update formulas are obtained by combining the strategy with Formulas (11) and (12), giving the new Formulas (17) and (18). Considering that the random walk only selects an individual from the whale population for displacement, the gravitational balance strategy implies that R_to of an excellent solution is small when it moves towards an inferior solution, preserving the quality of the excellent solution itself. When an inferior solution moves towards a superior one, the distance R_rando is long, which also helps the position update of the inferior solution. The strategy model thus reasonably protects excellent solutions during the random walk, while inferior solutions retain the original position update characteristics and gain opportunities to develop new excellent solutions around the excellent ones.
X(t + 1) = w × X_rand − A × L_trand / (M_t / M_rand + 1)    (17)
X(t + 1) = w × X_best − A × L_tbest / (M_t / M_best + 1)    (18)
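The mass-damped step of Formulas (17) and (18) can be sketched as below; the ratio form of the denominator follows our reconstruction of Formula (14), and the function name and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def gravity_balanced_step(x_i, x_other, m_i, m_other, w, a):
    """Position update damped by the gravity-balance distance:
    the raw step A*L is shortened to A*R, R = L / (m_i/m_other + 1)."""
    r = rng.random(x_i.shape[0])
    A = 2.0 * a * r - a
    L = np.abs(x_other - x_i)        # distance between the two whales
    R = L / (m_i / m_other + 1.0)    # balance-point distance (Eq. (14))
    return w * x_other - A * R
```

When m_i is much larger than m_other, R shrinks and the displacement is small, which is the protection effect for good solutions described above.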

3.3. Regeneration Mechanism

Like other optimization algorithms, the WOA easily gets caught in local extremum stagnation during the iterative process, which affects search accuracy. This study proposes a regeneration mechanism to help the WOA jump out of local stagnation and find better solutions, inspired by the benefits that whale falls bring to countless organisms. As the saying goes, when a whale falls, everything is born: when a whale dies, it creates a micro-ecosystem around it, becoming an energy source for a small biosphere that benefits the creatures in it. When the WOA stagnates in the late iteration period, a whale whose current value has not reached the ideal state and whose position no longer changes enters a near-death state by default. When the number N_d of such individuals exceeds the near-death warning value N_g, the whale is considered to have fallen into a local optimum and needs to give up its current position. At this time, a new position is assigned to replace the current position of the individual whale, so that it can continue to find more suitable points around the local optimal solution and realize a better solution among the sub-optimal choices. The location is updated as follows:
X_(i,j)(t + 1) = (ub − lb) × rand + lb    (19)
where ub and lb are the upper and lower bounds of the search interval and rand is a random number in (0, 1).
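Formula (19) simply re-seeds a stagnant whale uniformly inside the search bounds; a one-line sketch (the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(3)

def regenerate(lb, ub, dim):
    """Re-seed a near-death whale uniformly in [lb, ub]^dim (Eq. (19))."""
    return (ub - lb) * rng.random(dim) + lb
```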

3.4. Algorithm Flow

Step 1:
Initialize the population size N, the problem dimension dim and the maximum number of iterations T_max, and calculate and record the initial fitness value of each whale;
Step 2:
Update parameters a and w according to Formulas (9) and (10), and update A, C, l and p at the same time;
Step 3:
If p < 0.5 & |A| ≥ 1, carry out the random walk strategy according to Formula (17). If not, skip to Step 4;
Step 4:
If p < 0.5 & |A| < 1, carry out the strategy of encircling the prey according to Formula (18). If the conditions of Steps 3 and 4 are not met, skip to Step 5;
Step 5:
If p ≥ 0.5 & |A| < 1, update the location of the globally optimal whale according to spiral Formula (8);
Step 6:
Determine whether the regeneration strategy position update mechanism is satisfied. If yes, update the whale position according to Equation (19);
Step 7:
t = t + 1; judge whether the loop has finished. If the end condition is met, output the global optimal location and solution; otherwise return to Step 2.
The pseudo-code of the GWOA is as follows in Algorithm 1:
Algorithm 1. GWOA
1. Initialize the population size N, the problem dimension Dim, T_max and FES_max
2. t = 0
3. F E S = 0
4. While  ( t ≤ T_max  and  FES ≤ FES_max ) do
5. t = t + 1
6. for  i = 1 to N  do
7. Calculate the fitness value of each individual, record the current population fitness values, and find the whale individual with the best fitness X_best. Each fitness evaluation increments FES by 1
8. end for
9. for  j = 1 to N  do
10. Update parameters a, w, A, C, l and p according to Formulas (9) and (10)
11. If  ( p < 0.5  &  |A| ≥ 1 )
12. Finding a random individual in the whale population
13. Updating whale position with random walk strategy according to Formula (17)
14. Else if  ( p < 0.5  &  |A| < 1 )
15. Find an optimal individual in the whale population
16. Use the Formula (18) to surround the prey, update the whale position
17. Else
18. Updating the optimal whale location based on the spiral Formula (8)
19. end if
20. end for
21. for  i = 1 to N  do
22. Determine whether the position update mechanism of the regeneration strategy is met; if so, generate a new whale individual according to Formula (19) to replace the position of the endangered whale
23. end for
24. End while
25. end
26. output: optimal solution
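The pseudo-code above can be condensed into a runnable sketch on a toy sphere function. The stagnation threshold, the spiral constant b = 1, the iteration-based stopping rule used in place of an FES budget, and the per-whale stagnation counter standing in for the N_d/N_g test are all illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def gwoa(obj, dim, lb, ub, n=30, max_iter=200, stagnation_limit=20, seed=0):
    """Condensed sketch of Algorithm 1 (GWOA) for minimization."""
    rng = np.random.default_rng(seed)
    X = lb + (ub - lb) * rng.random((n, dim))
    fit = np.array([obj(x) for x in X])
    best = int(fit.argmin())
    x_best, f_best = X[best].copy(), fit[best]
    stagnant = np.zeros(n, dtype=int)       # per-whale no-improvement counter
    for t in range(1, max_iter + 1):
        # nonlinear factor a (Eq. (9)) and inertia weight w (Eq. (10))
        if t < max_iter / 2:
            a = 1 + np.sin(np.pi / 2 - np.pi * t / max_iter)
        else:
            a = 1 - np.sin(np.pi * (t - max_iter / 2) / max_iter)
        w = np.tan(np.pi / 4 * t / max_iter)
        for i in range(n):
            r = rng.random(dim)
            A = 2 * a * r - a
            p = rng.random()
            eps = 1e-12                     # guard against zero fitness
            if p < 0.5 and np.linalg.norm(A) >= 1:
                j = int(rng.integers(n))    # random walk, Eq. (17)
                L = np.abs(X[j] - X[i])
                X[i] = w * X[j] - A * L / (fit[i] / (fit[j] + eps) + 1)
            elif p < 0.5:                   # encircle best, Eq. (18)
                L = np.abs(x_best - X[i])
                X[i] = w * x_best - A * L / (fit[i] / (f_best + eps) + 1)
            else:                           # bubble-net spiral, Eq. (8), b = 1
                l = rng.uniform(-1.0, 1.0)
                X[i] = (np.abs(x_best - X[i]) * np.exp(l)
                        * np.cos(2 * np.pi * l) + x_best)
            X[i] = np.clip(X[i], lb, ub)
            f_new = obj(X[i])
            stagnant[i] = 0 if f_new < fit[i] else stagnant[i] + 1
            fit[i] = f_new
            if stagnant[i] > stagnation_limit:  # regeneration, Eq. (19)
                X[i] = (ub - lb) * rng.random(dim) + lb
                fit[i] = obj(X[i])
                stagnant[i] = 0
            if fit[i] < f_best:
                x_best, f_best = X[i].copy(), fit[i]
    return x_best, f_best

x_opt, f_opt = gwoa(lambda v: float(np.sum(v * v)), dim=10, lb=-5.0, ub=5.0)
```

Because the best solution is tracked elitistically, f_opt never exceeds the best initial fitness, regardless of how the individual updates behave.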

3.5. Time Complexity Analysis

Time complexity is one of the important indices for measuring optimization efficiency, performance and computation cost. To show that the GWOA does not increase the time complexity of the WOA, this paper analyzes both the GWOA and the WOA. In the WOA, let the population size be M, the time consumed to initialize an individual be O(t1), and the time consumed to evaluate the objective function of a Dim-dimensional problem be O(D). The time complexity of the initial population stage is therefore O(M × t1). Entering the loop, calculating the fitness values of the population costs O(M × D). From Equation (3), the time to generate a solution in the random walk stage is O(t2). From Equation (6), the time to generate a new solution from the optimal solution in the shrinking encirclement stage is O(t3). From Equation (8), the time to generate a solution in the final hunting stage is O(t4). Because t2, t3 and t4 are parallel branches, the total time complexity of each phase of the WOA is O(M × t1) + O(M × (D + t)), where t = t2 + t3 + t4. Assuming the maximum number of iterations is T, the final time complexity of the WOA is O(M × t1) + O(T × M × (D + t2 + t3 + t4)) = O(T × M × (t + D)). The time complexity of PSO can be analyzed in the same way: the time to initialize the population of M individuals and calculate their fitness values is O(M × t1); in the loop, computing the velocity V costs t2, updating an individual costs t3, and evaluating the problem costs O(D). Therefore, the time taken for one cycle is O(t1) + O((t2 + t3 + D) × M), and with T iterations the final time complexity is O(M × t1) + O(T × M × (D + t2 + t3)) = O(T × M × (D + t)).
Although different strategies are added to the GWOA in different stages, the nonlinear factors, inertia weights and gravitational balance strategies have the same cost as the corresponding random walk and encirclement stages of the WOA. Although the regeneration mechanism produces new solutions, only very few individuals need to be replaced. Assuming the time to generate a solution in the regeneration stage is O(t5), the total time complexity of the GWOA is O(M × t1) + O(T × M × (D + t2 + t3 + t4 + t5)) = O(T × M × (t + D)). Therefore, it can be concluded that, when all parameters are the same, the time complexity of the GWOA is the same as that of the WOA and PSO, and the improvements in the GWOA do not increase the time complexity of the algorithm.

4. Experiment Tests and Analysis

In order to verify the optimization performance of the algorithm after adding the above strategies, 16 classical benchmark test functions were first selected to test the performance of the GWOA, and six algorithms were selected for 30-dimensional and 100-dimensional comparison experiments. The mean and standard deviation were used to measure the comprehensive performance of the GWOA. At the same time, the convergence graphs and interval box plots of each algorithm in 30 dimensions are given to demonstrate the convergence and stability of the GWOA. Next, through the contribution experiments, we verify the improvement brought by each of the three strategies added to the original algorithm. Finally, the Wilcoxon rank sum test and the Friedman test were used to verify the differences between algorithms. The comprehensive experiments and analysis results are presented below.
The following table shows the expressions, dimensions, variable ranges and theoretical optimal values of 16 common benchmark functions. Functions F1–F7 are unimodal, F8–F11 are multimodal, and F12–F16 are fixed-dimension test functions. Detailed test function information is shown in Table 1:
Simulation experiments are carried out on the MATLAB R2020b platform, solving the 16 test functions in 30 and 100 dimensions. The optimization results of the GWOA are compared with those of the WOA [15], DECWOA [30], GWO [12], PSO [10], MPA [31] and HHO [32]. To ensure the rationality of the experimental results, the control variable method was adopted. Since comparing the GWOA and some comparison algorithms under the same number of iterations is unfair [33], a more reasonable evaluation method based on FEs was chosen. Therefore, we set the population size of all algorithms to 30 and the number of evaluations to 50,000. The parameter settings of each algorithm are shown in Table 2:
The simulation environment of this experiment was as follows: MATLAB R2020b on Windows 11, with a 3.20 GHz CPU and 16 GB of memory. To avoid the influence of randomness and chance, the GWOA and the comparison algorithms ran each benchmark function 30 times independently in both dimensions, and the records are shown in Table 3 and Table 4. In this study, the mean value and standard deviation are used to evaluate the performance of the above algorithms. The mean represents the average optimization ability of the algorithm on the objective function, and the smaller the standard deviation, the better the stability of the algorithm on the current benchmark function. Finally, for each benchmark function, the best data are marked in bold.
From Table 3, we can see that the average ranking of the GWOA is 1.625, first among the compared algorithms. Among the seven unimodal functions F1–F7, the mean and standard deviation of the GWOA are better than those of the comparison algorithms on six functions, and the GWOA finds the optimal value on F1 and F3, although its accuracy on F6 is not as good as that of PSO. For the multimodal functions F8–F11, the GWOA ranks first on all four: the average solution accuracy on F10 is ahead of the comparison algorithms, and the average solution on F8 is closest to the theoretical optimum, proving the advantage of the GWOA on multimodal functions. The GWOA also shows excellent optimization performance on complex fixed-dimension functions. Although only F12 ranks first, the average values on the fixed-dimension functions F14, F15 and F16 are extremely close to the theoretical optima, only slightly behind the first-ranked MPA in standard deviation. However, it can also be noticed that F13 ranks last, so some shortcomings remain. Combined with the fixed-dimension data, the GWOA is better than the WOA in terms of solution accuracy and stability. Therefore, from the data analysis of the three types of test functions, it can be judged that the overall performance of the GWOA is the best, and it shows good optimization ability on most functions.
In order to verify the stability of the GWOA, box plots were drawn from the results of the 30 experiments on the 16 functions for each algorithm. As can be seen in Figure 3, the boxes of the GWOA are lower and narrower; it can therefore be concluded that the GWOA is stable on the test functions.
In the experimental data shown in Table 4 (100-dimensional), the comprehensive rank of the GWOA is 1: it finds the optimal solution on the test functions F1, F2, F3, F4, and F8, and its accuracy is better than that of the comparison algorithms on F5, F6, F9, F10, and F11. Finally, the GWOA's results differ little from its own 30-dimensional data, indicating good stability. At the same time, the GWOA shows clear advantages in solving high-dimensional problems.
According to the analysis of convergence accuracy in Figure 4 (30-dimensional), the improved whale optimization algorithm shows clear gains in both optimization accuracy and convergence speed. The optimal solution is found on the eight functions F1, F2, F3, F4, F8, F14, F15, and F16. The convergence accuracy of the GWOA on F5 is not as high as that of PSO. However, on the F6, F7, F9, F10, F11, and F12 functions, when the convergence speed of each algorithm is similar, the GWOA delivers more stable and more accurate solutions than the original whale optimization algorithm. The convergence curves of each benchmark test function are shown in Figure 4:
Finally, in order to verify the role of the three strategies proposed in this article in the optimization process, experimental comparisons and analyses were conducted among the basic whale optimization algorithm (WOA); the WOA with nonlinear time-varying factors (WOA-1); the WOA combining nonlinear time-varying factors and inertia weights (WOA-2); the WOA combining WOA-2 with the gravity balance strategy (WOA-3); the WOA combining WOA-2 with the rebirth strategy (WOA-4); and the improved whale optimization algorithm combining all strategies (GWOA). The experiment-related parameters were kept the same: the population size was 30, the fitness function problem dimension was 30, and the maximum number of function evaluations was 15,000. The experimental data and evaluation criteria are shown in Table 5, with the best indicator data displayed in bold, and the number of best indicators for each algorithm was counted.
From the analysis in Table 5, it can be seen that the GWOA performs well on all 16 test functions: it performs best on 13 functions and slightly worse on F2, F4, and F13, although it can still achieve high accuracy on F2 and F4. Through comparative analysis of WOA-1, WOA-2, and the WOA, it is found that WOA-1, which adds only the nonlinear time-varying factor strategy, has better solution accuracy and variance than the WOA only on the F1, F2, and F7 functions. WOA-2, which adds both the nonlinear time-varying factor and the inertia weight, leads WOA-1 on more functions (F2, F3, F4, F7, F8, F9, F11, F14, F15, and F16), indicating that the improvement strategy is indeed effective; however, the WOA with the inertia weight strategy is still not good enough in terms of convergence speed. In the analysis of WOA-3 and the WOA, it was found that WOA-3 leads WOA-2 additionally on the F12 function, and there is a significant improvement in the accuracy and standard deviation of the leading functions' solutions compared with WOA-2 and the WOA. Although good results were obtained, the problem of falling into local optima remains for F5 and other complex functions. At the same time, we can see that the inertia weight delays the convergence rate of the algorithm while improving its accuracy. This proves that the gravity balance strategy added on top of WOA-2 improves the algorithm's exploitation ability and stability compared with the WOA. WOA-4 shows that, on the basis of WOA-2, only F13 is slightly worse than for the WOA, while the indicators on the other 15 functions are far better.
From this, it can be seen that in WOA-4, which integrates the regeneration mechanism into WOA-2, the exploration and exploitation abilities of the algorithm are retained while the ability to escape local optima is improved, which raises the accuracy of the solution and enhances the stability of the algorithm.
The mean and standard deviation indicators alone cannot comprehensively evaluate the overall performance of the GWOA. Therefore, hypothesis-testing experiments were incorporated: the Wilcoxon rank-sum test can assess the similarity between algorithms. This article uses this method to validate the GWOA and determine the differences between the GWOA and the six comparison algorithms. The significance level was set to 5%, and the test results are shown in Table 6. Cases of high algorithm similarity are represented by NaN, while cases that cannot be compared are marked with "-". The final data analysis shows significant differences between the GWOA and the other algorithms, further verifying the distinctiveness of the GWOA.
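As a concrete illustration, the rank-sum comparison between two sets of 30 run results can be sketched as follows. This is a minimal Python sketch using the large-sample normal approximation (valid for 30 runs per algorithm); the function name is our own, and in practice a library routine such as `scipy.stats.ranksums` would typically be used instead:

```python
import math

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation,
    suitable for samples of ~30 runs per algorithm."""
    # Pool both samples, remembering which group each value came from.
    combined = sorted((v, 0 if i < len(a) else 1)
                      for i, v in enumerate(list(a) + list(b)))
    # Assign ranks, averaging over ties.
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(r for r, (_, grp) in zip(ranks, combined) if grp == 0)
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    # Two-sided p-value from the standard normal distribution.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p
```

Here p < 0.05 indicates a significant difference between the two algorithms' result distributions, matching the 5% significance level used in Table 6.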
Finally, in order to verify the global optimization ability of the algorithm, we applied the Friedman test [34] to the average values of the 30 simulation results obtained by the seven algorithms, and the results are recorded in Table 7. As can be seen, the GWOA ranks first, with an average rank of 1.8125 in 30 dimensions and 1.0909 in 100 dimensions. Meanwhile, consulting the table in [35], for k = 7 algorithms and N = 16 problems at α = 0.05, $Q_\alpha = 2.949$. The critical difference CD = 2.252 is calculated according to $CD = Q_\alpha \sqrt{k(k+1)/(6N)}$. The post-hoc CD line is the GWOA's ranking plus the critical difference: 1.8125 + 2.252 = 4.0645. Therefore, it can be concluded that the GWOA is significantly superior to DECWOA, GWO, PSO, and the WOA in 30 dimensions. In 100 dimensions, there are significant differences with GWO, MPA, PSO, and the WOA.
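The critical-difference computation above can be verified directly (a small Python check; `q_alpha` is the tabulated value quoted in the text):

```python
import math

# Critical difference for the post-hoc test reported in the text:
# k = 7 algorithms, N = 16 problems, Q_alpha = 2.949 at alpha = 0.05.
k, n_problems, q_alpha = 7, 16, 2.949
cd = q_alpha * math.sqrt(k * (k + 1) / (6 * n_problems))
# cd is approximately 2.252; an algorithm whose average Friedman rank
# exceeds GWOA's 30-D rank (1.8125) plus cd, i.e. the CD line at 4.0645,
# is significantly worse at the 5% level.
cd_line = 1.8125 + cd
```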

5. Engineering Application Experiment Based on the GWOA

The GWOA shows good performance on the test functions. However, the ultimate purpose of the algorithm is to solve actual optimization problems. Compared with the test functions, real-world optimization problems have certain constraints and are more complex, which makes them more challenging for the GWOA. At the same time, real-world optimization problems serve as an evaluation index for measuring the comprehensive performance of the algorithm.

5.1. Robot Path Planning Experiment

There have been many previous studies combining swarm intelligence algorithms with robot path planning [36,37]. Path planning refers to a technology by which robots navigate autonomously without human intervention: in an experimental environment with obstacles, the optimal collision-free path is found according to evaluation criteria such as distance, time, energy consumption, and inflection points. This article selects the shortest path and the number of inflection points as evaluation indicators, and on this basis uses the GWOA to optimize the robot path-planning task, with the aim of verifying the applicability and feasibility of the GWOA. The grid method [38] was used to simulate the robot's walking path: a set of data constructs an abstract two-dimensional or three-dimensional experimental environment, with 0 and 1 representing the feasible and obstacle areas, respectively. The environmental information stored in each grid cell of the same type is the same, and the robot can plan its walking path in the area with value 0. From the current node, the robot has eight candidate movement directions; this is called eight-direction (octree) search. The search direction diagram is shown in Figure 5, where white represents possible routes, black represents obstacles, and the orange arrow represents the direction of travel.
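The eight-direction neighborhood described above can be sketched as follows. This is an illustrative Python sketch: the grid encoding with 0 = free and 1 = obstacle follows the text, while the function and variable names are our own:

```python
# The eight candidate moves from the current grid cell
# (the eight-direction search of Figure 5).
MOVES = [(-1, -1), (-1, 0), (-1, 1),
         ( 0, -1),          ( 0, 1),
         ( 1, -1), ( 1, 0), ( 1, 1)]

def feasible_neighbors(grid, r, c):
    """Return the neighbors of cell (r, c) that lie inside the map and
    are free (value 0); grid[r][c] == 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for dr, dc in MOVES:
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
            out.append((nr, nc))
    return out
```

A path planner repeatedly expands the current cell through these feasible neighbors until the end point is reached.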
Its model is:
$$f(x_i) = \sum_{j=1}^{D-1} \sqrt{(x_{j+1} - x_j)^2 + (y_{j+1} - y_j)^2}$$
where $j$ is the dimension index of the current whale, $x_j = a \times \mathrm{mod}(j, y) - a/2$, and $y_j = a \times x + a/2 - a \times \mathrm{ceil}(j/x)$. The experimental environment is a two-dimensional grid map with $x$ rows and $y$ columns, where $a$ is the pixel edge length of one 1 × 1 grid cell of the map, and ceil(n) is the smallest integer greater than or equal to n.
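Under this model, the fitness of a candidate path is simply the summed Euclidean length of its segments; a minimal Python sketch (the function name is our own):

```python
import math

def path_length(points):
    """Total Euclidean length of a piecewise-linear path given as a list
    of (x, y) waypoints, matching the fitness f above."""
    return sum(math.dist(points[j], points[j + 1])
               for j in range(len(points) - 1))
```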
In order to verify the feasibility and applicability of the improved algorithm, the GWOA was compared with the WOA, DECWOA, and PSO algorithms. The four algorithms were set with the same parameters: the population size was 30, the maximum number of iterations was 100, and the same planar grid map was used. The optimal path planned by each algorithm is shown in Figure 6, where the red dot represents the start point and the green dot represents the end point. The performance indicators of each algorithm are shown in Table 8:
In order to reduce the impact of chance among algorithms, 30 simulation tests were carried out for each algorithm, and the best value, worst value, average value, and number of inflection points of the path were recorded. According to the analysis of Table 8, the GWOA achieves the best result on all four indexes, followed by DECWOA; clearly, the GWOA is better than the WOA and PSO in terms of search ability and stability. A larger number of inflection points means that the robot must spend more energy on the planned path; the GWOA has the fewest inflection points, which demonstrates its optimization ability and better robustness.

5.2. Engineering Optimization

Engineering optimization problems have been studied by a large number of scholars. In order to improve the efficiency of solving engineering optimization problems and reduce the cost of optimization, many methods have been tried, from traditional approaches such as gradient descent and Newton's method to swarm intelligence algorithms for more complex engineering problems, so as to improve efficiency and accuracy. In this paper, the improved whale optimization algorithm is applied to three classical engineering optimization problems through simulation experiments. The simulation platform was MATLAB R2020b. Each algorithm was run independently 30 times on each engineering problem, and the optimal values obtained are recorded in Table 9, Table 10 and Table 11.

5.2.1. Welded Beam Design

This engineering optimization problem was proposed by Coello [39] and has been widely studied. A beam is welded onto a support and subjected to a vertical load, and the objective is to minimize the total cost of fabricating the welded beam. The engineering problem is subject to seven constraints, including the shear stress, the bending stress in the beam, the deflection, the buckling load, and geometric conditions. The variable $x_1$ is the thickness of the weld, $x_2$ is the length of the weld, $x_3$ is the height of the bar, and $x_4$ is the thickness of the bar. The engineering drawing comes from the literature [3], the engineering model is shown in Figure 7, and its objective function can be expressed as [3]:
Minimize:
$$f(x) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14.0 + x_2)$$
Subject to:
$$g_1(x) = \tau(x) - \tau_{\max} \le 0$$
$$g_2(x) = \sigma(x) - \sigma_{\max} \le 0$$
$$g_3(x) = \delta(x) - \delta_{\max} \le 0$$
$$g_4(x) = x_1 - x_4 \le 0$$
$$g_5(x) = P - P_c(x) \le 0$$
$$g_6(x) = 0.125 - x_1 \le 0$$
$$g_7(x) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14.0 + x_2) - 5 \le 0$$
where
$$\tau(x) = \sqrt{(\tau')^2 + 2 \tau' \tau'' \frac{x_2}{2R} + (\tau'')^2}, \quad \tau' = \frac{P}{\sqrt{2} x_1 x_2}, \quad \tau'' = \frac{MR}{J}$$
$$M = P \left( L + \frac{x_2}{2} \right), \quad R = \sqrt{\frac{x_2^2}{4} + \left( \frac{x_1 + x_3}{2} \right)^2}, \quad J = 2 \sqrt{2} x_1 x_2 R^2$$
$$\sigma(x) = \frac{6PL}{x_4 x_3^2}, \quad \delta(x) = \frac{6PL^3}{E x_4 x_3^2}$$
$$P_c(x) = \frac{4.013 E \sqrt{x_3^2 x_4^6 / 36}}{L^2} \left( 1 - \frac{x_3}{2L} \sqrt{\frac{E}{4G}} \right)$$
with $P = 6000$ lb, $L = 14$ in, $\delta_{\max} = 0.25$ in, $E = 30 \times 10^6$ psi, $G = 12 \times 10^6$ psi, $\tau_{\max} = 13{,}600$ psi, and $\sigma_{\max} = 30{,}000$ psi.
Range of variables:
$$0.1 \le x_1, x_4 \le 2, \quad 0.1 \le x_2, x_3 \le 10$$
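The objective and constraints above can be evaluated directly. The following Python sketch is our own helper (not part of the GWOA itself); it uses the constants from the text and the deflection formula as printed above, and returns the welding cost together with the constraint values g1–g7:

```python
import math

# Constants from the text.
P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def welded_beam(x1, x2, x3, x4):
    """Return (cost, [g1..g7]) for a candidate welded-beam design."""
    cost = 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)
    tau_p = P / (math.sqrt(2) * x1 * x2)                 # tau'
    M = P * (L + x2 / 2)
    R = math.sqrt(x2**2 / 4 + ((x1 + x3) / 2)**2)
    J = 2 * math.sqrt(2) * x1 * x2 * R**2
    tau_pp = M * R / J                                   # tau''
    tau = math.sqrt(tau_p**2 + 2 * tau_p * tau_pp * x2 / (2 * R) + tau_pp**2)
    sigma = 6 * P * L / (x4 * x3**2)
    delta = 6 * P * L**3 / (E * x4 * x3**2)
    p_c = (4.013 * E * math.sqrt(x3**2 * x4**6 / 36) / L**2
           * (1 - x3 / (2 * L) * math.sqrt(E / (4 * G))))
    g = [tau - TAU_MAX, sigma - SIGMA_MAX, delta - DELTA_MAX,
         x1 - x4, P - p_c, 0.125 - x1, cost - 5]
    return cost, g
```

A design is feasible when all seven constraint values are non-positive; the optimizer minimizes the cost over the feasible region.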

5.2.2. Pressure Vessel Design

The pressure vessel engineering model, which consists of a cylindrical vessel capped by two hemispherical heads, is shown below [3]. The engineering problem is to minimize the total cost of materials, forming, and welding. The design problem of the pressure vessel is restricted by four constraints and involves four variables: the thickness of the cylindrical shell $T_s$ ($x_1$), the thickness of the head $T_h$ ($x_2$), the inner radius of the cylindrical shell $R$ ($x_3$), and the length of the cylindrical section $L$ ($x_4$). Note that $x_1$ and $x_2$ must be integer multiples of 0.0625 inches, while the other two variables, $x_3$ and $x_4$, are continuous. The engineering model is shown in Figure 8. Its objective function can be expressed as [3]:
Minimize:
$$f(x) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3$$
Subject to:
$$g_1(x) = -x_1 + 0.0193 x_3 \le 0$$
$$g_2(x) = -x_2 + 0.00954 x_3 \le 0$$
$$g_3(x) = -\pi x_3^2 x_4 - \frac{4}{3} \pi x_3^3 + 1{,}296{,}000 \le 0$$
$$g_4(x) = x_4 - 240 \le 0$$
Range of variables:
$$x_1, x_2 \in \{1 \times 0.0625, 2 \times 0.0625, 3 \times 0.0625, \ldots, 1600 \times 0.0625\}, \quad 10 \le x_3, x_4 \le 200$$
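A candidate pressure-vessel design can be checked against the objective and constraints above with a small helper of our own (a sketch, not part of the GWOA):

```python
import math

def pressure_vessel(x1, x2, x3, x4):
    """Return (cost, feasible) for a candidate pressure-vessel design."""
    cost = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)
    g = [-x1 + 0.0193 * x3,                                  # shell thickness
         -x2 + 0.00954 * x3,                                 # head thickness
         -math.pi * x3**2 * x4 - 4/3 * math.pi * x3**3 + 1296000,  # volume
         x4 - 240]                                           # length limit
    return cost, all(v <= 0 for v in g)
```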

5.2.3. Tensile and Compression Spring Engineering Design

The purpose of the tension/compression spring design problem [40] is to minimize the weight of the spring under a variety of constraints, mainly the shear stress, the surge frequency, and the minimum deflection. The design variables are the wire diameter $x_1$, the mean coil diameter $x_2$, and the number of active coils $x_3$. The engineering drawing is taken from the literature [3]. The engineering design problem model is shown in Figure 9 [3]:
Minimize:
$$f(x) = (x_3 + 2) x_2 x_1^2$$
Subject to:
$$g_1(x) = 1 - \frac{x_2^3 x_3}{71785 x_1^4} \le 0$$
$$g_2(x) = \frac{4 x_2^2 - x_1 x_2}{12566 (x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \le 0$$
$$g_3(x) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0$$
$$g_4(x) = \frac{x_1 + x_2}{1.5} - 1 \le 0$$
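The spring weight and constraints can likewise be evaluated with a short helper of our own (a sketch under the variable mapping used in this paper):

```python
def spring(x1, x2, x3):
    """Return (weight, [g1..g4]); x1 = wire diameter, x2 = mean coil
    diameter, x3 = number of active coils."""
    weight = (x3 + 2) * x2 * x1**2
    g = [1 - x2**3 * x3 / (71785 * x1**4),
         (4 * x2**2 - x1 * x2) / (12566 * (x2 * x1**3 - x1**4))
         + 1 / (5108 * x1**2) - 1,
         1 - 140.45 * x1 / (x2**2 * x3),
         (x1 + x2) / 1.5 - 1]
    return weight, g
```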
The experimental results for the welded beam design problem in Table 9 are second only to DECWOA, and the difference in value is only on the order of 10−8, a very small gap. In Table 10, the results of the pressure vessel design experiment rank first, and the optimal value obtained by the improved algorithm is the smallest among the comparison algorithms; that is, the cost is the lowest. In Table 11, for the tension/compression spring design problem, the GWOA ranks third, but it achieves the same order of accuracy as the first two, with a difference in value only on the order of 10−7. To sum up, the improved whale optimization algorithm is competitive and broadly applicable in engineering applications, and also performs well in terms of optimization accuracy and optimization effect.

6. Conclusions

In this study, a whale optimization algorithm improved with gravity balance (GWOA) is proposed. The GWOA aims to improve the exploitation and exploration abilities in the optimization process and to remedy the WOA's tendency to become trapped in local-extremum stagnation in the late iterations. The GWOA was tested on 16 benchmark functions, and the experimental results were compared with six intelligent algorithms. Finally, the GWOA was applied to robot path planning and three classical engineering optimization problems, and experimental simulation results and test comparison data were obtained. The final analysis results are as follows:
  • The GWOA has better convergence accuracy and optimization results compared to the comparison algorithms, and can jump out of the local extremum when the original whale optimization algorithm falls into a local optimal function. Among the 16 functions tested in 30 dimensions, it ranked first for 12; among the 11 functions tested in 100 dimensions, it ranked first for 10.
  • The GWOA has good convergence performance: it incorporates nonlinear time-varying factors and inertia weights to balance the exploitation and exploration capabilities of the WOA. To enhance selection of the best individuals in the population, a gravity balance strategy was added to protect the excellent solutions while increasing the tendency of inferior solutions to approach them; finally, considering the contribution of whale death to the population, a rebirth mechanism was added to help the algorithm jump out of local stagnation.
  • The contribution experiment shows that, when the balance strategy and the regeneration mechanism are each added on top of the nonlinear time-varying factor and inertia weight strategy, the whale optimization algorithm with the balance strategy ranks first on seven functions and the one with the regeneration mechanism ranks first on eight functions, a performance improvement over the original whale algorithm; the GWOA achieves a lead on 13 functions, but shortcomings remain on three functions that need improvement.
  • In the robot path planning experiment, the comprehensive data ranked first, and in the three classic engineering problems, the rankings were 2, 1, and 3, respectively, with the overall solution accuracy only slightly different from that of the best algorithm.
  • Overall, the GWOA has a good optimization performance and good robustness, and exhibits a certain uniqueness in rank sum testing. However, its optimization performance is not good enough in some functions (such as the F13 function), and its convergence speed is not fast enough in the F2, F4, F5, and F6 functions.
The nonlinear time-varying factor and inertia weight strategy were introduced to balance the exploitation and exploration abilities of the whale optimization algorithm. A novel gravity balance strategy was proposed for the whale search and encircling phases, which effectively protects excellent solutions from inferior ones during the random walk, thus improving the quality of the solutions. A rebirth mechanism added at the end of each iteration helps the algorithm jump out of local optima. The improved whale optimization algorithm was then compared with six algorithms on 16 benchmark test functions, and it was concluded that the overall performance of the improved GWOA is the best among the comparison algorithms. Finally, the improved algorithm was applied to robot path planning to verify its feasibility and applicability; the results show that the improved algorithm produces a clear path at the lowest cost. It can be seen that the introduction of multiple strategies improves the accuracy and stability of the basic whale optimization algorithm.
However, on some functions, the GWOA is slightly less accurate than the comparison algorithms. Therefore, the next stage will aim to comprehensively improve the accuracy of the solution, and more complex applications will be sought as evaluation criteria for the algorithm.

Author Contributions

Conceptualization, C.O. and Y.G.; methodology, C.O. and Y.G.; software, Y.G.; validation, Y.G. and D.Z.; formal analysis, Y.G. and D.Z.; investigation, Y.G. and D.Z.; resources, C.O. and D.Z.; data curation, Y.G. and D.Z.; writing—original draft preparation, Y.G.; writing—review and editing, C.O. and D.Z.; visualization, Y.G. and D.Z.; supervision, C.O. and C.Z.; project administration, C.Z.; funding acquisition, C.O. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 62272418 and 62002046).

Data Availability Statement

All data for this study are available from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, R.; Feng, Y. Evaluation research on green degree of equipment manufacturing industry based on improved particle swarm optimization algorithm. Chaos Solitons Fractals Interdiscip. J. Nonlinear Sci. Nonequilibrium Complex Phenom. 2020, 131, 109502. [Google Scholar] [CrossRef]
  2. Abualigah, L.; Elaziz, M.A.; Khasawneh, A.M.; Alshinwan, M.; Ibrahim, R.A.; Al-Qaness, M.A.; Mirjalili, S.; Sumari, P.; Gandomi, A.H. Meta-heuristic optimization algorithms for solving real-world mechanical engineering design problems: A comprehensive survey, applications, comparative analysis, and results. Neural Comput. Appl. 2022, in press. [Google Scholar] [CrossRef]
  3. Bayzidi, H.; Talatahari, S.; Saraee, M.; Lamarche, C.P. Social Network Search for Solving Engineering Optimization Problems. Comput. Intell. Neurosci. 2021, 2021, 32. [Google Scholar] [CrossRef]
  4. Du, Y.; Yang, N. Analysis of image processing algorithm based on bionic intelligent optimization. Clust. Comput. 2018, 22, 3505–3512. [Google Scholar] [CrossRef]
  5. Ouyang, C.; Qiu, Y.; Zhu, D. Adaptive Spiral Flying Sparrow Search Algorithm. Sci. Program. 2021, 2021. [Google Scholar] [CrossRef]
  6. Rashid, A.S.; Mohamed, O.; Khalek, S.A.; Sayed, A.E.; Faihan, A.M. Optimal path planning for drones based on swarm intelligence algorithm. Neural Comput. Appl. 2022, 34, 10133–10155. [Google Scholar]
  7. Zhu, D.; Huang, Z.; Liao, S.; Zhou, C.; Yan, S.; Chen, G. Improved Bare Bones Particle Swarm Optimization for DNA Sequence Design. IEEE Trans. NanoBioscience 2022, 603–613. [Google Scholar] [CrossRef]
  8. Engy, E.-S.; Sallam, K.M.; Chakrabortty, R.K.; Abohany, A.A. A clustering based Swarm Intelligence optimization technique for the Internet of Medical Things. Expert Syst. Appl. 2021, 173, 114648. [Google Scholar]
  9. Mojgan, S.; FathollahiFard, A.M.; Kamyar, K.; Maziar, Y.; Mohammad, S. Selecting Appropriate Risk Response Strategies Considering Utility Function and Budget Constraints: A Case Study of a Construction Company in Iran. Buildings 2022, 12, 98. [Google Scholar]
  10. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Mhs95 Sixth International Symposium on Micro Machine & Human Science, Nagoya, Japan, 4–6 October 1995. [Google Scholar]
  11. Dorigo, M. The ant system: An autocatalytic optimizing process. In Proceedings of the First European Conference on Artificial Life, Paris, France, 11–13 December 1991. [Google Scholar]
  12. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  13. Fathollahi-Fard, A.M.; Hajiaghaei-Keshteli, M.; Tavakkoli-Moghaddam, R. The Social Engineering Optimizer (SEO). Eng. Appl. Artif. Intell. 2018, 72, 267–293. [Google Scholar] [CrossRef]
  14. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control. Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  15. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  16. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  17. Xiong, G.; Zhang, J.; Shi, D.; He, Y. Parameter extraction of solar photovoltaic models using an improved whale optimization algorithm. Energy Convers. Manag. 2018, 174, 388–405. [Google Scholar] [CrossRef]
  18. Li, S.H.; Luo, X.H.; Wu, L.Z. An improved whale optimization algorithm for locating critical slip surface of slopes. Adv. Eng. Softw. 2021, 157, 103009. [Google Scholar] [CrossRef]
  19. Nadimi-Shahraki, M.H.; Zamani, H.; Mirjalili, S. Enhanced whale optimization algorithm for medical feature selection: A COVID-19 case study. Comput. Biol. Med. 2022, 148, 105858. [Google Scholar] [CrossRef]
  20. Yang, K.; Yang, K. Improved Whale Algorithm for Economic Load Dispatch Problem in Hydropower Plants and Comprehensive Performance Evaluation. Water Resour. Manag. 2022, 36, 5823–5838. [Google Scholar] [CrossRef]
  21. Shen, Y.; Zhang, C.; Gharehchopogh, F.S.; Mirjalili, S. An improved whale optimization algorithm based on multi-population evolution for global optimization and engineering design problems. Expert Syst. Appl. 2023, 215, 119269. [Google Scholar] [CrossRef]
  22. Guo, W.; Liu, T.; Dai, F.; Xu, P. An improved whale optimization algorithm for forecasting water resources demand. Appl. Soft Comput. 2020, 86, 105925. [Google Scholar] [CrossRef]
  23. Oliva, D.; Abd El Aziz, M.; Hassanien, A.E. Parameter estimation of photovoltaic cells using an improved chaotic whale optimization algorithm. Appl. Energy 2017, 200, 141–154. [Google Scholar] [CrossRef]
  24. Ning, G.Y.; Cao, D.Q. Improved whale optimization algorithm for solving constrained optimization problems. Discret. Dyn. Nat. Soc. 2021, 2021, 1–13. [Google Scholar] [CrossRef]
  25. Zhang, J.; Wang, J.S. Improved whale optimization algorithm based on nonlinear adaptive weight and golden sine operator. IEEE Access 2020, 8, 77013–77048. [Google Scholar] [CrossRef]
  26. Yang, W.; Xia, K.; Fan, S.; Wang, L.; Li, T.; Zhang, J.; Feng, Y. A multi-strategy Whale optimization algorithm and its application. Eng. Appl. Artif. Intell. 2022, 108, 104558. [Google Scholar] [CrossRef]
  27. Li, Y.; Han, M.; Guo, Q. Modified whale optimization algorithm based on tent chaotic mapping and its application in structural optimization. KSCE J. Civ. Eng. 2020, 24, 3703–3713. [Google Scholar] [CrossRef]
  28. Jiang, T.; Zhang, C.; Zhu, H.; Gu, J.; Deng, G. Energy-efficient scheduling for a job shop using an improved whale optimization algorithm. Mathematics 2018, 6, 220. [Google Scholar] [CrossRef] [Green Version]
  29. Howard, A.; Mataric, M.; Sukhatme, G.S. Mobile Sensor Network Deployment Using Potential Fields: A Distributed, Scalable Solution to the Area Coverage Problem. Distrib. Auton. Robot. Syst. 2002, 299–308. [Google Scholar] [CrossRef] [Green Version]
  30. Liu, L.; Zhang, R. Multistrategy Improved Whale Optimization Algorithm and Its Application. Comput. Intell. Neurosci. 2022, 2022. [Google Scholar] [CrossRef] [PubMed]
  31. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine predators algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  32. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  33. Ravber, M.; Liu, S.H.; Mernik, M.; Črepinšek, M. Maximum number of generations as a stopping criterion considered harmful. Appl. Soft Comput. 2022, 128, 109478. [Google Scholar] [CrossRef]
  34. Demšar, J. Statistical Comparisons of Classifiers over Multiple Data Sets. J. Mach. Learn. Res. 2006, 7, 1–30. [Google Scholar]
  35. Veček, N.; Črepinšek, M.; Mernik, M. On the influence of the number of algorithms, problems, and independent runs in the comparison of evolutionary algorithms. Appl. Soft Comput. 2017, 54, 23–45. [Google Scholar] [CrossRef]
  36. Akka, K.; Khaber, F. Mobile robot path planning using an improved ant colony optimization. Int. J. Adv. Robot. Syst. 2018, 15, 1729881418774673. [Google Scholar] [CrossRef] [Green Version]
  37. Qiuyun, T.; Hongyan, S.; Hengwei, G.; Ping, W. Improved particle swarm optimization algorithm for AGV path planning. IEEE Access 2021, 9, 33522–33531. [Google Scholar] [CrossRef]
  38. Zhang, H.; Zhuang, Q.; Li, G. Robot Path Planning Method Based on Indoor Spacetime Grid Model. Remote Sens. 2022, 14, 2357. [Google Scholar] [CrossRef]
  39. Coello, C.A.C. Use of a self-adaptive penalty approach for engineering optimization problems. Comput. Ind. 2000, 41, 113–127. [Google Scholar] [CrossRef]
  40. Arora, J.S. Introduction to Optimum Design, 3rd ed.; Elsevier Inc.: Amsterdam, The Netherlands, 2012.
Figure 1. Flowchart of the WOA.
Figure 2. Conceptual diagram of gravitational equilibrium.
Figure 3. The box plots of each algorithm on the test functions.
Figure 4. Convergence curves on 16 benchmark functions.
Figure 5. Octree search diagram.
Figure 6. Optimal path planning diagram of each calculation.
Figure 7. Schematic diagram of welded beam structure [3].
Figure 8. Schematic diagram of pressure vessel structure [3].
Figure 9. Tension/compression spring diagram [3].
Table 1. The details of benchmark functions.
Function | Dim | Range | Optimum Value
$F_1(x) = \sum_{i=1}^{d} x_i^2$ | 30/100 | [−100, 100] | 0
$F_2(x) = \sum_{i=1}^{d} |x_i| + \prod_{i=1}^{d} |x_i|$ | 30/100 | [−10, 10] | 0
$F_3(x) = \sum_{i=1}^{d} \left( \sum_{j=1}^{i} x_j \right)^2$ | 30/100 | [−100, 100] | 0
$F_4(x) = \max_i \{ |x_i|, 1 \le i \le d \}$ | 30/100 | [−100, 100] | 0
$F_5(x) = \sum_{i=1}^{d-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | 30/100 | [−10, 10] | 0
$F_6(x) = \sum_{i=1}^{d} (\lfloor x_i + 0.5 \rfloor)^2$ | 30/100 | [−30, 30] | 0
$F_7(x) = \sum_{i=1}^{d} i x_i^4 + \mathrm{random}[0, 1)$ | 30/100 | [−1.28, 1.28] | 0
$F_8(x) = \sum_{i=1}^{d} -x_i \sin \left( \sqrt{|x_i|} \right)$ | 30/100 | [−500, 500] | −498.5n
$F_9(x) = -20 \exp \left( -0.2 \sqrt{\frac{1}{d} \sum_{i=1}^{d} x_i^2} \right) - \exp \left( \frac{1}{d} \sum_{i=1}^{d} \cos(2 \pi x_i) \right) + 20 + e$ | 30/100 | [−32, 32] | 0
$F_{10}(x) = \frac{\pi}{d} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{d-1} (y_i - 1)^2 [1 + 10 \sin^2(\pi y_{i+1})] + (y_d - 1)^2 \right\} + \sum_{i=1}^{d} u(x_i, 10, 100, 4)$, where $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m & x_i > a \\ 0 & -a < x_i < a \\ k (-x_i - a)^m & x_i < -a \end{cases}$ | 30/100 | [−50, 50] | 0
$F_{11}(x) = 0.1 \left\{ \sin^2(3 \pi x_1) + \sum_{i=1}^{d} (x_i - 1)^2 [1 + \sin^2(3 \pi x_i + 1)] + (x_n - 1)^2 [1 + \sin^2(2 \pi x_n)] \right\} + \sum_{i=1}^{d} u(x_i, 10, 100, 4)$ | 30/100 | [0, π] | 0
$F_{12}(x) = \left( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right)^{-1}$ | 2 | [−65, 65] | 1
$F_{13}(x) = \left[ 1 + (x_1 + x_2 + 1)^2 (19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2) \right] \times \left[ 30 + (2 x_1 - 3 x_2)^2 (18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2) \right]$ | 2 | [−2, 2] | 3
$F_{14}(x) = -\sum_{i=1}^{4} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.1532
$F_{15}(x) = -\sum_{i=1}^{7} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.4028
$F_{16}(x) = -\sum_{i=1}^{10} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.5363
Table 2. The parameters of each algorithm.
Algorithm | Parameter Setting
WOA | $N = 30$, $a = 2 - t \times (2 / t_{max})$
DECWOA | $N = 30$, $t_{max} = 500$, $w = 0.5 + \exp(f_{fit}(x)/u)^t$
GWO | $N = 30$, $a = 2 - t \times (2 / t_{max})$
PSO | $N = 30$, $w_{max} = 0.9$, $w_{min} = 0.4$, $c_1 = c_2 = 2.0$
MPA | $N = 30$, $P = 0.5$, $P_f = 0.2$
HHO | $N = 30$
GWOA | $N = 30$, $w = \pi \times \tan(\pi/4 \times (t / Max_{iter}))$
Table 3. Experimental results of each optimization algorithm (30-D/50,000 FEs).
Function | Algorithm | Mean | Std | Best | Worst | Rank
F1 | DECWOA | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 1
F1 | GWO | 1.109 × 10^−100 | 1.760 × 10^−100 | 5.883 × 10^−105 | 6.970 × 10^−100 | 4
F1 | HHO | 4.980 × 10^−55 | 1.847 × 10^−54 | 2.140 × 10^−66 | 9.961 × 10^−54 | 5
F1 | MPA | 6.532 × 10^−43 | 2.324 × 10^−42 | 1.581 × 10^−45 | 1.305 × 10^−41 | 6
F1 | PSO | 9.451 × 10^−10 | 1.336 × 10^−9 | 4.020 × 10^−11 | 5.412 × 10^−9 | 7
F1 | WOA | 1.159 × 10^−201 | 0.000 × 10^0 | 1.382 × 10^−247 | 3.478 × 10^−200 | 3
F1 | GWOA | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 1
F2 | DECWOA | 1.038 × 10^−281 | 0.000 × 10^0 | 4.690 × 10^−289 | 1.510 × 10^−280 | 2
F2 | GWO | 1.802 × 10^−58 | 3.299 × 10^−58 | 3.987 × 10^−60 | 1.723 × 10^−57 | 4
F2 | HHO | 4.471 × 10^−28 | 1.831 × 10^−27 | 5.553 × 10^−36 | 1.016 × 10^−26 | 6
F2 | MPA | 1.798 × 10^−24 | 2.762 × 10^−24 | 6.237 × 10^−28 | 1.364 × 10^−23 | 5
F2 | PSO | 1.067 × 10^1 | 9.286 × 10^0 | 8.323 × 10^−7 | 3.000 × 10^1 | 7
F2 | WOA | 1.766 × 10^−172 | 0.000 × 10^0 | 1.665 × 10^−187 | 5.258 × 10^−171 | 3
F2 | GWOA | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 1
F3 | DECWOA | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 1
F3 | GWO | 1.241 × 10^−26 | 4.816 × 10^−26 | 8.402 × 10^−35 | 2.395 × 10^−25 | 4
F3 | HHO | 2.361 × 10^−39 | 1.040 × 10^−38 | 1.402 × 10^−59 | 5.788 × 10^−38 | 3
F3 | MPA | 3.677 × 10^−11 | 1.532 × 10^−10 | 5.647 × 10^−21 | 8.471 × 10^−10 | 5
F3 | PSO | 1.925 × 10^1 | 9.203 × 10^0 | 9.355 × 10^0 | 5.321 × 10^1 | 6
F3 | WOA | 8.803 × 10^3 | 7.316 × 10^3 | 3.430 × 10^2 | 2.726 × 10^4 | 7
F3 | GWOA | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 1
F4 | DECWOA | 3.418 × 10^−239 | 0.000 × 10^0 | 8.379 × 10^−252 | 7.898 × 10^−238 | 2
F4 | GWO | 1.583 × 10^−24 | 2.883 × 10^−24 | 1.276 × 10^−26 | 1.374 × 10^−23 | 4
F4 | HHO | 2.839 × 10^−27 | 1.032 × 10^−26 | 6.385 × 10^−34 | 5.218 × 10^−26 | 3
F4 | MPA | 9.160 × 10^−17 | 7.741 × 10^−17 | 1.150 × 10^−17 | 3.151 × 10^−16 | 5
F4 | PSO | 9.300 × 10^−1 | 2.608 × 10^−1 | 4.862 × 10^−1 | 1.577 × 10^0 | 6
F4 | WOA | 3.432 × 10^1 | 3.190 × 10^1 | 7.631 × 10^−3 | 8.751 × 10^1 | 7
F4 | GWOA | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 1
F5 | DECWOA | 7.587 × 10^−2 | 1.097 × 10^−1 | 2.745 × 10^−5 | 4.682 × 10^−1 | 3
F5 | GWO | 2.652 × 10^1 | 7.037 × 10^−1 | 2.527 × 10^1 | 2.796 × 10^1 | 5
F5 | HHO | 3.959 × 10^−2 | 5.344 × 10^−2 | 2.327 × 10^−1 | 2.135 × 10^−4 | 2
F5 | MPA | 2.558 × 10^1 | 6.968 × 10^−1 | 2.450 × 10^1 | 2.748 × 10^1 | 4
F5 | PSO | 1.785 × 10^2 | 5.416 × 10^2 | 5.711 × 10^0 | 3.034 × 10^3 | 7
F5 | WOA | 2.655 × 10^1 | 6.984 × 10^−1 | 2.595 × 10^1 | 2.872 × 10^1 | 6
F5 | GWOA | 5.542 × 10^−3 | 1.031 × 10^−2 | 1.539 × 10^−5 | 4.511 × 10^−2 | 1
F6 | DECWOA | 1.047 × 10^−2 | 2.142 × 10^−2 | 2.228 × 10^−5 | 1.200 × 10^−1 | 5
F6 | GWO | 5.726 × 10^−1 | 3.101 × 10^−1 | 5.436 × 10^−6 | 1.004 × 10^0 | 7
F6 | HHO | 8.034 × 10^−4 | 1.490 × 10^−3 | 6.621 × 10^−8 | 6.556 × 10^−3 | 3
F6 | MPA | 6.638 × 10^−2 | 2.771 × 10^−2 | 2.557 × 10^−2 | 1.342 × 10^−1 | 6
F6 | PSO | 1.174 × 10^−9 | 1.511 × 10^−9 | 9.635 × 10^−12 | 6.856 × 10^−9 | 1
F6 | WOA | 7.657 × 10^−3 | 1.318 × 10^−2 | 1.914 × 10^−3 | 7.494 × 10^−2 | 4
F6 | GWOA | 2.213 × 10^−4 | 2.728 × 10^−4 | 5.203 × 10^−8 | 8.539 × 10^−4 | 2
F7 | DECWOA | 8.528 × 10^−5 | 9.926 × 10^−5 | 6.381 × 10^−6 | 4.341 × 10^−4 | 2
F7 | GWO | 5.375 × 10^−4 | 2.586 × 10^−4 | 1.258 × 10^−4 | 1.184 × 10^−3 | 4
F7 | HHO | 3.872 × 10^−4 | 3.588 × 10^−4 | 5.965 × 10^−6 | 1.845 × 10^−3 | 3
F7 | MPA | 7.656 × 10^−4 | 3.933 × 10^−4 | 2.328 × 10^−4 | 1.766 × 10^−3 | 5
F7 | PSO | 5.598 × 10^0 | 5.362 × 10^0 | 2.693 × 10^−2 | 1.882 × 10^1 | 7
F7 | WOA | 2.162 × 10^−3 | 1.150 × 10^−3 | 1.952 × 10^−5 | 4.614 × 10^−3 | 6
F7 | GWOA | 2.105 × 10^−5 | 1.736 × 10^−5 | 6.791 × 10^−7 | 6.585 × 10^−5 | 1
F8 | DECWOA | −1.093 × 10^4 | 1.556 × 10^3 | −1.256 × 10^4 | −7.419 × 10^3 | 4
F8 | GWO | −6.043 × 10^3 | 7.729 × 10^2 | −7.901 × 10^3 | −4.666 × 10^3 | 7
F8 | HHO | −1.255 × 10^4 | 8.996 × 10^1 | −1.257 × 10^4 | −1.207 × 10^4 | 2
F8 | MPA | −8.633 × 10^3 | 5.135 × 10^2 | −9.670 × 10^3 | −7.842 × 10^3 | 5
PSO−7.004 × 1036.751 × 102−8.304 × 103−5.265 × 1036
WOA−1.122 × 1041.609 × 103−1.257 × 104−8.155 × 1033
GWOA−1.257 × 1045.298 × 10−3−1.257 × 104−1.257 × 1041
F9DECWOA1.717 × 10−151.503 × 10−158.882 × 10−164.441 × 10−153
GWO1.072 × 10−143.393 × 10−154.441 × 10−151.510 × 10−146
HHO8.882 × 10−160.000 × 1008.882 × 10−168.882 × 10−162
MPA4.086 × 10−151.066 × 10−158.882 × 10−164.441 × 10−155
PSO4.697 × 10−055.923 × 10−053.723 × 10−062.387 × 10−047
WOA3.612 × 10−152.371 × 10−158.882 × 10−167.994 × 10−154
GWOA8.882 × 10−160.000 × 1008.882 × 10−168.882 × 10−161
F10DECWOA1.761 × 10−11.223 × 10−14.751 × 10−24.687 × 10−17
GWO3.382 × 10−21.810 × 10−26.540 × 10−37.931 × 10−26
HHO1.337 × 10−21.514 × 10−23.056 × 10−66.628 × 10−25
MPA1.654 × 10−31.139 × 10−33.977 × 10−45.364 × 10−32
PSO6.911 × 10−32.586 × 10−21.771 × 10−131.037 × 10−014
WOA2.197 × 10−33.159 × 10−32.711 × 10−41.392 × 10−23
GWOA6.194 × 10−61.370 × 10−56.630 × 10−107.023 × 10−51
F11DECWOA9.795 × 10−12.554 × 10−12.336 × 10−11.450 × 1007
GWO4.945 × 10−12.342 × 10−19.743 × 10−21.120 × 1006
HHO4.249 × 10−45.236 × 10−45.594 × 10−111.954 × 10−32
MPA3.575 × 10−22.176 × 10−25.093 × 10−39.639 × 10−24
PSO3.296 × 10−35.035 × 10−31.775 × 10−111.099 × 10−23
WOA1.061 × 10−11.075 × 10−18.354 × 10−33.793 × 10−15
GWOA6.110 × 10−54.245 × 10−53.741 × 10−82.120 × 10−41
F12DECWOA9.980 × 10−10.000 × 1009.980 × 10−19.980 × 10−11
GWO3.874 × 1003.724 × 1009.980 × 10−11.267 × 1017
HHO2.119 × 1001.646 × 1009.980 × 10−15.929 × 1005
MPA9.980 × 10−10.000 × 1009.980 × 10−19.980 × 10−11
PSO3.463 × 1002.478 × 1009.980 × 10−11.076 × 1016
WOA2.079 × 1002.451 × 1009.980 × 10−11.076 × 1014
GWOA9.980 × 10−10.000 × 1009.980 × 10−19.980 × 10−11
F13DECWOA3.000 × 1002.153 × 10−43.000 × 1003.001 × 1006
GWO3.000 × 1003.260 × 10−63.000 × 1003.000 × 1003
HHO3.000 × 1001.271 × 10−53.000 × 1003.000 × 1004
MPA3.000 × 1001.240 × 10−153.000 × 1003.000 × 1001
PSO3.000 × 1001.347 × 10−153.000 × 1003.000 × 1005
WOA3.000 × 1001.236 × 10−53.000 × 1003.000 × 1004
GWOA3.000 × 1003.507 × 10−33.000 × 1003.015 × 1007
F14DECWOA−5.146 × 1002.326 × 100−9.879 × 100−2.435 × 1007
GWO−9.646 × 1001.520 × 100−1.015 × 101−5.055 × 1003
HHO−5.739 × 1001.614 × 100−1.011 × 101−4.924 × 1006
MPA−1.015 × 1015.299 × 10−15−1.015 × 101−1.015 × 1011
PSO−6.708 × 1002.936 × 100−1.015 × 101−2.630 × 1005
WOA−9.303 × 1001.900 × 100−1.015 × 101−5.055 × 1004
GWOA−1.015 × 1016.019 × 10−05−1.015 × 101−1.015 × 1012
F15DECWOA−5.217 × 1002.150 × 100−1.002 × 101−2.325 × 1007
GWO−1.040 × 1019.969 × 10−5−1.040 × 101−1.040 × 1013
HHO−5.237 × 1009.564 × 10−1−1.038 × 101−4.937 × 1006
MPA−1.040 × 1011.376 × 10−15−1.040 × 101−1.040 × 1011
PSO−8.738 × 1002.575 × 100−1.040 × 101−2.766 × 1005
WOA−8.830 × 1002.658 × 100−1.040 × 101−2.766 × 1004
GWOA−1.040 × 1015.405 × 10−5−1.040 × 101−1.040 × 1012
F16DECWOA−6.657 × 1002.796 × 100−1.053 × 101−3.362 × 1006
GWO−1.036 × 1019.707 × 10−1−1.054 × 101−5.128 × 1003
HHO−5.180 × 1001.011 × 100−9.945 × 100−2.382 × 1007
MPA−1.054 × 1011.020 × 10−14−1.054 × 101−1.054 × 1011
PSO−9.637 × 1002.012 × 100−5.128 × 100−1.054 × 1014
WOA−9.589 × 1002.121 × 100−1.054 × 101−3.835 × 1005
GWOA−1.054 × 1018.819 × 10−5−1.054 × 101−1.054 × 1012
Table 4. Experimental results of each optimization algorithm (100 D/50,000 fes).

| Function | Algorithm | Mean | Std | Best | Worst | Rank |
|---|---|---|---|---|---|---|
| F1 | DECWOA | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 1 |
| | GWO | 4.767 × 10^−52 | 6.152 × 10^−52 | 1.652 × 10^−53 | 2.762 × 10^−51 | 4 |
| | HHO | 4.289 × 10^−51 | 2.253 × 10^−50 | 1.536 × 10^−63 | 1.256 × 10^−49 | 5 |
| | MPA | 7.552 × 10^−37 | 2.216 × 10^−36 | 1.013 × 10^−39 | 1.063 × 10^−35 | 6 |
| | PSO | 4.476 × 10^0 | 2.353 × 10^0 | 1.966 × 10^0 | 1.466 × 10^1 | 7 |
| | WOA | 8.745 × 10^−249 | 0.000 × 10^0 | 3.406 × 10^−281 | 2.367 × 10^−247 | 3 |
| | GWOA | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 1 |
| F2 | DECWOA | 2.121 × 10^−280 | 0.000 × 10^0 | 1.711 × 10^−288 | 5.598 × 10^−279 | 2 |
| | GWO | 3.760 × 10^−31 | 2.429 × 10^−31 | 6.812 × 10^−32 | 9.339 × 10^−31 | 4 |
| | HHO | 5.974 × 10^−27 | 2.200 × 10^−26 | 1.261 × 10^−32 | 9.778 × 10^−26 | 5 |
| | MPA | 1.433 × 10^−21 | 2.219 × 10^−21 | 9.374 × 10^−23 | 1.165 × 10^−20 | 6 |
| | PSO | 1.330 × 10^2 | 3.058 × 10^1 | 7.062 × 10^1 | 2.022 × 10^2 | 7 |
| | WOA | 6.285 × 10^−173 | 0.000 × 10^0 | 7.623 × 10^−186 | 9.895 × 10^−172 | 3 |
| | GWOA | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 1 |
| F3 | DECWOA | 3.284 × 10^−300 | 0.000 × 10^0 | 0.000 × 10^0 | 9.850 × 10^−299 | 2 |
| | GWO | 8.879 × 10^−2 | 2.704 × 10^−1 | 5.568 × 10^−8 | 1.099 × 10^0 | 5 |
| | HHO | 2.418 × 10^−22 | 6.512 × 10^−22 | 5.585 × 10^−59 | 3.628 × 10^−21 | 3 |
| | MPA | 3.422 × 10^−3 | 4.402 × 10^−3 | 3.841 × 10^−8 | 1.647 × 10^−2 | 4 |
| | PSO | 1.402 × 10^4 | 3.849 × 10^3 | 8.340 × 10^3 | 3.029 × 10^4 | 6 |
| | WOA | 6.921 × 10^5 | 1.518 × 10^5 | 3.080 × 10^5 | 1.010 × 10^6 | 7 |
| | GWOA | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 1 |
| F4 | DECWOA | 4.888 × 10^−238 | 0.000 × 10^0 | 2.822 × 10^−250 | 1.264 × 10^−236 | 2 |
| | GWO | 1.798 × 10^−6 | 4.767 × 10^−6 | 5.242 × 10^−10 | 1.988 × 10^−5 | 5 |
| | HHO | 1.672 × 10^−26 | 1.034 × 10^−25 | 6.348 × 10^−33 | 4.837 × 10^−25 | 3 |
| | MPA | 6.422 × 10^−14 | 4.993 × 10^−14 | 1.003 × 10^−14 | 2.731 × 10^−13 | 4 |
| | PSO | 1.047 × 10^1 | 1.067 × 10^0 | 7.704 × 10^0 | 2.631 × 10^1 | 6 |
| | WOA | 7.355 × 10^1 | 2.539 × 10^1 | 1.574 × 10^1 | 9.677 × 10^1 | 7 |
| | GWOA | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 1 |
| F5 | DECWOA | 7.731 × 10^−2 | 1.680 × 10^−1 | 9.121 × 10^−6 | 8.441 × 10^−1 | 2 |
| | GWO | 9.757 × 10^1 | 7.703 × 10^−1 | 9.608 × 10^1 | 9.844 × 10^1 | 5 |
| | HHO | 1.032 × 10^−1 | 1.659 × 10^−1 | 3.784 × 10^−6 | 8.328 × 10^−1 | 3 |
| | MPA | 9.762 × 10^1 | 6.992 × 10^−1 | 9.609 × 10^1 | 9.848 × 10^1 | 6 |
| | PSO | 1.089 × 10^4 | 1.644 × 10^4 | 2.919 × 10^3 | 9.666 × 10^4 | 7 |
| | WOA | 9.713 × 10^1 | 5.503 × 10^−1 | 9.641 × 10^1 | 9.818 × 10^1 | 4 |
| | GWOA | 6.354 × 10^−3 | 8.316 × 10^−3 | 8.213 × 10^−5 | 3.236 × 10^−2 | 1 |
| F6 | DECWOA | 7.564 × 10^0 | 2.046 × 10^0 | 2.046 × 10^0 | 1.548 × 10^1 | 4 |
| | GWO | 9.311 × 10^0 | 9.871 × 10^−1 | 7.506 × 10^0 | 1.151 × 10^1 | 6 |
| | HHO | 1.600 × 10^−3 | 1.563 × 10^−3 | 2.680 × 10^−6 | 5.413 × 10^−3 | 2 |
| | MPA | 8.759 × 10^0 | 1.125 × 10^0 | 6.888 × 10^0 | 1.110 × 10^1 | 5 |
| | PSO | 1.271 × 10^1 | 4.203 × 10^0 | 1.641 × 10^0 | 1.835 × 10^1 | 7 |
| | WOA | 6.353 × 10^−1 | 2.295 × 10^−1 | 2.935 × 10^−1 | 1.179 × 10^0 | 3 |
| | GWOA | 6.107 × 10^−4 | 7.569 × 10^−4 | 2.193 × 10^−3 | 9.506 × 10^−7 | 1 |
| F7 | DECWOA | 1.074 × 10^−4 | 1.202 × 10^−4 | 3.821 × 10^−6 | 5.318 × 10^−4 | 2 |
| | GWO | 1.367 × 10^−3 | 6.785 × 10^−4 | 4.672 × 10^−4 | 3.082 × 10^−3 | 6 |
| | HHO | 3.580 × 10^−4 | 3.410 × 10^−4 | 1.990 × 10^−5 | 1.467 × 10^−3 | 3 |
| | MPA | 1.241 × 10^−3 | 5.527 × 10^−4 | 3.081 × 10^−4 | 2.415 × 10^−3 | 5 |
| | PSO | 2.465 × 10^2 | 1.131 × 10^2 | 5.989 × 10^1 | 6.061 × 10^2 | 7 |
| | WOA | 1.162 × 10^−3 | 1.860 × 10^−3 | 8.025 × 10^−5 | 9.878 × 10^−3 | 4 |
| | GWOA | 8.503 × 10^−5 | 5.969 × 10^−5 | 2.406 × 10^−6 | 3.279 × 10^−4 | 1 |
| F8 | DECWOA | −3.594 × 10^4 | 7.935 × 10^3 | −4.190 × 10^4 | −1.530 × 10^4 | 3 |
| | GWO | −1.674 × 10^4 | 2.410 × 10^3 | −2.007 × 10^4 | −6.180 × 10^3 | 7 |
| | HHO | −4.151 × 10^4 | 1.693 × 10^3 | −4.190 × 10^4 | −3.254 × 10^4 | 2 |
| | MPA | −2.054 × 10^4 | 1.297 × 10^3 | −2.354 × 10^4 | −1.800 × 10^4 | 6 |
| | PSO | −2.122 × 10^4 | 1.871 × 10^3 | −2.488 × 10^4 | −1.633 × 10^4 | 5 |
| | WOA | −3.504 × 10^4 | 6.199 × 10^3 | −4.190 × 10^4 | −2.708 × 10^4 | 4 |
| | GWOA | −4.190 × 10^4 | 2.467 × 10^−2 | −4.190 × 10^4 | −4.190 × 10^4 | 1 |
| F9 | DECWOA | 1.480 × 10^−15 | 1.324 × 10^−15 | 4.441 × 10^−15 | 8.882 × 10^−16 | 3 |
| | GWO | 3.393 × 10^−14 | 4.118 × 10^−15 | 2.576 × 10^−14 | 3.997 × 10^−14 | 6 |
| | HHO | 8.882 × 10^−16 | 0.000 × 10^0 | 8.882 × 10^−16 | 8.882 × 10^−16 | 1 |
| | MPA | 4.441 × 10^−15 | 0.000 × 10^0 | 4.441 × 10^−15 | 4.441 × 10^−15 | 5 |
| | PSO | 3.117 × 10^0 | 1.421 × 10^0 | 2.134 × 10^0 | 1.054 × 10^1 | 7 |
| | WOA | 3.730 × 10^−15 | 2.321 × 10^−15 | 8.882 × 10^−16 | 7.994 × 10^−15 | 4 |
| | GWOA | 8.882 × 10^−16 | 0.000 × 10^0 | 8.882 × 10^−16 | 8.882 × 10^−16 | 1 |
| F10 | DECWOA | 1.361 × 10^−1 | 4.666 × 10^−2 | 7.870 × 10^−2 | 2.776 × 10^−1 | 5 |
| | GWO | 2.441 × 10^−1 | 6.136 × 10^−2 | 1.489 × 10^−1 | 4.127 × 10^−1 | 6 |
| | HHO | 1.065 × 10^−5 | 1.547 × 10^−5 | 3.754 × 10^−8 | 6.632 × 10^−5 | 2 |
| | MPA | 1.256 × 10^−1 | 3.040 × 10^−2 | 8.727 × 10^−2 | 2.134 × 10^−1 | 4 |
| | PSO | 2.223 × 10^0 | 8.594 × 10^−1 | 8.134 × 10^−1 | 4.035 × 10^0 | 7 |
| | WOA | 7.379 × 10^−3 | 1.122 × 10^−2 | 2.556 × 10^−3 | 6.692 × 10^−2 | 3 |
| | GWOA | 2.117 × 10^−6 | 2.512 × 10^−6 | 4.615 × 10^−9 | 1.084 × 10^−5 | 1 |
| F11 | DECWOA | 3.791 × 10^0 | 1.362 × 10^0 | 1.841 × 10^0 | 7.195 × 10^0 | 4 |
| | GWO | 6.154 × 10^0 | 3.653 × 10^−1 | 5.562 × 10^0 | 6.847 × 10^0 | 5 |
| | HHO | 2.725 × 10^−4 | 2.417 × 10^−4 | 4.121 × 10^−7 | 9.645 × 10^−4 | 2 |
| | MPA | 8.861 × 10^0 | 1.438 × 10^0 | 4.728 × 10^0 | 9.704 × 10^0 | 6 |
| | PSO | 5.850 × 10^1 | 1.642 × 10^1 | 2.893 × 10^1 | 8.736 × 10^1 | 7 |
| | WOA | 1.016 × 10^0 | 4.636 × 10^−1 | 3.253 × 10^−1 | 2.274 × 10^0 | 3 |
| | GWOA | 5.125 × 10^−5 | 7.072 × 10^−5 | 3.475 × 10^−9 | 2.589 × 10^−4 | 1 |
Table 5. Test results of each strategy.

| Function | Index | WOA | WOA-1 | WOA-2 | WOA-3 | WOA-4 | GWOA |
|---|---|---|---|---|---|---|---|
| F1 | Mean | 3.046 × 10^−71 | 1.797 × 10^−104 | 8.514 × 10^−248 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 |
| | Std | 1.639 × 10^−70 | 6.244 × 10^−104 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 |
| | Best | 5.624 × 10^−86 | 5.817 × 10^−115 | 1.234 × 10^−274 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 |
| | Worst | 9.135 × 10^−70 | 2.846 × 10^−103 | 1.252 × 10^−246 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 |
| F2 | Mean | 1.810 × 10^−51 | 2.917 × 10^−67 | 6.176 × 10^−136 | 0.000 × 10^0 | 5.771 × 10^−223 | 2.616 × 10^−227 |
| | Std | 4.928 × 10^−51 | 1.224 × 10^−66 | 1.830 × 10^−135 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 |
| | Best | 3.145 × 10^−57 | 7.767 × 10^−75 | 5.150 × 10^−150 | 0.000 × 10^0 | 5.765 × 10^−255 | 1.203 × 10^−270 |
| | Worst | 2.668 × 10^−50 | 5.625 × 10^−66 | 8.052 × 10^−135 | 0.000 × 10^0 | 1.076 × 10^−221 | 7.847 × 10^−226 |
| F3 | Mean | 4.390 × 10^4 | 2.893 × 10^4 | 7.764 × 10^−179 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 |
| | Std | 1.312 × 10^4 | 1.488 × 10^4 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 |
| | Best | 1.659 × 10^4 | 1.385 × 10^3 | 3.409 × 10^−217 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 |
| | Worst | 7.255 × 10^4 | 5.585 × 10^4 | 1.553 × 10^−177 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 |
| F4 | Mean | 4.861 × 10^1 | 8.027 × 10^1 | 3.220 × 10^−108 | 0.000 × 10^0 | 5.683 × 10^−230 | 1.586 × 10^−233 |
| | Std | 2.601 × 10^1 | 1.516 × 10^1 | 1.403 × 10^−107 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 |
| | Best | 5.627 × 10^0 | 3.321 × 10^1 | 4.010 × 10^−120 | 0.000 × 10^0 | 1.100 × 10^−246 | 4.056 × 10^−277 |
| | Worst | 8.861 × 10^1 | 9.271 × 10^1 | 6.438 × 10^−107 | 0.000 × 10^0 | 1.108 × 10^−228 | 4.130 × 10^−232 |
| F5 | Mean | 2.803 × 10^1 | 2.879 × 10^1 | 2.877 × 10^1 | 2.877 × 10^1 | 4.630 × 10^−2 | 5.944 × 10^−3 |
| | Std | 3.640 × 10^−1 | 2.571 × 10^−2 | 2.166 × 10^−2 | 2.244 × 10^−2 | 6.285 × 10^−2 | 6.671 × 10^−3 |
| | Best | 2.749 × 10^1 | 2.874 × 10^1 | 2.873 × 10^1 | 2.871 × 10^1 | 7.328 × 10^−6 | 2.331 × 10^−7 |
| | Worst | 2.876 × 10^1 | 2.885 × 10^1 | 2.880 × 10^1 | 2.882 × 10^1 | 2.307 × 10^−1 | 2.815 × 10^−2 |
| F6 | Mean | 3.828 × 10^−1 | 2.477 × 10^0 | 7.878 × 10^−1 | 6.386 × 10^−1 | 4.702 × 10^−3 | 2.553 × 10^−3 |
| | Std | 2.616 × 10^−1 | 3.087 × 10^−1 | 3.985 × 10^−1 | 2.817 × 10^−1 | 2.611 × 10^−3 | 3.258 × 10^−3 |
| | Best | 8.880 × 10^−2 | 1.941 × 10^0 | 1.840 × 10^−1 | 7.272 × 10^−2 | 1.170 × 10^−4 | 9.584 × 10^−6 |
| | Worst | 1.169 × 10^0 | 2.965 × 10^0 | 1.765 × 10^0 | 9.993 × 10^−1 | 7.972 × 10^−3 | 1.484 × 10^−2 |
| F7 | Mean | 4.958 × 10^−3 | 2.235 × 10^−3 | 7.719 × 10^−5 | 6.720 × 10^−5 | 3.636 × 10^−4 | 2.713 × 10^−5 |
| | Std | 6.545 × 10^−3 | 1.988 × 10^−3 | 5.770 × 10^−5 | 5.441 × 10^−5 | 3.005 × 10^−4 | 2.563 × 10^−5 |
| | Best | 4.967 × 10^−5 | 1.018 × 10^−4 | 8.626 × 10^−6 | 7.830 × 10^−8 | 3.213 × 10^−5 | 1.628 × 10^−6 |
| | Worst | 3.094 × 10^−2 | 6.387 × 10^−3 | 1.787 × 10^−4 | 1.910 × 10^−4 | 9.762 × 10^−4 | 8.928 × 10^−5 |
| F8 | Mean | −1.044 × 10^4 | −6.838 × 10^3 | −1.183 × 10^4 | −1.257 × 10^4 | −1.257 × 10^4 | −1.257 × 10^4 |
| | Std | 1.745 × 10^3 | 1.988 × 10^2 | 7.802 × 10^2 | 1.192 × 10^−1 | 9.278 × 10^−2 | 1.932 × 10^−1 |
| | Best | −1.257 × 10^4 | −1.184 × 10^4 | −1.257 × 10^4 | −1.257 × 10^4 | −1.257 × 10^4 | −1.257 × 10^4 |
| | Worst | −6.624 × 10^3 | −3.409 × 10^3 | −1.033 × 10^4 | −1.257 × 10^4 | −1.257 × 10^4 | −1.257 × 10^4 |
| F9 | Mean | 2.901 × 10^−15 | 3.908 × 10^−15 | 8.882 × 10^−16 | 8.882 × 10^−16 | 8.882 × 10^−16 | 8.882 × 10^−16 |
| | Std | 2.543 × 10^−15 | 2.814 × 10^−15 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 |
| | Best | 8.882 × 10^−16 | 8.882 × 10^−16 | 8.882 × 10^−16 | 8.882 × 10^−16 | 8.882 × 10^−16 | 8.882 × 10^−16 |
| | Worst | 7.994 × 10^−15 | 7.994 × 10^−15 | 8.882 × 10^−16 | 8.882 × 10^−16 | 8.882 × 10^−16 | 8.882 × 10^−16 |
| F10 | Mean | 2.246 × 10^−2 | 3.384 × 10^−1 | 1.155 × 10^−1 | 3.522 × 10^−2 | 3.009 × 10^−5 | 4.793 × 10^−6 |
| | Std | 1.458 × 10^−2 | 2.090 × 10^−1 | 8.382 × 10^−2 | 2.210 × 10^−2 | 2.730 × 10^−5 | 6.325 × 10^−6 |
| | Best | 5.130 × 10^−3 | 9.040 × 10^−2 | 1.430 × 10^−2 | 1.202 × 10^−2 | 9.999 × 10^−7 | 7.593 × 10^−8 |
| | Worst | 5.779 × 10^−2 | 6.623 × 10^−1 | 3.012 × 10^−1 | 8.957 × 10^−2 | 9.420 × 10^−5 | 2.805 × 10^−5 |
| F11 | Mean | 6.729 × 10^−1 | 1.581 × 10^0 | 5.807 × 10^−1 | 4.446 × 10^−1 | 2.019 × 10^−4 | 8.990 × 10^−5 |
| | Std | 2.858 × 10^−1 | 5.461 × 10^−1 | 3.515 × 10^−1 | 2.270 × 10^−1 | 2.366 × 10^−4 | 1.910 × 10^−4 |
| | Best | 1.035 × 10^−1 | 6.989 × 10^−1 | 9.693 × 10^−2 | 7.115 × 10^−2 | 1.752 × 10^−6 | 1.175 × 10^−7 |
| | Worst | 1.254 × 10^0 | 2.590 × 10^0 | 1.336 × 10^0 | 9.332 × 10^−1 | 7.285 × 10^−4 | 9.061 × 10^−4 |
| F12 | Mean | 2.278 × 10^0 | 7.246 × 10^0 | 2.256 × 10^0 | 2.977 × 10^0 | 9.980 × 10^−1 | 9.980 × 10^−1 |
| | Std | 2.448 × 10^0 | 4.901 × 10^0 | 7.676 × 10^−1 | 4.936 × 10^−1 | 0.000 × 10^0 | 0.000 × 10^0 |
| | Best | 9.980 × 10^−1 | 9.983 × 10^−1 | 9.980 × 10^−1 | 9.980 × 10^−1 | 9.980 × 10^−1 | 9.980 × 10^−1 |
| | Worst | 1.076 × 10^1 | 1.550 × 10^1 | 3.093 × 10^0 | 3.835 × 10^0 | 9.980 × 10^−1 | 9.980 × 10^−1 |
| F13 | Mean | 3.000 × 10^0 | 3.000 × 10^0 | 3.000 × 10^0 | 3.000 × 10^0 | 3.569 × 10^0 | 3.014 × 10^0 |
| | Std | 8.684 × 10^−5 | 8.360 × 10^−3 | 1.175 × 10^−3 | 1.326 × 10^−4 | 7.881 × 10^−1 | 5.108 × 10^−2 |
| | Best | 3.000 × 10^0 | 3.000 × 10^0 | 3.000 × 10^0 | 3.000 × 10^0 | 3.000 × 10^0 | 3.000 × 10^0 |
| | Worst | 3.000 × 10^0 | 3.038 × 10^0 | 3.004 × 10^0 | 3.000 × 10^0 | 5.842 × 10^0 | 3.275 × 10^0 |
| F14 | Mean | −8.278 × 10^0 | −7.556 × 10^0 | −9.327 × 10^0 | −9.599 × 10^0 | −1.015 × 10^1 | −1.015 × 10^1 |
| | Std | 2.681 × 10^0 | 1.364 × 10^0 | 9.704 × 10^−1 | 6.138 × 10^−1 | 1.130 × 10^−5 | 5.371 × 10^−5 |
| | Best | −1.015 × 10^1 | −1.040 × 10^1 | −1.040 × 10^1 | −1.015 × 10^1 | −1.015 × 10^1 | −1.015 × 10^1 |
| | Worst | −2.627 × 10^0 | −4.287 × 10^0 | −6.550 × 10^0 | −8.635 × 10^0 | −1.015 × 10^1 | −1.015 × 10^1 |
| F15 | Mean | −7.213 × 10^0 | −6.511 × 10^0 | −9.591 × 10^0 | −9.885 × 10^0 | −1.040 × 10^1 | −1.040 × 10^1 |
| | Std | 3.015 × 10^0 | 1.996 × 10^0 | 1.091 × 10^0 | 2.506 × 10^0 | 1.078 × 10^−5 | 8.550 × 10^−5 |
| | Best | −1.040 × 10^1 | −1.040 × 10^1 | −1.040 × 10^1 | −1.974 × 10^1 | −1.040 × 10^1 | −1.040 × 10^1 |
| | Worst | −2.766 × 10^0 | −4.187 × 10^0 | −7.032 × 10^0 | −5.053 × 10^0 | −1.040 × 10^1 | −1.040 × 10^1 |
| F16 | Mean | −7.214 × 10^0 | −7.377 × 10^0 | −1.009 × 10^1 | −9.949 × 10^0 | −1.054 × 10^1 | −1.054 × 10^1 |
| | Std | 3.206 × 10^0 | 2.069 × 10^0 | 7.744 × 10^0 | 7.981 × 10^−1 | 9.72 × 10^−5 | 9.409 × 10^−5 |
| | Best | −1.054 × 10^1 | −1.054 × 10^1 | −1.054 × 10^1 | −1.054 × 10^1 | −1.054 × 10^1 | −1.054 × 10^1 |
| | Worst | −1.859 × 10^0 | −3.405 × 10^0 | −7.107 × 10^0 | −8.468 × 10^0 | −1.054 × 10^1 | −1.054 × 10^1 |
| Optimal number of indicators | | 1 | 1 | 2 | 7 | 8 | 13 |
Table 6. Wilcoxon rank sum test results.

| Function | DECWOA | WOA | GWO | PSO | HHO | MPA |
|---|---|---|---|---|---|---|
| F1 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 |
| F2 | 3.01 × 10^−11 | 3.01 × 10^−11 | 3.01 × 10^−11 | 3.01 × 10^−11 | 3.01 × 10^−11 | 3.01 × 10^−11 |
| F3 | 4.56 × 10^−11 | 1.21 × 10^−12 | 1.21 × 10^−12 | 2.36 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 |
| F4 | 3.01 × 10^−11 | 3.01 × 10^−11 | 3.01 × 10^−11 | 3.01 × 10^−11 | 3.01 × 10^−11 | 3.01 × 10^−11 |
| F5 | 3.01 × 10^−11 | 3.01 × 10^−11 | 3.01 × 10^−11 | 3.01 × 10^−11 | 6.76 × 10^−5 | 3.01 × 10^−11 |
| F6 | 3.01 × 10^−11 | 3.01 × 10^−11 | 5.07 × 10^−10 | 2.31 × 10^−6 | 2.83 × 10^−4 | 3.15 × 10^−10 |
| F7 | 2.12 × 10^−4 | 1.15 × 10^−7 | 7.77 × 10^−9 | 3.01 × 10^−11 | 2.68 × 10^−4 | 5.53 × 10^−8 |
| F8 | 2.12 × 10^−4 | 3.68 × 10^−11 | 3.01 × 10^−11 | 3.01 × 10^−11 | 5.57 × 10^−10 | 3.01 × 10^−11 |
| F9 | 4.29 × 10^−10 | 7.46 × 10^−7 | 1.10 × 10^−12 | 1.21 × 10^−12 | 5.34 × 10^−6 | 1.19 × 10^−12 |
| F10 | 2.12 × 10^−4 | 3.01 × 10^−11 | 3.01 × 10^−11 | 2.38 × 10^−8 | 6.51 × 10^−9 | 3.01 × 10^−11 |
| F11 | 3.01 × 10^−11 | 3.01 × 10^−11 | 2.15 × 10^−6 | 1.59 × 10^−3 | 2.19 × 10^−7 | 3.68 × 10^−11 |
| F12 | 3.01 × 10^−11 | 1.10 × 10^−3 | 9.03 × 10^−4 | 6.67 × 10^−5 | 2.19 × 10^−7 | 1.36 × 10^−11 |
| F13 | 9.21 × 10^−5 | 4.80 × 10^−7 | 6.73 × 10^−9 | 2.66 × 10^−11 | 4.19 × 10^−10 | 2.57 × 10^−11 |
| F14 | 3.33 × 10^−11 | 2.87 × 10^−10 | 3.52 × 10^−7 | 1.88 × 10^−3 | 3.01 × 10^−11 | 3.01 × 10^−11 |
| F15 | 3.01 × 10^−11 | 6.06 × 10^−11 | 5.46 × 10^−6 | 3.54 × 10^−4 | 3.01 × 10^−11 | 3.01 × 10^−11 |
| F16 | 3.01 × 10^−11 | 7.38 × 10^−11 | 2.00 × 10^−6 | 3.47 × 10^−4 | 3.01 × 10^−11 | 3.01 × 10^−11 |
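The p-values in Table 6 come from pairwise Wilcoxon rank-sum tests between GWOA and each competitor over the independent runs; values below 0.05 indicate a statistically significant difference. A simplified pure-Python sketch of the two-sided test using the normal approximation (no tie-variance or continuity correction, unlike a full statistics library):

```python
import math

def ranks(values):
    """Midranks (ties receive the average of their positions)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def rank_sum_p(xs, ys):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation."""
    n1, n2 = len(xs), len(ys)
    r = ranks(list(xs) + list(ys))
    w = sum(r[:n1])                        # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0          # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return math.erfc(abs((w - mu) / sigma) / math.sqrt(2))
```

Two well-separated samples give a small p-value, while identical samples give p = 1.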
Table 7. Friedman statistical results.

30-dim

| Algorithm | DECWOA | GWO | HHO | MPA | PSO | WOA | GWOA |
|---|---|---|---|---|---|---|---|
| Rank | 4.1875 | 4.6875 | 3.9063 | 3.7188 | 5.1250 | 4.5625 | 1.8125 |

100-dim

| Algorithm | DECWOA | GWO | HHO | MPA | PSO | WOA | GWOA |
|---|---|---|---|---|---|---|---|
| Rank | 2.7727 | 5.3636 | 2.8636 | 5.1818 | 6.6364 | 4.0909 | 1.0909 |
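Table 7's average ranks can be turned into the Friedman test statistic; a sketch assuming k = 7 algorithms ranked on N = 16 functions in the 30-dimensional case (for 100 dimensions only F1–F11 apply, so N = 11):

```python
def friedman_chi2(avg_ranks, n):
    """Friedman statistic: chi2 = 12N / (k(k+1)) * sum_j (R_j - (k+1)/2)^2,
    where R_j are the average ranks of the k algorithms over N problems."""
    k = len(avg_ranks)
    centre = (k + 1) / 2.0
    return 12.0 * n / (k * (k + 1)) * sum((r - centre) ** 2 for r in avg_ranks)

# 30-dimensional average ranks from Table 7
# (order: DECWOA, GWO, HHO, MPA, PSO, WOA, GWOA).
ranks_30d = [4.1875, 4.6875, 3.9063, 3.7188, 5.1250, 4.5625, 1.8125]
```

As a sanity check, the average ranks of k = 7 algorithms must sum to k(k + 1)/2 = 28, which the table satisfies.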
Table 8. Performance indicators of each algorithm.

| Map Size | Index | GWOA | DECWOA | WOA | PSO |
|---|---|---|---|---|---|
| 12 × 12 | Shortest | 15.5563 | 15.5563 | 18.3848 | 21.2132 |
| | Worst | 21.2132 | 24.0416 | 26.8701 | 29.6985 |
| | Average | 16.4402 | 18.0312 | 21.6552 | 24.6604 |
| | Inflection points | 7.3750 | 8.2188 | 9.4063 | 11.0625 |
| | Rank | 1 | 2 | 3 | 4 |
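The shortest length 15.5563 on the 12 × 12 map equals 11√2, i.e. a purely diagonal traversal of the grid. A sketch of how such path lengths are computed from waypoints (the diagonal path here is illustrative, not a map from the paper):

```python
import math

def path_length(waypoints):
    """Euclidean length of a piecewise-linear path given as (x, y) waypoints."""
    return sum(math.dist(p, q) for p, q in zip(waypoints, waypoints[1:]))

# A straight diagonal across a 12 x 12 grid: 11 unit diagonal steps.
diagonal = [(i, i) for i in range(12)]
```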
Table 9. Optimal results of welding beam engineering problems.

| Algorithm | x1 | x2 | x3 | x4 | f(x) | Rank |
|---|---|---|---|---|---|---|
| GWOA | 0.17362529 | 4.2017926 | 9.54738792 | 0.205705 | 1.724852323 | 2 |
| WOA | 0.20234729 | 3.5835421 | 9.03836035 | 0.387305 | 1.735021793 | 7 |
| DECWOA | 0.20572964 | 3.4704887 | 9.03662391 | 0.205738 | 1.724852309 | 1 |
| GWO | 0.20565067 | 3.4722534 | 9.03682293 | 0.205740 | 1.725003904 | 3 |
| PSO | 0.21125253 | 3.4656346 | 8.76782271 | 0.218537 | 1.755888165 | 6 |
| HHO | 0.20523806 | 3.4809689 | 9.03733442 | 0.205739 | 1.725132082 | 5 |
| MPA | 0.20572094 | 3.4715304 | 9.03703657 | 0.205728 | 1.725018576 | 4 |
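The objective behind Table 9 is the standard welded-beam fabrication cost, with x1 the weld thickness, x2 the weld length, x3 the bar height, and x4 the bar thickness. A sketch of the objective only (the shear, bending, and buckling constraints are omitted):

```python
def welded_beam_cost(x1, x2, x3, x4):
    """Welded-beam design objective:
    f = 1.10471 * x1^2 * x2 + 0.04811 * x3 * x4 * (14 + x2)."""
    return 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)
```

Plugging in DECWOA's row reproduces its tabulated cost to about four decimals.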
Table 10. Optimization results of pressure vessel problems.

| Algorithm | x1 | x2 | x3 | x4 | f(x) | Rank |
|---|---|---|---|---|---|---|
| GWOA | 0.78662974 | 0.4011552 | 40.6259534 | 195.7791 | 5953.705854 | 1 |
| WOA | 0.77880150 | 0.4566316 | 40.3196388 | 199.9997 | 6061.269412 | 5 |
| DECWOA | 0.83717191 | 0.4480266 | 43.2230683 | 163.1819 | 6021.244209 | 2 |
| GWO | 0.93334071 | 0.4543938 | 47.6220701 | 118.4064 | 6044.889025 | 4 |
| PSO | 0.78547815 | 0.4378789 | 40.5516213 | 196.7954 | 6062.606402 | 6 |
| HHO | 0.83574701 | 0.4039123 | 42.1031221 | 176.5787 | 6107.918295 | 7 |
| MPA | 0.78668547 | 0.4236772 | 40.5305527 | 197.0843 | 6032.512874 | 3 |
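Similarly, Table 10 minimizes the classical pressure-vessel cost, with x1 the shell thickness, x2 the head thickness, x3 the inner radius, and x4 the cylinder length. An objective-only sketch (the thickness and volume constraints are omitted):

```python
def vessel_cost(x1, x2, x3, x4):
    """Pressure-vessel design objective:
    f = 0.6224*x1*x3*x4 + 1.7781*x2*x3^2 + 3.1661*x1^2*x4 + 19.84*x1^2*x3."""
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)
```

GWOA's row reproduces its tabulated best cost.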
Table 11. Tensile and compression spring engineering design results.

| Algorithm | x1 | x2 | x3 | f(x) | Rank |
|---|---|---|---|---|---|
| GWOA | 0.05143415 | 0.3506162 | 11.65585649 | 0.012665251 | 3 |
| WOA | 0.06327765 | 0.7048988 | 3.285910365 | 0.012686305 | 7 |
| DECWOA | 0.05171397 | 0.3573276 | 11.25330213 | 0.012665244 | 2 |
| GWO | 0.05605708 | 0.4711903 | 6.775892624 | 0.012667203 | 5 |
| PSO | 0.05534713 | 0.4512736 | 7.329839403 | 0.012667690 | 6 |
| HHO | 0.05742952 | 0.5111917 | 5.845519837 | 0.012666156 | 4 |
| MPA | 0.05420961 | 0.4204409 | 8.341112809 | 0.012665243 | 1 |
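Table 11's objective is the tension/compression spring weight, with x1 the wire diameter, x2 the mean coil diameter, and x3 the number of active coils. An objective-only sketch (the deflection, shear, and surge-frequency constraints are omitted):

```python
def spring_weight(x1, x2, x3):
    """Tension/compression spring design objective: f = (x3 + 2) * x2 * x1^2."""
    return (x3 + 2.0) * x2 * x1 ** 2
```

GWOA's row reproduces its tabulated weight to about five decimals.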
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Ouyang, C.; Gong, Y.; Zhu, D.; Zhou, C. Improved Whale Optimization Algorithm Based on Fusion Gravity Balance. Axioms 2023, 12, 664. https://doi.org/10.3390/axioms12070664
