Article

Path Planning for Unmanned Delivery Robots Based on EWB-GWO Algorithm

1 Key Laboratory of Optoelectronic Information Sensing and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
2 School of Advanced Manufacturing Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(4), 1867; https://doi.org/10.3390/s23041867
Submission received: 11 January 2023 / Revised: 20 January 2023 / Accepted: 24 January 2023 / Published: 7 February 2023
(This article belongs to the Section Sensors and Robotics)

Abstract

With the rise of robotics in various fields, mobile robots have seen significant development. For mobile robots performing unmanned delivery tasks, autonomous navigation in complex environments is particularly important. In this paper, an improved Gray Wolf Optimization (GWO)-based algorithm is proposed to realize the autonomous path planning of mobile robots in complex scenarios. First, the strategy for generating the initial wolf pack of the GWO algorithm is modified by introducing a two-dimensional Tent–Sine coupled chaotic mapping. This guarantees the diversity of the initial population while improving the randomness between the two-dimensional state variables of the path nodes. Second, by introducing an opposition-based learning method based on an elite strategy, an adaptive nonlinear inertia weight strategy, and the random wandering law of the Butterfly Optimization Algorithm (BOA), this paper remedies the slow convergence, low accuracy, and imbalance between global exploration and local exploitation that the GWO algorithm exhibits on high-dimensional complex problems. The improved algorithm is named the EWB-GWO algorithm, where EWB abbreviates the three strategies. Finally, this paper enhances the rationality of the initial population generation of the EWB-GWO algorithm through the line-of-sight detection technique based on Bresenham’s line algorithm, which reduces the number of iterations of the EWB-GWO algorithm and decreases its time complexity in dealing with the path planning problem. The simulation results show that the EWB-GWO algorithm is very competitive among metaheuristics of the same type. It also achieves the best path length and smoothness metrics in the path planning experiments.

1. Introduction

Initially, mobile robots were widely used in high-risk areas of industry and the military. Unmanned Aerial Vehicles (UAVs) [1] and Autonomous Underwater Vehicles (AUVs) [2] have gradually become an essential backbone of national armies due to their lower cost of use, easy logistical support, and lower safety risk. Rail-Guided Vehicles (RGVs) [3] and Automated Guided Vehicles (AGVs) [4] are becoming an indispensable part of the industrial automation chain thanks to their simple and convenient operation, strong autonomy, and high work efficiency. With the rise of Industry 4.0 and emerging technologies, mobile robots are playing an important role in search and rescue [5], cargo transportation [6], unmanned services [7], geological exploration, and cultural entertainment [8]. Advances in autonomous navigation technology for mobile robots have made it possible to build unmanned delivery networks on the ground. The unmanned delivery robot developed by JD entered the road test phase in 2016 and is expected to be put into operation soon. The technology makes it possible to provide large-scale, contactless delivery of goods during an epidemic to meet the livelihood needs of the many consumers who are unable to travel, as shown in Figure 1. Currently, unmanned delivery robots are still severely limited in urban areas, especially in heavy and congested traffic. A significant portion of delivery robots are confined to areas with relatively fixed environments, such as campuses and communities, so they cannot yet replace people in providing efficient, fast, and contactless delivery services under the current epidemic conditions. Therefore, reasonably constructing unmanned distribution networks, applying mobile robots at scale for efficient cargo transportation, and solving the problem of “the last mile of cargo delivery” will bring great convenience to consumers.
Unmanned delivery robots are not the same as traditional AGVs in terms of navigation methods, movement modes, and usage functions. Traditional AGVs are based on a differential drive model with radar and laser SLAM navigation [9]. AGVs move slowly and have a lifting capacity of tons, enabling factories to move heavyweight goods while meeting the needs of distribution safety. Although unmanned delivery robots have fast braking, GPS, and a vehicle motion model design structure to enhance the adaptability of road transportation to some extent, they still face serious challenges in the delivery tasks of large and complex maps.
Path planning, one of the key technologies for the autonomous navigation of unmanned delivery robots, aims to quickly plan a short, efficient, and safe route from the initial position to the target area in complex work scenarios. In the new industrial era, with the introduction of intelligent industries, the operational efficiency of current robot path planning algorithms and the quality of the generated paths no longer meet the requirements of the times. For mobile robots performing unmanned delivery work in large-scale complex environments, path planning is a key factor affecting work efficiency. Therefore, to improve the efficiency of delivery robot operations, it is necessary to shorten the path length and reduce the running time of the path planning algorithm while satisfying path feasibility and autonomous obstacle avoidance. The authors in [10] state that traditional path planning algorithms for mobile robots face an exponential increase in planning difficulty when dealing with complex environments containing a large number of uncertainties. Finding a path between two nodes that satisfies the constraints of the mobile robot can be understood as an NP problem with no universal solution. Therefore, to address the above problem, this paper introduces an improved metaheuristic intelligent search algorithm that abstracts the path planning problem of the delivery robot into an optimization problem with movement space constraints. Efficient autonomous path planning for unmanned delivery robots in complex environments with large maps is achieved through the metaheuristic algorithm’s ability to handle highly nonlinear problems.

2. Related Work

In recent years, many scholars have researched and developed path planning techniques for mobile robots under complex maps in order to guarantee the safety and efficiency of robots in the process of movement.
The graph search technique on discretized grids [11] is popular among path planning algorithms for its intuitive computational process, simplicity of operation, ease of map construction, and flexibility of map discretization intervals. In [12], weighting methods for three different strategies, weight-based recurrence (WBR), weight-based coordinate distance (WBCD), and weight-based travel cost (WBTC), are proposed and fused with the heuristic functions of the A*, Bi-A*, and Jump Point Search (JPS) algorithms to further improve the computational efficiency of path planning algorithms based on discrete grid map search. Adding weights to strengthen the guidance of the path planning algorithm during the search is a simple and effective way to reduce computational complexity, since it reduces the number of redundant nodes the algorithm explores. However, the efficiency of such algorithms decreases significantly as the density of obstacles and the complexity of the map increase, and the restriction of the search direction imposed by the discretized grid makes the generated path deviate from the actual optimal path [13]. In [14], a Theta* algorithm based on selective parameter optimization is proposed, and an operational trajectory conforming to the kinematic constraints of the vehicle is obtained by eliminating redundant path nodes. The Theta* algorithm [15] adds an any-angle pathfinding strategy to the traditional A* algorithm and prunes redundant nodes in the path through line-of-sight (LOS) reachability detection [16,17] to shorten the path. However, its time complexity is too large, which is not conducive to the efficient and fast execution of delivery tasks by mobile robots.
Path planning algorithms based on probabilistic sampling techniques usually require sampling the nodes of the mobile robot’s workspace in a random way and finding the optimal path by controlling the search direction and search step of the algorithm. The authors in [18] proposed a Rapidly exploring Random Trees (RRT) path planning algorithm based on low-cost valley and cost-space saddle-point strategies, which biases the random selection of path nodes to low-cost regions in space to achieve the low-cost motion of the mobile robot. The path planning algorithm based on the probabilistic sampling technique reduces to some extent the limitation of the search direction caused by the discretized grid map, but it is worth noting that the path planning algorithm based on sampling technique provides only weak probabilistic completeness and the generated paths are not optimal in general. Based on the above problems, [19] proposed a pseudo-random sampling strategy based on spatial principal axis as reference to further optimize the path planning method based on the sampling technique. This article improves the computational efficiency of the algorithm and the quality of the generated paths by setting the distance threshold between nodes and using a two-way incremental approach for path collision detection. In [20], a neural RRT* algorithm is proposed to guide the sampling location of RRT* and speed up the convergence of the algorithm by training the convolutional neural network (CNN) model for a large number of successful path planning cases for a more accurate prediction of the probability distribution of the optimal path.
The ability of intelligent search techniques based on metaheuristic algorithms to solve highly nonlinear and complex problems has attracted widespread interest in the field of path planning. The authors in [21] presented a path planning method based on an improved particle swarm optimization (PSO) algorithm. By adding an adaptive fractional-order velocity to the iterative process of the PSO algorithm and applying local perturbations to the particle swarm according to its evolutionary state, that work addresses the PSO algorithm’s tendency to fall into local optima and converge prematurely, and thus realizes the path planning design of mobile robots. The authors in [22] proposed a Mayfly (MF) algorithm based on an adaptive Cauchy variational operator and an exponentially decreasing inertia weighting strategy to achieve two-dimensional autonomous pathfinding for UAVs. In addition to good scalability in finding high-quality solutions to many complex problems, metaheuristics are favored by many researchers for their ability to excel where traditional exact optimization algorithms fail to provide satisfactory results. In 2014, Seyedali Mirjalili proposed the Gray Wolf Optimization (GWO) algorithm [23], which is based on the multilayer social structure of gray wolf packs and uses the social behavior of wolves in finding and rounding up prey to determine the best hunting location. The benchmark function tests in that article show that the GWO algorithm is highly competitive with other metaheuristics of the same type. However, according to the no-free-lunch theorem for optimization [24], no optimization algorithm can outperform all others under every metric for all possible problems, and the same is true for the GWO algorithm. The authors in [25] pointed out that the GWO algorithm suffers from slow convergence and a tendency to fall into local optima, resulting in premature convergence and poor optimization results when dealing with high-dimensional complex functions.
In recent years, many scholars have proposed optimization strategies for the GWO algorithm with the intention of further improving its search efficiency, convergence accuracy, and performance balance. Article [26] proposed an improved GWO algorithm (I-GWO) based on the dimensional-learning hunting search (DLH) strategy, which inherits the hunting behavior of individual wolves in nature. The DLH strategy constructs a hunting domain for each wolf in a different way, within which neighboring information can be shared among individual wolves to enhance the balance between local and global search capabilities and further maintain the diversity of the population. The DLH strategy gives the I-GWO algorithm excellent global convergence and local-optimum escape capability. Article [27] proposed a hybrid algorithm, GWO-CS, based on the Cuckoo Search (CS) algorithm and the GWO algorithm. It optimizes the strongly oriented global search process of the GWO algorithm by using the random wandering of the CS algorithm and the Lévy-flight-based nest location update, alleviating the local-optimum stagnation and premature convergence caused by the weak global search performance of the GWO algorithm. Article [28] proposed an exponential convergence factor improvement strategy to better fit the actual search process of gray wolves, improving the control parameters to balance the global exploration and local exploitation capabilities of the algorithm while incorporating dynamic weighting factors to reduce the possibility of the GWO algorithm falling into a local optimum. However, the experimental results show that the GWO algorithm still suffers from low accuracy when dealing with high-dimensional functions.
The GWO algorithm is also often used in the improvement and fusion of other metaheuristic algorithms, to solve their parameter-setting and selective optimization problems during operation. Article [29] proposed a hybrid algorithm combining PSO and GWO. Its design differs from that of GWO-CS: the article is developed without changing the general operation of either the PSO or the GWO algorithm. The authors use the excellent detection capability of the GWO algorithm to replace the random drift of particles in the original PSO algorithm, directing some particles to the partially improved positions obtained by the GWO algorithm and thus further reducing the risk of the PSO algorithm becoming trapped in a local optimum. It is undeniable, however, that combining two heuristic algorithms leads to a surge in time complexity. The paper points out that this increase in time complexity is tolerable from a practical engineering point of view, for example in the leather nesting problem (LNP).
Based on the above problems and inspired by related works, this paper improves the path planning design of unmanned delivery robots by improving the GWO algorithm to ensure the safe and efficient pathfinding of the robots in large and complex map scenarios. The contributions made in this paper are shown below.
  • The strategy for generating the initial wolf pack of the GWO algorithm is modified by introducing a two-dimensional Tent–Sine coupled chaotic mapping that matches the two-dimensionality of the mobile robot path solution, improving the randomness between the two-dimensional state variables (x-axis and y-axis coordinate positions) of the path nodes while guaranteeing the population diversity of the GWO algorithm;
  • An improved GWO algorithm (EWB-GWO) is proposed, which combines an adaptive nonlinear inertia weighting strategy, opposition-based learning method based on an elite strategy, and a random wandering law of the BOA algorithm to balance the search performance of the GWO algorithm and improve the convergence accuracy of the algorithm;
  • The EWB-GWO algorithm is combined with mobile robot path planning to enhance the rationalization of the initial population generation through the line-of-sight detection technique based on Bresenham’s line algorithm, reduce the number of iterations of the EWB-GWO algorithm, and decrease the time complexity of the algorithm;
  • The EWB-GWO algorithm and its three sub-algorithms are tested against other metaheuristics (PSO, GA, BOA, ABC) based on the CEC benchmark function test set in a cross-sectional comparison, and the results are analyzed to further validate the rationality of the algorithm improvement strategy;
  • Mobile robot path planning experiments are conducted on the public dataset of Moving AI Lab, with a cross-sectional comparison against other path planning algorithms, to verify the superiority of the EWB-GWO algorithm in handling large-scale complex environment maps.

3. Gray Wolf Optimization Algorithm and Its Improvement Strategy

The GWO algorithm is a swarm intelligence (SI) search algorithm among the natural metaheuristics, inspired by the roundup and hunting behavior of the gray wolf pack. Three high-ranking gray wolf individuals lead the pack in the optimal direction of the prey, which makes the method a successful algorithm for fast convergence and avoiding local minima [25]. However, the GWO algorithm suffers from a lack of population diversity, an imbalance between exploration and exploitation, and premature convergence on high-dimensional complex functions. The remainder of this section presents the GWO algorithm and the detailed improvement steps.

3.1. Standardized GWO Algorithm

The GWO algorithm was proposed by Seyedali Mirjalili of Griffith University, Australia, and colleagues, and was inspired by the hunting behavior of gray wolf packs. The GWO algorithm mathematically simulates how a gray wolf pack searches for, surrounds, and attacks its prey, strictly following the social hierarchy of the pack, with three leading wolves guiding the overall action. The social hierarchy divides the wolves into four ranks that dominate one another in turn. Among them, Alpha ( α ) wolves, Beta ( β ) wolves, and Delta ( δ ) wolves are considered pack leaders with excellent leadership ability, while Omega ( ω ) wolves are subordinate wolves that follow the leaders, as shown in Figure 2.
A mathematical modeling of the social hierarchy of wolves is required in the design of the GWO algorithm. Usually, during the iteration of the algorithm, the best solution is considered to be Wolf α , the second and third best solutions are Wolf β and Wolf δ , respectively, while all other alternatives are considered to be ω . During the search and exploration tasks performed by the GWO algorithm, the overall wolf pack’s search direction is dominated by α , β , and δ , while ω follows. The gray wolf is divided into three main stages in the hunting process: finding the target and approaching the prey, surrounding the prey and harassing it until the target stops moving, and finally attacking the prey.

3.1.1. Approaching and Encircling Behavior of Wolves towards Their Prey

Gray wolves will surround the prey in the hunting process, and when the current position of the gray wolf pack is expressed as X ( t ) and the position information of the prey is expressed as X p ( t ) , the hunting behavior of the gray wolf pack is abstracted into a mathematical model, as shown in Equations (1) and (2)
$\vec{D} = \left| \vec{C} \cdot \vec{X}_p(t) - \vec{X}(t) \right|$  (1)
$\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D}$  (2)
where $\vec{D}$ denotes the distance between the prey and the wolf individuals and t is the number of the current iteration. In Equation (2), $\vec{X}(t+1)$ represents the location update of the gray wolf pack, whose movement is affected by its distance from the current prey. $\vec{C}$ and $\vec{A}$ are the coefficient vectors, whose expressions are given in Equations (3) and (4).
$\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a}$  (3)
$\vec{C} = 2\vec{r}_2$  (4)
Among them, the parameter a balances global search with local exploitation in the GWO algorithm. The value of a decreases linearly from 2 to 0 during the overall iteration of the algorithm. A larger value of a in the initial stage of the iteration helps the global convergence of the GWO algorithm and guides the wolf pack quickly to the region where the optimal solution is located. As a decreases linearly in the later iterations, the GWO algorithm can exploit the optimal region finely and improve its convergence accuracy. $r_1$ and $r_2$ are random vectors satisfying $r_1, r_2 \in [0, 1]$. From Equations (1) and (2), it is known that the gray wolf can predict its moving direction based on the current position information and the possible orientation of the prey. By adjusting the values of $\vec{A}$ and $\vec{C}$ in Equations (1) and (2), the gray wolf can reach different locations around the prey to achieve encirclement.

3.1.2. Hunting Behavior of Wolves

The gray wolf pack has the ability to identify the location of potential prey and mainly relies on the guidance of wolf α , wolf β , and wolf δ in the process of searching for prey. In order to simulate the hunting behavior of gray wolves at the mathematical level, the GWO algorithm needs to keep the three wolves ( α , β , and δ ) with the best adaptation in the current pack during each iteration and update the positions of other wolves based on their position information. The mathematical model of wolf pack leaders tracking prey is represented as shown in Equation (5).
$\vec{D}_\alpha = |\vec{C}_1 \cdot \vec{X}_\alpha - \vec{X}|, \quad \vec{D}_\beta = |\vec{C}_2 \cdot \vec{X}_\beta - \vec{X}|, \quad \vec{D}_\delta = |\vec{C}_3 \cdot \vec{X}_\delta - \vec{X}|$  (5)
where D α , D β , and D δ denote the distance difference between wolf α , wolf β , and wolf δ and other individuals, respectively. X α , X β , and X δ denote the current locations of wolf α , wolf β , and wolf δ , respectively, X denotes the current wolf pack location, and C 1 , C 2 , and C 3 are random vectors and satisfy the constraints of Equation (4). Based on the distance difference between the wolf leader and the wolf pack, the traction direction of the overall wolf pack, i.e., the information on the moving orientation of the wolf pack, can be calculated as shown in Equations (6) and (7).
$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha, \quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta, \quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta$  (6)
$\vec{X}(t+1) = \dfrac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}$  (7)
where A 1 , A 2 , and A 3 are random vectors satisfying the constraints of Equation (3); X 1 , X 2 , and X 3 denote the traction directions of the three leading wolves; and X ( t + 1 ) denotes the overall moving position of the wolf pack. As shown in Figure 3, it can be seen that the final orientation of the wolves will be located at a random position within the circle defined by the positions of α , β , and δ in the search space.
Now, this paper shows the calculation steps of the basic Gray Wolf Optimization algorithm and the pseudo-code as follows (see Algorithm 1).
  • Initialize the number of wolf pack individuals ( P o p ), the number of iterations of the algorithm ( M a x I t e r ), the dimension of the solution space ( D i m ), and the boundary conditions ( u b , l b ) in the GWO algorithm;
  • Randomly generate the initial population of gray wolves in the solution space, then calculate the fitness value of each individual and initialize to Alpha, Beta, and Delta wolves according to the fitness function;
  • Calculate the distance between the wolf pack and the leading wolf according to Equation (5), and obtain the migration direction of the wolf pack according to Equation (6);
  • Re-update Alpha, Beta, and Delta wolves based on migrated wolf individuals;
  • If the number of iterations reaches the termination condition, output the optimal individual and end the algorithm; otherwise, update the parameter vector a and return to Step 3.
Algorithm 1 Pseudocode for GWO algorithm
[Algorithm 1 is provided as a pseudocode figure in the original article: Sensors 23 01867 i001]
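To make the update rules concrete, the following is a minimal Python sketch of the standard GWO loop described by Equations (1)–(7) and the steps above. The population size, iteration count, bounds, and the sphere fitness used in the usage example are illustrative assumptions, not values taken from this paper.

```python
import numpy as np

def gwo(fitness, dim, pop=30, max_iter=200, lb=-10.0, ub=10.0):
    """Minimal Gray Wolf Optimizer following Eqs. (1)-(7) and Algorithm 1."""
    X = np.random.uniform(lb, ub, (pop, dim))           # Step 2: random initial pack
    best_x, best_f = None, np.inf
    for t in range(max_iter):
        fit = np.array([fitness(x) for x in X])
        order = np.argsort(fit)
        alpha, beta, delta = X[order[0]].copy(), X[order[1]].copy(), X[order[2]].copy()
        if fit[order[0]] < best_f:                      # remember the best wolf seen so far
            best_x, best_f = alpha.copy(), fit[order[0]]
        a = 2.0 - 2.0 * t / max_iter                    # control parameter: 2 -> 0 linearly
        for i in range(pop):
            x_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = np.random.rand(dim), np.random.rand(dim)
                A, C = 2.0 * a * r1 - a, 2.0 * r2       # Eqs. (3) and (4)
                D = np.abs(C * leader - X[i])           # Eq. (5)
                x_new += leader - A * D                 # Eq. (6)
            X[i] = np.clip(x_new / 3.0, lb, ub)         # Eq. (7), kept inside the bounds
    return best_x, best_f

# Usage example: minimize the sphere function in 10 dimensions
best_x, best_f = gwo(lambda x: float(np.sum(x ** 2)), dim=10)
```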

3.2. EWB-GWO Algorithm

The GWO algorithm achieves optimization by leading the wolf pack toward the prey region during the iterative process, which gives it more efficient and faster convergence than other metaheuristic algorithms when dealing with high-dimensional complex optimization problems. However, as the complexity and dimensionality of the problem increase, the convergence speed and accuracy of the GWO algorithm decrease significantly. At the same time, the GWO algorithm still has the disadvantages of low population diversity and a tendency to fall into local optima when dealing with complex multimodal functions. Therefore, in this paper, the GWO algorithm is modified by using the Tent–Sine two-dimensional coupled chaotic mapping and combining three improvement strategies.

3.2.1. Tent–Sine Two-Dimensional Coupled Chaotic Mapping

The generation of the initial population is usually an important factor to be considered by the GWO algorithm in solving optimization problems. In general, a random approach is used to generate the initial population; however, this approach can easily result in an uneven distribution of the initial solutions and distributed aggregation, making the search capability of the algorithm poor. Chaotic mappings are used to generate chaotic sequences, and their characteristics of nonlinearity, insensitivity to initial values, and high randomness can help the GWO algorithm to generate initial solutions and further maintain the diversity of populations.
The Tent mapping produces a random sequence with a uniform probability distribution density and desirable data correlation properties, and its mathematical expression is shown in Equation (8).
$x_{k+1} = \begin{cases} \dfrac{x_k}{a}, & x_k \in [0, a) \\[4pt] \dfrac{1 - x_k}{1 - a}, & x_k \in [a, 1] \end{cases}$  (8)
When a = 0.5, the mapping is called the central Tent mapping, and the resulting sequence has a uniform distribution. However, according to Equation (8), when a = 0.5 the Tent mapping inevitably exhibits a short-period characteristic in the iterative sequence; for example, x_k = 0.2 leads to x_{k+1} = 0.4, x_{k+2} = 0.8, and x_{k+3} = 0.4, so the sequence becomes trapped in a short cycle. This is shown in Figure 4. In order to escape such small cycles and form a two-dimensional initial population that satisfies the mathematical model of path planning, this paper uses a two-dimensional coupled Sine mapping approach to improve the Tent chaotic mapping.
The Sine mapping is a single-peaked chaotic mapping with a value range between −1 and 1. The mathematical expression of the Sine mapping is shown in Equation (9).
$x_{k+1} = b \sin(\pi x_k), \quad b \in (0, 1]$  (9)
where b is the control parameter; the randomness of the resulting chaotic sequence is highest as b approaches 1. However, the sequence generated by a single Sine mapping has sinusoidal properties, resulting in a low equilibrium of the generated random solutions. In this paper, the two mapping methods are coupled, and the sequence generated by the Tent mapping is forced to escape its cycles by the small perturbation provided by the Sine mapping, as shown in Equations (10) and (11).
$x_{k+1} = \begin{cases} \lambda \dfrac{x_k}{a} + (1-\lambda)\sin(\pi y_k), & x_k \in [0, a) \\[4pt] \lambda \dfrac{1 - x_k}{1 - a} + (1-\lambda)\sin(\pi y_k), & x_k \in [a, 1] \end{cases}$  (10)
$y_{k+1} = \begin{cases} \lambda \dfrac{y_k}{a} + (1-\lambda)\sin(\pi x_{k+1}), & x_k \in [0, a) \\[4pt] \lambda \dfrac{1 - y_k}{1 - a} + (1-\lambda)\sin(\pi x_{k+1}), & x_k \in [a, 1] \end{cases}$  (11)
where λ is the system parameter and satisfies λ ( 0 , 1 ) . From Equations (10) and (11), it can be seen that the two-dimensional sequence x k + 1 and y k + 1 have a strong coupling property between them, and the two-dimensional sequence generated when a = 0.5 and λ tends to 1 has a good uniformity distribution, as shown in Figure 5.
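As an illustration, the following Python sketch generates a two-dimensional chaotic sequence in the spirit of Equations (10) and (11) and maps it onto grid coordinates for the initial wolf pack. The absolute value on the Sine term, the modulo-1 wrap that keeps the sequence in [0, 1], and the seed values are assumptions made to keep the sketch self-contained; they are not taken from the paper.

```python
import numpy as np

def tent_sine_2d(n, a=0.5, lam=0.9, x0=0.37, y0=0.71):
    """Generate n points of a 2D Tent-Sine coupled chaotic sequence (cf. Eqs. (10)-(11))."""
    xs, ys = np.empty(n), np.empty(n)
    x, y = x0, y0
    for k in range(n):
        # x-update: Tent term driven by x, Sine perturbation driven by y
        tent_x = x / a if x < a else (1.0 - x) / (1.0 - a)
        x = (lam * tent_x + (1.0 - lam) * abs(np.sin(np.pi * y))) % 1.0
        # y-update: Tent term driven by y, Sine perturbation driven by the new x
        tent_y = y / a if y < a else (1.0 - y) / (1.0 - a)
        y = (lam * tent_y + (1.0 - lam) * abs(np.sin(np.pi * x))) % 1.0
        xs[k], ys[k] = x, y
    return xs, ys

def chaotic_initial_nodes(n, width, height):
    """Map the chaotic sequence onto integer grid coordinates for the initial wolf pack."""
    xs, ys = tent_sine_2d(n)
    return np.column_stack((np.floor(xs * width), np.floor(ys * height))).astype(int)
```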

3.2.2. Adaptive Nonlinear Inertia Weights

Among metaheuristic algorithms, inertia weights regulate the balance between the global search and local exploitation abilities of the algorithm, which has a crucial impact on convergence speed. When the inertia weight is large, the global search ability of the algorithm is stronger and the search range is wider, which helps the algorithm jump out of local optima and improves the convergence speed. When the inertia weight is small, the algorithm can mine finely within the optimal region, improving its convergence accuracy. The linear inertia weights proposed by Shi and Eberhart in [30] are often used in metaheuristic algorithms due to their simple structure, ease of use, and good convergence behavior. However, inertia weights that change linearly with the number of iterations do not accurately reflect the highly nonlinear optimization process of the GWO algorithm, which is not conducive to global convergence and easily causes premature convergence.
Therefore, in this paper, nonlinear inertia weights with adaptive features are introduced in the gray wolf pack hunting process of the GWO algorithm. The performance balance between global search and local mining of GWO algorithm is optimized to accelerate the convergence speed of the algorithm and improve the convergence accuracy, as shown in Equations (12) and (13).
$W_1 = 1 - \left(\dfrac{t}{MaxIter}\right)^3, \qquad W_2 = \dfrac{Fit_{t,i}}{Fit_{t,i} + Fit_{t,g}}$  (12)
$W_{GWO} = W_1 \cdot W_2$  (13)
where W G W O denotes the inertia weight of the current gray wolf pack. F i t t , i denotes the fitness value of the i th gray wolf during the t th iteration. F i t t , g denotes the fitness value of the optimal gray wolf during the t th iteration. From Equations (12) and (13), it can be seen that the inertia weight W G W O of the gray wolf pack is expressed as the product of inertia parameters W 1 and W 2 . The value range of W 1 is between [0, 1] and will show a decreasing trend of deceleration with the increase in the number of iterations. Larger values of weights in the early iteration of the algorithm facilitate the GWO algorithm to quickly focus on the optimal region and accelerate the global convergence of the algorithm. Smaller weight values in the late iteration of the algorithm are beneficial to the deep mining of the algorithm in the optimal region, improving the convergence accuracy of the GWO algorithm. The inertia parameter W 2 is related to the fitness of individual gray wolves and is automatically adjusted based on the interpolation between the current individual and the optimal individual. The improved wolf hunting expression of GWO algorithm is shown in Equation (14).
$\vec{D}_\alpha = |\vec{C}_1 \cdot \vec{X}_\alpha - W_{GWO} \cdot \vec{X}|, \quad \vec{D}_\beta = |\vec{C}_2 \cdot \vec{X}_\beta - W_{GWO} \cdot \vec{X}|, \quad \vec{D}_\delta = |\vec{C}_3 \cdot \vec{X}_\delta - W_{GWO} \cdot \vec{X}|,$
$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha, \quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta, \quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta, \quad \vec{X}(t+1) = \dfrac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}$  (14)
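A small Python sketch of the adaptive nonlinear inertia weight and the weighted position update of Equation (14) follows. The cubic form of W1 matches Equation (12) as reconstructed above, and the small eps added to the denominator of W2 is an assumption to avoid division by zero; both should be checked against the original formulation.

```python
import numpy as np

def inertia_weight(t, max_iter, fit_i, fit_best, eps=1e-12):
    """Adaptive nonlinear inertia weight W_GWO = W1 * W2 (cf. Eqs. (12)-(13))."""
    w1 = 1.0 - (t / max_iter) ** 3                 # decays nonlinearly over the iterations
    w2 = fit_i / (fit_i + fit_best + eps)          # adapts to the gap from the best wolf
    return w1 * w2

def weighted_update(x_i, leaders, a, w_gwo):
    """Position update of Eq. (14): the wolf's own position is scaled by W_GWO."""
    x_new = np.zeros_like(x_i)
    for leader in leaders:                         # leaders = (alpha, beta, delta)
        r1, r2 = np.random.rand(*x_i.shape), np.random.rand(*x_i.shape)
        A, C = 2.0 * a * r1 - a, 2.0 * r2
        D = np.abs(C * leader - w_gwo * x_i)       # inertia-weighted distance term
        x_new += leader - A * D
    return x_new / 3.0
```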

3.2.3. Random Wandering Law of the Butterfly Optimization Algorithm

The Butterfly Optimization Algorithm (BOA) is a novel metaheuristic algorithm proposed by Sankalap Arora et al. in 2019, which is based on the foraging strategy of butterflies and uses their olfactory sensory organs to determine the optimal location of food. In the BOA, the scent pheromone is a key determinant of a butterfly's locomotion state. In [31], Sankalap Arora described the relationship between olfactory stimuli and the scent modality of individual butterflies by introducing the stimulus intensity ( I ), the perceptual modality ( c ), and the power exponent ( a ), as shown in Equation (15).
$f = c I^{a}$  (15)
where f denotes the intensity at which the scent emitted by the current individual butterfly can be perceived by other butterflies. I denotes the stimulus intensity of the scent produced by the current butterfly, i.e., the current butterfly’s adaptation. c denotes the sensory modality, i.e., simulates the effect of current natural environmental factors on the scent. a denotes the power exponential constant dependent on the scent.
During the iterative calculation of the BOA algorithm, the butterfly’s movement state is influenced by the intensity of its own scent and the intensity of the scent emitted by other individual butterflies. When an individual butterfly senses more scent emitted by another butterfly, it follows and approaches, and this phase is called the global search phase of the BOA algorithm, as shown in Equation (16).
$x_i^{t+1} = x_i^t + \left(r^2 \times g^* - x_i^t\right) \times f_i$  (16)
where $x_i^t$ denotes the i-th individual butterfly during the t-th iteration, $f_i$ denotes the intensity of the scent pheromone emitted by the i-th butterfly, $g^*$ denotes the position of the optimal butterfly individual in the current iteration, and r is a random number satisfying $r \in [0, 1]$. When a butterfly cannot perceive a scent stronger than its own, the individual butterfly adopts a random wandering strategy: influenced by other butterfly individuals in the current population, it moves randomly. This behavior is called the local search of the BOA algorithm, as shown in Equation (17).
$x_i^{t+1} = x_i^t + \left(r^2 \times x_z^t - x_k^t\right) \times f_i$  (17)
where x z t and x k t denote the z th and k th butterfly individuals of the solution space, respectively.
The GWO algorithm has the advantages of strong exploitation capacity, fast convergence, and a small memory footprint in the optimization process [25]. However, the iteration of the gray wolf population toward the optimal area is determined by the positions of the Alpha, Beta, and Delta wolves, with all other wolves guided to follow. This follow-based global search leads to a gradual decrease in population diversity and a weakening of the global search capability in the later iterations of the algorithm. Therefore, this paper further improves the local search ability of the GWO algorithm and enhances the population diversity in the late iterations by introducing the scent pheromone and the random wandering law of the BOA algorithm, so that, with a certain probability, the overall wolf population is guided by randomly chosen other wolves in the pack. The key pseudo-code of the random wandering law based on the BOA is given in Algorithm 2.
Algorithm 2 GWO algorithm based on the randomized wandering law
[Algorithm 2 is provided as a pseudocode figure in the original article: Sensors 23 01867 i002]
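The following Python sketch shows how a BOA-style wandering step based on Equations (15)–(17) can be mixed into the wolf update. The sensory modality c, the power exponent a_pow, and the probability p_global of taking the global move are illustrative values; the exact switching rule of Algorithm 2 is only available as a figure above, so this is an assumed reading rather than the paper's exact procedure.

```python
import numpy as np

def boa_wandering_step(X, i, best_x, fit, c=0.01, a_pow=0.1, p_global=0.8):
    """One BOA-style move for wolf i (cf. Eqs. (15)-(17)).

    With probability p_global the wolf moves toward the best individual
    (global phase, Eq. (16)); otherwise it wanders between two randomly
    chosen pack members (local phase, Eq. (17))."""
    f_i = c * (abs(fit[i]) ** a_pow)                        # scent intensity, Eq. (15)
    r = np.random.rand()
    if np.random.rand() < p_global:
        return X[i] + (r ** 2 * best_x - X[i]) * f_i        # Eq. (16)
    z, k = np.random.choice(len(X), 2, replace=False)
    return X[i] + (r ** 2 * X[z] - X[k]) * f_i              # Eq. (17)
```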

3.2.4. Opposition-Based Learning Method Based on the Elite Strategy

The opposition-based learning method is an intelligent computing approach that relies on the reverse-learning idea proposed by Tizhoosh in [32]. At present, this method has been widely used in population intelligence algorithms, such as the genetic algorithm [33], the particle swarm algorithm [34], the artificial bee colony algorithm [35], and the differential evolution algorithm [36], to improve their search performance. Qian et al. optimized the initial population generation strategy of the sparrow search algorithm by using an opposition-based learning strategy in [37]. Their experimental results show that opposition-based learning is very effective at improving population quality and the global search efficiency of the search algorithm.
This paper integrates an elite strategy with opposition-based learning. By selecting elites, the opposite wolf pack is merged with the current pack, and the best individuals are selected to enter the next iteration of the search. That is, while maintaining the diversity of the wolf pack to a certain extent and preventing the algorithm from falling into local optima, the method also fully absorbs the beneficial information of the elite individuals in the current pack to accelerate the global convergence of the algorithm. Suppose $x_i(t)$ and $x_i^*(t)$ are the current individual and the reverse individual during the t-th iteration, respectively, and let $Fit_{i,j}(t)$ and $Fit_{i,j}^*(t)$ be the corresponding values on the j-th dimension. In this paper, the m ($2 \le m \le N$) elite individuals are represented as shown in Equation (18).
$\{e_1(t), e_2(t), \ldots, e_m(t)\} \subseteq \{x_1(t), x_2(t), \ldots, x_N(t)\}$  (18)
Then, the reverse solution F is defined as shown in Equation (19).
$Fit_{i,j}^*(t) = \eta \left(a_j(t) + b_j(t)\right) - Fit_{i,j}(t), \qquad a_j(t) = \min\left(e_{1,j}(t), e_{2,j}(t), \ldots, e_{m,j}(t)\right), \qquad b_j(t) = \max\left(e_{1,j}(t), e_{2,j}(t), \ldots, e_{m,j}(t)\right)$  (19)
where $\eta$ is a random number satisfying $\eta \in (0, 1)$.
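A sketch of the elite opposition-based learning step is given below. Equation (19) is stated in terms of fitness values, but the sketch applies the same construction to wolf positions, which is the usual form of elite opposition-based learning; this substitution, as well as the greedy merge-and-select step shown in the comments, are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def elite_opposition(X, fit, m):
    """Elite opposition-based learning sketch (cf. Eqs. (18)-(19)).

    Reverse individuals are built inside the per-dimension bounding box
    spanned by the m elite wolves."""
    pop, dim = X.shape
    elite = X[np.argsort(fit)[:m]]                   # Eq. (18): the m best wolves
    a_j, b_j = elite.min(axis=0), elite.max(axis=0)  # per-dimension elite bounds
    eta = np.random.rand(pop, dim)                   # eta in (0, 1)
    return eta * (a_j + b_j) - X                     # Eq. (19) applied to positions

# Typical usage inside the main loop (fitness() assumed to exist):
# X_op = elite_opposition(X, fit, m=5)
# merged = np.vstack((X, X_op))
# merged_fit = np.array([fitness(x) for x in merged])
# X = merged[np.argsort(merged_fit)[:len(X)]]        # the best individuals survive
```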

3.2.5. EWB-GWO Algorithm Flow

In this paper, we combine the Tent–Sine two-dimensional chaotic mapping, the nonlinear adaptive inertia weights, the random wandering strategy based on the BOA algorithm, and the opposition-based learning method based on the elite strategy to remedy the shortcomings of the GWO algorithm when processing high-dimensional complex functions, namely slow convergence, low convergence accuracy, and a tendency to fall into local optima. The resulting algorithm is the EWB-GWO algorithm (see Algorithm 3), whose pseudo-code is shown below.
Algorithm 3 Pseudocode for EWB-GWO algorithm
[Algorithm 3 is provided as a pseudocode figure in the original article: Sensors 23 01867 i003]
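Since Algorithm 3 is only available as a figure, the following Python outline shows one plausible way the pieces fit together, reusing the helper functions from the sketches in Sections 3.2.1–3.2.4 above (tent_sine_2d, inertia_weight, weighted_update, boa_wandering_step, elite_opposition). The wandering probability p_wander and the elite count m_elite are assumptions; the LOS-based initialization used for path planning is described separately in Section 4.1.

```python
import numpy as np

def ewb_gwo(fitness, dim, pop=30, max_iter=200, lb=-10.0, ub=10.0,
            m_elite=5, p_wander=0.2):
    """Plausible EWB-GWO loop built from the sketches in Sections 3.2.1-3.2.4."""
    # Tent-Sine chaotic initialization mapped into [lb, ub] (Section 3.2.1)
    xs, ys = tent_sine_2d(pop * dim // 2 + 1)
    seq = np.concatenate([xs, ys])[:pop * dim]
    X = lb + (ub - lb) * seq.reshape(pop, dim)
    best_x, best_f = None, np.inf
    for t in range(max_iter):
        fit = np.array([fitness(x) for x in X])
        order = np.argsort(fit)
        alpha, beta, delta = X[order[0]].copy(), X[order[1]].copy(), X[order[2]].copy()
        if fit[order[0]] < best_f:
            best_x, best_f = alpha.copy(), fit[order[0]]
        a = 2.0 - 2.0 * t / max_iter
        for i in range(pop):
            if np.random.rand() < p_wander:            # BOA random wandering (Section 3.2.3)
                X[i] = boa_wandering_step(X, i, alpha, fit)
            else:                                      # weighted leader update (Section 3.2.2)
                w = inertia_weight(t, max_iter, fit[i], fit[order[0]])
                X[i] = weighted_update(X[i], (alpha, beta, delta), a, w)
        X = np.clip(X, lb, ub)
        # Elite opposition-based learning and greedy selection (Section 3.2.4)
        fit_new = np.array([fitness(x) for x in X])
        X_op = np.clip(elite_opposition(X, fit_new, m_elite), lb, ub)
        merged = np.vstack((X, X_op))
        merged_fit = np.concatenate([fit_new, np.array([fitness(x) for x in X_op])])
        X = merged[np.argsort(merged_fit)[:pop]]
    return best_x, best_f
```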

4. Path Planning Design Based on EWB-GWO Algorithm

For the path planning technology of mobile robots performing unmanned delivery tasks, the path length metrics, obstacle avoidance capability, and the operating speed of the algorithm need to be comprehensively considered [38]. The path planning algorithm needs to generate reliable paths under the condition of ensuring the safety and timeliness of the mobile robot. Therefore, this paper will use the EWB-GWO algorithm and combine it with a line-of-sight (LOS)-based initial population strategy to implement an unmanned delivery task based on mobile robots. The flow of the path planning strategy used in this paper is shown in Figure 6 below.

4.1. Initial Population Generation Strategy Based on LOS Algorithm

The generation quality of the initial gray wolf population has an important impact on the convergence speed and computational accuracy of the EWB-GWO algorithm in dealing with the path planning problem. The use of random sampling of path interruption points leads to the generation of a large number of infeasible solutions, which reduces the population diversity of the algorithm and increases the computational pressure of the EWB-GWO algorithm.
In this paper, we propose an initial population generation strategy based on LOS detection, which further reduces the number of path nodes while producing a diverse set of feasible solutions, ensuring the quality and effectiveness of the initial population. LOS detection is usually performed with Bresenham's line algorithm, which originates from computer raster graphics and enumerates the individual pixels that lie between two pixel nodes. In a discrete 2D grid map, LOS detection determines whether two nodes are directly reachable without obstruction. The specific steps of the initial population generation strategy proposed in this paper are shown below, followed by a sketch of the LOS check.
  • The workspace of the mobile robot is discretized into an equal-area grid. The size of each grid cell is set to 1.2 to 1.5 times the footprint of the mobile robot, as shown in Figure 7;
  • The feasible path nodes are generated by Tent–Sine 2D chaotic mapping, and the LOS reachability between all generated nodes and their parents and target nodes is detected by the LOS algorithm;
  • If LOS is reached between the current node and its parent and target nodes simultaneously, the sub-individual is output and the next individual is generated. If the current node has LOS reachable only to the parent node, the current node is kept as the parent node and the path node continues to be generated. If there is no reachability between the current node and the parent node, the current node is directly discarded, as shown in Figure 8;
  • Iterate until every wolf in the initial population corresponds to a feasible path, then end the procedure.
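The LOS check referred to in steps 2 and 3 can be implemented with the classic Bresenham traversal, as in the following sketch. The grid encoding (0 for free, 1 for obstacle) and the (x, y) ordering are assumptions for illustration.

```python
def line_of_sight(grid, p0, p1):
    """Return True if the straight segment p0 -> p1 crosses only free cells.

    grid[y][x] == 0 means free and 1 means obstacle; p0 and p1 are (x, y)
    integer grid coordinates. Bresenham's line algorithm enumerates the
    cells that the segment passes through."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        if grid[y][x] == 1:              # the ray hits an obstacle cell
            return False
        if (x, y) == (x1, y1):           # reached the target without collision
            return True
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy

# Example: a 3x3 map with a single obstacle in the centre
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(line_of_sight(grid, (0, 0), (2, 2)))   # False: the diagonal passes through the obstacle
print(line_of_sight(grid, (0, 0), (2, 0)))   # True: the top row is free
```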

4.2. Mathematical Model Based on EWB-GWO Path Planning

We denote the starting point of the mobile robot as $x_{start}$, the target point as $x_{end}$, and the intermediate path nodes as $x_i$. Then, the path corresponding to gray wolf individual i is expressed as shown in Equation (20).
$path_i = \{x_{start}, x_1, x_2, x_3, \ldots, x_k, \ldots, x_n \mid x_k \in P_{free}\}, \qquad n \in [0, \max(length_x, length_y)]$  (20)
where P f r e e denotes the barrier-free area where the mobile robot can travel. max ( l e n g t h x , l e n g t h y ) indicates the maximum number of grids occupied by the x-axis and y-axis of the working interval of the mobile robot.
During the iterative computation of the EWB-GWO algorithm, the quality of the generated paths is judged by the fitness assessment of the path results represented by the wolf pack individuals through the fitness function. In this paper, the fitness function is established by using the Euclidean distance formula, as shown in Equation (21).
$Fit(path_i) = \sqrt{(x_{start}.x - x_1.x)^2 + (x_{start}.y - x_1.y)^2} + \sum_{k=1}^{n-1}\sqrt{(x_{k+1}.x - x_k.x)^2 + (x_{k+1}.y - x_k.y)^2} + \sqrt{(x_n.x - x_{end}.x)^2 + (x_n.y - x_{end}.y)^2}$  (21)
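For reference, the fitness of Equation (21) is simply the total Euclidean length of the node sequence, as in the following sketch; the list-of-tuples path representation is an assumption for illustration.

```python
import math

def path_fitness(path):
    """Total Euclidean length of a path given as a list of (x, y) nodes,
    including the start and goal points (cf. Eq. (21))."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(path, path[1:]))

# Example: start (0, 0), one intermediate node (3, 4), goal (6, 8)
print(path_fitness([(0, 0), (3, 4), (6, 8)]))   # 10.0
```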
The complete search path of the unmanned delivery robot based on the EWB-GWO algorithm is shown in Figure 9.

5. Algorithm Simulation and Experimental Design

In the algorithm simulation and experimental design section, 24 CEC benchmark functions drawn from different articles are selected to verify the actual performance of the EWB-GWO algorithm proposed in this paper. These benchmark functions can be found in [23,31,38] and are listed in Table 1, which contains the name, expression, variable dimension, and theoretical minimum of each function. All experiments in this paper were conducted on a computer with an Intel Core i5-1135G7 processor using Matlab 2018b. The experiments and methods used in this paper are described below.
  • In order to verify the improvement strategies proposed in this paper, as well as the actual optimization ability of the improved algorithms, benchmark function tests are performed for the GWO algorithm, the EWB-GWO algorithm, and its three sub-strategy algorithms. The three sub-strategy algorithms are the GWO algorithm based on the nonlinear adaptive inertia weighting strategy (W-GWO), the GWO algorithm based on the BOA random wandering strategy (B-GWO), and the GWO algorithm based on the opposition-based learning method with the elite strategy (E-GWO). Three unimodal functions, three multimodal functions, and six multimodal functions with fixed dimensionality are selected for testing with dimensionality (Dim) 50 and 100 to verify the effectiveness of the improvement strategies proposed in this paper;
  • In this paper, the proposed EWB-GWO algorithm is tested against the extended algorithm that has been optimized and improved for the GWO algorithm in recent years under benchmark functions. In the comparison experiments, the I-GWO algorithm, GWO-CS algorithm, PSO-GWO algorithm, and the original GWO algorithm, which have better support for the benchmark function test, are selected to participate in the tests in order to analyze the superiority of the optimization method of the proposed GWO algorithm over the improved algorithms of the same type;
  • In this paper, four metaheuristic algorithms, namely the PSO algorithm, genetic algorithm (GA), artificial bee colony (ABC) algorithm, and butterfly optimization algorithm (BOA), which are often used to solve complex optimization problems, are selected for cross-sectional comparison experiments with our proposed EWB-GWO algorithm. Additionally, we further analyze the performance of the improved algorithm proposed in this paper compared with the mainstream intelligent optimization algorithms;
  • This paper conducts mobile robot path planning experiments based on the Moving AI Lab public map dataset. This experiment will use different sizes of map data to simulate the real environment. In this experiment, the EWB-GWO algorithm will be compared with the A* and Jump Point Search (JPS) algorithms based on graph search technique, RRT algorithm based on probabilistic sampling, BOA algorithm based on metaheuristic technique, and the Theta* algorithm based on an any-angle path planning strategy.

5.1. Benchmark Function Test of EWB-GWO Algorithm and Its Sub-Algorithms

In this experiment, three unimodal functions ( f 1 , f 3 , f 8 ), three multimodal functions ( f 10 , f 12 , f 14 ), and six fixed-dimension multimodal functions ( f 18 , f 19 , f 20 , f 21 , f 23 , f 24 ) are randomly selected from Table 1 to compare the GWO algorithm, the EWB-GWO algorithm, and the three improved sub-algorithms in Dim = 50 and Dim = 100 dimensions, respectively. The initial number of wolves in the experimental design is set to 300, and the maximum number of iterations is set to 1000. The experimental results are shown in Figure 10, Figure 11 and Figure 12. In order to better demonstrate the variability in convergence speed between the algorithms, the experimental images are enlarged to different degrees in this paper.
It is clear from the three different sets of convergence tests that the EWB-GWO algorithm and the W-GWO algorithm converge the fastest in the tests of the f 3 and f 8 unimodal functions, far outperforming the other improved algorithms and the original GWO algorithm, and they still lead when the dimensionality rises to Dim = 100. In the test of the f 1 function, the GWO algorithm and the E-GWO algorithm converge faster; however, the gap in convergence speed gradually narrows as the dimensionality increases from Dim = 50 to Dim = 100, and the shortcomings of the GWO algorithm in dealing with high-dimensional problems are gradually magnified. For the multimodal functions, the convergence speed advantage of the EWB-GWO and W-GWO algorithms becomes more obvious as the complexity of the benchmark function increases. In the tests of the f 10 , f 12 , and f 14 functions, the EWB-GWO and W-GWO algorithms add adaptive nonlinear inertia weights to the global search process to speed up convergence, so the algorithms can quickly converge to the global optimum. During the testing of the six fixed-dimension multimodal functions, the EWB-GWO algorithm converges fastest, together with the W-GWO algorithm, on the f 18 , f 21 , and f 23 functions. However, it is undeniable that the EWB-GWO algorithm performs poorly on the f 19 and f 20 functions, while there is little difference in the convergence speed of the algorithms on the f 24 function. From the overall results of the 18 groups of experiments, the nonlinear adaptive weighting strategy accelerates the global convergence of the algorithm to a certain extent, so the algorithm can quickly focus on the optimal region for exploitation, ensuring its search efficiency. The optimal (Best), average (Avg), and standard deviation (Std) of the fitness over 30 runs for all the benchmark functions designed in this experiment are shown in Table 2, Table 3 and Table 4, from which the search accuracy of the algorithms is further analyzed.
From Table 2 and Table 3, it can be seen that the convergence accuracy is relatively good during the testing of the unimodal functions ( f 1 , f 3 , f 8 ). Among them, the E-GWO algorithm has the highest convergence accuracy during the testing of the f 1 function, and the GWO algorithm, B-GWO algorithm, and EWB-GWO algorithm have a more similar convergence accuracy. In the f 3 and f 8 function tests, the EWB-GWO algorithm, on the other hand, shows excellent convergence accuracy and is closer to the theoretical optimum than other algorithms in the same dimensional conditions. The E-GWO algorithm and the EWB-GWO algorithm have the best convergence accuracy in the test of the multimodal function ( f 10 , f 12 , f 14 ), which basically coincides with the theoretical best value. The convergence accuracy of the original GWO algorithm and the B-GWO algorithm is relatively low, and the depth mining ability for the optimal region is insufficient. From Table 4, the difference of the five algorithms for the multimodal functions with fixed dimensions in terms of their ability to find the best result is small and consistent with the theoretical optimum.
From the results of this experiment, it is observed that the nonlinear adaptive inertia weighting strategy proposed in this paper makes the W-GWO algorithm and the EWB-GWO algorithm perform well in global convergence speed, far outperforming the other improvement strategies in the tests of unimodal functions, multimodal functions, and multimodal functions with fixed dimensions.

5.2. Comparison of Benchmark Functions of Other Improved GWO Algorithms of EWB-GWO Algorithm

In this experiment, we select the benchmark functions from Table 1 that were not involved in the test of Experiment 5.1, namely five unimodal functions ( f 2 , f 4 , f 5 , f 6 , f 7 ), five multimodal functions ( f 9 , f 11 , f 13 , f 15 , f 16 ), and two fixed-dimension multimodal functions ( f 17 , f 22 ), for a cross-sectional comparison among GWO-improved algorithms. Since the GWO algorithm already performs well on multimodal functions of fixed dimensions, only a few such functions are selected for testing in this experiment to avoid repeating the conclusions. Following the principle of fair experimental design, this experiment uses dimension Dim = 50, an initial wolf pack size of 300, and a maximum of 1000 iterations. The cross-sectional comparison is carried out with the GWO, I-GWO, GWO-CS, PSO-GWO, and EWB-GWO algorithms. The experimental results are shown in Figure 13.
From the convergence curves of each GWO-improved algorithm shown in Figure 13, it can be seen that the GWO-CS algorithm and the EWB-GWO algorithm converge faster compared with the other algorithms. The GWO-CS algorithm converges to the optimum first in the tests of f 2 , f 5 , f 6 , f 7 , and f 9 functions. The GWO-CS algorithm benefits from the global search method of the Cuckoo Search algorithm by a Lévy flight update of the nest location, which further optimizes the global search performance of the GWO algorithm and further improves the population superiority during the iteration of the algorithm. From the experimental process, it can be found that the initial population quality of the GWO-CS algorithm, as well as the quality of individuals during the population iteration, exceeds other algorithms. The proposed EWB-GWO algorithm converges fastest in the tests of f 11 , f 13 , and f 16 functions, and the convergence speed of f 4 and f 15 functions is basically the same as that of the GWO-CS algorithm. The convergence speed of the PSO-GWO algorithm is the slowest during the experiment, while the convergence speed of the I-GWO algorithm is not much different from the original GWO algorithm; moreover, even the convergence speed is slower than the original GWO algorithm in the f 5 , f 7 , and f 9 function tests. For the optimization of the global convergence speed, the I-GWO algorithm and the PSO-GWO algorithm are not as effective. In this paper, the convergence results of each algorithm during the experiments are shown in Table 5 and are compared and illustrated in terms of the mean (Avg), standard deviation (Std), optimal value (Best), and running time (Time) under the same parameter settings.
From the experimental results in Table 5, the EWB-GWO algorithm obtains the convergence results closest to the theoretical optimum for the f 4 , f 5 , f 7 , f 11 , f 13 , and f 15 functions in the comparative tests of the five algorithms, and it takes the shortest computation time among the four improved GWO algorithms. The EWB-GWO algorithm produces excellent results on the multimodal functions and the fixed-dimension multimodal functions; except for the f 9 function, all the tested functions basically reach the theoretical optimal value. The convergence accuracy of the PSO-GWO algorithm is less satisfactory, and the standard deviation of its test results shows that its solutions fluctuate greatly, meaning the algorithm is not stable. In terms of convergence speed and convergence accuracy on high-dimensional complex functions, the EWB-GWO algorithm is highly competitive among GWO variants of the same type.

5.3. Comparison of EWB-GWO Algorithm with Other Metaheuristic Algorithms

In this experiment, the GWO, B-GWO, W-GWO, E-GWO, EWB-GWO, PSO, GA, BOA, and ABC algorithms are selected for cross-sectional comparison tests on all the benchmark functions used in this paper. The unimodal and multimodal functions are tested in the same dimension, i.e., Dim = 50, and the test results are the mean (Avg) and standard deviation (Std) of 30 experiments. The parameter settings of the comparison algorithms involved in the experiments are summarized in Table 6, and the articles from which the parameters are taken are marked.
The parameters about the improved GWO algorithm have been explained in the paper. The convergence curves of some experiments are shown in Figure 14. In order to better demonstrate the variability of convergence speed between the algorithms, the experimental images are enlarged to different degrees in this paper.
It can be seen from the convergence results that the global convergence speed and the ability to jump out of local optima of the EWB-GWO algorithm are better than those of other metaheuristics of the same type. Except for f 17 , the EWB-GWO algorithm converges to the theoretical value faster than the other eight algorithms. From the testing of f 5 , f 6 , f 11 , f 13 , f 18 , and f 19 , it can be seen that the GA and ABC algorithms suffer from local-optimum stagnation to different degrees, which traps them in local solutions from which they cannot escape, so their accuracy is low. In particular, the GA fails to converge in the tests of the complex multimodal functions f 13 , f 18 , and f 19 . The convergence of the BOA and PSO algorithms is more stable, and they converge quickly to the theoretical optimum on the unimodal functions; however, their convergence speed is still unsatisfactory on the multimodal complex functions f 11 , f 17 , and f 19 . From the test of the high-dimensional function f 14 , it is not difficult to see that the EWB-GWO and W-GWO algorithms benefit from the nonlinear adaptive weighting strategy, which enhances the global convergence speed when dealing with high-dimensional complex functions, so the algorithms can quickly focus on the optimal interval. B-GWO improves the diversity of the population to some extent because it incorporates the local random wandering law into the iterative process; however, this comes at the expense of global convergence speed. The convergence results of each algorithm are shown in Table 7.
From the convergence results of each algorithm, the test values of the EWB-GWO algorithm on all benchmark functions, except f 7, f 11, f 16, and f 22, are close to the theoretical optimum, and its convergence accuracy is much higher than that of the other algorithms. Thanks to the opposition-based learning method based on the elite strategy, the E-GWO algorithm continuously optimizes the population quality by retaining elites during the iterative computation, which further improves its convergence accuracy. The convergence accuracy of the E-GWO algorithm exceeds that of the EWB-GWO algorithm on the test functions f 7, f 11, and f 16, and it also converges very well on the remaining functions. For the GA and ABC algorithms, the gap between the convergence results and the theoretical optimum is large on all high-dimensional functions, and in the tests of the high-dimensional functions f 1 to f 17 both algorithms fall, to different degrees, into local optima from which they cannot escape. In contrast, PSO and BOA show greater volatility in the convergence tests of the high-dimensional functions.
The EWB-GWO algorithm converges to the theoretical optimal value on most of the tested functions. The random wandering law of the BOA introduced in this paper enhances, to some extent, the population diversity of the EWB-GWO algorithm during the iterative computation and reduces the possibility of the algorithm falling into a local optimum. Meanwhile, the opposition-based learning method based on the elite strategy proposed in this paper further improves the quality of the elite individuals during population turnover and strengthens the local mining ability of the EWB-GWO algorithm. Finally, the balance between the global exploration and local mining abilities of the EWB-GWO algorithm is optimized by the adaptive nonlinear inertia weight strategy. We rank the algorithms involved in the experiment according to the Wilcoxon test and give their tied ranks, as shown in Table 8. The p-value of the Friedman test is also used to further assess the similarity between the metaheuristic algorithms, as shown in Table 9. The similarity analysis method can be found in article [22].
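As an illustration of how entries such as those in Tables 8 and 9 can be obtained, the sketch below computes tied (average) ranks and a Friedman p-value from the per-run results of several algorithms on one benchmark function. The data here are random placeholders, and the snippet relies on SciPy's rankdata and friedmanchisquare; it is an assumed workflow, not the authors' analysis script.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

rng = np.random.default_rng(0)
# rows = 30 independent runs, columns = 9 algorithms (illustrative random data only)
results = rng.normal(loc=[0.10, 0.15, 0.20, 0.25, 0.05, 0.40, 0.50, 0.60, 0.70],
                     scale=0.05, size=(30, 9))

# tied (average) ranks of each algorithm's mean result; lower rank = better for minimization
mean_ranks = rankdata(results.mean(axis=0), method="average")

# Friedman test across the 9 algorithms over the 30 runs
stat, p_value = friedmanchisquare(*(results[:, j] for j in range(results.shape[1])))
print("tied ranks:", mean_ranks, " p-value:", round(p_value, 4))
```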
As can be seen in Table 8, the EWB-GWO algorithm obtained the best rank in 19 out of the 24 sets of benchmark function tests, showing the best convergence accuracy. The E-GWO and B-GWO algorithms are the second and third most effective, followed by W-GWO, GWO, BOA, PSO, ABC, and GA, respectively. As can be seen in Table 9, 13 groups of experiments have p-values below 0.05 in the tests of the high-dimensional unimodal and multimodal functions ( f 1 ~ f 16 ), so in most of the high-dimensional function tests there is significant variability among the metaheuristics involved in the cross-sectional comparison. However, the p-values of the fixed-dimension multimodal functions ( f 17 ~ f 24 ) indicate less variability among the algorithms.
The rank classification and the Friedman test support the results and analysis of this experiment: the improved strategies proposed in this paper give the EWB-GWO algorithm excellent global convergence speed and local convergence accuracy on high-dimensional complex functions, making it highly competitive among metaheuristics of the same type.
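To make the elite strategy discussed above concrete, the snippet below gives a generic sketch of elite opposition-based learning in the spirit of [32,35]. It is not the exact update used by EWB-GWO (which is defined earlier in this paper); the elite fraction, the per-element random coefficient, and the boundary handling are illustrative assumptions.

```python
import numpy as np

def elite_opposition(population, fitness, elite_ratio=0.1, bounds=(-10.0, 10.0), rng=None):
    # generate opposite candidates of the population inside the dynamic bounds
    # spanned by the elite (best) individuals; assumes a minimization problem
    if rng is None:
        rng = np.random.default_rng()
    pop = np.asarray(population, dtype=float)
    n_elite = max(1, int(elite_ratio * len(pop)))
    elite = pop[np.argsort(fitness)[:n_elite]]          # best-ranked individuals
    da, db = elite.min(axis=0), elite.max(axis=0)       # per-dimension elite bounds
    k = rng.random(pop.shape)                           # random coefficient in (0, 1)
    opposite = k * (da + db) - pop                      # elite opposition point
    return np.clip(opposite, bounds[0], bounds[1])      # keep inside the search space

# usage: propose opposite candidates; a caller would keep whichever of
# (original, opposite) has the better fitness
pop = np.random.default_rng(1).uniform(-10, 10, size=(20, 5))
fit = [float(np.sum(x ** 2)) for x in pop]
opposite_pop = elite_opposition(pop, fit)
```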

5.4. EWB-GWO Algorithm-Based Path Planning Experiment for Mobile Robot

In the experiments on mobile robot path planning with the EWB-GWO algorithm, six game maps (simulating complex field environments) and six city maps from the public map dataset provided by Moving AI Lab (https://movingai.com/, accessed on 10 January 2023) were used for the algorithm comparison tests. The information of the selected maps is organized in Table 10. For the comparison, the A* and Jump Point Search (JPS) algorithms based on the graph search technique, the RRT algorithm based on probabilistic sampling, the BOA based on the metaheuristic technique, and the Theta* algorithm based on the any-angle path planning strategy were selected to conduct cross-sectional comparison experiments among different techniques. For fairness of the experimental design, the heuristic functions of the A*, Theta*, and JPS algorithms used in the path planning experiments adopt the Euclidean distance formula. The step size of the RRT algorithm was set to one grid length, and the same Bresenham's line algorithm as in the EWB-GWO algorithm was used for collision detection. The experimental results are reported in terms of the algorithm's running time (time complexity), path length, memory occupied during operation (space complexity), number of path nodes, and path smoothness, where path smoothness is defined as the sum of the turning angles along the path. The experimental procedure and results are shown in Figure 15.
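As an illustration of the collision test mentioned above, the following is a minimal sketch of a Bresenham-based line-of-sight (LOS) check on a binary grid map, where grid[y][x] == 1 is assumed to mark an obstacle cell. It shows the idea of rasterizing the segment between two waypoints and rejecting it if any traversed cell is occupied; it is not the authors' implementation, and details such as corner cutting are not handled.

```python
def line_of_sight(grid, start, goal):
    # walk the Bresenham line from start to goal; return False if it crosses an obstacle
    (x0, y0), (x1, y1) = start, goal
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err = dx - dy
    x, y = x0, y0
    while True:
        if grid[y][x] == 1:            # traversed cell is blocked
            return False
        if (x, y) == (x1, y1):         # reached the goal cell without collision
            return True
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy

# usage: can the two waypoints be connected by a straight, collision-free segment?
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(line_of_sight(grid, (0, 0), (3, 2)))   # False: the segment crosses the obstacle at (1, 1)
```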
The results of the path planning tests in Figure 15 show that all six algorithms can accomplish path finding for the mobile robot in different environments and on maps of different complexity. The lengths of the paths generated by the graph-search-based A* and JPS algorithms are basically the same, and their routes show similarity or symmetry. Since the JPS algorithm adopts a jump search method that significantly reduces the number of nodes in the search process, it has an advantage over the A* algorithm in terms of time complexity; for the same reason, the JPS algorithm also has lower memory usage than the A* algorithm. However, it is undeniable that the paths generated by the A* and JPS algorithms are only the shortest paths under the discrete grid map. Because the workspace is gridded, both algorithms are limited in their search directions during path finding, so the resulting paths are not optimal in practice. The Theta* algorithm, based on any-angle path planning technology, breaks this limitation: it is no longer limited by the search direction, and the obtained paths are closer to the actual optimum. However, the large number of LOS reachability verifications and angle-value transfer calculations leads to the low operational efficiency of the Theta* algorithm. The RRT algorithm based on the probabilistic sampling technique has the advantages of minimal memory usage, a simple structure, and flexible parameter tuning; however, finding paths by probabilistic sampling makes both the path finding time and the quality of the generated path uncontrollable, and in general the quality of the paths generated by the RRT algorithm is poor. The BOA and the EWB-GWO algorithm, both based on metaheuristic techniques, used the initial population generation strategy based on Bresenham's line algorithm proposed in this paper during the experiment. From the experimental figure, it can be seen that the number of path nodes of both algorithms reaches the minimum, which substantially reduces the computational difficulty and complexity of the algorithms. At the same time, the two metaheuristics are not limited by the search direction, which gives them an advantage over the graph search and probabilistic sampling techniques in terms of path quality. The data of this path planning experiment are presented in Table 11 and Table 12.
In this experiment, path length, path smoothness (total path turning angle), computation time (time complexity), memory consumption (space complexity), and the number of nodes contained in the generated path are used as comparison indicators. In terms of path length, the Theta* algorithm is the strongest competitor to the metaheuristic-based BOA and EWB-GWO algorithms, and the path lengths generated by the three algorithms are relatively similar in most experiments; the EWB-GWO algorithm achieves the optimal path length in eight sets of experiments, while the Theta* algorithm outperforms it in the remaining four sets. In terms of the path smoothness metric, the metaheuristic algorithms also have an advantage in the total turning angle of the generated paths because they produce the fewest path nodes among the six algorithms. In terms of time complexity and space complexity, the EWB-GWO algorithm is at a medium level.
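For reference, the two path-quality metrics used above can be computed from a list of waypoints as sketched below. The helpers are hypothetical (not taken from the paper): path length is the sum of Euclidean segment lengths, and smoothness is taken here as the sum of absolute heading changes at the intermediate nodes, in degrees.

```python
import math

def path_length(nodes):
    # sum of Euclidean distances between consecutive waypoints
    return sum(math.dist(a, b) for a, b in zip(nodes, nodes[1:]))

def total_turning_angle(nodes):
    # sum of absolute heading changes at intermediate waypoints, in degrees
    total = 0.0
    for prev, cur, nxt in zip(nodes, nodes[1:], nodes[2:]):
        h1 = math.atan2(cur[1] - prev[1], cur[0] - prev[0])
        h2 = math.atan2(nxt[1] - cur[1], nxt[0] - cur[0])
        diff = abs(h2 - h1)
        total += min(diff, 2 * math.pi - diff)   # wrap the difference into [0, pi]
    return math.degrees(total)

path = [(0, 0), (3, 0), (3, 4), (6, 4)]
print(path_length(path), total_turning_angle(path))   # 10.0 180.0
```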
From the overall experimental results, the excellent convergence accuracy and convergence speed of the EWB-GWO algorithm give it an advantage in terms of path length and total path turning angle. It is undeniable that the EWB-GWO algorithm requires a longer running time than the JPS algorithm; however, its time complexity and space complexity are lower than those of the Theta* algorithm, and the quality of the paths it generates is better.

6. Conclusions

For mobile robots performing unmanned delivery tasks, autonomous navigation is a decisive factor in the success of the delivery task. Path planning based on the autonomous decision making of mobile robots has therefore received much attention as a key technology for robot navigation. Under the condition of ensuring the safe operation of the mobile robot, a shorter and smoother path will significantly reduce the robot's execution time and improve delivery efficiency. Based on the above objectives, this paper implements a path planning design for unmanned delivery robots through an improved GWO algorithm to accomplish delivery tasks in complex urban and suburban environments while ensuring the safety of the mobile robot. The path planning experiments also show that the stability of the path solutions generated by the EWB-GWO algorithm is not yet high, and the time complexity and space complexity of the algorithm need to be further reduced. In future work, we will investigate dynamic obstacles in the workspace of mobile robots to further improve their autonomous decision making.

Author Contributions

Data curation, Q.Q.; Investigation, Q.Q. and Y.Z.; Resources, Y.L. and Y.Z.; Software, Q.Q. and Z.H.; Supervision, Y.Z.; Writing—original draft, Z.H.; Writing—review & editing, Q.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Research Project of China Disabled Persons’ Federation—on assistive technology: 2022CDPFAT-01.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. This data can be found here: [https://movingai.com/, accessed on 10 January 2023].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tisdale, J.; Kim, Z.; Hedrick, J.K. Autonomous UAV path planning and estimation. IEEE Robot. Autom. Mag. 2009, 16, 35–42. [Google Scholar] [CrossRef]
  2. Panda, M.; Das, B.; Pati, B.B. Grey wolf optimization for global path planning of autonomous underwater vehicle. In Proceedings of the Third International Conference on Advanced Informatics for Computing Research, Gurugram, India, 18–19 December 2019; pp. 1–6. [Google Scholar]
  3. Cho, J.-H.; Kim, Y.-T. Design of PID controller for magnetic levitation RGV using genetic algorithm based on clonal selection. J. Korean Inst. Intell. Syst. 2012, 22, 239–245. [Google Scholar]
  4. Wang, C.; Mao, J. Summary of agv path planning. In Proceedings of the 2019 3rd International Conference on Electronic Information Technology and Computer Engineering (EITCE), Xiamen, China, 18–20 October 2019; pp. 332–335. [Google Scholar]
  5. Zhao, J.; Gao, J.; Zhao, F.; Liu, Y. A search-and-rescue robot system for remotely sensing the underground coal mine environment. Sensors 2017, 17, 2426. [Google Scholar] [CrossRef] [PubMed]
  6. Nouri, H.E.; Driss, O.B.; Ghédira, K. Hybrid metaheuristics for scheduling of machines and transport robots in job shop environment. Appl. Intell. 2016, 45, 808–828. [Google Scholar] [CrossRef]
  7. Kim, M.; Kim, S.; Park, S.; Choi, M.-T.; Kim, M.; Gomaa, H. Service robot for the elderly. IEEE Robot. Autom. Mag. 2009, 16, 34–45. [Google Scholar] [CrossRef]
  8. Kang, H.; Li, H.; Zhang, J.; Lu, X.; Benes, B. Flycam: Multitouch gesture controlled drone gimbal photography. IEEE Robot. Autom. Lett. 2018, 3, 3717–3724. [Google Scholar] [CrossRef]
  9. Li, G.; Liao, X.; Huang, H.; Song, S.; Liu, B.; Zeng, Y. Robust stereo visual slam for dynamic environments with moving object. IEEE Access 2021, 9, 32310–32320. [Google Scholar] [CrossRef]
  10. Yang, L.; Qi, J.; Xiao, J.; Yong, X. A literature review of UAV 3D path planning. In Proceedings of the 11th World Congress on Intelligent Control and Automation, Shenyang, China, 29 June–4 July 2014; pp. 2376–2381. [Google Scholar]
  11. Lüttgens, L.; Jurgelucks, B.; Wernsing, H.; Roy, S.; Büskens, C.; Flaßkamp, K. Autonomous navigation of ships by combining optimal trajectory planning with informed graph search. Math. Comput. Model. Dyn. Syst. 2022, 28, 1–27. [Google Scholar] [CrossRef]
  12. Algfoor, Z.A.; Sunar, M.S.; Abdullah, A. A new weighted pathfinding algorithms to reduce the search time on grid maps. Expert Syst. Appl. 2017, 71, 319–331. [Google Scholar] [CrossRef]
  13. Luo, Y.; Lu, J.; Qin, Q.; Liu, Y. Improved JPS Path Optimization for Mobile Robots Based on Angle-Propagation Theta* Algorithm. Algorithms 2022, 15, 198. [Google Scholar] [CrossRef]
  14. Gao, Z.-z.; Wan, L.-j.; Cai, M.; Xu, X.-y. Route Planning Based on Grid-Point Optimization Lazy Theta* Algorithm. In Proceedings of the 2022 10th China Conference on Command and Control, Beijing, China, 7–9 July 2022; pp. 443–454. [Google Scholar]
  15. Nash, A.; Daniel, K.; Koenig, S.; Felner, A. Theta^*: Any-angle path planning on grids. Proc. AAAI 2007, 7, 1177–1183. [Google Scholar]
  16. Meng, B.-b. UAV path planning based on bidirectional sparse A* search algorithm. In Proceedings of the 2010 International Conference on Intelligent Computation Technology and Automation, Changsha, China, 11–12 May 2010; pp. 1106–1109. [Google Scholar]
  17. De Filippis, L.; Guglieri, G.; Quagliotti, F. Path planning strategies for UAVS in 3D environments. J. Intell. Robot. Syst. 2012, 65, 247–264. [Google Scholar] [CrossRef]
  18. Jaillet, L.; Cortes, J.; Simeon, T. Transition-based RRT for path planning in continuous cost spaces. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 2145–2150. [Google Scholar]
  19. Li, Q.; Xu, Y.; Bu, S.; Yang, J. Smart Vehicle Path Planning Based on Modified PRM Algorithm. Sensors 2022, 22, 6581. [Google Scholar] [CrossRef] [PubMed]
  20. Wang, J.; Chi, W.; Li, C.; Wang, C.; Meng, M.Q.-H. Neural RRT*: Learning-based optimal path planning. IEEE Trans. Autom. Sci. Eng. 2020, 17, 1748–1758. [Google Scholar] [CrossRef]
  21. Song, B.; Wang, Z.; Zou, L. An improved PSO algorithm for smooth path planning of mobile robots using continuous high-degree Bezier curve. Appl. Soft Comput. 2021, 100, 106960. [Google Scholar] [CrossRef]
  22. Wang, X.; Pan, J.-S.; Yang, Q.; Kong, L.; Snášel, V.; Chu, S.-C. Modified Mayfly Algorithm for UAV Path Planning. Drones 2022, 6, 134. [Google Scholar] [CrossRef]
  23. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  24. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  25. Zhang, X.-M.; Wang, X.; Kang, Q.; Cheng, J.-F. Hybrid Grey Wolf Optimizer with Artificial Bee Colony and Its Application to Clustering Optimization. Tien Tzu Hsueh Pao Acta Electron. Sin. 2018, 46, 2430–2442. [Google Scholar] [CrossRef]
  26. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An improved grey wolf optimizer for solving engineering problems. Expert Syst. Appl. 2021, 166, 113917. [Google Scholar] [CrossRef]
  27. Abushawish, A.; Jarndal, A. Hybrid GWOCS Optimization Based Parameter Extraction Method Applied to GaN Devices. In Proceedings of the 2021 IEEE International Midwest Symposium on Circuits and Systems (MWSCAS), Lansing, MI, USA, 9–11 August 2021; pp. 1035–1039. [Google Scholar]
  28. Liu, Z.; He, L.; Yuan, L.; Zhang, H. Path Planning of Mobile Robot Based on TGWO Algorithm. Hsi-Chiao Tung Ta Hsueh/J. Xi’an Jiaotong Univ. 2022, 56, 49–60. [Google Scholar] [CrossRef]
  29. Şenel, F.A.; Gökçe, F.; Yüksel, A.S.; Yiğit, T. A novel hybrid PSO–GWO algorithm for optimization problems. Eng. Comput. 2019, 35, 1359–1373. [Google Scholar] [CrossRef]
  30. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE international conference on evolutionary computation proceedings. IEEE world congress on computational intelligence (Cat. No. 98TH8360), Anchorage, AK, USA, 4–9 May 1998; pp. 69–73. [Google Scholar]
  31. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  32. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC’06), Vienna, Austria, 28–30 November 2005; pp. 695–701. [Google Scholar]
  33. Wu, T.; Liu, Y.; Tang, W.; Li, X.; Yu, Y. Constraint genetic algorithm and its application in sintering proportioning. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2017; p. 012022. [Google Scholar]
  34. Lv, Q.; Yang, D. Multi-target path planning for mobile robot based on improved PSO algorithm. In Proceedings of the 2020 IEEE 5th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 12–14 June 2020; pp. 1042–1047. [Google Scholar]
  35. Jiang, D.; Yue, X.; Li, K.; Wang, S.; Guo, Z. Elite opposition-based artificial bee colony algorithm for global optimization. Int. J. Eng. 2015, 28, 1268–1275. [Google Scholar]
  36. Li, C.; Deng, L.; Qiao, L.; Zhang, L. An efficient differential evolution algorithm based on orthogonal learning and elites local search mechanisms for numerical optimization. Knowl.-Based Syst. 2022, 235, 107636. [Google Scholar] [CrossRef]
  37. Qian, M.; Huang, H.; Fan, Q. Chaotic Sparrow Search Algorithm Based on Opposition-Based Strategy. Comput. Integr. Manuf. Syst. 2022, 39, 333–339+487. [Google Scholar]
  38. Luo, Y.; Lu, J.; Zhang, Y.; Zheng, K.; Qin, Q.; He, L.; Liu, Y. Near-Ground Delivery Drones Path Planning Design Based on BOA-TSAR Algorithm. Drones 2022, 6, 393. [Google Scholar] [CrossRef]
  39. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization. Swarm Intell. 2007, 1, 33–57. [Google Scholar] [CrossRef]
  40. Karaboga, D.; Basturk, B. On the performance of artificial bee colony (ABC) algorithm. Appl. Soft Comput. 2008, 8, 687–697. [Google Scholar] [CrossRef]
  41. Xu, X.; Yu, X.; Zhao, Y.; Liu, C.; Wu, X. Global path planning of mobile robot based on improved genetic algorithm. Comput. Integr. Manuf. Syst. 2022, 6, 393. [Google Scholar]
Figure 1. Unmanned delivery robots enable fast, efficient, and contactless delivery services. (a) Meituan delivery robots deliver living supplies to epidemic-stricken areas; (b) JD delivery robot cargo transportation test; (c) UISEE delivery robot enables unmanned delivery service at KAUST.
Figure 2. Hierarchy of the Gray Wolf Pack (dominance decreases from top down).
Figure 3. Position update of wolf groups in GWO algorithm.
Figure 4. The small period nature of Tent chaotic mapping. (a) At a = 0.5 and initial values of 0.2, 0.4, and 0.8, the Tent chaotic mapping falls into a small loop, resulting in a sequence that is not chaotic in nature. (b) The random sequence generated by the Tent chaos mapping with a = 0.45 and initial values of 0.2, 0.3, and 0.15.
Figure 5. Distribution of generated sequences for Tent–Sine chaotic mapping with different λ parameters. (a) Tent–Sine chaotic mapping at a = 0.5, λ = 0.7, and initial value of 0.2. (b) Tent–Sine chaotic mapping at a = 0.5, λ = 0.8, and initial value of 0.2. (c) Tent–Sine chaotic mapping at a = 0.5, λ = 0.95, and initial value of 0.2.
Figure 6. Workflow for mobile robot path planning based on EWB-GWO algorithm.
Figure 7. Mobile robot workspace based on grid slice.
Figure 8. Determine if the current node needs to be retained by LOS detection.
Figure 9. Complete path example for unmanned delivery robot path planning.
Figure 10. Comparison of convergence curves of GWO, E-GWO, W-GWO, B-GWO, and EWB-GWO with Dim = 50.
Figure 11. Comparison of convergence curves of GWO, E-GWO, W-GWO, B-GWO, and EWB-GWO with Dim = 100.
Figure 12. Comparison of convergence curves of GWO, E-GWO, W-GWO, B-GWO, and EWB-GWO in fixed-dimensional multimodal function tests.
Figure 13. Comparison of convergence curves of GWO, I-GWO, GWO-CS, PSO-GWO, and EWB-GWO.
Figure 14. Comparison of convergence curves of GWO, B-GWO, W-GWO, E-GWO, EWB-GWO, GA, PSO, BOA, and ABC.
Figure 15. Simulation diagram of the mobile robot path planning experiment.
Table 1. The benchmark functions used in this article.
No. | Name | Formula | Dim | Range | $f_{\min}$
Unimodal benchmark functions
$f_1$ | Exponential | $f(x)=\exp\left(0.5\sum_{i=1}^{n}x_i\right)$ | 50 | [−10, 10] | 0
$f_2$ | Quartic | $f(x)=\sum_{i=1}^{n}i\,x_i^4+\mathrm{rand}(0,1)$ | 50 | [−1.28, 1.28] | 0
$f_3$ | Step | $f(x)=\sum_{i=1}^{n}\left(\lfloor x_i+0.5\rfloor\right)^2$ | 50 | [−10, 10] | 0
$f_4$ | Sum square | $f(x)=\sum_{i=1}^{n}i\,x_i^2$ | 50 | [−10, 10] | 0
$f_5$ | Zakharov | $f(x)=\sum_{i=1}^{n}x_i^2+\left(\sum_{i=1}^{n}0.5\,i\,x_i\right)^2+\left(\sum_{i=1}^{n}0.5\,i\,x_i\right)^4$ | 50 | [−1, 1] | 0
$f_6$ | Rosenbrock | $f(x)=\sum_{i=1}^{n-1}\left[100\left(x_{i+1}-x_i^2\right)^2+\left(x_i-1\right)^2\right]$ | 50 | [−10, 10] | 0
$f_7$ | Schwefel 1.2 | $f(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{i}x_j\right)^2$ | 50 | [−100, 100] | 0
$f_8$ | Cigar | $f(x)=x_1^2+10^6\sum_{i=2}^{n}x_i^2$ | 50 | [−1, 1] | 0
Multimodal benchmark functions
$f_9$ | Schwefel 2.26 | $f(x)=-\sum_{i=1}^{n}x_i\sin\left(\sqrt{|x_i|}\right)$ | 50 | [−500, 500] | −12,569.5
$f_{10}$ | Rastrigin | $f(x)=\sum_{i=1}^{n}\left(x_i^2-10\cos(2\pi x_i)+10\right)$ | 50 | [−5.12, 5.12] | 0
$f_{11}$ | Ackley | $f(x)=-20\exp\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\left(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+\exp(1)$ | 50 | [−32, 32] | 0
$f_{12}$ | Bohachevsky | $f(x)=\sum_{i=1}^{n-1}\left[x_i^2+2x_{i+1}^2-0.3\cos(3\pi x_i)-0.4\cos(4\pi x_{i+1})+0.7\right]$ | 50 | [−10, 10] | 0
$f_{13}$ | Solomon | $f(x)=1-\cos\left(2\pi\sqrt{\sum_{i=1}^{n}x_i^2}\right)+0.1\sqrt{\sum_{i=1}^{n}x_i^2}$ | 50 | [−100, 100] | 0
$f_{14}$ | Griewank | $f(x)=\tfrac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\left(\tfrac{x_i}{\sqrt{i}}\right)+1$ | 50 | [−500, 500] | 0
$f_{15}$ | NCRastrigin | $f(x)=\sum_{i=1}^{n}\left[y_i^2-10\cos(2\pi y_i)+10\right]$, $y_i=\begin{cases}x_i, & |x_i|<0.5\\ \mathrm{round}(2x_i)/2, & |x_i|\ge 0.5\end{cases}$ | 50 | [−5.12, 5.12] | 0
$f_{16}$ | Alpine | $f(x)=\sum_{i=1}^{n}\left|x_i\sin(x_i)+0.1x_i\right|$ | 50 | [−10, 10] | 0
Fixed-dimension multimodal benchmark functions
$f_{17}$ | Cross-in-Tray | $f(x)=-0.0001\left(\left|\sin(x_1)\sin(x_2)\exp\left(\left|100-\tfrac{\sqrt{x_1^2+x_2^2}}{\pi}\right|\right)\right|+1\right)^{0.1}$ | 2 | [−10, 10] | −2.06261
$f_{18}$ | Drop-Wave | $f(x)=-\dfrac{1+\cos\left(12\sqrt{x_1^2+x_2^2}\right)}{0.5\left(x_1^2+x_2^2\right)+2}$ | 2 | [−5.12, 5.12] | −1
$f_{19}$ | Eggholder | $f(x)=-(x_2+47)\sin\left(\sqrt{\left|x_2+\tfrac{x_1}{2}+47\right|}\right)-x_1\sin\left(\sqrt{\left|x_1-(x_2+47)\right|}\right)$ | 2 | [−512, 512] | −959.641
$f_{20}$ | Holder Table | $f(x)=-\left|\sin(x_1)\cos(x_2)\exp\left(\left|1-\tfrac{\sqrt{x_1^2+x_2^2}}{\pi}\right|\right)\right|$ | 2 | [−10, 10] | −19.2085
$f_{21}$ | Schaffer N.4 | $f(x)=0.5+\dfrac{\cos^2\left(\sin\left(\left|x_1^2-x_2^2\right|\right)\right)-0.5}{\left[1+0.001\left(x_1^2+x_2^2\right)\right]^2}$ | 2 | [−100, 100] | 0
$f_{22}$ | Levy N.13 | $f(x)=\sin^2(3\pi x_1)+(x_1-1)^2\left[1+\sin^2(3\pi x_2)\right]+(x_2-1)^2\left[1+\sin^2(2\pi x_2)\right]$ | 2 | [−10, 10] | 0
$f_{23}$ | Schaffer N.2 | $f(x)=0.5+\dfrac{\sin^2\left(x_1^2-x_2^2\right)-0.5}{\left[1+0.001\left(x_1^2+x_2^2\right)\right]^2}$ | 2 | [−100, 100] | 0
$f_{24}$ | Shubert | $f(x)=\left(\sum_{i=1}^{5}i\cos((i+1)x_1+i)\right)\left(\sum_{i=1}^{5}i\cos((i+1)x_2+i)\right)$ | 2 | [−10, 10] | −186.731
Table 2. Test results of the algorithm at Dim = 50.
FunctionsGWOB-GWOW-GWOE-GWOEWB-GWO
f 1 Avg6.955 × 10−1001.934 × 10−1021.733 × 10−771.491 × 10−1053.087 × 10−102
Std1.237 × 10−992.802 × 10−1023.466 × 10−772.955 × 10−1056.157 × 10−102
Best3.302 × 10−1091.766 × 10−1051.732 × 10−943.153 × 10−1095.015 × 10−107
f 3 Avg6.102 × 10−018.754 × 10−013.101 × 10009.996 × 10−029.806 × 10−02
Std3.719 × 10−013.710 × 10−015.146 × 10−011.999 × 10−011.959 × 10−01
Best2.499 × 10−014.970 × 10−012.250 × 10006.503 × 10−064.672 × 10−06
f 8 Avg2.079 × 10−762.287 × 10−798.434 × 10−761.325 × 10−1182.514 × 10−139
Std1.302 × 10−764.574 × 10−791.246 × 10−752.650 × 10−1183.219 × 10−139
Best4.806 × 10−776.646 × 10−883.605 × 10−776.180 × 10−1401.241 × 10−148
f 10 Avg0.000 × 10000.000 × 10000.000 × 10000.000 × 10000.000 × 1000
Std0.000 × 10000.000 × 10000.000 × 10000.000 × 10000.000 × 1000
Best0.000 × 10000.000 × 10000.000 × 10000.000 × 10000.000 × 1000
f 12 Avg9.611 × 10−186.771 × 10−761.325 × 10−1282.521 × 10−1399.621 × 10−140
Std1.922 × 10−179.590 × 10−762.650 × 10−1283.224 × 10−1391.920 × 10−139
Best2.007 × 10−773.605 × 10−776.180 × 10−1401.415 × 10−1471.325 × 10−147
f 14 Avg0.000 × 10000.000 × 10000.000 × 10000.000 × 10000.000 × 1000
Std0.000 × 10000.000 × 10000.000 × 10000.000 × 10000.000 × 1000
Best0.000 × 10000.000 × 10000.000 × 10000.000 × 10000.000 × 1000
Table 3. Test results of the algorithm at Dim = 100.
FunctionsGWOB-GWOW-GWOE-GWOEWB-GWO
f 1 Avg1.587 × 10−1881.052 × 10−1851.077 × 10−1199.641 × 10−1914.069 × 10−181
Std0.000 × 10000.000 × 10002.153 × 10−1190.000 × 10000.000 × 1000
Best3.559 × 10−1961.925 × 10−2031.550 × 10−1434.826 × 10−1992.116 × 10−194
f 3 Avg4.473 × 10005.816 × 10001.167 × 10014.388 × 10001.564 × 1000
Std2.294 × 10−019.534 × 10−017.229 × 10−016.501 × 10−012.234 × 10−01
Best4.204 × 10004.735 × 10001.075 × 10013.720 × 10001.231 × 1000
f 8 Avg1.514 × 10−487.696 × 10−373.172 × 10−382.528 × 10−487.847 × 10−86
Std1.454 × 10−481.366 × 10−376.343 × 10−383.170 × 10−481.569 × 10−85
Best3.418 × 10−495.485 × 10−373.470 × 10−982.607 × 10−495.068 × 10−99
f 10 Avg6.422 × 10−130.000 × 10000.000 × 10007.276 × 10−140.000 × 1000
Std9.060 × 10−130.000 × 10000.000 × 10008.794 × 10−140.000 × 1000
Best0.000 × 10000.000 × 10000.000 × 10000.000 × 10000.000 × 1000
f 12 Avg9.051 × 10−405.933 × 10−273.172 × 10−382.931 × 10−477.891 × 10−86
Std1.810 × 10−393.291 × 10−276.343 × 10−385.340 × 10−471.578 × 10−85
Best3.418 × 10−498.430 × 10−373.470 × 10−382.607 × 10−495.068 × 10−99
f 14 Avg0.000 × 10000.000 × 10000.000 × 10000.000 × 10000.000 × 1000
Std0.000 × 10000.000 × 10000.000 × 10000.000 × 10000.000 × 1000
Best0.000 × 10000.000 × 10000.000 × 10000.000 × 10000.000 × 1000
Table 4. Test results of the algorithm in fixed-dimensional multimodal function tests.
FunctionsGWOB-GWOW-GWOE-GWOEWB-GWO
f 18 Avg−1.000 × 1000−1.000 × 1000−1.000 × 1000−1.000 × 1000−1.000 × 1000
Std0.000 × 10000.000 × 10000.000 × 10000.000 × 10000.000 × 1000
Best−1.000 × 1000−1.000 × 1000−1.000 × 1000−1.000 × 1000−1.000 × 1000
f 19 Avg−9.596 × 1002−9.596 × 1002−9.596 × 1002−9.596 × 1002−9.596 × 1002
Std0.000 × 10+000.000 × 10000.000 × 10000.000 × 10000.000 × 1000
Best−9.596 × 1002−9.596 × 1002−9.596 × 1002−9.596 × 1002−9.596 × 1002
f 20 Avg−1.921 × 1001−1.921 × 1001−1.921 × 1001−1.921 × 1001−1.921 × 1001
Std0.000 × 10000.000 × 10000.000 × 10000.000 × 10000.000 × 1000
Best−1.921 × 1001−1.921 × 1001−1.921 × 1001−1.921 × 1001−1.921 × 1001
f 21 Avg2.926 × 10−012.926 × 10−012.926 × 10−012.926 × 10−012.926 × 10−01
Std0.000 × 10000.000 × 10000.000 × 10000.000 × 10000.000 × 1000
Best2.926 × 10−012.926 × 10−012.926 × 10−012.926 × 10−012.926 × 10−01
f 23 Avg0.000 × 10000.000 × 10000.000 × 10000.000 × 10000.000 × 1000
Std0.000 × 10000.000 × 10000.000 × 10000.000 × 10000.000 × 1000
Best0.000 × 10000.000 × 10000.000 × 10000.000 × 10000.000 × 1000
f 24 Avg−1.867 × 1002−1.867 × 1002−1.867 × 1002−1.867 × 1002−1.867 × 1002
Std0.000 × 10000.000 × 10000.000 × 10000.000 × 10000.000 × 1000
Best−1.867 × 1002−1.867 × 1002−1.867 × 1002−1.867 × 1002−1.867 × 1002
Table 5. Test results of the GWO-improved algorithms.
FunctionsGWOI-GWOGWO-CSPSO-GWOEWB-GWO
f 2 Avg1.057 × 10−052.447 × 10−067.562 × 10−062.088 × 10−052.981 × 10−06
Std3.851 × 10−069.850 × 10−071.935 × 10−062.072 × 10−055.594 × 10−07
Best7.139 × 10−061.225 × 10−064.964 × 10−063.222 × 10−062.306 × 10−06
Time3.352 × 10−017.030 × 10+006.236 × 10+006.470 × 10+001.973 × 10+00
f 4 Avg2.073 × 10−788.802 × 10−791.070 × 10−802.408 × 10−018.261 × 10−121
Std2.436 × 10−789.312 × 10−795.351 × 10−813.405 × 10−015.791 × 10−121
Best2.048 × 10−792.478 × 10−805.074 × 10−812.124 × 10−442.106 × 10−122
Time2.110 × 10+001.097 × 10+011.192 × 10+011.068 × 10+013.616 × 10+00
f 5 Avg4.329 × 10−392.338 × 10−325.938 × 10−412.284 × 10+001.754 × 10−50
Std4.904 × 10−391.216 × 10−325.987 × 10−413.211 × 10+008.684 × 10−51
Best2.380 × 10−401.098 × 10−326.222 × 10−424.707 × 10−267.574 × 10−51
Time2.305 × 10+001.178 × 10+011.186 × 10+011.284 × 10+013.809 × 10+00
f 6 Avg4.546 × 10−014.024 × 10−013.164 × 10−014.819 × 10−011.648 × 10+00
Std1.113 × 10−022.577 × 10−032.705 × 10−013.388 × 10−021.720 × 10+00
Best4.426 × 10−013.991 × 10−011.223 × 10−014.473 × 10−014.023 × 10−01
Time2.179 × 10+001.126 × 10+011.216 × 10+011.173 × 10+013.746 × 10+00
f 7 Avg1.257 × 10−225.937 × 10−139.554 × 10−245.874 × 10−037.805 × 10−42
Std1.581 × 10−226.696 × 10−137.650 × 10−246.662 × 10−035.491 × 10−42
Best8.025 × 10−245.653 × 10−151.167 × 10−241.080 × 10−158.996 × 10−43
Time5.552 × 10+001.846 × 10+011.524 × 10+011.595 × 10+011.348 × 10+01
f 9 Avg−1.016 × 10+04−1.244 × 10+04−2.095 × 10+04−1.267 × 10+04−1.129 × 10+04
Std1.178 × 10+033.257 × 10+031.843 × 10−011.295 × 10+031.849 × 10+03
Best−1.167 × 10+04−1.534 × 10+04−2.095 × 10+04−1.385 × 10+04−1.267 × 10+04
Time2.585 × 10+001.352 × 10+011.347 × 10+011.345 × 10+015.322 × 10+00
f 11 Avg1.510 × 10−141.391 × 10−141.510 × 10−146.163 × 10+005.994 × 10−15
Std0.000 × 10+001.675 × 10−150.000 × 10+008.705 × 10+002.828 × 10−15
Best1.510 × 10−141.155 × 10−141.510 × 10−145.773 × 10−141.994 × 10−15
Time2.293 × 10+001.170 × 10+011.243 × 10+011.317 × 10+013.908 × 10+00
f 13 Avg1.999 × 10−011.999 × 10−011.999 × 10−013.952 × 10−012.000 × 10−07
Std0.000 × 10+000.000 × 10+000.000 × 10+008.146 × 10−022.087 × 10−07
Best1.999 × 10−011.999 × 10−011.999 × 10−012.938 × 10−011.348 × 10−08
Time2.216 × 10+001.163 × 10+011.216 × 10+011.278 × 10+013.822 × 10+00
f 15 Avg0.000 × 10+000.000 × 10+000.000 × 10+002.185 × 10+010.000 × 10+00
Std0.000 × 10+000.000 × 10+000.000 × 10+003.090 × 10+010.000 × 10+00
Best0.000 × 10+000.000 × 10+000.000 × 10+002.220 × 10−160.000 × 10+00
Time2.603 × 10+001.212 × 10+011.247 × 10+011.328 × 10+015.649 × 10+00
f 16 Avg2.156 × 10−244.105 × 10−051.631 × 10−453.616 × 10+001.902 × 10−69
Std3.050 × 10−243.767 × 10−051.690 × 10−451.380 × 10+001.318 × 10−69
Best4.470 × 10−402.328 × 10−061.169 × 10−461.856 × 10+004.990 × 10−70
Time2.384 × 10+001.129 × 10+011.216 × 10+011.299 × 10+013.915 × 10+00
f 17 Avg−2.063 × 10+00−2.063 × 10+00−2.063 × 10+00−2.063 × 10+00−2.063 × 10+00
Std0.000 × 10+000.000 × 10+000.000 × 10+000.000 × 10+000.000 × 10+00
Best−2.063 × 10+00−2.063 × 10+00−2.063 × 10+00−2.063 × 10+00−2.063 × 10+00
Time2.001 × 10−016.308 × 10+002.925 × 10+005.237 × 10+002.597 × 10+00
f 22 Avg9.201 × 10−101.350 × 10−319.208 × 10−094.439 × 10−093.475 × 10−08
Std4.977 × 10−100.000 × 10+007.000 × 10−095.726 × 10−094.496 × 10−08
Best2.166 × 10−101.350 × 10−311.092 × 10−091.873 × 10−111.091 × 10−12
Time1.786 × 10−016.120 × 10+002.819 × 10+003.247 × 10−012.434 × 10+00
Table 6. The parameter setting of each algorithm.
NameSourceParameter
EWB-GWO[23] a [ 0 , 2 ] ; c = 0.01 ; a = 0.1 ; p = 0.8 . Elite individuals select 10 percent of the total population.
PSO[39]Learning factor of PSO algorithm c 1 = 1.5 , c 2 = 1.5 ; the maximum value of inertia weight ω M a x = 0.8 ; the minimum value of inertia weight ω M i n = 0.7 .
BOA[31]Perceptual modality c = 0.01 ; power index a = 0.1 ; conversion probability p = 0.8 .
ABC[40] T l i m i t = S N × D , where T l i m i t denotes the maximum number of update-free iterations of the ABC algorithm for detecting bee solutions, S N denotes the number of food sources, and D denotes the problem dimension.
GA[41]Crossover probability p c = 0.9 ; variance probability p m = 0.1 .
Table 7. Test results of GWO, B-GWO, W-GWO, E-GWO, EWB-GWO, BOA, PSO, ABC, and GA on the benchmark functions (Dim = 50 for the high-dimensional functions).
FunctionsGWOB-GWOW-GWOE-GWOEWB-GWOBOAPSOABCGA
f 1 Avg1.35 × 10−1046.46 × 10−1038.26 × 10−843.40 × 10−1075.66 × 10−1071.33 × 10−285.61 × 10−731.79 × 10−1015.61 × 10−73
Std1.33 × 10−1045.12 × 10−1031.17 × 10−832.39 × 10−1077.06 × 10−1071.88 × 10−287.94 × 10−731.25 × 10−1017.93 × 10−73
f 2 Avg2.14 × 10−041.60 × 10−052.02 × 10−051.73 × 10−052.90 × 10−063.07 × 10−043.64 × 10−012.01 × 10001.38 × 1000
Std9.98 × 10−051.26 × 10−052.99 × 10−061.44 × 10−051.73 × 10−068.00 × 10−056.58 × 10−024.48 × 10−012.47 × 10−01
f 3 Avg6.38 × 10−011.00 × 10003.81 × 10007.03 × 10−013.98 × 10−019.18 × 10001.22 × 10003.43 × 10001.51 × 1001
Std1.61 × 10−014.11 × 10−014.97 × 10−017.06 × 10−027.55 × 10−021.93 × 10−012.99 × 10−024.31 × 10−011.13 × 1001
f 4 Avg1.39 × 10−784.02 × 10−591.11 × 10−1207.05 × 10−1322.15 × 10−1421.35 × 10−146.28 × 10004.35 × 10012.45 × 1002
Std7.00 × 10−792.62 × 10−591.07 × 10−1209.97 × 10−1323.04 × 10−1423.69 × 10−163.79 × 10004.73 × 10001.21 × 1002
f 5 Avg2.10 × 10−398.31 × 10−269.67 × 10−341.06 × 10−382.20 × 10−511.13 × 10−141.98 × 10−016.46 × 10009.92 × 1000
Std1.18 × 10−396.06 × 10−261.36 × 10−331.41 × 10−383.11 × 10−515.10 × 10−163.79 × 10−026.16 × 10−011.68 × 1000
f 6 Avg4.51 × 10014.62 × 10014.71 × 10014.44 × 10014.17 × 10014.89 × 10012.81 × 10025.23 × 10047.38 × 1003
Std8.40 × 10−011.77 × 10−026.93 × 10−015.34 × 10−011.19 × 10004.18 × 10−026.17 × 10011.85 × 10037.35 × 1003
f 7 Avg5.46 × 10−239.34 × 10−182.51 × 10−447.58 × 10−533.99 × 10−501.44 × 10−143.42 × 10018.20 × 10043.97 × 1004
Std5.32 × 10−231.21 × 10−173.29 × 10−531.01 × 10−525.64 × 10−501.62 × 10−164.71 × 10007.45 × 10036.11 × 1003
f 8 Avg1.78 × 10−762.70 × 10−565.70 × 10−771.38 × 10−1376.39 × 10−1381.18 × 10−143.90 × 10−018.48 × 10004.91 × 1000
Std2.08 × 10−763.11 × 10−564.41 × 10−779.54 × 10−1388.80 × 10−1386.86 × 10−161.10 × 10−012.19 × 10−011.69 × 1000
f 9 Avg−1.07 × 1004−1.20 × 1004−8.98 × 1003−1.02 × 1004−1.33 × 1004−6.68 × 1003−4.30 × 10032.19 × 10035.02 × 1003
Std1.45 × 10037.64 × 10026.41 × 10025.08 × 10021.29 × 10033.47 × 10021.19 × 10046.73 × 10031.32 × 1004
f 10 Avg0.00 × 10000.00 × 10000.00 × 10000.00 × 10000.00 × 10000.00 × 10008.08 × 10014.55 × 10028.16 × 1001
Std0.00 × 10000.00 × 10000.00 × 10000.00 × 10000.00 × 10000.00 × 10009.20 × 10001.02 × 10014.82 × 1000
f 11 Avg1.48 × 10−142.27 × 10−147.66 × 10−151.76 × 10−168.06 × 10−156.55 × 10−122.79 × 10005.89 × 10005.14 × 1000
Std3.96 × 10−166.60 × 10−164.71 × 10−163.51 × 10−157.12 × 10−164.63 × 10−127.86 × 10−012.81 × 10−018.84 × 10−01
f 12 Avg0.00 × 10000.00 × 10000.00 × 10000.00 × 10000.00 × 10007.75 × 10−016.86 × 10−019.24 × 10003.13 × 1001
Std0.00 × 10000.00 × 10000.00 × 10000.00 × 10000.00 × 10003.14 × 10−023.50 × 10−026.48 × 10−011.90 × 1001
f 13 Avg1.92 × 10−018.52 × 10−089.99 × 10−021.13 × 10−023.90 × 10−083.00 × 10−019.56 × 10−015.24 × 10001.42 × 1001
Std1.08 × 10−023.68 × 10−080.00 × 10007.36 × 10−034.98 × 10−083.78 × 10−044.17 × 10−022.32 × 10−013.56 × 1000
f 14 Avg0.00 × 10000.00 × 10000.00 × 10000.00 × 10000.00 × 10001.37 × 10005.47 × 10−023.05 × 10003.72 × 1000
Std0.00 × 10000.00 × 10000.00 × 10000.00 × 10000.00 × 10001.94 × 10006.06 × 10−031.56 × 10−011.14 × 1000
f 15 Avg1.92 × 10−018.52 × 10−089.99 × 10−021.13 × 10−023.90 × 10−083.00 × 10−019.56 × 10−015.24 × 10001.42 × 1001
Std1.08 × 10−023.68 × 10−080.00 × 10007.36 × 10−034.98 × 10−083.78 × 10−044.17 × 10−022.32 × 10−013.56 × 1000
f 16 Avg6.16 × 10−382.98 × 10−075.17 × 10−404.76 × 10−808.64 × 10−661.72 × 10−153.89 × 10003.91 × 10011.20 × 1000
Std8.63 × 10−384.21 × 10−074.38 × 10−806.73 × 10−401.22 × 10−651.31 × 10−162.43 × 10−012.02 × 10−015.86 × 10−01
f 17 Avg−2.06 × 1000−2.06 × 1000−2.06 × 1000−2.06 × 1000−2.06 × 1000−2.06 × 1000−2.06 × 1000−2.06 × 1000−2.06 × 1000
Std0.00 × 10000.00 × 10000.00 × 10000.00 × 10000.00 × 10000.00 × 10000.00 × 10000.00 × 10000.00 × 1000
f 18 Avg−1.00 × 1000−1.00 × 1000−1.00 × 1000−1.00 × 1000−1.00 × 1000−9.96 × 10−01−1.00 × 1000−1.00 × 10002.91 × 10−01
Std0.00 × 10000.00 × 10000.00 × 10000.00 × 10000.00 × 10001.24 × 10−030.00 × 10000.00 × 10009.13 × 10−01
f 19 Avg−9.60 × 1002−9.60 × 1002−9.60 × 1002−9.60 × 1002−9.60 × 1002−9.60 × 1002−9.38 × 1002−9.60 × 1002−9.28 × 1002
Std0.00 × 10000.00 × 10000.00 × 10000.00 × 10000.00 × 10002.74 × 10−023.07 × 10010.00 × 10002.93 × 1001
f 20 Avg−1.92 × 1001−1.92 × 1001−1.92 × 1001−1.92 × 1001−1.92 × 1001−1.92 × 1001−1.92 × 1001−1.92 × 1001−1.92 × 1001
Std0.00 × 10000.00 × 10000.00 × 10000.00 × 10000.00 × 10003.65 × 10−030.00 × 10000.00 × 10000.00 × 1000
f 21 Avg2.93 × 10−012.93 × 10−012.93 × 10−012.93 × 10−012.93 × 10−012.93 × 10−012.93 × 10−012.93 × 10−012.93 × 10−01
Std0.00 × 10000.00 × 10000.00 × 10000.00 × 10000.00 × 10008.16 × 10−060.00 × 10000.00 × 10002.12 × 10−04
f 22 Avg1.59 × 10−096.26 × 10−092.06 × 10−091.45 × 10−093.58 × 10−081.22 × 10−051.35 × 10−311.35 × 10−317.62 × 10−16
Std2.89 × 10−103.99 × 10−097.96 × 10−101.10 × 10−094.62 × 10−086.84 × 10−060.00 × 10000.00 × 10005.11 × 10−16
f 23 Avg0.00 × 10000.00 × 10000.00 × 10000.00 × 10000.00 × 10005.92 × 10−160.00 × 10000.00 × 10002.10 × 10−03
Std0.00 × 10000.00 × 10000.00 × 10000.00 × 10000.00 × 10003.77 × 10−160.00 × 10000.00 × 10002.39 × 10−03
f 24 Avg−1.87 × 1002−1.87 × 1002−1.87 × 1002−1.87 × 1002−1.87 × 1002−1.87 × 1002−1.87 × 1002−1.87 × 1002−1.87 × 1002
Std0.00 × 10002.22 × 10−038.82 × 10−030.00 × 10001.76 × 10031.30 × 10−020.00 × 10000.00 × 10000.00 × 1000
Table 8. Wilcoxon rank test order.
FunctionsRank
f 1 E - GWO ( 1 ) < EWB - GWO ( 2 ) < GWO ( 3 ) < B - GWO ( 4 ) < ABC ( 5 ) < W - GWO ( 6 ) < PSO ( 7.5 ) GA ( 7.5 ) < BOA ( 9 )
f 2 EWB - GWO ( 1 ) < B - GWO ( 2 ) < E - GWO ( 3 ) < W - GWO ( 4 ) < GWO ( 5 ) < BOA ( 6 ) < PSO ( 7 ) < GA ( 8 ) < ABC ( 9 )
f 3 EWB - GWO ( 1 ) < GWO ( 2 ) < E - GWO ( 3 ) < B - GWO ( 4 ) < PSO ( 5 ) < ABC ( 6 ) < W - GWO ( 7 ) < BOA ( 8 ) < GA ( 9 )
f 4 EWB - GWO ( 1 ) < E - GWO ( 2 ) < W - GWO ( 3 ) < GWO ( 4 ) < B - GWO ( 5 ) < BOA ( 6 ) < PSO ( 7 ) < ABC ( 8 ) < GA ( 9 )
f 5 EWB - GWO ( 1 ) < GWO ( 2 ) < E - GWO ( 3 ) < W - GWO ( 4 ) < B - GWO ( 5 ) < BOA ( 6 ) < PSO ( 7 ) < ABC ( 8 ) < GA ( 9 )
f 6 EWB - GWO ( 1 ) < E - GWO ( 2 ) < GWO ( 3 ) < B - GWO ( 4 ) < W - GWO ( 5 ) < BOA ( 6 ) < PSO ( 7 ) < GA ( 8 ) < ABC ( 9 )
f 7 E - GWO ( 1 ) < EWB - GWO ( 2 ) < W - GWO ( 3 ) < GWO ( 4 ) < B - GWO ( 5 ) < BOA ( 6 ) < PSO ( 7 ) < GA ( 8 ) < ABC ( 9 )
f 8 EWB - GWO ( 1 ) < E - GWO ( 2 ) < W - GWO ( 3 ) < GWO ( 4 ) < B - GWO ( 5 ) < BOA ( 6 ) < PSO ( 7 ) < GA ( 8 ) < ABC ( 9 )
f 9 EWB - GWO ( 1 ) < B - GWO ( 2 ) < GWO ( 3 ) < E - GWO ( 4 ) < W - GWO ( 5 ) < BOA ( 6 ) < PSO ( 7 ) < ABC ( 8 ) < GA ( 9 )
f 10 EWB - GWO ( 3.5 ) E - GWO ( 3.5 ) B - GWO ( 3.5 ) W - GWO ( 3.5 ) GWO ( 3.5 ) BOA ( 3.5 ) < PSO ( 7 ) < GA ( 8 ) < ABC ( 9 )
f 11 E - GWO ( 1 ) < W - GWO ( 2 ) < EWB - GWO ( 3 ) < GWO ( 4 ) < B - GWO ( 5 ) < BOA ( 6 ) < PSO ( 7 ) < GA ( 8 ) < ABC ( 9 )
f 12 EWB - GWO ( 3 ) B - GWO ( 3 ) E - GWO ( 3 ) W - GWO ( 3 ) GWO ( 3 ) < PSO ( 6 ) < BOA ( 7 ) < ABC ( 8 ) < GA ( 9 )
f 13 EWB - GWO ( 1 ) < B - GWO ( 2 ) < E - GWO ( 3 ) < W - GWO ( 4 ) < GWO ( 5 ) < BOA ( 6 ) < PSO ( 7 ) < ABC ( 8 ) < GA ( 9 )
f 14 EWB - GWO ( 3 ) EWB - GWO ( 3 ) B - GWO ( 3 ) E - GWO ( 3 ) W - GWO ( 3 ) GWO ( 3 ) < PSO ( 6 ) < BOA ( 7 ) < ABC ( 8 ) < GA ( 9 )
f 15 EWB - GWO ( 1 ) < B - GWO ( 2 ) < E - GWO ( 3 ) < W - GWO ( 4 ) < GWO ( 5 ) < BOA ( 6 ) < PSO ( 7 ) < ABC ( 8 ) < GA ( 9 )
f 16 E - GWO ( 1 ) < EWB - GWO ( 2 ) < W - GWO ( 3 ) < GWO ( 4 ) < BOA ( 5 ) < B - GWO ( 6 ) < GA ( 7 ) < PSO ( 8 ) < ABC ( 9 )
f 17 EWB - GWO ( 5 ) B - GWO ( 5 ) E - GWO ( 5 ) W - GWO ( 5 ) GWO ( 5 ) BOA ( 5 ) PSO ( 5 ) ABC ( 5 ) GA ( 5 )
f 18 EWB - GWO ( 4 ) B - GWO ( 4 ) E - GWO ( 4 ) W - GWO ( 4 ) GWO ( 4 ) PSO ( 4 ) ABC ( 4 ) < BOA ( 8 ) < GA ( 9 )
f 19 EWB - GWO ( 4 ) B - GWO ( 4 ) E - GWO ( 4 ) W - GWO ( 4 ) GWO ( 4 ) BOA ( 4 ) ABC ( 4 ) PSO ( 8 ) < GA ( 9 )
f 20 EWB - GWO ( 5 ) B - GWO ( 5 ) E - GWO ( 5 ) W - GWO ( 5 ) GWO ( 5 ) BOA ( 5 ) PSO ( 5 ) ABC ( 5 ) GA ( 5 )
f 21 EWB - GWO ( 5 ) B - GWO ( 5 ) E - GWO ( 5 ) W - GWO ( 5 ) GWO ( 5 ) BOA ( 5 ) PSO ( 5 ) ABC ( 5 ) GA ( 5 )
f 22 PSO ( 1.5 ) ABC ( 1.5 ) < GA ( 3 ) < E - GWO ( 4 ) < GWO ( 5 ) < W - GWO ( 6 ) < B - GWO ( 7 ) < EWB - GWO ( 8 ) < BOA ( 9 )
f 23 EWB - GWO ( 4 ) E - GWO ( 4 ) W - GWO ( 4 ) B - GWO ( 4 ) GWO ( 4 ) PSO ( 4 ) ABC ( 4 ) < BOA ( 8 ) < GA ( 9 )
f 24 EWB - GWO ( 5 ) B - GWO ( 5 ) E - GWO ( 5 ) W - GWO ( 5 ) GWO ( 5 ) BOA ( 5 ) PSO ( 5 ) ABC ( 5 ) GA ( 5 )
Table 9. Friedman test statistical results of nine algorithms.
FunctionsSum of SquaresDegree of FreedomMean Squaresp-Value
f 1 119.25814.90630.0429
f 2 116815.470.0407
f 3 100813.330.1009
f 4 1208150.0424
f 5 1208150.0424
f 6 121816.3750.0404
f 7 117814.6250.0485
f 8 119814.8750.0443
f 9 888110.1635
f 10 84815.810.0452
f 11 102813.60.0928
f 12 99812.3750.0447
f 13 109813.6250.0489
f 14 97815.520.0498
f 15 109814.530.0489
f 16 1128140.0405
f 17 0801
f 18 64880.0424
f 19 68.2588.531250.0662
f 20 981.1250.4335
f 21 16820.4335
f 22 116814.50.0485
f 23 64880.0424
f 24 2583.1250.4335
Table 10. Map information selected for the dataset.
Map Type | Map Name | Map Size (m) | Number of Obstacles | Max Length (m)
City Map | Boston | 256 × 256 | 48,286 | 363.4579
City Map | Denver | 256 × 256 | 48,202 | 375.5584
City Map | Milan | 256 × 256 | 47,331 | 362.6001
City Map | Moscow | 256 × 256 | 48,101 | 362.9137
City Map | New York | 256 × 256 | 48,299 | 362.9015
City Map | Shanghai | 256 × 256 | 48,708 | 347.7005
Warcraft III | battleground | 256 × 256 | 23,067 | 231.8498
Warcraft III | harvestmoon | 256 × 256 | 28,648 | 283.9665
Warcraft III | duskwood | 256 × 256 | 31,807 | 215.9625
Warcraft III | frostsabre | 256 × 256 | 22,845 | 223.2813
Warcraft III | golemsinthemist | 256 × 256 | 27,707 | 259.4772
Warcraft III | plainsofsnow | 256 × 256 | 26,458 | 211.6589
Table 11. Experimental results of path planning under city maps.
MapAlgorithm TypePath Length (m)Total Turning Angle of the Path (°)Calculation Time (s)Memory Usage (kb)Number of Nodes
BostonA*267.8772990.00007.1389331852.921919
JPS267.8772135.00000.30881059.257819
RRT408.56653338.5940186.8712572.422994
Theta*256.603349.92767.37101899.17184
BOA297.7898120.99591.15641564.04214
EWB-GWO250.595548.17211.82181561.81304
DenverA*276.16151170.000011.25211951.9844219
JPS276.1615315.00000.46411077.710933
RRT436.75463034.621061.4231533.5322100
Theta*258.310452.797211.53351102.27326
BOA270.1912270.19272.52351585.81924
EWB-GWO258.828962.98282.79501582.10924
MilanA*270.83561620.000013.63092035.2344229
JPS270.8356495.00000.78131132.554729
RRT407.79563108.8480142.7865558.555793
Theta*258.186038.981013.91321190.12706
BOA262.691144.68692.13441578.70264
EWB-GWO255.944535.62271.92191574.61763
MoscowA*300.24471530.000013.00211710.5781229
JPS298.2447405.00000.61251089.578139
RRT405.31633039.8970132.5529555.383895
Theta*289.016183.134113.28451785.58016
BOA290.798446.56185.88871537.78203
EWB-GWO289.895746.32245.01681568.80933
New YorkA*310.43353960.000018.67812478.4688257
JPS310.4335675.00000.58721103.34330
RRT368.69663173.0940110.6314548.540087
Theta*285.456644.735819.82532501.420810
BOA283.395739.44421.42391540.54
EWB-GWO283.030434.71311.50101502.23
ShanghaiA*300.61732160.000019.67472105.7344236
JPS300.6173495.00000.55861061.578126
RRT366.19552851.829148.8274529.266686
Theta*282.750554.803119.98112180.34295
BOA302.7648102.02623.10151554.81934
EWB-GWO282.602924.09392.83591555.78493
Table 12. Experimental results of path planning under game maps.
MapAlgorithm TypePath Length (m)Total Turning Angle of the Path (°)Calculation Time (s)Memory Usage (kb)Number of Nodes
battlegroundA*176.7817630.00005.29772221.2031154
JPS176.7817171.66041.04841870.960927
RRT222.31851683.27080.1531522.313552
Theta*171.660489.77325.62482301.45326
BOA174.207893.38852.69841579.21334
EWB-GWO170.690790.12262.691015,787.78104
harvestmoonA*191.09551080.00004.60752089.2813165
JPS191.09551035.00000.77491705.710943
RRT263.81072049.29798.1736521.993261
Theta*186.7259656.66684.83062020.3452114
BOA182.878766.89382.99881556.76904
EWB-GWO183.722963.89773.02921554.12695
duskwoodA*216.77661170.00000.53111812.1250165
JPS216.7766585.00008.34221579.687539
RRT338.87712269.227045.0792673.266678
Theta*216.7398102.04718.57881880.432411
BOA217.2969104.77805.46161557.43684
EWB-GWO215.244395.21994.93281555.67904
frostsabreA*234.50462160.00007.11082303.5469179
JPS232.50461125.00001.17151891.992263
RRT285.84572162.905476.6554522.133867
Theta*211.160974.69277.37391384.344312
BOA230.492146.59425.00961576.93844
EWB-GWO228.736446.48784.05951574.67544
golemsinthemistA*199.45081170.000011.20862184.5938163
JPS199.4508225.00001.14251718.773427
RRT251.47622157.073095.4560545.008859
Theta*186.2028110.775011.70072231.485310
BOA196.022389.09165.65841558.12504
EWB-GWO196.763781.64723.75901556.60895
plainsofsnowA*209.53913330.000010.71682483.0938188
JPS209.5391585.00000.84991822.695324
RRT314.96672852.40599.2433587.438572
Theta*192.899282.972811.06572520.24327
BOA194.149060.24475.70591561.83623
EWB-GWO192.890160.10295.23941560.00213
