Article

Hybrid Whale Optimization with a Firefly Algorithm for Function Optimization and Mobile Robot Path Planning

1 College of Economics, Guangxi Minzu University, Nanning 530006, China
2 College of Electronic Information, Guangxi Minzu University, Nanning 530006, China
3 College of Artificial Intelligence, Guangxi Minzu University, Nanning 530006, China
4 School of Information Engineering, Chang’an University, Xi’an 710064, China
5 Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia (UKM), Bangi 43600, Selangor, Malaysia
6 Guangxi Key Laboratories of Hybrid Computation and IC Design Analysis, Nanning 530006, China
* Authors to whom correspondence should be addressed.
Biomimetics 2024, 9(1), 39; https://doi.org/10.3390/biomimetics9010039
Submission received: 23 October 2023 / Revised: 20 December 2023 / Accepted: 25 December 2023 / Published: 8 January 2024
(This article belongs to the Special Issue Biomimicry for Optimization, Control, and Automation: 2nd Edition)

Abstract

With the wide application of mobile robots, mobile robot path planning (MRPP) has attracted the attention of scholars, and many metaheuristic algorithms have been used to solve it. Swarm-based algorithms are well suited to MRPP because of their population-based computational approach. The Whale Optimization Algorithm (WOA) imitates the foraging behavior of whales, and the Firefly Algorithm (FA) imitates the flashing behavior of fireflies. This paper proposes a hybrid firefly-whale optimization algorithm (FWOA) that combines the two algorithms with a multi-population mechanism and opposition-based learning. The FWOA can quickly find the optimal path in a complex mobile robot working environment while balancing exploitation and exploration. To verify its performance, the FWOA is tested on 23 benchmark functions and applied to the MRPP, and it is compared with ten classical metaheuristic algorithms. The results clearly highlight the remarkable convergence speed and exploration capability of the FWOA, which surpasses the other algorithms. Consequently, compared with the most advanced metaheuristic algorithms, the FWOA proves to be a strong competitor.

1. Introduction

Mobile robots are widely used in aerospace, entertainment, agriculture, the military, mining, and rescue operations [1], and they have attracted the attention of many scholars. Meltem Eyuboglu [2] proposed a novel collaborative path planning algorithm for a three-wheel omnidirectional autonomous mobile robot; Arash Marashian [3] proposed a method for mobile robot path planning and path tracking in static and dynamic environments; Nina Majer [4] proposed game-theoretic trajectory planning for mobile robots in unstructured intersection scenarios; Guangxin Li [5] solved mobile robot path planning by combining the ACO and ABC algorithms; Zhiheng Yu [6] proposed a path planning algorithm based on the water flow potential field method and the beetle antennae search algorithm; De Zhang [7] proposed multi-objective path planning for mobile robots in nuclear accident environments based on improved ant colony optimization with a modified A*; Patrick F. and Charles P. [8] studied the kinematic modeling of wheeled mobile robots; and Junlin Ou [9] proposed hybrid path planning based on adaptive visibility graph initialization and edge computing. In all of these applications, path planning is the most important part. Path planning aims to find a collision-free, optimally safe path from the starting point to the target point in an environment with obstacles according to certain performance indicators, such as planning time, path smoothness, and walking convenience.
Many methods have been applied to mobile robot path planning (MRPP). For example, Guodong Zhu and Peng Wei used dynamic geofencing to solve the path planning problem [10]; Elie Hermand [11] proposed a constrained control scheme to steer a UAV to a desired position while ensuring constraint satisfaction at all times; and Joseph Kim and Ella Atkins [12] used airspace geofencing to solve path planning problems. As research developed, swarm-based algorithms were applied to the MRPP. Unlike traditional algorithms, swarm-based algorithms can perform many intelligent tasks accurately and robustly, owing to their inspiration from biological intelligence. Swarm-based algorithms therefore improve the accuracy of the solution, and many scholars use them to solve the MRPP. V. Sathiya [13] proposed a FIMOPSO to solve mobile robot path planning; A. Lazarowska [14] used the Discrete Artificial Potential Field (DAPF) algorithm; Zhang Chungang [15] solved the mobile robot rolling path planning problem; and Guangsheng Li [16] used self-adaptive learning particle swarm optimization. These optimization methods show that the MRPP has attracted the attention of many scholars (see Table 1).
A swarm-based algorithm is a kind of metaheuristic algorithm; metaheuristic optimization algorithms also include biological-evolution-based, physics- and chemistry-based, and human-based algorithms. Classical swarm-based algorithms include the Hunting Search Algorithm (HSA) [17], the Grasshopper Optimisation Algorithm (GOA) [18], Cat Swarm Optimization (CSA) [19], particle swarm optimization (PSO) [20], the Firefly Algorithm (FA) [21], the Salp Swarm Algorithm (SSA) [22], the Whale Optimization Algorithm (WOA) [23], and the gray wolf optimization algorithm (GWO) [24]. Because of their simple concepts and remarkable performance, such algorithms are widely studied and applied. The Whale Optimization Algorithm (WOA) [23] is a famous swarm-based algorithm proposed by Mirjalili in 2016. It solves problems by simulating the hunting behavior of whales, with the hunting process serving as the optimization process. Because of its remarkable performance, the algorithm has been widely studied in the academic community.
The main contributions of this paper are as follows. In order to improve the accuracy of the MRPP solution, improve the WOA's performance, and broaden its applications, a hybrid whale-firefly optimization algorithm based on multiple populations and Opposition-Based Learning is proposed. Firstly, to improve the exploration ability and balance exploitation and exploration, a multiple-population mechanism is introduced for division of labor and cooperation. Secondly, to address the poor accuracy of the algorithm, Opposition-Based Learning (OBL) is introduced, improving the optimization ability through symmetric mapping. The performance of the algorithm is improved by these two methods. On this basis, in order to better conform to the biological mechanism, the whales' perception of food is introduced to expand the search space and further improve the exploration ability.
The rest of this paper is set as follows: Section 2 introduces the classical whale optimization algorithm. Section 3 introduces the FWOA. Section 4 introduces the verification of FWOA, and Section 5 introduces the MRPP model. Section 6 describes the simulation results and analysis. Section 7 contains conclusions and future work.

2. Whale Optimization Algorithm

The Whale Optimization Algorithm (WOA) is based on the hunting behavior of humpback whales. It mainly includes three phases: encircling prey, the bubble-net attacking method (exploitation phase), and the search for prey.

2.1. Encircling Prey

Humpback whales can locate their prey and encircle it. The WOA assumes that the current best candidate solution is the target prey or is close to the optimum [23]. The other search agents then update their positions toward the leader whale (the best solution found so far); this behavior is expressed as follows:
D = |C · X*(t) − X(t)|   (1)
X(t + 1) = X*(t) − A · D   (2)
where t is the current iteration, A and C are coefficient vectors, X* is the position vector of the best solution obtained so far, X is the position vector, |·| denotes the absolute value, and · denotes element-by-element multiplication. Note that X* should be updated in each iteration if a better solution is found.
The vectors A and C are calculated as follows:
A = 2a · r − a   (3)
C = 2r   (4)
where a is linearly decreased from 2 to 0 over the course of the iterations and r is a random vector in [0, 1].
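As an illustration, the coefficient vectors and the encircling update of Equations (1)–(4) can be sketched in Python. This is a minimal sketch under our own assumptions: the helper name `encircle` and the vectorized population layout are illustrative, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def encircle(X, X_best, t, T_max):
    """One encircling-prey update for a population X (shape: n_agents x dim)."""
    a = 2.0 - 2.0 * t / T_max          # a decreases linearly from 2 to 0
    r1 = rng.random(X.shape)
    r2 = rng.random(X.shape)
    A = 2.0 * a * r1 - a               # Eq. (3)
    C = 2.0 * r2                       # Eq. (4)
    D = np.abs(C * X_best - X)         # Eq. (1)
    return X_best - A * D              # Eq. (2)
```

Note that at the final iteration a = 0, so A vanishes and every whale collapses onto the best solution, which matches the shrinking behavior described above.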

2.2. Bubble-Net Attacking Method (Exploitation Phase)

Bubble-net attacking method includes the shrinking encircling mechanism and the spiral updating position method. The search agents will choose a method to update their position.
Shrinking encircling mechanism: By decreasing the value of a in Equation (3), whales can shrink and encircle prey [23]. Spiral updating position: This mechanism firstly calculates the distance between the whale position and the prey position, then, by establishing an equation between the search agents and the prey, the update of position is achieved. To mimic this method, the equations are as follows:
D′ = |X*(t) − X(t)|   (5)
X(t + 1) = D′ · e^(bl) · cos(2πl) + X*(t)   (6)
where D′ is the distance of the ith whale to the prey (the best solution obtained so far), b is a constant defining the shape of the logarithmic spiral, l is a random number in [−1, 1], and · denotes element-by-element multiplication.
A parameter p is introduced to control the switch between the shrink encircling mechanism and the spiral updating position method. The equation is as follows:
X(t + 1) = X*(t) − A · D, if p < 0.5;  X(t + 1) = D′ · e^(bl) · cos(2πl) + X*(t), if p ≥ 0.5   (7)
where p is a random number in [0, 1].
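The switch between shrinking encirclement and the spiral update (Equation (7)) can be sketched as follows. The helper `woa_update`, which operates on a single whale, is a hypothetical name of our own, and the spiral constant b = 1 is a common choice rather than a value prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
b = 1.0  # logarithmic-spiral shape constant (assumed value)

def woa_update(x, x_best, a):
    """Position update of one whale x (1-D array) following Eq. (7)."""
    p = rng.random()
    if p < 0.5:                              # shrinking encircling, Eqs. (1)-(2)
        A = 2.0 * a * rng.random(x.size) - a
        C = 2.0 * rng.random(x.size)
        D = np.abs(C * x_best - x)
        return x_best - A * D
    else:                                    # spiral update, Eqs. (5)-(6)
        l = rng.uniform(-1.0, 1.0)
        D_prime = np.abs(x_best - x)
        return D_prime * np.exp(b * l) * np.cos(2 * np.pi * l) + x_best
```

Each whale thus chooses, with equal probability, to tighten the circle around the prey or to move along a logarithmic spiral toward it.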

2.3. Search for Prey

In the exploration phase, the agents update their positions with respect to a randomly selected whale. A random value of A greater than 1 or less than −1 lets them move far away from the prey; this mechanism, applied when |A| ≥ 1, makes the algorithm perform a global search (whereas |A| < 1 emphasizes exploitation). The equations are expressed as follows:
D = |C · X_rand − X|   (8)
X(t + 1) = X_rand(t) − A · D   (9)
where X_rand(t) denotes a random position vector chosen from the current population.
The WOA starts with a random population and then updates the solutions at each iteration. When the termination condition is satisfied, the algorithm returns the best solution found.
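The exploration step of Equations (8)–(9), in which each whale follows a randomly chosen member of the population, might look like the sketch below. The loop-based layout and the function name `search_for_prey` are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def search_for_prey(X, a):
    """Exploration step: each whale moves relative to a random whale (Eqs. 8-9)."""
    n, dim = X.shape
    X_new = np.empty_like(X)
    for i in range(n):
        j = rng.integers(n)              # random whale from the current population
        A = 2.0 * a * rng.random(dim) - a
        C = 2.0 * rng.random(dim)
        D = np.abs(C * X[j] - X[i])      # Eq. (8)
        X_new[i] = X[j] - A * D          # Eq. (9)
    return X_new
```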

3. The Proposed FWOA

Based on classical WOA, this section proposes a hybrid whale-firefly optimization algorithm based on multi-populations and Opposition-Based Learning. The FWOA has three improvements: multi-populations, hybrids with the firefly algorithm (FA), and the perception of food.

3.1. The Multi-Populations

Like primitive humans, all social creatures divide labor and cooperate according to the task type. The division of labor among ants ensures the stability of their society, and that among wolves ensures the efficiency of hunting. Research shows that whales also divide labor and cooperate, dividing the total population into several subpopulations; each subpopulation has its own task, and the entire whale population preys in this way.
In order to make the algorithm more consistent with the natural mechanism and to improve its performance while balancing exploitation and exploration, this paper divides the initial whale population into two subpopulations: (1) the search population (SP) and (2) the hunt population (HP). The number of whales in each subpopulation accounts for half of the total population, and different tasks are assigned to the two populations to achieve division of labor and cooperation.
The main task of the search population (SP) is to search (exploration). Through its fast exploration of the search space, it can find the region that is most likely to have the optimal solution. After each search, it will continue to look for other possible locations for the best solution. Through this mechanism, the exploration ability of the algorithm is greatly improved, which enables the algorithm to quickly find the location of the optimal solution.
The main task of the hunt population (HP) is to hunt (exploitation). After the search population has locked down the optimal value area, the hunt population will be exploited in this area to find the optimal value. This mechanism ensures the exploitation ability of the classic WOA. Furthermore, different tasks make the two populations focus on different aspects at the same time. The hunt population focuses on exploitation, and the search population focuses on exploration, realizing the balance between exploitation and exploration. The tasks of the hunt population and search population in different phases are described as follows:
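Under the description above, the even split into the two subpopulations could be initialized as sketched below; the helper name `init_populations` and the population sizes are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def init_populations(n_total, dim, lb, ub):
    """Randomly initialize the whales, then split them evenly into SP and HP."""
    X = rng.uniform(lb, ub, (n_total, dim))
    half = n_total // 2
    return X[:half], X[half:]            # search population explores, hunt population exploits

sp, hp = init_populations(100, 30, -100.0, 100.0)
```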

3.1.1. Search Prey

In the search for prey phase, in order to reflect the independence between populations, two populations randomly select a leader whale from their own populations and update the position according to the position of their own leader whale. The position update method for the search population is as follows:
D_s = |C · X_r,s − X_s|   (10)
X_s(t + 1) = X_r,s(t) − A · D_s   (11)
where D_s is the distance between the current whale and the leader whale randomly selected in the search population, X_r,s is the position of the selected leader whale, and X_s is the position of the current whale in the search population.
The position update method for the hunt population is as follows:
D_h = |C · X_r,h − X_h|   (12)
X_h(t + 1) = X_r,h(t) − A · D_h   (13)
where D_h is the distance between the current whale and the leader whale randomly selected in the hunt population, X_r,h is the position of the selected leader whale, and X_h is the position of the current whale in the hunt population.

3.1.2. Encircling Prey

In the encircling prey phase, in order to reflect the cooperation of the populations and improve the efficiency of the algorithm, the search population and the hunt population are merged to form a combined population (CP). The search method of the combined population follows the encircling-prey phase of the classical WOA. In this phase, the combination of the populations is realized, which improves the computational efficiency of the algorithm. The position update method for the population is as follows:
D_c = |C · X*(t) − X_c(t)|   (14)
X_c(t + 1) = X*(t) − A · D_c   (15)
where D c is the distance between the current whale and the best whale in the combined population, X is the position of the leader whale, and X c is the position of the current whale.

3.1.3. Bubble-Net Attacking Method (Exploitation Phase)

In contrast to the above phases, in the bubble-net attacking method the two populations are assigned different tasks. To emphasize exploration, each whale in the search population first randomly selects a leader whale from the search population and then updates its position toward this leader, performing a search behavior that improves the exploration ability of the algorithm. This method is expressed by Equations (10) and (11).
The hunt population uses the position update method of classical WOA, and it is as follows:
D_h = |X*(t) − X_h(t)|   (16)
X_h(t + 1) = D_h · e^(bl) · cos(2πl) + X*(t)   (17)
where D_h is the distance between the current whale and the best whale, X*(t) is the position of the best whale, and X_h(t + 1) is the updated position of the current whale in the hunt population.

3.2. The Perception of Food and the Hybrid with the Firefly Algorithm

3.2.1. The Perception of Food

Nature is full of wonders. Spiders' senses help them avoid danger. Studies have found that, like spiders, whales also have a form of perception, in their case a perception of food, which may be based on smell or temperature. This perception allows whales to quickly explore areas where food may exist when hunting, improving hunting efficiency and providing more food for the whales. Compared with its excellent exploitation ability, the exploration ability of the classical WOA is slightly inferior: because of its exploration mechanism, the WOA does not have a good search direction during exploration but selects the direction randomly. Although this mechanism provides good randomness, it is relatively inefficient. To address this weakness, this section applies the whale's perception ability to the classical WOA to improve its exploration ability and efficiency.
The purpose of food perception is to guide the whales in the direction of predation, so after each iteration of the algorithm the entire population conducts a food perception, through which the optimal search direction of the population is found. At the same time, in order to preserve the randomness of the algorithm and avoid getting stuck in local optima, after each perception the perceived direction is compared with that of the current best searcher, and the better of the two becomes the position-update direction of the entire population. This improves the exploration ability and enables the algorithm to find the optimal value quickly.

3.2.2. Firefly Algorithm (FA)

The Firefly Algorithm (FA) [21] was proposed by Xin-She Yang in 2008. It is an idealization of the flashing behavior of fireflies [25]. There are several important quantities in the firefly algorithm: (1) the light intensity I and the attractiveness β; (2) the horizontal position x_i of a firefly; (3) the vertical position y_i of a firefly; (4) the distance r_i,j between fireflies i and j; and (5) the intensity I_s of the light source.
The brightness I of a firefly at position x is proportional to the objective function:
I(x) ∝ f(x)   (18)
The attractiveness varies with distance. Without absorption, the light intensity I(r) follows the inverse-square law:
I(r) = I_s / r²   (19)
Considering a medium with a static light absorption coefficient γ, the intensity I varies with the distance r as:
I = I_0 · e^(−γr)   (20)
where I_0 denotes the original light intensity.
The attractiveness β of a firefly can be approximated as:
β = β_0 / (1 + γr²)   (21)
where β 0 is the attractiveness level when r = 0 .
Then, the distance between two fireflies can be calculated. Let fireflies i and j be located at (x_i, y_i) and (x_j, y_j); the distance between them is:
r_i,j = √((x_i − x_j)² + (y_i − y_j)²)   (22)
As mentioned above, firefly i is attracted by another, brighter firefly j, and the movement of firefly i towards firefly j is expressed mathematically as follows:
x_i = x_i + β_0 · e^(−γ · r_i,j²) · (x_j − x_i) + α · (rand − 1/2)   (23)
where rand is a random number in the interval [0, 1], α is the coefficient of the random displacement vector, γ is the light absorption coefficient of the environment, and r_i,j is the Euclidean distance between the two fireflies.
For the maximization problem, the brightness can be simply proportional to the objective function. Other forms of brightness can be defined in a way similar to the fitness function in a genetic algorithm or bacterial foraging algorithm (BFA) (Algorithm 1).
Algorithm 1 Pseudocode of the Firefly Algorithm
Define the objective function f(x), x = (x_1, x_2, …, x_d)
Generate the initial population of fireflies x_i (i = 1, 2, …, n)
Define the light intensity I_i at x_i so that it is linked with f(x_i)
Define the light absorption coefficient γ
While (t < maximum number of generations)
 For i = 1 : n (for all fireflies in the sample space)
  For j = 1 : n (for all fireflies in the sample space)
   If (I_j > I_i)
    Move firefly i towards firefly j
   End if
   Vary the attractiveness with the distance r via exp(−γr²)
   Evaluate the new solution and update the light intensity
  End for
 End for
 Rank the fireflies and find the current best
End while
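Algorithm 1 can be sketched in Python roughly as follows. This is an illustrative, non-optimized implementation; the parameter defaults β0 = 1, γ = 1, α = 0.2 are common FA choices rather than values taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def firefly_step(X, fitness, beta0=1.0, gamma=1.0, alpha=0.2):
    """One sweep of Algorithm 1: every firefly moves toward each brighter one (Eq. 23)."""
    n, dim = X.shape
    I = fitness(X)                                  # brightness proportional to the objective
    for i in range(n):
        for j in range(n):
            if I[j] > I[i]:                         # firefly i is attracted by brighter firefly j
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                X[i] = X[i] + beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                I[i] = fitness(X[i:i + 1])[0]       # re-evaluate the moved firefly
    return X
```

For a maximization problem, `fitness` can simply return the objective values of the rows of `X`, as described above.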

3.2.3. The Hybrid of WOA and FA

Similar to fireflies' perception of light, food perception is an ability of whales, so in the food-perception phase the search agents perceive according to the FA method to find the optimal food direction. At the beginning of perception, each individual in the population randomly generates a food-perception position; the food position is not generated from the individual's current position, which ensures the randomness of the population. To improve the performance of the algorithm, a probabilistic selection is made during food perception to make it targeted: after the random food location is generated, a random number q is drawn. If q is less than 0.5, the food perception is updated as follows:
x_f = x_f + β_0 · e^(−γ · r_i,j²) · (x_f,j − x_f,i) + α · (q − 1/2), if q < 0.5   (24)
where r i , j is the Euclidean distance between food perception location i and food perception location j, and other parameters are as shown above.
If the value of q is greater than or equal to 0.5, a random position perception is performed to generate a new food position, which greatly improves the randomness of the algorithm while preserving the performance gain and keeping the algorithm steadily improving.
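A possible sketch of this food-perception step (Equation (24) plus the re-randomization branch) is given below. Treating the first perception position as the current best and clipping the result to the bounds are our own simplifying assumptions, as is the helper name `perceive_food`.

```python
import numpy as np

rng = np.random.default_rng(5)

def perceive_food(xf, lb, ub, beta0=1.0, gamma=1.0, alpha=0.2):
    """Food-perception update: FA-style attraction (Eq. 24) when q < 0.5, else re-randomize."""
    n, dim = xf.shape
    best = xf[0].copy()                  # assumption: treat the first perception as the best one
    for i in range(n):
        q = rng.random()
        if q < 0.5:                      # move the perception toward the best food direction
            r2 = np.sum((xf[i] - best) ** 2)
            beta = beta0 * np.exp(-gamma * r2)
            xf[i] = xf[i] + beta * (best - xf[i]) + alpha * (q - 0.5)
        else:                            # regenerate a completely random food position
            xf[i] = rng.uniform(lb, ub, dim)
    return np.clip(xf, lb, ub)
```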

3.3. Opposition-Based Learning

Opposition-Based Learning (OBL) is a strategy proposed by Hamid R. Tizhoosh in 2005 [26]. Its main idea is that, when solving for the solution x of a given problem, one usually needs to start from an estimated solution, and considering the opposite estimate x̃ at the same time can bring the search closer to the optimum.
In many cases, learning starts at random points (the initialization of the population): an algorithm starts with a random population and moves the solutions towards the optimal solution. Based on this thinking, it is beneficial to the efficiency of the algorithm to also evaluate the opposite number x̃ when searching for x.
Suppose x ∈ R and x ∈ [a, b]. The opposite number x̃ of x is calculated as follows:
x̃ = a + b − x   (25)
The formula can be extended to the multi-dimensional case. For x_i ∈ R with x_i ∈ [a_i, b_i], the opposite point is defined as follows:
x̃_i = a_i + b_i − x_i,  i = 1, 2, …, n   (26)
In the FWOA, the bounds a_i and b_i become the lower bound lb and the upper bound ub of the problem, and the x_i are the search agents; the equation is as follows:
x̃_p = lb + ub − x_i,  i = 1, 2, …, n   (27)
where x ˜ p is the opposite population of search agents.
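Equation (27) is a one-line elementwise reflection, sketched here under the simplifying assumption of scalar bounds lb and ub:

```python
import numpy as np

def opposite_population(X, lb, ub):
    """Opposition-Based Learning (Eq. 27): reflect each agent across the centre of [lb, ub]."""
    return lb + ub - X

# with symmetric bounds lb = -5, ub = 5 the opposite of x is simply -x
X = np.array([[1.0, -3.0], [4.0, 0.0]])
X_opp = opposite_population(X, lb=-5.0, ub=5.0)
```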
Figure 1 shows the mechanism of Opposition-Based Learning. Based on this mechanism, the exploration ability of the algorithm can be improved and its coverage of the search space increased.
The pseudocode of the FWOA is as follows (Algorithm 2):
Algorithm 2 Pseudocode of the FWOA
Initialize the whale populations: The search population x s and The hunt population: x h
Calculate the fitness of each search agent
X = the best search agent
while (t < maximum number of iterations)
 for each search agent
  Update a , A , C , l , p
   if1 ( p < 0.5 )
    if2 ( | A | < 1 )
     The combined population x c updates the position of the current search agent by the Equation (15).
    else if2 ( |A| ≥ 1 )
     Search population x s selects a random search agent x r , s by Equation (10)
     Search population x s updates the position of the current search agent by the Equation (11).
     Hunt population x h selects a random search agent x r , h by Equation (12)
     Hunt population x h updates the position of the current search agent by the Equation (13)
   end if2
  else if1 ( p ≥ 0.5 )
   Search population x s selects a random search agent x r , s by Equation (10)
   Search population x s updates the position of the current search agent by the Equation (11)
   Hunt population x h updates the position of the current search agent by the Equation (17)
  end if1
end for
  Initialize the perception of food population x f
  Update q
   If3 ( q < 0.5 )
    The combined population x_c updates the perception of food position by Equation (24)
   Else if3 ( q ≥ 0.5 )
    Initialize a new position of food
   end if3
 Find the opposite population x ˜ p by Equation (27)
 Check if any search agent goes beyond the search space and amend it
 Calculate the fitness of each search agent
 Update X if there is a better solution
t = t + 1
End while
Return X

4. Verification of FWOA

In this section, the FWOA is tested on 23 benchmark functions. These are classical functions used by many researchers [27,28,29,30,31]. Although the functions are simple, we chose them to compare the FWOA with current metaheuristic methods and verify its performance. Table 2, Table 3 and Table 4 list these benchmark functions. Generally speaking, they can be divided into three groups: unimodal functions, multi-modal functions, and fixed-dimension multi-modal functions; Table 2, Table 3 and Table 4 show these groups, respectively. Different types of functions place different emphases on performance. In the tables, Dimension is the dimension of the function, Range is the boundary of the function search space, and f_min is the optimum value.
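For reference, two representative benchmarks of the unimodal and multi-modal groups might be written as follows; the sphere function is commonly used as f1 and Rastrigin is a classic multi-modal example, but the exact definitions in Tables 2–4 may differ.

```python
import numpy as np

def sphere(x):
    """Unimodal benchmark (commonly f1): global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Classic multi-modal benchmark: many local minima, global minimum 0 at the origin."""
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))
```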

4.1. Experiment Setting

The maximum number of iterations of the algorithm is 1000, and the number of search agents is 100. Each algorithm runs independently on each benchmark function 30 times. In order to verify the results, the FWOA algorithm is compared with classic PSO [20], SSA [22], WOA [23], GWO [24], STOA [32], and SOA [33]. The statistical results (average, minimum, maximum, and standard deviation) are shown in Table 5, Table 6 and Table 7. The function graphs and algorithm convergence graphs are shown in Figure 2.

4.2. Exploitation Analysis

According to the results in Table 5, the FWOA provides very competitive results: it is superior to the other algorithms on f_1–f_7. It should be noted that unimodal functions benchmark exploitation. These results therefore show that the FWOA has better performance in finding the optimal value of a function, which is due to the food perception mechanism discussed earlier.

4.3. Exploration Analysis

Compared with unimodal functions, multi-modal functions have many local optimal values, and their complexity grows exponentially with the dimension, so the requirements for algorithm performance of multi-modal functions are stricter. Therefore, they are suitable for benchmarking the exploration abilities of algorithms.
According to the results in Table 6, the FWOA also provides very competitive results on the multi-modal and fixed-dimension multi-modal functions. The FWOA is superior to the other algorithms on most functions (f_8–f_12, f_14–f_16, f_20–f_23). This is reflected in the fact that the FWOA finds best values smaller than those of all the compared algorithms, and the maximum value found by the FWOA is also the smallest of all the algorithms. In addition, compared with GWO and PSO, which have good exploration capabilities, the FWOA shows remarkable performance and often surpasses them. These results show that the FWOA has research value.

4.4. The Standard Deviation Analysis

The standard deviation is the arithmetic square root of the variance and reflects the degree of dispersion of a data set; it is most commonly used in probability and statistics as a measure of the spread of a distribution. A large standard deviation means that most values differ greatly from their average; a small standard deviation means that the values are close to the average, so the differences between the data are small. In algorithm analysis, the smaller the standard deviation, the better the stability and robustness of the algorithm.
According to the results in Table 5, Table 6 and Table 7, the standard deviation of the FWOA is the smallest in most cases, which means that the FWOA has strong stability and provides relatively stable calculations. This is due to the multi-population mechanism of the algorithm: the division of labor and cooperation among the populations enables the algorithm to balance exploitation and exploration, so it maintains stability while keeping good performance.

4.5. The Convergence Analysis

This section examines the convergence of the FWOA. According to Digalakis [28], in the initial steps of optimization the movements of the search agents should change abruptly, which helps a metaheuristic explore the search space widely; these changes should then be reduced to emphasize exploitation at the end of optimization. To observe the convergence behavior of the FWOA, its convergence curves are shown in Figure 2. In most cases, the FWOA converges fastest, owing to the search population discussed before.
To sum up, compared with the well-known metaheuristic algorithm, the experimental results verify the performance of the FWOA algorithm in solving various benchmark functions. In order to further study the performance of the proposed algorithm, a practical problem (two different problem environments) is used in the following section.
The algorithm is compared with different well-known algorithms to verify its effectiveness.

5. Using FWOA to Solve the Mobile Robot Path Planning Problem

The mobile robot path planning problem (MRPP) is a famous research problem, and there are many different methods to solve it. Zhang Z [34] proposed a method based on the A-star and Dijkstra algorithms; Z Cen [35] proposed a method based on genetic algorithms and the A* algorithm; Y Lü [36] proposed a method based on directional relationships with uncertain environmental information; Y Cheng [37] proposed a distributed snake algorithm for mobile robot path planning with curvature constraints; Kurihara K [38] proposed a mobile robot path planning method in the presence of moving obstacles; Msg A [39] proposed an intelligent approach for autonomous mobile robot path planning based on an adaptive neuro-fuzzy inference system; and Zhang Z [40] proposed a method based on a dynamic movement primitives library.
The experimental environment of this study is divided into two parts: (1) an irregular-obstacle environment with no influence range and (2) a regular-obstacle environment with an influence range. The irregular-obstacle environment with no influence range simulates the shapes of different obstacles in a real environment, and the robot searches for the optimal collision-free path in it. In the regular-obstacle environment with an influence range, the obstacles are circular. This environment simulates a real environment in which objects of different shapes produce an influence range: when the robot approaches an obstacle, collisions of different degrees may occur, resulting in different motion conditions (1: the robot's motion is not affected; 2: the motion is slightly affected; 3: a collision occurs and the robot cannot move). The influence range of an obstacle is measured from the center of its circle, and the influence decreases linearly with the distance between the robot's position and the center, so as to simulate the real environment. The method of solving the MRPP with the FWOA is introduced as follows:

5.1. Irregular Obstacle Environment with No Influence Range

Mobile robot path planning is an important task in intelligent robot research, and the first step is to model the environment. In the irregular obstacle environment with no influence range, this paper uses the grid method for modeling. The grid method decomposes the workspace into several simple areas to establish an environment model that is convenient for computing path plans; in this way, the physical space is mapped into an abstract space. A free grid point is represented by 0, and an obstacle point is represented by 1. This mechanism allows irregular obstacles to be modeled and is also convenient for computation.
On the two-dimensional map shown in Figure 3, in order to solve the path planning problem, we make the following assumptions: (1) the mobile robot only moves in the set search space; (2) there are n static irregular obstacles of different shapes in the robot motion space, described by the grid method; the obstacles have no influence range, and a path is infeasible when the robot hits an obstacle; (3) the mobile robot is regarded as a particle [41], and its size is ignored. Under these assumptions, each obstacle is expanded to a radius R_s, obtained by:
$R_s = R + \sigma$
where R is the original obstacle radius and σ is a safety distance, chosen manually to keep the mobile robot from contacting obstacles.
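The 0/1 grid model and the safety expansion R_s = R + σ can be sketched as follows (a minimal Python illustration; the paper's simulations are in MATLAB, and the function name and one-cell safety margin are our assumptions):

```python
import numpy as np

def inflate_obstacles(grid, safe_cells):
    """Expand every obstacle cell (1) by `safe_cells` cells in all
    directions, a discrete analogue of R_s = R + sigma."""
    inflated = grid.copy()
    rows, cols = grid.shape
    for r, c in np.argwhere(grid == 1):
        r0, r1 = max(0, r - safe_cells), min(rows, r + safe_cells + 1)
        c0, c1 = max(0, c - safe_cells), min(cols, c + safe_cells + 1)
        inflated[r0:r1, c0:c1] = 1
    return inflated

# A 5x5 map: 0 = free grid point, 1 = obstacle point.
grid = np.zeros((5, 5), dtype=int)
grid[2, 2] = 1
inflated = inflate_obstacles(grid, 1)  # one-cell safety margin
print(int(inflated.sum()))  # 9: the obstacle grew into a 3x3 block
```

Planning is then performed on the inflated grid, so any path that avoids cells marked 1 keeps at least the safety distance from every real obstacle.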
The robot moves in eight directions, as shown in Figure 4. By combining these moves, the robot can reach any grid point in the search space [41]. The cost function of this model is the robot's travel distance in two-dimensional space.
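The eight-direction move set and the distance cost function can be written down directly (Python for illustration; the names are ours, not the paper's):

```python
import math

# The eight admissible moves of Figure 4: 4 axis-aligned, 4 diagonal.
MOVES = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
         (0, 1), (1, -1), (1, 0), (1, 1)]

def path_length(path):
    """Cost function: total Euclidean length of a grid path.
    An axis-aligned move costs 1 and a diagonal move costs sqrt(2)."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

# Two axis steps followed by one diagonal step: 2 + sqrt(2).
print(round(path_length([(0, 0), (0, 1), (1, 1), (2, 2)]), 4))  # 3.4142
```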

5.2. Regular Obstacle Environment with Influence Range

Because real obstacles have irregular shapes, their influence ranges differ. When a robot encounters an obstacle while moving through the environment, three situations may occur: (1) the robot stops moving; (2) its motion is affected but not stopped; or (3) its motion is unaffected. Based on these situations, this section introduces a more realistic obstacle environment with a regular influence range.
The path planning problem in this environment is to find a connection between the starting point and the target point with the least threat, as shown in Figure 3. Point S is the starting point, and point t is the target point. To simplify the problem, the overall problem is divided into several sub-problems by decomposition: the starting point and the target point are connected by a line, the connection is divided into m segments, path planning is carried out for each segment, and the total path length is the sum of the sub-paths.
In [43], an obstacle probability density model based on UAV movement is introduced. In that model, the influence range of an obstacle has no hard boundary: it decreases as the distance between the UAV and the obstacle center increases but never reaches zero. Based on this theory, the probability density model is given as follows:
$C_{influence} = \exp\left( -\sum_{i=1}^{n} \frac{\| d_i \|^2}{\delta} \right)$
where δ is a parameter that controls the shape of the density function, and ‖d_i‖ denotes the distance from the moving object to the i-th obstacle.
For a robot, the impact of environmental obstacles likewise decreases with distance from the obstacle center, so this model can be used to model the robot path planning problem. However, a collision affects a robot far more severely than obstacles affect a UAV, and the original probability density value drops too fast to simulate the environment realistically. To solve this problem, the probability model is improved as follows; Figure 6 shows the modified probability density value, whose curve is smoother than the original one:
$C_{influence} = \exp\left( -\sum_{i=1}^{n} \frac{\| d_i \|}{\delta} \right)$
The improved probability model descends more slowly, which better matches the robot's situation. Based on this probability density model, and combined with the path planning model in [3], let D be the length of the motion path, S the length of the sub-problem segment, and w a weight. The following model is proposed to find the shortest path while accounting for the influence range:
$C = w \, C_{influence} + (1 - w) \, \frac{D}{S}$
With this objective function, the path planning problem can be modeled and solved.
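The influence term and the weighted objective C for one sub-segment might be computed as follows (a Python sketch under the stated definitions; the function names and the point/centre tuples are illustrative assumptions, not the paper's code):

```python
import math

def influence(point, centers, delta):
    """C_influence = exp(-sum_i ||d_i|| / delta): the threat decays with
    distance from each obstacle centre but never reaches zero."""
    return math.exp(-sum(math.dist(point, c) for c in centers) / delta)

def segment_cost(prev_pt, pt, centers, delta, seg_len, w):
    """Weighted objective C = w * C_influence + (1 - w) * D / S for one
    of the m sub-segments between start and target."""
    d = math.dist(prev_pt, pt)
    return w * influence(pt, centers, delta) + (1 - w) * d / seg_len

# A point close to an obstacle centre is threatened more than a far one.
print(influence((0, 0), [(1, 1)], 10.0) > influence((0, 0), [(5, 5)], 10.0))
```

Summing `segment_cost` over all m segments gives the fitness that FWOA minimizes in this environment; w trades off threat avoidance against path length.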

6. Simulation Results and Analysis

This section introduces the simulation experiment settings and the analysis of the experiments. The algorithm is tested in ten different mobile robot working environments, and the results show that FWOA is highly competitive.

6.1. Experimental Setting

To evaluate the quality of the algorithm, FWOA is applied to various mobile robot working environments. The experiments are divided into two groups: (1) the irregular obstacle environment with no influence range; and (2) the regular obstacle environment with an influence range.
As shown in Figures 6–21, five working environments are established for the irregular obstacle environment with no influence range: Environments 1 through 5. The map size is set to 20 × 20, and the map complexity increases gradually. The tests are divided into three groups: (1) Environments 1 and 2 mainly test obstacles located near the starting point; (2) Environment 3 tests obstacles distributed over the whole map; and (3) Environments 4 and 5 test obstacles in the middle of the map and near the target point. Complex maps are a challenge for mobile robots; through the above tests, we can find the global optimal path and demonstrate the performance of FWOA. The starting point of the map is (0,0), represented by a red circle; the target point is (20,20), represented by a green square; and obstacle outlines are represented by red rectangles. The number of iterations is set to 500, the population size to 60, and the dimension to 30. The algorithm runs independently 30 times in each environment. Meanwhile, a p-value test is performed to show the differences between the algorithms.
As shown in Figures 22–31, five working environments are established for the regular obstacle environment with influence range: Environments 6 through 10, each with a circular influence range. The starting point is (0,0) (represented by a black *), the target point is (500,0) (represented by a hollow square), and each obstacle is represented by a circle. The influence of an obstacle decreases with increasing radius. The number of iterations is set to 500, the dimension to 30, and the population size to 60. The algorithm runs independently 30 times in each environment, and a p-value test is likewise performed to show the differences between the algorithms.
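The p-values over 30 independent runs are commonly obtained with a Wilcoxon rank-sum test; a self-contained sketch using the normal approximation follows (the paper does not specify its test implementation, so this is an assumed setup):

```python
from statistics import NormalDist

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum test (normal approximation),
    e.g. for comparing two algorithms' 30 per-run path lengths."""
    combined = sorted(a + b)
    # Assign 1-based ranks, averaging over ties.
    ranks, i = {}, 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2
        i = j
    n1, n2 = len(a), len(b)
    w = sum(ranks[v] for v in a)           # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2            # mean of W under H0
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (w - mu) / sigma if sigma > 0 else 0.0
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Clearly separated samples give a small p-value.
print(rank_sum_p([1, 2, 3, 4, 5], [10, 11, 12, 13, 14]))
```

A p-value below 0.05 indicates that the difference between the two algorithms' run results is statistically significant.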
Figure 6. Modified C_influence.
Figure 7. The convergence graph of Environment 1.
Figure 8. The boxplot of Environment 1.
Figure 9. The convergence graph of Environment 2.
Figure 10. The boxplot of Environment 2.
All simulations are implemented in MATLAB R2022a and run on an AMD Ryzen 9 5900HX with Radeon Graphics CPU at 3.30 GHz with 16 GB of RAM under Windows 11.

6.2. Result Analysis

This section analyzes the experimental results in two groups: (1) for the irregular obstacle environment with no influence range, the results are shown in Tables 8–12, the convergence graphs and boxplots in Figures 7–16, the path graphs in Figures 17–21, and the p-values in Table 13; (2) for the regular obstacle environment with influence range, the results are shown in Tables 14–18, the convergence graphs and boxplots in Figures 22–31, the path graphs in Figures 32–36, and the p-values in Table 19. The analysis of the ten working environments in the two groups of experiments shows that FWOA has excellent performance and remarkable stability.

6.2.1. Irregular Obstacles with No Influence Range

The number of feasible paths for a robot decreases as obstacle density increases, and obstacles at different positions affect path selection differently. As shown in Figure 17 and Figure 18, because of the limitation on the robot's moving directions, irregular obstacles near the starting point have the main influence on the path, since they determine the robot's subsequent moving direction; FWOA's path planning method reduces this influence. Meanwhile, because of its balance between exploitation and exploration, FWOA can always find the best moving path. Figure 19 shows a working environment with obstacles distributed globally; here, due to the dense obstacles, path planning must focus on judging path feasibility. Figure 20 and Figure 21 show obstacle environments near the target point, which are relatively simple compared with Environments 1 and 3.
The experimental results are shown in Tables 8–12. For the mobile robot path planning problem, Zhang Zhen proposed a new neighborhood search strategy to improve the fitness value of the globally optimal individual; inspired by Ref. [43], this paper adopts a search method based on the search population. The experimental results show that this method has a significant effect: the best moving path is found in every working environment. Moreover, over 30 independent runs in the five working environments, FWOA attains the minimum average path length, and the best values it finds are also among the best. The standard deviations show that FWOA is stable and robust.
The convergence graphs show that FWOA always converges earliest among the six algorithms, which means that FWOA has strong exploration ability and can quickly traverse the search space to find the optimal path. This is due to the algorithm's food-perception mechanism: while different searcher populations seek the optimum, the perception of the search space (neighborhood) improves the algorithm's exploration ability, making FWOA's performance competitive. The boxplots show that the stability of FWOA is not inferior to that of the other algorithms; in general, its stability is remarkable. Table 13 shows that FWOA is significantly different from the other algorithms in the irregular obstacle environment without influence range.
Figure 11. The convergence graph of Environment 3.
Figure 12. The boxplot of Environment 3.
Figure 13. The convergence graph of Environment 4.
Figure 14. The boxplot of Environment 4.
Figure 15. The convergence graph of Environment 5.
Figure 16. The boxplot of Environment 5.
Figure 17. The path of Environment 1.
Figure 18. The path of Environment 2.
Figure 19. The path of Environment 3.
Figure 20. The path of Environment 4.
Figure 21. The path of Environment 5.
To sum up, in the irregular obstacle environment with no influence range, FWOA shows remarkable performance and is very competitive.

6.2.2. Regular Obstacle Environment with Influence Range

This section describes the performance of FWOA in the regular obstacle environment with an influence range. To study the efficiency of different algorithms and show the performance of FWOA, this paper compares FWOA with classical Particle Swarm Optimization (PSO) [20], the Firefly Algorithm (FA) [21], WOA [23], the Seagull Optimization Algorithm (SOA) [33], the Sooty Tern Optimization Algorithm (STOA) [32], and Harmony Search (HS) [44].
The convergence graphs (Figure 22, Figure 24, Figure 26, Figure 28 and Figure 30) show that FWOA converges faster than the other algorithms. Owing to its good exploration capability, FWOA not only converges rapidly but also maintains accuracy; its exploration capability is almost twice that of the classical WOA. Thus, the performance of FWOA is very competitive.
The boxplots (Figure 23, Figure 25, Figure 27, Figure 29 and Figure 31) show that the length of the optimal path found by FWOA varies very little over 30 independent runs, which means that FWOA has remarkable stability and finds the optimal value in each run. Combined with the convergence graphs, FWOA has remarkable accuracy, stability, and robustness compared with the other algorithms; this is attributed to the algorithm's multi-population mechanism, which balances exploitation and exploration.
Figure 22. The convergence graph of Environment 6.
Figure 23. The boxplot of Environment 6.
Figure 24. The convergence graph of Environment 7.
Figure 25. The boxplot of Environment 7.
Figure 26. The convergence graph of Environment 8.
Figure 27. The boxplot of Environment 8.
Figure 28. The convergence graph of Environment 9.
Figure 29. The boxplot of Environment 9.
Figure 30. The convergence graph of Environment 10.
Figure 31. The boxplot of Environment 10.
The path graphs (Figure 32, Figure 33, Figure 34, Figure 35 and Figure 36) show that the path found by FWOA avoids the obstacle-affected areas, even where obstacles are dense: FWOA avoids all the obstacle-affected areas and reaches the target point, which indicates strong optimization ability.
Figure 32. The path of Environment 6.
Figure 33. The path of Environment 7.
Figure 34. The path of Environment 8.
Figure 35. The path of Environment 9.
Figure 36. The path of Environment 10.
The experimental results (Tables 14–18) show that FWOA ranks first in every experiment, with path lengths significantly shorter than those of the other algorithms; indeed, FWOA's worst result is better than the other algorithms' best results. Table 19 shows that FWOA is significantly different from the other algorithms in the regular obstacle environment with influence range. Overall, the experimental results show that FWOA has remarkable performance.
In conclusion, FWOA balances exploitation and exploration and has strong stability. It has demonstrated its competitiveness in the experiments and performs remarkably on practical problems, so it can be applied to more complex practical problems.

7. Conclusions and Future Work

This paper verifies the performance of FWOA and its ability to solve the MRPP by comparing it with other intelligent algorithms. For the MRPP, traditional algorithms tend to fall into local optima and converge slowly [45]. For these reasons, this paper proposes FWOA, which features fast convergence, remarkable exploration, and strong optimization ability. The algorithm is studied on 23 benchmark functions to analyze its exploitation, exploration, and convergence behavior, and FWOA is found to be sufficiently competitive with other metaheuristic algorithms. Meanwhile, this paper tests the algorithm in two different environments and analyzes its ability to solve practical problems. The experimental results show significant progress, indicating that FWOA has great advantages in solving the MRPP. In the future, applying FWOA to complex, large-scale practical problems will be meaningful.

Author Contributions

Conceptualization, methodology, T.T. and Z.L.; software, Z.L.; writing—original draft preparation, Y.W.; writing—review and editing, Y.Z. and Q.L.; funding acquisition, Y.Z. All authors have read and agreed to the published version of this manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant No. U21A20464, 62066005.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Patle, B.K.; Pandey, A.; Parhi, D.R.K.; Jagadeesh, A.J.D.T. A review: On path planning strategies for navigation of mobile robot. Def. Technol. 2019, 15, 25. [Google Scholar] [CrossRef]
  2. Eyuboglu, M.; Atali, G. A novel collaborative path planning algorithm for 3-wheel omnidirectional Autonomous Mobile Robot. Robot. Auton. Syst. 2023, 169, 104527. [Google Scholar] [CrossRef]
  3. Marashian, A.; Razminia, A. Mobile robot’s path-planning and path-tracking in static and dynamic environments: Dynamic programming approach. Robot. Auton. Syst. 2023, 172, 104592. [Google Scholar] [CrossRef]
  4. Majer, N.; Luithle, L.; Schürmann, T.; Schwab, S.; Hohmann, S. Game-Theoretic Trajectory Planning of Mobile Robots in Unstructured Intersection Scenarios. IFAC-PapersOnLine 2023, 56, 11808–11814. [Google Scholar] [CrossRef]
  5. Li, G.; Liu, C.; Wu, L.; Xiao, W. A mixing algorithm of ACO and ABC for solving path planning of mobile robot. Appl. Soft Comput. 2023, 148, 110868. [Google Scholar] [CrossRef]
  6. Yu, Z.; Yuan, J.; Li, Y.; Yuan, C.; Deng, S. A path planning algorithm for mobile robot based on water flow potential field method and beetle antennae search algorithm. Comput. Electr. Eng. 2023, 109, 108730. [Google Scholar] [CrossRef]
  7. Zhang, D.; Luo, R.; Yin, Y.-B.; Zou, S.-L. Multi-objective path planning for mobile robot in nuclear accident environment based on improved ant colony optimization with modified A∗. Nucl. Eng. Technol. 2023, 55, 1838–1854. [Google Scholar] [CrossRef]
  8. Muir, P.F.; Neuman, C.P. Kinematic modeling of wheeled mobile robots. J. Robot. Syst. 1987, 4, 281–340. [Google Scholar] [CrossRef]
  9. Ou, J.; Hong, S.H.; Song, G.; Wang, Y. Hybrid path planning based on adaptive visibility graph initialization and edge computing for mobile robots. Eng. Appl. Artif. Intell. 2023, 126, 107110. [Google Scholar] [CrossRef]
  10. Zhu, G.; Wei, P. Low-Altitude UAS Traffic Coordination with Dynamic Geofencing. In Proceedings of the 16th AIAA Aviation Technology, Integration, and Operations Conference, Washington, DC, USA, 13–17 June 2016. [Google Scholar]
  11. Hermand, E.; Nguyen, T.W.; Hosseinzadeh, M.; Garone, E. Constrained Control of UAVs in Geofencing Applications. In Proceedings of the 2018 26th Mediterranean Conference on Control and Automation (MED), Zadar, Croatia, 19–22 June 2018; pp. 217–222. [Google Scholar]
  12. Kim, J.; Atkins, E. Airspace Geofencing and Flight Planning for Low-Altitude, Urban, Small Unmanned Aircraft Systems. Appl. Sci. 2022, 12, 576. [Google Scholar] [CrossRef]
  13. Sathiya, V.; Chinnadurai, M.; Ramabalan, S. Mobile robot path planning using fuzzy enhanced improved multi-Objective particle swarm optimization (FIMOPSO). Expert Syst. Appl. 2022, 198, 116875. [Google Scholar] [CrossRef]
  14. Lazarowska, A. Discrete Artificial Potential Field Approach to Mobile Robot Path Planning. IFAC-PapersOnLine 2019, 52, 277–282. [Google Scholar] [CrossRef]
  15. Zhang, C.; Xi, Y. Sub-optimality analysis of mobile robot rolling path planning. Sci. China Ser. 2003, 46, 116–125. [Google Scholar] [CrossRef]
  16. Li, G.S.; Chou, W.S. Path planning for mobile robot using self-adaptive learning particle swarm optimization. Sci. China Inf. Sci. 2018, 61, 052204. [Google Scholar] [CrossRef]
  17. Oftadeh, R.; Mahjoob, M.; Shariatpanahi, M. A novel meta-heuristic optimization algorithm inspired by group hunting of animals: Hunting search. Comput. Math. Appl. 2010, 60, 2087–2098. [Google Scholar] [CrossRef]
  18. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper Optimisation Algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef]
  19. Chu, S.C.; Tsai, P.W.; Pan, J.S. Cat Swarm Optimization. In Proceedings of the PRICAI: Trends in Artificial Intelligence, 9th Pacific Rim International Conference on Artificial Intelligence, Guilin, China, 7–11 August 2006; pp. 854–858. [Google Scholar]
  20. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  21. Yang, X.S. Firefly Algorithm, Stochastic Test Functions and Design Optimisation. Int. J. Bio-Inspired Comput. 2010, 2, 78–84. [Google Scholar] [CrossRef]
  22. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  23. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  24. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  25. Yang, X.S. Multiobjective firefly algorithm for continuous optimization. Eng. Comput. 2013, 29, 175–184. [Google Scholar] [CrossRef]
  26. Tizhoosh, H.R. Opposition-Based Learning: A New Scheme for Machine Intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC’06), Vienna, Austria, 28–30 November 2005; Volume 1, pp. 695–701. [Google Scholar]
  27. Xin, Y.; Yong, L. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar] [CrossRef]
  28. Digalakis, J.G.; Margaritis, K.G. On benchmarking functions for genetic algorithms. Int. J. Comput. Math. 2001, 77, 481–506. [Google Scholar] [CrossRef]
  29. Molga, M.; Smutnicki, C. Test functions for optimization needs. Comput. Inform. Sci. 2005, 101, 48. [Google Scholar]
  30. Yang, X.-S. Test Problems in Optimization. In Engineering Optimization: An Introduction with Metaheuristic Applications; John Wiley & Sons: Hoboken, NJ, USA, 2010. [Google Scholar]
  31. Mirjalili, S.; Lewis, A. S-shaped versus V-shaped transfer functions for binary Particle Swarm Optimization. Swarm Evol. Comput. 2013, 9, 1–14. [Google Scholar] [CrossRef]
  32. Dhiman, G.; Kaur, A. STOA: A bio-inspired based optimization algorithm for industrial engineering problems. Eng. Appl. Artif. Intell. 2019, 82, 148–174. [Google Scholar] [CrossRef]
  33. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 2019, 165, 169–196. [Google Scholar] [CrossRef]
  34. Zhang, Z.; Zhao, Z. A Multiple Mobile Robots Path planning Algorithm Based on A-star and Dijkstra Algorithm. Int. J. Smart Home 2014, 8, 75–86. [Google Scholar] [CrossRef]
  35. Cen, Z.; Qiang, Z.; Wei, X. Robotic Global Path-Planning Based Modified Genetic Algorithm and A∗ Algorithm. In Proceedings of the 2011 Third International Conference on Measuring Technology and Mechatronics Automation, Shanghai, China, 6–7 January 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 167–170. [Google Scholar]
  36. Lü, Y.; Chen, Z. A path planning algorithm based on a directional relationship with uncertain environment information. J. Univ. Sci. Technol. China 2013, 43, 782–789. [Google Scholar]
  37. Cheng, Y.; Jiang, P.; Hu, Y.F. A Distributed Snake Algorithm for Mobile Robots Path Planning with Curvature Constraints. In Proceedings of the IEEE International Conference on Systems, Singapore, 12–15 October 2008; IEEE: Piscataway, NJ, USA, 2009; pp. 2056–2062. [Google Scholar]
  38. Kurihara, K.; Nishiuchi, N.; Hasegawa, J.; Masuda, K. Mobile Robots Path Planning Method with the Existence of Moving Obstacles. In Proceedings of the IEEE Conference on Emerging Technologies & Factory Automation, Catania, Italy, 19–22 September 2005; IEEE: Piscataway, NJ, USA, 2005; Volume 1, pp. 195–202. [Google Scholar]
  39. Msg, A.; Hbj, B. An intelligent approach for autonomous mobile robots path planning based on adaptive neuro-fuzzy inference system. ScienceDirect 2021, 13, 101491. [Google Scholar]
  40. Zhang, Z.; He, R.; Yang, K. A bioinspired path planning approach for mobile robots based on improved sparrow search algorithm. Adv. Manuf. 2022, 10, 114–130. [Google Scholar] [CrossRef]
  41. Mei, Z.; Chen, Y.; Jiang, M.; Wu, H.; Cheng, L. Mobile Robots Path Planning Based on Dynamic Movement Primitives Library. In Proceedings of the 2017 36th Chinese Control Conference (CCC), Dalian, China, 26–28 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 312–317. [Google Scholar]
  42. Bergh, F.; Engelbrecht, A.P. A study of particle swarm optimization particle trajectories. Inf. Sci. 2006, 176, 937–971. [Google Scholar]
  43. Bai, L.; Gong, L.; Zhao, C. Unmanned Combat Aerial Vehicles Path Planning using a Novel Probability Density Model Based on Artificial Bee Colony Algorithm. In Proceedings of the 2013 Fourth International Conference on Intelligent Control and Information Processing (ICICIP), Beijing, China, 9–11 June 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 620–625. [Google Scholar]
  44. Kang, S.L.; Zong, W.G. A new meta-heuristic algorithm for continuous engineering optimization: Harmony search theory and practice. Comput. Methods Appl. Mech. Eng. 2005, 194, 3902–3933. [Google Scholar]
  45. Thomaz, C.E.; Pacheco, M.; Vellasco, M. Mobile Robot Path Planning Using Genetic Algorithms; Springer: Berlin/Heidelberg, Germany, 1999; Volume 71, pp. 671–679. [Google Scholar]
Figure 1. Opposition-based learning.
Figure 2. Test functions convergence curves.
Figure 3. Mobile robot size.
Figure 4. The directions of the mobile robot.
Figure 5. The method of solving.
Table 1. Various MRPP solving methods.
Document  Method
[2]  collaborative path planning algorithm
[3]  dynamic programming in static and dynamic environments
[6]  water flow potential field method and beetle antennae search algorithm
[7]  ant colony optimization
[9]  adaptive visibility graph initialization and edge computing
[11]  optimization and reinforcement learning
[12]  new approach based on Bezier curves
[13]  FIMOPSO
[16]  self-adaptive learning particle swarm optimization
Table 2. Uni-modal functions.
Function  Dimension  Range  fmin
$f_1(x) = \sum_{i=1}^{n} x_i^2$  30  [−100, 100]  0
$f_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$  30  [−10, 10]  0
$f_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$  30  [−100, 100]  0
$f_4(x) = \max_i \{ |x_i|,\ 1 \le i \le n \}$  30  [−100, 100]  0
$f_5(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$  30  [−30, 30]  0
$f_6(x) = \sum_{i=1}^{n} ( \lfloor x_i + 0.5 \rfloor )^2$  30  [−100, 100]  0
$f_7(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$  30  [−1.28, 1.28]  0
Table 3. Multi-modal functions.
Function  Dimension  Range  fmin
$f_8(x) = \sum_{i=1}^{n} -x_i \sin ( \sqrt{|x_i|} )$  30  [−500, 500]  −418.9829 × 5
$f_9(x) = \sum_{i=1}^{n} [ x_i^2 - 10 \cos(2 \pi x_i) + 10 ]$  30  [−5.12, 5.12]  0
$f_{10}(x) = -20 \exp \left( -0.2 \sqrt{ \frac{1}{n} \sum_{i=1}^{n} x_i^2 } \right) - \exp \left( \frac{1}{n} \sum_{i=1}^{n} \cos(2 \pi x_i) \right) + 20 + e$  30  [−32, 32]  0
$f_{11}(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos \left( \frac{x_i}{\sqrt{i}} \right) + 1$  30  [−600, 600]  0
$f_{12}(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 [ 1 + 10 \sin^2(\pi y_{i+1}) ] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, where $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k (-x_i - a)^m, & x_i < -a \end{cases}$  30  [−50, 50]  0
$f_{13}(x) = 0.1 \left\{ \sin^2(3 \pi x_1) + \sum_{i=1}^{n} (x_i - 1)^2 [ 1 + \sin^2(3 \pi x_{i+1}) ] + (x_n - 1)^2 [ 1 + \sin^2(2 \pi x_n) ] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$  30  [−50, 50]  0
Table 4. Fixed-dimension multi-modal functions.

Function | Dimension | Range | fmin
$f_{14}(x) = \left( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right)^{-1}$ | 2 | [−65, 65] | 1
$f_{15}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$ | 4 | [−5, 5] | 0.00030
$f_{16}(x) = 4 x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4$ | 2 | [−5, 5] | −1.0316
$f_{17}(x) = \left( x_2 - \frac{5.1}{4 \pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \frac{1}{8 \pi} \right) \cos x_1 + 10$ | 2 | [−5, 5] | 0.398
$f_{18}(x) = \left[ 1 + (x_1 + x_2 + 1)^2 (19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2) \right] \times \left[ 30 + (2 x_1 - 3 x_2)^2 (18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2) \right]$ | 2 | [−2, 2] | 3
$f_{19}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \right)$ | 3 | [1, 3] | −3.86
$f_{20}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right)$ | 6 | [0, 1] | −3.32
$f_{21}(x) = -\sum_{i=1}^{5} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.1532
$f_{22}(x) = -\sum_{i=1}^{7} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.4028
$f_{23}(x) = -\sum_{i=1}^{10} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.5364
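The Shekel family behind $f_{21}$–$f_{23}$ differs only in how many of the ten $(a_i, c_i)$ pairs are summed. A sketch with the standard coefficient table:

```python
# Standard Shekel coefficients; f21, f22 and f23 use the first 5, 7 and 10 rows.
A = [[4, 4, 4, 4], [1, 1, 1, 1], [8, 8, 8, 8], [6, 6, 6, 6], [3, 7, 3, 7],
     [2, 9, 2, 9], [5, 5, 3, 3], [8, 1, 8, 1], [6, 2, 6, 2], [7, 3.6, 7, 3.6]]
C = [0.1, 0.2, 0.2, 0.4, 0.4, 0.6, 0.3, 0.7, 0.5, 0.5]

def shekel(x, m):
    """Shekel function with m terms; the global minimum lies near (4, 4, 4, 4)."""
    return -sum(1.0 / (sum((xj - aij) ** 2 for xj, aij in zip(x, A[i])) + C[i])
                for i in range(m))
```

Evaluating shekel([4, 4, 4, 4], 5) reproduces the listed fmin of about −10.1532; m = 7 and m = 10 give about −10.4028 and −10.5364.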
Table 5. The result of Uni-modal functions.

Function | Stat | FWOA | WOA | SOA | PSO | GWO | STOA | SSA
f1 | Min | 0 | 8.430261 × 10−210 | 1.555097 × 10−36 | 2.810415 × 10−84 | 6.393086 × 10−89 | 1.635800 × 10−23 | 4.769387 × 10−09
f1 | Max | 5.273753 × 10−171 | 7.652315 × 10−32 | 7.652315 × 10−32 | 4.526335 × 10−76 | 1.821216 × 10−84 | 2.765344 × 10−20 | 9.360340 × 10−09
f1 | Ave | 2.854804 × 10−172 | 9.126593 × 10−194 | 3.559156 × 10−33 | 4.031582 × 10−77 | 2.397316 × 10−85 | 4.012824 × 10−21 | 7.021318 × 10−09
f1 | Std. | 0 | 1.956124 × 10−64 | 1.956124 × 10−64 | 1.292897 × 10−152 | 2.043497 × 10−169 | 4.681402 × 10−41 | 1.419605 × 10−18
f2 | Min | 0 | 3.336010 × 10−123 | 1.228525 × 10−21 | 1.047446 × 10−12 | 1.465409 × 10−50 | 2.186051 × 10−15 | 4.064451 × 10−05
f2 | Max | 9.873765 × 10−104 | 5.406626 × 10−20 | 5.406626 × 10−20 | 3.829112 × 10−06 | 2.796488 × 10−48 | 4.621727 × 10−13 | 2.059145 × 10+00
f2 | Ave | 5.485366 × 10−105 | 5.434907 × 10−112 | 1.048486 × 10−20 | 2.593201 × 10−07 | 3.096340 × 10−49 | 8.820698 × 10−14 | 2.282708 × 10−01
f2 | Std. | 4.020400 × 10−208 | 1.286922 × 10−40 | 1.286922 × 10−40 | 5.202686 × 10−13 | 2.706440 × 10−97 | 1.466744 × 10−26 | 2.252014 × 10−01
f3 | Min | 0 | 4.745790 × 10+01 | 1.042476 × 10−23 | 2.851281 × 10−04 | 3.116347 × 10−32 | 5.942626 × 10−14 | 5.760246 × 10−02
f3 | Max | 5.961890 × 10+00 | 3.403934 × 10−17 | 3.403934 × 10−17 | 8.574375 × 10−03 | 4.108705 × 10−24 | 3.588812 × 10−10 | 8.382606 × 10+00
f3 | Ave | 2.101297 × 10−01 | 4.010754 × 10+03 | 2.264491 × 10−18 | 2.048678 × 10−03 | 1.416761 × 10−25 | 2.915104 × 10−11 | 8.969260 × 10−01
f3 | Std. | 1.181940 × 10+00 | 5.174733 × 10−35 | 5.174733 × 10−35 | 3.242489 × 10−06 | 5.616559 × 10−49 | 5.133289 × 10−21 | 2.658013 × 10+00
f4 | Min | 0 | 4.299918 × 10−07 | 4.505221 × 10−13 | 1.229180 × 10−03 | 4.800334 × 10−23 | 1.128292 × 10−07 | 4.350890 × 10−04
f4 | Max | 1.117087 × 10−03 | 3.594542 × 10−08 | 3.594542 × 10−08 | 2.791385 × 10−02 | 1.432535 × 10−20 | 4.542304 × 10−06 | 4.277223 × 10+00
f4 | Ave | 3.777352 × 10−05 | 1.715458 × 10+01 | 1.323292 × 10−09 | 7.525940 × 10−03 | 1.617966 × 10−21 | 6.961723 × 10−07 | 5.554818 × 10−01
f4 | Std. | 4.155907 × 10−08 | 4.291170 × 10−17 | 4.291170 × 10−17 | 3.887478 × 10−05 | 8.220771 × 10−42 | 7.019692 × 10−13 | 6.972356 × 10−01
f5 | Min | 1.660556 × 10−02 | 2.544580 × 10+01 | 2.596043 × 10+01 | 3.548720 × 10−01 | 2.487610 × 10+01 | 2.620052 × 10+01 | 2.152424 × 10+01
f5 | Max | 2.695590 × 10+01 | 2.861168 × 10+01 | 2.861168 × 10+01 | 7.741710 × 10+01 | 2.712737 × 10+01 | 2.873763 × 10+01 | 5.826600 × 10+02
f5 | Ave | 2.050639 × 10+01 | 2.591175 × 10+01 | 2.753575 × 10+01 | 4.015986 × 10+01 | 2.633584 × 10+01 | 2.751388 × 10+01 | 9.092186 × 10+01
f5 | Std. | 1.082212 × 10+02 | 3.846561 × 10−01 | 3.846561 × 10−01 | 7.837989 × 10+02 | 4.753373 × 10−01 | 4.483316 × 10−01 | 1.561153 × 10+04
f6 | Min | 1.306296 × 10−04 | 1.480551 × 10−04 | 1.633545 × 10+00 | 0 | 5.460670 × 10−06 | 7.128582 × 10−01 | 3.853132 × 10−09
f6 | Max | 5.012690 × 10−04 | 3.248522 × 10+00 | 3.248522 × 10+00 | 2.899680 × 10−29 | 5.060667 × 10−01 | 2.508231 × 10+00 | 8.553629 × 10−09
f6 | Ave | 3.071433 × 10−04 | 3.234442 × 10−04 | 2.462839 × 10+00 | 1.416252 × 10−30 | 1.159188 × 10−01 | 1.500161 × 10+00 | 6.730791 × 10−09
f6 | Std. | 8.797922 × 10−09 | 1.861930 × 10−01 | 1.861930 × 10−01 | 2.971108 × 10−59 | 2.452040 × 10−02 | 1.996132 × 10−01 | 1.571142 × 10−18
f7 | Min | 9.352327 × 10−07 | 2.738278 × 10−05 | 3.030240 × 10−05 | 2.342989 × 10−03 | 8.511159 × 10−05 | 1.283198 × 10−04 | 9.841093 × 10−03
f7 | Max | 8.370940 × 10−04 | 6.483309 × 10−04 | 6.483309 × 10−04 | 7.825572 × 10−03 | 4.966548 × 10−04 | 3.612736 × 10−03 | 5.114226 × 10−02
f7 | Ave | 1.123975 × 10−04 | 5.837796 × 10−04 | 2.616332 × 10−04 | 5.040257 × 10−03 | 2.489494 × 10−04 | 8.743143 × 10−04 | 2.689375 × 10−02
f7 | Std. | 4.441664 × 10−08 | 3.626657 × 10−08 | 3.626657 × 10−08 | 2.025908 × 10−06 | 1.153306 × 10−08 | 5.463082 × 10−07 | 1.042387 × 10−04
Table 6. The result of Multi-modal functions.

Function | Stat | FWOA | WOA | SOA | PSO | GWO | STOA | SSA
f8 | Min | −1.256946 × 10+04 | −1.256945 × 10+04 | −7.887318 × 10+03 | −8.423960 × 10+03 | −7.457098 × 10+03 | −7.580349 × 10+03 | −9.210553 × 10+03
f8 | Max | −9.862899 × 10+03 | −5.030709 × 10+03 | −5.030709 × 10+03 | −5.561062 × 10+03 | −3.605024 × 10+03 | −5.132615 × 10+03 | −5.633845 × 10+03
f8 | Ave | −1.208077 × 10+04 | −1.179017 × 10+04 | −6.157293 × 10+03 | −6.743667 × 10+03 | −6.337316 × 10+03 | −5.889671 × 10+03 | −7.647054 × 10+03
f8 | Std. | 5.006503 × 10+05 | 6.502353 × 10+05 | 6.502353 × 10+05 | 5.045922 × 10+05 | 6.460060 × 10+05 | 3.280557 × 10+05 | 9.304187 × 10+05
f9 | Min | 0 | 0 | 0 | 2.089413 × 10+01 | 0 | 0 | 1.492438 × 10+01
f9 | Max | 0 | 5.684342 × 10−14 | 5.684342 × 10−14 | 8.457133 × 10+01 | 5.684342 × 10−14 | 1.260584 × 10+01 | 9.949549 × 10+01
f9 | Ave | 0 | 3.789561 × 10−15 | 1.894781 × 10−15 | 4.424245 × 10+01 | 3.789561 × 10−15 | 7.857429 × 10−01 | 4.092592 × 10+01
f9 | Std. | 0 | 1.077058 × 10−28 | 1.077058 × 10−28 | 2.612571 × 10+02 | 2.079836 × 10−28 | 6.245225 × 10+00 | 3.038580 × 10+02
f10 | Min | 8.881784 × 10−16 | 8.881784 × 10−16 | 1.509903 × 10−14 | 7.993606 × 10−15 | 7.993606 × 10−15 | 1.995507 × 10+01 | 1.575264 × 10−05
f10 | Max | 4.440892 × 10−15 | 1.996086 × 10+01 | 1.996086 × 10+01 | 1.899744 × 10+00 | 1.509903 × 10−14 | 1.995985 × 10+01 | 3.158812 × 10+00
f10 | Ave | 1.125026 × 10−15 | 4.440892 × 10−15 | 1.929332 × 10+01 | 3.073540 × 10−01 | 1.036208 × 10−14 | 1.995836 × 10+01 | 1.237668 × 10+00
f10 | Std. | 8.124361 × 10−31 | 1.327821 × 10+01 | 1.327821 × 10+01 | 3.497835 × 10−01 | 8.994828 × 10−30 | 1.435757 × 10−06 | 1.132933 × 10+00
f11 | Min | 0 | 0 | 0 | 0 | 0 | 0 | 3.676968 × 10−02
f11 | Max | 0 | 2.077809 × 10−02 | 2.077809 × 10−02 | 4.672941 × 10−02 | 2.022412 × 10−02 | 8.989056 × 10−02 | 1.286951 × 10−08
f11 | Ave | 0 | 8.562736 × 10−04 | 6.926030 × 10−04 | 9.768366 × 10−03 | 9.371244 × 10−04 | 8.318053 × 10−03 | 7.221352 × 10−03
f11 | Std. | 0 | 1.439097 × 10−05 | 1.439097 × 10−05 | 1.087677 × 10−04 | 1.534190 × 10−05 | 4.163995 × 10−04 | 7.826887 × 10−05
f12 | Min | 2.108763 × 10−05 | 2.380328 × 10−05 | 1.107146 × 10−01 | 1.578612 × 10−32 | 4.409217 × 10−07 | 5.906150 × 10−02 | 2.240856 × 10−11
f12 | Max | 8.514557 × 10−05 | 3.001715 × 10−01 | 3.001715 × 10−01 | 5.182541 × 10−01 | 3.571543 × 10−02 | 2.162930 × 10−01 | 5.905217 × 10+00
f12 | Ave | 4.197168 × 10−05 | 4.948684 × 10−04 | 1.978382 × 10−01 | 4.491879 × 10−02 | 1.304051 × 10−02 | 1.144425 × 10−01 | 1.942916 × 10+00
f12 | Std. | 2.311110 × 10−10 | 3.789442 × 10−03 | 3.789442 × 10−03 | 1.162089 × 10−02 | 5.920125 × 10−05 | 2.306902 × 10−03 | 2.686687 × 10+00
f13 | Min | 2.905554 × 10−04 | 3.394113 × 10−04 | 1.178007 × 10+00 | 1.473043 × 10−32 | 6.291644 × 10−06 | 6.834331 × 10−01 | 2.353540 × 10−10
f13 | Max | 2.905554 × 10−04 | 2.114193 × 10+00 | 2.114193 × 10+00 | 9.737116 × 10−02 | 4.123093 × 10−01 | 1.969527 × 10+00 | 4.394886 × 10−02
f13 | Ave | 4.052124 × 10−03 | 3.628557 × 10−03 | 1.677016 × 10+00 | 6.510216 × 10−03 | 1.609783 × 10−01 | 1.293502 × 10+00 | 6.560701 × 10−03
f13 | Std. | 1.334761 × 10−02 | 4.883671 × 10−02 | 4.883671 × 10−02 | 3.274727 × 10−04 | 1.209685 × 10−02 | 6.628387 × 10−02 | 8.727186 × 10−05
Table 7. The result of Fixed-dimension multi-modal functions.

Function | Stat | FWOA | WOA | SOA | PSO | GWO | STOA | SSA
f14 | Min | 9.980038 × 10−01 | 9.980038 × 10−01 | 9.980038 × 10−01 | 9.980038 × 10−01 | 9.980038 × 10−01 | 9.980038 × 10−01 | 9.980038 × 10−01
f14 | Max | 9.980038 × 10−01 | 2.982105 × 10+00 | 2.982105 × 10+00 | 1.992031 × 10+00 | 1.076318 × 10+01 | 9.980038 × 10−01 | 9.980038 × 10−01
f14 | Ave | 9.980038 × 10−01 | 1.064141 × 10+00 | 1.064141 × 10+00 | 1.229943 × 10+00 | 2.117150 × 10+00 | 9.980038 × 10−01 | 9.980038 × 10−01
f14 | Std. | 1.166008 × 10−23 | 1.312219 × 10−01 | 1.312219 × 10−01 | 1.828534 × 10−01 | 3.621514 × 10+00 | 2.062775 × 10−18 | 3.995308 × 10−32
f15 | Min | 3.074875 × 10−04 | 3.075390 × 10−04 | 3.076348 × 10−04 | 3.074860 × 10−04 | 3.074864 × 10−04 | 3.078072 × 10−04 | 3.074860 × 10−04
f15 | Max | 1.223604 × 10−03 | 1.256914 × 10−03 | 1.256914 × 10−03 | 1.594050 × 10−03 | 2.036334 × 10−02 | 1.595878 × 10−03 | 1.594901 × 10−03
f15 | Ave | 3.417386 × 10−04 | 5.549406 × 10−04 | 1.194463 × 10−03 | 4.361424 × 10−04 | 2.343596 × 10−03 | 1.205608 × 10−03 | 9.405784 × 10−04
f15 | Std. | 2.796709 × 10−08 | 2.809383 × 10−08 | 2.809383 × 10−08 | 1.541092 × 10−07 | 3.735096 × 10−05 | 3.336449 × 10−08 | 1.403797 × 10−07
f16 | Min | −1.031628 × 10+00 | −1.031628 × 10+00 | −1.031628 × 10+00 | −1.031628 × 10+00 | −1.031628 × 10+00 | −1.031628 × 10+00 | −1.031628 × 10+00
f16 | Max | −1.031628 × 10+00 | −1.031628 × 10+00 | −1.031628 × 10+00 | −1.031628 × 10+00 | −1.031628 × 10+00 | −1.031627 × 10+00 | −1.031628 × 10+00
f16 | Ave | −1.031628 × 10+00 | −1.031628 × 10+00 | −1.031628 × 10+00 | −1.031628 × 10+00 | −1.031628 × 10+00 | −1.031628 × 10+00 | −1.031628 × 10+00
f16 | Std. | 1.046047 × 10−21 | 1.456520 × 10−14 | 1.456520 × 10−14 | 4.590354 × 10−31 | 1.514620 × 10−18 | 6.415465 × 10−14 | 8.614565 × 10−30
f17 | Min | 3.978874 × 10−01 | 3.978874 × 10−01 | 3.978877 × 10−01 | 3.978874 × 10−01 | 3.978874 × 10−01 | 3.978877 × 10−01 | 3.978874 × 10−01
f17 | Max | 3.978878 × 10−01 | 3.980685 × 10−01 | 3.980685 × 10−01 | 3.978874 × 10−01 | 3.978884 × 10−01 | 3.980266 × 10−01 | 3.978874 × 10−01
f17 | Ave | 3.978874 × 10−01 | 3.978874 × 10−01 | 3.979114 × 10−01 | 3.978874 × 10−01 | 3.978875 × 10−01 | 3.979088 × 10−01 | 3.978874 × 10−01
f17 | Std. | 8.514675 × 10−15 | 1.349701 × 10−09 | 1.349701 × 10−09 | 0.000000 × 10+00 | 3.859782 × 10−14 | 9.203009 × 10−10 | 1.178905 × 10−28
f18 | Min | 3.000000 × 10+00 | 3.000000 × 10+00 | 3.000000 × 10+00 | 3.000000 × 10+00 | 3.000000 × 10+00 | 3.000000 × 10+00 | 3.000000 × 10+00
f18 | Max | 3.000004 × 10+00 | 3.000005 × 10+00 | 3.000005 × 10+00 | 3.000000 × 10+00 | 3.000008 × 10+00 | 3.000020 × 10+00 | 3.000000 × 10+00
f18 | Ave | 3.000000 × 10+00 | 3.000000 × 10+00 | 3.000001 × 10+00 | 3.000000 × 10+00 | 3.000001 × 10+00 | 3.000002 × 10+00 | 3.000000 × 10+00
f18 | Std. | 5.635206 × 10−13 | 2.210026 × 10−12 | 2.210026 × 10−12 | 1.740934 × 10−30 | 2.567861 × 10−12 | 1.625908 × 10−11 | 8.357709 × 10−28
f19 | Min | −3.862782 × 10+00 | −3.862782 × 10+00 | −3.862767 × 10+00 | −3.862782 × 10+00 | −3.862782 × 10+00 | −3.862773 × 10+00 | −3.862782 × 10+00
f19 | Max | −3.862762 × 10+00 | −3.854857 × 10+00 | −3.854857 × 10+00 | −3.862782 × 10+00 | −3.856489 × 10+00 | −3.854856 × 10+00 | −3.862782 × 10+00
f19 | Ave | −3.862778 × 10+00 | −3.862627 × 10+00 | −3.855427 × 10+00 | −3.862782 × 10+00 | −3.862571 × 10+00 | −3.855671 × 10+00 | −3.862782 × 10+00
f19 | Std. | 2.805649 × 10−11 | 3.958128 × 10−06 | 3.958128 × 10−06 | 7.344567 × 10−30 | 1.319696 × 10−06 | 5.734189 × 10−06 | 2.305378 × 10−29
f20 | Min | −3.321995 × 10+00 | −3.321993 × 10+00 | −3.200659 × 10+00 | −3.321995 × 10+00 | −3.321994 × 10+00 | −3.321919 × 10+00 | −3.321995 × 10+00
f20 | Max | −3.321938 × 10+00 | −2.840363 × 10+00 | −2.840363 × 10+00 | −3.203102 × 10+00 | −3.134100 × 10+00 | −3.015514 × 10+00 | −3.202625 × 10+00
f20 | Ave | −3.321976 × 10+00 | −3.257065 × 10+00 | −3.056846 × 10+00 | −3.262549 × 10+00 | −3.249134 × 10+00 | −3.069592 × 10+00 | −3.214881 × 10+00
f20 | Std. | 2.390144 × 10−10 | 5.765841 × 10−03 | 5.765841 × 10−03 | 3.655752 × 10−03 | 4.467225 × 10−03 | 5.893752 × 10−03 | 1.318810 × 10−03
f21 | Min | −1.015320 × 10+01 | −1.015320 × 10+01 | −1.014653 × 10+01 | −1.015320 × 10+01 | −1.015317 × 10+01 | −1.014820 × 10+01 | −1.015320 × 10+01
f21 | Max | −1.015288 × 10+01 | −4.965276 × 10−01 | −4.965276 × 10−01 | −2.630472 × 10+00 | −5.100549 × 10+00 | −4.982139 × 10−01 | −5.055198 × 10+00
f21 | Ave | −1.015312 × 10+01 | −1.015317 × 10+01 | −5.785824 × 10+00 | −5.968921 × 10+00 | −9.984578 × 10+00 | −5.944664 × 10+00 | −8.971262 × 10+00
f21 | Std. | 5.716151 × 10−09 | 1.600742 × 10+01 | 1.600742 × 10+01 | 1.007418 × 10+01 | 8.509064 × 10−01 | 1.798099 × 10+01 | 4.748450 × 10+00
f22 | Min | −1.040294 × 10+01 | −1.040294 × 10+01 | −1.039981 × 10+01 | −1.040294 × 10+01 | −1.040287 × 10+01 | −1.039889 × 10+01 | −1.040294 × 10+01
f22 | Max | −1.040269 × 10+01 | −9.080722 × 10−01 | −9.080722 × 10−01 | −1.837593 × 10+00 | −1.040245 × 10+01 | −9.080713 × 10−01 | −5.087672 × 10+00
f22 | Ave | −1.040288 × 10+01 | −9.516997 × 10+00 | −7.709851 × 10+00 | −8.028456 × 10+00 | −1.040271 × 10+01 | −8.729593 × 10+00 | −1.022576 × 10+01
f22 | Std. | 3.938498 × 10−09 | 1.272529 × 10+01 | 1.272529 × 10+01 | 1.219797 × 10+01 | 1.193802 × 10−08 | 1.032487 × 10+01 | 9.417361 × 10−01
f23 | Min | −1.053641 × 10+01 | −1.053641 × 10+01 | −1.053479 × 10+01 | −1.053641 × 10+01 | −1.053639 × 10+01 | −1.053488 × 10+01 | −1.053641 × 10+01
f23 | Max | −1.053611 × 10+01 | −9.488805 × 10−01 | −9.488805 × 10−01 | −2.421734 × 10+00 | −2.421726 × 10+00 | −9.488816 × 10−01 | −5.175647 × 10+00
f23 | Ave | −1.053635 × 10+01 | −1.010588 × 10+01 | −9.842207 × 10+00 | −8.675884 × 10+00 | −1.026572 × 10+01 | −9.480819 × 10+00 | −9.821641 × 10+00
f23 | Std. | 3.456077 × 10−09 | 4.688671 × 10+00 | 4.688671 × 10+00 | 1.028709 × 10+01 | 2.194825 × 10+00 | 6.051566 × 10+00 | 3.435321 × 10+00
Table 8. The result of environment 1.

Metric | FWOA | WOA | PSO | GWO | STOA | SSA | SOA
Mean | 28.4296 | 32.5538 | 31.0256 | 30.3024 | 30.4791 | 30.0335 | 30.3244
Best | 27.5602 | 28.1236 | 27.7897 | 27.7897 | 27.7897 | 27.5602 | 27.5602
Worst | 29.6515 | 48.1842 | 50.0459 | 34.3996 | 31.0189 | 31.0822 | 31.0189
Std. | 0.579903 | 4.10306 | 5.23552 | 1.57175 | 0.920949 | 1.09488 | 0.948411
Table 9. The result of environment 2.

Metric | FWOA | WOA | PSO | GWO | STOA | SSA | SOA
Mean | 29.372 | 45.4813 | 31.6773 | 30.5589 | 29.4531 | 29.8522 | 29.458
Best | 28.464 | 28.7003 | 28.464 | 28.8269 | 29.4046 | 28.8269 | 28.8269
Worst | 30.8352 | 400 | 51.6065 | 39.563 | 29.9166 | 30.8587 | 30.3269
Std. | 0.562282 | 67.0978 | 4.60077 | 2.40512 | 0.134563 | 0.36874 | 0.224772
Table 10. The result of environment 3.

Metric | FWOA | WOA | PSO | GWO | STOA | SSA | SOA
Mean | 30.5562 | 45.7169 | 44.5951 | 31.0557 | 31.1485 | 31.3353 | 31.1643
Best | 28.4268 | 29.8026 | 29.91 | 29.2198 | 30.8733 | 29.117 | 30.5405
Worst | 34.6255 | 400 | 400 | 34.3996 | 31.575 | 32.1488 | 31.575
Std. | 1.36528 | 66.9532 | 67.1539 | 0.923568 | 0.173375 | 0.542579 | 0.211504
Table 11. The result of environment 4.

Metric | FWOA | WOA | PSO | GWO | STOA | SSA | SOA
Mean | 28.7738 | 29.6893 | 29.5183 | 29.2697 | 28.8092 | 29.015 | 28.8804
Best | 28.3121 | 28.7729 | 28.5277 | 28.3121 | 28.4902 | 28.6611 | 28.4902
Worst | 29.0248 | 35.6569 | 34.439 | 32.8926 | 28.9801 | 29.5592 | 29.3811
Std. | 0.1384 | 1.80256 | 1.36026 | 1.27619 | 0.121089 | 0.204619 | 0.164828
Table 12. The result of environment 5.

Metric | FWOA | WOA | PSO | GWO | STOA | SSA | SOA
Mean | 28.2203 | 29.4382 | 29.2153 | 29.0043 | 29.1479 | 29.2951 | 29.2025
Best | 27.6813 | 27.7949 | 27.7949 | 27.6813 | 27.6813 | 27.9662 | 27.9662
Worst | 29.6366 | 32.5697 | 31.8578 | 33.1142 | 29.6366 | 31.8174 | 29.7949
Std. | 0.512914 | 1.09052 | 0.807675 | 0.972006 | 0.577413 | 0.793053 | 0.472232
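The figures in Tables 8–12 are path lengths, so the objective being minimized has roughly the following shape. The exact objective is defined earlier in the paper; this is only an illustrative sketch, and the fixed penalty of 400 for infeasible paths is an assumption suggested by the worst-case entries of Tables 9 and 10:

```python
import math

def path_length(points):
    """Total Euclidean length of a piecewise-linear path through 2-D waypoints."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def path_fitness(points, collides, penalty=400.0):
    """Hypothetical MRPP objective: path length, or a fixed penalty when the
    candidate path intersects an obstacle (collides is a user-supplied test)."""
    return penalty if collides(points) else path_length(points)
```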
Table 13. The p-value of experiments.

Comparison | Environment 1 | Environment 2 | Environment 3 | Environment 4 | Environment 5
WOA vs. FWOA | 3.54595 × 10−154 | 1.23598 × 10−158 | 1.04784 × 10−154 | 1.09486 × 10−161 | 8.03319 × 10−99
PSO vs. FWOA | 1.12188 × 10−107 | 1.06873 × 10−71 | 8.32723 × 10−30 | 7.92567 × 10−154 | 1.29799 × 10−54
GWO vs. FWOA | 7.60116 × 10−127 | 1.25814 × 10−104 | 2.91533 × 10−154 | 4.87965 × 10−158 | 4.75254 × 10−60
STOA vs. FWOA | 1.72411 × 10−123 | 4.22364 × 10−13 | 1.02367 × 10−50 | 2.64802 × 10−79 | 5.51899 × 10−98
SSA vs. FWOA | 2.55839 × 10−105 | 4.28541 × 10−19 | 1.29613 × 10−52 | 1.63432 × 10−138 | 6.13455 × 10−82
SOA vs. FWOA | 6.62596 × 10−126 | 3.5665 × 10−20 | 2.33206 × 10−57 | 6.97704 × 10−122 | 1.83984 × 10−107
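This excerpt does not name the statistical test behind the p-values in Tables 13 and 19. Assuming a Wilcoxon rank-sum-style comparison of two result samples, a self-contained sketch using the normal approximation would look like this:

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation
    (a sketch; SciPy's ranksums offers a production implementation)."""
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):                    # assign average ranks to ties
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + 1 + j) / 2.0      # mean of ranks i+1 .. j
        i = j
    r_a = sum(r for r, (_, src) in zip(ranks, pooled) if src == 0)
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2.0             # expected rank sum under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (r_a - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))   # two-sided tail probability
```

Two identical samples give p = 1, while well-separated samples drive p toward zero, matching the tiny values reported above.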
Table 14. The result of environment 6.

Metric | FWOA | PSO | WOA | HS | FA | MSA
Mean | 5.2656 | 5.95772 | 6.54454 | 6.06448 | 5.93502 | 5.94622
Best | 5.24577 | 5.73289 | 6.05951 | 5.77577 | 5.60869 | 5.80207
Worst | 5.33313 | 6.16154 | 6.93088 | 6.23352 | 6.53488 | 6.07259
Std. | 0.000463621 | 0.0139897 | 0.0595565 | 0.00850826 | 0.0418203 | 0.00479293
Table 15. The result of environment 7.

Metric | FWOA | PSO | WOA | HS | FA | MSA
Mean | 2.07199 | 2.74251 | 3.12788 | 2.85464 | 2.68428 | 2.5689
Best | 2.07047 | 2.51704 | 2.70531 | 2.58145 | 2.31947 | 2.37212
Worst | 2.07512 | 3.06511 | 3.47429 | 3.02096 | 3.08297 | 2.74311
Std. | 1.26108 × 10−06 | 0.012969 | 0.0389334 | 0.0117401 | 0.0274849 | 0.00630382
Table 16. The result of environment 8.

Metric | FWOA | PSO | WOA | HS | FA | MSA
Mean | 2.60248 | 3.32557 | 3.71071 | 3.35789 | 3.30654 | 3.27013
Best | 2.28948 | 3.09882 | 3.27964 | 3.01973 | 2.88651 | 3.11258
Worst | 2.62992 | 3.5623 | 4.19325 | 3.70561 | 3.76308 | 3.36859
Std. | 0.00709538 | 0.013238 | 0.0484691 | 0.0347663 | 0.0423018 | 0.00482256
Table 17. The result of environment 9.

Metric | FWOA | PSO | WOA | HS | FA | MSA
Mean | 2.46952 | 2.66323 | 3.04348 | 3.03596 | 2.58713 | 3.38617
Best | 2.55281 | 4.39728 | 2.83857 | 2.88402 | 2.49226 | 2.91938
Worst | 2.81524 | 4.90177 | 3.52762 | 3.20579 | 2.85774 | 3.85635
Std. | 0.00428103 | 0.010446 | 0.0204176 | 0.00886905 | 0.00533416 | 0.0688828
Table 18. The result of environment 10.

Metric | FWOA | PSO | WOA | HS | FA | MSA
Mean | 2.62072 | 3.3095 | 3.75894 | 3.31711 | 3.38436 | 3.58167
Best | 2.59849 | 3.09663 | 3.34706 | 3.00046 | 3.09922 | 3.37216
Worst | 2.72948 | 3.59165 | 4.23291 | 3.59857 | 3.66757 | 4.15024
Std. | 0.00127214 | 0.0167968 | 0.0506245 | 0.0199517 | 0.0203131 | 0.0265063
Table 19. The p-value.

Comparison | Environment 6 | Environment 7 | Environment 8 | Environment 9 | Environment 10
PSO vs. FWOA | 4.81232 × 10−310 | 0 | 0 | 1.05172 × 10−252 | 2.49163 × 10−301
WOA vs. FWOA | 9.88131 × 10−324 | 0 | 0 | 5.57775 × 10−315 | 0
HS vs. FWOA | 1.60769 × 10−320 | 0 | 0 | 7.70742 × 10−322 | 1.84425 × 10−318
FA vs. FWOA | 3.15265 × 10−318 | 0 | 0 | 2.30662 × 10−276 | 3.30036 × 10−321
MSA vs. FWOA | 7.88286 × 10−308 | 1.15611 × 10−321 | 0 | 3.95253 × 10−323 | 1.22528 × 10−321
Tian, T.; Liang, Z.; Wei, Y.; Luo, Q.; Zhou, Y. Hybrid Whale Optimization with a Firefly Algorithm for Function Optimization and Mobile Robot Path Planning. Biomimetics 2024, 9, 39. https://doi.org/10.3390/biomimetics9010039
