Article

MISAO: A Multi-Strategy Improved Snow Ablation Optimizer for Unmanned Aerial Vehicle Path Planning

by Cuiping Zhou 1, Shaobo Li 1,2,*, Cankun Xie 1, Panliang Yuan 1 and Xiangfu Long 3
1 State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China
2 Guizhou Institute of Technology, Guiyang 550003, China
3 School of Mechanical Engineering, Guizhou University, Guiyang 550025, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(18), 2870; https://doi.org/10.3390/math12182870
Submission received: 28 August 2024 / Revised: 11 September 2024 / Accepted: 12 September 2024 / Published: 14 September 2024

Abstract:
The snow ablation optimizer (SAO) is a meta-heuristic technique used to seek the best solution for sophisticated problems. In response to the defects of the SAO algorithm, which has poor search efficiency and is prone to getting trapped in local optima, this article suggests a multi-strategy improved snow ablation optimizer (MISAO). It is employed in the unmanned aerial vehicle (UAV) path planning issue. To begin with, the tent chaos and elite reverse learning initialization strategies are merged to extend the diversity of the population; secondly, a greedy selection method is deployed to retain superior alternative solutions for the upcoming iteration; then, the Harris hawk (HHO) strategy is introduced to enhance the exploitation capability, which prevents trapping in local optima; finally, the red-tailed hawk (RTH) strategy is adopted to perform the global exploration, which enhances the global optimization capability. To comprehensively evaluate MISAO’s optimization capability, a battery of numerical optimization investigations is executed using 23 test functions, and the results of the comparative analysis show that the suggested algorithm has high solving accuracy and convergence velocity. Finally, the effectiveness and feasibility of the optimization path of the MISAO algorithm are demonstrated in the UAV path planning project.

1. Introduction

Following the ongoing advances in both science and intelligence, unmanned aerial vehicles (UAVs) are increasingly being used in combat surveillance, electronic jamming, combat assessment and radar deception due to their good performance [1], as well as in civil fields, such as aerial photography, information collection, material distribution, terrain mapping and agricultural plant protection [2]. As UAVs are widely used in various fields, the UAV path planning problem has in recent years become the focus of scholars’ attention. Meanwhile, the market demand for UAVs is also growing; according to the latest industry report from Mordor Intelligence, the UAV market size is $17.31 billion in 2024 and is expected to reach $32.95 billion by 2029, growing at a CAGR of 13.74% [3]. This growth is mainly attributed to the ease and expertise of UAV applications in the field; with the application of artificial intelligence and machine learning technologies, the autonomous flight and intelligent path planning capabilities of UAVs have been significantly improved. These trends not only highlight the critical role of UAVs in modern society but also further emphasize the importance of UAV path planning research. UAV path planning includes local path planning and global path planning [4], and the goal is to safeguard the safety of the airframe and seek the optimal path with the lowest cost between the start and goal sites while conforming to the UAV’s own performance and environmental constraints [5]. The traditional classical techniques for UAV path planning mainly include random tree methods [6], the probabilistic roadmap method (PRM) [7], artificial potential fields and A* search algorithms [8], etc. However, these conventional methods usually suffer from drawbacks, such as weak optimization searching performance and slow convergence speed, which make them unable to trade off the conflict between the exploration process and the large amount of data effectively or to deal with more complex path planning problems. Moreover, with the changing era, meta-heuristic algorithms (MAs) [9] have been extensively adopted in diverse spheres of application during recent years because of their excellent convergence velocity, strong optimality searching capabilities, higher estimation accuracy, ease of manipulation and flexibility with complex constraints [10].
In recent years, MAs have been considered an elegant way to solve engineering simulation problems, and researchers have introduced a set of novel smart optimization algorithms to solve engineering specification problems, such as the sinh cosh optimizer (SCHO) [11], parrot optimizer (PO) [12], triangulation topology aggregation optimizer (TTAO) [13] and Genghis Khan shark optimizer (GKSO) [14]. The path planning problem belongs to a category of common optimization problems due to its constrained nature. Optimization problems have been a key area of interest for researchers. They are defined as finding a globally optimal solution to a problem for decision variables with limited assets or particular restrictions [15]. Following the rise of digital technology, MAs play an instrumental role in several disciplines and engineering fields, including fault diagnosis [16], path planning [17], prediction studies [18], image segmentation [19], task assignment [20], feature selection and parameter identification [21,22]. UAV path optimization problems are usually marked by complexity, nonlinear constraints, non-convexity, dynamically noisy objective functions and a large solution space [23], including single-UAV and swarm-UAV path planning, with the eventual purpose of finding the optimal path under controlled conditions. The path planning problem is to find the best path among all possible cases. The evolution of optimization techniques is considered one of the most crucial aspects in defining the strategy of the route planning scheme, and a multitude of intelligent optimization schemes are employed in path planning to shorten paths and reduce costs. Dwangan et al. [24] proposed the grey wolf optimizer (GWO) [25] to tackle the path planning problem for 3D UAVs, along with preventing conflicts between barriers and others. Ni et al. [26] suggested a hybrid strategy of Q-learning integrated with an artificial bee colony algorithm (ABC) [27] in unmanned vehicle path planning to overcome a vehicle engineering challenge. Ait-Saadi proposed a meta-heuristic improvement of the African vulture optimization algorithm (AVOA) [28] for solving the UAV path planning problem in a 3D setting. Although these algorithms are effective in providing reasonable flight paths for UAVs, older algorithms with weaker optimization capabilities cannot solve the path planning problem well. For this purpose, it is necessary to design algorithms with stronger performance for the path design of UAVs.
In 2023, Deng and Liu, inspired by snow sublimation and melting behavior, proposed a novel physics-based heuristic method, the snow ablation optimizer (SAO) [29], to deal with numerical optimization and engineering design problems. SAO is considered an elegant meta-heuristic method in which the search agent switches between exploration and exploitation modes to converge on the overall optimum according to the varying conversion patterns of snow. It has the advantages of a sensitive mechanism, few required parameter settings and excellent accuracy. Despite that, SAO inevitably has drawbacks, such as poor global seeking ability and premature convergence. These limitations mean that it still faces dilemmas in the solution of complex problems. Therefore, in response to these dilemmas and to further improve the optimization properties, Xiao et al. [30] improved the original SAO for photovoltaic model parameter estimation and engineering design problems to overcome its restrictions and enhance the algorithm’s capability. Jia et al. [31] designed heat transfer and condensation strategies to remedy the defects of the original two-population mechanism and heighten the optimization capability. Based on the above work, this article presents a multi-strategy improved snow ablation optimizer (MISAO) to offset the shortcomings of the traditional SAO algorithm and improve its convergence speed and accuracy. The suggested MISAO algorithm combines four enhancement techniques to balance exploration and exploitation. Firstly, tent chaos and elite reverse learning initialization strategies are integrated to generate uniformly distributed high-quality populations, thus increasing the diversity of the populations; secondly, a greedy choice method is used in the exploration phase to maintain improved candidate solutions for the subsequent generation, hence balancing exploration and exploitation; then, in the exploitation stage, the Harris hawks optimization (HHO) [32] position update formula is introduced to strengthen the exploitation capability, broaden the search range and avoid trapping in local optima, improving the accuracy of convergence. Finally, the swooping predator phase strategy of the red-tailed hawk algorithm (RTH) [33] is adopted for global exploration to boost the overall optimization capability and raise the reliability of the mechanism. For the complete evaluation of the optimization properties of MISAO, 23 IEEE CEC2005 benchmark test functions are used to conduct qualitative convergence comparisons against separate algorithms and alternative strategy variants in different dimensions. MISAO is then applied to both single-UAV and multi-UAV path planning problems to validate the usefulness and effectiveness of the presented algorithm. The essential contributions are highlighted below:
(1) Building on the SAO algorithm, we propose the multi-strategy improved snow ablation algorithm (MISAO), which combines the HHO and RTH optimization strategies to allow for global exploration and local exploitation.
(2) The superiority of MISAO was verified using 23 CEC2005 test functions, and the derived outcomes were benchmarked against different algorithms and improvement strategies.
(3) Friedman’s and Wilcoxon’s rank sum tests are performed to verify that MISAO outperforms competing algorithms in terms of solution accuracy, convergence and robustness.
(4) The suggested algorithm’s effect in resolving actual situations is appraised using single-UAV and multi-UAV path planning design problems.
The remainder of this study is structured as follows: Section 2 describes the related work. Section 3 provides a detailed description of the basic SAO algorithm. In Section 4, MISAO is developed, and the improvement strategies are illustrated. In Section 5, the optimization performance of MISAO is evaluated on the IEEE CEC2005 benchmark test suite. Section 6 verifies MISAO’s effectiveness in real-world scenarios through single-UAV and multi-UAV path planning design problems. In Section 7, the experimental outcomes are concluded and discussed.

2. Related Work

So far, various meta-heuristic algorithms have been used by researchers to develop numerical models for tackling optimization challenges. MA algorithms have been classified into unnatural heuristics and natural heuristics. Unnatural heuristics mainly rely on human thinking, such as harmony search [34], adaptive dimension search and taboo search [35,36]. However, these traditional methods are ineffective in identifying the optimal solution efficiently and tend to be trapped in local optima, so natural heuristic algorithms are popular among researchers. Actions, such as the predation of animals in nature, physicochemical reactions and the renewal and evolution of species, may be potential sources of inspiration for meta-heuristic architectures. Researchers commonly divide MAs into three types: evolutionary algorithms, physical algorithms and group intelligence algorithms [37]. Evolutionary algorithms (EAs) simulate the evolutionary processes of natural selection, such as mating and genetic mutation; representative of these are genetic algorithms (GAs) [38], differential evolution (DE) [39], genetic programming (GP) [40] and evolutionary strategies (ESs) [41]. Physical algorithms mimic the principles of physical phenomena in reality, such as gravity, inertial forces, mass balance and molecular dynamics, to search the space; representative of these are the gravitational search algorithm (GSA) [42], multi-verse optimizer (MVO) [43], atomic search optimization (ASO) [44] and rime optimization algorithm (RIME) [45]. Group intelligence algorithms are inspired by ‘social’ organisms that behave collectively, exhibiting behaviors, such as teaming, foraging, reproduction and predation, within a group of organisms. Some of the more famous of these algorithms are the GWO, the whale optimization algorithm (WOA) [46], the HHO and the dung beetle optimizer (DBO) [47]. Illustrated in Figure 1 is the taxonomy of meta-heuristic algorithms and representative algorithms.
Owing to the excellent optimum-finding ability and flexibility of meta-heuristic algorithms, they have been widely used in various complex constrained path planning and engineering design problems. Mainstream path planning mainly involves UAV path planning and robot path planning. Miao [48] adaptively refined the traditional ant colony algorithm to ultimately achieve optimal performance enhancement in the route planning of indoor mobile robots. Lin [49] employed a hybrid PSO-SA algorithm to optimize mobile robots’ operation and maintenance paths in industry and commerce to reduce consumption and time costs. Wu et al. [50] introduced optimal individual and hybrid selection policies to co-evolve an optimal multi-UAV cooperative roadmap planning algorithm. Zhang [51] exercised the improved sparrow search algorithm (CFSSA) to plan the inspection paths of UAVs in smart workshops to improve inspection efficiency and reduce workshop costs. Among other engineering design problems, Malheiros-Silveira et al. [52] employed the ABC method combined with a finite element model for optimizing the inverse design of photonic crystal structures. Elymany et al. [53] employed a novel hybrid maximum power point tracking (MPPT) technique incorporating the zebra optimization algorithm (ZOA) [54] and gorilla troop optimizer (GTO) [55] in order to obtain the maximum power for both the solar thermal module and the wind turbine. Based on the cumulative covariance matrix (CCM) and biogeography-based optimization (BBO) [56], Cao et al. put forward a CCM-BBO framework to build a feature coordinate system using the CCM operator, which is applied to the problem of intrusion-detection optimization. Rezk et al. [57] introduced the multi-verse optimization (MVO) for devising load filter control in multi-connected electricity systems for wind and photovoltaic power plants, in an effort to optimize the control parameters of the load frequency controllers (LFCs) for multi-source power systems (MSPSs).
In the algorithmic application scenario of route optimization, although the sources of MAs may be diverse, the basic structure consists of the two phases of exploration and exploitation, and an MA needs to strike a suitable balance between these two phases to robustly ensure the best results in the optimization process [58]. Although we have identified the power of MAs when addressing path or other domain optimization projects, they can suffer from local optimality, early convergence and depletion of solution diversity. As a result, researchers have improved different MA algorithms to enhance the search and exploitation performance and to balance local exploration and global optimum finding. These have gained wide popularity in solving path-planning problems with various complex constraints and practical engineering optimization tasks in different fields. In path planning, Wu suggested a neighbor comprehensive learning particle swarm optimization (N-CLPSO) [59] that brought in a remove–reinsert neighborhood search mechanism to solve the vehicle path optimization problem and validated it on a real-world problem. Huang et al. [60] proposed the ACVDEPSO algorithm for easier path searching and optimization to solve the problem of generating higher-quality path planning for UAVs in 3D surroundings. Zhang [61] employed multi-trajectory scanning to suggest a multi-strategy modified white shark optimization algorithm to improve the planning of UAV flight paths. In multi-objective optimization, Zhang and Peng et al. [62] offered a multi-objective inflationary algorithm with a dual constraint-handling facility in order to overcome the shortcomings of integrated UAVs and achieve excellent planning performance. Lyu et al. [63] demonstrated the advantages of the IDBO in enhancing 3D UAV path planning and proved its advantages in practical application scenarios. In the aerospace field, Su et al. [64] combined the powerful robustness and global optimization features of the hyper-heuristic WOA and the efficient features of the Gaussian pseudo-spectral method (GPM) to carry out an optimization simulation to improve the re-entry trajectory of the launch vehicle. Among other improved methods, considering the excellent performance of the nonlinear marine predator algorithm (NMPA) [65], Sadiq used it to solve equitable power distribution in the NOMA-VLC-B5G network. Wu et al. [66] recommended the quantum computing and multi-strategy augmented improved sparrow search algorithm (QMESSA) to solve engineering problems. Ahmed [67] fused three improvement strategies and introduced the modified grey wolf optimization algorithm (MELGWO) using the dynamic linear population size reduction technique and applied it in the engineering field. Abualigah et al. [68] suggested the HHMV strong modification method to mitigate the traditional HHO’s major drawbacks of slow convergence and creeping towards the best solution. A summary of the existing studies and the distinctions of this study is displayed in Table 1.

3. SAO Algorithm

The snow melting process undergoes two phases: the melting and sublimation of the snow, which convert it into liquid water and vapor. Liquid water derived from snowmelt can be further transformed into vapor through evaporation. The melting and sublimation process of the SAO algorithm is illustrated in Figure 2. According to the snow melting process, SAO searches for the optimal solution through four components: the initiation phase, the exploratory phase, the exploitation phase and the two-population mechanism.

3.1. Initiation Phase

The initiation phase of the SAO algorithm generates the population randomly, where $Dim$ is set to the dimension of the optimization problem. In this study, the number of individuals is fixed to $N_m$, and the population seeks the optimal solution inside the bounds $L_u$ and $U_l$ to ensure the effectiveness of the search design; the initial positions of the whole aggregate can be modeled as a matrix with $N_m$ rows and $Dim$ columns. The formulas are illustrated below:
$$X_{in} = L_u + \theta_{rand} \times (U_l - L_u) = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,Dim-1} & x_{1,Dim} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,Dim-1} & x_{2,Dim} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ x_{N_m-1,1} & x_{N_m-1,2} & \cdots & x_{N_m-1,Dim-1} & x_{N_m-1,Dim} \\ x_{N_m,1} & x_{N_m,2} & \cdots & x_{N_m,Dim-1} & x_{N_m,Dim} \end{bmatrix}_{N_m \times Dim}$$
where $\theta_{rand} \in (0, 1)$ and $X_{in}$ denotes the initial position.
$$X_{in} = (x_{i,1}, x_{i,2}, \ldots, x_{i,j}, \ldots, x_{i,Dim})$$
where $i \in [1, N_m]$, $j \in [1, Dim]$.
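For illustration, a minimal NumPy sketch of this random initialization is given below (the function name and scalar bounds `lb`/`ub` are hypothetical; per-dimension bound vectors would work equally well):

```python
import numpy as np

def init_population(n_pop, dim, lb, ub, rng=np.random.default_rng()):
    """Randomly initialize an (n_pop x dim) population inside [lb, ub], as in Equation (1)."""
    theta = rng.random((n_pop, dim))   # theta_rand ~ U(0, 1)
    return lb + theta * (ub - lb)      # X_in = L_u + theta * (U_l - L_u)
```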

3.2. Exploratory Phase

The formation of vapor emerges from the switching of snow or liquid water, either through deposition or sublimation, and these transitions create an irregular pattern of dynamics in which the population exhibits a high degree of dispersion. Therefore, in the exploratory phase, Brownian motion is used to represent this irregular motion. Brownian motion is a stochastic process used to model molecular motion or the foraging behavior of animals. The step length of a typical Brownian motion is drawn from a random variable that obeys a standard normal distribution, with variance 1 and mean 0, and the formula is illustrated below:
$$f_{BM}(x; 0, 1) = \frac{1}{\sqrt{2\pi}} \times \exp\left(-\frac{x^2}{2}\right)$$
The characteristic of Brownian motion guarantees that as many promising areas as possible in the search space are covered for exploration. As a result, it is effective in depicting vapor diffusion scenarios; in the exploration process, the position update equation of $X_e$ is described below:
$$X_{ei}(t+1) = X_{elite}(t) + BM_{ei}(t) \otimes \left(z_1 \times (X_b(t) - X_{ei}(t)) + (1 - z_1) \times (\bar{X}(t) - X_{ei}(t))\right)$$
where $X_{ei}(t)$ stands for the current position, $BM_{ei}(t)$ expresses the Brownian motion vector containing Gaussian random numbers, the symbol $\otimes$ identifies entry-wise multiplication, $z_1 \in (0, 1)$, $X_b(t)$ stands for the best solution currently available, $X_{elite}(t)$ is a randomly selected element from an elite set of individuals in the colony and $\bar{X}(t)$ signifies the position of the centroid of mass of the entire population. The corresponding mathematical expressions are listed below:
$$\bar{X}(t) = \frac{1}{N} \sum_{i=1}^{N} X_{ei}(t)$$
$$X_{elite}(t) \in \left[X_b(t), X_{se}(t), X_{th}(t), \bar{X}_h(t)\right]$$
$$\bar{X}_h(t) = \frac{1}{N_1} \sum_{i=1}^{N_1} X_{ei}(t)$$
where $N_1$ corresponds to the number of leaders, $N_1 = N/2$, $X_{se}(t)$ represents the second-best candidate search agent in the population, $X_{th}(t)$ means the third-best search agent in the population and $\bar{X}_h(t)$ expresses the position of the center of mass of the individuals with the top 50% of fitness scores. During each iteration, $X_{elite}(t)$ is chosen at random from the set consisting of $X_b(t)$, $X_{se}(t)$, $X_{th}(t)$ and $\bar{X}_h(t)$. As depicted in Figure 3, the cross term $z_1 \times (X_b(t) - X_{ei}(t)) + (1 - z_1) \times (\bar{X}(t) - X_{ei}(t))$ is sketched in a two-dimensional parameter space. The parameter $z_1$ is responsible for managing the movement directed towards the current best individual and the movement towards the leader’s location. Combining these two cross terms is primarily designed to capture individual interactions.
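The following hedged Python sketch shows one exploration-phase update built from the quantities defined above (it assumes the elite pool, best position and population centroid have already been computed; names are illustrative):

```python
import numpy as np

def explore_step(x_i, elite_pool, x_best, x_centroid, rng=np.random.default_rng()):
    """Exploration-phase position update driven by Brownian motion (standard normal steps)."""
    bm = rng.standard_normal(x_i.shape)                   # Brownian-motion step vector
    x_elite = elite_pool[rng.integers(len(elite_pool))]   # random member of the elite set
    z1 = rng.random()                                     # z1 in (0, 1)
    cross = z1 * (x_best - x_i) + (1 - z1) * (x_centroid - x_i)
    return x_elite + bm * cross                           # entry-wise multiplication
```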

3.3. Exploitation Phase

When snow is converted to liquid water through melting, individuals are encouraged to exploit high-quality solutions around the best current solution instead of expanding into search domains with a highly fragmented signature. During the exploitation phase, the snowmelt process is modeled by means of the classical degree-day method, with the formulation described below:
$$M_d(t) = D_{ddf}(t) \times T_d(t)$$
$$D_{ddf}(t) = 0.35 + 0.25 \times \frac{e^{t/M_t} - 1}{e - 1}$$
$$T_d(t) = e^{-t/M_t}$$
$$M_d(t) = \left(0.35 + 0.25 \times \frac{e^{t/M_t} - 1}{e - 1}\right) \times e^{-t/M_t}$$
where $D_{ddf}(t) \in (0.35, 0.6)$ indicates the degree-day coefficient, $T_d(t)$ represents the mean temperature of the day, $M_d(t)$ stands for the snowmelt rate, $t$ denotes the current number of iterations and $M_t$ is the maximum number of iterations. The position update equation for this phase is displayed below:
$$X_{ei}(t+1) = M(t) \times X_b(t) + BM_{ei}(t) \otimes \left(z_2 \times (X_b(t) - X_{ei}(t)) + (1 - z_2) \times (\bar{X}(t) - X_{ei}(t))\right)$$
where $z_2 \in (-1, 1)$ is a stochastically generated value between −1 and 1.
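A small Python sketch of the snowmelt factor and the corresponding exploitation update, based on the formulas reconstructed above, might look like this (function and variable names are illustrative):

```python
import numpy as np

def snowmelt_rate(t, max_iter):
    """Degree-day snowmelt factor M(t) from the formulas above."""
    ddf = 0.35 + 0.25 * (np.exp(t / max_iter) - 1) / (np.e - 1)  # degree-day coefficient
    temp = np.exp(-t / max_iter)                                  # mean daily temperature term
    return ddf * temp

def exploit_step(x_i, x_best, x_centroid, t, max_iter, rng=np.random.default_rng()):
    """Exploitation-phase update around the current best solution."""
    bm = rng.standard_normal(x_i.shape)
    z2 = rng.uniform(-1, 1)                                       # z2 in (-1, 1)
    cross = z2 * (x_best - x_i) + (1 - z2) * (x_centroid - x_i)
    return snowmelt_rate(t, max_iter) * x_best + bm * cross
```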

3.4. Two-Population Mechanism

A dual-population mechanism has been introduced to balance exploration and exploitation in SAO. As mentioned earlier, some liquid water produced from snow, as well as the snow itself, can be converted to vapor for the exploration process. This indicates that, as the number of iterations grows, the population prefers to explore the structure of the solution space using the irregular patterns of motion governed by the Brownian motion principle described earlier. Therefore, a two-population mechanism was engineered to maintain exploration and exploitation, with the overall colony being partitioned randomly into two equal-sized subpopulations: $p$ stands for the population as a whole, $p_a$ and $p_b$ represent the two subpopulations, $p_a$ is responsible for exploring new possible solutions and $p_b$ is used to exploit known solutions. The sizes of $p$, $p_a$ and $p_b$ correspond to $N$, $N_a$ and $N_b$, respectively. In subsequent iterations, $N_b$ progressively decreases and $N_a$ gradually increases, according to the following formula:
$$N_a = N_a + 1, \quad N_b = N_b - 1, \quad \text{if } N_a < N$$
Combining the two phases, the complete position update equation of the SAO algorithm is displayed as follows:
$$X_{ei}(t+1) = \begin{cases} X_{elite}(t) + BM_{ei}(t) \otimes \left(z_1 \times (X_b(t) - X_{ei}(t)) + (1 - z_1) \times (\bar{X}(t) - X_{ei}(t))\right) \\ M(t) \times X_b(t) + BM_{ei}(t) \otimes \left(z_2 \times (X_b(t) - X_{ei}(t)) + (1 - z_2) \times (\bar{X}(t) - X_{ei}(t))\right) \end{cases}$$
The pseudo-code of SAO is illustrated in Algorithm 1.
Algorithm 1: Snow ablation optimizer (SAO)
Input: the maximum number of iterations $M_t$; population size $N$; $N_a = N_b = N/2$
Output: the overall optimal position $X_b$ and the fitness value $f_b$
1. Randomly initialize the population positions $X_{in}$, using Equation (1)
2. Calculate the fitness value $f_b$ for each individual
3. Record the present optimal position $X_b$ and fitness value $f_b$
4. Construct the elite pool, using Equations (7) and (8)
5. While ($t \le M_t$) do
6. Calculate the snowmelt rate, using Equation (11)
7. Divide the population into the two subpopulations $p_a$ and $p_b$
8. For $i = 1$ to $N_a$
9. Renew the position of the $i$th individual, using Equation (4)
10. End for
11. If $N_a < N$ then
12. $N_a = N_a + 1$, $N_b = N_b - 1$
13. End if
14. For $i = 1$ to $N_b$
15. Renew the position of the $i$th individual, using Equation (12)
16. End for
17. Calculate the fitness value $f_b$ and update the optimal position $X_b$
18. Update the elite pool
19. $t = t + 1$
20. End while

4. Improved SAO Algorithm

4.1. Tent Chaos Fusion Elite Reverse Learning

In heuristic algorithms, the initialization positions are generally generated randomly, which easily causes an uneven distribution of the initialized population, significantly impacting the overall convergence speed and the quality of the best solution. Therefore, it is crucial to have a homogeneously spread population, not only to enhance the search performance of the algorithm but also to recover the optimal result more quickly. Stochasticity and ergodicity characterize chaos, which also hastens the convergence of the computation method [69]. Tent chaos mapping [70] demonstrates a substantially more homogeneous traversal of the state space. This helps the algorithm better seek the optimal value in the exploration space. The distribution and frequency of tent chaotic mapping in two-dimensional space are demonstrated in Figure 4. Since a single chaotic initialization strategy suffers from the small-cycle phenomenon and is vulnerable to fixed points, it alone is not enough to generate a sufficiently uniform population, so an elite reverse learning strategy is added to select a more excellent population as the initial population. Tent chaos mapping is modeled using the following equation:
$$x_{i+1} = \begin{cases} x_i / a, & x_i \in [0, a) \\ (1 - x_i)/(1 - a), & x_i \in [a, 1] \end{cases}$$
where $a$ stands for a parameter between [0, 1], taken as 0.499 in this work, and $x_i$ and $x_{i+1}$ represent the position variables before and after the chaotic mapping.
In the elite reverse learning strategy, firstly, $N$ original solutions are produced through tent mapping; secondly, the initial solutions of the individuals in the current population are ranked, and the corresponding extreme point of each is taken as the elite individual $X^e_{i,j}$ according to Equation (15). The elite chaotic reverse method is then used to create $\bar{X}^e_{i,j}$. At last, all the original solutions produced by chaos and the reverse elite solutions are merged and sorted, and the top-ranked solutions are adopted as the initial population. The dynamic boundary formula of the elite reverse solution is as follows:
$$\bar{X}^e_{i,j} = s \times (\lambda_j + \eta_j) - X^e_{i,j}$$
$$\bar{X}^e_{i,j} = rand(\lambda_j, \eta_j), \quad \text{if } \bar{X}^e_{i,j} < \lambda_j \ \text{or} \ \bar{X}^e_{i,j} > \eta_j$$
where $s \in (0, 1)$, $\lambda_j = \min(X^e_{i,j})$ and $\eta_j = \max(X^e_{i,j})$.
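Below is a hedged Python sketch of this combined initialization: tent-map candidates are generated, their elite reverse (opposition) counterparts are built with the dynamic bounds, out-of-range entries are re-sampled, and the best half of the merged pool is kept. The initial seed `x0` and the helper names are assumptions for illustration, not values from the paper.

```python
import numpy as np

def tent_sequence(length, a=0.499, x0=0.37):
    """Generate a tent-map chaotic sequence in [0, 1]."""
    seq, x = np.empty(length), x0
    for k in range(length):
        x = x / a if x < a else (1 - x) / (1 - a)
        seq[k] = x
    return seq

def tent_elite_init(n_pop, dim, lb, ub, fitness, rng=np.random.default_rng()):
    """Tent-chaos initialization fused with elite reverse (opposition-based) learning."""
    chaos = tent_sequence(n_pop * dim).reshape(n_pop, dim)
    pop = lb + chaos * (ub - lb)                    # chaotic candidate population
    lam, eta = pop.min(axis=0), pop.max(axis=0)     # dynamic bounds per dimension
    s = rng.random()
    opp = s * (lam + eta) - pop                     # elite reverse solutions
    out = (opp < lam) | (opp > eta)                 # repair out-of-bound entries
    resample = rng.uniform(np.broadcast_to(lam, opp.shape), np.broadcast_to(eta, opp.shape))
    opp[out] = resample[out]
    merged = np.vstack([pop, opp])
    keep = np.argsort([fitness(x) for x in merged])[:n_pop]   # retain the n_pop best
    return merged[keep]
```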

4.2. Greedy Choice

In the exploratory phase of the basic SAO, newly generated solutions directly replace the solutions of the previous generation when positions are renewed. However, the fitness values of the new positions are not necessarily superior to those of the original positions, which may make it impossible to keep the optimal positions in some cases. Greedy selection [71] is therefore introduced to compare the fitness values of the freshly produced positions with those of the original ones, and then the dominant positions in each iteration are retained for the next one, whilst the worse ones are rejected. Using this technique in MISAO, the best individuals can be retained and the robustness of the algorithm can be enhanced. In addition, it facilitates the balancing of exploration and exploitation, which drives MISAO to produce higher-quality solutions. This mechanism improves the quality of global solutions by continuously preserving good individuals and keeps the population moving in a more optimal direction with each iteration. The process is represented below:
$$X_{gi}(t+1) = \begin{cases} X_{gi}(t), & \text{if } f(X_{gi}(t)) < f(X_{gi}(t+1)) \\ X_{gi}(t+1), & \text{otherwise} \end{cases}$$
where $f$ represents the fitness value, and $X_{gi}(t)$ and $X_{gi}(t+1)$ indicate the position before and after the update, respectively.
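A minimal sketch of this retention rule, assuming a minimization problem and a user-supplied `fitness` callable, is:

```python
def greedy_select(x_old, x_new, fitness):
    """Keep whichever candidate has the better (smaller) fitness value, per Equation (18)."""
    return x_old if fitness(x_old) < fitness(x_new) else x_new
```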

4.3. HHO Strengthening Development

During the exploitation phase, the original SAO position update formula has the disadvantage of insufficient exploitation and is prone to falling into local optima. Harris hawks track and detect prey with powerful eyesight, but sometimes the prey is not easily detected; they will wait, observe and monitor the desert area, which may take several hours, to find the prey. Harris hawks randomly perch at certain locations and wait to detect prey. Inspired by these predatory actions of the HHO, the position-updating formulae of the transition and exploitation phases of the HHO are introduced. The transition phase switches between different exploitation actions according to the escape energy of the prey. The formulas for the escape energy are expressed as follows:
$$E_1 = 2\left(1 - \frac{t}{M_t}\right)$$
$$E_0 = 2 \, rand - 1$$
$$E_{es} = 2 E_0 \left(1 - \frac{t}{M_t}\right)$$
where $E_0$ represents the initial state energy, varying within the interval (−1, 1), $E_{es}$ denotes the escape energy and $M_t$ means the maximum number of iterations. When the fleeing energy $|E_{es}| \ge 1$, the Harris hawk searches distinct areas to discover the location of its prey (global exploration); when $|E_{es}| < 1$, the Harris hawk searches for locations around the prey (local exploitation). When $|E_{es}| \ge 1$, the position renewal expression is as described below:
$$X_{ei}(t+1) = \begin{cases} X_{ra}(t) - r_1 \left|X_{ra}(t) - 2 r_2 X(t)\right|, & q \ge 0.5 \\ \left(X_b(t) - X_{mean}(t)\right) - r_3 \left(L_u + r_4 (U_l - L_u)\right), & q < 0.5 \end{cases}$$
where $X_{ra}(t)$ is a random individual position, $X(t)$ is the current individual position, $X_b(t)$ means the best current solution, $X_{mean}$ indicates the average position of the current population, $L_u$ and $U_l$ are the infimum and supremum of the problem and $r_1, r_2, r_3, r_4, q \in (0, 1)$.
The exploitation phase consists of four search strategies, and in this study, the soft and hard sieges are used as the exploitation strategies for prey predation. The formula for the soft siege phase, when $|E_{es}| \ge 0.5$ and $r \ge 0.5$, is given below:
$$X_{ei}(t+1) = X_b(t) - X(t) - E_{es} \left|J \cdot X_b(t) - X(t)\right|$$
$$J = 2(1 - r_5)$$
The hard siege phase formula, when $|E_{es}| < 0.5$ and $r \ge 0.5$, is listed below:
$$X_{ei}(t+1) = X_b(t) - E_{es} \left|X_b(t) - X(t)\right|$$
where $J$ is a stochastic value indicating the jump strength of the prey during the escape course and $r_5 \in (0, 1)$.
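The snippet below sketches how the HHO transition and siege rules described above could be wired together for a single individual (the fallback to the hard siege when neither stated condition holds is an implementation assumption; names are illustrative):

```python
import numpy as np

def hho_update(x_i, x_rand, x_best, x_mean, t, max_iter, lb, ub,
               rng=np.random.default_rng()):
    """Harris-hawks-style update: exploration when |E| >= 1, soft/hard siege otherwise."""
    e0 = 2 * rng.random() - 1                      # initial escape energy in (-1, 1)
    e = 2 * e0 * (1 - t / max_iter)                # escape energy E_es
    r1, r2, r3, r4, q, r = rng.random(6)
    if abs(e) >= 1:                                # global exploration
        if q >= 0.5:
            return x_rand - r1 * np.abs(x_rand - 2 * r2 * x_i)
        return (x_best - x_mean) - r3 * (lb + r4 * (ub - lb))
    if abs(e) >= 0.5 and r >= 0.5:                 # soft siege
        j = 2 * (1 - rng.random())                 # jump strength J
        return (x_best - x_i) - e * np.abs(j * x_best - x_i)
    return x_best - e * np.abs(x_best - x_i)       # hard siege (used as fallback here)
```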

4.4. RTH Global Exploration

In the basic SAO algorithm, both the exploration and exploitation stages guide the current individual towards the optimal individual’s location, and the majority of individuals in the group tend to gather around the perceived current optimum, so the algorithm is susceptible to premature convergence to a local optimum and its convergence accuracy will not be high. Based on this, the stooping dive of the RTH is introduced to enhance global exploration and improve the algorithm’s overall accuracy. In this phase, the red-tailed hawk stoops quickly from the best position and targets the victim from the best place in the low-flying phase, which can be illustrated by the following formulas:
$$G_r = 2\left(1 - \frac{t}{M_t}\right)$$
$$S = \sin^2\left(2.5 - \frac{t}{M_t}\right)$$
$$TF = 1 + 0.5 \times \sin\left(2.5 - \frac{t}{M_t}\right)$$
$$S_{s1}(t) = X(t) - TF \times X_{mean}$$
$$S_{s2}(t) = G \times X(t) - TF \times X_{best}$$
$$X(t) = S \times X_{best} + x(t) \times S_{s1}(t) + y(t) \times S_{s2}(t)$$
where $G_r$ is the gravity factor, $S$ is the acceleration, $TF$ is the transfer factor, $X_{mean}$ indicates the mean value of the current positions and $X_{best}$ stands for the current optimal position; the direction coefficients are expressed as given below:
$$\begin{cases} x_{di}(t) = Q(t) \cdot \sin(\lambda_d(t)) \\ y_{di}(t) = Q(t) \cdot \cos(\lambda_d(t)) \end{cases}$$
$$\begin{cases} Q(t) = Q_0 \left(h - \frac{t}{M_t}\right) \cdot rand \\ \lambda_d(t) = A \left(1 - \frac{t}{M_t}\right) \cdot rand \end{cases}$$
where $x_{di}(t)$ and $y_{di}(t)$ express the directional coefficients of the X-axis and Y-axis, respectively, $Q_0$ is the initial value of the radius, taking the value 0.5, $A$ denotes an angle value of 15, $rand$ is an $N \times 1$ random vector and $h$ indicates a control factor of 1.5.
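A hedged sketch of this stooping/swooping update, following the reconstructed formulas above (the signs inside the sine terms, the scalar treatment of the direction coefficients and the angle units are assumptions), is given below:

```python
import numpy as np

def rth_stoop(x_i, x_best, x_mean, t, max_iter, q0=0.5, angle=15.0, h=1.5,
              rng=np.random.default_rng()):
    """Red-tailed-hawk stooping update used here for global exploration (illustrative sketch)."""
    g = 2 * (1 - t / max_iter)                          # gravity factor G_r
    s = np.sin(2.5 - t / max_iter) ** 2                 # acceleration S (sign assumed)
    tf = 1 + 0.5 * np.sin(2.5 - t / max_iter)           # transfer factor TF (sign assumed)
    radius = q0 * (h - t / max_iter) * rng.random()     # Q(t)
    theta = angle * (1 - t / max_iter) * rng.random()   # lambda_d(t), taken literally from the paper
    x_dir, y_dir = radius * np.sin(theta), radius * np.cos(theta)  # direction coefficients
    step1 = x_i - tf * x_mean
    step2 = g * x_i - tf * x_best
    return s * x_best + x_dir * step1 + y_dir * step2
```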
The suggested MISAO algorithm in this section is illustrated in the flowchart in Figure 5, and the pseudo-code is depicted in Algorithm 2:
Algorithm 2: Multi-strategy improved snow ablation optimizer (MISAO)
Input: the maximum number of iterations $M_t$; population size $N$; $N_a = N_b = N/2$
Output: the overall optimal position $X_b$ and the fitness value $f_b$
1. Use the tent chaos plus elite reverse learning strategy to generate the population positions $X_{in}$, using Equations (15) and (16)
2. Calculate the fitness value $f_b$ for each individual
3. Record the present optimal position $X_b$ and fitness value $f_b$
4. Construct the elite pool, using Equations (7) and (8)
5. While ($t \le M_t$) do
6. Calculate the snowmelt rate, using Equation (11)
7. Divide the population into the two subpopulations $p_a$ and $p_b$
8. For $i = 1$ to $N_a$
9. Renew the position of the $i$th individual, using Equation (4)
10. Perform greedy choice and retain the better candidate solution, using Equation (18)
11. End for
12. If $N_a < N$ then
13. $N_a = N_a + 1$, $N_b = N_b - 1$
14. End if
15. For $i = 1$ to $N_b$
16. Update the position of the $i$th individual using the HHO strategy, using Equations (22)–(25)
17. End for
18. For $i = 1$ to $N$
19. Update the position of the $i$th individual using the RTH strategy, using Equation (31)
20. End for
21. Calculate the fitness value $f_b$ and update the optimal position $X_b$
22. Update the elite pool
23. $t = t + 1$
24. End while

4.5. Time Complexity Analysis

When improving the basic SAO, time complexity is an essential factor in judging the superiority of the method [72]. Time complexity describes the execution time of an algorithm in the worst-case scenario, indicating the time resources required to run it, and is used to compare the efficiency of different algorithms. This article uses big-O notation [73] to represent the time complexity. It mainly comes from three phases: initiation of the algorithm, updating the positions of individuals in the population and evaluation of the fitness.
At the time of analysis, the population size, the dimensionality of the optimization issue and the number of iterations are denoted as N, D and T, respectively. Then, the time complexity analysis is performed for the SAO algorithm and the MISAO algorithm. In the SAO algorithm, T1 = O(N·D) denotes the time complexity of the initial phase; in each iteration of the exploratory phase and the exploitation phase, N individuals undergo a position update over T rounds, so the complexity during the iterations is T2 = O(T·N·D). The fitness evaluation costs T3 = O(T·N), so the complexity of the SAO algorithm can be simplified as O(T·N·D). In MISAO, the tent chaos fusion elite reverse learning initiation phase has the same complexity as T1; the position updates of the exploration and exploitation phases give T2 = O(T·N·D), and the fitness evaluation is T3 = O(T·N); the greedy selection strategy added in the exploration phase contributes T4 = O(T·N), and using the RTH strategy for a position update of all individuals gives T5 = O(T·N·D), so the complexity of the MISAO algorithm can be simplified as O(T·N·D). In summary, MISAO does not significantly increase the time complexity compared with the SAO.

5. Experimental Validation

5.1. Baseline Functions and Parameter Settings

In an attempt to objectively assess the optimization ability and usefulness of the MISAO, in this work, 23 benchmark functions of CEC2005 are selected to experimentally verify the convergence speed and search accuracy and precision of the MISAO. The CEC2005 test set is widely considered a classic test set and is chosen by many researchers for estimating the performance of their algorithms. Taheri et al. [74] introduced a partial reinforcement optimizer (PRO) and tested and compared it with well-known MA algorithms on a test set that includes CEC2005. GhaemiDizaji et al. [75] investigated an opposed high-dimensional optimization algorithm (OHDA) on high-dimensional data using functions from CEC2005. Kumari et al. [76] proposed the boosted chimp optimizer algorithm and verified its advantages in optimality finding based on the CEC2005 test set. These illustrate that it covers a wide range of complex optimization problems and is able to effectively evaluate the performance of an algorithm in different scenarios. In the CEC2005 function set, the functions $F_1$–$F_7$ are unimodal functions with only one global optimum, which are designed to measure the algorithm’s convergence speed and optimality-searching ability. $F_8$–$F_{13}$ are multimodal functions with one global optimal solution and several local optimal solutions, designed to test the algorithm’s global searching ability and mining ability. $F_{14}$–$F_{23}$ are fixed-dimension composite modal functions, which are applied to measure the behavior of a particular algorithm or data structure in different dimensions and are often employed to strike a suitable balance in the relationship between the global search and local exploration of an algorithm. Table 2 demonstrates the names, dimensions, ranges of values and best values of the 23 benchmark functions.
To comprehensively evaluate the efficiency and optimization performance of the presented MISAO, MISAO is compared with various popular MAs. The eight popular and newer algorithms include GA, MVO, RIME, WOA, the chimp optimization algorithm (ChOA) [77], particle swarm optimization (PSO) [78], golden jackal optimization (GJO) [79] and the sparrow search algorithm (SSA) [80]. Illustrated in Table 3 are the parameter settings for each method. The same population size N was set to 30 for all the algorithms, and the largest number of iterations was 500; to maintain fairness, each algorithm was run separately 30 times on each benchmark function. The experiments in this research paper are operated on a Windows 11 Professional operating system, processor Intel(R) Core (TM) i7-10700, 2.90 GHz, 8 GB of RAM, and coded using MATLAB 2022b. The convergence accuracy and stability of the proposed algorithm are reflected by comparing the mean, standard deviation and optimal values of the output results on the 23 functions, and the optimal outcomes obtained are marked in black bold style.

5.2. CEC2005 Testing Set Test Comparison

To thoroughly test the optimality-seeking performance of the MISAO algorithm, comparative tests are performed with prevailing popular intelligent optimizers. The results of the test experiments are displayed in Table 4. The analysis of the results reveals that, on the unimodal functions $F_1$–$F_7$, the MISAO algorithm ranks first: except that SSA has the smallest optimal value, average value and standard error on function $F_6$ and ChOA has the smallest standard error on function $F_5$, MISAO has the smallest results on all three indicators and takes first place. At the same time, there is a significant improvement compared to SAO, which indicates the powerful seeking ability of MISAO on unimodal functions and that the integrated effect of the mentioned algorithm is stronger than the other algorithms in terms of local exploitation ability. On the multimodal functions $F_8$–$F_{13}$ with different expression patterns, except for the standard deviation of MVO on the $F_{13}$ function, the three indicators of the remaining functions based on MISAO are all the lowest and ranked first, which indicates that MISAO has better exploration capability in the process of looking for the optimum of multi-peak functions, has greater stability in the quality of the search for the optimal solution and shows more stable performance and significant advantages over the common algorithms. Analyzing the fixed-dimension functions $F_{14}$–$F_{23}$, it was found that, on the $F_{14}$ function, MISAO achieves the minimum optimal value, while its mean and standard deviation are close to those of RIME and its standard deviation is slightly inferior to those of SAO, PSO and GA; although its performance on this function is not as good as PSO’s, it still ranks near the top among the algorithms. The optimal value, mean and standard deviation of MISAO on the functions $F_{15}$, $F_{17}$ and $F_{20}$ are all the smallest and ranked first, which shows the stability of the proposed MISAO algorithm’s comprehensive performance and its ability to find the optimal solution in a better and more rapid way, achieving a balance between exploration and exploitation ability.
The convergence of the different algorithms during the iteration process on the 23 functions is demonstrated in Figure 6, and the convergence curves are plotted by averaging the optimal solutions in each iteration over 30 runs. The horizontal axis depicts the number of iterations and the vertical axis corresponds to the fitness values. The convergence effect of each algorithm was analyzed. The decreasing trend of the curves on most of the tested functions, such as $F_1$–$F_5$, $F_{11}$ and $F_{13}$–$F_{16}$, reveals that the MISAO algorithm has the quickest convergence and the greatest convergence precision as the number of iterations grows. The convergence curves of the functions $F_9$, $F_{12}$ and $F_{17}$–$F_{23}$ converge approximately in a straight line, and the best values are found with the fastest possible convergence. For functions $F_{10}$ and $F_{18}$, although MISAO does not reach the fastest convergence and the best fitness value, it is finally ranked at the top in terms of accuracy and precision. The MISAO algorithm outperforms other common methods for optimizing unimodal, multimodal and fixed-dimension functions in local exploitation and global exploration. Meanwhile, the algorithm also has outstanding results in terms of convergence accuracy and speed. The comparative experiments fully demonstrate the remarkable ability of the MISAO; this excellent performance substantially contributes to enhancing the convergence precision and efficiency of the calculation technique.

5.3. Strategy Effectiveness Analysis

In order to prove the usefulness of the improved method, the MISAO algorithm developed in this document is compared with the original SAO and with single-strategy variants on the 23 test functions over 30 independent runs: SAO1 denotes the tent chaos fusion elite reverse learning method, SAO2 the greedy selection strategy, SAO3 the HHO-enhanced exploitation strategy and SAO4 the RTH global exploration strategy. The results of the independent test experiments are listed in Table 5. Figure 7 demonstrates the convergence processes of the different strategy variants, from which the differences between the strategies can be seen more intuitively. The analysis results are as follows. Firstly, SAO1 helps to overcome the problem of poor optimization quality by introducing the chaos mechanism, which enhances the population diversity and thus boosts the global search capability of the algorithm. Secondly, SAO2 ensures the inheritance of high-quality solutions by retaining excellent solutions during the search process, which in turn improves the search efficiency and optimization accuracy. SAO3, on the other hand, strengthens the utilization of high-quality solutions during the exploitation period, which heightens the convergence speed and avoids slipping into a local optimum. Finally, SAO4 further enhances the search scope by integrating a global exploration mechanism, which enables the algorithm to identify the global optimal solution more efficiently when facing sophisticated tasks; on $F_1$–$F_2$, $F_4$, $F_9$–$F_{12}$, $F_{15}$, $F_{17}$ and $F_{20}$, MISAO achieves optimal results in terms of the three indicators.
Except for the $F_8$ function, where SAO2 performed well on two indicators (the standard deviation and the average value), the remaining functions based on MISAO had one or more indicator values that outperformed the comparison algorithms using only one strategy. Compared with the underlying SAO, the algorithms using a single tactic have improved overall optimization performance, indicating that the effective performance of each tactic helps the algorithm balance the exploration and exploitation functions in time to escape local optima and achieve optimal solution seeking.
Meanwhile, the analysis of the convergence effect on the 23 test functions reveals that each variant has a different degree of improvement compared to the original SAO: the tent chaos fusion elite reverse strategy makes the algorithm traverse the search space better by uniformly generating the initial positions, the greedy choice improves the local search ability in the exploration phase, the HHO strategy enhances the exploitation and the RTH updates the optimal solution direction to speed up the convergence of MISAO to the holistic optimal option. The close convergence results of SAO4 and MISAO can be clearly noticed on the functions $F_1$–$F_4$, $F_{11}$ and $F_{13}$–$F_{16}$, indicating that the RTH strategy takes a fundamental role in enhancing the search capability of MISAO. The combination of the remaining strategies gives MISAO a breakthrough in terms of comprehensiveness and enables it to maintain its excellent convergence rate and local-optimum-avoidance potential. Overall, MISAO converges to the optimal solution as fast as possible on most of the tested functions, and its convergence precision is also greater than that of the remaining competing algorithms, demonstrating its excellent exploration and exploitation capabilities. The mentioned outcomes demonstrate that the integrated enhancement method described in this study can prevent the algorithm from slipping into a local optimum and, at the same time, increases the convergence rate and the comprehensive search ability of the basic SAO in solving the CEC2005 functions. The four proposed strategies all enhance the algorithm to different degrees. Each strategy has its advantages in dealing with different problems. The integration of their respective advantages finally produces satisfactory results and proves the effectiveness of the recommended enhanced algorithm and the relevance of the improvement strategies used.

5.4. Runtime Analysis

In order to analyze the sensitivity of the proposed algorithms to running time more effectively, we conducted time comparison experiments on the CEC2005 test set, using the results of 30 independent runs. The number of iterations for each algorithm was set to 500, and finally, the running time of each function under the different intelligent optimization algorithms was obtained; the experimental results are shown in Figure 8. Each line in the figure represents the total running time of the single-peak functions, multi-peak functions and fixed-dimension functions, respectively, as well as the average running time of the 23 functions over 30 runs. The trend of the lines clearly shows that ChOA has the longest running time among the compared algorithms. In the results of the MISAO algorithm, the average running time is only slightly higher than that of the SAO algorithm, which is mainly due to the fact that the MISAO algorithm incorporates more strategy features, such as greedy selection to obtain the best solution and RTH global updating of the locations when augmenting the global exploration. This suggests that our proposed MISAO algorithm does not consume too much time, although some time resources are appropriately sacrificed in applying the improved strategies. This also validates the effectiveness of our algorithm from another perspective.

5.5. Statistical Analysis

A structured quantitative study was carried out for the optimization capabilities of the suggested MISAO. The statistical analyses of the given algorithms determine their rank in these experiments. Based on this, the non-parametric Friedman test [81] and Wilcoxon rank test [82] (+/=/−) were used to make the corresponding analyses in this article. These two statistical tests contribute significant meaning to MISAO. On the one hand, Friedman’s test is able to assess the relative performance of multiple algorithms under different experimental conditions, revealing significant differences by comparing the rankings of the algorithms, and the results of this test provide statistical support for MISAO’s superiority, demonstrating its consistency and reliability in terms of performance metrics. On the other hand, the Wilcoxon rank test focuses on inter-algorithm comparisons, which further validates the superiority of MISAO in solving the optimal design problem by comparing the strengths and weaknesses between pairs of algorithms. The statistical results not only reinforce the claims made but also show the remarkable improvement of MISAO in terms of accuracy and convergence speed, which provides strong evidence.

5.5.1. Friedman Test

The Friedman test is a non-parametric method of testing. It is primarily used to determine whether there are substantial differences between different algorithms. The test can be viewed as a simulation of the repeated measures process in ANOVA. The original assumption of the Friedman test is that the medians of the data sets involved in the comparison are equal. The test is performed by ranking the algorithms based on their optimal solutions (minimum) in each run for each problem and then calculating the average ranking of each algorithm over all problems. This ranking-based analysis method can better reflect the overall performance differences of different algorithms on various problems and provide a basis for subsequent multiple comparisons. The Friedman test pays more attention to the relative performance differences between the algorithms than to directly comparing the absolute expressive power of the algorithms on each problem. Each algorithm was run independently 30 times, and each algorithm’s Friedman values and overall rankings are illustrated in Table 6. A stacked plot of Friedman values for MISAO and nine other algorithms based on the 23 functions of CEC2005 is displayed in Figure 9. The results presented in the tables and figures clearly show that the recommended algorithm MISAO is significantly different from the other algorithms on most of the test functions. The Friedman mean of the proposed method is 2.08, ranked first in the overall rankings compared to the other algorithms. It also demonstrates that the MISAO is superior to the other algorithms. In this study, it can be seen from the Friedman test results that the MISAO algorithm exceeded the other compared algorithms in all performance indicators, showing higher accuracy and faster convergence speed. This result not only validates the effectiveness of the MISAO algorithm but also highlights its advantages in tackling complex optimization projects and practical applications.
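For reference, a sketch of how such a Friedman analysis could be reproduced with SciPy on a hypothetical results matrix (23 functions × 10 algorithms) is shown below; the data here are random placeholders, not the values reported in Table 6:

```python
import numpy as np
from scipy.stats import friedmanchisquare

# results[i, j]: mean best value of algorithm j on benchmark function i (placeholder data)
results = np.random.rand(23, 10)

# SciPy expects one sample per algorithm (treatment), measured across the 23 functions (blocks).
stat, p_value = friedmanchisquare(*results.T)
print(f"Friedman statistic = {stat:.3f}, p-value = {p_value:.4f}")

# Average rank of each algorithm (lower is better), analogous to the ranking in Table 6.
avg_ranks = np.argsort(np.argsort(results, axis=1), axis=1).mean(axis=0) + 1
print(avg_ranks)
```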

5.5.2. Wilcoxon Rank Test

The nonparametric Wilcoxon rank test was employed to gauge the significance of statistical distinctions between MISAO and the other algorithms at the 0.05 level of significance. Each algorithm is run independently 30 times, and Table 7 illustrates the outcome of the study; the symbols here carry three meanings: ‘+’ means that the suggested algorithm is better than the reference algorithm, ‘=’ denotes that there is basically no difference from the reference algorithm and ‘−’ indicates that the suggested algorithm is poorer than the reference algorithm. According to the results, out of the 23 functions, MISAO was superior to RIME on 20, superior to PSO on 20, superior to ChOA on 22, superior to GJO on 18, superior to MVO on 20, superior to GA on 22, superior to SSA on 20, superior to WOA on 20 and superior to SAO on 15. MISAO generally has a stronger competitive advantage on the CEC2005 test functions, clearly outperforming all other participants. This demonstrates its remarkable benefits in terms of convergence speed and solution accuracy. The results of the Wilcoxon test provide plausibility support for the performance of MISAO, further supporting the usefulness of its multi-strategy improvement, underlining the robustness and consistency of MISAO in the face of intricate questions and hinting at the possibility of the algorithm’s better resilience and optimization capabilities in real-world scenarios.
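A corresponding sketch of one pairwise comparison at the 0.05 significance level, using SciPy’s Wilcoxon rank-sum test on hypothetical per-run data, could be:

```python
import numpy as np
from scipy.stats import ranksums

alpha = 0.05
# 30 independent best values per algorithm on one function (placeholder data)
misao_runs = np.random.rand(30)
rival_runs = np.random.rand(30) + 0.1

stat, p = ranksums(misao_runs, rival_runs)
if p >= alpha:
    verdict = "="                                    # no significant difference
else:
    verdict = "+" if misao_runs.mean() < rival_runs.mean() else "-"
print(f"p = {p:.4f}, verdict: {verdict}")
```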

6. 3D UAV Path Planning

UAVs can play an influential role in agriculture, industry and other fields and bring a strong impetus to the development of knowledge and engineering expertise, and route planning for UAVs is a key topic in this development. UAV path planning research is not only a demand of technological development but also an important step towards intelligence and automation, which is of far-reaching significance for promoting the application of UAVs in a broader range of fields.

6.1. 3D Single-UAV Path Planning

In the current rapid development of UAV technology, the importance of single UAV path planning has become increasingly prominent. A single UAV can provide efficient and flexible solutions when performing specific tasks and plays a great role in agricultural monitoring, disaster relief and rescue. The effectiveness of single-UAV path planning not only affects the task-completion efficiency but also directly relates to the safety and reliability of the task. Single-UAV path planning has an extensive scope of potential applications in practice and provides a wealth of research topics for researchers. Through an in-depth discussion of single-UAV path planning algorithms, we can improve the autonomy and intelligence of UAVs and lay a solid foundation for future UAV applications.

6.1.1. Mathematical Modeling of Single-UAV

(1) Terrain environment modeling
UAVs are used in various fields due to their powerful advantages. In UAV path planning, terrain information must be acquired rapidly and exactly in order to undertake tasks, such as security investigations and topographic measurements. Topographic modeling affects the effectiveness and feasibility of path planning to a certain extent. The 3D route planning project for UAVs is depicted as a mission in a specified 3D space that accounts for terrain threats and the UAV’s own restrictions and solves for the optimal route of motion that satisfies the constraints from beginning to end [83]. Figure 10 illustrates the corresponding topographic environmental conditions, and the mathematical model of the terrain in this document can be established according to the formula below:
$$Z(x, y) = \sum_{i=1}^{n} h_i \exp\left[ -\left( \frac{x - x_i}{x_{si}} \right)^2 - \left( \frac{y - y_i}{y_{si}} \right)^2 \right]$$
where $n$ is the total number of peaks, $(x_i, y_i)$ is the center coordinate of the $i$-th peak, $h_i$ is the terrain parameter controlling the height of the peak, and $x_{si}$ and $y_{si}$ are the slopes of the peak along the X and Y coordinate axes, respectively.
The interaction of the ground and obstacles needs to be taken into account during flight, so the ground and obstacles are modeled with the following equation:
$$Z = \sin(y + 1) + \sin(x) + \sin\left(y^2 + x^2\right) + 2\cos(y) + \cos\left(y^2 + x^2\right)$$
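As an illustration of how such a surface is evaluated in practice, the short Python sketch below (not the authors' code) implements the peak-based terrain model of the equation above on a regular grid; the number of peaks and all peak parameters are placeholder values chosen only for this example, and the closed-form ground/obstacle surface can be sampled in exactly the same way.

```python
import numpy as np

# Placeholder peaks: (x_i, y_i, h_i, xs_i, ys_i) -- centre, height and slopes.
peaks = [
    (50.0, 60.0, 40.0, 18.0, 20.0),
    (120.0, 90.0, 55.0, 25.0, 22.0),
    (160.0, 150.0, 35.0, 15.0, 17.0),
]

def terrain_height(x, y):
    """Z(x, y) = sum_i h_i * exp(-((x - x_i)/xs_i)^2 - ((y - y_i)/ys_i)^2)."""
    z = np.zeros_like(np.asarray(x, dtype=float))
    for xi, yi, hi, xsi, ysi in peaks:
        z += hi * np.exp(-(((x - xi) / xsi) ** 2) - (((y - yi) / ysi) ** 2))
    return z

# Sample the surface over the planning area used later (0..200 in x and y).
xx, yy = np.meshgrid(np.linspace(0, 200, 201), np.linspace(0, 200, 201))
zz = terrain_height(xx, yy)
print(zz.shape, zz.max())
```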
(2) Flight cost modeling
To better evaluate the quality of the paths, this paper takes the length of the path, the height of the terrain and the smoothness of the path into account to establish a cost model. Suppose the set of node sequences is $\{S, M_1, M_2, \ldots, M_{n-1}, G\}$; this set consists of a sequence of $n + 1$ nodes, where $S$ and $G$ indicate the start and end points of the UAV, respectively, and $M_1, M_2, \ldots, M_{n-1}$ are the intermediate nodes of the flight. The 3D representations of the start and end points are $S = (x_{s0}, y_{s0}, z_{s0})$ and $G = (x_{en}, y_{en}, z_{en})$, and $M_i = (x_i, y_i, z_i)$ denotes an intermediate node. In this simulation, the cost of flying the UAV is modeled with the objective function described below:
$$F_{t1} = w_1 P_1 + w_2 H_1 + w_3 A_1$$
where $F_{t1}$ is the objective function, $P_1$ is the path-length function, $H_1$ is the height function, $A_1$ is the path-smoothness function and $w_1$, $w_2$ and $w_3$ denote the weights, which can be set according to the requirements of different scenes. In this experiment, they were set to 0.45, 0.35 and 0.2, respectively.
The path length is often a key metric in UAV path planning problems. Shorter paths can reduce time consumption, save fuel and improve mission reliability. Usually, a UAV’s total distance in completing a mission is calculated by accumulating the distances between adjacent path nodes with the following formula.
$$P_1 = \sum_{i=1}^{n} \sqrt{(\Delta x_{pi})^2 + (\Delta y_{hi})^2 + (\Delta z_{ai})^2}$$
$$\Delta x_{pi} = X_{i+1} - X_i, \quad \Delta y_{hi} = Y_{i+1} - Y_i, \quad \Delta z_{ai} = Z_{i+1} - Z_i$$
where $\Delta x_{pi}$ is the change of coordinates in the x direction, $\Delta y_{hi}$ is the change of coordinates in the y direction and $\Delta z_{ai}$ is the change of coordinates in the z direction.
When undertaking its flight missions, the UAV shall fulfil its tasks safely and efficiently within a controllable altitude range; the flight height should always keep a distance from the terrain level, so an appropriate altitude constraint must be chosen, for which the formula is as follows:
$$H_1 = \sum_{i=1}^{n} \left( z_i - \frac{1}{n}\sum_{i=1}^{n} z_i \right)^2$$
where $\frac{1}{n}\sum_{i=1}^{n} z_i$ is the mean flight height.
The UAV will encounter obstacles during flight and needs time to change direction. To ensure that it always keeps a favorable orientation along its route, a smoothness cost is used, which also accounts for the influence of the climbing angle when the UAV turns. For adjoining line segments $i$ and $i+1$, their lengths and the dot product of the segment vectors are computed with the formulas below:
$$L_i = \sqrt{m_{xi}^2 + m_{yi}^2 + m_{zi}^2}, \quad L_{i+1} = \sqrt{m_{x,i+1}^2 + m_{y,i+1}^2 + m_{z,i+1}^2}, \quad P_i = m_{xi} m_{x,i+1} + m_{yi} m_{y,i+1} + m_{zi} m_{z,i+1}$$
$$D_i = \frac{P_i}{L_i L_{i+1}}$$
$$A_1 = \sum_{i=0}^{n-1} D_i$$
where $m_{xi} = x_{i+1} - x_i$, $m_{yi} = y_{i+1} - y_i$ and $m_{zi} = z_{i+1} - z_i$ are the differences between adjacent elements along the three axes, respectively.
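To make the cost model concrete, the following minimal Python sketch (not the authors' implementation) evaluates $F_{t1}$ for a list of waypoints: the path-length term accumulates segment lengths, the height term measures the deviation of the waypoint heights from their mean, and the smoothness term accumulates the normalized dot products $D_i$ of adjacent segments as written above (the sign convention of $A_1$ follows the printed formula and may differ in the authors' implementation). The weights follow the values given in the text (0.45, 0.35, 0.2); the intermediate waypoints are placeholders.

```python
import numpy as np

def flight_cost(waypoints, w1=0.45, w2=0.35, w3=0.2):
    pts = np.asarray(waypoints, dtype=float)        # shape (n + 1, 3): S, M_1..M_{n-1}, G
    seg = np.diff(pts, axis=0)                      # segment vectors (dx, dy, dz)
    p1 = np.sum(np.linalg.norm(seg, axis=1))        # P1: total path length
    z = pts[:, 2]
    h1 = np.sum((z - z.mean()) ** 2)                # H1: squared deviation from mean height
    a, b = seg[:-1], seg[1:]                        # adjacent segment pairs
    d = np.einsum("ij,ij->i", a, b) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    )
    a1 = np.sum(d)                                  # A1: smoothness term (as printed)
    return w1 * p1 + w2 * h1 + w3 * a1

route = [(0, 0, 20), (60, 45, 28), (120, 110, 35), (200, 200, 30)]  # placeholder route
print(flight_cost(route))
```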

6.1.2. Single-UAV Path Planning Results

In this part, the starting point of the UAV flight trajectory is set to (0, 0, 20) and the endpoint to (200, 200, 30), and cubic spline interpolation is used to smooth the planned path. Seven representative state-of-the-art meta-heuristics, MVO, RIME, WOA, ChOA, PSO, GJO and SSA, are selected for the comparative experiments. To eliminate the random tolerance of the calculation outcomes, the population size is set to 30, the maximum number of iterations to 500 and the number of independent runs to 30, and the mean of these runs is used to plan the 3D UAV trajectory. The results are displayed in Table 8, which shows that MISAO significantly outperforms the other optimization techniques: its worst, median and mean values are all the smallest, ranking first among all the compared algorithms, which indicates the stable performance of the proposed planner and its outstanding role in reducing the cost of UAV planning. Its optimal value is slightly higher than that of the PSO algorithm, ranking second among all the algorithms, but the overall effect is greatly improved over the other algorithms, which indicates that MISAO has a strong competitive advantage in UAV path planning. Meanwhile, Figure 11 depicts the planned paths of each competitor, further validating the great potential of MISAO in solving the 3D UAV path planning problem.
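The cubic-spline smoothing mentioned above can be sketched as follows (not the authors' code): each coordinate of the planned waypoints is interpolated against a normalized path parameter and re-sampled densely; the waypoints and the number of samples are placeholders.

```python
import numpy as np
from scipy.interpolate import CubicSpline

waypoints = np.array(
    [[0, 0, 20], [55, 40, 26], [110, 95, 33], [160, 150, 31], [200, 200, 30]],
    dtype=float,
)
t = np.linspace(0.0, 1.0, len(waypoints))            # parameter of the control points
splines = [CubicSpline(t, waypoints[:, k]) for k in range(3)]

t_fine = np.linspace(0.0, 1.0, 200)                  # dense samples along the route
smooth_path = np.column_stack([s(t_fine) for s in splines])
print(smooth_path.shape)                             # (200, 3) smoothed flight path
```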

6.2. 3D Multi-UAV Path Planning

In real life, as practical application scenarios grow in complexity, a single UAV is clearly not sufficient to support all situations; therefore, multiple UAVs working together have gradually become an important mode of application. This kind of collaboration not only enhances the efficiency of mission fulfilment and reduces consumption but also helps expand the application fields of UAVs, such as agricultural monitoring, environmental protection, logistics and transport and disaster rescue. The sophistication of the operational mission often introduces a variety of synergy and performance constraints when multiple UAVs perform coordinated missions. Therefore, the study of multi-UAV path planning has important theoretical and practical significance.

6.2.1. Mathematical Modeling of Multi-UAV

The terrain environment for multiple UAVs is modeled in the same way as for a single UAV, according to Equation (34). The flight cost model mainly considers the path, height, threat and angle costs. The corresponding mathematical models are described below.
(1) Path cost
In UAV path planning, the shorter the planned trajectory, the less time and power are consumed, i.e., the lower the cost of flight. The flight distance is determined by summing the Euclidean distances between adjacent path nodes. The set of node sequences is consistent with that of the single UAV: the start point is $S$, the intermediate nodes are $M$ and the endpoint is $G$. The path cost is expressed as follows:
$$P_2 = \sum_{i=1}^{n} \sqrt{(x_{pi} - x_{p,i-1})^2 + (y_{pi} - y_{p,i-1})^2 + (z_{pi} - z_{p,i-1})^2}$$
(2) Height cost
Maintaining a stable altitude during flight reduces energy consumption. To keep the flight safe, the UAV height is restricted to a defined range bounded by a minimum and a maximum altitude so that the boundaries are not exceeded; the altitude cost is given below:
$$H_2 = \sum_{i=0}^{n} J_i$$
$$J_i = \begin{cases} \infty, & h_i < 0 \\ \left| h_i - \dfrac{h_{\max} + h_{\min}}{2} \right|, & \text{otherwise} \end{cases}$$
where $h_{\max}$ denotes the maximum height and $h_{\min}$ denotes the minimum height.
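A minimal sketch of the altitude cost is given below (not the authors' code). The printed equation only makes the in-range case explicit, so the large penalty applied when a point falls below the terrain ($h_i < 0$) and the numerical bounds are assumptions made for illustration.

```python
import numpy as np

def height_cost(h, h_min=20.0, h_max=180.0, penalty=1e6):
    """H2: keep each relative height h_i near the middle of [h_min, h_max]."""
    h = np.asarray(h, dtype=float)                    # heights above the terrain
    j = np.abs(h - (h_max + h_min) / 2.0)             # deviation from the corridor centre
    j = np.where(h < 0, penalty, j)                   # assumed below-terrain violation penalty
    return float(np.sum(j))

print(height_cost([35.0, 80.0, -5.0, 120.0]))         # placeholder heights
```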
(3) Threat cost
Different types of space menaces, such as radar and obstacles, are sources of threats when flying drones, so the uncertainty they introduce must be considered in UAV path planning. Let $\mathrm{threats}$ denote the set of threat points, where each threat point consists of a location and a radius, $\mathrm{threats}_k = (x_k, y_k, r_k)$. The threat cost is built as follows:
$$T = \sum_{k=1}^{|\mathrm{threats}|} \sum_{i=1}^{n-1} M_{ki}$$
$$M_{ki} = \begin{cases} 0, & d_{ki} > (r_k + d_u + 10 d_u) \\ \infty, & d_{ki} < (r_k + d_u) \\ (r_k + d_u + 10 d_u) - d_{ki}, & \text{otherwise} \end{cases}$$
where $d_{ki}$ stands for the shortest distance from the threat point to the path segment, $d_u$ represents the size of the UAV and $r_k$ is the radius of the threat area.
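The threat cost can be sketched as follows (not the authors' code): the shortest horizontal distance $d_{ki}$ from each threat centre to each path segment is obtained by projecting the centre onto the segment, and the piecewise penalty follows the reconstructed rule above, with the value used inside the inflated threat region and all numeric parameters assumed for illustration.

```python
import numpy as np

def point_segment_distance(c, p, q):
    """Shortest distance from point c to the segment p-q (2D)."""
    pq = q - p
    t = np.clip(np.dot(c - p, pq) / (np.dot(pq, pq) + 1e-12), 0.0, 1.0)
    return float(np.linalg.norm(c - (p + t * pq)))

def threat_cost(path, threats, d_u=5.0, penalty=1e6):
    xy = np.asarray(path, dtype=float)[:, :2]          # horizontal projection of the path
    total = 0.0
    for (xk, yk, rk) in threats:                       # each threat: centre and radius
        c = np.array([xk, yk])
        for p, q in zip(xy[:-1], xy[1:]):
            d = point_segment_distance(c, p, q)
            if d > rk + d_u + 10 * d_u:                # safely outside the warning band
                m = 0.0
            elif d < rk + d_u:                         # inside the inflated threat (assumed penalty)
                m = penalty
            else:                                      # inside the warning band
                m = (rk + d_u + 10 * d_u) - d
            total += m
    return total

path = [(150, 150, 50), (400, 330, 80), (650, 520, 110), (900, 720, 150)]   # placeholder route
threats = [(420, 360, 60.0), (700, 560, 80.0)]                              # placeholder (x_k, y_k, r_k)
print(threat_cost(path, threats))
```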
(4) Angle cost
When the UAV is in flight, it needs to fly smoothly, and the climb and turning angles should be controlled to keep the UAV in a safe condition. For each point $i$ (from 1 to $N-2$), the horizontal projection of the previous segment is $\vec{L}_{sf} = (x_{L_{j+1}} - x_{L_j},\; y_{L_{j+1}} - y_{L_j},\; 0)$ and that of the following segment is $\vec{L}_{sb} = (x_{L_{j+2}} - x_{L_{j+1}},\; y_{L_{j+2}} - y_{L_{j+1}},\; 0)$.
The angle of climb is calculated as follows:
$$C_{a1} = \tan^{-1}\left( \frac{z_{\mathrm{abs}}(i+1) - z_{\mathrm{abs}}(i)}{\left\| \vec{L}_{sf} \right\|} \right)$$
$$C_{a2} = \tan^{-1}\left( \frac{z_{\mathrm{abs}}(i+2) - z_{\mathrm{abs}}(i+1)}{\left\| \vec{L}_{sb} \right\|} \right)$$
where $z_{\mathrm{abs}}(i)$ expresses the height of the $i$-th point and $C_{a1}$ and $C_{a2}$ indicate the angle of climb from point $i$ to $i+1$ and from $i+1$ to $i+2$, respectively.
Turning angles are calculated as follows:
$$T_a = \tan^{-1}\left( \frac{\left\| \vec{L}_{sf} \times \vec{L}_{sb} \right\|}{\vec{L}_{sf} \cdot \vec{L}_{sb}} \right)$$
where $\cdot$ denotes the dot product and $\times$ denotes the cross product.
The total angle cost is the sum of the angle of climb and the angle of turn and can be established according to the following formula:
$$A_2 = \beta_1 \sum_{i=1}^{n-2} T_a + \beta_2 \sum_{i=1}^{n-1} \left| C_{a2} - C_{a1} \right|$$
where β 1 and β 2 are penalty constants.
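The angle cost can be sketched as follows (not the authors' code): climb angles are obtained from the height change over the length of the horizontal projection of each segment, turning angles from the cross and dot products of consecutive horizontal projections, and the two sums are weighted by the penalty constants $\beta_1$ and $\beta_2$ (values assumed here); the example path is a placeholder.

```python
import numpy as np

def angle_cost(path, beta1=1.0, beta2=1.0):
    pts = np.asarray(path, dtype=float)
    horiz = np.diff(pts[:, :2], axis=0)                    # horizontal projections L_sf, L_sb
    dz = np.diff(pts[:, 2])
    climb = np.arctan2(dz, np.linalg.norm(horiz, axis=1))  # climb angle of each segment
    turn_sum, climb_sum = 0.0, 0.0
    for i in range(len(horiz) - 1):
        a, b = horiz[i], horiz[i + 1]
        cross = abs(a[0] * b[1] - a[1] * b[0])
        turn_sum += np.arctan2(cross, np.dot(a, b))        # turning angle between segments
        climb_sum += abs(climb[i + 1] - climb[i])          # change in climb angle
    return beta1 * turn_sum + beta2 * climb_sum

path = [(150, 150, 50), (400, 330, 80), (650, 520, 110), (900, 720, 150)]   # placeholder route
print(angle_cost(path))
```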
The overall objective function can be represented as follows:
$$F_{t2} = w_4 P_2 + w_5 H_2 + w_6 T + w_7 A_2$$
where $w_4$, $w_5$, $w_6$ and $w_7$ indicate the weights, which were set to 5, 1, 10 and 1, respectively.
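The total objective then simply combines the four component costs with the stated weights; the short sketch below (not the authors' code) uses illustrative component values only.

```python
def total_cost(p2, h2, t, a2, w4=5.0, w5=1.0, w6=10.0, w7=1.0):
    """F_t2 = w4*P2 + w5*H2 + w6*T + w7*A2 with the weights given in the text."""
    return w4 * p2 + w5 * h2 + w6 * t + w7 * a2

print(total_cost(p2=1040.0, h2=120.0, t=4.0, a2=80.0))   # illustrative component values
```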
The UAV flight environment is modeled according to the following equation:
$$Z(x, y) = 2\sqrt{2}\cos\left(\frac{x - y}{2}\right)\cos\left(\frac{\pi}{4} \cdot \frac{x + y}{2}\right) + \sin\left(x^2 + y^2\right) + \cos\left(x^2 + y^2\right)$$

6.2.2. Multi-UAV Path Planning Results

In this part, the start point is set to (150, 150, 50) and the endpoint to (900, 720, 150). As in the single-UAV experiments, seven representative algorithms are chosen for comparison. To eliminate random errors in the calculation results, the settings are the same as those described for the single-UAV path planning experiments, and the average over the runs is used to plan the 3D UAV flight trajectories. The path, threat, height and angle costs of the various methods for multi-UAV route planning are displayed in Table 9, while the total cost of each UAV is summarized in Table 10. Although ChOA is slightly better than MISAO in terms of path cost, ChOA is affected by the penalty factor in the threat cost, which leads to poor overall output; considering the four costs together, the MISAO algorithm therefore still performs better. To further demonstrate the utility of the proposed algorithm in path planning and to reduce the effect of experimental randomness, we changed the start and end points of the flights to (120, 130, 80) and (870, 830, 130), respectively. The experimental results are shown in Table 11 and Table 12, in which the path costs of the ChOA algorithm are all higher than those of the MISAO algorithm. This indicates that a single cost index cannot comprehensively represent the effect of UAV path planning; only a comprehensive cost index can fully reveal the role of each algorithm. The final results show that the MISAO algorithm has the lowest total cost in UAV planning and ranks first among all the compared algorithms, which further proves the stability of the proposed algorithm in path planning and its significant advantage in cost reduction, demonstrating the strong competitiveness of MISAO in UAV path planning. Meanwhile, Figure 12 shows the 3D planned path maps and floor plans generated by each rival algorithm starting at (150, 150, 50) and ending at (900, 720, 150). The 3D paths generated by MISAO are visually smoother and shorter, showing its effectiveness in achieving smooth flight paths and further validating the great application outlook of MISAO in solving the 3D multi-UAV path planning problem and its importance in practical applications. These experimental results reveal the performance differences of the approaches in the UAV path planning task; the MISAO algorithm in particular performs excellently on all the cost indexes, ensuring the efficient operation of UAVs in complex environments. In summary, the MISAO algorithm performs well in cost control and provides smooth, optimized paths, offering a credible solution for the application of multi-UAV systems.

6.3. Managerial Insights

In UAV path planning, the choice of algorithms and models is crucial for achieving optimal flight efficiency, and the proposed MISAO algorithm is an excellent tool for minimizing flight costs. By applying MISAO, decision-makers can ensure that UAVs perform their missions both efficiently and safely; it helps them make an informed judgement on technology selection and enhances the efficiency and reliability of UAVs in real-world applications. MISAO builds on the traditional SAO algorithm and combines four improvement strategies to overcome SAO's inherent problems, such as a slow convergence rate and a tendency to become trapped in local optima, ultimately achieving a significant improvement in algorithm performance. To verify the performance advantages of the proposed algorithm, MISAO and other classical algorithms were compared on the CEC2005 test set, the validity of each improvement was analyzed in depth and a non-parametric statistical test demonstrated the superiority and validity of the proposed algorithm, all of which supports a sensible judgement on the choice of technology.
In the 2D UAV path planning problem, we first determine the UAV's flight region and its geographic features by modeling the mountain topography and obstacles. Next, when considering the constraints, we introduce three key cost functions: the length of the UAV flight path, the height limit and the climb-angle threat. These cost functions reflect the impact of the different constraints on path planning. When modeling the costs, we recognize that each constraint contributes a different weight to the total cost, so a weighting approach is used to integrate the cost functions into one total cost function. This weighting effectively reflects the importance of the different constraints and ensures that they are balanced during the path planning process. Finally, after considering the impact of each cost function, we obtain an optimized total cost function, which not only helps to identify the optimal path but also ensures that the UAV can safely and efficiently avoid obstacles and detrimental terrain during flight, thus achieving the set mission objectives.
In the 3D UAV path planning problem, we similarly begin by modeling the mountain terrain and obstacles in order to accurately describe the environment in which the UAV will fly. When considering the constraints, we introduce four main cost functions: path cost, height cost, threat cost and angle cost. Among them, the angle cost is treated more comprehensively, covering both the climb angle and the turn angle, so as to better reflect the challenges that UAVs may face during flight. In constructing the final total cost function, we recognize that each constraint affects path planning to a different degree; therefore, weighting factors are introduced to ensure that the model realistically reflects the path planning needs of UAVs in complex environments. The settings of these weighting factors not only take into account the importance of each cost function but are also based on feedback from actual flight conditions, ensuring that the UAV can respond effectively to a variety of environmental factors during its mission.

7. Conclusions

In this article, we propose a multi-strategy improved snow ablation optimizer, MISAO. Firstly, a tent chaotic mapping fused with an elite reverse learning strategy increases population diversity by producing uniform initial positions that traverse the exploration space better; secondly, a greedy selection rule is adopted to augment the robustness of the method; then, the HHO position-updating formula is introduced to enhance exploitation during the development stage, which helps the algorithm escape local optima; finally, the RTH strategy is introduced to balance global exploration and local development, enabling the algorithm to approach the optimal solution faster and thus obtain the global best solution. To verify the superiority of MISAO, performance comparison experiments and strategy validation experiments were carried out on the CEC2005 test set, and the results demonstrate its promising performance. Finally, to verify the algorithm's usefulness in practical application scenarios, it was modeled in a realistic environment and applied to three-dimensional single-UAV and multi-UAV route planning; the simulation results demonstrate that MISAO has prominent strengths in tackling the 3D route planning problem of UAVs, producing high-quality flight paths with the lowest cost.
Although MISAO has been shown to perform excellently in seeking optima and has been applied to the UAV path planning project in this paper, its performance can still be strengthened by combining it with other effective strategies and algorithms to explore improvements suited to different application scenarios. In future research, more realistic areas need to be investigated, including fault diagnosis, medical assistance, predictive research and many other fields.

Author Contributions

Conceptualization, S.L. and X.L.; methodology, C.Z.; software, C.X.; validation, C.Z., C.X. and P.Y.; formal analysis, P.Y.; investigation, C.Z.; resources, S.L.; data curation, X.L.; writing—original draft preparation, C.Z.; writing—review and editing, C.Z.; visualization, C.X.; supervision, S.L.; project administration, P.Y.; funding acquisition, S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Guizhou Science and Technology Plan Project (Grant No. QKHZYD [2023]002) and the Guiyang Science and Technology Platform Construction Project (Grant No. ZKHT [2023]7-2).

Data Availability Statement

Data are contained within the article.

Acknowledgments

The support of Guizhou University, China, is gratefully acknowledged.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hu, G.; Zhong, J.; Wei, G. SaCHBA_PDN: Modified Honey Badger Algorithm with Multi-Strategy for UAV Path Planning. Expert Syst. Appl. 2023, 223, 119941. [Google Scholar] [CrossRef]
  2. Hoeffmann, M.; Patel, S.; Bueskens, C. Optimal Guidance Track Generation for Precision Agriculture: A Review of Coverage Path Planning Techniques. J. Field Robot. 2024, 41, 823–844. [Google Scholar] [CrossRef]
  3. Unmanned Aerial Vehicles Market Size & Share Analysis—Industry Research Report—Growth Trends. Available online: https://www.mordorintelligence.com/industry-reports/uav-market (accessed on 7 September 2024).
  4. Das, P.K.; Behera, H.S.; Panigrahi, B.K. A Hybridization of an Improved Particle Swarm Optimization and Gravitational Search Algorithm for Multi-Robot Path Planning. Swarm Evol. Comput. 2016, 28, 14–28. [Google Scholar] [CrossRef]
  5. Chen, J.; Ling, F.; Zhang, Y.; You, T.; Liu, Y.; Du, X. Coverage Path Planning of Heterogeneous Unmanned Aerial Vehicles Based on Ant Colony System. Swarm Evol. Comput. 2022, 69, 101005. [Google Scholar] [CrossRef]
  6. Wang, W.; Deng, H.; Wu, X. Path Planning of Loaded Pin-Jointed Bar Mechanisms Using Rapidly-Exploring Random Tree Method. Comput. Struct. 2018, 209, 65–73. [Google Scholar] [CrossRef]
  7. Ravankar, A.A.; Ravankar, A.; Emaru, T.; Kobayashi, Y. HPPRM: Hybrid Potential Based Probabilistic Roadmap Algorithm for Improved Dynamic Path Planning of Mobile Robots. IEEE Access 2020, 8, 221743–221766. [Google Scholar] [CrossRef]
  8. Liu, L.; Wang, B.; Xu, H. Research on Path-Planning Algorithm Integrating Optimization A-Star Algorithm and Artificial Potential Field Method. Electronics 2022, 11, 3660. [Google Scholar] [CrossRef]
  9. Hosseini, E.; Al-Ghaili, A.M.; Kadir, D.H.; Gunasekaran, S.S.; Ahmed, A.N.; Jamil, N.; Deveci, M.; Razali, R.A. Meta-Heuristics and Deep Learning for Energy Applications: Review and Open Research Challenges (2018–2023). Energy Strateg. Rev. 2024, 53, 101409. [Google Scholar] [CrossRef]
  10. Wu, Y. A Survey on Population-Based Meta-Heuristic Algorithms for Motion Planning of Aircraft. Swarm Evol. Comput. 2021, 62, 100844. [Google Scholar] [CrossRef]
  11. Bai, J.; Li, Y.; Zheng, M.; Khatir, S.; Benaissa, B.; Abualigah, L.; Wahab, M.A. A Sinh Cosh Optimizer. Knowl.-Based Syst. 2023, 282, 111081. [Google Scholar] [CrossRef]
  12. Lian, J.; Hui, G.; Ma, L.; Zhu, T.; Wu, X.; Heidari, A.A.; Chen, Y.; Chen, H. Parrot Optimizer: Algorithm and Applications to Medical Problems. Comput. Biol. Med. 2024, 172, 108064. [Google Scholar] [CrossRef] [PubMed]
  13. Zhao, S.; Zhang, T.; Cai, L.; Yang, R. Triangulation Topology Aggregation Optimizer: A Novel Mathematics-Based Meta-Heuristic Algorithm for Continuous Optimization and Engineering Applications. Expert Syst. Appl. 2024, 238, 121744. [Google Scholar] [CrossRef]
  14. Hu, G.; Guo, Y.; Wei, G.; Abualigah, L. Genghis Khan Shark Optimizer: A Novel Nature-Inspired Algorithm for Engineering Optimization. Adv. Eng. Inform. 2023, 58, 102210. [Google Scholar] [CrossRef]
  15. Tondut, J.; Ollier, C.; Di Cesare, N.; Roux, J.C.; Ronel, S. An Automatic Kriging Machine Learning Method to Calibrate Meta-Heuristic Algorithms for Solving Optimization Problems. Eng. Appl. Artif. Intell. 2022, 113, 104940. [Google Scholar] [CrossRef]
  16. Li, J.; Chen, W.; Han, K.; Wang, Q. Fault Diagnosis of Rolling Bearing Based on GA-VMD and Improved WOA-LSSVM. IEEE Access 2020, 8, 166753–166767. [Google Scholar] [CrossRef]
  17. Li, F.; Kim, Y.-C.; Xu, B. Non-Standard Map Robot Path Planning Approach Based on Ant Colony Algorithms. Sensors 2023, 23, 7502. [Google Scholar] [CrossRef]
  18. Zhang, S.; Zhang, N.; Zhang, Z.; Chen, Y. Electric Power Load Forecasting Method Based on a Support Vector Machine Optimized by the Improved Seagull Optimization Algorithm. Energies 2022, 15, 9197. [Google Scholar] [CrossRef]
  19. Wang, C.; Tu, C.; Wei, S.; Yan, L.; Wei, F. MSWOA: A Mixed-Strategy-Based Improved Whale Optimization Algorithm for Multilevel Thresholding Image Segmentation. Electronics 2023, 12, 2698. [Google Scholar] [CrossRef]
  20. Wang, Y.; Zhang, L. Improved Multi-Objective Particle Swarm Optimization Algorithm Based on Area Division with Application in Multi-UAV Task Assignment. IEEE Access 2023, 11, 123519–123530. [Google Scholar] [CrossRef]
  21. Nadimi-Shahraki, M.H.; Zamani, H.; Mirjalili, S. Enhanced Whale Optimization Algorithm for Medical Feature Selection: A COVID-19 Case Study. Comput. Biol. Med. 2022, 148, 105858. [Google Scholar] [CrossRef]
  22. Wu, Z.-Q.; Liu, C.-Y.; Zhao, D.-L.; Wang, Y.-Q. Parameter Identification of Photovoltaic Cell Model Based on Improved Elephant Herding Optimization Algorithm. Soft Comput. 2023, 27, 5797–5811. [Google Scholar] [CrossRef]
  23. Choudhury, H.A.; Sinha, N.; Saikia, M. Nature Inspired Algorithms (NIA) for Efficient Video Compression—A Brief Study. Eng. Sci. Technol. 2020, 23, 507–526. [Google Scholar] [CrossRef]
  24. Dewangan, R.K.; Shukla, A.; Godfrey, W.W. Three Dimensional Path Planning Using Grey Wolf Optimizer for UAVs. Appl. Intell. 2019, 49, 2201–2217. [Google Scholar] [CrossRef]
  25. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  26. Ni, X.; Hu, W.; Fan, Q.; Cui, Y.; Qi, C. A Q-Learning Based Multi-Strategy Integrated Artificial Bee Colony Algorithm with Application in Unmanned Vehicle Path Planning. Expert Syst. Appl. 2024, 236, 121303. [Google Scholar] [CrossRef]
  27. Li, X.; Zhang, S.; Shao, P. Discrete Artificial Bee Colony Algorithm with Fixed Neighborhood Search for Traveling Salesman Problem. Eng. Appl. Artif. Intell. 2024, 131, 107816. [Google Scholar] [CrossRef]
  28. Ait-Saadi, A.; Meraihi, Y.; Soukane, A.; Yahia, S.; Ramdane-Cherif, A.; Gabis, A.B. An Enhanced African Vulture Optimization Algorithm for Solving the Unmanned Aerial Vehicles Path Planning Problem. Comput. Electr. Eng. 2023, 110, 108802. [Google Scholar] [CrossRef]
  29. Deng, L.; Liu, S. Snow Ablation Optimizer: A Novel Metaheuristic Technique for Numerical Optimization and Engineering Design. Expert Syst. Appl. 2023, 225, 120069. [Google Scholar] [CrossRef]
  30. Xiao, Y.; Cui, H.; Hussien, A.G.; Hashim, F.A. MSAO: A Multi-Strategy Boosted Snow Ablation Optimizer for Global Optimization and Real-World Engineering Applications. Adv. Eng. Inform. 2024, 61, 102464. [Google Scholar] [CrossRef]
  31. Jia, H.; You, F.; Wu, D.; Rao, H.; Wu, H.; Abualigah, L. Improved Snow Ablation Optimizer with Heat Transfer and Condensation Strategy for Global Optimization Problem. J. Comput. Des. Eng. 2023, 10, 2177–2199. [Google Scholar] [CrossRef]
  32. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris Hawks Optimization: Algorithm and Applications. Futur. Gener. Comp. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  33. Ferahtia, S.; Houari, A.; Rezk, H.; Djerioui, A.; Machmoum, M.; Motahhir, S.; Ait-Ahmed, M. Red-Tailed Hawk Algorithm for Numerical Optimization and Real-World Problems. Sci. Rep. 2023, 13, 12950. [Google Scholar] [CrossRef] [PubMed]
  34. Tawhid, M.A.; Ali, A.F. Multidirectional Harmony Search Algorithm for Solving Integer Programming and Minimax Problems. Int. J. Bio-Inspired Comput. 2019, 13, 141–158. [Google Scholar] [CrossRef]
  35. Kazemzadeh Azad, S.; Akis, E. Cost Efficient Design of Mechanically Stabilized Earth Walls Using Adaptive Dimensional Search Algorithm. Tek. Dergi 2020, 31, 10167–10188. [Google Scholar] [CrossRef]
  36. Ge, Y.; Wang, A.; Zhao, Z.; Ye, J. A Tabu-Genetic Hybrid Search Algorithm for Job-Shop Scheduling Problem. In Proceedings of the 3rd International Conference on Power, Energy and Mechanical Engineering (ICPEME 2019), Prague, Czech Republic, 16–19 February 2019; Agarwal, R.K., Ed.; EDP Sciences: Les Ulis, France, 2019; Volume 95, p. 04007. [Google Scholar]
  37. Faramarzi-Oghani, S.; Neghabadi, P.D.; Talbi, E.-G.; Tavakkoli-Moghaddam, R. Meta-Heuristics for Sustainable Supply Chain Management: A Review. Int. J. Prod. Res. 2023, 61, 1979–2009. [Google Scholar] [CrossRef]
  38. Holland, J.H. Genetic Algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  39. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  40. Koza, J.R.; Bennett, F.H.; Andre, D.; Keane, M.A.; Dunlap, F. Automated Synthesis of Analog Electrical Circuits by Means of Genetic Programming. IEEE Trans. Evol. Comput. 1997, 1, 109–128. [Google Scholar] [CrossRef]
  41. Rechenberg, I. Evolution Strategy: Nature’s Way of Optimization. In Proceedings of the Optimization: Methods and Applications, Possibilities and Limitations; Bergmann, H.W., Ed.; Springer: Berlin/Heidelberg, Germany, 1989; pp. 106–126. [Google Scholar]
  42. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  43. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A Nature-Inspired Algorithm for Global Optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  44. Zhao, W.; Wang, L.; Zhang, Z. Atom Search Optimization and Its Application to Solve a Hydrogeologic Parameter Estimation Problem. Knowl.-Based Syst. 2019, 163, 283–304. [Google Scholar] [CrossRef]
  45. Su, H.; Zhao, D.; Heidari, A.A.; Liu, L.; Zhang, X.; Mafarja, M.; Chen, H. RIME: A Physics-Based Optimization. Neurocomputing 2023, 532, 183–214. [Google Scholar] [CrossRef]
  46. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  47. Xue, J.; Shen, B. Dung Beetle Optimizer: A New Meta-Heuristic Algorithm for Global Optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
  48. Miao, C.; Chen, G.; Yan, C.; Wu, Y. Path Planning Optimization of Indoor Mobile Robot Based on Adaptive Ant Colony Algorithm. Comput. Ind. Eng. 2021, 156, 107230. [Google Scholar] [CrossRef]
  49. Lin, S.; Liu, A.; Wang, J.; Kong, X. An Intelligence-Based Hybrid PSO-SA for Mobile Robot Path Planning in Warehouse. J. Comput. Sci. 2023, 67, 101938. [Google Scholar] [CrossRef]
  50. Wu, Y.; Nie, M.; Ma, X.; Guo, Y.; Liu, X. Co-Evolutionary Algorithm-Based Multi-Unmanned Aerial Vehicle Cooperative Path Planning. Drones 2023, 7, 606. [Google Scholar] [CrossRef]
  51. Zhang, J.; Zhu, X.; Li, J. Intelligent Path Planning with an Improved Sparrow Search Algorithm for Workshop UAV Inspection. Sensors 2024, 24, 1104. [Google Scholar] [CrossRef]
  52. Malheiros-Silveira, G.N.; Delalibera, F.G. Inverse Design of Photonic Structures Using an Artificial Bee Colony Algorithm. Appl. Opt. 2020, 59, 4171–4175. [Google Scholar] [CrossRef]
  53. Elymany, M.M.; Enany, M.A.; Elsonbaty, N.A. Hybrid Optimized-ANFIS Based MPPT for Hybrid Microgrid Using Zebra Optimization Algorithm and Artificial Gorilla Troops Optimizer. Energy Conv. Manag. 2024, 299, 117809. [Google Scholar] [CrossRef]
  54. Trojovska, E.; Dehghani, M.; Trojovsky, P. Zebra Optimization Algorithm: A New Bio-Inspired Optimization Algorithm for Solving Optimization Algorithm. IEEE Access 2022, 10, 49445–49473. [Google Scholar] [CrossRef]
  55. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. Artificial Gorilla Troops Optimizer: A New Nature-Inspired Metaheuristic Algorithm for Global Optimization Problems. Int. J. Intell. Syst. 2021, 36, 5887–5958. [Google Scholar] [CrossRef]
  56. Cao, Z.; Li, J.; Fu, Y.; Wang, Z.; Jia, H.; Tian, F. An Adaptive Biogeography-Based Optimization with Cumulative Covariance Matrix for Rule-Based Network Intrusion Detection. Swarm Evol. Comput. 2022, 75, 101199. [Google Scholar] [CrossRef]
  57. Rezk, H.; Mohamed, M.A.; Diab, A.A.Z.; Kanagaraj, N. Load Frequency Control of Multi-Interconnected Renewable Energy Plants Using Multi-Verse Optimizer. Comput. Syst. Sci. Eng. 2021, 37, 219–231. [Google Scholar] [CrossRef]
  58. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An Improved Grey Wolf Optimizer for Solving Engineering Problems. Expert Syst. Appl. 2021, 166, 113917. [Google Scholar] [CrossRef]
  59. Wu, Q.; Xia, X.; Song, H.; Zeng, H.; Xu, X.; Zhang, Y.; Yu, F.; Wu, H. A Neighborhood Comprehensive Learning Particle Swarm Optimization for the Vehicle Routing Problem with Time Windows. Swarm Evol. Comput. 2024, 84, 101425. [Google Scholar] [CrossRef]
  60. Huang, C.; Zhou, X.; Ran, X.; Wang, J.; Chen, H.; Deng, W. Adaptive Cylinder Vector Particle Swarm Optimization with Differential Evolution for UAV Path Planning. Eng. Appl. Artif. Intell. 2023, 121, 105942. [Google Scholar] [CrossRef]
  61. Zhang, R.; Li, X.; Ren, H.; Ding, Y.; Meng, Y.; Xia, Q. UAV Flight Path Planning Based on Multi-Strategy Improved White Sharks Optimization. IEEE Access 2023, 11, 88462–88475. [Google Scholar] [CrossRef]
  62. Zhang, W.; Peng, C.; Yuan, Y.; Cui, J.; Qi, L. A Novel Multi-Objective Evolutionary Algorithm with a Two-Fold Constraint-Handling Mechanism for Multiple UAV Path Planning. Expert Syst. Appl. 2024, 238, 121862. [Google Scholar] [CrossRef]
  63. Lyu, L.; Jiang, H.; Yang, F. Improved Dung Beetle Optimizer Algorithm With Multi-Strategy for Global Optimization and UAV 3D Path Planning. IEEE Access 2024, 12, 69240–69257. [Google Scholar] [CrossRef]
  64. Su, Y.; Dai, Y.; Liu, Y. A Hybrid Hyper-Heuristic Whale Optimization Algorithm for Reusable Launch Vehicle Reentry Trajectory Optimization. Aerosp. Sci. Technol. 2021, 119, 107200. [Google Scholar] [CrossRef]
  65. Sadiq, A.S.; Dehkordi, A.A.; Mirjalili, S.; Pham, Q.-V. Nonlinear Marine Predator Algorithm: A Cost-Effective Optimizer for Fair Power Allocation in NOMA-VLC-B5G Networks. Expert Syst. Appl. 2022, 203, 117395. [Google Scholar] [CrossRef]
  66. Wu, R.; Huang, H.; Wei, J.; Ma, C.; Zhu, Y.; Chen, Y.; Fan, Q. An Improved Sparrow Search Algorithm Based on Quantum Computations and Multi-Strategy Enhancement. Expert Syst. Appl. 2023, 215, 119421. [Google Scholar] [CrossRef]
  67. Ahmed, R.; Rangaiah, G.P.; Mahadzir, S.; Mirjalili, S.; Hassan, M.H.; Kamel, S. Memory, Evolutionary Operator, and Local Search Based Improved Grey Wolf Optimizer with Linear Population Size Reduction Technique. Knowl.-Based Syst. 2023, 264, 110297. [Google Scholar] [CrossRef]
  68. Abualigah, L.; Diabat, A.; Svetinovic, D.; Abd Elaziz, M. Boosted Harris Hawks Gravitational Force Algorithm for Global Optimization and Industrial Engineering Problems. J. Intell. Manuf. 2023, 34, 2693–2728. [Google Scholar] [CrossRef]
  69. Fu, Y.; Liu, D.; Fu, S.; Chen, J.; He, L. Enhanced Aquila Optimizer Based on Tent Chaotic Mapping and New Rules. Sci. Rep. 2024, 14, 3013. [Google Scholar] [CrossRef]
  70. Ma, B.; Lu, P.; Zhang, L.; Qi, Q.; Chen, Y.; Hu, Y.; Wang, M.; Wang, G. CMSRAS: A Novel Chaotic Multi-Specular Reflection Optimization Algorithm Considering Shared Nodes. IEEE Access 2021, 9, 43050–43094. [Google Scholar] [CrossRef]
  71. Chen, W.; Peng, B.; Schoenebeck, G.; Tao, B. Adaptive Greedy versus Non-Adaptive Greedy for Influence Maximization. J. Artif. Intell. Res. 2022, 74, 303–351. [Google Scholar] [CrossRef]
  72. Petr, J.; Portier, J.; Versteegen, L. A Faster Algorithm for Cops and Robbers. Discret. Appl. Math. 2022, 320, 11–14. [Google Scholar] [CrossRef]
  73. Pelusi, D.; Mascella, R.; Tallini, L.; Nayak, J.; Naik, B.; Abraham, A. Neural Network and Fuzzy System for the Tuning of Gravitational Search Algorithm Parameters. Expert Syst. Appl. 2018, 102, 234–244. [Google Scholar] [CrossRef]
  74. Taheri, A.; RahimiZadeh, K.; Beheshti, A.; Baumbach, J.; Rao, R.V.; Mirjalili, S.; Gandomi, A.H. Partial Reinforcement Optimizer: An Evolutionary Optimization Algorithm. Expert Syst. Appl. 2024, 238, 122070. [Google Scholar] [CrossRef]
  75. GhaemiDizaji, M.; Dadkhah, C.; Leung, H. OHDA: An Opposition Based High Dimensional Optimization Algorithm. Appl. Soft. Comput. 2020, 91, 106185. [Google Scholar] [CrossRef]
  76. Kumari, C.L.; Kamboj, V.K.; Bath, S.K.; Tripathi, S.L.; Khatri, M.; Sehgal, S. A Boosted Chimp Optimizer for Numerical and Engineering Design Optimization Challenges. Eng. Comput. 2023, 39, 2463–2514. [Google Scholar] [CrossRef] [PubMed]
  77. Khishe, M.; Mosavi, M.R. Chimp Optimization Algorithm. Expert Syst. Appl. 2020, 149, 113338. [Google Scholar] [CrossRef]
  78. Jiang, J.-J.; Wei, W.-X.; Shao, W.-L.; Liang, Y.-F.; Qu, Y.-Y. Research on Large-Scale Bi-Level Particle Swarm Optimization Algorithm. IEEE Access 2021, 9, 56364–56375. [Google Scholar] [CrossRef]
  79. Chopra, N.; Ansari, M.M. Golden Jackal Optimization: A Novel Nature-Inspired Optimizer for Engineering Applications. Expert Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
  80. Xue, J.; Shen, B. A Novel Swarm Intelligence Optimization Approach: Sparrow Search Algorithm. Syst. Sci. Control. Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  81. Derrac, J.; Garcia, S.; Molina, D.; Herrera, F. A Practical Tutorial on the Use of Nonparametric Statistical Tests as a Methodology for Comparing Evolutionary and Swarm Intelligence Algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  82. Saracci, R. The Signed-Rank (Wilcoxon) Test. Lancet 1969, 1, 416–417. [Google Scholar] [CrossRef]
  83. Du, N.; Zhou, Y.; Deng, W.; Luo, Q. Improved Chimp Optimization Algorithm for Three-Dimensional Path Planning Problem. Multimed. Tools Appl. 2022, 81, 27397–27422. [Google Scholar] [CrossRef]
Figure 1. Taxonomy of meta-heuristic algorithms and representative algorithms.
Figure 2. Melting and sublimation process of SAO algorithm.
Figure 3. Schematic diagram of the cross term.
Figure 4. Distribution of tent chaos mapping.
Figure 5. MISAO flowchart.
Figure 6. Convergence process of different algorithms.
Figure 7. Convergence process of different strategies.
Figure 8. Runtime line chart.
Figure 9. Friedman cumulative value.
Figure 10. Terrain environment with mountain peaks.
Figure 11. Path planning diagram of different algorithms.
Figure 12. 3D and plan view of path planning with different algorithms.
Table 1. Differences and links between this study and existing studies.

| Reference | Specificities | Technology | Application Scenario | Research Focus |
| [48] | Around the path planning problem, each study aims to improve the efficiency of path planning, reduce the consumption of time and resources and improve the overall operational performance. All use improved algorithms | IAACO | Indoor mobile robots | Focus on algorithm improvement for path planning efficiency |
| [49] | | PSO-SA | Commercial and industrial operations and maintenance | Consumption and time costs in specific applications |
| [50] | | CCEA-CPP | Collaborative multi-UAS operations | Improvements to the efficiency of path planning |
| [51] | | CFSSA | Intelligent workshop inspection | Strategies that emphasize co-evolution |
| [59] | Path planning and optimization problems for UAVs, addressing challenges in path planning, improving path feasibility and optimization performance and proposing new algorithms or improvements | N-CLPSO | Vehicle path optimization | Route optimization and efficiency improvements in searches |
| [60] | | ACVDEPSO | UAV path planning in 3D environments | Route optimization and efficiency improvements in searches |
| [61] | | MSWSO | Emphasis on multi-strategy optimization of UAV flight paths | Multi-strategy and multi-objective optimization |
| [62] | | Multi-objective inflation algorithm | Focus on multi-objective optimization to address the limitations of integrated UAVs | Multi-strategy and multi-objective optimization |
| [63] | | IDBO | 3D path planning problem | Emphasis on validation of effectiveness in practical applications |
| Our study | | MISAO | Focus on 2D and 3D drone path planning | Both single drone and drone integration issues cover a wide range of topics |
Table 2. Twenty-three benchmark functions.

| Function Type | Displayed Formula | Dimensions | Search Range | Optimum Value |
| Unimodal functions | $F_1(x)=\sum_{i=1}^{n} x_i^2$ | 30 | [−100, 100] | 0 |
| | $F_2(x)=\sum_{i=1}^{n}|x_i|+\prod_{i=1}^{n}|x_i|$ | 30 | [−10, 10] | 0 |
| | $F_3(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{i}x_j\right)^2$ | 30 | [−100, 100] | 0 |
| | $F_4(x)=\max_i\{|x_i|,\,1\le i\le n\}$ | 30 | [−100, 100] | 0 |
| | $F_5(x)=\sum_{i=1}^{n-1}\left[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\right]$ | 30 | [−30, 30] | 0 |
| | $F_6(x)=\sum_{i=1}^{n}([x_i+0.5])^2$ | 30 | [−100, 100] | 0 |
| | $F_7(x)=\sum_{i=1}^{n} i x_i^4+\mathrm{random}[0,1)$ | 30 | [−1.28, 1.28] | 0 |
| Multimodal functions | $F_8(x)=\sum_{i=1}^{n}-x_i\sin\left(\sqrt{|x_i|}\right)$ | 30 | [−500, 500] | −12,569.5 |
| | $F_9(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | 30 | [−5.12, 5.12] | 0 |
| | $F_{10}(x)=-20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+e$ | 30 | [−32, 32] | 0 |
| | $F_{11}(x)=\frac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right)+1$ | 30 | [−600, 600] | 0 |
| | $F_{12}(x)=\frac{\pi}{n}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_n-1)^2\right\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, where $y_i=1+\frac{x_i+1}{4}$ and $u(x_i,a,k,m)=\begin{cases}k(x_i-a)^m, & x_i>a\\ 0, & -a<x_i<a\\ k(-x_i-a)^m, & x_i<-a\end{cases}$ | 30 | [−50, 50] | 0 |
| | $F_{13}(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{n-1}(x_i-1)^2\left[1+\sin^2(3\pi x_{i+1})\right]+(x_n-1)^2\left[1+\sin^2(2\pi x_n)\right]\right\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | 30 | [−50, 50] | 0 |
| Fixed dimension functions | $F_{14}(x)=\left(\frac{1}{500}+\sum_{j=1}^{25}\frac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\right)^{-1}$ | 2 | [−65.536, 65.536] | 1 |
| | $F_{15}(x)=\sum_{i=1}^{11}\left[a_i-\frac{x_1(b_i^2+b_i x_2)}{b_i^2+b_i x_3+x_4}\right]^2$ | 4 | [−5, 5] | 0.0003075 |
| | $F_{16}(x)=4x_1^2-2.1x_1^4+\frac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316285 |
| | $F_{17}(x)=\left(x_2-\frac{5.1}{4\pi^2}x_1^2+\frac{5}{\pi}x_1-6\right)^2+10\left(1-\frac{1}{8\pi}\right)\cos x_1+10$ | 2 | [−5, 10] × [0, 15] | 0.398 |
| | $F_{18}(x)=\left[1+(x_1+x_2+1)^2(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2)\right]\times\left[30+(2x_1-3x_2)^2(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2)\right]$ | 2 | [−2, 2] | 3 |
| | $F_{19}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\right)$ | 3 | [0, 1] | −3.863 |
| | $F_{20}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\right)$ | 6 | [0, 1] | −3.322 |
| | $F_{21}(x)=-\sum_{i=1}^{5}\left[(x-a_i)(x-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.153 |
| | $F_{22}(x)=-\sum_{i=1}^{7}\left[(x-a_i)(x-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.403 |
| | $F_{23}(x)=-\sum_{i=1}^{10}\left[(x-a_i)(x-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.536 |
Table 3. Algorithm parameter settings.

| Algorithm | Reference | Year | Parameters | Value |
| GA | [38] | 1992 | pc, pm | 0.8, 0.05 |
| MVO | [43] | 2015 | WEP_Max, WEP_Min, r1, r2, r3 | 1, 0.2, [0, 1], [0, 1], [0, 1] |
| RIME | [45] | 2023 | w, r1, r2 | 5, [0, 1], [0, 1] |
| WOA | [46] | 2016 | A, C, b, l, a, a2, r1, r2, p | 2 × a × r1 − a, 2 × r2, 1, [−1, 1], [0, 2], [−1, −2], [0, 1], [0, 1], [0, 1] |
| ChOA | [77] | 2023 | f, r1, r2, a, m | [0, 2.5], [0, 1], [0, 1], [−2f, 2f], chaos(3) |
| PSO | [78] | 1995 | Vmax, wMax, wMin, c1, c2 | 6, 0.9, 0.6, 2, 2 |
| GJO | [79] | 2022 | RL, c1, E0, E1, u, v, β | 0.05 × levy(y), 1.5, [−1, 1], [0, 1.5], [0, 1], [0, 1], 1.5 |
| SSA | [80] | 2020 | ST, c2, c3 | 0.5, [0, 1], [0, 1] |
| SAO | [29] | 2023 | k, r1, r2 | 1, [0, 1], [−1, 1] |
| MISAO | | 2023, 2023, 2019 | k, r1, r2, A, R0, r, E0, q | 1, [0, 1], [−1, 1], 15, 0.5, 1.5, [−1, 1], [0, 1] |
Table 4. Experimental results of different algorithms.
FunctionIndexRIMEPSOChOAGJOMVOGASSAWOASAOMISAO
F1Best8.450 × 10−17.436 × 10−15.571 × 10−111.660 × 10−577.045 × 10−11.030 × 1042.933 × 10−85.469 × 10−844.457 × 10−46.045 × 10−179
Ave2.285 × 1002.388 × 1002.121 × 10−71.293 × 10−541.244 × 10+02.556 × 1042.578 × 10−71.030 × 10−724.991 × 10−32.417 × 10−147
Sta1.093 × 1001.297 × 1003.017 × 10−73.774 × 10−543.280 × 10−19.594 × 1034.233 × 10−73.804 × 10−726.367 × 10−31.271 × 10−146
F2Best5.415 × 10−12.245 × 1001.491 × 10−157.722 × 10−344.849 × 10−14.287 × 1013.065 × 10−18.205 × 10−581.724 × 10−39.373 × 10−93
Ave1.274 × 1004.450 × 1002.568 × 10−62.067 × 10−321.192 × 1005.767 × 1011.939 × 1002.254 × 10−509.423 × 10−31.327 × 10−77
Sta1.270 × 1001.151 × 1003.147 × 10−62.167 × 10−321.717 × 1001.021 × 1011.571 × 1001.039 × 10−495.684 × 10−36.232 × 10−77
F3Best6.034 × 1029.929 × 1014.083 × 10−31.920 × 10−225.846 × 1013.503 × 1043.742 × 1021.633 × 1041.044 × 1031.555 × 10−168
Ave1.515 × 1031.816 × 1024.910 × 1011.143 × 10−162.151 × 1025.293 × 1041.441 × 1034.541 × 1043.854 × 1035.002 × 10−139
Sta5.431 × 1025.422 × 1011.406 × 1026.144 × 10−167.995 × 1011.247 × 1046.777 × 1021.529 × 1043.243 × 1032.740 × 10−138
F4Best3.081 × 1001.126 × 1004.933 × 10−41.538 × 10−189.339 × 10−15.605 × 1015.822 × 1005.157 × 10−26.576 × 10−15.090 × 10−85
Ave6.979 × 10+01.874 × 1001.149 × 10−18.643 × 10−162.166 × 1007.217 × 1011.164 × 1015.038 × 1014.575 × 1001.301 × 10−70
Sta2.698 × 1002.387 × 10−11.418 × 10−11.627 × 10−159.297 × 10−18.429 × 1003.299 × 1002.656 × 1011.826 × 1007.098 × 10−70
F5Best9.998 × 1013.934 × 1022.869 × 1012.623 × 1013.851 × 1012.445 × 1062.401 × 1012.722 × 1012.573 × 1013.450 × 10−2
Ave7.233 × 1028.926 × 1022.887 × 1012.777 × 1015.778 × 1023.338 × 1072.482 × 1022.804 × 1016.871 × 1011.806 × 101
Sta9.399 × 1023.549 × 1029.470 × 10−26.895 × 10−18.537 × 1023.082 × 1073.510 × 1024.236 × 10−17.161 × 1011.190 × 101
F6Best1.123 × 1007.706 × 10−13.273 × 1001.680 × 1007.779 × 10−11.093 × 1043.103 × 10−81.112 × 10−11.861 × 10−42.827 × 10−6
Ave2.246 × 10+02.249 × 10+03.867 × 10+02.599 × 10+01.324 × 10+02.268 × 10+42.439 × 10−73.771 × 10−18.023 × 10−38.436 × 10−3
Sta1.147 × 1001.344 × 1003.899 × 10−16.121 × 10−12.803 × 10−17.817 × 1034.725 × 10−72.010 × 10−11.647 × 10−24.554 × 10−2
F7Best1.426 × 10−23.403 × 1004.098 × 10−51.068 × 10−41.397 × 10−24.250 × 1007.513 × 10−28.006 × 10−52.631 × 10−25.184 × 10−6
Ave3.415 × 10−21.740 × 1011.790 × 10−35.175 × 10−43.608 × 10−21.513 × 1011.762 × 10−14.455 × 10−36.022 × 10−28.804 × 10−5
Sta1.378 × 10−21.054 × 10+12.185 × 10−33.603 × 10−41.456 × 10−21.061 × 1017.407 × 10−23.494 × 10−31.825 × 10−21.052 × 10−4
F8Best−1.077 × 104−8.520 × 103−5.816 × 103−6.425 × 103−9.592 × 103−3.943 × 103−8.892 × 103−1.257 × 104−1.078 × 104−1.257 × 104
Ave−1.009 × 104−6.413 × 103−5.694 × 103−4.316 × 103−7.539 × 103−2.169 × 103−7.529 × 103−1.031 × 104−9.546 × 103−1.257 × 104
Sta3.799 × 1021.262 × 1035.205 × 1011.094 × 1037.900 × 1026.000 × 1027.714 × 1021.844 × 1036.241 × 1023.936 × 10−2
F9Best4.126 × 1019.739 × 1016.558 × 10−90.000 × 1005.360 × 1011.427 × 1021.890 × 1010.000 × 1002.234 × 1010.000 × 100
Ave6.998 × 1011.656 × 1024.130 × 1000.000 × 1001.179 × 1022.628 × 1025.489 × 1010.000 × 1006.225 × 1010.000 × 100
Sta2.011 × 1013.373 × 1016.301 × 1000.000 × 1003.542 × 1015.497 × 1012.324 × 1010.000 × 1002.680 × 1010.000 × 100
F10Best1.010 × 1001.758 × 1001.996 × 1013.997 × 10−156.157 × 10−11.876 × 1014.465 × 10−14.441 × 10−163.759 × 10−34.441 × 10−16
Ave2.195 × 1002.482 × 1001.996 × 1017.076 × 10−151.736 × 1001.991 × 1012.630 × 1003.878 × 10−151.852 × 10−24.441 × 10−16
Sta4.292 × 10−14.888 × 10−11.075 × 10−31.228 × 10−156.237 × 10−14.158 × 10−19.462 × 10−12.185 × 10−159.870 × 10−30.000 × 100
F11Best6.275 × 10−15.147 × 10−22.220 × 10−160.000 × 1006.493 × 10−18.745 × 1011.525 × 10−30.000 × 1007.399 × 10−40.000 × 100
Ave9.570 × 10−11.240 × 10−11.589 × 10−20.000 × 1008.406 × 10−11.888 × 1021.759 × 10−20.000 × 1002.087 × 10−10.000 × 100
Sta8.677 × 10−24.663 × 10−22.510 × 10−20.000 × 1008.516 × 10−26.664 × 1011.317 × 10−20.000 × 1002.894 × 10−10.000 × 100
F12Best6.178 × 10−18.218 × 10−33.330 × 10−17.242 × 10−26.500 × 10−28.066 × 1052.303 × 1005.191 × 10−32.901 × 10−58.490 × 10−8
Ave2.772 × 1005.038 × 10−25.417 × 10−12.125 × 10−12.212 × 1002.354 × 10+76.876 × 1002.032 × 10−26.986 × 10−27.785 × 10−7
Sta1.403 × 1004.255 × 10−22.114 × 10−19.104 × 10−21.131 × 1002.331 × 10+73.378 × 1001.842 × 10−21.409 × 10−11.021 × 10−6
F13Best9.475 × 10−22.682 × 10−12.497 × 1001.345 × 1003.442 × 10−22.962 × 1061.321 × 10−26.692 × 10−23.403 × 10−51.247 × 10−11
Ave2.638 × 10−15.968 × 10−12.779 × 1001.618 × 1001.336 × 10−16.598 × 1071.106 × 1014.739 × 10−19.086 × 10−25.723 × 10−2
Sta1.212 × 10−13.073 × 10−11.274 × 10−11.867 × 10−15.830 × 10−25.397 × 1071.181 × 1012.571 × 10−12.352 × 10−11.550 × 10−1
F14Best9.980 × 10−19.980 × 10−19.980 × 10−19.980 × 10−19.980 × 10−19.980 × 10−19.980 × 10−19.980 × 10−19.980 × 10−19.980 × 10−1
Ave9.980 × 10−13.433 × 1001.033 × 1004.396 × 1009.980 × 10−11.284 × 1001.229 × 1002.507 × 1003.496 × 1001.329 × 100
Sta8.150 × 10−122.264 × 1001.867 × 10−13.991 × 1003.204 × 10−117.498 × 10−16.728 × 10−12.643 × 1002.518 × 1007.521 × 10−1
F15Best3.115 × 10−44.823 × 10−41.242 × 10−33.077 × 10−45.026 × 10−41.418 × 10−33.091 × 10−43.085 × 10−43.075 × 10−43.075 × 10−4
Ave3.391 × 10−39.020 × 10−41.313 × 10−31.164 × 10−34.679 × 10−31.708 × 10−22.121 × 10−36.530 × 10−41.862 × 10−33.075 × 10−4
Sta6.777 × 10−31.483 × 10−44.318 × 10−53.633 × 10−37.978 × 10−31.396 × 10−24.969 × 10−36.333 × 10−45.035 × 10−39.029 × 10−8
F16Best−1.032 × 100−1.032 × 100−1.032 × 100−1.032 × 100−1.032 × 100−1.031 × 100−1.032 × 100−1.032 × 100−1.032 × 100−1.032 × 100
Ave−1.032 × 100−1.032 × 100−1.032 × 100−1.032 × 100−1.032 × 100−9.351 × 10−1−1.032 × 100−1.032 × 100−1.032 × 100−1.032 × 100
Sta2.275 × 10−74.879 × 10−161.239 × 10−51.866 × 10−73.896 × 10−71.427 × 10−13.930 × 10−142.446 × 10−96.712 × 10−166.775 × 10−16
F17Best3.979 × 10−13.979 × 10−13.979 × 10−13.979 × 10−13.979 × 10−16.262 × 1013.979 × 10−13.979 × 10−13.979 × 10−13.979 × 10−1
Ave3.979 × 10−13.979 × 10−13.994 × 10−13.979 × 10−13.979 × 10−017.280 × 1013.979 × 10−13.979 × 10−13.979 × 10−13.979 × 10−1
Sta6.069 × 10−70.000 × 1002.109 × 10−33.776 × 10−54.942 × 10−76.547 × 1006.864 × 10−146.326 × 10−60.000 × 1000.000 × 100
F18Best3.000 × 1003.000 × 1003.000 × 1003.000 × 1003.000 × 1003.000 × 1003.000 × 1003.000 × 1003.000 × 1003.000 × 100
Ave5.700 × 1003.000 × 1003.000 × 1003.000 × 1005.700 × 1001.176 × 1013.000 × 1003.000 × 1003.000 × 1003.000 × 100
Sta1.479 × 1015.955 × 10−152.287 × 10−46.161 × 10−61.479 × 1012.110 × 1012.215 × 10−131.433 × 10−49.330 × 10−161.278 × 10−15
F19Best−3.863 × 100−3.863 × 100−3.860 × 100−3.863 × 100−3.863 × 100−3.844 × 100−3.863 × 100−3.863 × 100−3.863 × 100−3.863 × 100
Ave−3.863 × 100−3.863 × 100−3.854 × 100−3.858 × 100−3.863 × 100−3.301 × 100−3.863 × 100−3.855 × 100−3.863 × 100−3.863 × 100
Sta7.620 × 10−72.065 × 10−151.718 × 10−33.758 × 10−31.328 × 10−63.546 × 10−11.411 × 10−91.381 × 10−22.696 × 10−152.710 × 10−15
F20Best−3.322 × 100−3.322 × 100−3.252 × 100−3.322 × 100−3.322 × 100−2.425 × 100−3.322 × 100−3.322 × 100−3.322 × 100−3.322 × 100
Ave−3.282 × 100−3.278 × 100−2.568 × 100−3.099 × 100−3.270 × 100−1.459 × 100−3.229 × 100−3.240 × 100−3.286 × 100−3.294 × 100
Sta5.700 × 10−25.827 × 10−24.916 × 10−11.588 × 10−16.074 × 10−25.243 × 10−16.338 × 10−21.170 × 10−15.542 × 10−25.115 × 10−2
F21Best−1.015 × 101−1.015 × 101−5.045 × 100−1.015 × 101−1.015 × 101−2.058 × 100−1.015 × 101−1.015 × 101−1.015 × 101−1.015 × 101
Ave−8.043 × 100−6.997 × 100−3.082 × 100−7.552 × 100−7.124 × 100−9.312 × 10−1−7.402 × 100−7.937 × 100−5.826 × 100−8.773 × 100
Sta2.661 × 1003.095 × 1002.064 × 1002.933 × 1002.988 × 1004.052 × 10−13.494 × 1002.791 × 1002.017 × 1002.280 × 100
F22Best−1.040 × 101−1.040 × 101−5.043 × 100−1.040 × 101−1.040 × 101−2.373 × 100−1.040 × 101−1.040 × 101−1.040 × 101−1.040 × 101
Ave−8.822 × 100−8.986 × 100−3.466 × 100−9.782 × 100−8.660 × 100−9.793 × 10−1−9.064 × 100−6.997 × 100−6.205 × 100−8.259 × 100
Sta2.981 × 1002.911 × 1001.980 × 1001.881 × 1002.761 × 1005.399 × 10−12.767 × 1003.383 × 1002.404 × 1002.634 × 100
F23Best−1.054 × 101−1.054 × 101−7.622 × 100−1.054 × 101−1.054 × 101−2.369 × 100−1.054 × 101−1.053 × 101−1.054 × 101−1.053 × 101
Ave−9.552 × 100−9.984 × 100−4.930 × 100−9.639 × 100−9.382 × 100−1.182 × 100−9.138 × 100−7.911 × 100−6.242 × 100−8.903 × 100
Sta2.251 × 1001.687 × 1001.251 × 1002.339 × 1002.381 × 1004.703 × 10−12.891 × 1003.275 × 1002.741 × 1002.513 × 100
Bold indicates optimal value.
Table 5. Experimental results of different strategies.
FunctionIndexSAOSAO1SAO2SAO3SAO4MISAO
F1Best2.750 × 10−41.371 × 10−61.197 × 10−44.930 × 10−48.706 × 10−1758.855 × 10−180
Ave3.823 × 10−38.911 × 10−41.308 × 10−21.718 × 10−21.799 × 10−1432.914 × 10−145
Sta2.584 × 10−32.736 × 10−34.876 × 10−22.837 × 10−29.853 × 10−1431.488 × 10−144
F2Best2.443 × 10−33.935 × 10−41.590 × 10−32.495 × 10−37.709 × 10−911.879 × 10−93
Ave8.470 × 10−32.462 × 10−37.214 × 10−32.257 × 10−24.808 × 10−771.382 × 10−78
Sta6.189 × 10−31.791 × 10−33.902 × 10−32.036 × 10−21.826 × 10−767.520 × 10−78
F3Best6.703 × 1029.263 × 1003.428 × 1022.407 × 1033.730 × 10−1671.714 × 10−160
Ave3.602 × 1035.761 × 1023.429 × 1038.407 × 1031.020 × 10−1341.005 × 10−134
Sta2.241 × 1035.454 × 1022.906 × 1034.796 × 1035.587 × 10−1345.506 × 10−134
F4Best2.264 × 1001.899 × 10−12.102 × 1002.829 × 1002.715 × 10−837.272 × 10−86
Ave4.333 × 1003.506 × 1004.261 × 1006.869 × 1004.885 × 10−681.518 × 10−73
Sta1.448 × 1002.511 × 1001.414 × 1002.447 × 1002.676 × 10−677.409 × 10−73
F5Best2.691 × 1017.918 × 10−32.698 × 1012.839 × 1012.560 × 1011.508 × 100
Ave1.028 × 1022.676 × 1011.949 × 1021.766 × 1022.655 × 1012.220 × 101
Sta1.147 × 1023.262 × 1013.098 × 1024.491 × 1027.420 × 10−19.041 × 100
F6Best1.983 × 10−42.927 × 10−63.476 × 10−41.251 × 10−37.669 × 10−62.912 × 10−6
Ave4.345 × 10−32.586 × 10−44.928 × 10−31.737 × 10−24.322 × 10−21.868 × 10−2
Sta4.107 × 10−33.962 × 10−45.379 × 10−33.148 × 10−29.461 × 10−26.381 × 10−2
F7Best2.733 × 10−21.271 × 10−33.009 × 10−22.296 × 10−21.466 × 10−61.643 × 10−6
Ave5.446 × 10−23.272 × 10−25.981 × 10−26.349 × 10−28.048 × 10−57.706 × 10−5
Sta1.712 × 10−21.852 × 10−21.711 × 10−22.274 × 10−26.488 × 10−56.518 × 10−5
F8Best−1.067 × 104−1.257 × 104−1.176 × 108−1.045 × 104−1.043 × 104−1.347 × 104
Ave−9.262 × 103−1.257 × 104−4.441 × 106−9.216 × 103−9.179 × 103−1.260 × 104
Sta7.172 × 1021.556 × 10−32.144 × 1075.461 × 1026.084 × 1021.645 × 102
F9Best1.295 × 1011.549 × 10−52.067 × 1014.198 × 1010.000 × 1000.000 × 100
Ave5.668 × 1012.108 × 1015.650 × 1018.249 × 1010.000 × 1000.000 × 100
Sta3.094 × 1013.174 × 1012.466 × 1012.358 × 1010.000 × 1000.000 × 100
F10Best2.123 × 10−39.960 × 10−43.575 × 10−31.067 × 10−24.441 × 10−164.441 × 10−16
Ave1.376 × 10−22.001 × 10−11.759 × 10−22.572 × 10−24.441 × 10−164.441 × 10−16
Sta7.748 × 10−37.231 × 10−11.220 × 10−29.599 × 10−30.000 × 1000.000 × 100
F11Best7.761 × 10−41.016 × 10−53.062 × 10−43.805 × 10−40.000 × 1000.000 × 100
Ave1.623 × 10−11.202 × 10−12.001 × 10−12.222 × 10−10.000 × 1000.000 × 100
Sta2.957 × 10−12.780 × 10−13.022 × 10−13.102 × 10−10.000 × 1000.000 × 100
F12Best1.818 × 10−61.721 × 10−98.876 × 10−62.747 × 10−43.989 × 10−83.588 × 10−10
Ave3.409 × 10−15.758 × 10−17.997 × 10−21.232 × 10−13.115 × 10−63.734 × 10−7
Sta1.389 × 1001.074 × 1003.273 × 10−12.519 × 10−11.115 × 10−53.494 × 10−7
F13Best7.349 × 10−46.546 × 10−86.164 × 10−45.652 × 10−41.715 × 10−21.329 × 10−6
Ave3.468 × 10−23.701 × 10−32.796 × 10−21.064 × 10−19.547 × 10−12.170 × 10−2
Sta3.657 × 10−25.249 × 10−34.848 × 10−21.450 × 10−15.797 × 10−14.867 × 10−2
F14Best9.980 × 10−19.980 × 10−19.980 × 10−19.980 × 10−19.980 × 10−19.980 × 10−1
Ave3.299 × 1001.164 × 1003.038 × 1002.413 × 1004.790 × 1001.196 × 100
Sta2.529 × 1004.578 × 10−12.216 × 1002.087 × 1003.892 × 1006.054 × 10−1
F15Best3.075 × 10−43.075 × 10−43.075 × 10−44.703 × 10−43.075 × 10−43.075 × 10−4
Ave3.156 × 10−33.845 × 10−45.751 × 10−46.593 × 10−33.075 × 10−43.075 × 10−4
Sta6.869 × 10−31.255 × 10−42.507 × 10−49.172 × 10−32.282 × 10−85.838 × 10−9
F16Best−1.032 × 100−1.032 × 100−1.032 × 100−1.032 × 100−1.032 × 100−1.032 × 100
Ave−1.032 × 100−1.032 × 100−1.032 × 100−1.032 × 100−1.032 × 100−1.032 × 100
Sta6.775 × 10−166.649 × 10−166.712 × 10−166.775 × 10−166.584 × 10−166.775 × 10−16
F17Best3.979 × 10−13.979 × 10−13.979 × 10−13.979 × 10−13.979 × 10−13.979 × 10−1
Ave3.979 × 10−13.979 × 10−13.979 × 10−13.979 × 10−13.979 × 10−13.979 × 10−1
Sta0.000 × 1000.000 × 1000.000 × 1000.000 × 1005.459 × 10−90.000 × 100
F18Best3.000 × 1003.000 × 1003.000 × 1003.000 × 1003.000 × 1003.000 × 100
Ave3.000 × 1003.000 × 1003.000 × 1003.000 × 1003.000 × 1003.000 × 100
Sta1.256 × 10−151.040 × 10−157.142 × 10−166.388 × 10−161.198 × 10−159.792 × 10−16
F19Best−3.863 × 100−3.863 × 100−3.863 × 100−3.863 × 100−3.863 × 100−3.863 × 100
Ave−3.863 × 100−3.863 × 100−3.863 × 100−3.863 × 100−3.863 × 100−3.863 × 100
Sta2.710 × 10−152.710 × 10−152.710 × 10−152.710 × 10−152.696 × 10−152.710 × 10−15
F20Best−3.322 × 100−3.322 × 100−3.322 × 100−3.322 × 100−3.322 × 100−3.322 × 100
Ave−3.259 × 100−3.306 × 100−3.259 × 100−3.255 × 100−3.290 × 100−3.310 × 100
Sta6.033 × 10−24.111 × 10−26.033 × 10−25.992 × 10−25.348 × 10−23.628 × 10−2
F21Best−1.015 × 101−1.015 × 101−1.015 × 101−1.015 × 101−5.055 × 100−1.015 × 101
Ave−5.247 × 100−7.870 × 100−5.407 × 100−6.618 × 100−5.055 × 100−8.439 × 100
Sta1.854 × 1002.690 × 1001.717 × 1003.068 × 1001.030 × 10−152.434 × 100
F22Best−1.040 × 101−1.040 × 101−1.040 × 101−1.040 × 101−5.088 × 100−1.040 × 101
Ave−6.118 × 100−7.928 × 100−6.214 × 100−6.994 × 100−5.088 × 100−8.796 × 100
Sta2.772 × 1002.691 × 1002.654 × 1003.758 × 1003.938 × 10−152.469 × 100
F23Best−1.054 × 101−1.054 × 101−1.054 × 101−1.054 × 101−1.052 × 101−1.053 × 101
Ave−6.419 × 100−9.276 × 100−7.429 × 100−7.401 × 100−5.308 × 100−9.433 × 100
Sta2.878 × 1002.324 × 1002.774 × 1003.723 × 1009.837 × 10−12.189 × 100
Bold indicates optimal value.
Table 6. Friedman values.
Function | RIME | PSO | ChOA | GJO | MVO | GA | SSA | WOA | SAO | MISAO
F1 | 8.33 | 8.33 | 4.40 | 3.00 | 7.33 | 10.00 | 4.60 | 2.00 | 6.00 | 1.00
F2 | 7.00 | 8.80 | 4.00 | 3.00 | 6.83 | 10.00 | 7.37 | 2.00 | 5.00 | 1.00
F3 | 6.67 | 4.23 | 3.20 | 2.00 | 4.60 | 9.60 | 6.57 | 9.40 | 7.73 | 1.00
F4 | 6.93 | 4.53 | 3.03 | 2.00 | 4.63 | 9.73 | 8.07 | 8.90 | 6.17 | 1.00
F5 | 7.63 | 8.50 | 4.40 | 2.70 | 7.07 | 10.00 | 5.77 | 2.90 | 4.97 | 1.07
F6 | 6.73 | 6.77 | 8.77 | 7.33 | 5.40 | 10.00 | 1.00 | 4.00 | 2.97 | 2.03
F7 | 5.57 | 9.70 | 3.03 | 2.33 | 5.70 | 9.30 | 8.00 | 3.57 | 6.73 | 1.07
F8 | 2.77 | 6.90 | 7.87 | 8.83 | 5.63 | 9.97 | 5.67 | 3.07 | 3.30 | 1.00
F9 | 6.43 | 8.83 | 4.00 | 2.00 | 7.80 | 9.93 | 5.83 | 2.00 | 6.17 | 2.00
F10 | 6.37 | 7.10 | 9.43 | 2.85 | 5.50 | 9.57 | 7.03 | 2.05 | 4.00 | 1.10
F11 | 8.87 | 6.63 | 4.33 | 2.00 | 8.07 | 10.00 | 4.93 | 2.00 | 6.17 | 2.00
F12 | 7.77 | 3.57 | 6.10 | 4.87 | 7.20 | 10.00 | 8.80 | 2.83 | 2.87 | 1.00
F13 | 4.13 | 5.83 | 8.37 | 7.33 | 3.13 | 10.00 | 7.17 | 5.07 | 2.20 | 1.77
F14 | 2.43 | 6.80 | 6.87 | 8.50 | 3.50 | 5.67 | 2.23 | 6.80 | 6.77 | 5.43
F15 | 5.93 | 6.50 | 8.30 | 3.50 | 6.20 | 9.80 | 5.90 | 3.83 | 4.00 | 1.03
F16 | 6.50 | 2.85 | 8.83 | 7.23 | 7.40 | 10.00 | 4.03 | 5.00 | 1.60 | 1.55
F17 | 5.87 | 2.05 | 9.00 | 7.63 | 6.33 | 10.00 | 3.85 | 6.17 | 2.05 | 2.05
F18 | 5.60 | 3.02 | 8.80 | 6.30 | 7.17 | 9.57 | 3.97 | 7.57 | 1.50 | 1.52
F19 | 5.10 | 2.98 | 8.67 | 7.60 | 5.90 | 10.00 | 4.00 | 7.73 | 1.53 | 1.48
F20 | 4.60 | 3.40 | 8.57 | 7.63 | 4.57 | 9.97 | 5.53 | 6.07 | 2.53 | 2.13
F21 | 3.97 | 4.37 | 8.90 | 5.10 | 4.10 | 9.73 | 3.60 | 5.30 | 5.45 | 4.48
F22 | 4.63 | 2.73 | 8.80 | 4.73 | 3.90 | 9.83 | 2.80 | 6.50 | 5.47 | 5.60
F23 | 4.13 | 2.07 | 8.17 | 5.63 | 3.97 | 9.93 | 3.17 | 6.70 | 5.67 | 5.57
Average | 5.82 | 5.50 | 6.78 | 4.96 | 5.74 | 9.68 | 5.21 | 4.85 | 4.38 | 2.08
Overall rank | 8 | 6 | 9 | 4 | 7 | 10 | 5 | 3 | 2 | 1
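As a reading aid for Table 6, the sketch below shows one common way Friedman-style mean ranks are produced: within every independent run, the algorithms are ranked by their fitness value (lower is better), and those per-run ranks are averaged for each function; the overall rank then follows from sorting the column averages. The function and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import rankdata

def friedman_mean_ranks(results):
    """results: dict {algorithm_name: array of shape (n_runs,)} for one test function.
    Returns {algorithm_name: mean rank across runs}; lower mean rank is better."""
    names = list(results)
    runs = np.column_stack([results[n] for n in names])   # shape (n_runs, n_algorithms)
    ranks = np.apply_along_axis(rankdata, 1, runs)        # rank algorithms within each run
    return dict(zip(names, ranks.mean(axis=0)))

# Toy example with three hypothetical algorithms and five runs (minimization)
rng = np.random.default_rng(0)
demo = {"A": rng.random(5), "B": rng.random(5) + 0.5, "C": rng.random(5) + 1.0}
mean_ranks = friedman_mean_ranks(demo)
overall_order = sorted(mean_ranks, key=mean_ranks.get)    # best (lowest mean rank) first
print(mean_ranks, overall_order)
```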
Table 7. Wilcoxon rank test results.
Function Type | MISAO vs. RIME | MISAO vs. PSO | MISAO vs. ChOA | MISAO vs. GJO | MISAO vs. MVO | MISAO vs. GA | MISAO vs. SSA | MISAO vs. WOA | MISAO vs. SAO
 | (+/=/−) | (+/=/−) | (+/=/−) | (+/=/−) | (+/=/−) | (+/=/−) | (+/=/−) | (+/=/−) | (+/=/−)
Unimodal | 7/0/0 | 7/0/0 | 7/0/0 | 7/0/0 | 7/0/0 | 7/0/0 | 6/0/1 | 7/0/0 | 6/0/1
Multimodal | 6/0/0 | 6/0/0 | 6/0/0 | 4/2/0 | 6/0/0 | 6/0/0 | 6/0/0 | 4/2/0 | 6/0/0
Fixed dimension | 7/0/3 | 7/2/1 | 9/0/1 | 7/2/1 | 7/1/2 | 9/1/0 | 8/0/2 | 9/1/0 | 3/7/0
Total | 20/0/3 | 20/2/1 | 22/0/1 | 18/4/1 | 20/1/2 | 22/1/0 | 20/0/2 | 20/3/0 | 15/7/1
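Win/tie/loss tallies of the kind reported in Table 7 can be reproduced with a per-function rank-based significance test on the independent-run results of MISAO and one rival. The sketch below assumes a Wilcoxon rank-sum test at the 5% level and minimization; the paper's exact test variant and significance level may differ, and all names are illustrative.

```python
import numpy as np
from scipy.stats import ranksums

def tally_wilcoxon(misao_runs, rival_runs, alpha=0.05):
    """misao_runs, rival_runs: lists of per-function arrays of run results (minimization).
    Returns (wins, ties, losses) for MISAO against the rival, i.e. the +/=/− counts."""
    wins = ties = losses = 0
    for m, r in zip(misao_runs, rival_runs):
        _, p = ranksums(m, r)
        if p >= alpha:
            ties += 1                       # no significant difference
        elif np.mean(m) < np.mean(r):
            wins += 1                       # significantly better (smaller is better)
        else:
            losses += 1
    return wins, ties, losses

# Hypothetical usage on two functions with 30 runs each
rng = np.random.default_rng(1)
misao = [rng.normal(0.0, 0.1, 30), rng.normal(1.0, 0.1, 30)]
rival = [rng.normal(0.5, 0.1, 30), rng.normal(1.0, 0.1, 30)]
print(tally_wilcoxon(misao, rival))
```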
Table 8. Path planning results of different algorithms.
Algorithm | Best | Median | Worst | Mean
RIME | 424.205 | 461.567 | 670.051 | 471.074
PSO | 235.007 | 477.273 | 668.182 | 467.745
ChOA | 438.452 | 492.688 | 679.303 | 499.315
GJO | 424.395 | 426.087 | 534.398 | 434.375
MVO | 424.056 | 437.418 | 700.349 | 474.485
SSA | 423.993 | 453.215 | 587.823 | 460.189
WOA | 424.019 | 525.968 | 771.773 | 522.988
SAO | 423.990 | 448.589 | 696.159 | 456.818
MISAO | 256.730 | 388.449 | 477.976 | 371.880
Bold indicates optimal value.
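The Best/Median/Worst/Mean columns of Table 8 are simple statistics over the best path cost returned by each independent run of an algorithm. A minimal sketch, assuming the per-run costs are already collected in an array (values below are placeholders):

```python
import numpy as np

def run_statistics(run_costs):
    """run_costs: best path cost returned by each independent run of one planner."""
    costs = np.asarray(run_costs, dtype=float)
    return {"Best": costs.min(), "Median": float(np.median(costs)),
            "Worst": costs.max(), "Mean": costs.mean()}

# Hypothetical example with five runs
print(run_statistics([424.2, 461.6, 670.1, 430.0, 455.0]))
```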
Table 9. Four costs for different algorithms.
UAV | RIME | PSO | ChOA | GJO | MVO | SSA | WOA | SAO | MISAO
Path cost:
UAV1 | 1075.03 | 1429.98 | 1016.65 | 1032.93 | 1095.77 | 1117.40 | 1061.48 | 1072.73 | 1038.50
UAV2 | 1063.97 | 1406.79 | 1019.03 | 1033.09 | 1094.94 | 1116.73 | 1045.09 | 1113.62 | 1042.57
UAV3 | 1084.12 | 1388.17 | 1020.83 | 1017.71 | 1094.50 | 1087.09 | 1077.71 | 1078.78 | 1039.31
UAV4 | 1085.60 | 1380.99 | 1020.41 | 1030.62 | 1091.99 | 1115.94 | 1075.26 | 1090.36 | 1038.52
UAV5 | 1092.01 | 1339.15 | 1017.76 | 1023.12 | 1075.06 | 1098.49 | 1059.29 | 1079.55 | 1045.48
Threat cost:
UAV1 | 2.64 | 333,337.39 | 4,666,669.54 | 4.93 | 4.49 | 333,340.19 | 333,337.45 | 3.14 | 4.33
UAV2 | 3.83 | 2,000,002.79 | 4,666,669.87 | 5.04 | 3.19 | 4.29 | 1,666,670.32 | 6.00 | 4.12
UAV3 | 5.25 | 1,000,004.54 | 3,333,337.15 | 4.03 | 3.79 | 4.37 | 333,338.61 | 333,336.53 | 3.28
UAV4 | 4.89 | 333,337.22 | 3,333,336.64 | 6.03 | 333,338.39 | 5.98 | 4.03 | 3.78 | 3.72
UAV5 | 3.80 | 3.61 | 5,000,002.43 | 5.19 | 4.06 | 2.74 | 1,333,341.58 | 5.38 | 4.07
High cost:
UAV1 | 199.81 | 324.18 | 365.86 | 273.14 | 163.91 | 244.53 | 296.04 | 110.70 | 107.64
UAV2 | 192.69 | 310.23 | 361.80 | 286.34 | 159.62 | 241.88 | 267.67 | 114.81 | 135.94
UAV3 | 188.16 | 305.01 | 329.93 | 279.71 | 175.34 | 250.69 | 305.76 | 115.15 | 135.08
UAV4 | 199.75 | 316.12 | 355.94 | 235.59 | 167.51 | 225.95 | 278.67 | 120.72 | 103.74
UAV5 | 173.26 | 310.85 | 374.65 | 282.60 | 183.80 | 201.48 | 315.09 | 120.95 | 121.81
Angle cost:
UAV1 | 88.67 | 701.15 | 160.41 | 184.11 | 87.61 | 129.01 | 136.80 | 228.42 | 78.64
UAV2 | 93.95 | 729.69 | 149.27 | 149.22 | 69.65 | 128.85 | 111.82 | 225.71 | 92.44
UAV3 | 88.88 | 679.20 | 159.46 | 127.81 | 76.41 | 134.39 | 133.24 | 218.00 | 72.45
UAV4 | 88.97 | 762.94 | 177.89 | 143.11 | 72.50 | 119.59 | 137.88 | 264.42 | 67.86
UAV5 | 114.27 | 653.05 | 162.24 | 149.16 | 68.92 | 134.75 | 116.21 | 234.26 | 79.53
Table 10. Total cost of different algorithms.
UAV | RIME | PSO | ChOA | GJO | MVO | SSA | WOA | SAO | MISAO
UAV1 | 7464.59 | 344,430.20 | 4,675,571.83 | 8085.11 | 7210.11 | 341,501.54 | 341,742.08 | 6702.22 | 6351.88
UAV2 | 7344.55 | 2,010,868.68 | 4,675,532.31 | 8183.06 | 7143.69 | 8135.59 | 1,674,684.28 | 6947.92 | 6668.84
UAV3 | 7396.31 | 1,010,674.66 | 3,341,900.06 | 8017.46 | 7306.15 | 8081.10 | 341,917.99 | 340,099.96 | 6623.11
UAV4 | 7519.35 | 344,166.33 | 3,342,175.96 | 7658.20 | 340,545.96 | 7964.80 | 8304.95 | 6927.19 | 6301.61
UAV5 | 7310.73 | 10,460.89 | 5,008,999.94 | 8095.99 | 7286.32 | 7644.76 | 1,341,905.13 | 6846.83 | 6529.06
Total | 37,035.52 | 3,720,600.76 | 21,044,180.09 | 40,039.82 | 369,492.22 | 373,327.78 | 3,708,554.43 | 367,524.13 | 32,474.50
Bold indicates optimal value.
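The totals in Table 10 aggregate, for every UAV, the four component costs reported in Table 9 and then sum the per-UAV values into a mission total. A minimal sketch of that aggregation is given below; the unit weights and numeric values are illustrative assumptions only, since the actual weighting coefficients are those of the cost model defined earlier in the paper.

```python
def total_cost(path_c, threat_c, high_c, angle_c, w=(1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of the four path-planning cost terms for a single UAV.
    The weights w are placeholders, not the paper's coefficients."""
    return w[0] * path_c + w[1] * threat_c + w[2] * high_c + w[3] * angle_c

# Hypothetical example: five UAVs of one planner, summed into a mission total
per_uav = [total_cost(1000.0, 5.0, 200.0, 90.0) for _ in range(5)]
print(sum(per_uav))
```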
Table 11. Four costs for different algorithms for the start (120, 130, 80) and end (870, 830, 130).
UAV | RIME | PSO | ChOA | GJO | MVO | SSA | WOA | SAO | MISAO
Path cost:
UAV1 | 1221.54 | 1391.86 | 1280.00 | 1200.45 | 1177.15 | 1202.99 | 1346.13 | 1269.49 | 1168.31
UAV2 | 1195.97 | 1406.50 | 1330.19 | 1230.07 | 1175.52 | 1242.65 | 1299.77 | 1235.78 | 1180.97
UAV3 | 1174.74 | 1397.22 | 1375.19 | 1219.37 | 1184.90 | 1228.18 | 1354.88 | 1268.65 | 1179.04
UAV4 | 1231.57 | 1408.44 | 1336.29 | 1241.64 | 1211.13 | 1226.36 | 1341.87 | 1245.94 | 1145.24
UAV5 | 1222.05 | 1380.98 | 1358.21 | 1212.39 | 1203.52 | 1245.22 | 1332.96 | 1243.26 | 1172.37
Threat cost:
UAV1 | 7.69 | 1,666,671.61 | 2,333,336.32 | 2,000,006.19 | 6.87 | 8.38 | 2,333,340.76 | 5.11 | 7.02
UAV2 | 9.28 | 2,000,006.71 | 666,671.15 | 666,674.82 | 666,675.27 | 333,341.55 | 3,666,673.17 | 3.45 | 4.94
UAV3 | 7.48 | 3,000,006.48 | 1,666,670.16 | 666,674.32 | 6.43 | 1,000,011.87 | 4,666,672.16 | 4.50 | 6.62
UAV4 | 333,340.78 | 3,333,337.80 | 1,666,670.24 | 1,000,009.25 | 8.71 | 5.71 | 3,666,675.04 | 3.04 | 6.85
UAV5 | 6.10 | 1,666,673.55 | 333,339.20 | 2,000,007.16 | 7.85 | 333,344.53 | 4,333,340.92 | 2.70 | 8.62
High cost:
UAV1 | 204.79 | 315.95 | 289.87 | 266.84 | 163.03 | 227.60 | 323.49 | 190.86 | 142.99
UAV2 | 220.91 | 301.14 | 277.59 | 214.38 | 199.57 | 216.83 | 302.77 | 171.63 | 155.69
UAV3 | 212.73 | 319.29 | 281.59 | 249.41 | 177.19 | 215.57 | 299.80 | 176.74 | 143.23
UAV4 | 235.38 | 303.26 | 301.99 | 240.04 | 199.38 | 228.37 | 294.04 | 168.32 | 136.37
UAV5 | 211.22 | 292.09 | 314.21 | 218.88 | 195.62 | 225.46 | 300.82 | 174.70 | 149.43
Angle cost:
UAV1 | 92.84 | 542.99 | 273.08 | 173.95 | 156.83 | 112.89 | 319.19 | 251.90 | 147.78
UAV2 | 118.85 | 472.01 | 316.50 | 190.84 | 164.24 | 115.34 | 197.20 | 312.29 | 122.21
UAV3 | 98.49 | 512.30 | 312.45 | 155.11 | 152.85 | 135.18 | 250.17 | 222.03 | 137.13
UAV4 | 145.25 | 496.53 | 299.00 | 200.53 | 120.17 | 116.71 | 282.89 | 208.60 | 140.08
UAV5 | 130.96 | 466.36 | 277.71 | 167.06 | 131.08 | 149.36 | 235.88 | 257.76 | 128.31
Table 12. Total cost of different algorithms for the start (120, 130, 80) and end (870, 830, 130).
UAV | RIME | PSO | ChOA | GJO | MVO | SSA | WOA | SAO | MISAO
UAV1 | 8256.15 | 1,677,333.39 | 2,342,908.13 | 2,008,850.84 | 7589.74 | 8412.16 | 2,343,625.51 | 8513.04 | 7426.28
UAV2 | 8317.05 | 2,010,522.64 | 676,414.52 | 675,159.82 | 674,612.83 | 341,838.42 | 3,676,396.95 | 8210.90 | 7588.94
UAV3 | 8106.95 | 3,010,697.77 | 1,676,674.48 | 675,420.35 | 7675.63 | 1,008,443.62 | 4,676,694.70 | 8337.18 | 7471.24
UAV4 | 341,997.64 | 3,343,909.15 | 1,676,670.56 | 1,008,818.43 | 8148.35 | 8537.98 | 3,676,607.68 | 8124.55 | 7236.79
UAV5 | 8359.52 | 1,676,965.68 | 343,550.08 | 2,008,425.02 | 8082.71 | 341,974.60 | 4,343,249.82 | 8223.76 | 7493.13
Total | 375,037.32 | 11,719,428.63 | 6,716,217.77 | 6,376,674.46 | 706,109.27 | 1,709,206.78 | 18,716,574.65 | 41,409.43 | 37,216.37
Bold indicates optimal value.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
