Article

Multi-Strategy Improved Particle Swarm Optimization Algorithm and Gazelle Optimization Algorithm and Application

School of Electronic Engineering, Xi’an University of Posts and Telecommunications, Xi’an 710121, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(8), 1580; https://doi.org/10.3390/electronics13081580
Submission received: 13 March 2024 / Revised: 10 April 2024 / Accepted: 17 April 2024 / Published: 20 April 2024

Abstract

In addressing the challenges of low convergence accuracy and unstable optimization results in the original gazelle optimization algorithm (GOA), this paper proposes a multi-strategy particle swarm optimization with gazelle optimization algorithm (MPSOGOA) that incorporates chaotic mapping. In the population initialization stage, piecewise (segmented) mapping is integrated to generate a uniformly distributed, high-quality population that enhances diversity, and a global perturbation of the population is added to improve the convergence speed in the early iterations and the convergence accuracy in the late iterations. By combining particle swarm optimization (PSO) and GOA, the algorithm leverages the individual experiences of gazelles, which improves convergence accuracy and stability. Tested on 35 benchmark functions, MPSOGOA demonstrates superior convergence accuracy and stability in Friedman tests and Wilcoxon signed-rank tests, surpassing other metaheuristic algorithms. Applied to engineering optimization problems, including constrained implementations, MPSOGOA exhibits excellent optimization performance.

1. Introduction

Optimization is the process wherein individuals, under certain constraints, employ specific methods and techniques to enhance the performance of existing entities, thereby seeking the optimal solution to a given problem within the solution space. Nowadays, optimization issues are ubiquitous in daily life and engineering technology, serving as popular research topics in fields such as automation, computer science, telecommunications, aerospace, and more.
Heuristic algorithms, which draw inspiration from natural laws, can be broadly classified into the following categories. Physical methods, based on principles such as gravity, temperature, and inertia, randomly search for the optimal solution to optimization problems; examples include gravitational search [1], simulated annealing [2], and black hole algorithms [3]. Evolutionary algorithms, grounded in Darwin's theory, facilitate the gradual discovery of optimal solutions as the individuals within a population evolve through iterations during the search process; typical examples include genetic algorithms [4], biogeography-based optimization algorithms [5], the artificial algae algorithm [6], the black widow optimization algorithm [7], and the tabu search algorithm [8]. In a population of organisms, each individual has its own role, and communication among individuals enables the acquisition of superior information, ultimately completing the population's evolution. Swarm intelligence optimization algorithms are essentially mathematical models created by researchers to simulate the behavior of collective animals in the natural world.
Inspired by various behaviors exhibited by natural biological populations such as insects, birds, fish, and herds, numerous swarm intelligence optimization algorithms have been proposed and have played a crucial role in many scientific and engineering applications. Mayer Martin Janos et al. used genetic algorithms to find an optimal solution for a hybrid renewable energy system at the home level that is both economical and environmentally friendly [9]. Laith Abualigah and Muhammad Alkhrabsheh effectively solved the cloud computing task scheduling problem using a hybrid multi-verse optimizer enhanced by a genetic algorithm [11]. G. Lodewijks et al. [10] significantly reduced CO2 emissions in an airport baggage handling transport system by applying the particle swarm optimization (PSO) algorithm. Bilal Hussain et al. [12] proposed a weighted-sum PSO algorithm that provides a new solution for price-driven demand response and home energy management with renewable energy and storage scheduling. Paul Kaushik and Hati Debolina [13] applied the Harris hawk optimization algorithm to household energy management, resulting in reduced power consumption. Jiang [14] and others utilized the artificial bee colony algorithm for ship structural profile optimization. Abd Elaziz Mohamed and colleagues [15] improved the artificial rabbit optimization algorithm for skin cancer prediction, achieving reliable predictive results. Bishla Sandeep and team [16] employed the chimpanzee optimization algorithm for optimizing the scheduling of batteries in electric vehicles. Percin Hasan Bektas and Caliskan Abuzer [17] utilized the whale optimization algorithm to control fuel cell systems. Jagadish Kumar N. and Balasubramanian C. [18] implemented the black widow optimization algorithm for cloud service resource scheduling, effectively reducing the cost of cloud services. Zeng [19] optimized heterogeneous wireless sensor network coverage using the wild horse optimization algorithm, achieving significant coverage and connectivity. Chhabra Amit [20] and others applied the bald eagle search optimization algorithm to feature selection. Liu and team [21] predicted the lifespan of lithium-ion batteries using an improved sparrow algorithm. Xu and colleagues [22] performed feature selection using the binary arithmetic optimization algorithm.
With the development of heuristic algorithms, integrating different optimization mechanisms and evolutionary characteristics into algorithms, as well as drawing on each other's strengths and overcoming the inherent deficiencies of the algorithms, has gradually become a new trend in the development of optimization algorithms. Chen [23] and others combined the differential evolution algorithm with the biogeography-based optimization algorithm for application in the three-dimensional bin packing problem, significantly improving the utilization of box volume. Long and colleagues [24] integrated the bacterial foraging optimization algorithm and simulated annealing algorithm in local path planning for unmanned vessels, efficiently planning obstacle avoidance paths. Zou and team [25] employed a cross-strategy of the whale optimization algorithm and the genetic algorithm in a cogeneration system, reducing energy consumption. Ramachandran Murugan [26] and others balanced the grasshopper optimization algorithm and the Harris hawk optimization algorithm in the initial and later convergence stages, applying it to the economic dispatch problem of combined heat and power. Manar Hamza and team [27] combined the differential evolution algorithm with the arithmetic optimization algorithm, enhancing the optimization effect. Pashaei Elham and Pashaei Elnaz [28] combined the binary arithmetic optimization algorithm with the simulated annealing algorithm, improving computational accuracy. Bhowmik Sandeep and Acharyya Sriyankar [29] combined the differential evolution algorithm with the genetic algorithm in the image encryption problem.
PSO, as one of the classic metaheuristic algorithms, has been applied in many fields in recent years. Valiollah Panahizadeh [30] used PSO to improve the impact strength and elastic modulus of polyamide-based nanocomposites. Kim Kang Hyun et al. [31] used the PSO algorithm to optimize the drainage system of an undersea tunnel, significantly reducing the construction cost. Kirti Pal et al. [32] optimized the installation cost of flexible AC transmission systems through PSO to improve the stability and load conditions of the power system. Zezhong Kang et al. [33] applied an improved PSO algorithm to the rural power grid auxiliary cogeneration system in the North China Plain to determine the optimal unit capacity configuration. In addition, particle swarm computing is often combined with other methods to improve search performance and has been applied in various fields. Somporn Sirisumrannukul et al. [34] combined artificial neural networks (ANNs) and the PSO algorithm not only to collect real-time environmental data and air conditioner usage records, but also to autonomously adjust the operation of air conditioners. Sathasivam Karthikeyan et al. [35] adopted the artificial bee colony (ABC) algorithm and the PSO algorithm to optimize the Boost converter and improve the efficiency of the system. Norouzi Hadi and Bazargan Jalal [36] used the linear Muskingum method and the PSO algorithm for the first time to study river water pollution and calculate the temporal change of pollution concentration at different river locations. Shaikh Muhammad Suhail [37] and others combined the PSO algorithm with the moth-flame optimization algorithm for application in power transmission systems. Makhija Divya and colleagues [38] overcame the drawbacks of both the local search of the PSO algorithm and the global search of the grey wolf optimization algorithm, applying this method to the workflow task scheduling problem. Tijani Muhammed Adekilekun and team [39] combined the PSO algorithm with the bat algorithm to effectively avoid falling into local optima, applying this method to the joint economic dispatch scheduling problem in power systems. Osei Kwakye Jeremiah [40] and others combined the PSO algorithm with the gravitational search algorithm to overcome premature convergence. Wang and colleagues [41] integrated the PSO algorithm with the marine predator algorithm. Samantaray Sandeep and team [42] combined the PSO algorithm with the slime mold algorithm in flood flow prediction. Wang [43] and others combined the PSO algorithm with the artificial bee colony algorithm in underwater terrain-assisted navigation, enhancing the matching effect.
Among the swarm intelligence optimization algorithms mentioned above, the gazelle optimization algorithm (GOA) [44] has gained increasing usage in practical engineering optimization problems due to its advantage in finding the optimal solution in test functions. However, due to its inherent drawbacks such as low convergence accuracy and unstable optimization results, it may not yield satisfactory results in all optimization problems. This paper aims to improve the shortcomings of the GOA, addressing its deficiencies and enhancing the convergence speed and stability of GOA. The main contributions of this work are as follows:
  • Initializing the population through chaotic mapping to improve the quality and diversity of initial solutions.
  • Implementing phased population perturbation to enhance the stability of optimization results while maintaining high precision.
  • Combining the GOA with PSO to exploit the individual experience of each gazelle during the escape process, improving the ability of the algorithm to jump out of local optima.
This paper is organized into six sections. Section 2 briefly describes the principles of the traditional GOA. Section 3 introduces the MPSOGOA. Section 4 presents the experimental design for the test functions and engineering applications. Section 5 discusses the experimental results. The conclusion is presented in the final section.

2. Gazelle Optimization Algorithm

2.1. Exploration Phase

During this phase, the gazelles, without predators or any signs of their presence, remain in a calm state, grazing. Drawing on the foraging behavior of gazelles freely grazing, the algorithm simulates the random movements of gazelles within the solution space. In nature, the strongest gazelles not only possess strong survival abilities but also lead other gazelles in evading predators, with the fittest individual in the population being referred to as the alpha gazelle. Assuming the d-dimensional alpha gazelle is represented as shown in Equation (1):
$$ x = \left[ x(1), x(2), \ldots, x(d-1), x(d) \right] \tag{1} $$
We extend the top gazelle individuals to construct an n × d dimensional Elite matrix, where n represents the population number and d represents the dimension. The matrix is given by Equation (2):
$$ \mathrm{Elite} = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,d-1} & x_{1,d} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,d-1} & x_{2,d} \\ \vdots & \vdots & x_{i,j} & \vdots & \vdots \\ x_{n,1} & x_{n,2} & \cdots & x_{n,d-1} & x_{n,d} \end{bmatrix} \tag{2} $$
The updating strategy of individual positions in the gazelle population is related to the current optimal individual position. Based on the distance between the current optimal individual and its own grazing position, the individual position is updated, with the displacement step controlled by Brownian motion. The mathematical model is shown in:
$$ \mathrm{gazelle}_{t+1} = \mathrm{gazelle}_t + s \cdot R \otimes R_B \otimes \left( \mathrm{Elite}_t - R_B \otimes \mathrm{gazelle}_t \right) \tag{3} $$
where $\mathrm{gazelle}_{t+1}$ and $\mathrm{gazelle}_t$ represent the positions at the (t + 1)-th and t-th iterations, respectively, $s$ denotes the speed of gazelle movement during free grazing, $R$ is a vector of random numbers between 0 and 1, $\otimes$ denotes element-wise multiplication, $\mathrm{Elite}$ represents the matrix of the alpha gazelle, and $R_B$ is the vector of Brownian motion, as given by Equations (4) and (5):
$$ f(x; \mu, \sigma) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left( -\frac{(x - \mu)^2}{2 \sigma^2} \right) = \frac{1}{\sqrt{2 \pi}} \exp\left( -\frac{x^2}{2} \right) \tag{4} $$
where μ and σ are constants: μ = 0 is the mean value and σ² = 1 is the unit variance.
$$ R_B = \begin{bmatrix} f_{1,1} & f_{1,2} & \cdots & f_{1,d-1} & f_{1,d} \\ f_{2,1} & f_{2,2} & \cdots & f_{2,d-1} & f_{2,d} \\ \vdots & \vdots & f_{i,j} & \vdots & \vdots \\ f_{n,1} & f_{n,2} & \cdots & f_{n,d-1} & f_{n,d} \end{bmatrix} \tag{5} $$

2.2. Exploitation Phase

In this phase, the algorithm simulates the fleeing behavior of gazelles upon detecting predators, with movements in opposite directions adopted according to the parity of the iteration count. Equation (6) for the Lévy flight motion is provided in [39]:
$$ f(\alpha) = 0.05 \times \frac{x}{\left| y \right|^{1/\alpha}} \tag{6} $$
where α = 1.5, $x = \mathrm{Normal}(0, \sigma_x^2)$, $y = \mathrm{Normal}(0, \sigma_y^2)$, $\sigma_y = 1$, and $\sigma_x$ is given by:
$$ \sigma_x = \left[ \frac{\Gamma(1 + \alpha) \sin\left( \frac{\pi \alpha}{2} \right)}{\Gamma\left( \frac{1 + \alpha}{2} \right) \alpha \, 2^{\frac{\alpha - 1}{2}}} \right]^{\frac{1}{\alpha}} \tag{7} $$
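To make the step-size generation concrete, the following Python sketch implements Mantegna's algorithm for Equations (6) and (7); the function name and array shapes are illustrative assumptions rather than part of the original paper, and the Brownian vector of Equations (4) and (5) is noted in the trailing comment for comparison.
```python
import numpy as np
from math import gamma, pi, sin

def levy_steps(n, d, alpha=1.5):
    """Draw an n-by-d matrix of Levy-distributed steps via Mantegna's
    algorithm, matching Equations (6) and (7) with sigma_y = 1."""
    sigma_x = (gamma(1 + alpha) * sin(pi * alpha / 2)
               / (gamma((1 + alpha) / 2) * alpha * 2 ** ((alpha - 1) / 2))) ** (1 / alpha)
    x = np.random.normal(0.0, sigma_x, size=(n, d))   # x ~ Normal(0, sigma_x^2)
    y = np.random.normal(0.0, 1.0, size=(n, d))       # y ~ Normal(0, sigma_y^2)
    return 0.05 * x / np.abs(y) ** (1 / alpha)        # Equation (6)

# The Brownian vector R_B of Equation (5) is simply a matrix of standard
# normal draws: R_B = np.random.normal(0.0, 1.0, size=(n, d))
```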
Upon spotting a predator, gazelles immediately initiate escape, simulating the gazelle’s fleeing behavior using a Lévy flight. The escape model is provided by:
$$ \mathrm{gazelle}_{t+1} = \mathrm{gazelle}_t + S \cdot \mu \cdot R \otimes R_L \otimes \left( \mathrm{Elite}_t - R_L \otimes \mathrm{gazelle}_t \right) \tag{8} $$
where S represents the maximum speed achievable by the gazelle during the escape process, μ = ±1 controls the direction of movement according to the parity of the iteration count, and $R_L$ is a vector of random numbers based on the Lévy flight, as given by:
$$ R_L = \begin{bmatrix} f_{1,1} & f_{1,2} & \cdots & f_{1,d-1} & f_{1,d} \\ f_{2,1} & f_{2,2} & \cdots & f_{2,d-1} & f_{2,d} \\ \vdots & \vdots & f_{i,j} & \vdots & \vdots \\ f_{n,1} & f_{n,2} & \cdots & f_{n,d-1} & f_{n,d} \end{bmatrix} \tag{9} $$
While tracking gazelles, predators move in the same direction. Therefore, during the gazelle’s escape process, the predators also exhibit exploratory behavior in the search space. However, predators are slower in the initial phase of the pursuit, and a Brownian motion is used to simulate the chasing process, followed by the adoption of a Lévy flight in the later stages to model the predator’s behavior. The mathematical model for the predator’s pursuit of gazelles is provided by:
$$ \mathrm{gazelle}_{t+1} = \mathrm{gazelle}_t + S \cdot \mu \cdot CF \otimes R_B \otimes \left( \mathrm{Elite}_t - R_L \otimes \mathrm{gazelle}_t \right) \tag{10} $$
where CF represents the cumulative effect of predators, as shown in:
$$ CF = \left( 1 - \frac{t}{T} \right)^{2 \frac{t}{T}} \tag{11} $$
where t is the current iteration and T is the maximum number of iterations.
The survival rate of gazelles in the face of predators is 0.66, which implies that predators have a 34% chance of successful hunting. Using predator success rates (PSRs) to represent the success rate of predators, a mathematical model of the gazelle escape process is established, as shown in:
$$ \mathrm{gazelle}_{t+1} = \begin{cases} \mathrm{gazelle}_t + CF \left[ LB + R \otimes (UB - LB) \right] \otimes U, & \text{if } r \le \mathrm{PSRs} \\ \mathrm{gazelle}_t + \left[ \mathrm{PSRs}(1 - r) + r \right] \left( \mathrm{gazelle}_{r_1} - \mathrm{gazelle}_{r_2} \right), & \text{otherwise} \end{cases} \tag{12} $$
where r1 and r2 are random indices of the gazelle population, LB and UB are the lower and upper bounds of the search space, and U is a binary matrix whose entries are obtained by comparing random numbers in the range [0, 1] with 0.34: $U = \begin{cases} 0, & \text{if } r < 0.34 \\ 1, & \text{otherwise} \end{cases}$.
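A condensed Python sketch of the two exploitation updates and the PSR escape is given below, reusing the levy_steps helper sketched above. The fifty-fifty split between fleeing and chasing, the default value of S, and the handling of μ are simplifying assumptions of this sketch; Algorithm 1 in Section 3.4 additionally alternates μ = ±1 with the iteration parity and splits the two moves by population half.
```python
import numpy as np

def exploitation_step(gazelle, elite, t, T, S=1.0, mu=1.0):
    """Exploitation moves of Equations (8) and (10); the random fifty-fifty
    split between fleeing and chasing is an assumption of this sketch."""
    n, d = gazelle.shape
    R = np.random.rand(n, d)
    RL = levy_steps(n, d)                 # Levy vector, Equations (6), (7), and (9)
    RB = np.random.normal(size=(n, d))    # Brownian vector, Equation (5)
    CF = (1 - t / T) ** (2 * t / T)       # cumulative effect of predators, Equation (11)
    flee  = gazelle + S * mu * R  * RL * (elite - RL * gazelle)   # Equation (8)
    chase = gazelle + S * mu * CF * RB * (elite - RL * gazelle)   # Equation (10)
    return np.where(np.random.rand(n, 1) > 0.5, flee, chase)

def psr_escape(gazelle, t, T, lb, ub, psr=0.34):
    """Predator-success-rate escape of Equation (12)."""
    n, d = gazelle.shape
    CF = (1 - t / T) ** (2 * t / T)
    r = np.random.rand()
    if r <= psr:
        U = (np.random.rand(n, d) >= 0.34).astype(float)   # binary matrix U
        return gazelle + CF * (lb + np.random.rand(n, d) * (ub - lb)) * U
    r1, r2 = np.random.randint(n, size=2)                  # random population indices
    return gazelle + (psr * (1 - r) + r) * (gazelle[r1] - gazelle[r2])
```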

3. MPSOGOA

The GOA possesses the advantage of finding effective solutions for most optimization problems. However, it is characterized by the drawback of low convergence accuracy and slow convergence speed. This paper addresses this issue from three perspectives: introducing a chaotic strategy to enhance the quality of initial solutions; implementing population-wide perturbation to improve the convergence speed in the early iterations and the convergence accuracy in the later iterations; and integrating PSO to emphasize the significance of individual gazelle experiences, effectively balancing the exploration and exploitation aspects of the algorithm.

3.1. Chaos Strategy

Chaotic motion is non-repetitive and has the characteristics of randomness and ergodicity. In recent years, chaotic mapping has been used by many scholars [45,46,47,48,49] in optimization algorithms and has achieved good results in improving population diversity. The initial population of the GOA is randomly generated, so the quality of the initial gazelle population is uncertain and individual gazelles cannot traverse the feasible region. The diversity and uniformity of the initial population affect the optimization ability of the algorithm. Chaotic mapping searches faster than random search, can prevent the algorithm from falling into local optima, and improves its global search ability. Reasonable use of chaos theory in the population initialization stage can distribute individuals evenly within the feasible region, thereby improving population diversity and uniformity. Many studies use the Logistic map, but its traversal is uneven, resulting in unsatisfactory convergence speed.
In this paper, the piecewise linear chaotic map (PLCM, also called the Piecewise map) is used to initialize the positions of individual gazelles. The Piecewise map produces uniform initial values within [0, 1] and performs better than the Logistic map in terms of uniformity. Therefore, the Piecewise chaotic sequence can be introduced into the GOA, and the characteristics of the chaotic sequence can effectively improve the ability of the GOA to search for the optimal solution. Its expression is given by Equation (13):
$$ x(t+1) = \begin{cases} \dfrac{x(t)}{p}, & 0 \le x(t) < p \\ \dfrac{x(t) - p}{0.5 - p}, & p \le x(t) < 0.5 \\ \dfrac{1 - p - x(t)}{0.5 - p}, & 0.5 \le x(t) < 1 - p \\ \dfrac{1 - x(t)}{p}, & 1 - p \le x(t) < 1 \end{cases} \tag{13} $$
With the number of iterations set to 1000, the distribution histograms and scatter plots generated by the PLCM and the Logistic map are shown in Figure 1. The figure shows that the initial gazelle population based on the PLCM is more evenly distributed, avoiding concentration around particular points as well as the Logistic map's characteristic distribution of dense values at both ends and sparse values in the middle.
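As an illustration, a minimal Python sketch of PLCM-based population initialization is given below; the control parameter p = 0.4 and the function name are assumptions, since the paper does not report the value it used.
```python
import numpy as np

def piecewise_init(n, d, lb, ub, p=0.4):
    """Initialize an (n, d) gazelle population with the Piecewise (PLCM)
    map of Equation (13), then scale the chaotic values from [0, 1)
    into the search bounds [lb, ub]. p = 0.4 is an assumed control value."""
    x = np.random.rand()                 # random seed value in (0, 1)
    pop = np.empty((n, d))
    for i in range(n):
        for j in range(d):
            if x < p:                    # first branch of Equation (13)
                x = x / p
            elif x < 0.5:
                x = (x - p) / (0.5 - p)
            elif x < 1 - p:
                x = (1 - p - x) / (0.5 - p)
            else:
                x = (1 - x) / p
            pop[i, j] = x
    return lb + pop * (ub - lb)          # map chaotic sequence into [lb, ub]
```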

3.2. Global Perturbation of the Population

The gazelle matrix represents the positions of individual gazelles, so these positions can be optimized by perturbing the matrix. Such perturbation enhances the convergence speed of the algorithm in the initial iterations and increases its capability to escape from local optima in the later iterations, making the algorithm more adept at global search. In the GOA, the perturbation of the gazelle matrix is achieved by randomly selecting individuals and updating their positions. This randomness helps prevent the algorithm from getting trapped in local optima, enabling it to search for better solutions in a larger search space. The mathematical model for the population-wide perturbation is provided by Equations (14) and (15):
$$ \mathrm{new\_gazelle}_{t+1} = \begin{cases} \mathrm{gazelle}_t + r \left( \mathrm{RANDOM} - P \cdot \mathrm{gazelle}_t \right), & \text{if } F_{\mathrm{RANDOM}} < F_{\mathrm{gazelle}_t} \\ \mathrm{gazelle}_t + r \left( \mathrm{gazelle}_t - \mathrm{RANDOM} \right), & \text{otherwise} \end{cases} \tag{14} $$
$$ \mathrm{gazelle}_{t+1} = \begin{cases} \mathrm{new\_gazelle}_{t+1}, & \text{if } F_{\mathrm{new}_{t+1}} < F_{\mathrm{gazelle}_t} \\ \mathrm{gazelle}_t, & \text{otherwise} \end{cases} \tag{15} $$
where $\mathrm{gazelle}_t$ represents the position at the t-th iteration and $F_{\mathrm{gazelle}_t}$ is its corresponding fitness value, $\mathrm{new\_gazelle}_{t+1}$ represents the temporary position at the (t + 1)-th iteration and $F_{\mathrm{new}_{t+1}}$ is its corresponding fitness value, r is a random number within the range [0, 1], P is a coefficient factor of 1 or 2, RANDOM denotes the position of a randomly selected gazelle individual in the population, and $F_{\mathrm{RANDOM}}$ is its corresponding fitness value.
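A Python sketch of this perturbation might look as follows; the helper name is illustrative, and P is drawn from {1, 2} in line with the description above.
```python
import numpy as np

def global_perturbation(pop, fitness, fobj):
    """Population-wide perturbation of Equations (14) and (15)."""
    n, _ = pop.shape
    for i in range(n):
        k = np.random.randint(n)              # RANDOM: a randomly chosen gazelle
        r = np.random.rand()
        P = np.random.choice([1, 2])          # coefficient factor of 1 or 2
        if fitness[k] < fitness[i]:           # F_RANDOM < F_gazelle: move toward it
            trial = pop[i] + r * (pop[k] - P * pop[i])
        else:                                 # otherwise move away from it
            trial = pop[i] + r * (pop[i] - pop[k])
        f_trial = fobj(trial)
        if f_trial < fitness[i]:              # greedy selection, Equation (15)
            pop[i], fitness[i] = trial, f_trial
    return pop, fitness
```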

3.3. Combined with PSO

3.3.1. PSO

The PSO algorithm [50] is a population-based stochastic search algorithm that mimics the social behavior of birds during foraging. It seeks the optimal solution in the solution space using two attributes: velocity and position. Throughout the iterative process of the algorithm, each particle in the population represents a candidate solution. The best position (pbest) of each particle, as well as the global best position (gbest) of the population, are recorded to find the optimal solution for the optimization problem.
Suppose the PSO algorithm is applied to an optimization problem in a d-dimensional search space. For the i-th particle $x_i = [x_{i,1}, x_{i,2}, \ldots, x_{i,d}]$, the velocity and position are updated in the (t + 1)-th iteration according to:
$$ \begin{cases} v_{t+1} = \omega v_t + c_1 \, \mathrm{rand}_1 \left( \mathrm{pbest}_t - x_t \right) + c_2 \, \mathrm{rand}_2 \left( \mathrm{gbest} - x_t \right) \\ x_{t+1} = x_t + v_{t+1} \end{cases} \tag{16} $$
where ω represents the inertia weight, c1 and c2 are acceleration factors, and rand1 and rand2 are uniformly distributed random numbers in the range [0, 1].

3.3.2. Combination of PSO and GOA

Combining two optimization algorithms has become a mainstream trend, and many studies have shown that this approach can achieve remarkable results. Such fusion not only strengthens the overall performance of the algorithm but also compensates for the limitations of a single algorithm, so that the two components complement and reinforce each other. The combination of the GOA and the PSO algorithm therefore provides an effective new method for handling complex optimization problems. In the PSO algorithm, the position update of a particle depends mainly on the historical best position information of the individual and the population. The GOA simulates the behavior of gazelles in nature when they escape from predators; its core position-update formulas, Equations (3), (8), and (10), all indicate that the position update depends mainly on the guidance and influence of the best gazelle individual. If the best gazelle chooses the wrong escape route, it will lead the population to extinction. Mapped onto the optimization problem, the best gazelle choosing a wrong escape route is equivalent to the algorithm falling into a local optimum or a misleading solution during the search. In this case, the whole population (that is, the search of the algorithm) may be affected by this wrong solution, causing the search to deviate from the direction of the global optimum and ultimately producing unsatisfactory results. If the concept of individual escape experience is introduced into the GOA, then even if the best gazelle accidentally falls into a local optimum, other gazelles can still escape from it by relying on their own experience. The combined method is modeled by Equation (17), which updates the velocities and positions of the gazelles:
$$ \begin{cases} v_{t+1} = \omega v_t + c_1 \, \mathrm{rand}_1 \left( \mathrm{pbest} - \mathrm{gazelle}_t \right) + c_2 \, \mathrm{rand}_2 \left( \mathrm{gbest} - \mathrm{gazelle}_t \right) \\ \mathrm{gazelle}_{t+1} = \mathrm{gazelle}_t + v_{t+1} \\ \omega = \omega_{\min} + \left( \omega_{\max} - \omega_{\min} \right) (T - t) / T \end{cases} \tag{17} $$
where ωmin is the minimum inertia weight and ωmax is the maximum inertia weight.
After combining PSO with the GOA, the optimization process not only learns from the wisdom of the top gazelles but also makes full use of the experience accumulated by individual gazelles during escape, which helps the whole population jump out of local optimal traps during the search and enhances the search ability of the algorithm in complex and changeable problem spaces. This fusion also allows excellent solutions to propagate through the population and be exploited more quickly, accelerating the convergence of the whole population.
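A sketch of this particle-swarm movement applied to the gazelle matrix might look as follows in Python; the values of c1, c2, ωmin, and ωmax are common PSO settings assumed here, not values prescribed by the paper.
```python
import numpy as np

def pso_goa_step(gazelle, velocity, pbest, gbest, t, T,
                 c1=2.0, c2=2.0, w_min=0.4, w_max=0.9):
    """Particle-swarm movement of the gazelle matrix, Equation (17)."""
    n, d = gazelle.shape
    w = w_min + (w_max - w_min) * (T - t) / T          # linearly decreasing inertia
    r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
    velocity = (w * velocity
                + c1 * r1 * (pbest - gazelle)           # individual escape experience
                + c2 * r2 * (gbest - gazelle))          # guidance of the alpha gazelle
    return gazelle + velocity, velocity
```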

3.4. Pseudocode of the Proposed Algorithm

The pseudocode of the MPSOGOA outlines the search optimization scheme, as shown in Algorithm 1. The chaotic strategy improves the quality of the initial solutions, while the population-wide perturbation enhances the convergence speed and accuracy of the algorithm. Integration with PSO amplifies the role of individual experiences during the optimization process, preventing premature convergence. After the termination condition is met, the algorithm outputs the identified optimal solution. The combined action of the three strategies yields both high precision and improved convergence speed, increasing the likelihood that MPSOGOA finds the optimal solution.
Algorithm 1 Pseudocode of MPSOGOA
Initialize algorithm parameters s, μ, S, PSRs, C1, C2.
Use Piecewise mapping to initialize the population.
While (iter < max_iter)
 Evaluate the fitness value of the gazelle. Construct pbest, gbest, and Elite.
For each gazelle in the population:
  Generate a new gazelle matrix based on Equation (14).
  Update the gazelle matrix according to Equation (15).
End For
For each gazelle in the population:
  For each dimension:
   If (mod (iter, 2) = 0) then
    μ = −1
   Else
    μ = 1
   End If
   If (r > 0.5) then
    Execute exploration activities on the gazelle matrix according to Equation (3).
   Else
    If iter < size(gazelle,1)/2 then
     Perform Brownian motion on the gazelle matrix according to Equation (10).
    Else
     Execute Lévy flight on the gazelle matrix according to Equation (8).
    End If
   End If
  End For
End For
Execute particle swarm movement on the gazelle matrix based on Equation (17).
Evaluate the fitness value of the gazelle.
Update pbest, gbest, and Elite.
Execute escape movement on the gazelle matrix according to Equation (12).
iter = iter + 1
End While
Return the optimal value from the population.
According to the flowchart in Figure 2, the MPSOGOA process primarily involves initializing the population, evaluating the fitness of the gazelles, and updating candidate solutions. The complexity of MPSOGOA is determined by the maximum iteration count (iter_max), the population size (P), and the problem dimension (D). From the algorithm flowchart, it is evident that the complexity is composed of two parts: the position updates, costing O(iter_max × P × D), and the fitness evaluations, costing O(CFE × P), where CFE denotes the time consumed by the algorithm in evaluating the various functions. Thus, the complete complexity is O(MPSOGOA) = O(iter_max × P × D) + O(CFE × P), which can be simplified to O(MPSOGOA) = O(iter_max × P × D + CFE × P).

4. Experimental Design

This section presents an analysis of the simulation results of the MPSOGOA. The performance of the proposed MPSOGOA is evaluated using 35 test functions and 3 practical engineering design problems. The results of MPSOGOA on the test functions and practical engineering applications are compared with the following algorithms: GOA, grey wolf optimizer (GWO) [51], sine cosine algorithm (SCA) [52], arithmetic optimization algorithm (AOA) [53], PSO [50], differential evolution (DE) [54], chimp optimization algorithm (Chimp) [55], biogeography-based optimization (BBO) [56], and golden jackal optimization (GJO) [57]. All experiments were conducted on a computer running a 64-bit Windows 7 operating system, equipped with a Core i5 CPU at 2.50 GHz and 4.0 GB of RAM. The MATLAB version used was R2021b. To minimize the impact of randomness, all algorithms were independently run 30 times on each test function, with the population size (P) set to 50 and the maximum iteration count (iter_max) set to 1000 generations. The parameter settings for all algorithms in the experiment are shown in Table 1.
To comprehensively evaluate the effectiveness of the algorithm, the following statistical evaluation indicators are utilized: best value, worst value, mean value, standard deviation (SD), and median value.

4.1. Test Function

Table 2 and Table 3 record the 35 test functions used to evaluate the performance of the MPSOGOA. The first 20 problems are classic test functions in optimization. Table 2 provides the test functions, the positions of their optima, and the corresponding optimal values. F1–F5 are continuous unimodal functions. F6 is a discontinuous step function, while F7 is a quartic noise function. F8–F13 are multimodal functions, and F14–F20 are fixed-dimension multimodal functions. Table 3 presents 15 problems selected from the CEC2014 and CEC2017 competitions. F21 and F22 are unimodal functions, F23–F31 and F35 are simple multimodal functions, and F34 is a hybrid function. Unimodal functions typically have a single global optimum and are often used to test the exploitation capability of metaheuristic algorithms. Multimodal functions have multiple local optima, making them more complex than unimodal functions; they are therefore frequently employed to test whether optimization techniques possess good exploration capabilities. The number of local optima of a multimodal function grows exponentially with the number of design variables, so an algorithm must balance exploration and exploitation to maintain search efficiency and avoid getting trapped in local optima. These test functions are used to infer the potential of the algorithm to find optimal solutions in real-world problems.

4.2. Practical Engineering Applications

Testing optimization techniques on real-world engineering problems, in which suitable parameter values must be designed, shows how well they minimize the overall design cost. This study focuses on the welded beam design problem, the compression spring design problem, and the pressure vessel design problem in mechanical engineering. Most engineering design problems in practical applications are governed by equality and inequality constraints, which are handled in the design objective function using penalty functions. The application of MPSOGOA to these mechanical engineering design problems is compared with the results obtained from nine other metaheuristic algorithms.
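As a sketch of this constraint handling, a simple static-penalty wrapper is shown below; the penalty weight ρ = 10⁶ and the function names are assumptions, since the paper does not specify its exact penalty formulation.
```python
def penalized(fobj, gfunc, rho=1e6):
    """Static-penalty wrapper: each violated inequality g(x) <= 0 contributes
    rho * violation^2 to the objective. rho = 1e6 is an assumed weight."""
    def wrapped(x):
        violation = sum(max(0.0, g) ** 2 for g in gfunc(x))
        return fobj(x) + rho * violation
    return wrapped
```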

4.2.1. Welded Beam

Welded beam design is a minimization problem often utilized to assess the capability of optimization techniques in addressing practical issues. It involves manufacturing a welded beam at minimum cost under multiple constraints. The proposed MPSOGOA, along with nine other metaheuristic algorithms, is applied to the welded beam design problem to minimize the manufacturing cost. Figure 3 provides an illustration of the welded beam. The decision variables are the weld thickness (h), the length of the welded joint (l), and the height (t) and thickness (b) of the steel bar. The constraints encompass shear stress (τ), bending stress (σ), column buckling load (Pc), beam deflection (δ), and lateral constraints. The objective function and constraints for this problem are as follows:
$$ \min f(x_1, x_2, x_3, x_4) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 \left( 14 + x_2 \right) $$
$$ \left[ x_1, x_2, x_3, x_4 \right] = \left[ h, l, t, b \right] $$
The bounds of the variables are as follows:
$$ 0.1 \le x_1 \le 2.0, \quad 0.1 \le x_2 \le 10.0, \quad 0.1 \le x_3 \le 10.0, \quad 0.1 \le x_4 \le 2.0 $$
The constraints are as follows:
$$ \begin{aligned} g_1(x) &= \tau(x) - \tau_{\max} \le 0 \\ g_2(x) &= \sigma(x) - \sigma_{\max} \le 0 \\ g_3(x) &= \delta(x) - \delta_{\max} \le 0 \\ g_4(x) &= x_1 - x_4 \le 0 \\ g_5(x) &= P - P_c(x) \le 0 \\ g_6(x) &= 0.125 - x_1 \le 0 \\ g_7(x) &= 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14 + x_2) - 5 \le 0 \end{aligned} $$
The intermediate variables of the constraints are as follows:
$$ \begin{aligned} \tau(x) &= \sqrt{ (\tau')^2 + 2 \tau' \tau'' \frac{x_2}{2R} + (\tau'')^2 } \\ \tau' &= \frac{P}{\sqrt{2} x_1 x_2}, \qquad \tau'' = \frac{M R}{J} \\ M &= P \left( L + 0.5 x_2 \right), \qquad R = \sqrt{ \frac{x_2^2}{4} + \left( \frac{x_1 + x_3}{2} \right)^2 } \\ J &= 2 \left\{ \sqrt{2} x_1 x_2 \left[ \frac{x_2^2}{12} + \left( \frac{x_1 + x_3}{2} \right)^2 \right] \right\} \\ \sigma(x) &= \frac{6 P L}{x_4 x_3^2}, \qquad \delta(x) = \frac{6 P L^3}{E x_3^2 x_4} \\ P_c(x) &= \frac{4.013 E \sqrt{ x_3^2 x_4^6 / 36 }}{L^2} \left( 1 - \frac{x_3}{2L} \sqrt{ \frac{E}{4G} } \right) \end{aligned} $$
where σmax = 3 × 10⁴ psi, P = 6 × 10³ lb, L = 14 in, δmax = 0.25 in, E = 30 × 10⁶ psi, τmax = 13,600 psi, and G = 1.2 × 10⁷ psi.
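Under the formulation above, the cost function and the constraint set g1–g7 can be coded as follows; this is an illustrative Python sketch with assumed names, reusing the penalized wrapper sketched at the start of Section 4.2.
```python
import math

# Constants from the paper
P_LOAD, L_BEAM, E_MOD, G_MOD = 6e3, 14.0, 30e6, 1.2e7
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 3e4, 0.25

def welded_beam_cost(x):
    h, l, t, b = x
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14 + l)

def welded_beam_constraints(x):
    h, l, t, b = x
    tau_p = P_LOAD / (math.sqrt(2) * h * l)                    # tau'
    M = P_LOAD * (L_BEAM + 0.5 * l)
    R = math.sqrt(l**2 / 4 + ((h + t) / 2) ** 2)
    J = 2 * math.sqrt(2) * h * l * (l**2 / 12 + ((h + t) / 2) ** 2)
    tau_pp = M * R / J                                         # tau''
    tau = math.sqrt(tau_p**2 + tau_p * tau_pp * l / R + tau_pp**2)
    sigma = 6 * P_LOAD * L_BEAM / (b * t**2)                   # bending stress
    delta = 6 * P_LOAD * L_BEAM**3 / (E_MOD * t**2 * b)        # deflection, as given above
    Pc = (4.013 * E_MOD * math.sqrt(t**2 * b**6 / 36) / L_BEAM**2
          * (1 - t / (2 * L_BEAM) * math.sqrt(E_MOD / (4 * G_MOD))))
    return [tau - TAU_MAX, sigma - SIGMA_MAX, delta - DELTA_MAX,
            h - b, P_LOAD - Pc, 0.125 - h, welded_beam_cost(x) - 5]

# Combined with the penalty wrapper sketched earlier:
# weld_obj = penalized(welded_beam_cost, welded_beam_constraints)
```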

4.2.2. Compression Spring Design Problem

The objective of spring design is to minimize the total weight of the tension/compression spring. As depicted in Figure 4, this design problem is controlled by three parameters: the wire diameter (d), the mean coil diameter (D), and the number of active coils in the spring (P). The decision vector, variable bounds, design function, and constraints for the spring design problem are given below.
$$ l = \left[ l_1, l_2, l_3 \right] = \left[ d, D, P \right] $$
The boundary conditions of the variables are as follows:
$$ 0.05 \le l_1 \le 2, \quad 0.25 \le l_2 \le 1.3, \quad 2 \le l_3 \le 15 $$
$$ \mathrm{Minimize} \; f(l) = \left( l_3 + 2 \right) l_2 l_1^2 $$
The constraints of the spring design problem are as follows:
$$ \mathrm{s.t.} \quad \begin{aligned} g_1(l) &= 1 - \frac{l_2^3 l_3}{71785 l_1^4} \le 0 \\ g_2(l) &= \frac{4 l_2^2 - l_1 l_2}{12566 \left( l_2 l_1^3 - l_1^4 \right)} + \frac{1}{5108 l_1^2} - 1 \le 0 \\ g_3(l) &= 1 - \frac{140.45 l_1}{l_2^2 l_3} \le 0 \\ g_4(l) &= \frac{l_1 + l_2}{1.5} - 1 \le 0 \end{aligned} $$
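A Python sketch of this objective and its four constraints, with assumed names, is given below; it can likewise be combined with the penalized wrapper sketched earlier.
```python
def spring_weight(l):
    d, D, N = l                        # wire diameter, mean coil diameter, active coils
    return (N + 2) * D * d**2

def spring_constraints(l):
    d, D, N = l
    return [1 - D**3 * N / (71785 * d**4),
            (4 * D**2 - d * D) / (12566 * (D * d**3 - d**4))
            + 1 / (5108 * d**2) - 1,
            1 - 140.45 * d / (D**2 * N),
            (d + D) / 1.5 - 1]         # each g(l) <= 0
```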

4.2.3. Pressure Vessel Design

The primary objective of the pressure vessel design problem is to minimize the production cost of the pressure vessel to the greatest extent possible (Figure 5). This problem is governed by four control parameters: the thickness of the pressure vessel (Ts), the thickness of the head (Th), the internal radius of the vessel (R), and the length of the vessel head (L). The decision vector, variable bounds, design function, and constraints for the pressure vessel design problem are given below.
$$ x = \left[ l_1, l_2, l_3, l_4 \right] = \left[ T_s, T_h, R, L \right] $$
The variation range of variables is as follows:
$$ 0 \le l_1 \le 99, \quad 0 \le l_2 \le 99, \quad 10 \le l_3 \le 200, \quad 10 \le l_4 \le 200 $$
$$ \mathrm{Minimize} \; f(x) = 0.6224 l_1 l_3 l_4 + 1.7781 l_2 l_3^2 + 3.1661 l_1^2 l_4 + 19.84 l_1^2 l_3 $$
The constraints of the pressure vessel design problem are as follows:
$$ \mathrm{s.t.} \quad \begin{aligned} g_1(l) &= -l_1 + 0.0193 l_3 \le 0 \\ g_2(l) &= -l_2 + 0.00954 l_3 \le 0 \\ g_3(l) &= -\pi l_3^2 l_4 - \frac{4}{3} \pi l_3^3 + 1296000 \le 0 \\ g_4(l) &= l_4 - 240 \le 0 \end{aligned} $$
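The corresponding Python sketch, again with assumed names, is:
```python
import math

def vessel_cost(l):
    Ts, Th, R, L_head = l
    return (0.6224 * Ts * R * L_head + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L_head + 19.84 * Ts**2 * R)

def vessel_constraints(l):
    Ts, Th, R, L_head = l
    return [-Ts + 0.0193 * R,
            -Th + 0.00954 * R,
            -math.pi * R**2 * L_head - 4.0 / 3.0 * math.pi * R**3 + 1296000,
            L_head - 240]              # each g(l) <= 0
```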

5. Results and Discussion

In this section, to validate the performance of the proposed MPSOGOA, 35 test functions and 3 real-world engineering optimization problems are employed for performance testing, and comparisons are made with the original GOA and currently popular metaheuristic algorithms. The entire experiment is conducted in the following parts: (1) analysis of the experimental results of classical test functions, (2) analysis of the experimental results of the CEC2014 and CEC2017 composite test functions, (3) convergence performance of the MPSOGOA, and (4) result analysis of the 3 engineering optimization problems.
The performance of each algorithm is measured using five indicators: best value, worst value, mean, standard deviation, and median. Furthermore, the performance of the MPSOGOA is evaluated using the Friedman rank-sum test and the Wilcoxon signed-rank test.
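Both tests are available in standard statistical libraries. For instance, the following Python sketch uses SciPy, with illustrative random data standing in for the 30-run results; the 0.05 significance threshold follows Section 5.1.1.
```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

# results[name] holds the 30 independent best values of one algorithm
# on a single test function (illustrative random data here).
rng = np.random.default_rng(0)
results = {name: rng.random(30) for name in ["MPSOGOA", "GOA", "GWO", "PSO"]}

# Pairwise Wilcoxon signed-rank test: p < 0.05 indicates a significant
# difference between MPSOGOA and the compared algorithm.
for name in ["GOA", "GWO", "PSO"]:
    stat, p = wilcoxon(results["MPSOGOA"], results[name])
    print(f"MPSOGOA vs {name}: p = {p:.4g}")

# Friedman rank-sum test across all algorithms on the same runs.
stat, p = friedmanchisquare(*results.values())
print(f"Friedman test: p = {p:.4g}")
```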

5.1. Test Function Results

5.1.1. Analysis of CEC2005 Experimental Results

Table 4 presents the experimental results of 10 algorithms on 20 benchmark functions, including the best results obtained from 30 independent runs (Best), the worst results (Worst), the mean values (Mean), the standard deviations (Std), the medians (Median), and the Wilcoxon signed-rank test rankings (Rank) of each algorithm across the 20 benchmark functions. The final ranking is determined by the Friedman rank sum ranking of each algorithm across the 20 benchmark functions.
Unimodal functions have only one optimum, so they are often used to test the exploitation capability of algorithms. Table 4 shows that MPSOGOA performs best on F1–F4. When solving F1, its convergence accuracy is improved by 180 orders of magnitude compared with GOA and by at least 61 orders of magnitude compared with the other algorithms. When solving F2, F3, and F4, the convergence accuracy is improved by more than 57 orders of magnitude and the standard deviation by more than 40 orders of magnitude compared with GOA. On F5–F7, the optimum values are also improved relative to GOA: by two orders of magnitude on F6 and one order of magnitude on F7, with a smaller standard deviation. This means that stability improves along with convergence accuracy. These results indicate that MPSOGOA has good exploitation capability. F8–F13 are multimodal functions and F14–F20 are fixed-dimension multimodal functions, which usually have multiple extrema and are often used to test the exploration ability of an algorithm. On the multimodal functions F8–F13, MPSOGOA performs better than the other comparison algorithms on most functions and can find the optimal solution. When solving F12 and F13, the convergence accuracy of MPSOGOA is improved by two orders of magnitude compared with GOA, showing excellent exploration ability on multimodal functions. On F14 and F15, the optimal value of MPSOGOA is similar to those of the other algorithms, but its standard deviation is lower. Overall, the experimental results show that MPSOGOA performs well on both unimodal and multimodal functions, with excellent convergence accuracy and stability.
Table 5 shows the Friedman rankings of the 10 algorithms in the benchmark test function. MPSOGOA achieved good results in the Friedman test. The MPSOGOA proposed in this article ranks first among the 10 optimization algorithms.
Furthermore, we conducted the Wilcoxon signed-rank test to assess the significance of differences between MPSOGOA and the other nine algorithms. When the p-value is less than 0.05, it is considered that there is a significant difference between the two algorithms for that specific function. The results of the Wilcoxon signed-rank test between MPSOGOA and the nine contrastive algorithms are presented in Table 6. It is evident from the table that MPSOGOA exhibits significant differences from the nine contrastive algorithms across the majority of the functions. Overall, across the 20 benchmark test functions, the performance of the MPSOGOA surpasses that of the nine contrastive algorithms, demonstrating strong competitiveness in the achieved results.

5.1.2. Analysis of Experimental Results of CEC2014 and CEC2017 Combined Test Functions

The ability of the algorithm to find the global optimal solution is evaluated on the combined test functions from CEC2014 and CEC2017. The test results of MPSOGOA and the other nine comparison algorithms are shown in Table 7. As can be seen from Table 7, MPSOGOA achieves better optimization results on all combined functions. On function F21, MPSOGOA is the only algorithm that converges to the global optimal solution. In addition, on F23, F24, and F31, both MPSOGOA and GOA successfully find the global optimal solution; however, comparing the standard deviations shows that MPSOGOA is more stable during the search and finds the global optimum more reliably. For the F22, F25, F26, F27, F29, F30, F34, and F35 functions, MPSOGOA achieves significant improvement in both optimization accuracy and stability compared with GOA. This series of improvements not only enhances the robustness of the algorithm but also further validates the advantages of MPSOGOA in solving complex optimization problems. In summary, by combining the strengths of PSO and the GOA, MPSOGOA successfully improves the search for the global optimal solution. Whether faced with the challenge of a single function or a series of combined functions, MPSOGOA demonstrates superior optimization performance and stability.
Compared to the GOA, MPSOGOA exhibits improved convergence accuracy and a strong ability to escape local optima. Additionally, it achieves smaller standard deviations, notably reducing the standard deviations in F21, F22, F24, and F31, suggesting that MPSOGOA significantly surpasses the GOA in terms of stability.
Table 8 shows the results of the Wilcoxon signed-rank test for 15 composite functions. The results show that there are significant differences between MPSOGOA and other algorithms on most test functions.
The rankings of the Friedman test are given in Table 9. MPSOGOA ranks first among the 10 algorithms, showing that its optimization ability exceeds that of the other algorithms and that it can solve most optimization problems.

5.1.3. Convergence Speed

Figure 6 compares the convergence curves of MPSOGOA with the original GOA and eight other optimization algorithms on the classical test functions. Premature convergence early in the optimization process can prevent an algorithm from reaching the final global optimum. It can be observed that MPSOGOA quickly converges to the optimal solution on most test functions without being constrained by local optima. The convergence curves of F1–F4 demonstrate the accelerated convergence speed of MPSOGOA. The early convergence to the optimal solution on F5, F9, F10, F11, F15, and F20 is attributed to the chaotic strategy and the population-wide perturbation in the initial iterations. On F7, the algorithm stalls temporarily, but the individual experience of the particles and the population-wide perturbation take effect in the later iterations, enabling the algorithm to escape local optima and eventually find the global optimum. Compared to GOA, MPSOGOA converges significantly faster to the global optimal solution on F1–F7, F9–F13, and F15.
Figure 7 shows the comparison of convergence curves of MPSOGOA, GOA, and eight other optimization algorithms in CEC2014 and CEC2017 combined test functions. Compared with other algorithms, MPSOGOA has an advantage in convergence speed. The convergence curve of MPSOGOA is always lower than the convergence curve of other algorithms, and the convergence curve drops significantly faster. The convergence rate of MPSOGOA on F21, F26, F27, F28, and F31 is obviously better than other algorithms. In the early stage of iteration, MPSOGOA’s convergence ability is far superior to other algorithms, and it can quickly approach the optimal solution in a short time. This is because the algorithm adopts PLCM for population initialization in the initial stage, constructs the initial population with relatively uniform distribution, and improves the quality of the initial population, especially in the initial stage of functions F21 and F31, which have better fitness values than other algorithms. In addition, MPSOGOA shows stable and fast convergence on function F23 with small fluctuation due to the combination of population global perturbation strategy and particle swarm strategy. This further confirms MPSOGOA’s significant advantages in terms of global convergence speed and optimization accuracy.
After comprehensive analysis of MPSOGOA’s optimization accuracy and convergence speed, the MPSOGOA has significantly improved both in terms of convergence speed and global optimization accuracy, and shows advantages in terms of stability. Compared with other algorithms, MPSOGOA has less fluctuation in results during multiple runs, meaning that its output results are more reliable and stable. The improved method is effective.

5.1.4. Ablation Experiment

Ablation experiments were conducted on MPSOGOA and GOA to comprehensively validate the effectiveness of the three proposed strategies during the optimization process. The three strategies yield the following variants:
  • GOA1: GOA with only the PLCM mapping strategy.
  • GOA2: GOA with only the population-wide perturbation.
  • GOA3: GOA with only the PSO integration.
  • GOA4: GOA with both the PLCM mapping and the population-wide perturbation.
  • GOA5: GOA with both the PLCM mapping and the PSO integration.
  • GOA6: GOA with both the population-wide perturbation and the PSO integration.
The tests were based on the classical test functions in Table 10, with the population size set to 50 and the iteration count to 1000. Each algorithm was independently run 30 times, and the optimal value, worst value, average value, standard deviation, and median were calculated.
From Table 10 and Figure 8, it is evident that the convergence accuracy of GOA1 (PLCM mapping only), GOA2 (population-wide perturbation only), and GOA3 (PSO integration only) surpasses that of the standard GOA across the 15 test functions, and their convergence speed on these functions is also superior. This indicates the efficacy of each individual strategy in enhancing the GOA. A comparison of the results of GOA1, GOA2, and GOA3 with those of GOA4, GOA5, and GOA6 reveals that fusing two improvement strategies generally yields better convergence accuracy than using a single strategy, while the convergence speed is also enhanced, underscoring the synergistic and stable effectiveness of all the improvement strategies. Their collective impact improves the solving capability of MPSOGOA: Table 10 and Figure 8 also show that the convergence accuracy of MPSOGOA, which integrates all three strategies, surpasses that of the GOA variants employing one or two strategies. Notably, MPSOGOA exhibits the fastest convergence speed on the 15 test functions.

5.2. Engineering Problem Results

5.2.1. Welded Beam

Table 11 presents the experimental results of the welded beam design problem. The table includes the optimal solution obtained by MPSOGOA and the other nine optimization algorithms (GOA, GWO, SCA, AOA, PSO, DE, Chimp, BBO, and GJO), along with the corresponding optimal variables, worst value, mean, standard deviation, and median, as well as the values from the signed-rank test. It is evident from the table that the statistical results of the proposed MPSOGOA are optimal in all respects, indicating its strong optimization effectiveness in solving practical engineering applications and its ability to effectively reduce the cost of the welded beam design. MPSOGOA achieves the optimal cost value of 1.67022, with the corresponding optimal decision variables being weld thickness h = 0.198832, length l = 3.33737, bar height t = 9.19202, and bar thickness b = 0.198832. Compared to the original GOA, MPSOGOA exhibits a smaller standard deviation, signifying superior stability. The signed-rank test results in the table show that the p-values between MPSOGOA and the other nine algorithms are all less than 0.05, indicating statistically significant differences.
Figure 9 shows the convergence curves of each algorithm on the welded beam design problem. These algorithms can quickly converge to the optimal solution.

5.2.2. Compression Spring Design

Table 12 presents the experimental results for the compression spring design problem, including the optimal solution obtained by MPSOGOA and the other nine optimization algorithms, along with the corresponding optimal variables, worst value, mean, standard deviation, and median, as well as the values from the signed-rank test. MPSOGOA achieves the optimal cost value of 0.0126652, with the corresponding optimal decision variables being wire diameter 0.0516905, mean coil diameter 0.356752, and number of active coils 11.287. MPSOGOA and DE achieve the same optimal cost value on this problem. Furthermore, compared to the original GOA, MPSOGOA outperforms GOA in all statistical measures. The signed-rank test results show that the p-values between MPSOGOA and the other nine algorithms are all less than 0.05, indicating statistically significant differences.
Figure 10 shows the convergence curves of each algorithm on the compression spring design problem. Except for the PSO and Chimp algorithms, all algorithms quickly converge to the optimal solution.

5.2.3. Pressure Vessel Design

Table 13 presents the experimental results for the pressure vessel design problem, including the optimal solution obtained by MPSOGOA and the other nine optimization algorithms, along with the corresponding optimal variables, worst value, mean, standard deviation, and median, as well as the values from the signed-rank test. The statistical results of the proposed MPSOGOA are optimal in all respects, indicating its strong optimization effectiveness in solving practical engineering applications and its ability to effectively reduce the cost of the pressure vessel design. MPSOGOA achieves the optimal cost value of 5885.33, with the corresponding optimal decision variables being vessel thickness 0.778168, head thickness 0.384649, internal vessel radius 40.3196, and vessel head length 200. Compared to the original GOA, MPSOGOA exhibits a significantly smaller standard deviation, suggesting superior stability in seeking the optimal solution. The signed-rank test results show that the p-values between MPSOGOA and the other nine algorithms are all less than 0.05, indicating statistically significant differences.
Figure 11 shows the convergence curves of each algorithm on the pressure vessel design problem. MPSOGOA, GOA, GWO, SCA, DE, Chimp, and GJO are all able to quickly converge to the optimal solution. AOA initially falls into a local optimum but eventually escapes and converges to the optimal value.

6. Conclusions

In this paper, we present MPSOGOA, an enhanced version of the GOA that incorporates a chaotic strategy to refine the quality of the initial population. A population-wide perturbation is strategically applied to augment the convergence speed and precision of the algorithm. Furthermore, through integration with PSO, emphasis is placed on leveraging individual experiences within the optimization process, culminating in a comprehensive enhancement of algorithmic performance. MPSOGOA demonstrates promising outcomes across a suite of 35 test functions and 3 engineering design problems. Notably, on a majority of the test functions, MPSOGOA consistently identifies optimal solutions and displays smaller standard deviations than the original GOA, signifying a marked improvement in stability. Additionally, in the Friedman test, MPSOGOA attains the foremost position among all comparative algorithms. Across the three engineering design problems, the performance metrics of MPSOGOA surpass those of established optimization algorithms including GOA, GWO, SCA, AOA, PSO, DE, Chimp, BBO, and GJO. To summarize, based on the comparative analysis on both test functions and engineering designs, MPSOGOA emerges as a superior choice for addressing optimization challenges.

Author Contributions

Conceptualization, S.Q., H.Z., W.S. and J.Y.; methodology, J.Y. and J.W.; validation, S.Q., H.Z. and J.W.; formal analysis, J.Y.; investigation, J.Y.; resources, J.W.; data curation, H.Z.; writing—original draft preparation, S.Q. and H.Z.; writing—review and editing, J.Y. and J.W.; supervision, H.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Aditya, N.; Mahapatra, S.S. Switching from exploration to exploitation in gravitational search algorithm based on diversity with Chaos. Inf. Sci. 2023, 635, 298–327. [Google Scholar] [CrossRef]
  2. Gonzalez-Ayala, P.; Alejo-Reyes, A.; Cuevas, E.; Mendoza, A. A Modified Simulated Annealing (MSA) Algorithm to Solve the Supplier Selection and Order Quantity Allocation Problem with Non-Linear Freight Rates. Axioms 2023, 12, 459. [Google Scholar] [CrossRef]
  3. Zheng, W.M.; Liu, N.; Chai, Q.W.; Liu, Y. Application of improved black hole algorithm in prolonging the lifetime of wireless sensor network. Complex Intell. Syst. 2023, 9, 5817–5829. [Google Scholar] [CrossRef]
  4. Mansuwan, K.; Jirapong, P.; Thararak, P. Optimal battery energy storage planning and control strategy for grid modernization using improved genetic algorithm. Energy Rep. 2023, 9, 236–241. [Google Scholar] [CrossRef]
  5. Wei, L.; Zhang, Q.; Yang, B. Improved Biogeography-Based Optimization Algorithm Based on Hybrid Migration and Dual-Mode Mutation Strategy. Fractal Fract. 2022, 6, 597. [Google Scholar] [CrossRef]
  6. Şahman, M.A.; Korkmaz, S. Discrete artificial algae algorithm for solving job-shop scheduling problems. Knowl.-Based Syst. 2022, 256, 109711. [Google Scholar] [CrossRef]
  7. Salimon, S.A.; Adebayo, I.G.; Adepoju, G.A.; Adewuyi, O.B. Optimal Allocation of Distribution Static Synchronous Compensators in Distribution Networks Considering Various Load Models Using the Black Widow Optimization Algorithm. Sustainability 2023, 15, 15623. [Google Scholar] [CrossRef]
  8. Umam, M.S.; Mustafid, M.; Suryono, S. A hybrid genetic algorithm and tabu search for minimizing makespan in flow shop scheduling problem. J. King Saud Univ. -Comput. Inf. Sci. 2022, 34, 7459–7467. [Google Scholar] [CrossRef]
  9. János, M.M.; Artúr, S.; Gyula, G. Environmental and economic multi-objective optimization of a household level hybrid renewable energy system by genetic algorithm. Appl. Energy 2020, 269, 115058. [Google Scholar]
  10. Lodewijks, G.; Cao, Y.; Zhao, N.; Zhang, H. Reducing CO2 emissions of an airport baggage handling transport system using a particle swarm optimization algorithm. IEEE Access 2021, 9, 121894–121905. [Google Scholar] [CrossRef]
  11. Abualigah, L.; Alkhrabsheh, M. Amended hybrid multi-verse optimizer with genetic algorithm for solving task scheduling problem in cloud computing. J. Supercomput. 2021, 78, 740–765. [Google Scholar] [CrossRef]
  12. Hussain, B.; Khan, A.; Javaid, N.; Hasan, Q.U.; AMalik, S.; Ahmad, O.; Dar, A.H.; Kazmi, A. A WeightedSum PSO Algorithm for HEMS A New Approach for the Design and Diversified Performance Analysis. Electronics 2019, 8, 180. [Google Scholar] [CrossRef]
  13. Paul, K.; Hati, D. A novel hybrid Harris hawk optimization and sine cosine algorithm based home energy management system for residential buildings. Build. Serv. Eng. Res. Technol. 2023, 44, 459–480. [Google Scholar] [CrossRef]
  14. Jiang, C.; Yang, S.; Nie, P.; Xiang, X. Multi-objective structural profile optimization of ships based on improved Artificial Bee Colony Algorithm and structural component library. Ocean. Eng. 2023, 283, 115124. [Google Scholar] [CrossRef]
  15. Abd Elaziz, M.; Dahou, A.; Mabrouk, A.; El-Sappagh, S.; Aseeri, A.O. An efficient artificial rabbits optimization based on mutation strategy for skin cancer prediction. Comput. Biol. Med. 2023, 163, 107154. [Google Scholar] [CrossRef] [PubMed]
  16. Bishla, S.; Khosla, A. Enhanced chimp optimized self-tuned FOPR controller for battery scheduling using Grid and Solar PV Sources. J. Energy Storage 2023, 66, 107403. [Google Scholar] [CrossRef]
  17. Percin, H.B.; Caliskan, A. Whale optimization algorithm based MPPT control of a fuel cell system. Int. J. Hydrogen Energy 2023, 48, 23230–23241. [Google Scholar] [CrossRef]
  18. Jagadish Kumar, N.; Balasubramanian, C. Cost-efficient resource scheduling in cloud for big data processing using metaheuristic search black widow optimization (MS-BWO) algorithm. J. Intell. Fuzzy Syst. 2023, 44, 4397–4417. [Google Scholar] [CrossRef]
  19. Zeng, C.; Qin, T.; Tan, W.; Lin, C.; Zhu, Z.; Yang, J.; Yuan, S. Coverage Optimization of Heterogeneous Wireless Sensor Network Based on Improved Wild Horse Optimizer. Biomimetics 2023, 8, 70. [Google Scholar] [CrossRef]
  20. Chhabra, A.; Hussien, A.G.; Hashim, F.A. Improved bald eagle search algorithm for global optimization and feature selection. Alex. Eng. J. 2023, 68, 141–180. [Google Scholar] [CrossRef]
  21. Liu, Y.; Sun, J.; Shang, Y.; Zhang, X.; Ren, S.; Wang, D. A novel remaining useful life prediction method for lithium-ion battery based on long short-term memory network optimized by improved sparrow search algorithm. J. Energy Storage 2023, 61, 106645. [Google Scholar] [CrossRef]
  22. Xu, M.; Song, Q.; Xi, M.; Zhou, Z. Binary arithmetic optimization algorithm for feature selection. Soft Comput. 2023, 27, 11395–11429. [Google Scholar] [CrossRef] [PubMed]
  23. Chen, M.; Huo, J.; Duan, Y. A hybrid biogeography-based optimization algorithm for three-dimensional bin size designing and packing problem. Comput. Ind. Eng. 2023, 180, 109239. [Google Scholar] [CrossRef]
  24. Long, Y.; Liu, S.; Qiu, D.; Li, C.; Guo, X.; Shi, B.; AbouOmar, M.S. Local Path Planning with Multiple Constraints for USV Based on Improved Bacterial Foraging Optimization Algorithm. J. Mar. Sci. Eng. 2023, 11, 489. [Google Scholar] [CrossRef]
  25. Zou, D.; Li, M.; Ouyang, H. A MOEA/D approach using two crossover strategies for the optimal dispatches of the combined cooling, heating, and power systems. Appl. Energy 2023, 347, 121498. [Google Scholar] [CrossRef]
26. Ramachandran, M.; Mirjalili, S.; Nazari-Heris, M.; Parvathysankar, D.S.; Sundaram, A.; Gnanakkan, C.A.R.C. A hybrid grasshopper optimization algorithm and Harris hawks optimizer for combined heat and power economic dispatch problem. Eng. Appl. Artif. Intell. 2022, 111, 104753. [Google Scholar] [CrossRef]
  27. Hamza, M.A.; Alshahrani, H.M.; Dhahbi, S.; Nour, M.K.; Al Duhayyim, M.; El Din, E.M.; Yaseen, I.; Motwakel, A. Differential Evolution with Arithmetic Optimization Algorithm Enabled Multi-Hop Routing Protocol. Comput. Syst. Sci. Eng. 2023, 45, 1759–1773. [Google Scholar] [CrossRef]
  28. Pashaei, E.; Pashaei, E. Hybrid binary arithmetic optimization algorithm with simulated annealing for feature selection in high-dimensional biomedical data. J. Supercomput. 2022, 78, 15598–15637. [Google Scholar] [CrossRef]
  29. Bhowmik, S.; Acharyya, S. Image encryption approach using improved chaotic system incorporated with differential evolution and genetic algorithm. J. Inf. Secur. Appl. 2023, 72, 103391. [Google Scholar] [CrossRef]
  30. Panahizadeh, V.; Hamidi, E.; Daneshpayeh, S.; Saeifar, H. Optimization of impact strength and elastic modulus of polyamide-based nanocomposites: Using particle swarm optimization method. J. Elastomers Plast. 2024, 56, 244–261. [Google Scholar] [CrossRef]
  31. Kim, K.H.; Jung, Y.H.; Shin, Y.J.; Shin, J.H. Optimizing the drainage system of subsea tunnels using the PSO algorithm. Mar. Georesources Geotechnol. 2024, 42, 266–278. [Google Scholar] [CrossRef]
  32. Pal, K.; Verma, K.; Gandotra, R. Optimal location of FACTS devices with EVCS in power system network using PSO. e-Prime—Adv. Electr. Eng. Electron. Energy 2024, 7, 100482. [Google Scholar] [CrossRef]
  33. Kang, Z.; Duan, R.; Zheng, Z.; Xiao, X.; Shen, C.; Hu, C.; Tang, S.; Qin, W. Grid aided combined heat and power generation system for rural village in north China plain using improved PSO algorithm. J. Clean. Prod. 2024, 435, 140461. [Google Scholar] [CrossRef]
  34. Sirisumrannukul, S.; Intaraumnauy, T.; Piamvilai, N. Optimal control of cooling management system for energy conservation in smart home with ANNs-PSO data analytics microservice platform. Heliyon 2024, 10, e26937. [Google Scholar] [CrossRef] [PubMed]
  35. Sathasivam, K.; Garip, I.; Saeed, S.H.; Yais, Y.; Alanssari, A.I.; Hussein, A.A.; Hammoode, J.A.; Lafta, A.M. A Novel MPPT Method Based on PSO and ABC Algorithms for Solar Cell. Electr. Power Compon. Syst. 2024, 52, 653–664. [Google Scholar] [CrossRef]
  36. Hadi, N.; Jalal, B. Investigation of river water pollution using Muskingum method and particle swarm optimization (PSO) algorithm. Appl. Water Sci. 2024, 14, 68. [Google Scholar]
  37. Shaikh, M.S.; Raj, S.; Babu, R.; Kumar, S.; Sagrolikar, K. A hybrid moth–flame algorithm with particle swarm optimization with application in power transmission and distribution. Decis. Anal. J. 2023, 6, 100182. [Google Scholar] [CrossRef]
  38. Makhija, D.; Sudhakar, C.; Reddy, P.B.; Kumari, V. Workflow Scheduling in Cloud Computing Environment by Combining Particle Swarm Optimization and Grey Wolf Optimization. Comput. Sci. Eng. Int. J. 2022, 12, 1–10. [Google Scholar] [CrossRef]
  39. Adekilekun, M.T.; Abiola, G.A.; Olajide, M.O. Hybrid Optimization Technique for Solving Combined Economic Emission Dispatch Problem of Power Systems. Turk. J. Electr. Power Energy Syst. 2022, 2, 158–167. [Google Scholar]
  40. Osei-Kwakye, J.; Han, F.; Amponsah, A.A.; Ling, Q.; Abeo, T.A. A hybrid optimization method by incorporating adaptive response strategy for Feedforward neural network. Connect. Sci. 2022, 34, 578–607. [Google Scholar] [CrossRef]
  41. Wang, N.; Wang, J.S.; Zhu, L.F.; Wang, H.Y.; Wang, G. A novel dynamic clustering method by integrating marine predators algorithm and particle swarm optimization algorithm. IEEE Access 2020, 9, 3557–3569. [Google Scholar] [CrossRef]
  42. Samantaray, S.; Sahoo, P.; Sahoo, A.; Satapathy, D.P. Flood discharge prediction using improved ANFIS model combined with hybrid particle swarm optimisation and slime mould algorithm. Environ. Sci. Pollut. Res. 2023, 30, 83845–83872. [Google Scholar] [CrossRef]
  43. Wang, D.; Liu, L.; Ben, Y.; Dai, P.; Wang, J. Seabed Terrain-Aided Navigation Algorithm Based on Combining Artificial Bee Colony and Particle Swarm Optimization. Appl. Sci. 2023, 13, 1166. [Google Scholar] [CrossRef]
  44. Agushaka, J.O.; Ezugwu, A.E.; Abualigah, L. Gazelle optimization algorithm: A novel nature-inspired metaheuristic optimizer. Neural Comput. Appl. 2023, 35, 4099–4131. [Google Scholar] [CrossRef]
  45. Lu, W.; Shi, C.; Fu, H.; Xu, Y. A Power Transformer Fault Diagnosis Method Based on Improved Sand Cat Swarm Optimization Algorithm and Bidirectional Gated Recurrent Unit. Electronics 2023, 12, 672. [Google Scholar] [CrossRef]
  46. Nan, A.; Liyong, B. Particle Swarm Algorithm Based on Homogenized Logistic Mapping and Its Application in Antenna Parameter Optimization. Int. J. Inf. Commun. Sci. 2022, 7, 1–6. [Google Scholar]
  47. Yang, D.D.; Mei, M.; Zhu, Y.J.; He, X.; Xu, Y.; Wu, W. Coverage Optimization of WSNs Based on Enhanced Multi-Objective Salp Swarm Algorithm. Appl. Sci. 2023, 13, 11252. [Google Scholar] [CrossRef]
  48. Wei, X.; Zhang, Y.; Zhao, Y. Evacuation path planning based on the hybrid improved sparrow search optimization algorithm. Fire 2023, 6, 380. [Google Scholar] [CrossRef]
  49. Zheng, X.; Nie, B.; Chen, J.; Du, Y.; Zhang, Y.; Jin, H. An improved particle swarm optimization combined with double-chaos search. Math. Biosci. Eng. 2023, 20, 15737–15764. [Google Scholar] [CrossRef]
  50. Ozcan, E.; Mohan, C.K. Particle swarm optimization: Surfing the waves. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; IEEE: Piscataway, NJ, USA, 1999; Volume 3, pp. 1939–1944. [Google Scholar]
  51. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
52. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  53. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  54. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  55. Khishe, M.; Mosavi, M.R. Chimp optimization algorithm. Expert Syst. Appl. 2020, 149, 113338. [Google Scholar] [CrossRef]
  56. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef]
  57. Chopra, N.; Ansari, M.M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
Figure 1. Piecewise mapping distribution plot and histogram. (a) Piecewise mapping distribution histogram; (b) Piecewise mapping distribution scatter plot; (c) Logistic map distribution histogram; (d) Logistic mapping distribution scatter plot.
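Figure 1 contrasts the sample distributions of the two maps. The sketch below is a minimal way to regenerate such samples, assuming the standard piecewise (segmented) chaotic map; the control parameter p = 0.4 and the seed values are illustrative assumptions, not settings taken from the paper.

```python
import numpy as np

def piecewise_map(n, x0=0.37, p=0.4):
    """Iterate the standard piecewise chaotic map on (0, 1).

    p in (0, 0.5) is the segment-control parameter (illustrative value here).
    The orbit fills (0, 1) close to uniformly, which is why the paper prefers
    it to the logistic map for population initialization.
    """
    xs = np.empty(n)
    x = x0
    for i in range(n):
        if x < p:
            x = x / p
        elif x < 0.5:
            x = (x - p) / (0.5 - p)
        elif x < 1 - p:
            x = (1 - p - x) / (0.5 - p)
        else:
            x = (1 - x) / p
        xs[i] = x
    return xs

def logistic_map(n, x0=0.37, r=4.0):
    """Iterate the logistic map x <- r * x * (1 - x) for comparison."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return xs

# Histograms of these two sample sets reproduce the qualitative contrast in
# Figure 1: near-uniform for the piecewise map, U-shaped for the logistic map.
pw, lg = piecewise_map(10000), logistic_map(10000)
```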
Figure 2. MPSOGOA flow chart.
Figure 3. Schematic diagram of welded beam design.
Figure 4. Schematic diagram of compression spring design problem.
Figure 5. Schematic diagram of the pressure vessel design problem.
Figure 6. CEC2005 test function convergence diagram.
Figure 7. Convergence diagram of the CEC2014 and CEC2017 combined test functions.
Figure 8. Convergence results of ablation experiments based on CEC2005.
Figure 9. Convergence curve of WBD.
Figure 10. Convergence curve of CSD.
Figure 11. Convergence curve of PVD.
Table 1. Algorithm parameter settings for comparison.

Algorithm | Parameter | Parameter Value
GOA | PSRs | 0.34
GOA | S | 0.88
GWO | a | [0, 2]
GWO | r1, r2 | [0, 1]
SCA | a | 2
AOA | α | 5
AOA | μ | 0.05
PSO | C1, C2 | 2
PSO | Wmax | 0.9
PSO | Wmin | 0.2
DE | Lower bound of scale factor | 0.2
DE | Upper bound of scale factor | 0.8
BBO | nKeep | 0.2
BBO | Pmutation | 0.9
Table 2. Classic test functions.

ID | Function | Dim | Range | Global Optimum
F1 | $f(x)=\sum_{i=1}^{n} x_i^{2}$ | 30 | [−100, 100] | 0
F2 | $f(x)=\sum_{i=1}^{n}\lvert x_i\rvert+\prod_{i=1}^{n}\lvert x_i\rvert$ | 30 | [−10, 10] | 0
F3 | $f(x)=\sum_{i=1}^{n}\bigl(\sum_{j=1}^{i}x_j\bigr)^{2}$ | 30 | [−100, 100] | 0
F4 | $f(x)=\max_{i}\{\lvert x_i\rvert,\ 1\le i\le n\}$ | 30 | [−100, 100] | 0
F5 | $f(x)=\sum_{i=1}^{n-1}\bigl[100(x_{i+1}-x_i^{2})^{2}+(x_i-1)^{2}\bigr]$ | 30 | [−30, 30] | 0
F6 | $f(x)=\sum_{i=1}^{n}(\lfloor x_i+0.5\rfloor)^{2}$ | 30 | [−100, 100] | 0
F7 | $f(x)=\sum_{i=1}^{n} i\,x_i^{4}+\mathrm{rand}[0,1)$ | 30 | [−1.28, 1.28] | 0
F8 | $f(x)=\sum_{i=1}^{n}-x_i\sin\bigl(\sqrt{\lvert x_i\rvert}\bigr)$ | 30 | [−500, 500] | −418.9829 × Dim
F9 | $f(x)=\sum_{i=1}^{n}\bigl[x_i^{2}-10\cos(2\pi x_i)+10\bigr]$ | 30 | [−5.12, 5.12] | 0
F10 | $f(x)=-a\exp\Bigl(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}x_i^{2}}\Bigr)-\exp\Bigl(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\Bigr)+a+e,\ a=20$ | 30 | [−32, 32] | 0
F11 | $f(x)=\tfrac{1}{4000}\sum_{i=1}^{n}x_i^{2}-\prod_{i=1}^{n}\cos\bigl(\tfrac{x_i}{\sqrt{i}}\bigr)+1$ | 30 | [−600, 600] | 0
F12 | $f(x)=\tfrac{\pi}{n}\bigl\{10\sin^{2}(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^{2}[1+10\sin^{2}(\pi y_{i+1})]+(y_n-1)^{2}\bigr\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, where $y_i=1+\tfrac{x_i+1}{4}$ and $u(x_i,a,k,m)=\begin{cases}k(x_i-a)^{m}, & x_i>a\\ 0, & -a\le x_i\le a\\ k(-x_i-a)^{m}, & x_i<-a\end{cases}$ | 30 | [−50, 50] | 0
F13 | $f(x)=0.1\bigl\{\sin^{2}(3\pi x_1)+\sum_{i=1}^{n-1}(x_i-1)^{2}[1+\sin^{2}(3\pi x_{i+1})]+(x_n-1)^{2}[1+\sin^{2}(2\pi x_n)]\bigr\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | 30 | [−50, 50] | 0
F14 | $f(x)=\Bigl(\tfrac{1}{500}+\sum_{j=1}^{25}\tfrac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^{6}}\Bigr)^{-1}$ | 2 | [−65, 65] | 1
F15 | $f(x)=\sum_{i=1}^{11}\Bigl[a_i-\tfrac{x_1(b_i^{2}+b_i x_2)}{b_i^{2}+b_i x_3+x_4}\Bigr]^{2}$ | 4 | [−5, 5] | 0.00030
F16 | $f(x)=4x_1^{2}-2.1x_1^{4}+\tfrac{1}{3}x_1^{6}+x_1x_2-4x_2^{2}+4x_2^{4}$ | 2 | [−5, 5] | −1.0316
F17 | $f(x)=\bigl(x_2-\tfrac{5.1}{4\pi^{2}}x_1^{2}+\tfrac{5}{\pi}x_1-6\bigr)^{2}+10\bigl(1-\tfrac{1}{8\pi}\bigr)\cos x_1+10$ | 2 | [−5, 5] | 0.398
F18 | $f(x)=[1+(x_1+x_2+1)^{2}(19-14x_1+3x_1^{2}-14x_2+6x_1x_2+3x_2^{2})]\times[30+(2x_1-3x_2)^{2}(18-32x_1+12x_1^{2}+48x_2-36x_1x_2+27x_2^{2})]$ | 2 | [−2, 2] | 3
F19 | $f(x)=-\sum_{i=1}^{4}c_i\exp\bigl(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^{2}\bigr)$ | 3 | [1, 3] | −3.86
F20 | $f(x)=-\sum_{i=1}^{4}c_i\exp\bigl(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^{2}\bigr)$ | 6 | [0, 1] | −3.32
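As a concrete reference for Table 2, the sketch below implements three of the classic benchmarks (F1 sphere, F9 Rastrigin, F10 Ackley) exactly as defined above; any optimizer that returns a position vector can be scored against them. This is an illustrative evaluation harness, not code from the paper.

```python
import numpy as np

def f1_sphere(x):
    """F1: sum of squares; global minimum 0 at the origin."""
    return np.sum(x ** 2)

def f9_rastrigin(x):
    """F9: Rastrigin function; global minimum 0 at the origin."""
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def f10_ackley(x, a=20.0):
    """F10: Ackley function; global minimum 0 at the origin."""
    n = x.size
    return (-a * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + a + np.e)

# Example: evaluate a random 30-dimensional point inside the F1 search range.
x = np.random.uniform(-100, 100, size=30)
print(f1_sphere(x), f9_rastrigin(x), f10_ackley(x))
```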
Table 3. CEC2014 and CEC2017 combined test functions.

ID | Function | Fi*
F21 | Rotated High Conditioned Elliptic Function (CEC 2014 F1) | 100
F22 | Shifted and Rotated Bent Cigar Function (CEC 2017 F1) | 100
F23 | Shifted and Rotated Rosenbrock's Function (CEC 2017 F3) | 300
F24 | Shifted and Rotated Rastrigin's Function (CEC 2017 F4) | 400
F25 | Shifted and Rotated Expanded Schaffer's F6 Function (CEC 2017 F5) | 500
F26 | Shifted and Rotated Weierstrass Function (CEC 2014 F6) | 600
F27 | Shifted and Rotated Lunacek bi-Rastrigin Function (CEC 2017 F6) | 600
F28 | Shifted and Rotated Non-Continuous Rastrigin's Function (CEC 2017 F7) | 700
F29 | Shifted Rastrigin's Function (CEC 2014 F8) | 800
F30 | Shifted and Rotated Levy Function (CEC 2017 F8) | 800
F31 | Shifted and Rotated Schwefel's Function (CEC 2017 F9) | 900
F32 | Shifted Schwefel's Function (CEC 2014 F10) | 1000
F33 | Shifted and Rotated Schwefel's Function (CEC 2014 F11) | 1100
F34 | Hybrid Function 2 (n = 3) (CEC 2017 F11) | 1100
F35 | Shifted and Rotated Expanded Schaffer's F6 Function (CEC 2014 F16) | 1600
Table 4. CEC2005 test results.

Function | Value | MPSOGOA | GOA | GWO | SCA | AOA | PSO | DE | Chimp | BBO | GJO
F1Best2.7433 × 10−2682.89154 × 10−888.50069 × 10−732.54974 × 10−81.0643 × 10−2079.47085 × 10−1251.6481.3579 × 10−270.3244891.2726 × 10−132
Worst1.5559 × 10−2143.40688 × 10−241.00555 × 10−690.02434542.9067 × 10−959.19223 × 10−8261.8832.01606 × 10−181.187245.5191 × 10−127
Average5.1864 × 10−2161.13563 × 10−251.63935 × 10−700.00137882.59489 × 10−963.99524 × 10−9131.8432.11538 × 10−190.6191295.5742 × 10−128
SD06.22009 × 10−252.59621 × 10−700.004505277.96026 × 10−961.66386 × 10−849.4215.13648 × 10−190.1666531.3812 × 10−127
Median5.0964 × 10−2525.30668 × 10−785.55451 × 10−718.70929 × 10−58.3471 × 10−1316.60832 × 10−10124.272.16579 × 10−220.5793779.8842 × 10−130
F2Best1.5987 × 10−1379.12414 × 10−579.04044 × 10−429.4068 × 10−0906.63994 × 10−718.94278.76387 × 10−190.1580916.77923 × 10−76
Worst1.006 × 10−1171.5473 × 10−204.17611 × 10−406.56174 × 10−51.3602 × 10−129177.15555.60081.66733 × 10−120.3180386.08005 × 10−73
Average3.3535 × 10−1195.58832 × 10−226.87572 × 10−414.5299 × 10−64.5341 × 10−1315.9052641.19551.32586 × 10−130.2451526.65278 × 10−74
SD1.8366 × 10−1182.82639 × 10−219.28251 × 10−411.25192 × 10−52.4834 × 10−13032.34399.344873.44014 × 10−130.03866381.37309 × 10−73
Median6.7837 × 10−1337.41398 × 10−493.6684 × 10−414.08327 × 10−75.8246 × 10−2101.47627 × 10−542.70291.31203 × 10−140.2514661.30119 × 10−74
F3Best4.35826 × 10−692.63083 × 10−122.90856 × 10−2427.84780236.09422,892.96.69748 × 10−1034.00532.07975 × 10−58
Worst4.07582 × 10−430.003077543.26631 × 10−1710,369.52.50525 × 10−452541.1338,833.80.019964147.6771.01149 × 10−43
Average1.40494 × 10−440.000211692.07546 × 10−182522.168.35084 × 10−47929.729,602.20.001031185.05813.37788 × 10−45
SD7.4369 × 10−440.000621577.24266 × 10−182409.214.57394 × 10−46484.5894207.890.0036336631.14031.84661 × 10−44
Median7.55093 × 10−621.02692 × 10−71.63874 × 10−211486.212.2905 × 10−102889.35728,5644.07225 × 10−577.91587.89966 × 10−50
F4Best1.9656 × 10−1083.9303 × 10−289.91516 × 10−190.9291861.03144 × 10−733.1822643.39476.74566 × 10−70.5543261.03235 × 10−41
Worst1.70792 × 10−813.0358 × 10−92.82651 × 10−1649.19545.65999 × 10−2413.118279.22220.007000250.9762237.45118 × 10−37
Average5.69308 × 10−831.68167 × 10−102.73887 × 10−1718.21351.88688 × 10−256.5154660.020.000447060.7937453.86938 × 10−38
SD3.11823 × 10−825.9787 × 10−105.608 × 10−1712.38221.03336 × 10−242.429737.540420.001303820.105421.36229 × 10−37
Median4.1655 × 10−1048.56731 × 10−227.15996 × 10−1816.41551.13063 × 10−515.7003660.14976.26622 × 10−050.796262.99064 × 10−39
F5Best21.446822.935424.684727.593628.60678.5603411,068.228.081631.145125.3295
Worst23.574624.465328.72361751.628.7969661.168108,59028.9703351.30828.631
Average22.780323.676126.5531142.03228.694179.480641,41928.862693.347327.1006
SD0.5415460.3355860.896212323.7320.0569614119.22322,907.20.21811570.88550.720972
Median22.807523.692926.203941.66728.692439.447736,874.428.940393.437627.1859
F6Best7.89475 × 10−50.002419551.15757 × 10−53.596344.845864.65703 × 10−1180.98342.037470.3485941.25039
Worst0.03250620.03911951.239054.851335.643243.29787 × 10−8196.0873.355351.019233.73296
Average0.009322580.01571470.4133284.253245.256322.17583 × 10−9125.6772.575390.6289942.4552
SD0.008200610.01138960.2843070.3013150.2006766.15369 × 10−928.84180.3677160.151430.547726
Median0.007134860.01187180.2521594.272345.295635.46017 × 10−10117.8462.610860.6244072.50017
F7Best7.187 × 10−50.000539060.000166180.002974772.47227 × 10−70.01581950.09534632.13809 × 10−50.001384424.75219 × 10−6
Worst0.001952960.003642850.001092140.04542010.000182640.07540050.3824860.001505940.006179760.00038551
Average0.000693910.001390310.000546270.01760462.62781 × 10−50.03892880.2284530.000494920.00340840.00011308
SD0.000414250.000796470.000218840.01059553.57008 × 10−50.01373570.06112220.000390980.000929439.95327 × 10−5
Median0.000615550.001141250.000484830.01420621.47276 × 10−50.03714720.2220440.000458120.003237326.96101 × 10−5
F8Best−8138.49−8515.22−7424.74−4468.92−3828.82−37,835.8−5715.02−5945.94−10770.3−7659.2
Worst−6975.58−6920.39−3296.85−3572.19−2443.18−20,840.3−4765.72−5690.15−7869.18−2614.69
Average−7460.98−7716.17−6220.63−3997.62−3263.11−29484.8−5256.55−5782.94−8909.67−4542.83
SD263.699343.027815.569244.64360.7253770.12234.33260.4447549.3641181.45
Median−7428.94−7655.97−6340.2−3970.16−3347.24−28,410.3−5220.09−5773.53−8934.88−4562.16
F9Best0001.16642 × 10−5019.9018209.2016.17420
Worst003.2257164.4795065.6672264.14714.85186.76990
Average000.21429713.2237038.7374245.4731.9937336.99310
SD000.81553920.1794011.858713.17113.3197914.11020
Median0000.0824535037.8085246.9931.51663 × 10−0536.14310
F10Best8.88178 × 10−168.88178 × 10−167.99361 × 10−159.30212 × 10−58.88178 × 10−161.028 × 10−619.148919.95710.1512324.44089 × 10−15
Worst4.44089 × 10−154.44089 × 10−151.5099 × 10−1420.25178.88178 × 10−161.1551519.909719.96330.3179797.99361 × 10−15
Average4.20404 × 10−152.54611 × 10−151.36779 × 10−1412.65148.88178 × 10−160.11554119.729119.9610.2331174.55932 × 10−15
SD9.01352 × 10−161.8027 × 10−152.39689 × 10−159.4335300.352460.2114680.001634270.04044896.48634 × 10−16
Median4.44089 × 10−158.88178 × 10−161.5099 × 10−1420.04258.88178 × 10−167.0606 × 10−619.827419.96140.22934.44089 × 10−15
F11Best0009.0681 × 10−701.86905 × 10−101.3221500.3808730
Worst000.01303450.7873731.36796 × 10−100.08587242.832670.05627250.7628870
Average000.001525910.1946437.8508 × 10−120.01490532.181310.0139130.5691530
SD000.003973920.2478332.90085 × 10−110.02096380.3414060.01708190.08207240
Median0000.052465300.009860982.171470.005168040.5659460
F12Best2.26979 × 10−60.000150771.02012 × 10−60.3629990.8226741.96181 × 10−1026.14880.1210750.000685180.0577601
Worst0.000880750.001652250.07056217.718220.9850450.51825812793.20.8275380.002397390.293
Average0.000282980.000662600.02606830.9778780.9157260.03818411118.690.2548040.001344620.162571
SD0.000247350.000417910.0148281.352670.03951650.1035292670.460.1439640.000345930.0669722
Median0.000238770.000523060.02567780.5872820.9103363.41822 × 10−744.90910.2239830.001299260.16061
F13Best6.44767 × 10−60.001937272.21166 × 10−51.984262.698131.43267 × 10−10410.6392.482560.01063841.13193
Worst0.08818330.05056030.61202522.45762.97910.04394891659372.996630.03772971.80974
Average0.02713880.01965440.2567313.573182.94420.0040296830,685.32.875680.02497771.49769
SD0.0220660.01516910.1382423.705980.06247790.0088851137,310.40.126310.007911710.168996
Median0.02544430.01440740.2781292.604722.976239.38257 × 10−817,134.52.8940.02490561.49676
F14Best0.9980040.9980040.9980040.9980040.9980040.9980040.9980040.9980040.9980040.998004
Worst0.9980040.99800410.76322.9821112.67050.9980040.9980040.99802515.503812.6705
Average0.9980040.9980042.796121.5272510.47610.9980040.9980040.9980074.756833.80826
SD07.1417 × 10−173.28520.892314.02764004.58599 × 10−63.918953.82079
Median0.9980040.9980040.9980040.9980512.67050.9980040.9980040.9980053.968252.98211
F15Best0.000307480.000307480.000307480.000356150.000315080.000307480.000307480.001229120.000414790.00030749
Worst0.000307480.000307480.02084870.001451880.1118420.001076880.001223170.001313890.02036330.00122336
Average0.000307480.000307480.004365360.000838470.02064970.000748420.000551660.001253360.002597080.00041668
SD1.84314 × 10−146.92721 × 10−130.008178980.000376380.03292580.000306310.000411852.14723 × 10−50.006025550.00028961
Median0.000307480.000307480.000307490.000761720.004499860.000863590.000307480.001247670.000614810.00030753
F16Best−1.03163−1.03163−1.03163−1.03163−1.03163−1.03163−1.03163−1.03163−1.03163−1.03163
Worst−1.03163−1.03163−1.03163−1.03159−1.03163−1.03163−1.03163−1.03161−1.03163−1.03163
Average−1.03163−1.03163−1.03163−1.03162−1.03163−1.03163−1.03163−1.03162−1.03163−1.03163
SD6.32085 × 10−166.32085 × 10−161.83844 × 10−91.01345 × 10−52.58411 × 10−116.71219 × 10−166.77522 × 10−163.90178 × 10−66.17114 × 10−163.7554 × 10−8
Median−1.03163−1.03163−1.03163−1.03162−1.03163−1.03163−1.03163−1.03163−1.03163−1.03163
F17Best0.3978870.3978870.3978870.3979090.3978890.3978870.3978870.3978880.3978870.397887
Worst0.3978870.3978870.3978890.40040.3987530.3978870.3978870.3989640.3978870.397937
Average0.3978870.3978870.3978880.3985420.3980210.3978870.3978870.39810.3978870.397893
SD004.12949 × 10−70.000522260.00022284000.000238834.16358 × 10−119.20093 × 10−6
Median0.3978870.3978870.3978880.3984340.3979340.3978870.3978870.3980060.3978870.39789
F18Best3333333333
Worst333.000013.0000685.3767333.00008303.00001
Average3333.0000112.9459333.000015.73
SD1.20918 × 10−151.2452 × 10−152.38556 × 10−61.25267 × 10−525.18785.83118 × 10−161.72587 × 10−151.90032 × 10−58.238471.57663 × 10−6
Median33333333.0000133
F19Best−3.86278−3.86278−3.86278−3.8624−3.86266−3.85208−3.86278−3.86237−3.86278−3.86278
Worst−3.86278−3.86278−3.85498−3.8533−3.8549−3.03321−3.86278−3.85423−3.86278−3.85489
Average−3.86278−3.86278−3.86248−3.85597−3.86006−3.66744−3.86278−3.85506−3.86278−3.85965
SD2.71009 × 10−152.71009 × 10−150.00142170.002760950.002586250.1747992.71009 × 10−150.00142382.36432 × 10−150.00388555
Median−3.86278−3.86278−3.86278−3.85483−3.86138−3.69565−3.86278−3.85477−3.86278−3.86275
F20Best−3.322−3.322−3.32199−3.13096−3.29032−2.27284−3.322−3.30839−3.322−3.32199
Worst−3.322−3.322−3.08391−1.6919−2.89712−0.784013−3.2031−1.92068−3.2031−3.0156
Average−3.322−3.322−3.24742−2.8927−3.16648−1.69532−3.20707−2.74374−3.2784−3.18099
SD1.48895 × 10−154.29004 × 10−120.07573510.360070.07317640.4874570.02170680.3702370.05827340.0828684
Median−3.322−3.322−3.20301−3.01207−3.17547−1.68691−3.2031−2.79275−3.322−3.19719
Table 5. Friedman ranking of CEC2005.

Algorithm | Friedman Mean Rank | General Mean Rank
MPSOGOA | 2.525 | 1
GOA | 3.175 | 2
GWO | 4.75 | 4
SCA | 8.15 | 10
AOA | 5.6 | 5
PSO | 5.825 | 6
DE | 7.55 | 9
Chimp | 6.9 | 8
BBO | 6.1 | 7
GJO | 4.425 | 3
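The mean ranks above can be reproduced from the per-function average results. A minimal sketch, assuming a results matrix with one row per test function and one column per algorithm (lower average error is better):

```python
import numpy as np
from scipy.stats import rankdata

def friedman_mean_ranks(results):
    """results: (n_functions, n_algorithms) array of average errors.

    Each row is ranked 1..k (ties receive average ranks), then the ranks
    are averaged down the columns to give each algorithm's Friedman mean rank.
    """
    ranks = np.apply_along_axis(rankdata, 1, results)
    return ranks.mean(axis=0)

# Hypothetical example with 3 functions and 3 algorithms:
res = np.array([[1e-5, 2e-3, 0.1],
                [3e-4, 1e-6, 0.2],
                [1e-2, 5e-2, 0.9]])
print(friedman_mean_ranks(res))  # -> [1.33..., 1.66..., 3.0]
```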
Table 6. Wilcoxon signed-rank test results for CEC2005 (p-values of MPSOGOA versus each algorithm).

Function | GOA | GWO | SCA | AOA | PSO | DE | Chimp | BBO | GJO
F1 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11
F2 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 2.43954 × 10−10 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11
F3 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 2.0338 × 10−9 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 7.65879 × 10−5
F4 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11
F5 | 4.57257 × 10−9 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 8.84109 × 10−7 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11
F6 | 0.0223601 | 1.06657 × 10−7 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11
F7 | 6.2828 × 10−6 | 0.185767 | 3.01986 × 10−11 | 4.07716 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 0.0518771 | 3.68973 × 10−11 | 7.38029 × 10−10
F8 | 0.000691252 | 1.41098 × 10−9 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 4.97517 × 10−11 | 3.4742 × 10−10
F9 | NaN | 0.0215693 | 1.21178 × 10−12 | NaN | 1.21178 × 10−12 | 1.21178 × 10−12 | 4.55563 × 10−12 | 1.21178 × 10−12 | NaN
F10 | 9.55053 × 10−5 | 6.53336 × 10−13 | 2.36384 × 10−12 | 7.15185 × 10−13 | 2.36384 × 10−12 | 2.36384 × 10−12 | 2.36384 × 10−12 | 2.36384 × 10−12 | 0.0885796
F11 | NaN | 0.0419262 | 1.21178 × 10−12 | 0.0419262 | 1.21178 × 10−12 | 1.21178 × 10−12 | 8.86583 × 10−7 | 1.21178 × 10−12 | NaN
F12 | 5.97056 × 10−5 | 5.57265 × 10−10 | 3.01986 × 10−11 | 3.01986 × 10−11 | 0.0232434 | 3.01986 × 10−11 | 3.01986 × 10−11 | 4.50432 × 10−11 | 3.01986 × 10−11
F13 | 0.245814 | 5.57265 × 10−10 | 3.01986 × 10−11 | 3.01986 × 10−11 | 4.68563 × 10−8 | 3.01986 × 10−11 | 3.01986 × 10−11 | 0.923442 | 3.01986 × 10−11
F14 | 0.0814042 | 1.21178 × 10−12 | 1.21178 × 10−12 | 1.20094 × 10−12 | NaN | NaN | 1.21178 × 10−12 | 4.4986 × 10−12 | 1.21178 × 10−12
F15 | 1.19287 × 10−6 | 3.01608 × 10−11 | 3.01608 × 10−11 | 3.01608 × 10−11 | 3.01608 × 10−11 | 0.00192305 | 3.01608 × 10−11 | 3.01608 × 10−11 | 3.01608 × 10−11
F16 | 1 | 7.57407 × 10−12 | 7.57407 × 10−12 | 7.57407 × 10−12 | 0.0246374 | 0.00546603 | 7.57407 × 10−12 | 5.21998 × 10−9 | 7.57407 × 10−12
F17 | NaN | 1.21178 × 10−12 | 1.21178 × 10−12 | 1.21178 × 10−12 | NaN | NaN | 1.21178 × 10−12 | 2.93292 × 10−5 | 1.21178 × 10−12
F18 | 0.690562 | 2.34656 × 10−11 | 2.34656 × 10−11 | 2.34505 × 10−11 | 0.081759 | 1.63048 × 10−5 | 2.34656 × 10−11 | 2.56049 × 10−9 | 2.34656 × 10−11
F19 | NaN | 1.21178 × 10−12 | 1.21178 × 10−12 | 1.21178 × 10−12 | 1.21178 × 10−12 | NaN | 1.21178 × 10−12 | 2.64199 × 10−8 | 1.21178 × 10−12
F20 | 2.36567 × 10−12 | 2.36567 × 10−12 | 2.36567 × 10−12 | 2.36567 × 10−12 | 2.36567 × 10−12 | 3.26247 × 10−13 | 2.36567 × 10−12 | 1.37344 × 10−11 | 2.36567 × 10−12
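Each p-value above comes from a pairwise Wilcoxon signed-rank test on the paired per-run results of MPSOGOA and one competitor; the NaN entries correspond to cases where the paired samples are essentially identical (for example, both algorithms reach the exact optimum in every run), leaving no nonzero differences to rank. A minimal sketch with scipy, using hypothetical run data:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical best-fitness values from 30 independent runs of two algorithms.
rng = np.random.default_rng(0)
mpsogoa_runs = rng.normal(1.0, 0.01, 30)
goa_runs = rng.normal(1.1, 0.05, 30)

# Two-sided signed-rank test on the paired differences; p < 0.05 indicates a
# statistically significant difference between the two algorithms.
stat, p = wilcoxon(mpsogoa_runs, goa_runs)
print(p)
```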
Table 7. CEC2014 and CEC2017 combined test function results.

Function | Value | MPSOGOA | GOA | GWO | SCA | AOA | PSO | DE | Chimp | BBO | GJO
F21Best100100.0024973032.8751 × 1061.8327 × 107706.164305.2888.0177 × 1062420.75406266
Worst100.013100.0721.7767 × 1071.9090 × 1075.8279 × 1081.1233 × 106924.8121.6256 × 107625,2391.2559 × 107
Average100.004100.0176.0640 × 1068.9227 × 1061.4787 × 108204,591564.5051.2709 × 10758,1416.0484 × 106
SD0.002903330.01672864.8059 × 1064.0258 × 1061.4225 × 108262,795166.6331.9064 × 106118,9133.4408 × 106
Median100.003100.0114.3561 × 1068.9623 × 1068.8048 × 107140,538530.481.2823 × 10716,1206.3206 × 106
F22Best100.014100.3524954.714.0935 × 1083.3875 × 109121.6391001.1187 × 108104.42542,465.7
Worst101.712109.6562.4920 × 1081.0862 × 1091.8657 × 101041,644.1100.0084.5520 × 1093737.58.4987 × 108
Average100.253101.871.0144 × 1076.9655 × 1081.0337 × 10107004.88100.0021.1031 × 109909.0212.3983 × 108
SD0.328651.78764.5633 × 1071.9790 × 1084.5134 × 100911,046.50.001853751.1161 × 109985.8752.2961 × 108
Median100.16101.4137981.77.1394 × 1089.3369 × 10092040.91100.0028.7509 × 108536.8393.2297 × 108
F23Best300300304.02590.6857784.51300300.002865.755300.003369.827
Worst30030010060.84488.0544,057.3300300.145496.47300.39411,775.7
Average3003002221.821464.0719086.5300300.0442668.99300.0753419.88
SD2.10829 × 10−62.32747 × 10−52162.58933.6596226.111.25015 × 10−100.038889929.8530.09975023166.9
Median3003001738.231133.2218232300300.0282430.42300.0352529.64
F24Best400400406.856420.009605.459401.883400.001437.39400.036402.817
Worst400.002400.06464.024530.4243059.65407.88400.005866.152406.017517.397
Average400400.012415.454446.7431428.42404.726400.002580.915404.132433.261
SD0.000395690.01288416.030420.2286618.5790.8757490.00090655130.3191.7886326.7036
Median400400.007407.635444.7041315.07404.746400.002523.902404.847421.237
F25Best504.727506.666504.029528.916551.155503.98520.522540.112504.975512.722
Worst513.63514.617537.064560.343625.244519.902540.786587.365524.874555.559
Average508.511510.379514.963548.622583.175510.771531.554554.881513.234529.777
SD1.930912.043488.197236.5742120.00044.034494.532369.205635.2801411.9866
Median508.738510.071513.347549.388581.339511.472531.372553.097511.939526.234
F26Best600.053600.691600.452604.75608.165600600.015604.828600.102601.404
Worst600.557603.544604.933609.084612.301602.197606.581610.065605.684606.633
Average600.238602.438601.716606.656610.676600.363603.293607.317602.333603.773
SD0.1379440.600680.9509691.094671.012310.4926872.180720.936551.523711.38629
Median600.202602.562601.605606.452610.915600.235603.032607.242602.21603.802
F27Best600.029600.136600.031611.913625.002600600614.988600600.128
Worst600.213600.626603.186622.485666.45600600.001649.476600.007617.374
Average600.099600.314600.865616.557646.758600600626.284600.002606.311
SD0.04394810.09683760.8635633.196811.19534.82533 × 10−60.000126529.090130.001801325.32735
Median600.091600.321600.513615.99646.768600600624.155600.001604.569
F28Best717.354715.188711.123744.22789.468706.599734.649745.94714.339727.013
Worst730.508729.692749.49794.554846.893729.782753.924837.24730.971771.148
Average722.498723.365727.096771.986818.838719.003745.343801.477721.941748.372
SD3.173433.781710.04679.9686113.27016.202775.2839819.29284.7620111.2874
Median721.983723.411724.864772.418819.667719.282746.894807.692721.982748.563
F29Best801.871803.593803.001816.145864.897800815.332823.349802.985805.041
Worst806.832809.639821.893850.156922.149804.975827.947873.689816.914854.547
Average804.63806.501809.096839.185887.82802.487822.194840.304807.495824.501
SD1.09831.605184.801487.4468910.83911.472293.324679.60113.6648813.7031
Median804.763806.404807.964839.885886.478801.99821.908838.181806.467820.065
F30Best802.303803.97804.998820.51831.099802.985824.307826.267802.985815.053
Worst811.626814.144823.188856.539887.042818.904846.226857.065833.829848.545
Average807.658809.333812.491840.278861.966809.571834.215839.906814.825825.801
SD2.03942.312464.988938.4329914.23663.831445.429619.016367.280289.38006
Median807.596809.14811.194839.749862.644809.95834.559839.66814.429821.687
F31Best900900.002900.012937.241062.16900900969.749900900.473
Worst900.001900.129965.5591076.541795.069009001798.099001223.07
Average900900.042907.6921013.151428.29009001368.72900968.069
SD0.000138290.034960916.588633.8893206.9255.58548 × 10−146.18317 × 10−11210.660.0001377074.1475
Median900900.027900.641018.31433.519009001330.54900962.495
F32Best1019.921050.821015.371686.931556.41006.891158.781357.151007.021234.96
Worst1156.871189.751781.652398.982429.741417.131733.542556.451572.391945.8
Average1093.571092.341285.752014.232015.771164.41355.842015.191205.31567.49
SD51.464938.6527163.457183.646239.863117.962139.823263.348136.006216.077
Median1089.81076.4812651998.832014.741148.061328.512003.171190.881553.16
F33Best1149.861226.221123.751857.142183.581222.371758.821754.371332.321139.14
Worst1624.681744.551970.532813.923097.352116.452604.72814.132716.322373.31
Average1387.061470.591539.52401.942655.471566.412306.492378.161854.51747
SD118.75128.246158.502219.785234.239256.108218.937240.691316.273265.919
Median1381.091473.41557.32440.762675.111548.762388.022379.041777.541712.93
F34Best1100.71101.921106.381139.931510.681100.091105.721152.881103.571104.14
Worst1103.911106.451245.361299.5523,686.51112.661112.171454.551178.051410.05
Average1102.571103.771134.51195.366201.991105.261109.081297.521125.371167.14
SD0.7840661.1018933.769235.3195397.033.233311.58061103.31218.430260.1217
Median1102.71103.881126.761188.54645.151105.071109.151320.931119.591148.68
F35Best1601.281601.731601.311602.71602.851601.181603.271602.341601.521601.45
Worst1602.51603.141603.521603.851604.291603.131603.741603.521603.441603.47
Average1602.121602.71602.561603.351603.71602.391603.481603.141602.731602.81
SD0.297350.3341470.5092880.2597550.339640.527190.1263280.2432510.4230910.410734
Median1602.161602.781602.541603.351603.731602.381603.441603.221602.741602.85
Table 8. Wilcoxon signed-rank test results of CEC2014 and CEC2017 combined test functions (p-values of MPSOGOA versus each algorithm).

Function | GOA | GWO | SCA | AOA | PSO | DE | Chimp | BBO | GJO
F21 | 5.59991 × 10−7 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11
F22 | 7.38029 × 10−10 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11
F23 | 1.09367 × 10−10 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 2.8502 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11
F24 | 1.77691 × 10−10 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.82016 × 10−10 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11
F25 | 0.000654865 | 0.000200581 | 3.01986 × 10−11 | 3.01986 × 10−11 | 0.0169212 | 3.01986 × 10−11 | 3.01986 × 10−11 | 2.27802 × 10−5 | 3.33839 × 10−11
F26 | 3.01986 × 10−11 | 4.50432 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 0.923442 | 8.89099 × 10−10 | 3.01986 × 10−11 | 5.46175 × 10−9 | 3.01986 × 10−11
F27 | 8.15274 × 10−11 | 1.74791 × 10−5 | 3.01986 × 10−11 | 3.01986 × 10−11 | 1.53022 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 7.38908 × 10−11
F28 | 0.258051 | 0.0614519 | 3.01986 × 10−11 | 3.01986 × 10−11 | 0.0162848 | 3.01986 × 10−11 | 3.01986 × 10−11 | 0.589451 | 4.97517 × 10−11
F29 | 1.24932 × 10−5 | 6.2828 × 10−6 | 3.01986 × 10−11 | 3.01986 × 10−11 | 1.25245 × 10−6 | 3.01986 × 10−11 | 3.01986 × 10−11 | 0.000952074 | 1.09367 × 10−10
F30 | 0.00508422 | 8.66343 × 10−5 | 3.01986 × 10−11 | 3.01986 × 10−11 | 0.0405755 | 3.01986 × 10−11 | 3.01986 × 10−11 | 4.35308 × 10−5 | 3.01986 × 10−11
F31 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 7.57407 × 10−12 | 3.01041 × 10−11 | 3.01986 × 10−11 | 0.0823572 | 3.01986 × 10−11
F32 | 0.579294 | 2.02829 × 10−7 | 3.01986 × 10−11 | 3.01986 × 10−11 | 0.0391671 | 3.01986 × 10−11 | 3.01986 × 10−11 | 0.00333861 | 3.01986 × 10−11
F33 | 0.0206807 | 0.000124771 | 3.01986 × 10−11 | 3.01986 × 10−11 | 0.00728836 | 3.01986 × 10−11 | 3.01986 × 10−11 | 4.57257 × 10−9 | 5.0922 × 10−8
F34 | 5.97056 × 10−5 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.01986 × 10−11 | 4.35308 × 10−5 | 3.01986 × 10−11 | 3.01986 × 10−11 | 3.33839 × 10−11 | 3.01986 × 10−11
F35 | 3.64589 × 10−8 | 3.15727 × 10−5 | 3.01986 × 10−11 | 3.01986 × 10−11 | 0.017649 | 3.01986 × 10−11 | 6.06576 × 10−11 | 6.01039 × 10−8 | 7.11859 × 10−9
Table 9. Friedman ranking of CEC2014 and CEC2017 combination functions.

Algorithm | Friedman Mean Rank | General Mean Rank
MPSOGOA | 1.7 | 1
GOA | 3 | 3
GWO | 5.13333 | 6
SCA | 8 | 8
AOA | 10 | 10
PSO | 2.86667 | 2
DE | 4.8 | 5
Chimp | 8.66667 | 9
BBO | 4.1 | 4
GJO | 6.73333 | 7
Table 10. Ablation experiment results based on CEC2005.

Function | Value | MPSOGOA | GOA1 | GOA2 | GOA3 | GOA4 | GOA5 | GOA6 | GOA
F1Best3.075 × 10−2662.152 × 10−1333.138 × 10−2294.569 × 10−1775.990 × 10−2301.352 × 10−1502.931 × 10−2501.070 × 10−83
Worst1.420 × 10−2139.226 × 10−431.819 × 10−1764.172 × 10−848.633 × 10−1825.474 × 10−481.106 × 10−1933.534 × 10−27
Average4.734 × 10−2153.095 × 10−446.063 × 10−1781.390 × 10−853.15 × 10−1831.824 × 10−493.690 × 10−1951.178 × 10−28
SD01.684 × 10−4307.617 × 10−8509.995 × 10−4906.452 × 10−28
Median4.110 × 10−2565.247 × 10−1167.028 × 10−2151.204 × 10−1611.009 × 10−2211.328 × 10−1428.663 × 10−2366.043 × 10−72
F2Best3.097 × 10−1371.831 × 10−792.573 × 10−1192.799 × 10−877.213 × 10−1192.021 × 10−871.374 × 10−1272.420 × 10−54
Worst8.387 × 10−1206.225 × 10−346.162 × 10−951.217 × 10−341.759 × 10−952.160 × 10−372.404 × 10−1013.423 × 10−15
Average6.067 × 10−1212.610 × 10−352.739 × 10−964.078 × 10−365.864 × 10−977.203 × 10−398.01 × 10−1031.141 × 10−16
SD2.074 × 10−1201.163 × 10−341.173 × 10−952.222 × 10−353.212 × 10−963.94 × 10−384.390 × 10−1026.251 × 10−16
Median1.840 × 10−1292.150 × 10−752.667 × 10−1144.423 × 10−805.312 × 10−1141.104 × 10−801.919 × 10−1221.904 × 10−49
F3Best8.481 × 10−671.966 × 10−325.444 × 10−452.346 × 10−232.787 × 10−498.461 × 10−253.134 × 10−572.760 × 10−14
Worst6.098 × 10−430.00632082.767 × 10−245.087 × 10−52.126 × 10−250.00020437.001 × 10−400.0948753
Average2.074 × 10−440.00021669.225 × 10−261.695 × 10−67.0912 × 10−278.407 × 10−062.635 × 10−410.0050304
SD1.112 × 10−430.00115315.052 × 10−259.287 × 10−63.882 × 10−263.759 × 10−51.281 × 10−400.0197871
Median9.692 × 10−594.349 × 10−91.046 × 10−372.756 × 10−162.276 × 10−381.822 × 10−164.302 × 10−491.543 × 10−8
F4Best6.678 × 10−1087.057 × 10−338.905 × 10−896.233 × 10−404.292 × 10−898.042 × 10−402.157 × 10−991.138 × 10−26
Worst1.712 × 10−893.916 × 10−73.050 × 10−672.644 × 10−81.110 × 10−688.146 × 10−103.119 × 10−765.2 × 10−8
Average5.718 × 10−911.394 × 10−81.017 × 10−688.827 × 10−103.702 × 10−703.205 × 10−111.039 × 10−771.826 × 10−9
SD3.127 × 10−907.145 × 10−85.569 × 10−684.827 × 10−92.027 × 10−691.502 × 10−105.695 × 10−779.489 × 10−9
Median7.126 × 10−1031.578 × 10−201.885 × 10−841.168 × 10−357.537 × 10−839.933 × 10−341.822 × 10−912.177 × 10−20
F5Best20.864122.821622.631622.604221.958322.600821.415422.8925
Worst23.691824.212624.32124.065823.757824.171223.69224.5825
Average22.768523.712123.236823.364122.972323.446122.873423.7886
SD0.6350530.343150.3890830.3344550.4137170.4004160.5617240.393173
Median22.849723.80723.256823.389323.045923.503722.964423.7909
F6Best4.644 × 10−50.00031500.00051360.00069990.00023190.00013090.00077150.0012891
Worst0.04288360.04977340.05620910.03921580.03449830.04271880.04286220.0485372
Average0.01295480.01566480.01260480.01050640.01141110.01123480.01045770.0180974
SD0.01139520.01310130.01170020.00992580.00859110.01095030.01001270.0119936
Median0.00953700.01294090.00950880.00698330.00935440.00953990.00763430.0182215
F7Best7.929 × 10−50.0002510.00029960.00017540.00016500.00023880.00015230.0005179
Worst0.00243730.00380450.00264640.00496460.00279730.00441720.00264990.0042818
Average0.00062660.00135920.00108300.00101650.00112260.00087950.00087720.0014611
SD0.00050030.00081200.00058750.00110250.00063320.00083330.00055450.0008009
Median0.00048000.00123860.00098660.00056660.00101340.00057960.00083530.0013893
F8Best−8146.04−8365.17−8564.12−8148.34−8264.39−8093.99−8737.25−8132.17
Worst−6929.54−7210.73−7268.81−6990.89−7101.98−6783.8−7018.35−7031.04
Average−7447.88−7632.9−7775.89−7504.24−7543.32−7505.6−7812.09−7618.5
SD308.101269.635361.109228.649255.94316.665386.591321.743
Median−7483.99−7602.55−7725.42−7490.31−7497.32−7591.64−7731.38−7690.26
F9Best00000000
Worst00000000
Average00000000
SD00000000
Median00000000
F10Best8.881 × 10−168.881 × 10−164.440 × 10−158.881 × 10−168.881 × 10−164.440 × 10−158.881 × 10−168.881 × 10−16
Worst4.440 × 10−159.325 × 10−144.440 × 10−154.440 × 10−154.440 × 10−154.440 × 10−154.440 × 10−152.930 × 10−14
Average4.322 × 10−155.033 × 10−154.440 × 10−153.967 × 10−154.322 × 10−154.440 × 10−154.322 × 10−153.730 × 10−15
SD6.486 × 10−161.674 × 10−1401.228 × 10−156.486 × 10−1606.486 × 10−165.144 × 10−15
Median4.440 × 10−158.881 × 10−164.440 × 10−154.440 × 10−154.440 × 10−154.440 × 10−154.440 × 10−154.440 × 10−15
F11Best00000000
Worst00000000
Average00000000
SD00000000
Median00000000
F12Best2.961 × 10−64.071 × 10−53.114 × 10−59.267 × 10−66.116 × 10−54.519 × 10−58.489 × 10−60.0001418
Worst0.00099580.00099890.00086600.00124030.00134230.00109390.00188990.0026304
Average0.00035070.00036800.00028440.00038910.00035730.00044160.00032970.0007276
SD0.00029540.00024870.00022380.00030470.00026270.00031040.00040300.0005522
Median0.00025360.00034680.00023120.00035820.00032020.00048770.00018380.0005805
F13Best5.369 × 10−50.00013310.0026880.00179030.00078650.00126200.00185260.0030045
Worst0.1539730.0581770.09424760.06506850.07882670.03811570.1142820.0634467
Average0.0497720.00998840.02448420.0152260.02730830.01449150.02661470.0213521
SD0.0386780.0122370.02139960.01492230.02293250.01108370.02582220.015971
Median0.0363020.00630700.02051270.01140160.02090730.01025290.01862820.0188452
F14Best0.9980040.9980040.9980040.9980040.9980040.9980040.9980040.998004
Worst0.9980040.9980040.9980040.9980040.9980040.9980040.9980040.998004
Average0.9980040.9980040.9980040.9980040.9980040.9980040.9980040.998004
SD005.831 × 10−177.141 × 10−175.831 × 10−171.009 × 10−167.141 × 10−171.090 × 10−16
Median0.9980040.9980040.9980040.9980040.9980040.9980040.9980040.998004
F15Best0.00030740.00030740.00030740.00030740.00030740.000307480.00030740.0003074
Worst0.00030740.00030740.00030740.00030740.00030740.000307480.00030740.0003074
Average0.00030740.00030740.00030740.00030740.00030740.000307480.00030740.0003074
SD7.778 × 10−146.976 × 10−133.604 × 10−134.829 × 10−137.947 × 10−139.449 × 10−133.945 × 10−125.754 × 10−13
Median0.00030740.00030740.00030740.00030740.00030740.000307480.00030740.0003074
Table 11. Experimental results of WBD.

Algorithm | h | l | t | b | Best | Worst | Average | SD | Median | p
MPSOGOA | 0.198832 | 3.33737 | 9.19202 | 0.198832 | 1.67022 | 1.67022 | 1.67022 | 6.9276 × 10−7 | 1.67022 | N/A
GOA | 0.198832 | 3.33736 | 9.19203 | 0.198832 | 1.67022 | 1.67023 | 1.67022 | 2.7146 × 10−6 | 1.67022 | 4.117 × 10−6
GWO | 0.198676 | 3.34055 | 9.19251 | 0.19886 | 1.6707 | 1.67602 | 1.67224 | 0.0013803 | 1.67173 | 3.019 × 10−11
SCA | 0.189317 | 3.50989 | 9.36715 | 0.198795 | 1.70764 | 1.8522 | 1.79678 | 0.0335841 | 1.79892 | 3.019 × 10−11
AOA | 0.188049 | 3.7992 | 10 | 0.196186 | 1.82839 | 2.68791 | 2.34035 | 0.177038 | 2.36669 | 3.019 × 10−11
PSO | 0.169229 | 4.89949 | 9.14451 | 0.27382 | 2.43173 | 7.48264 | 5.06402 | 1.45427 | 5.01009 | 3.019 × 10−11
DE | 0.198832 | 3.33737 | 9.19202 | 0.198832 | 1.67022 | 1.67022 | 1.67022 | 1.7000 × 10−16 | 1.67022 | 1.665 × 10−11
Chimp | 0.196437 | 3.40766 | 9.16569 | 0.202305 | 1.69817 | 1.7957 | 1.75433 | 0.0222802 | 1.75877 | 3.019 × 10−11
BBO | 0.25149 | 2.80336 | 8.1706 | 0.251664 | 1.85816 | 2.72723 | 2.1878 | 0.212509 | 2.19146 | 3.019 × 10−11
GJO | 0.198812 | 3.33722 | 9.1983 | 0.198841 | 1.67128 | 1.69453 | 1.6757 | 0.00463749 | 1.67429 | 3.019 × 10−11
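For context, the welded beam design (WBD) problem minimizes fabrication cost over the weld thickness h, weld length l, bar height t and bar thickness b. The sketch below uses the standard textbook formulation with a simple penalty for the main constraints; the constants (P = 6000 lb, L = 14 in, etc.) are the usual literature values, assumed here rather than quoted from this paper's text.

```python
import numpy as np

P, L, E, G = 6000.0, 14.0, 30e6, 12e6            # load and material constants
TAU_MAX, SIG_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def wbd_cost(x):
    """Fabrication cost of the welded beam."""
    h, l, t, b = x
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

def wbd_penalized(x):
    """Cost plus a large penalty on the standard WBD constraints (g_i <= 0 feasible)."""
    h, l, t, b = x
    tau_p = P / (np.sqrt(2) * h * l)                          # primary shear
    M = P * (L + l / 2)
    R = np.sqrt(l**2 / 4 + ((h + t) / 2) ** 2)
    J = 2 * np.sqrt(2) * h * l * (l**2 / 12 + ((h + t) / 2) ** 2)
    tau_pp = M * R / J                                        # torsional shear
    tau = np.sqrt(tau_p**2 + 2 * tau_p * tau_pp * l / (2 * R) + tau_pp**2)
    sigma = 6 * P * L / (b * t**2)                            # bending stress
    delta = 4 * P * L**3 / (E * t**3 * b)                     # end deflection
    pc = (4.013 * E * np.sqrt(t**2 * b**6 / 36) / L**2) * (1 - t / (2 * L) * np.sqrt(E / (4 * G)))
    g = np.array([tau - TAU_MAX, sigma - SIG_MAX, h - b,
                  delta - DELTA_MAX, P - pc])
    return wbd_cost(x) + 1e6 * np.sum(np.maximum(g, 0) ** 2)

# An optimizer such as MPSOGOA would minimize wbd_penalized; evaluating the
# reported design reproduces the Best column of Table 11:
print(wbd_cost([0.198832, 3.33737, 9.19202, 0.198832]))  # ~1.67022
```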
Table 12. Experimental results of CSD.

Algorithm | d | D | P | Best | Worst | Average | SD | Median | p
MPSOGOA | 0.0516905 | 0.356752 | 11.287 | 0.0126652 | 0.0126655 | 0.0126653 | 5.92315 × 10−8 | 0.0126653 | N/A
GOA | 0.0516822 | 0.356551 | 11.2987 | 0.0126653 | 0.0126658 | 0.0126654 | 1.14583 × 10−7 | 0.0126654 | 0.000300589
GWO | 0.0514958 | 0.352024 | 11.5769 | 0.0126741 | 0.0128559 | 0.0127191 | 3.08647 × 10−5 | 0.0127213 | 3.01986 × 10−11
SCA | 0.0523236 | 0.370375 | 10.6095 | 0.012786 | 0.0132059 | 0.012966 | 0.000128481 | 0.0129485 | 3.01986 × 10−11
AOA | 0.05 | 0.310434 | 15 | 0.0131934 | 0.0305824 | 0.0141362 | 0.00367212 | 0.0131974 | 3.0123 × 10−11
PSO | 0.073141 | 0.76922 | 5.52228 | 0.0309544 | 8.64748 × 10^9 | 1.1754 × 10^9 | 2.05791 × 10^9 | 1.51764 × 10^8 | 3.01986 × 10−11
DE | 0.0516891 | 0.356718 | 11.289 | 0.0126652 | 0.0126652 | 0.0126652 | 2.69513 × 10−18 | 0.0126652 | 1.5476 × 10−11
Chimp | 0.05 | 0.317316 | 14.0439 | 0.0127274 | 0.014333 | 0.0129773 | 0.000365126 | 0.0128398 | 3.01986 × 10−11
BBO | 0.0550947 | 0.444316 | 7.54043 | 0.012867 | 0.0178452 | 0.015147 | 0.00166555 | 0.0147453 | 3.01986 × 10−11
GJO | 0.0512038 | 0.345014 | 12.0291 | 0.0126903 | 0.0129066 | 0.0127409 | 4.6908 × 10−5 | 0.0127318 | 3.01986 × 10−11
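The compression spring design (CSD) problem minimizes the spring's wire volume (P + 2)Dd² subject to the usual deflection, shear, surge-frequency and geometry constraints. A minimal sketch of the standard formulation, with variables matching the Table 12 columns (d = wire diameter, D = mean coil diameter, P = number of active coils); this is the common literature formulation, assumed rather than quoted from the paper:

```python
import numpy as np

def csd_cost(x):
    """Spring wire volume: (P + 2) * D * d^2."""
    d, D, P = x
    return (P + 2) * D * d**2

def csd_penalized(x):
    """Cost plus a large penalty on the standard CSD constraints (g_i <= 0 feasible)."""
    d, D, P = x
    g = np.array([
        1 - D**3 * P / (71785 * d**4),                    # minimum deflection
        (4 * D**2 - d * D) / (12566 * (D * d**3 - d**4))
        + 1 / (5108 * d**2) - 1,                          # shear stress
        1 - 140.45 * d / (D**2 * P),                      # surge frequency
        (D + d) / 1.5 - 1,                                # outside diameter limit
    ])
    return csd_cost(x) + 1e6 * np.sum(np.maximum(g, 0) ** 2)

# Evaluating MPSOGOA's reported design reproduces the Best column of Table 12:
print(csd_cost([0.0516905, 0.356752, 11.287]))  # ~0.0126652
```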
Table 13. Experimental results of PVD.

Algorithm | Ts | Th | R | L | Best | Worst | Average | SD | Median | p
MPSOGOA | 0.778168 | 0.384649 | 40.3196 | 200 | 5885.33 | 5885.33 | 5885.33 | 0.0003421 | 5885.33 | N/A
GOA | 0.778168 | 0.384649 | 40.3196 | 200 | 5885.33 | 5885.34 | 5885.33 | 0.002140 | 5885.33 | 6.526 × 10−7
GWO | 0.778555 | 0.384784 | 40.3205 | 200 | 5888.65 | 6578.33 | 5937.79 | 135.014 | 5897.34 | 3.019 × 10−11
SCA | 0.90912 | 0.446679 | 46.4753 | 130.01 | 6236.74 | 8485.17 | 6854.85 | 559.787 | 6671.16 | 3.019 × 10−11
AOA | 0.906507 | 0.552833 | 41.7535 | 192.458 | 7429.05 | 30,021 | 12,804.8 | 5522.99 | 10978.9 | 3.019 × 10−11
PSO | 2.59675 | 11.6056 | 8.0305 | 178.337 | 128,019 | 1.13904 × 10^6 | 484,168 | 275,483 | 383,206 | 3.019 × 10−11
DE | 0.778168 | 0.384649 | 40.3196 | 200 | 5885.33 | 5885.33 | 5885.33 | 2.775 × 10−12 | 5885.33 | 1.211 × 10−12
Chimp | 0.842162 | 0.45967 | 40.7929 | 200 | 6659.64 | 8041.96 | 7655.43 | 285.916 | 7718.16 | 3.019 × 10−11
BBO | 0.817676 | 0.404177 | 42.3666 | 173.346 | 5956.45 | 7265.37 | 6477 | 353 | 6515.18 | 3.019 × 10−11
GJO | 0.778939 | 0.386412 | 40.3355 | 199.897 | 5896.41 | 7308.78 | 6198.12 | 462.634 | 5955.64 | 3.019 × 10−11
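Similarly, the pressure vessel design (PVD) problem minimizes the total cost of material, forming and welding over the shell thickness Ts, head thickness Th, inner radius R and cylinder length L. A minimal sketch of the standard formulation, again assumed from the common literature version:

```python
import numpy as np

def pvd_cost(x):
    """Total material, forming and welding cost of the vessel."""
    ts, th, r, l = x
    return (0.6224 * ts * r * l + 1.7781 * th * r**2
            + 3.1661 * ts**2 * l + 19.84 * ts**2 * r)

def pvd_penalized(x):
    """Cost plus a large penalty on the standard PVD constraints (g_i <= 0 feasible)."""
    ts, th, r, l = x
    g = np.array([
        -ts + 0.0193 * r,                                  # shell thickness
        -th + 0.00954 * r,                                 # head thickness
        -np.pi * r**2 * l - 4 / 3 * np.pi * r**3 + 1296000,  # volume requirement
        l - 240,                                           # length limit
    ])
    return pvd_cost(x) + 1e9 * np.sum(np.maximum(g, 0) ** 2)

# Evaluating MPSOGOA's reported design reproduces the Best column of Table 13:
print(pvd_cost([0.778168, 0.384649, 40.3196, 200]))  # ~5885.33
```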