1. Introduction
Swarm intelligence algorithms are computational methods inspired by collective behaviors observed in nature. They simulate interactions and cooperation among individuals within a group to solve complex problems. The development of swarm intelligence algorithms stems from recognizing the limitations of traditional, individual-based algorithms in tackling complex challenges. Conventional algorithms often rely on deterministic, single-agent methods, which may struggle to find global optima or encounter significant computational complexity for certain problems [1,2]. To address these challenges, researchers have introduced a new algorithmic paradigm: swarm intelligence algorithms. These algorithms collectively solve problems by simulating information exchange, cooperation, and competition among group members. Notable examples of swarm intelligence algorithms include genetic algorithms, ant colony algorithms, and bee swarm algorithms, which have demonstrated significant success across various fields.
Particle swarm optimization (PSO), initially proposed by Eberhart and Kennedy in 1995 [3], is inspired by the behavior of bird flocks and fish schools. In PSO, solutions to problems are represented by a swarm of particles, with each particle embodying a solution, and the swarm collectively representing a potential set of solutions within the solution space. Particles optimize their positions and velocities by continuously searching and iterating within the solution space, learning from and memorizing the best solutions found. PSO offers several advantages: it has a rapid convergence rate and robust global search capabilities; its algorithmic structure is simple, facilitating easy implementation and application; and it adapts well to the continuity and non-linearity of various problems. Consequently, PSO has been extensively applied in domains such as function optimization, combinatorial optimization [4], machine learning [5], and image processing, among others.
Since its introduction, PSO has evolved through years of research and development, leading to numerous improvements. Some notable enhancements include the following:
- 1. Modifications to parameters, such as constant and random inertia weights [6,7], time-varying inertia [8,9,10], quadratic inertia weights [11], adaptive inertia weights [12,13], and sine chaos inertia weights [14], to enhance PSO’s performance. Improvements to acceleration coefficients include time-varying acceleration coefficients [15], acceleration update strategies based on the Sigmoid function [16,17], and the addition of Gaussian white noise to acceleration coefficients [18] to balance the trade-off between individual experience and group collaboration;
- 2. Enhancements to the topological structure, which governs how individuals within the algorithm communicate with each other. Proposals include selecting the nearest particles as neighbors in each generation [19], recombining randomly divided sub-populations after they evolve for a certain number of generations [20], and optimizing algorithms using ring topological structures. In addition, the HIDMS-PSO variant, which enhances the traditional fixed topology of the unit structure with new master- and slave-dominated topologies [21], a random triplet topological structure that allows each particle to communicate with two random particles in the group based on its best position [22], and a particle swarm optimization algorithm with an extra gradient method (EPSO) [23] have been introduced to increase algorithmic diversity;
- 3. Improvements to learning strategies include the integration of four particle swarm search strategies into a strategy dynamics-based optimization method [24], a particle swarm optimization with a multi-dimensional learning strategy based on adaptive inertia weight [25], and an adaptive hierarchical update particle swarm optimization algorithm employing a multi-choice comprehensive learning strategy composed of weighted composite sub-strategies and average evolutionary sub-strategies. A particle swarm optimization with enhanced learning strategies and crossover operators (PSOLC) [26] and a multi-objective particle swarm with alternate learning strategies (MOPSOALS) [27], which uses adaptive parameter updating together with exploratory and exploitative role learning strategies, have also been proposed to improve optimization performance. To address large-scale variable problems, a multi-strategy learning particle swarm optimization algorithm (MSL-PSO) [28] employing different learning strategies at different stages has been introduced, and to tackle deficiencies in solving complex optimization problems, a hybrid particle swarm optimization algorithm based on an adaptive strategy (ASPSO) [29] has been put forward.
- 4. Hybrid research integrating other algorithms with PSO to better address complex optimization problems. This includes combining genetic operators with PSO, such as selection operators [30], crossover operators [31,32], and mutation operators [33], as well as merging the simulated annealing algorithm to improve the global optimum update strategy in PSO [34] and hybridizing with the grey wolf optimization algorithm [35], the pattern search algorithm [36], the crow search algorithm [37], particle filtering (PF) [38], and an improved symbiotic organisms search (MSOS) [39] to fully leverage each algorithm’s strengths.
Despite the maturity and successful application of these classic swarm intelligence algorithms in solving real-world optimization challenges, issues like premature convergence and suboptimal performance persist in complex application areas. Moreover, as societal demands evolve, real-world optimization problems increasingly exhibit high-dimensionality, multiple constraints, and multimodality, posing new challenges to classical optimization algorithms such as swarm intelligence algorithms. Against this backdrop, enhancing swarm intelligence algorithms to better tackle increasingly complex optimization problems has emerged as a research focal point.
To augment the particle swarm algorithm’s capacity to solve complex optimization problems and to more effectively balance exploration and exploitation, this paper introduces an improved particle swarm algorithm dubbed VASPSO. VASPSO contributes in five ways as described below:
Introducing a time-varying inertia coefficient to enhance the algorithm’s convergence speed and global search capability;
Adopting velocity pausing concepts to mitigate premature convergence issues in particle swarm algorithms;
Employing an adaptive strategy to dynamically fine-tune the balance between exploration and exploitation, thus boosting algorithm performance;
Incorporating the concept of symmetric cooperative swarms, treating different particles with varied strategies to aid the algorithm in discovering optimal solutions;
Implementing a terminal replacement mechanism to eliminate the worst-performing particles, thereby refining the accuracy of the final solution.
Moreover, to ascertain the viability of the newly proposed VASPSO algorithm, it was tested against 29 benchmark functions from CEC2017, comparing its performance with five PSO variants and seven other swarm intelligence algorithms. Experimental outcomes reveal VASPSO’s superiority over other algorithms in most benchmark functions, underscoring its success.
The remainder of this paper is structured as follows: Section 2 elucidates the standard particle swarm algorithm. The VASPSO algorithm is expounded in Section 3, with Section 4 showcasing the test functions and outcomes of experimental simulations. Conclusions and directions for future work are presented in Section 5.
2. Standard Particle Swarm Optimization
Particle swarm optimization (PSO) is a swarm intelligence-based optimization algorithm inspired by the foraging behavior of bird flocks [3]. The core principle of PSO is to optimize a search process by simulating the behavior of particles within a swarm. Each particle in the swarm represents a candidate solution, navigating the search space by updating its position and velocity based on both its own experience and the collective information from the swarm. Specifically, the position of a particle corresponds to a candidate solution’s value, while its velocity determines the direction and magnitude of the search. Initially, particles are dispersed randomly throughout the search space with varying velocities. Their performance is assessed using a fitness function, which informs the updates to both the best position discovered by each particle (personal best) and the best overall position found by the swarm (global best). Throughout the algorithm, particles dynamically adjust their velocity and position, leveraging insights from their personal best achievements and the global best discoveries, in pursuit of optimal solutions. The particles are initialized with random positions and velocities within the search bounds:
$$x_i(0) \sim U(x_{\min},\, x_{\max}), \qquad v_i(0) \sim U(v_{\min},\, v_{\max})$$
The $w$ in Formula (4) is called the inertia weight and is derived from Formula (3). The velocity and position of each particle are updated as follows:
$$v_i(t+1) = w\, v_i(t) + c_1 r_1 \left(pbest_i - x_i(t)\right) + c_2 r_2 \left(gbest - x_i(t)\right)$$
$$x_i(t+1) = x_i(t) + v_i(t+1)$$
where $c_1$ and $c_2$ are called acceleration coefficients, $r_1$ and $r_2$ are random numbers uniformly distributed in $[0, 1]$, $v_i(t)$ represents the velocity of the particle at time $t$, $x_i(t)$ represents the position of the particle at time $t$, $pbest_i$ represents the individual best position of the particle, and $gbest$ represents the global best position of the group.
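For concreteness, the following minimal Python sketch implements the standard PSO loop described by the equations above. The swarm size, coefficient values, bounds, and the sphere objective in the usage line are illustrative assumptions, not settings taken from this paper.

```python
import numpy as np

def standard_pso(objective, dim, bounds, n_particles=30, max_iter=100,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal standard PSO for minimization; parameter values are illustrative."""
    lo, hi = bounds
    rng = np.random.default_rng(0)

    # Random initialization of positions and velocities within the bounds.
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = rng.uniform(-(hi - lo), hi - lo, (n_particles, dim))

    pbest = x.copy()                                  # personal best positions
    pbest_f = np.apply_along_axis(objective, 1, x)    # personal best fitness values
    gbest = pbest[np.argmin(pbest_f)].copy()          # global best position

    for _ in range(max_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity and position updates (the two equations above).
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)

        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Example: minimize the sphere function in 10 dimensions.
best_x, best_f = standard_pso(lambda z: np.sum(z**2), dim=10, bounds=(-5.0, 5.0))
```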
3. Particle Swarm Optimization Algorithm Using Velocity Pausing and Adaptive Strategy
The proposed approach aims to mitigate premature convergence and enhance the optimization performance of particle swarm optimization (PSO) by integrating velocity pausing and an adaptive strategy. The five key enhancements of VASPSO are elaborated below.
3.1. Time-Varying Inertia Weight
The modification of the inertia weight is a critical aspect of the particle swarm optimization (PSO) algorithm that significantly enhances its efficacy. Initially, the introduction of velocity pausing aimed to augment the global search capabilities of PSO and to mitigate premature convergence to local optima. However, while this strategy can enhance exploration, it may also, over time, lead to a slower convergence rate and, in some instances, convergence challenges. To address these challenges, particularly in the later stages of execution, a time-decreasing weight function is often implemented to adjust the inertia weight. This function is designed to progressively reduce the inertia weight across iterations, maintaining robust global search capabilities while simultaneously facilitating quicker convergence towards the optimal solution. By dynamically modulating the inertia weight, PSO strikes an improved balance between exploration and exploitation, thereby optimizing both the convergence speed and the algorithm’s overall global search performance.
The inertia weight is calculated as follows:
where $T$ is the maximum number of iterations and $b$ is a preset constant.
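As a hedged illustration of a time-decreasing inertia weight controlled by the constant b, the sketch below uses a nonlinear decay of the form (1 − t/T)^b between assumed weight bounds; this particular functional form and the bounds are assumptions made for illustration and may differ from the formula used in this paper.

```python
def time_varying_inertia(t, T, b=2.0, w_min=0.1, w_max=0.9):
    """Illustrative time-decreasing inertia weight.

    Assumes a nonlinear decay from w_max toward w_min controlled by the
    preset constant b; the paper's own formula may differ in form.
    """
    return w_min + (w_max - w_min) * (1.0 - t / T) ** b

# The weight starts near w_max and decays toward w_min as t approaches T.
weights = [time_varying_inertia(t, T=100) for t in range(101)]
```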
3.2. Velocity Pausing
In 2023, Tareq M. Shami introduced the concept of velocity pausing into the particle swarm optimization (PSO) algorithm, marking a significant advancement in its performance [40]. This modification allows each particle the option to temporarily suspend velocity updates during iterations, instead selecting from three predefined velocities: slow, fast, and constant. This addition of a third velocity option diverges from traditional PSO mechanics and increases the algorithm’s flexibility, promoting a more balanced approach between exploration and exploitation. Such enhancements address prevalent challenges in classical PSO algorithms, including premature convergence and the tendency to become trapped in local optima.
The careful selection of the pausing coefficient is critical. When the pausing coefficient approaches 1, the particle’s velocity update closely mirrors that of the classical PSO. Conversely, when the coefficient is set very low, it forces the particle’s velocity to remain constant, potentially leading to slower convergence rates and difficulties in discovering superior solutions.
The formula for velocity pausing is as follows:
where $a$ is the coefficient of velocity pausing, and $c_1$ and $c_2$ are the learning factors.
The particle position update formula is as follows:
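As a rough illustration of the velocity pausing mechanism, the sketch below updates a particle's velocity with probability a and otherwise keeps it constant for that iteration, which matches the behavior described above (a close to 1 recovers the classical update, a close to 0 keeps the velocity constant). The exact forms of Formulas (7) and (8) are not reproduced here, so this is a hedged approximation rather than the paper's formulas.

```python
import numpy as np

rng = np.random.default_rng(1)

def velocity_pausing_update(v, x, pbest, gbest, w, a=0.3, c1=1.5, c2=1.5):
    """Sketch of a velocity-pausing update for a single particle.

    With probability a the usual PSO velocity update is applied; otherwise
    the previous velocity is kept unchanged ("paused"). The exact form of
    the paper's Formulas (7) and (8) may differ.
    """
    if rng.random() < a:
        r1, r2 = rng.random(v.shape), rng.random(v.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    # else: the velocity is paused, i.e. v stays constant for this iteration
    return v, x + v
```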
3.3. Adaptive Strategy
Traditional algorithms that employ a single-strategy approach for position updates often struggle to achieve a suitable balance between exploration and exploitation, particularly in complex scenarios characterized by multiple local optima or highly multimodal objective functions. In contrast, adaptive strategy position updates can significantly enhance the algorithm’s search efficiency. This approach involves dynamically optimizing the search space that may yield the best solution by adaptively selecting the position update strategy during each iteration. Such improvements have proven to markedly increase search efficiency and have been widely adopted in enhancing both traditional and classical algorithms [29].
The core of the adaptive strategy for position updates resides in the iterative modification of the parameter p, which is adjusted based on changes in the fitness value. This responsive adjustment enables the algorithm to better navigate the solution landscape, effectively balancing the dual imperatives of exploring new possibilities and exploiting known good solutions.
The parameter $p$ is calculated as follows:
$$p_i = \frac{f_i}{\frac{1}{N}\sum_{j=1}^{N} f_j}$$
where $N$ is the total population size, $f_i$ is the fitness of the corresponding particle $i$, and $p_i$ is the ratio of the current fitness of the particle to the average fitness of the population; this ratio is re-estimated in each iteration.
The location update formula is as follows:
When $p_i$ is small, the adaptation value of particle $i$ exceeds the average level of the population. In this scenario, to augment the global exploration capabilities of particle $i$, the exploration-oriented location update strategy is employed. Conversely, when $p_i$ is large, indicating that particle $i$ is performing below the average level of the population, the exploitation-oriented location update strategy is implemented to boost its local exploitation capabilities.
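A hedged sketch of the adaptive strategy follows: it computes p_i as the ratio of a particle's fitness to the population average and dispatches to an exploration-oriented or an exploitation-oriented update accordingly. The two concrete update rules in the code are illustrative placeholders, not the paper's actual location update formulas.

```python
import numpy as np

rng = np.random.default_rng(2)

def adaptive_position_update(x, fitness, pbest, gbest, threshold=1.0):
    """Adaptive strategy sketch: p_i = f_i / mean(f) selects the update rule.

    The exploration/exploitation rules below are illustrative placeholders;
    the paper defines its own position-update formulas for each case.
    """
    p = fitness / fitness.mean()          # ratio to the population average
    new_x = x.copy()
    for i in range(len(x)):
        if p[i] < threshold:
            # Better-than-average particle: explore more widely (placeholder rule).
            new_x[i] = x[i] + rng.normal(0.0, 1.0, x.shape[1])
        else:
            # Worse-than-average particle: exploit known good positions (placeholder rule).
            new_x[i] = x[i] + rng.random(x.shape[1]) * (gbest - x[i]) \
                       + rng.random(x.shape[1]) * (pbest[i] - x[i])
    return new_x
```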
3.4. Symmetric Cooperative Swarms
Symmetry is a prominent characteristic in algorithms and their applications, playing a crucial role across various domains. In algorithms, symmetry often reveals patterns within data, aiding in simplifying complex problems and enhancing algorithm efficiency. One standout example is the divide-and-conquer approach. By leveraging this principle, the search space is decomposed for exploration, thereby reducing time complexity. In image processing, symmetry is frequently utilized to optimize rendering and image manipulation algorithms. For instance, in 3D graphics, objects typically exhibit symmetrical properties, allowing for reduced computational overhead in rendering and processing, thus improving graphics processing efficiency. In database optimization, exploiting data symmetry can reduce redundant storage and enhance query efficiency. Techniques such as normalization and denormalization are employed to eliminate duplicate and redundant information from data, while indexing and caching techniques accelerate data access and query operations.
In this paper, we also draw inspiration from the concept of symmetry. During each iteration, the entire particle swarm is divided based on the fitness of particles, segregating them into superior and inferior particle clusters. This symmetrical partitioning enables particles to undertake different responsibilities and tasks, facilitating better exploration of the solution space.
The elite group implements advanced strategies to finely balance exploration and exploitation. Initially, a velocity pausing strategy is applied, temporarily halting the velocity updates under specific conditions to allow prolonged investigation within local solution spaces, thereby enhancing the exploration of potentially superior solutions. Additionally, an adaptive strategy modifies the positional behavior based on the problem’s characteristics and the group’s current state, dynamically adjusting the balance between exploration and exploitation to suit the complexity and diversity of the problem.
Conversely, the non-elite group primarily updates positions by referencing the global best solution, fostering broader exploration of the solution space. This group’s focus is on uncovering new potential solutions, thereby injecting diversity into the swarm. By orienting their position updates solely around the global best, the non-elite particles can explore more aggressively, expanding the search range for the algorithm.
Through this cooperative strategy, the particle swarm optimization algorithm achieves a more effective balance between exploration and exploitation. The elite group conducts targeted exploration and refined exploitation using sophisticated strategies, such as velocity pausing and adaptive positional updates. In contrast, the non-elite group pursues extensive exploration driven by the global best solution. This structured approach allows the algorithm to search more comprehensively for viable solutions, thereby enhancing both the performance and the efficacy of the algorithm.
A detailed flowchart of the subgroup collaborative process is shown in Figure 1.
Figure 1: Subgroup Collaborative Process Flowchart.
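The following minimal sketch shows how the swarm can be partitioned into elite and non-elite groups by fitness, together with a simple global-best-oriented update for the non-elite particles. The 50/50 split and the particular non-elite update rule are assumptions for illustration only.

```python
import numpy as np

def split_swarm(fitness, elite_fraction=0.5):
    """Partition the swarm into elite and non-elite index sets by fitness.

    The elite fraction is an assumption; the paper sorts particles by fitness
    and assigns the better ones to the elite group in each iteration.
    """
    order = np.argsort(fitness)                   # ascending: best particles first (minimization)
    n_elite = int(len(fitness) * elite_fraction)
    return order[:n_elite], order[n_elite:]       # (elite indices, non-elite indices)

def non_elite_update(x, gbest, rng):
    """Non-elite particles move toward the global best to broaden the search
    (an illustrative form of the global-best-referenced position update)."""
    return x + rng.random(x.shape) * (gbest - x)

# Example: elite_idx, non_elite_idx = split_swarm(fitness_values)
```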
3.5. Terminal Replacement Mechanism
The terminal replacement mechanism, also known as the ’last-place elimination mechanism,’ is a widely used algorithmic method for selecting optimal solutions under conditions of limited resources or intense competition. Inspired by the biological principle of natural selection, exemplified by the concept of survival of the fittest, this mechanism is prevalent in various optimization and genetic algorithms, aiding in the efficient identification of superior solutions.
Within this mechanism, an initial set of candidate solutions is generated and subsequently ranked according to predefined evaluation criteria. Based on these rankings, candidates with the lowest scores are progressively eliminated, while those with higher scores are retained for additional evaluation rounds. This iterative process is repeated until the optimal solution is identified or a predetermined stopping condition is reached.
To better mimic the unpredictability of real-world scenarios and enhance the algorithm’s robustness, the concept of an elimination factor has been incorporated into the particle swarm optimization (PSO) algorithm. This modification involves selectively replacing particles according to specific rules, with a predetermined probability that permits some particles to avoid elimination due to various factors. If a randomly generated number falls below the established elimination factor, a replacement operation is executed; otherwise, the particle is preserved for further evaluation. By integrating this elimination factor, the algorithm not only replicates the inherent uncertainties of real-life situations but also significantly enhances its adaptability and robustness.
Here, Gbad is the least fit particle among all of the individual best particles, and Nbest is generated by crossing over the global best particle with any two individual best particles. The fitness values of the newly generated particle Nbest and of Gbad are then compared: if Nbest is better than Gbad, Gbad is replaced by Nbest; otherwise, Gbad remains unchanged and remains the particle selected for elimination.
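The sketch below illustrates the terminal replacement step: the worst of the personal-best particles (Gbad) is compared against a candidate Nbest obtained by recombining the global best with two random personal bests, and it is replaced only when a random draw falls below the elimination factor and Nbest is fitter. The arithmetic crossover used to construct Nbest here is an assumption; the paper's Formulas (11) and (12) define the exact operations.

```python
import numpy as np

rng = np.random.default_rng(3)

def terminal_replacement(pbest, pbest_f, gbest, objective, elim_factor=0.99):
    """Last-place elimination sketch.

    Gbad is the worst of all personal bests; Nbest is built here by an
    arithmetic crossover of gbest with two random personal bests (an
    assumption). Replacement happens only if a random draw falls below
    the elimination factor and Nbest improves on Gbad.
    """
    worst = np.argmax(pbest_f)                         # index of Gbad (minimization)
    i, j = rng.choice(len(pbest), size=2, replace=False)
    nbest = (gbest + pbest[i] + pbest[j]) / 3.0        # illustrative crossover
    nbest_f = objective(nbest)

    if rng.random() < elim_factor and nbest_f < pbest_f[worst]:
        pbest[worst], pbest_f[worst] = nbest, nbest_f  # replace Gbad with Nbest
    return pbest, pbest_f
```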
In summary, VASPSO is proposed as a fusion variant, and its pseudo-code is shown in Algorithm 1.
Algorithm 1 VASPSO Algorithm
1:  Initialize parameters: population size N, maximum number of iterations T
2:  for t = 1 to T do
3:      Calculate the fitness of each particle
4:      Sort all particles and divide them into two groups, elite and non-elite, based on fitness values
5:      For the first (elite) group
6:      for each particle i in the elite group do
7:          if rand < a then
8:              Update velocity according to velocity pausing Formula (7)
9:          end if
10:         Update position according to the adaptive strategy Formula (10)
11:     end for
12:     For the second (non-elite) group
13:     for each particle i in the non-elite group do
14:         Update position according to velocity pausing Formula (8)
15:     end for
16:     % Last-place elimination mechanism
17:     for i = 1 to N do
18:         Update Gbad according to Formula (11)
19:         Update Nbest according to Formula (12)
20:         if rand < 0.99 then
21:             if f(Nbest) < f(Gbad) then
22:                 Gbad ← Nbest
23:                 f(Gbad) ← f(Nbest)
24:             end if
25:         end if
26:     end for
27:     Evaluate fitness and update the personal best and global best solutions
28:     for i = 1 to N do
29:         if f(x_i) < f(pbest_i) then
30:             pbest_i ← x_i
31:             f(pbest_i) ← f(x_i)
32:         end if
33:         if f(pbest_i) < f(gbest) then
34:             gbest ← pbest_i
35:             f(gbest) ← f(pbest_i)
36:         end if
37:     end for
38: end for
39: Return gbest
5. Conclusions
This study aims to mitigate several prevalent issues associated with the particle swarm optimization algorithm, such as premature convergence and the imbalance between global exploration and local exploitation, through the introduction of an enhanced version known as VASPSO. In our approach, we incorporate velocity pausing and a terminal replacement mechanism, which are specifically designed to prevent premature convergence. Additionally, VASPSO utilizes time-varying inertia coefficients, the concept of symmetric cooperative swarms, and adaptive strategies for modulating the search step length, which collectively contribute to a more balanced optimization process.
To rigorously evaluate the effectiveness of VASPSO in addressing complex optimization challenges, we conducted a series of comparative experiments using the CEC2017 benchmark. The results from these experiments suggest that VASPSO not only improves upon the performance of many existing PSO variants but also shows promising capabilities when compared with other well-regarded swarm intelligence algorithms.
However, VASPSO still has some limitations:
- 1. While VASPSO has shown promising results in experimental settings, the theoretical underpinnings and formal proofs supporting the algorithm are still somewhat limited. Further theoretical analysis is crucial to fully understand the mechanisms and principles driving its performance;
- 2. The classification of particles within the VASPSO algorithm currently employs a rigid structure that might oversimplify the inherent complexities of diverse optimization problems. This method of division may not adequately capture the nuanced characteristics of different scenarios, potentially limiting the algorithm’s adaptability and effectiveness;
- 3. Certain parameters within the algorithm may require adjustments to respond effectively to variations in the problem’s characteristics. A static parameter setting might not always align optimally with the dynamic nature of different optimization challenges;
- 4. VASPSO performs relatively poorly on certain benchmark functions (such as f5 and f6); despite achieving good results on many optimization problems, its performance on these specific functions leaves room for improvement.
Future work will unfold in the following aspects:
- 1. Applying the VASPSO algorithm to real-world problems offers a significant opportunity to test and enhance its utility. For instance, in the field of engineering optimization, VASPSO could be particularly beneficial for complex tasks such as robot path planning and image processing. Integrating VASPSO with these practical applications not only allows for a robust evaluation of the algorithm’s performance but also showcases its potential to provide effective and efficient solutions;
- 2. Integrating VASPSO with other optimization algorithms could significantly enhance its performance and robustness. By combining the strengths of multiple algorithms, a more effective balance between global exploration and local exploitation can be achieved. This hybrid approach not only improves the overall search capability of VASPSO but also enhances its convergence performance;
- 3. Combining VASPSO with neural networks can create a formidable optimization framework that leverages the strengths of both methodologies. By integrating the robust global search capabilities of VASPSO with the adaptive learning abilities of neural networks, this hybrid approach can facilitate more efficient parameter optimization and model training. This synergy enhances the algorithm’s effectiveness in navigating complex parameter spaces and accelerates the convergence towards optimal solutions.