Article

An Improved Equilibrium Optimizer with a Decreasing Equilibrium Pool

1
School of Computer Information and Engineering, Changzhou Institute of Technology, Changzhou 213032, China
2
School of Software and Big Data, Changzhou College of Information Technology, Changzhou 213164, China
*
Author to whom correspondence should be addressed.
Symmetry 2022, 14(6), 1227; https://doi.org/10.3390/sym14061227
Submission received: 21 May 2022 / Revised: 8 June 2022 / Accepted: 10 June 2022 / Published: 13 June 2022

Abstract

Big Data is impacting and changing the way we live, and its core lies in the use of machine learning to extract valuable information from huge amounts of data. Optimization problems arise at many stages of machine learning. For complex optimization problems, evolutionary computation has shown advantages over traditional methods. Therefore, many researchers are working on improving the performance of algorithms for solving the various optimization problems in machine learning. The equilibrium optimizer (EO) is a member of evolutionary computation and is inspired by the mass balance model in environmental engineering. Using particles and their concentrations as search agents, it simulates the process of finding the equilibrium state of a system. In this paper, we propose an improved equilibrium optimizer (IEO) based on a decreasing equilibrium pool. IEO provides more sources of information for particle updates and maintains higher population diversity, and it discards some exploration in later stages to enhance exploitation, thus achieving a better search balance. The performance of IEO is verified using 29 benchmark functions from IEEE CEC2017, a dynamic economic dispatch problem, a spacecraft trajectory optimization problem, and an artificial neural network training problem. In addition, the changes in population diversity and computational complexity brought by the proposed method are analyzed.

1. Introduction

Thanks to the ability of machine learning to extract and analyze information, many applications in the context of big data have brought convenience to people’s lives. Whether it is the recommendation information received when shopping online or face recognition payment, none of this would work without machine learning behind the scenes. Machine learning plays a vital role in this era of massive data. Optimization problems, as one of its core components, affect the performance and efficiency of machine learning models [1,2]. Since traditional methods have difficulty solving complex optimization problems, evolutionary computation is widely preferred for its low computational cost and ability to jump out of local optima. Metaheuristic algorithms (MHAs), as members of evolutionary computation, are therefore widely used in many areas of machine learning, including, for example, the discussion of the symmetry of feature selection methods [3], fuzzy point symmetric-based clustering [4], hyperparameter setting for support vector machines [5], weight setting for artificial neural networks [6], and weight setting for ensemble classifiers [7]. Therefore, improving the performance of optimization algorithms is also an important way to promote the development of machine learning.
In addition to machine learning, optimization problems are everywhere in our lives, whether in a personal holiday schedule or a large industrial project [8]. Optimization refers to the problem of finding the best solution from a set of feasible solutions, so as to maximize the benefits under limited resources [9]. Most of the time, it is not as simple as scheduling a holiday, but comes in the form of NP-hard problems [10]. Since traditional methods struggle to solve high-dimensional, large-scale, nonlinear optimization problems, researchers have, since the 1960s, proposed metaheuristic algorithms inspired by phenomena or laws in nature [11,12,13]. Examples include the ant colony algorithm, inspired by the relationship between ant colony foraging behavior and pheromones [14,15]; the genetic algorithm, inspired by evolutionary theory and genetics [16]; and the brain storm optimization algorithm, inspired by group decision-making [17]. MHAs reduce computational complexity by sacrificing accuracy, thus achieving a balance between computational cost and effectiveness.
Researchers have proposed a very large number of MHAs. These algorithms can be broadly classified into four categories according to their source of inspiration: (1) evolutionary algorithms, (2) swarm intelligence algorithms, (3) physics-inspired algorithms, and (4) human behavior-inspired algorithms. One representative of evolutionary algorithms is the genetic algorithm, whose core operators of crossover, mutation and selection are derived from evolutionary theory and genetics [18,19]. The important concept of the swarm intelligence class lies in the interaction of information. For example, in bird flock foraging behavior, birds combine their own experience to approach the one closest to the food; inspired by this, the particle swarm optimization algorithm was proposed [20,21]. Ants leave pheromones on the foraging path to find the shortest path, which inspired the ant colony algorithm [22,23]. One of the typical physics-based algorithms, the gravitational search algorithm (GSA) [24,25,26,27], correlates the inertial mass of particles with the quality of solutions and uses the interaction of masses to find the optima. Human behavior-based algorithms are represented by the brain storm optimization algorithm [28,29,30,31], which simulates human brainstorming behavior and uses the methods and features of group decision-making to find the optima. Because of their simplicity and robustness, MHAs are now popular in various fields, such as radio frequency identification network planning [32], electromagnetics [33], organic solar photovoltaics [34], and vehicular ad hoc networks [35]. These applications bring improvements in production technology and also mean protection of the earth’s environment [36].
The equilibrium optimizer (EO) is a novel physics-based algorithm [37]. Its source of inspiration is the mass balance model of the control volume. Mass balance is a common model for environmental estimation [38]. The mass balance model used by EO needs to consider the mass changes in the control volume, which includes entering, leaving and generated particles, to estimate the equilibrium state of the system. The particles and their concentrations within the volume act as search agents, and the way they are updated depends on an important concept, the equilibrium pool. The equilibrium pool of EO contains five particles: the best four particles in the population and one particle composed of the mean of these four. Using the relationship between the equilibrium pool and the particles, EO obtains excellent search performance. However, as a young algorithm, EO is considered to have room for improvement.
The approach to improve EO starts with a better balance of exploration and exploitation. Exploration is expressed as leaving the current region and stepping into a new region, which usually implies the ability to jump out of the local optima. Exploitation is expressed as a re-search of a known region, with the intention of improving the quality of the current solution. However, an imbalance between the two can also lead to convergence stagnation or getting stuck in local optima [39]. Therefore, better search performance implies a more appropriate ratio of exploration and exploitation, but determining this ratio is a very difficult task [40]. In this aspect, there seems to be a consensus in much of the research that a preference for exploration in the early stage and exploitation in the later stage is an effective strategy. This strategy is usually implemented in two ways: (1) decreasing the search step size and (2) decreasing the size of the population or the number of elite individuals. In the grey wolf optimization algorithm [41,42,43], a linear decreasing parameter from 2 to 0 is used to control the search step size. In a variant of the differential evolution algorithm [44,45], CJADE [46], the scope of the local search decreases with iterations. L-SHADE uses a linearly decreasing population size to improve its search performance [47,48]. In ladder spherical evolution [49], the right to produce offspring is gradually reduced for individuals with lower ranking. Similarly, in chaotic spherical evolution [50], the chance of retaining information for individuals with a chaotic local search is increased by reducing the chance of poorer quality individuals participating in the update. There are also algorithms that use both; for example, in GSA [51,52,53,54], the gravitational constant and the number of particles used for updates both decrease with iterations.
In order to achieve changes in preferences for exploration and exploitation at different stages, population diversity is used as a control tool. Population diversity describes the differences between individuals, and a population with a suitable diversity is considered to be beneficial for the search performance of the algorithm [55]. In MHAs, an increase in diversity usually implies exploration, and a decrease in diversity implies exploitation [39]. Therefore, we hope to achieve a change in the preference between exploration and exploitation through the control of population diversity. With this in mind, the work focuses on one of the important components of EO, the equilibrium pool. The equilibrium pool is associated with each particle update, and changing its composition can affect the algorithm’s preference for exploration and exploitation. This improved equilibrium optimizer is called IEO for short.
The contribution of this paper is that it proposes a simple but effective method to improve the performance of EO. It uses a decreasing constant instead of a fixed constant in the conventional EO to control the number of particles in the equilibrium pool. This changes the trend of population diversity and makes the population more diverse in the early stage. The aim is to bring a better balance between exploration and exploitation, helping the algorithm to jump out of the local optima and thus improve the search ability.
The remainder of this paper is organized as follows. Section 2 describes the composition and features of EO. Section 3 introduces the operation of IEO. Section 4 verifies the effectiveness of the improvement method and the competitiveness of IEO on the IEEE CEC2017 benchmark function set, and then describes the performance of IEO on three real-world optimization problems. In Section 5, the population diversity trends and computational complexity of IEO and EO are analyzed. In Section 6, the work of this paper is summarized.

2. Equilibrium Optimizer

Mass balance is one of the key elements of environmental engineering [56]. It serves as a starting point for environmental analysis and makes it possible to track the movement or transformation of material in the environment. The mass balance equation that inspired EO describes the change over time of the concentration of a non-reactive constituent in a control volume. This equation states that the rate of material change in the control volume equals the amount entering plus the amount generated minus the amount leaving, which can be written as:
V \frac{dx}{dt} = l x_e - l x + G \qquad (1)
where V is the control volume and x denotes the concentration of a material in it, so that V·dx/dt expresses the rate of mass change. l is the flow rate entering or leaving the system. x_e denotes the concentration at the equilibrium state, in which no generation occurs in the system, and G denotes the mass generated in the system. When the rate of mass change is zero, i.e., V·dx/dt = 0, the system reaches a steady equilibrium state. By rearrangement, Equation (1) can be viewed as an equation in l/V. Writing l/V as ρ, called the turnover rate, the mass balance equation can be transformed into:
\frac{dx}{\rho x_e - \rho x + G/V} = dt \qquad (2)
whose integration over time can be expressed as:
\int_{x_0}^{x} \frac{dx}{\rho x_e - \rho x + G/V} = \int_{t_0}^{t} dt \qquad (3)
This results in:
x = x_e + (x_0 - x_e) H + \frac{G}{\rho V} (1 - H) \qquad (4)
where in Equation (4), H can be represented as:
H = e^{-\rho (t - t_0)} \qquad (5)
In Equations (3)–(5), x_0 and t_0 denote the initial concentration and the initial time. Inspired by these equations, EO uses the particle and its concentration as the search agent to find the optimal solution, and its structure can be summarized into three parts: initialization, equilibrium pool construction, and population update.
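As a quick numerical sanity check (our own standalone sketch, not part of the paper; all constants are arbitrary illustrative values), the closed-form solution of Equation (4) can be verified against the mass balance of Equation (1) using finite differences:

```python
import math

# Arbitrary illustrative constants (not taken from the paper)
V, l, G = 2.0, 0.5, 3.0        # control volume, flow rate, generation rate
rho = l / V                    # turnover rate rho = l / V
x_e, x0, t0 = 4.0, 1.0, 0.0    # equilibrium, initial concentration, initial time

def x(t):
    """Closed-form concentration, Equation (4)."""
    H = math.exp(-rho * (t - t0))              # Equation (5)
    return x_e + (x0 - x_e) * H + G / (rho * V) * (1.0 - H)

# Equation (1) requires V * dx/dt == l*x_e - l*x + G; check by central difference
t, h = 1.7, 1e-6
lhs = V * (x(t + h) - x(t - h)) / (2 * h)
rhs = l * x_e - l * x(t) + G
print(abs(lhs - rhs) < 1e-5)   # True: the solution satisfies the balance
```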

2.1. Initialization

The initial concentrations are usually generated in a random way, and it is only required that all particles are within the given range. x^0 denotes the initial particles, and x^min and x^max denote the lower and upper limits of the feasible range, respectively. Equation (6) shows how the initial particles are generated:
x_{i,d}^{0} = x_d^{min} + r_{i,d}^{0} \cdot (x_d^{max} - x_d^{min}), \quad i = 1, \ldots, N, \; d = 1, \ldots, D \qquad (6)
where r^0 denotes a random number between 0 and 1, N is the size of the population, and D is the number of dimensions.
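A minimal NumPy sketch of this initialization (the function and variable names are our own, not from the paper):

```python
import numpy as np

def initialize(N, D, x_min, x_max, rng=None):
    """Equation (6): uniform random particles within [x_min, x_max]."""
    rng = rng or np.random.default_rng()
    r0 = rng.random((N, D))                 # r^0: random numbers in [0, 1)
    return x_min + r0 * (x_max - x_min)     # scale to the feasible range

X = initialize(N=100, D=30, x_min=-100.0, x_max=100.0)
print(X.shape)   # (100, 30)
```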

2.2. Equilibrium Pool Construction

In terms of metaheuristics, the search target of EO is the equilibrium state of the system. However, information about the concentration at this state is not known at the beginning, so EO picks some particles that may be close to the equilibrium state as candidates. These candidates include the four best particles in the population as well as a particle generated by the mean of these four particles. The set of five particles is called the equilibrium pool, which can be expressed as:
X^e = \{ x_1^e, x_2^e, x_3^e, x_4^e, x_m^e \} \qquad (7)
where x_1^e–x_4^e denote the four best particles, and x_m^e is the particle generated by their mean. The update of each particle in EO is related to the equilibrium pool; the retention of the four best particles provides EO with the ability to explore, while the mean particle provides the ability to exploit.
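In code, constructing this five-particle pool might look as follows (a sketch assuming a minimization problem; the names are ours):

```python
import numpy as np

def build_pool(X, fitness):
    """Equation (7): the four best particles plus their mean particle."""
    best4 = X[np.argsort(fitness)[:4]]   # x1_e .. x4_e (lowest fitness first)
    x_m = best4.mean(axis=0)             # x_m_e, generated by the mean
    return np.vstack([best4, x_m])       # pool of shape (5, D)
```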

2.3. Population Update

The population update equation of EO is based on Equation (4). It sets the control volume V to 1 and the time-varying turnover rate ρ to a random value between 0 and 1. Considering the current number of iterations k and the maximum number of iterations K, the population update equation is shown as:
x_{i,d}(k+1) = x_{i,d}^{e}(k) + \left( x_{i,d}(k) - x_{i,d}^{e}(k) \right) \cdot H_{i,d}(k) + \frac{G_{i,d}(k)}{\rho_{i,d}(k) \cdot V} \cdot \left( 1 - H_{i,d}(k) \right), \quad i = 1, 2, 3, \ldots, N, \; d = 1, 2, 3, \ldots, D, \; k = 1, 2, 3, \ldots, K \qquad (8)
where x^e denotes one particle randomly selected from the equilibrium pool. An appropriate setting of H can help EO balance exploration and exploitation. Here, H can be expressed as:
H = e^{-\rho (t - t_0)} \qquad (9)
where both t_0 and t are related to time. t_0 is defined as:
t_0 = \frac{1}{\rho} \ln \left[ -a \times f(r_1 - 0.5) \cdot (1 - e^{-\rho t}) \right] + t \qquad (10)
where a is a constant that adjusts exploration; the larger its value, the greater the exploration ability. r_1 is a random number between 0 and 1, and f(·) is the sign function, whose value depends on the sign of its argument. t is defined as a function of the current iteration number k and the maximum iteration number K with a constant b. Similarly, b is used to adjust exploitation; the higher its value, the higher the exploitation ability.
t = \left( 1 - \frac{k}{K} \right)^{\left( b \frac{k}{K} \right)} \qquad (11)
The generation rate G is another important term in Equation (8), which improves the quality of the solution by enhancing the exploitation. It is also a value that varies with iterations, and it can be described as:
G = G_0 e^{-\rho (t - t_0)} \qquad (12)
where G_0 denotes the initial value. It can be calculated by:
G_{i,d}^{0} = R_{i,d} \cdot \left( x_{i,d}^{e} - \rho_{i,d} \cdot x_{i,d} \right), \quad i = 1, 2, \ldots, N, \; d = 1, 2, \ldots, D \qquad (13)
R = \begin{cases} 0.5\, r_G, & r_P \ge P_G \\ 0, & r_P < P_G \end{cases} \qquad (14)
where r_G and r_P are two random values between 0 and 1. R is used to control the generation term in the particle update; it both determines whether the term takes effect and controls the extent of its impact. P_G denotes the generation probability; when it takes the value of 0, the generation term always works, and when it takes the value of 1, the generation term never works. Therefore, it is set to 0.5 to balance exploration and exploitation.
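Putting Equations (8)–(14) together, one generation of the EO update can be sketched as follows. This is our own reformulation: substituting Equation (10) into Equation (9) collapses H to a·f(r_1 − 0.5)·(e^(−ρt) − 1), which is what the code computes directly, and the defaults a = 2, b = 1, and P_G = 0.5 follow the original EO paper [37].

```python
import numpy as np

def eo_update(X, pool, k, K, a=2.0, b=1.0, P_G=0.5, rng=None):
    """One EO generation following Equations (8)-(14), with V = 1."""
    rng = rng or np.random.default_rng()
    N, D = X.shape
    t = (1 - k / K) ** (b * k / K)                    # Equation (11)
    X_new = np.empty_like(X)
    for i in range(N):
        x_e = pool[rng.integers(len(pool))]           # random pool candidate
        rho = rng.random(D)                           # turnover rate in (0, 1)
        r1 = rng.random(D)
        # H = e^{-rho(t - t0)} with t0 taken from Equation (10)
        H = a * np.sign(r1 - 0.5) * (np.exp(-rho * t) - 1.0)
        r_G, r_P = rng.random(D), rng.random(D)
        R = np.where(r_P >= P_G, 0.5 * r_G, 0.0)      # Equation (14)
        G0 = R * (x_e - rho * X[i])                   # Equation (13)
        G = G0 * H                                    # Equation (12), since e^{-rho(t-t0)} = H
        # Equation (8) with V = 1 (rho is almost surely nonzero)
        X_new[i] = x_e + (X[i] - x_e) * H + G / rho * (1.0 - H)
    return X_new
```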

3. Improved Equilibrium Optimizer

It is widely accepted that the search performance of an algorithm is closely related to its exploration and exploitation behavior [39,57]. Exploration refers to the ability to find various solutions in the space and is expressed as the act of leaving the current region and visiting new regions. It helps the algorithm to jump out of local optima, but also reduces the convergence speed. Exploitation refers to re-searching and mining the current region, with the intention of improving the quality of the solution. It can increase the convergence speed, but also brings the risk of falling into local optima. Thus, there exists an accepted viewpoint that an appropriate ratio of exploration and exploitation is necessary to guarantee the performance of the algorithm [40].
Maintaining population diversity is a simple method to balance exploration and exploitation [58]. Diversity implies differences among individuals in a population, and it is considered to be closely related to the search performance of the algorithm. Some arguments suggest that increasing diversity implies exploration and decreasing diversity implies exploitation [39]. As mentioned earlier, a preference for exploration early and exploitation later helps improve algorithm performance. IEO is expected to provide more exploration by maintaining higher diversity in the early stage, and to favor exploitation by reducing diversity in the later stage.
In IEO, we propose a decreasing equilibrium pool to replace the conventional fixed pool. This decreasing equilibrium pool can keep more candidates in the early stage, and the number of candidates decreases with iterations. This decreasing equilibrium pool can be expressed as:
X^e = \{ x_1^e, x_2^e, x_3^e, \ldots, x_j^e, x_m^e \} \qquad (15)
where x_1^e–x_j^e represent the top-j ranked particles in the population, and j decreases with iterations, which can be expressed as:
j = \mu \times N \times \left( 1 - \frac{k}{K} \right) \qquad (16)
x_m^e = \frac{x_1^e + x_2^e + x_3^e + \cdots + x_j^e}{j} \qquad (17)
where μ is a parameter used to control the initial number of particles in the equilibrium pool, and it takes values in the range (0, 1]. When it takes the value of 1, the pool initially contains the entire population.
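The decreasing pool of Equations (15)–(17) can be sketched as below (our own sketch: rounding j up and flooring it at one particle are our assumptions, since the paper does not state how the non-integer value of Equation (16) is handled; μ = 4/64 is the value selected experimentally in Section 4.2.1):

```python
import numpy as np

def build_decreasing_pool(X, fitness, k, K, mu=4 / 64):
    """Equations (15)-(17): the top-j particles plus their mean, with j
    shrinking linearly as the iteration count k approaches K."""
    N = len(X)
    j = max(1, int(np.ceil(mu * N * (1 - k / K))))   # Equation (16)
    best = X[np.argsort(fitness)[:j]]                # x1_e .. xj_e
    x_m = best.mean(axis=0)                          # Equation (17)
    return np.vstack([best, x_m])                    # pool of shape (j + 1, D)
```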
As shown in Algorithm 1, after evaluating all particles, IEO calculates the value of j and then picks out the j best particles to construct the equilibrium pool. In EO, j is a preset constant. In IEO, j decreases with iteration, which means that the number of particles in the equilibrium pool gradually decreases. In the update equation of EO, Equation (8), each particle obtains information from the particles in the equilibrium pool when updated. More particles in the pool mean more sources of information, and the diversity of information is a prerequisite for higher individual differences. Figure 1 shows a schematic diagram of a fixed equilibrium pool versus a decreasing equilibrium pool. Here, the blue dots represent the best particles in the population, and the light orange dots indicate the particles generated by the mean. The black dots represent the current population. The dotted line connects the current particle with the source of its updating information. In the case of a fixed equilibrium pool, there are always only three sources of information for population updates. In the case of a decreasing equilibrium pool, the sources of information decrease from five to one. This allows IEO to maintain a higher diversity in the early stage and reduces the probability of falling into local optima. In the later stage, IEO gradually discards exploration and pours more resources into exploitation to improve the quality of solutions. Figure 2 depicts the flow chart of IEO.
Algorithm 1: IEO

4. Experiment and Analysis

To verify the performance of IEO, the experiments were divided into two main parts. In the first part, we used 29 benchmark functions from IEEE CEC2017 as the test set, and the peers for comparison included the conventional EO as well as several mainstream algorithms, including GGSA [59], HGSA [60], RGBSO [61], CBSO [62], GLPSO [63,64], SCA [65,66] and WFS [67,68]. GGSA enhances exploitation with additional updates on the population with the best particle. HGSA introduces a log-sigmoid gravitational constant and improves the search performance from the population topology viewpoint. RGBSO replaces clustering with random grouping to reduce time complexity and introduces dynamic step size to reduce the parameter setting load. CBSO uses chaotic local search to avoid stagnation in the exploitation phase. GLPSO is a hybridization of particle swarm optimization and genetic evolution. SCA controls the individual motion using sine and cosine functions to find the optima. WFS is an algorithm inspired by the wingsuit flying sport, which simulates landing to the lowest point to find the optima. The comparison with the conventional EO was used to illustrate the degree of improvement of IEO, and the comparison with other mainstream algorithms was used to illustrate the competitiveness. In the second part, to verify the practicality, we compared the performance of IEO and EO on three real-world optimization problems: the dynamic economic dispatch problem, the spacecraft trajectory optimization problem and dendritic neuron model training.

4.1. Experiment Setup

For the benchmark function experiments, the search range was [−100, 100], and the population size was 100, except for WFS, for which it was set to 10,000 according to the original paper [67]. The algorithms were terminated when the number of function evaluations reached D × 10^4, where D denotes the number of dimensions, which depends on the problem. For real-world optimization problems, the settings depend on the problem; details are given in the problem descriptions. To avoid chance results, each set of experiments was executed 51 times independently, and the mean and standard deviation were used as the final result. The device running these algorithms was a PC with a 3.00 GHz Intel(R) Core(TM) i7-9700 CPU. The software was MATLAB.

4.2. Comparison on Benchmark Functions

In the experiments, 29 benchmark functions from IEEE CEC2017 were used to test the performance of IEO [69]. Among these functions, F1 and F2 are unimodal functions, F3–F9 are simple multimodal functions, F10–F19 are hybrid functions, and F20–F29 are composition functions. The performance of IEO was tested on these 29 benchmark functions with 30, 50, and 100 dimensions, respectively. The performance of the algorithms was expressed by non-parametric statistical tests, convergence graphs, and box-and-whisker diagrams.

4.2.1. Discussion of the Parameter μ

The parameter μ controls the initial number of particles in the equilibrium pool and theoretically takes values in the range (0, 1]. When its value is 1, the equilibrium pool size equals the population size. The size of the equilibrium pool is highly correlated with the convergence of the algorithm, so it is important to determine an appropriate initial value. The test results on IEEE CEC2017 with 30 dimensions were used to pick an appropriate value. The preset values of μ were 1/64, 2/64, 4/64, 8/64, 16/64, 32/64, and 64/64. The experimental results are shown in Table 1.
The Wilcoxon rank-sum test was used to perform the non-parametric statistical tests. It is a one-to-one comparison that is used to describe significance. In the experiment, its null hypothesis was that IEO is not significantly better than its opponent, and the significance level was set at 0.05. When the p-value is less than 0.05, the hypothesis can be rejected; that is, IEO is significantly better than its opponent. The winning side was assigned the marker “+”, and the losing side was assigned the marker “−”; if there was no significant difference between the two, the result was recorded as “≈”. w/t/l was used to record the results of the tests, indicating the number of wins/ties/losses, respectively. In Table 1, the mean values of 51 runs with different μ values are presented separately. w/t/l indicates the results of the Wilcoxon rank-sum tests of IEO vs. EO with different parameters. The result is 22/4/3 for the μ value of 4/64, which is relatively good among these results. In the following experiments, the value of μ was therefore set to 4/64.
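For reference, the test itself is standard and can be reproduced with SciPy as below; the two error samples here are synthetic placeholders, not the paper’s data:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Hypothetical final optimization errors of two algorithms over 51 runs
errs_ieo = rng.normal(1.0, 0.2, 51)
errs_eo = rng.normal(1.3, 0.2, 51)

# One-sided test; null hypothesis: IEO's errors are not smaller than EO's
stat, p = ranksums(errs_ieo, errs_eo, alternative="less")
mark = "+" if p < 0.05 else "≈"
print(f"p = {p:.3g} -> {mark}")
```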

4.2.2. Experimental Data on Benchmark Functions

Table 2, Table 3 and Table 4 give the experimental results on IEEE CEC2017 with 30, 50 and 100 dimensions. They contain the mean and standard deviation (std) of 51 runs, in which the best results on each problem are bolded. IEO achieved the best solution on many problems. Taking 30 dimensions as an example, the results of the Wilcoxon rank-sum test show that IEO significantly outperforms EO on 22 problems, including simple multimodal functions, hybrid functions and composition functions. This demonstrates the ability of IEO to solve problems with complex landscapes and shows that its search ability has been enhanced, proving that the improvement in IEO is effective. In addition, compared to GGSA, HGSA, RGBSO, CBSO, GLPSO, SCA and WFS, IEO also achieved 23, 19, 22, 23, 26, 29 and 27 wins, respectively. This means that IEO is also very competitive.
Similarly, in higher dimensions, IEO still performs well. In 50 and 100 dimensions, IEO obtains 22 and 24 wins over EO, respectively. IEO also achieves the best mean and standard deviation on several problems compared to other mainstream algorithms, including, for example, F3–F5, F7–F11, F15, F16, F19–F23, F26, F28, and F29 on 50 dimensions and F3–F5, F7, F8, F10, F11, F15, F16, F18–F20, F22, F23, and F26–F29 on 100 dimensions. This shows that the improvement in IEO is still applicable in higher dimensions, and it offers the possibility to solve higher dimensional problems. The limitation of IEO is also obvious, because it fails to obtain superiority on unimodal functions, whether with 30, 50 or 100 dimensions. This situation may be due to population divergence. IEO maintains a larger equilibrium pool in the early stage, which also makes its population more divergent in the early stage. As a result, IEO allocates fewer resources to exploitation than EO, which causes it to lose on unimodal functions.
To visualize the search process of IEO, Figure 3, Figure 4 and Figure 5 show the convergence graphs of IEO and its peers on several problems with 30, 50, and 100 dimensions. Convergence graphs visually represent each algorithm’s process of searching for the optima. The horizontal axis indicates the number of function evaluations, and the vertical axis indicates the average optimization error so far. The smaller the optimization error, the stronger the search ability of the algorithm. Among the graphs, F4 and F7 are simple multimodal functions, F16 and F19 are hybrid functions, and F20 and F22 are composition functions. It can be seen that IEO does not have an excellent convergence speed in the early stage, but there is a steep convergence process in the later stage, which is generally considered to indicate that the algorithm has found a better region. The reason for this phenomenon may be that maintaining population diversity in the early stage sacrifices convergence speed but, at the same time, increases the probability of jumping out of local optima. At the late convergence stage, IEO obtained the best results among the nine algorithms, indicating that the strategy of maintaining diversity effectively prevented premature convergence and significantly improved the search performance. Similar situations are shown on 30, 50 and 100 dimensions, which also demonstrates that the strategy remains effective on high-dimensional problems.
To analyze the distribution and quality of the solutions found by IEO, Figure 6, Figure 7 and Figure 8 show the box-and-whisker diagrams of the above problems with 30, 50, and 100 dimensions. Box-and-whisker diagrams are used to observe the distribution and quality of the solutions. The observed data are the final solutions found in the 51 runs. The top edge of the box indicates the third quartile, and the bottom edge indicates the first quartile. The red horizontal line indicates the median of the data set. The short black line above the box indicates the maximum, and the short black line below indicates the minimum. The red “+” indicates an outlier. The lower the position of the diagram, the better the quality of the solution found; the shorter the shape of the diagram, the better the stability of the algorithm. The quality of IEO’s solutions is better than that of its peers, both in terms of the median and the overall position of the graph. In addition, the shorter distribution of IEO indicates its stability.

4.3. Comparison on Real-World Optimization Problems

4.3.1. Dynamic Economic Dispatch Problem

The energy issue has long been a factor in both development and dispute, and research on this problem is divided into two directions. Taking electricity as an example, part of the research is dedicated to developing new sources of power, such as the use of wind and solar power [70]. The other part is dedicated to reducing losses in the power system and improving the efficiency of electricity utilization. Since the 1920s, people have been concerned with how to make generator units meet the load demand at the least cost while keeping safety in mind [71]. The problem of arranging the generation mix of generators to meet the load demand at minimum cost within a single time period is known as the static economic dispatch (SED) problem [72]. However, due to ramp limitations, the generator units cannot meet large demand variations instantaneously. To address this, it is necessary to study the dynamic economic dispatch (DED) problem [73,74]. DED schedules generator units over a 24-hour cycle, seeking to minimize costs while meeting ramp limits and other constraints [75]. The objective function of DED is given as:
\text{Minimize: } C(E) = \sum_{t=1}^{T_h} \sum_{i=1}^{N_g} S_i(E_{i,t}), \quad i = 1, 2, 3, \ldots, N_g, \; t = 1, 2, 3, \ldots, T_h \qquad (18)
where
S_i(E_{i,t}) = a_i + b_i E_{i,t} + c_i E_{i,t}^2 + p_i \sin \left( q_i (E_i^{min} - E_{i,t}) \right) \qquad (19)
In Equations (18) and (19), C(E) represents the total cost. t denotes the current time period, and T_h denotes the full time period. N_g is the number of generators, so E_{i,t} denotes the power output of the i-th generator in the t-th time period. S(E) is the fuel cost of a single generator, usually approximated as a quadratic function of the power output; a_i, b_i, c_i, p_i, and q_i denote cost coefficients, and E_i^min denotes the minimum output power of the i-th generator. Considering the energy balance as well as the dynamic nature, the DED problem is subject to power balance constraints, generator constraints and ramp rate limits.
Power balance constraints can be expressed as:
\sum_{i=1}^{N_g} E_{i,t} = E_t^l + E_t^s \qquad (20)
where E_t^l and E_t^s represent the total system load and losses, respectively. E_t^s can be calculated with B-coefficients, shown as:
E_t^s = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} E_{i,t} B_{i,j} E_{j,t} \qquad (21)
Generator constraints can be expressed as:
$$E_i^{min} \le E_{i,t} \le E_i^{max}$$
This inequality expresses that the output power of each generator has a lower limit $E_i^{min}$ and an upper limit $E_i^{max}$.
Ramp rate limits can be expressed as:
$$E_{i,t} - E_{i,t-1} \le R_i^{u}$$
$$E_{i,t-1} - E_{i,t} \le R_i^{l}$$
In reality, the change in a generator's output power is not instantaneous, and the ramp rate limit restricts the rate of change of output power to an acceptable range. $E_{i,t-1}$ indicates the output power in the previous time period. $R_i^{u}$ denotes the upper ramp rate limit of the i-th generator, and $R_i^{l}$ denotes its lower ramp rate limit.
To evaluate the quality of the solutions found by IEO and EO, the evaluation equation used is given as:
$$F = \sum_{t=1}^{T_h} \sum_{i=1}^{N_g} S_i(E_{i,t}) + \varphi \left(\sum_{t=1}^{T_h} \sum_{i=1}^{N_g} E_{i,t} - E_t^{l}\right)^{2} + \varrho \left(\sum_{t=1}^{T_h} \sum_{i=1}^{N_g} E_{i,t} - E_m\right)^{2}$$
where $\varphi$ and $\varrho$ are penalty parameters that assign a higher cost value to solutions violating the constraints, rather than directly discarding them [76]. $E_m$ is defined by:
$$E_m = \begin{cases} E_{i,t-1} - R_i^{l}, & E_{i,t} < E_{i,t-1} - R_i^{l} \\ E_{i,t-1} + R_i^{u}, & E_{i,t} > E_{i,t-1} + R_i^{u} \\ E_{i,t}, & \mathrm{otherwise} \end{cases}$$
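The piecewise definition of $E_m$ is simply a projection of the current output onto the ramp-feasible interval, which can be sketched as follows (the function name is illustrative):

```python
def ramp_feasible_output(E_it, E_prev, R_l, R_u):
    """E_m: clamp E_{i,t} into the ramp-feasible interval
    [E_{i,t-1} - R_i^l, E_{i,t-1} + R_i^u]."""
    if E_it < E_prev - R_l:       # ramping down too fast
        return E_prev - R_l
    if E_it > E_prev + R_u:       # ramping up too fast
        return E_prev + R_u
    return E_it                   # already feasible
```

The second penalty term in the evaluation equation then measures how far each output lies outside its ramp-feasible interval.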
The parameters of the DED instance used and the experimental results are shown in Table 5 and Table 6. The algorithms terminate when the number of function evaluations reaches $D \times 10^{4}$. The mean and standard deviation of IEO are the smallest, which indicates that IEO outperforms its peers on this DED instance.

4.3.2. Spacecraft Trajectory Optimization Problem

The spacecraft trajectory optimization (STO) problem has been studied for many years; it is the problem of finding the best trajectory from one celestial body to another that satisfies the mission requirements [77]. To save propellant during the trip, deep-space maneuvering (DSM) techniques are utilized [78]. This technique allows the spacecraft to fire its engine once at any time within each trajectory leg, resulting in a better trajectory design. The multiple gravity assist (MGA) technique combined with deep space maneuvers is referred to as MGADSM [79]. The trajectory design of MGADSM contains many variables, such as the launch date and time, the planets to fly by, and the number, times, and directions of deep space maneuvers. Therefore, STO problems usually have high dimensionality and a large number of local optima [80]. Optimization using MHAs can provide preliminary quantitative solutions for this problem.
The Cassini 2 mission is an instance of MGADSM [81]. It was designed by NASA to reach Saturn from Earth, with a flyby sequence of Earth-Venus-Venus-Earth-Jupiter-Saturn (shown in Figure 9). The problem has 22 dimensions, and more details can be found in [76]. The algorithms terminate when the number of function evaluations reaches $D \times 10^{4}$. Table 7 gives the experimental results of IEO and its peers on this STO problem, which show that IEO remains very competitive.

4.3.3. Dendritic Neuron Model Training

Artificial neural networks (ANNs) have achieved great success in many realistic optimization fields [82,83,84]. The dendritic neural model (DNM) is a newer model that takes into account the nonlinear information-processing capability of dendrites [85,86]. This neural model can prune dendrites and synapses and adapt its morphology to the task, and it has been shown to be effective on classification problems [87]. However, training this network is difficult because of local optima and saddle points. In DNM training, the weights and thresholds are the two sets of optimization variables, and the goal is to minimize the sum of errors. Here, we train DNM with IEO and EO to demonstrate their search performance. The test problem is XOR, selected from the University of California's machine learning repository [88]. The number of attributes is 3, the number of training samples is 8, the number of test samples is 8, and there are 2 classes. The parameters are set to $M = 6$, $k = 25$, and $\theta_s = 0.3$. Following [87], the population size is set to 50, and the algorithms terminate after 250 iterations. WFS is excluded from this experiment because its performance depends on the population size; using its default population size of 10,000 would be grossly unfair. Table 8 shows the training results; the error of IEO is the smallest, which demonstrates that IEO has an excellent ability to train DNM.
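For orientation, a commonly used formulation of the DNM forward pass [87] can be sketched as follows. The layer structure (synapse, dendrite, membrane, soma) follows the literature, while the function and variable names are ours:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dnm_forward(x, w, theta, k=25.0, theta_s=0.3):
    """DNM forward pass for one sample.
    x: input vector of shape (n,); w, theta: (n, M) weights and thresholds,
    i.e., the two optimization terms mentioned in the text.
    k and theta_s match the parameter values used in the experiment."""
    Y = sigmoid(k * (w * x[:, None] - theta))  # synaptic layer
    Z = np.prod(Y, axis=0)                     # dendritic layer: product
    V = np.sum(Z)                              # membrane layer: sum
    return float(sigmoid(k * (V - theta_s)))   # soma layer, output in (0, 1)
```

A metaheuristic such as IEO then searches over the entries of `w` and `theta` to minimize the classification error on the training samples.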

5. Discussion and Analysis

5.1. Population Diversity Discussion

As stated earlier, population diversity is relevant to exploration and exploitation, and it affects the performance of the algorithm [89]. Therefore, the starting point of the IEO improvement is the consideration and control of diversity. This section discusses the diversity trends of IEO and EO and analyzes whether the improvement works. IEEE CEC2017 is used as an example to observe the diversity of the two algorithms on 30, 50, and 100 dimensions. The population diversity is calculated as follows:
$$Z = \frac{1}{N} \sum_{i=1}^{N} \left(x_i - \bar{x}\right)^{2}$$
$$\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i$$
where $Z$ denotes the population diversity, $x_i$ is the i-th particle, and $\bar{x}$ is the average of the particles.
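With the particles stored as the rows of an $N \times D$ array and the squared deviation read as the squared Euclidean norm, the measure can be sketched as:

```python
import numpy as np

def population_diversity(X):
    """Z: mean squared deviation of the particles from their centroid.
    X is an (N, D) array whose rows are the particles."""
    x_bar = X.mean(axis=0)                         # centroid of the population
    return float(((X - x_bar) ** 2).sum(axis=1).mean())
```

Logging this quantity once per iteration produces curves like those in Figure 10.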
Figure 10 depicts the change in the diversity of IEO and EO during convergence. Overall, IEO maintains higher population diversity than EO in the early stage, while the two have similar diversity levels in the later stage. Taking 30 dimensions as an example, according to Equation (16), when the number of iterations reaches about 1080, the equilibrium pool size of IEO equals that of EO, yet at this point the diversity of IEO is still higher. When the number of iterations reaches about 2000, their diversity is similar, but the equilibrium pool of IEO is smaller than that of EO. This shows that a larger equilibrium pool in the early stage provides more information sources for particle updates, which helps maintain population diversity. After this, even though IEO's equilibrium pool becomes smaller, its diversity is not significantly lower than EO's, which shows that a larger equilibrium pool does not always mean higher diversity in the late convergence stage. This may be because, late in convergence, the particles of EO are also mostly trapped in one region. Therefore, it is reasonable for IEO to discard exploration in favor of exploitation in the late stage. In other words, the decreasing equilibrium pool does achieve a better balance of exploration and exploitation by controlling population diversity, and this is what makes IEO outperform EO.

5.2. Computational Complexity Analysis

The performance of IEO is verified in the experiments above; whether it incurs higher computational complexity still needs to be analyzed. For a clear comparison, we use the complexity calculation method of the original EO, which considers five aspects: problem definition, initialization, function evaluation, memory saving, and population update. The population size is denoted by N, the number of problem dimensions by D, the number of iterations by K, and the cost of one function evaluation by C.
Firstly, the computational complexity of EO can be expressed as:
$$O(\mathrm{problem\ definition}) + O(\mathrm{initialization}) + K\left[O(\mathrm{function\ evaluations}) + O(\mathrm{memory\ saving}) + O(\mathrm{population\ update})\right]$$
$$= O(1) + O(DN) + O(KCN) + O(KN) + O(KDN) \approx O(KDN + KCN)$$
Then, IEO computes the number of equilibrium pool particles once in each iteration, which leads to an O(K) complexity, so the computational complexity of IEO is expressed as:
$$O(KDN + KCN) + O(K) \approx O(KDN + KCN)$$
Therefore, it can be concluded that IEO does not significantly increase the computational complexity while improving the search performance.

6. Conclusions

In this paper, an improved equilibrium optimizer based on a decreasing equilibrium pool is proposed. The decreasing equilibrium pool retains more particles in the early stage of convergence, providing more information sources for population updates; this allows the algorithm to maintain higher population diversity, which benefits exploration. In the late stage, the pool retains fewer particles, enhancing exploitation by giving up some of the resources devoted to exploration. This simple but effective method brings a better balance between exploration and exploitation.
The performance of IEO is demonstrated on 29 benchmark functions from IEEE CEC2017 and three real-world optimization problems. The results on the benchmark functions show that IEO can handle various optimization problems and is competitive with its peers, although they also expose the limitation that IEO struggles to gain an advantage on unimodal functions. In addition, compared with mainstream algorithms, IEO remains competitive on the dynamic economic dispatch problem, the spacecraft trajectory optimization problem, and the artificial neural network training problem. Finally, the population diversity analysis shows that the decreasing equilibrium pool works as envisaged, and the complexity analysis shows that the method does not significantly increase the computational complexity. However, the population structure of IEO is a fully connected network, which still limits its performance. In the future, we will adopt different population structures to further improve performance. IEO is also expected to demonstrate its capabilities in applications such as image segmentation [90], image classification [91], flow shop scheduling problems [92,93,94], and some modeled economics problems [95,96,97].

Author Contributions

Conceptualization, L.Y. and Z.X.; Investigation, Y.L. and G.T.; Methodology, L.Y. and Z.X.; Software, L.Y. and G.T.; Supervision, Z.X.; Validation, Y.L. and G.T.; Visualization, Y.L.; Writing–original draft, L.Y.; Writing–review & editing, L.Y. and Z.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Technology Plan Project of Changzhou grant number CJ20210155, Natural Science Foundation of the Jiangsu Higher Education Institutions of China grant number 21KJD520002 and Jiangsu Province “333” project grant number BRA2020152.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to express our gratitude to the Technology Plan Project of Changzhou (CJ20210155), the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (21KJD520002), and the Jiangsu Province "333" project (BRA2020152) for their funding support.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ANN: Artificial neural network
CBSO: Brain storm optimization with chaotic local search
CJADE: Chaotic local search-based differential evolution
DED: Dynamic economic dispatch
DNM: Dendritic neural model
DSM: Deep-space maneuvering
EO: Equilibrium optimizer
GGSA: Grouping gravitational search algorithm
GLPSO: Genetic learning particle swarm optimization
GSA: Gravitational search algorithm
HGSA: Hierarchical gravitational search algorithm
IEO: Improved equilibrium optimizer
L-SHADE: Success-history-based parameter adaptation for differential evolution using linear population size reduction
MNFEs: Maximum number of function evaluations
MGA: Multiple gravity assist
MHA: Metaheuristic algorithm
NFEs: Number of function evaluations
RGBSO: Random grouping brain storm optimization
SCA: Sine cosine algorithm
SED: Static economic dispatch
STO: Spacecraft trajectory optimization
WFS: Wingsuit flying search

References

  1. Bottou, L.; Curtis, F.E.; Nocedal, J. Optimization methods for large-scale machine learning. Siam Rev. 2018, 60, 223–311. [Google Scholar] [CrossRef]
  2. Sun, S.; Cao, Z.; Zhu, H.; Zhao, J. A survey of optimization methods from a machine learning perspective. IEEE Trans. Cybern. 2019, 50, 3668–3681. [Google Scholar] [CrossRef] [Green Version]
  3. Abitha, R.; Vennila, S.M. A Swarm Based Symmetrical Uncertainty Feature Selection Method for Autism Spectrum Disorders. In Proceedings of the 2019 Third International Conference on Inventive Systems and Control (ICISC), Coimbatore, India, 10–11 January 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 665–669. [Google Scholar]
  4. Das, R.; Saha, S. Gene expression classification using a fuzzy point symmetry based PSO clustering technique. In Proceedings of the 2015 Second International Conference on Soft Computing and Machine Intelligence (ISCMI), Hong Kong, 23–24 November 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 69–73. [Google Scholar]
  5. Ren, Y.; Bai, G. Determination of optimal SVM parameters by using GA/PSO. J. Comput. 2010, 5, 1160–1168. [Google Scholar] [CrossRef]
  6. Panigrahi, S.; Behera, H. Time Series Forecasting Using Differential Evolution-Based ANN Modelling Scheme. Arab. J. Sci. Eng. 2020, 45, 11129–11146. [Google Scholar] [CrossRef]
  7. Aburomman, A.A.; Reaz, M.B.I. A novel SVM-kNN-PSO ensemble method for intrusion detection system. Appl. Soft Comput. 2016, 38, 360–372. [Google Scholar] [CrossRef]
  8. Hussain, K.; Mohd Salleh, M.N.; Cheng, S.; Shi, Y. Metaheuristic research: A comprehensive survey. Artif. Intell. Rev. 2019, 52, 2191–2233. [Google Scholar] [CrossRef] [Green Version]
  9. Gogna, A.; Tayal, A. Metaheuristics: Review and application. J. Exp. Theor. Artif. Intell. 2013, 25, 503–526. [Google Scholar] [CrossRef]
  10. Abdel-Basset, M.; Abdel-Fatah, L.; Sangaiah, A.K. Metaheuristic algorithms: A comprehensive review. In Computational Intelligence for Multimedia Big Data on the Cloud with Engineering Applications; Academic Press: Cambridge, MA, USA, 2018; pp. 185–231. [Google Scholar]
  11. Ma, L.; Huang, M.; Yang, S.; Wang, R.; Wang, X. An adaptive localized decision variable analysis approach to large-scale multiobjective and many-objective optimization. IEEE Trans. Cybern. 2021. Available online: https://ieeexplore.ieee.org/abstract/document/9332241 (accessed on 10 March 2022).
  12. Dokeroglu, T.; Sevinc, E.; Kucukyilmaz, T.; Cosar, A. A survey on new generation metaheuristic algorithms. Comput. Ind. Eng. 2019, 137, 106040. [Google Scholar] [CrossRef]
  13. Beheshti, Z.; Shamsuddin, S.M.H. A review of population-based meta-heuristic algorithms. Int. J. Adv. Soft Comput. Appl. 2013, 5, 1–35. [Google Scholar]
  14. Gao, S.; Wang, W.; Dai, H.; Li, F.; Tang, Z. Improved clonal selection algorithm combined with ant colony optimization. IEICE Trans. Inf. Syst. 2008, 91, 1813–1823. [Google Scholar] [CrossRef] [Green Version]
  15. Engin, O.; Guçlu, A. A new hybrid ant colony optimization algorithm for solving the no-wait flow shop scheduling problems. Appl. Soft Comput. 2018, 72, 166–176. [Google Scholar] [CrossRef]
  16. Maleki, N.; Zeinali, Y.; Niaki, S.T.A. A k-NN method for lung cancer prognosis with the use of a genetic algorithm for feature selection. Expert Syst. Appl. 2021, 164, 113981. [Google Scholar] [CrossRef]
  17. Ma, L.; Cheng, S.; Shi, Y. Enhancing learning efficiency of brain storm optimization via orthogonal learning design. IEEE Trans. Syst. Man Cybern. Syst. 2020, 51, 6723–6742. [Google Scholar] [CrossRef]
  18. Zhao, H.; Liu, K.; Li, S.; Yang, F.; Cheng, S.; Eldeeb, H.H.; Kang, J.; Xu, G. shielding optimization of ipt system based on genetic algorithm for efficiency promotion in EV wireless charging applications. IEEE Trans. Ind. Appl. 2021, 58, 1190–1200. [Google Scholar] [CrossRef]
  19. Zhou, M.; Long, Y.; Zhang, W.; Pu, Q.; Wang, Y.; Nie, W.; He, W. Adaptive genetic algorithm-aided neural network with channel state information tensor decomposition for indoor localization. IEEE Trans. Evol. Comput. 2021, 25, 913–927. [Google Scholar] [CrossRef]
  20. Cui, Z.; Zhang, J.; Wu, D.; Cai, X.; Wang, H.; Zhang, W.; Chen, J. Hybrid many-objective particle swarm optimization algorithm for green coal production problem. Inf. Sci. 2020, 518, 256–271. [Google Scholar] [CrossRef]
  21. Fang, S.; Wang, Y.; Wang, W.; Chen, Y.; Chen, Y. Design of permanent magnet synchronous motor servo system based on improved particle swarm optimization. IEEE Trans. Power Electron. 2022, 37, 5833–5846. [Google Scholar] [CrossRef]
  22. Zhang, Y.; Chen, X.; Lv, D.; Zhang, Y. Optimization of urban heat effect mitigation based on multi-type ant colony algorithm. Appl. Soft Comput. 2021, 112, 107758. [Google Scholar] [CrossRef]
  23. Di Caprio, D.; Ebrahimnejad, A.; Alrezaamiri, H.; Santos-Arteaga, F.J. A novel ant colony algorithm for solving shortest path problems with fuzzy arc weights. Alex. Eng. J. 2022, 61, 3403–3415. [Google Scholar] [CrossRef]
  24. Gao, S.; Vairappan, C.; Wang, Y.; Cao, Q.; Tang, Z. Gravitational search algorithm combined with chaos for unconstrained numerical optimization. Appl. Math. Comput. 2014, 231, 48–62. [Google Scholar] [CrossRef]
  25. Lei, Z.; Gao, S.; Gupta, S.; Cheng, J.; Yang, G. An aggregative learning gravitational search algorithm with self-adaptive gravitational constants. Expert Syst. Appl. 2020, 152, 113396. [Google Scholar] [CrossRef]
  26. Wang, Y.; Gao, S.; Zhou, M.; Yu, Y. A multi-layered gravitational search algorithm for function optimization and real-world problems. IEEE/CAA J. Autom. Sin. 2021, 8, 1–16. [Google Scholar] [CrossRef]
  27. Sharma, S.K.; Konki, S.K.; Khambampati, A.K.; Kim, K.Y. Bladder boundary estimation by gravitational search algorithm using electrical impedance tomography. IEEE Trans. Instrum. Meas. 2020, 69, 9657–9667. [Google Scholar] [CrossRef]
  28. Yu, Y.; Gao, S.; Wang, Y.; Cheng, J.; Todo, Y. ASBSO: An Improved Brain Storm Optimization With Flexible Search Length and Memory-Based Selection. IEEE Access 2018, 6, 36977–36994. [Google Scholar] [CrossRef]
  29. Wang, Y.; Gao, S.; Yu, Y.; Xu, Z. The discovery of population interaction with a power law distribution in brain storm optimization. Memetic Comput. 2019, 11, 65–87. [Google Scholar] [CrossRef]
  30. Yu, Y.; Gao, S.; Wang, Y.; Lei, Z.; Cheng, J.; Todo, Y. A multiple diversity-driven brain storm optimization algorithm with adaptive parameters. IEEE Access 2019, 7, 126871–126888. [Google Scholar] [CrossRef]
  31. Jiang, Y.; Chen, X.; Zheng, F.C.; Niyato, D.; You, X. Brain storm optimization-based edge caching in fog radio access networks. IEEE Trans. Veh. Technol. 2021, 70, 1807–1820. [Google Scholar] [CrossRef]
  32. Ma, L.; Wang, X.; Huang, M.; Lin, Z.; Tian, L.; Chen, H. Two-level master–slave RFID networks planning via hybrid multiobjective artificial bee colony optimizer. IEEE Trans. Syst. Man Cybern. Syst. 2017, 49, 861–880. [Google Scholar] [CrossRef]
  33. Aldhafeeri, A.; Rahmat-Samii, Y. Brain storm optimization for electromagnetic applications: Continuous and discrete. IEEE Trans. Antennas Propag. 2019, 67, 2710–2722. [Google Scholar] [CrossRef]
  34. Mathew, D.; Ram, J.P.; Pillai, D.S.; Kim, Y.J.; Elangovan, D.; Laudani, A.; Mahmud, A. Parameter Estimation of Organic Photovoltaic Cells–A Three-Diode Approach Using Wind-Driven Optimization Algorithm. IEEE J. Photovoltaics 2021, 12, 327–336. [Google Scholar] [CrossRef]
  35. Cheng, J.; Yuan, G.; Zhou, M.; Gao, S.; Huang, Z.; Liu, C. A connectivity-prediction-based dynamic clustering model for VANET in an urban scene. IEEE Internet Things J. 2020, 7, 8410–8418. [Google Scholar] [CrossRef]
  36. Kranina, E.I. China on the way to achieving carbon neutrality. Finans. Financ. J. 2021, 5, 51–61. [Google Scholar] [CrossRef]
  37. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl. Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  38. Warn, A.; Brew, J. Mass balance. Water Res. 1980, 14, 1427–1434. [Google Scholar] [CrossRef]
  39. Črepinšek, M.; Liu, S.H.; Mernik, M. Exploration and exploitation in evolutionary algorithms: A survey. ACM Comput. Surv. (CSUR) 2013, 45, 1–33. [Google Scholar] [CrossRef]
  40. Morales-Castañeda, B.; Zaldivar, D.; Cuevas, E.; Fausto, F.; Rodríguez, A. A better balance in metaheuristic algorithms: Does it exist? Swarm Evol. Comput. 2020, 54, 100671. [Google Scholar] [CrossRef]
  41. Xu, Z.; Yang, H.; Li, J.; Zhang, X.; Lu, B.; Gao, S. Comparative study on single and multiple chaotic maps incorporated grey wolf optimization algorithms. IEEE Access 2021, 9, 77416–77437. [Google Scholar] [CrossRef]
  42. Faris, H.; Aljarah, I.; Al-Betar, M.A.; Mirjalili, S. Grey wolf optimizer: A review of recent variants and applications. Neural Comput. Appl. 2018, 30, 413–435. [Google Scholar] [CrossRef]
  43. Gupta, S.; Deep, K. A novel random walk grey wolf optimizer. Swarm Evol. Comput. 2019, 44, 101–112. [Google Scholar] [CrossRef]
  44. Sun, J.; Liu, X.; Back, T.; Xu, Z. Learning adaptive differential evolution algorithm from optimization experiences by policy gradient. IEEE Trans. Evol. Comput. 2021, 9, 77416–77437. [Google Scholar] [CrossRef]
  45. Li, J.; Yang, L.; Yi, J.; Yang, H.; Todo, Y.; Gao, S. A Simple but Efficient Ranking-Based Differential Evolution. IEICE Trans. Inf. Syst. 2022, 105, 189–192. [Google Scholar] [CrossRef]
  46. Gao, S.; Yu, Y.; Wang, Y.; Wang, J.; Cheng, J.; Zhou, M. Chaotic local search-based differential evolution algorithms for optimization. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 3954–3967. [Google Scholar] [CrossRef]
  47. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1658–1665. [Google Scholar]
  48. Polakova, R. L-SHADE with competing strategies applied to constrained optimization. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia-San Sebastián, Spain, Piscataway, NJ, USA, 5–8 June 2017; pp. 1683–1689. [Google Scholar]
  49. Yang, H.; Gao, S.; Wang, R.L.; Todo, Y. A ladder spherical evolution search algorithm. IEICE Trans. Inf. Syst. 2021, 104, 461–464. [Google Scholar] [CrossRef]
  50. Yang, L.; Gao, S.; Yang, H.; Cai, Z.; Lei, Z.; Todo, Y. Adaptive chaotic spherical evolution algorithm. Memetic Comput. 2021, 13, 383–411. [Google Scholar] [CrossRef]
  51. Shilaja, C.; Arunprasath, T. Optimal power flow using moth swarm algorithm with gravitational search algorithm considering wind power. Future Gener. Comput. Syst. 2019, 98, 708–715. [Google Scholar]
  52. Sabri, N.M.; Puteh, M.; Mahmood, M.R. A review of gravitational search algorithm. Int. J. Adv. Soft Comput. Appl 2013, 5, 1–39. [Google Scholar]
  53. Younes, Z.; Alhamrouni, I.; Mekhilef, S.; Reyasudin, M. A memory-based gravitational search algorithm for solving economic dispatch problem in micro-grid. Ain Shams Eng. J. 2021, 12, 1985–1994. [Google Scholar] [CrossRef]
  54. Song, Z.; Gao, S.; Yu, Y.; Sun, J.; Todo, Y. Multiple chaos embedded gravitational search algorithm. IEICE Trans. Inf. Syst. 2017, 100, 888–900. [Google Scholar] [CrossRef] [Green Version]
  55. Sudholt, D. The benefits of population diversity in evolutionary algorithms: A survey of rigorous runtime analyses. In Theory of Evolutionary Computation; Springer: Berlin/Heidelberg, Germany, 2020; pp. 359–404. [Google Scholar]
  56. Nazaroff, W.W.; Alvarez-Cohen, L. Environmental Engineering Science; John Wiley & Sons: Hoboken, NJ, USA, 2001. [Google Scholar]
  57. Črepinšek, M.; Mernik, M.; Liu, S.H. Analysis of exploration and exploitation in evolutionary algorithms by ancestry trees. Int. J. Innov. Comput. Appl. 2011, 3, 11–19. [Google Scholar] [CrossRef]
  58. Gupta, D.; Ghafir, S. An overview of methods maintaining diversity in genetic algorithms. Int. J. Emerg. Technol. Adv. Eng. 2012, 2, 56–60. [Google Scholar]
  59. Dowlatshahi, M.B.; Nezamabadi-Pour, H. GGSA: A grouping gravitational search algorithm for data clustering. Eng. Appl. Artif. Intell. 2014, 36, 114–121. [Google Scholar] [CrossRef]
  60. Wang, Y.; Yu, Y.; Gao, S.; Pan, H.; Yang, G. A hierarchical gravitational search algorithm with an effective gravitational constant. Swarm Evol. Comput. 2019, 46, 118–139. [Google Scholar] [CrossRef]
  61. Cao, Z.; Shi, Y.; Rong, X.; Liu, B.; Du, Z.; Yang, B. Random grouping brain storm optimization algorithm with a new dynamically changing step size. In Proceedings of the International Conference in Swarm Intelligence, Beijing, China, 25–28 June 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 357–364. [Google Scholar]
  62. Yu, Y.; Gao, S.; Cheng, S.; Wang, Y.; Song, S.; Yuan, F. CBSO: A memetic brain storm optimization with chaotic local search. Memetic Comput. 2017, 10, 353–367. [Google Scholar] [CrossRef]
  63. Gong, Y.J.; Li, J.J.; Zhou, Y.; Li, Y.; Chung, H.S.H.; Shi, Y.H.; Zhang, J. Genetic learning particle swarm optimization. IEEE Trans. Cybern. 2015, 46, 2277–2290. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  64. Lin, A.; Sun, W.; Yu, H.; Wu, G.; Tang, H. Global genetic learning particle swarm optimization with diversity enhancement by ring topology. Swarm Evol. Comput. 2019, 44, 571–583. [Google Scholar] [CrossRef]
  65. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl. Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  66. Singh, N.; Singh, S. A novel hybrid GWO-SCA approach for optimization problems. Eng. Sci. Technol. Int. J. 2017, 20, 1586–1601. [Google Scholar] [CrossRef]
  67. Covic, N.; Lacevic, B. Wingsuit flying search—A novel global optimization algorithm. IEEE Access 2020, 8, 53883–53900. [Google Scholar] [CrossRef]
  68. Mao, Y.; Xu, F.; Zhao, X.; Yan, X. A gearbox fault feature extraction method based on wingsuit flying search algorithm-optimized orthogonal matching pursuit with a compound time-frequency atom dictionary. J. Mech. Sci. Technol. 2021, 35, 4825–4833. [Google Scholar] [CrossRef]
  69. Awad, N.; Ali, M.; Liang, J.; Qu, B.; Suganthan, P. Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective real-parameter numerical optimization. Tech. Rep. 2016. [Google Scholar]
  70. Bushukina, V.I. Specific Features of Renewable Energy Development in the World and Russia. Finans. Financ. J. 2021, 5, 93–107. [Google Scholar] [CrossRef]
  71. Xia, X.; Elaiw, A. Optimal dynamic economic dispatch of generation: A review. Electr. Power Syst. Res. 2010, 80, 975–986. [Google Scholar] [CrossRef]
  72. Elattar, E.E. A hybrid genetic algorithm and bacterial foraging approach for dynamic economic dispatch problem. Int. J. Electr. Power Energy Syst. 2015, 69, 18–26. [Google Scholar] [CrossRef]
  73. Ross, D.W.; Kim, S. Dynamic economic dispatch of generation. IEEE Trans. Power Appar. Syst. 1980, 6, 2060–2068. [Google Scholar] [CrossRef]
  74. Attaviriyanupap, P.; Kita, H.; Tanaka, E.; Hasegawa, J. A hybrid EP and SQP for dynamic economic dispatch with nonsmooth fuel cost function. IEEE Trans. Power Syst. 2002, 17, 411–416. [Google Scholar] [CrossRef]
  75. Zaman, M.; Elsayed, S.M.; Ray, T.; Sarker, R.A. Evolutionary algorithms for dynamic economic dispatch problems. IEEE Trans. Power Syst. 2015, 31, 1486–1495. [Google Scholar] [CrossRef]
  76. Das, S.; Suganthan, P.N. Problem definitions and evaluation criteria for CEC 2011 competition on testing evolutionary algorithms on real world optimization problems. Jadavpur Univ. Nanyang Technol. Univ. Kolkata. pp. 341–359. Available online: https://al-roomi.org/multimedia/CEC_Database/CEC2011/CEC2011_TechnicalReport.pdf (accessed on 10 March 2022).
  77. Rosa Sentinella, M.; Casalino, L. Cooperative evolutionary algorithm for space trajectory optimization. Celest. Mech. Dyn. Astron. 2009, 105, 211–227. [Google Scholar] [CrossRef]
  78. Vasile, M.; Minisci, E.; Locatelli, M. An inflationary differential evolution algorithm for space trajectory optimization. IEEE Trans. Evol. Comput. 2011, 15, 267–281. [Google Scholar] [CrossRef] [Green Version]
  79. Zhu, Y.; Wang, H.; Zhang, J. Spacecraft multiple-impulse trajectory optimization using differential evolution algorithm with combined mutation strategies and boundary-handling schemes. Math. Probl. Eng. 2015, 2015, 949480. [Google Scholar] [CrossRef] [Green Version]
  80. Darani, S.A.; Abdelkhalik, O. Space trajectory optimization using hidden genes genetic algorithms. J. Spacecr. Rocket. 2018, 55, 764–774. [Google Scholar] [CrossRef]
  81. Danoy, G.; Dorronsoro, B.; Bouvry, P. New state-of-the-art results for Cassini2 global trajectory optimization problem. Acta Futur. 2012, 5, 65–72. [Google Scholar]
  82. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133. [Google Scholar] [CrossRef]
  83. Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev. 1958, 65, 386. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  84. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  85. He, H.; Gao, S.; Jin, T.; Sato, S.; Zhang, X. A seasonal-trend decomposition-based dendritic neuron model for financial time series prediction. Appl. Soft Comput. 2021, 108, 107488. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of a fixed equilibrium pool and a decreasing equilibrium pool.
Figure 2. Flowchart of IEO.
Figure 3. Convergence graphs on IEEE CEC2017 with 30 dimensions.
Figure 4. Convergence graphs on IEEE CEC2017 with 50 dimensions.
Figure 5. Convergence graphs on IEEE CEC2017 with 100 dimensions.
Figure 6. Box-and-whisker diagrams on IEEE CEC2017 with 30 dimensions.
Figure 7. Box-and-whisker diagrams on IEEE CEC2017 with 50 dimensions.
Figure 8. Box-and-whisker diagrams on IEEE CEC2017 with 100 dimensions.
Figure 9. The flight trajectory of Cassini 2.
Figure 10. Population diversity analysis on IEEE CEC2017.
Table 1. Experimental data of IEO with different μ values.
| Function | EO (μ = –) | IEO (μ = 1/64) | IEO (2/64) | IEO (4/64) | IEO (8/64) | IEO (16/64) | IEO (32/64) | IEO (64/64) |
| F1 | 3.8201E+03 | 5.5418E+03 | 4.2338E+03 | 3.6199E+03 | 4.0864E+03 | 3.0915E+03 | 2.3701E+03 | 2.6264E+03 |
| F2 | 5.0923E+01 | 5.1493E+02 | 4.4414E+02 | 7.0833E+02 | 1.7886E+03 | 2.7093E+03 | 4.1765E+03 | 6.1440E+03 |
| F3 | 8.4854E+01 | 7.9318E+01 | 8.8599E+01 | 8.8394E+01 | 9.1840E+01 | 9.7913E+01 | 1.0293E+02 | 9.2376E+01 |
| F4 | 6.2329E+01 | 3.2797E+01 | 2.4340E+01 | 2.0598E+01 | 1.8921E+01 | 1.6563E+01 | 1.7072E+01 | 1.7498E+01 |
| F5 | 7.8148E-03 | 1.2427E-03 | 2.7188E-05 | 3.6545E-06 | 7.8805E-07 | 3.2616E-06 | 2.0632E-06 | 8.9197E-06 |
| F6 | 9.0975E+01 | 5.9836E+01 | 5.2140E+01 | 4.9094E+01 | 4.6480E+01 | 4.5556E+01 | 4.5263E+01 | 4.7800E+01 |
| F7 | 5.9598E+01 | 3.5167E+01 | 2.7320E+01 | 2.2282E+01 | 2.0348E+01 | 2.1128E+01 | 1.9193E+01 | 1.8427E+01 |
| F8 | 8.9579E+00 | 1.0975E+00 | 2.6256E-01 | 5.6830E-02 | 4.7921E-02 | 7.1135E-02 | 1.9538E-02 | 1.9572E-02 |
| F9 | 3.2687E+03 | 2.8894E+03 | 2.9942E+03 | 2.6223E+03 | 2.6354E+03 | 2.7862E+03 | 2.8170E+03 | 2.8198E+03 |
| F10 | 5.0615E+01 | 4.3128E+01 | 4.5207E+01 | 3.6975E+01 | 2.7141E+01 | 3.0934E+01 | 4.0697E+01 | 4.4323E+01 |
| F11 | 8.2715E+04 | 7.4212E+04 | 5.0784E+04 | 3.8584E+04 | 4.8540E+04 | 1.0787E+05 | 2.3749E+05 | 2.3750E+05 |
| F12 | 1.9975E+04 | 3.2302E+04 | 1.8792E+04 | 1.9494E+04 | 1.8296E+04 | 1.9240E+04 | 2.1367E+04 | 2.3298E+04 |
| F13 | 5.5963E+03 | 1.1393E+04 | 9.0457E+03 | 1.1780E+04 | 1.5284E+04 | 3.2635E+04 | 3.2804E+04 | 2.7922E+04 |
| F14 | 5.7821E+03 | 7.9448E+03 | 3.5718E+03 | 2.2946E+03 | 2.6415E+03 | 2.3349E+03 | 1.8578E+03 | 3.4511E+03 |
| F15 | 6.0395E+02 | 3.4120E+02 | 1.3976E+02 | 1.0357E+02 | 1.5098E+02 | 1.4537E+02 | 1.1289E+02 | 1.8262E+02 |
| F16 | 1.7232E+02 | 1.2927E+02 | 8.1288E+01 | 5.9592E+01 | 4.9528E+01 | 9.6392E+01 | 1.1846E+02 | 1.4430E+02 |
| F17 | 1.4721E+05 | 1.6634E+05 | 1.7057E+05 | 2.1896E+05 | 2.1736E+05 | 3.3293E+05 | 3.0677E+05 | 2.9835E+05 |
| F18 | 7.2785E+03 | 9.2941E+03 | 4.7905E+03 | 6.2676E+03 | 4.6660E+03 | 3.3079E+03 | 2.9895E+03 | 4.1361E+03 |
| F19 | 2.2445E+02 | 1.3147E+02 | 1.0329E+02 | 8.3284E+01 | 1.0319E+02 | 9.4965E+01 | 1.1937E+02 | 1.2740E+02 |
| F20 | 2.5448E+02 | 2.3210E+02 | 2.2283E+02 | 2.1799E+02 | 2.1482E+02 | 2.1219E+02 | 2.1296E+02 | 2.1334E+02 |
| F21 | 1.0827E+03 | 6.5254E+02 | 2.4967E+02 | 2.5006E+02 | 1.7928E+02 | 1.0000E+02 | 1.5333E+02 | 1.4876E+02 |
| F22 | 4.0823E+02 | 3.8659E+02 | 3.7385E+02 | 3.6466E+02 | 3.6114E+02 | 3.6058E+02 | 3.6207E+02 | 3.6691E+02 |
| F23 | 4.7204E+02 | 4.5447E+02 | 4.4314E+02 | 4.3535E+02 | 4.3182E+02 | 4.3105E+02 | 4.3167E+02 | 4.3573E+02 |
| F24 | 3.8681E+02 | 3.8741E+02 | 3.8692E+02 | 3.8616E+02 | 3.8678E+02 | 3.8649E+02 | 3.8704E+02 | 3.8767E+02 |
| F25 | 1.4754E+03 | 1.2563E+03 | 1.0936E+03 | 1.0218E+03 | 1.0104E+03 | 1.0194E+03 | 1.0443E+03 | 1.0683E+03 |
| F26 | 5.1388E+02 | 5.1375E+02 | 5.1058E+02 | 5.1130E+02 | 5.0935E+02 | 5.0991E+02 | 5.0735E+02 | 5.0456E+02 |
| F27 | 3.5272E+02 | 3.6856E+02 | 3.4200E+02 | 3.3158E+02 | 3.3168E+02 | 3.4552E+02 | 3.9282E+02 | 4.0875E+02 |
| F28 | 5.9015E+02 | 5.3879E+02 | 5.0609E+02 | 4.8857E+02 | 4.8070E+02 | 4.9965E+02 | 5.1428E+02 | 5.1134E+02 |
| F29 | 5.8267E+03 | 7.0522E+03 | 4.8887E+03 | 4.0673E+03 | 4.1686E+03 | 4.1643E+03 | 3.9059E+03 | 9.9617E+03 |
| w/t/l | – | 15/9/5 | 16/11/2 | 22/4/3 | 21/4/4 | 18/6/5 | 18/4/7 | 14/6/9 |
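The μ values in Table 1 control how quickly the equilibrium pool is reduced. As a rough, hypothetical illustration of such a schedule (this is not the paper's exact update rule: the function name `pool_size`, the linear decay, and the starting/ending pool sizes are all assumptions for the sketch), the pool can shrink from an initial candidate set toward the classic EO equilibrium pool as iterations progress, with μ setting what fraction of the run is spent shrinking:

```python
def pool_size(t, t_max, n_init, n_final=5, mu=8/64):
    """Illustrative decreasing schedule for the equilibrium-pool size.

    Hypothetical sketch: the pool starts with n_init candidates and
    shrinks linearly toward n_final (the classic EO pool of four best
    particles plus their average). mu scales the fraction of the run
    spent shrinking; mu = 8/64 was among the best settings in Table 1.
    """
    # progress through the shrinking phase, saturating at 1.0
    frac = min(1.0, (t / t_max) / max(mu, 1e-12))
    return max(n_final, round(n_init - (n_init - n_final) * frac))
```

Under this sketch a small μ discards the extra pool members early, trading exploration for exploitation sooner, which matches the trend visible in Table 1.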
Table 2. Experimental data of IEO vs other algorithms with 30 dimensions.
Algorithms: IEO | EO | GGSA | HGSA | RGBSO
Each row lists IEO's Mean and Std, followed by Mean, Std, and a w/t/l mark for each competitor ("+" flags a function on which IEO performs significantly better).
F13.6199E + 033.5151E + 033.8201E + 034.2554E + 031.8451E + 039.6247E + 022.8402E + 032.5681E + 032.5390E + 032.7946E + 03
F27.0833E + 025.6770E + 025.0923E + 018.1049E + 015.8158E + 046.5549E + 03+4.4830E + 043.5988E + 03+1.8937E + 001.3501E + 01
F38.8394E + 011.7620E + 018.4854E + 011.9797E + 011.3046E + 021.9501E + 01+1.1909E + 022.1410E + 00+8.0098E + 013.1504E + 01
F42.0598E + 015.0532E + 006.2329E + 012.0740E + 01+1.1040E + 021.1557E + 01+1.5123E + 021.3641E + 01+2.1843E + 024.3945E + 01+
F53.6500E - 065.8693E - 067.8150E - 033.6618E - 02+7.9756E + 003.9141E + 00+8.9075E + 005.8584E + 00+5.7551E + 011.0113E + 01+
F64.9094E + 016.9315E + 009.0975E + 011.8113E + 01+3.7082E + 011.9707E + 004.0258E + 012.3497E + 007.2251E + 021.5304E + 02+
F72.2282E + 015.9701E + 005.9598E + 011.6208E + 01+8.5254E + 011.1052E + 01+1.0424E + 028.5489E + 00+1.5915E + 022.8817E + 01+
F85.6830E - 021.4176E - 018.9579E + 002.3641E + 01+1.1100E - 131.5919E - 14+5.4700E - 145.7356E - 143.9163E + 031.1922E + 03+
F92.6223E + 035.4304E + 023.2687E + 037.9000E + 02+3.3096E + 033.8220E + 02+3.1832E + 034.9038E + 02+4.4514E + 035.6905E + 02+
F103.6975E + 012.9939E + 015.0615E + 013.7544E + 01+1.4518E + 023.2270E + 01+9.6132E + 012.9653E + 01+1.5266E + 025.2182E + 01+
F113.8584E + 041.8746E + 048.2715E + 049.5750E + 04+9.5566E + 052.9198E + 06+1.3525E + 057.0641E + 04+8.8516E + 056.1130E + 05+
F121.9494E + 041.8469E + 041.9975E + 041.8023E + 041.7726E + 044.8150E + 031.2462E + 045.2102E + 035.6412E + 042.4798E + 04+
F131.1780E + 041.1625E + 045.5963E + 033.8567E + 032.3074E + 059.8247E + 04+7.2466E + 035.0104E + 033.5561E + 033.0188E + 03
F142.2946E + 032.5517E + 035.7821E + 038.7840E + 03+2.8972E + 031.5176E + 03+7.4044E + 025.7494E + 023.6757E + 043.4187E + 04+
F151.0357E + 021.6551E + 026.0395E + 022.8590E + 02+1.1826E + 032.2901E + 02+1.1531E + 031.8358E + 02+1.5567E + 034.2614E + 02+
F165.9592E + 014.1194E + 011.7232E + 021.2639E + 02+1.0199E + 032.0203E + 02+1.0441E + 031.8983E + 02+8.6791E + 023.1558E + 02+
F172.1896E + 052.0519E + 051.4721E + 051.3929E + 051.5767E + 057.4682E + 046.1170E + 041.9302E + 041.1082E + 057.3741E + 04
F186.2676E + 038.6774E + 037.2785E + 031.1474E + 044.2492E + 031.4513E + 032.8665E + 031.1666E + 035.9510E + 042.5619E + 04+
F198.3284E + 015.7696E + 012.2445E + 021.6266E + 02+8.9584E + 021.7032E + 02+9.0784E + 021.9053E + 02+8.9560E + 022.1587E + 02+
F202.1799E + 025.8174E + 002.5448E + 021.6317E + 01+3.1568E + 021.8209E + 01+3.2085E + 023.5376E + 01+4.1761E + 023.9279E + 01+
F212.5006E + 026.1378E + 021.0827E + 031.6526E + 03+1.0000E + 021.4750E - 10+1.9106E + 026.4388E + 02+4.5386E + 031.6627E + 03+
F223.6466E + 028.6511E + 004.0823E + 022.0902E + 01+5.5997E + 023.5956E + 01+4.7313E + 021.2753E + 02+1.0049E + 031.2095E + 02+
F234.3535E + 025.6508E + 004.7204E + 021.7450E + 01+5.0816E + 023.3266E + 01+5.1817E + 023.9012E + 01+1.1547E + 031.0207E + 02+
F243.8616E + 021.6961E + 003.8681E + 022.3189E + 00+4.2705E + 021.2163E + 01+3.9169E + 028.5953E + 00+3.9042E + 021.2946E + 01
F251.0218E + 036.5066E + 011.4754E + 034.1166E + 02+3.6445E + 025.8097E + 022.5294E + 024.9913E + 016.0681E + 031.0408E + 03+
F265.1130E + 029.6133E + 005.1388E + 028.9043E + 00+6.7680E + 024.5096E + 01+5.5523E + 022.2742E + 01+1.2333E + 032.5234E + 02+
F273.3158E + 025.1621E + 013.5272E + 025.0946E + 01+4.2941E + 022.2637E + 01+3.0973E + 022.6779E + 013.4336E + 025.7683E + 01
F284.8857E + 027.4136E + 015.9015E + 021.4017E + 02+1.4059E + 032.2830E + 02+1.1974E + 032.1161E + 02+1.6752E + 033.4528E + 02+
F294.0673E + 032.4775E + 035.8267E + 033.6953E + 03+4.0222E + 041.5936E + 04+7.4280E + 031.7065E + 03+2.1671E + 051.6109E + 05+
w/t/l totals: 22/4/3 | 23/4/2 | 19/4/6 | 22/2/5
Each row lists IEO's Mean and Std, followed by Mean, Std, and a w/t/l mark for each of four further comparison algorithms (continuation of Table 2).
F13.6199E + 033.5151E + 033.4547E + 032.8831E + 039.8546E + 044.7405E + 051.2300E+101.8861E + 09+7.0600E + 083.3218E + 08+
F27.0833E + 025.6770E + 022.8012E + 003.0761E + 002.1908E + 045.1452E + 03+3.5239E + 046.4800E + 03+1.4999E + 044.3852E + 03+
F38.8394E + 011.7620E + 019.2781E + 011.9072E + 012.9136E + 029.2432E + 01+9.4294E + 022.4076E + 02+2.4543E + 025.2236E + 01+
F42.0598E + 015.0532E + 001.9165E + 023.6548E + 01+1.7613E + 021.9160E + 01+2.7811E + 022.0565E + 01+1.4475E + 022.9777E + 01+
F53.6500E - 065.8693E - 064.9007E + 017.8882E + 00+5.0867E + 002.0623E + 00+4.9563E + 015.7827E + 00+2.5497E + 015.9342E + 00+
F64.9094E + 016.9315E + 004.2736E + 029.7850E + 01+1.6204E + 025.4063E + 01+4.2351E + 023.8486E + 01+2.3444E + 023.7474E + 01+
F72.2282E + 015.9701E + 001.4414E + 022.8042E + 01+1.5346E + 023.8183E + 01+2.5010E + 021.8564E + 01+1.3497E + 023.0966E + 01+
F85.6830E - 021.4176E - 013.1811E + 037.3984E + 02+1.4093E + 019.2473E + 00+4.3222E + 039.1786E + 02+1.7691E + 031.1261E + 03+
F92.6223E + 035.4304E + 024.2537E + 035.5536E + 02+6.5419E + 033.3508E + 02+7.2163E + 032.8934E + 02+4.5989E + 036.4967E + 02+
F103.6975E + 012.9939E + 011.3272E + 024.7078E + 01+1.3224E + 026.0069E + 01+1.0336E + 034.3619E + 02+3.2107E + 026.9430E + 01+
F113.8584E + 041.8746E + 041.9301E + 061.1817E + 06+7.8399E + 061.3277E + 07+1.0400E + 092.3930E + 08+9.8021E + 077.8849E + 07+
F121.9494E + 041.8469E + 045.1355E + 043.4987E + 04+5.5042E + 042.2968E + 054.1700E + 081.8508E + 08+7.4742E + 058.1424E + 05+
F131.1780E + 041.1625E + 042.0972E + 032.3221E + 033.5265E + 048.1008E + 04+1.3739E + 057.2931E + 04+7.6726E + 037.8219E + 03
F142.2946E + 032.5517E + 032.6543E + 041.5272E + 04+8.4908E + 038.3134E + 03+1.2024E + 079.8201E + 06+1.5527E + 051.5805E + 05+
F151.0357E + 021.6551E + 021.1887E + 032.7675E + 02+1.3590E + 032.0549E + 02+2.0011E + 032.1658E + 02+9.7168E + 022.4958E + 02+
F165.9592E + 014.1194E + 014.7674E + 021.8831E + 02+2.7774E + 021.6462E + 02+6.9894E + 021.6541E + 02+3.2220E + 021.0678E + 02+
F172.1896E + 052.0519E + 058.6355E + 044.5953E + 046.9469E + 057.4988E + 05+3.3325E + 061.5395E + 06+2.1557E + 051.4811E + 05
F186.2676E + 038.6774E + 038.7262E + 045.2872E + 04+9.5482E + 031.3932E + 042.4667E + 071.3341E + 07+1.1035E + 061.1299E + 06+
F198.3284E + 015.7696E + 015.0451E + 021.2919E + 02+2.7930E + 021.3767E + 02+6.3236E + 021.2777E + 02+3.9184E + 021.0843E + 02+
F202.1799E + 025.8174E + 003.7707E + 024.3225E + 01+3.7432E + 022.3446E + 01+4.5630E + 021.6812E + 01+3.3317E + 022.8495E + 01+
F212.5006E + 026.1378E + 023.1234E + 032.1178E + 03+1.0209E + 022.3186E + 00+5.8758E + 032.5111E + 03+3.0270E + 028.4357E + 01+
F223.6466E + 028.6511E + 007.0083E + 021.2602E + 02+5.9302E + 022.0985E + 01+6.8450E + 022.3934E + 01+5.3104E + 023.9876E + 01+
F234.3535E + 025.6508E + 007.2306E + 021.2555E + 02+6.5646E + 022.1803E + 01+7.6405E + 022.4139E + 01+5.7911E + 023.6609E + 01+
F243.8616E + 021.6961E + 003.8961E + 028.8160E + 004.3297E + 022.1262E + 01+7.0532E + 027.4080E + 01+5.2346E + 023.6169E + 01+
F251.0218E + 036.5066E + 013.6956E + 032.0393E + 03+2.9447E + 039.3611E + 02+4.3330E + 033.1722E + 02+2.5369E + 037.0180E + 02+
F265.1130E + 029.6133E + 006.7133E + 021.5932E + 02+6.6664E + 022.1532E + 01+7.0512E + 024.1231E + 01+6.1876E + 023.0913E + 01+
F273.3158E + 025.1621E + 013.8304E + 024.5887E + 01+5.4646E + 027.7220E + 01+1.0195E + 031.1502E + 02+6.3251E + 027.8679E + 01+
F284.8857E + 027.4136E + 011.3243E + 032.8811E + 02+8.6819E + 021.7833E + 02+1.7531E + 032.5222E + 02+1.0053E + 031.4405E + 02+
F294.0673E + 032.4775E + 033.9254E + 052.1093E + 05+9.1695E + 041.5387E + 05+6.6719E + 072.4207E + 07+6.8953E + 065.6303E + 06+
w/t/l totals: 23/3/3 | 26/2/1 | 29/0/0 | 27/1/1
Table 3. Experimental data of IEO vs other algorithms with 50 dimensions.
Algorithms: IEO | EO | GGSA | HGSA | RGBSO
Each row lists IEO's Mean and Std, followed by Mean, Std, and a w/t/l mark for each competitor ("+" flags a function on which IEO performs significantly better).
F11.5311E + 032.1089E + 032.8318E + 032.7107E + 03+8.1505E + 021.1902E + 037.9448E + 021.1149E + 032.3789E + 033.4021E + 03
F21.1263E + 043.3268E + 032.8438E + 032.2159E + 031.3567E + 051.0190E + 04+1.1802E + 059.5141E + 03+1.7085E + 033.1764E + 03
F36.9262E + 015.1900E + 018.6346E + 014.3053E + 01+1.8517E + 025.4256E + 01+1.9578E + 023.9702E + 01+1.4475E + 025.6851E + 01+
F45.1383E + 011.0053E + 011.5076E + 022.8656E + 01+2.2679E + 022.0587E + 01+2.6760E + 022.0650E + 01+3.6480E + 027.3741E + 01+
F52.3217E - 052.9363E - 051.3231E - 012.7165E - 01+2.4551E + 014.6891E + 00+2.3622E + 014.3942E + 00+6.1796E + 015.6440E + 00+
F68.9595E + 011.1126E + 011.9324E + 023.7980E + 01+6.5646E + 012.8082E + 007.0971E + 014.0368E + 001.3907E + 032.7022E + 02+
F75.0186E + 011.0946E + 011.5495E + 022.4796E + 01+2.3216E + 022.0943E + 01+2.9176E + 021.5818E + 01+3.6140E + 025.5489E + 01+
F88.1466E - 011.7331E + 001.8562E + 023.6280E + 02+6.5047E + 024.5085E + 02+1.1331E + 018.0126E + 011.1631E + 042.2277E + 03+
F94.9947E + 037.8524E + 026.1778E + 039.8142E + 02+5.8562E + 035.2612E + 02+5.7472E + 035.3555E + 02+7.6524E + 037.8010E + 02+
F103.1614E + 013.2042E + 001.2871E + 026.0573E + 01+4.0265E + 028.6827E + 01+1.2616E + 021.3519E + 01+2.0207E + 025.2416E + 01+
F117.2344E + 053.9616E + 058.9088E + 056.6327E + 051.4256E + 063.6763E + 05+8.3355E + 053.7484E + 053.7723E + 061.6649E + 06+
F124.0820E + 035.0743E + 037.0713E + 036.3861E + 03+1.3334E + 042.0854E + 03+5.6931E + 026.3584E + 027.8496E + 045.1728E + 04+
F133.8673E + 043.5577E + 045.1641E + 044.4116E + 04+7.8635E + 043.4919E + 04+2.2901E + 041.2824E + 042.5919E + 042.0255E + 04
F141.1740E + 046.7497E + 031.2340E + 047.6100E + 037.2377E + 032.0667E + 037.8448E + 031.7914E + 033.1964E + 041.6977E + 04+
F156.2370E + 022.7447E + 021.2528E + 033.8490E + 02+1.7589E + 033.0075E + 02+1.8275E + 033.0475E + 02+2.4551E + 035.0773E + 02+
F165.1489E + 022.5757E + 021.0606E + 033.1414E + 02+1.7329E + 033.0440E + 02+1.6705E + 033.0949E + 02+2.0775E + 033.4784E + 02+
F176.0450E + 052.8371E + 052.9733E + 051.9233E + 058.7793E + 056.7728E + 05+1.8151E + 056.7786E + 041.2329E + 057.0267E + 04
F182.1759E + 041.2094E + 041.7492E + 041.3755E + 041.4687E + 042.7956E + 031.4233E + 043.3363E + 031.7090E + 056.3585E + 04+
F193.7736E + 022.0103E + 028.5430E + 023.1313E + 02+1.2186E + 032.8644E + 02+1.3147E + 033.1073E + 02+1.7775E + 033.7329E + 02+
F202.4096E + 029.6413E + 003.2797E + 022.8137E + 01+4.4946E + 023.0677E + 01+4.5614E + 022.6590E + 01+6.8585E + 028.1442E + 01+
F214.9011E + 031.8469E + 036.6503E + 039.7776E + 02+7.7292E + 034.7552E + 02+7.9016E + 035.1888E + 02+8.1619E + 038.8037E + 02+
F224.5500E + 021.3867E + 015.4712E + 023.4378E + 01+8.6327E + 029.3925E + 01+1.0709E + 031.9643E + 02+1.6931E + 032.0448E + 02+
F235.2300E + 021.0887E + 016.1229E + 022.7051E + 01+8.2711E + 024.5751E + 01+8.8663E + 024.8946E + 01+1.8208E + 031.9817E + 02+
F245.6810E + 022.2904E + 015.4884E + 023.6187E + 016.5579E + 022.6512E + 01+5.8126E + 021.4932E + 01+5.4576E + 024.2036E + 01
F251.3522E + 031.0061E + 022.3819E + 034.2440E + 02+3.1675E + 021.1965E + 023.0000E + 027.5964E - 131.1242E + 041.2418E + 03+
F265.8967E + 024.4291E + 016.3278E + 025.5208E + 01+1.3409E + 031.6470E + 02+1.4008E + 032.7882E + 02+2.8731E + 035.5545E + 02+
F274.9820E + 022.5442E + 014.9363E + 022.3803E + 016.2778E + 028.2392E + 01+5.0229E + 022.1101E + 01+5.0109E + 022.1552E + 01+
F284.9978E + 021.0608E + 029.4849E + 023.2370E + 02+2.2300E + 032.8897E + 02+1.7059E + 032.7060E + 02+2.7013E + 034.7793E + 02+
F298.5568E + 051.0185E + 051.0347E + 062.0391E + 05+3.0024E + 074.6679E + 06+1.3401E + 069.6083E + 04+8.9103E + 061.1678E + 06+
w/t/l totals: 22/3/4 | 24/1/4 | 19/2/8 | 24/1/4
Each row lists IEO's Mean and Std, followed by Mean, Std, and a w/t/l mark for each of four further comparison algorithms (continuation of Table 3).
F11.5311E + 032.1089E + 036.1453E + 035.6080E + 03+6.1377E + 064.0451E + 07+3.8443E+105.6635E + 09+1.6722E + 095.8565E + 08+
F21.1263E + 043.3268E + 035.5073E + 012.3696E + 017.8274E + 041.1009E + 04+1.0058E + 051.5734E + 04+3.3961E + 045.7518E + 03+
F36.9262E + 015.1900E + 011.7370E + 025.0161E + 01+8.6434E + 022.4895E + 02+5.6612E + 031.3537E + 03+4.7246E + 021.1917E + 02+
F45.1383E + 011.0053E + 013.2583E + 025.2614E + 01+3.5662E + 024.1859E + 01+5.5115E + 022.9196E + 01+2.7539E + 024.7818E + 01+
F52.3217E - 052.9363E - 055.7724E + 016.9034E + 00+1.4716E + 012.9611E + 00+6.8656E + 014.9617E + 00+3.1695E + 018.1165E + 00+
F68.9595E + 011.1126E + 019.1815E + 021.4937E + 02+3.6769E + 028.1704E + 01+9.0661E + 026.4475E + 01+4.3104E + 026.0698E + 01+
F75.0186E + 011.0946E + 013.3282E + 024.6484E + 01+3.6202E + 023.8549E + 01+5.5121E + 023.0426E + 01+2.7269E + 024.1814E + 01+
F88.1466E - 011.7331E + 009.8839E + 031.6464E + 03+1.4885E + 031.1040E + 03+2.0795E + 043.7093E + 03+7.8349E + 034.1955E + 03+
F94.9947E + 037.8524E + 027.3650E + 038.4749E + 02+1.2306E + 044.4125E + 02+1.3345E + 044.1019E + 02+8.5816E + 031.0360E + 03+
F103.1614E + 013.2042E + 002.0338E + 024.7596E + 01+7.2701E + 026.0065E + 02+4.8016E + 031.2095E + 03+6.7360E + 021.1938E + 02+
F117.2344E + 053.9616E + 051.5017E + 079.5050E + 06+1.2508E + 083.1452E + 08+1.1488E+102.6297E + 09+3.3153E + 081.7748E + 08+
F124.0820E + 035.0743E + 036.3678E + 043.7539E + 04+2.8804E + 061.2757E + 07+2.6609E + 098.4513E + 08+3.3839E + 063.2260E + 06+
F133.8673E + 043.5577E + 042.6805E + 041.7148E + 042.8087E + 054.3822E + 05+1.9363E + 069.1740E + 05+1.1147E + 059.3776E + 04+
F141.1740E + 046.7497E + 032.7605E + 041.6839E + 04+6.1836E + 036.4364E + 033.4000E + 081.4620E + 08+9.2249E + 051.0206E + 06+
F156.2370E + 022.7447E + 022.1824E + 034.7183E + 02+2.7853E + 034.0306E + 02+3.7893E + 033.4367E + 02+1.7932E + 034.8106E + 02+
F165.1489E + 022.5757E + 021.6074E + 033.5387E + 02+1.5051E + 032.7256E + 02+2.5752E + 032.4376E + 02+1.2852E + 032.4662E + 02+
F176.0450E + 052.8371E + 051.7196E + 058.7116E + 043.8682E + 064.3089E + 06+1.4218E + 076.7183E + 06+1.6011E + 069.3822E + 05+
F182.1759E + 041.2094E + 043.8592E + 052.3659E + 05+6.4785E + 043.8132E + 052.1989E + 081.0579E + 08+1.9542E + 061.4034E + 06+
F193.7736E + 022.0103E + 021.2868E + 032.8977E + 02+1.3129E + 033.1924E + 02+1.7730E + 031.8854E + 02+9.6689E + 022.3457E + 02+
F202.4096E + 029.6413E + 006.3133E + 028.2812E + 01+5.7560E + 022.5771E + 01+7.6197E + 023.2250E + 01+4.7299E + 024.9388E + 01+
F214.9011E + 031.8469E + 038.0476E + 039.6464E + 02+1.1221E + 043.8418E + 03+1.3661E + 043.7140E + 02+8.1801E + 032.0803E + 03+
F224.5500E + 021.3867E + 011.1103E + 032.0499E + 02+9.8609E + 024.8815E + 01+1.2083E + 035.6210E + 01+8.0644E + 026.4120E + 01+
F235.2300E + 021.0887E + 011.0726E + 032.3991E + 02+1.0551E + 034.6065E + 01+1.2627E + 035.2719E + 01+8.4928E + 027.0577E + 01+
F245.6810E + 022.2904E + 015.6859E + 023.0893E + 019.2068E + 021.1215E + 02+3.4264E + 035.3420E + 02+9.0381E + 029.0221E + 01+
F251.3522E + 031.0061E + 029.2425E + 032.0678E + 03+5.9581E + 036.9379E + 02+9.0578E + 036.1890E + 02+4.7418E + 035.0167E + 02+
F265.8967E + 024.4291E + 011.3231E + 033.8036E + 02+1.4188E + 031.1258E + 02+1.6723E + 031.6561E + 02+1.0691E + 031.1196E + 02+
F274.9820E + 022.5442E + 015.0882E + 022.2504E + 01+1.2070E + 032.1986E + 02+3.5329E + 034.5303E + 02+1.2604E + 033.0987E + 02+
F284.9978E + 021.0608E + 022.5527E + 034.5029E + 02+2.0372E + 034.9190E + 02+4.2606E + 035.9193E + 02+1.9815E + 033.9379E + 02+
F298.5568E + 051.0185E + 051.1466E + 073.5251E + 06+1.2361E + 076.8582E + 06+5.8302E + 081.8371E + 08+1.9952E + 083.7544E + 07+
w/t/l totals: 25/0/0 | 27/0/2 | 29/0/0 | 29/0/0
Table 4. Experimental data of IEO vs other algorithms with 100 dimensions.
Algorithms: IEO | EO | GGSA | HGSA | RGBSO
Each row lists IEO's Mean and Std, followed by Mean, Std, and a w/t/l mark for each competitor ("+" flags a function on which IEO performs significantly better).
F16.3560E + 037.0892E + 036.1231E + 038.0980E + 033.9078E + 032.6480E + 033.7074E + 033.2112E + 035.4653E + 036.2828E + 03
F29.4784E + 041.2626E + 045.6136E + 041.0045E + 042.9105E + 051.2583E + 04+2.7438E + 051.5520E + 04+5.1279E + 043.1942E + 04
F32.0382E + 024.1646E + 012.2498E + 024.6841E + 01+5.7636E + 021.4310E + 02+2.7355E + 023.8463E + 01+2.4073E + 025.8214E + 01+
F41.7111E + 022.1521E + 014.5075E + 026.2832E + 01+6.2118E + 022.6841E + 01+7.2869E + 023.1368E + 01+8.6633E + 028.0687E + 01+
F52.0472E - 024.5042E - 024.8417E + 003.5677E + 00+3.7170E + 013.5150E + 00+3.1128E + 012.9471E + 00+6.4996E + 014.5777E + 00+
F62.4255E + 022.6057E + 016.2468E + 021.0645E + 02+1.6851E + 021.1553E + 011.5180E + 028.2089E + 003.6453E + 035.0853E + 02+
F71.4878E + 022.2124E + 014.1900E + 026.2955E + 01+6.7253E + 024.4384E + 01+7.9541E + 023.0846E + 01+9.8890E + 021.0064E + 02+
F88.8737E + 011.5820E + 021.0093E + 043.5341E + 03+7.0162E + 039.1907E + 02+2.0808E + 037.1719E + 02+2.3938E + 042.2587E + 03+
F91.2888E + 041.5988E + 031.4130E + 041.4406E + 03+1.2375E + 049.5799E + 021.2248E + 048.7137E + 021.5558E + 041.1400E + 03+
F105.3423E + 029.6716E + 017.4730E + 022.2796E + 02+1.6298E + 042.5086E + 03+4.3935E + 031.4049E + 03+1.2969E + 032.8194E + 02+
F111.3244E + 065.1110E + 051.9871E + 069.0780E + 05+3.8930E + 066.0602E + 06+1.3288E + 064.3456E + 051.6867E + 076.2537E + 06+
F124.3951E + 034.7481E + 036.7403E + 034.1743E + 03+1.3638E + 042.4606E + 03+3.0313E + 032.0629E + 034.1171E + 041.3895E + 04+
F132.3920E + 055.8755E + 043.1875E + 051.1362E + 05+4.4307E + 051.5668E + 05+2.0370E + 053.6212E + 049.2066E + 043.0697E + 04
F141.2903E + 031.3229E + 033.5340E + 033.0669E + 03+2.2918E + 036.8143E + 02+8.6006E + 026.5469E + 023.7644E + 041.7841E + 04+
F152.1082E + 036.1595E + 023.3109E + 037.1837E + 02+4.8087E + 034.4354E + 02+4.9042E + 035.0102E + 02+5.1883E + 038.1451E + 02+
F161.6771E + 034.4497E + 023.1685E + 036.1243E + 02+3.0927E + 034.3703E + 02+3.2029E + 033.9429E + 02+3.8282E + 035.8557E + 02+
F177.6071E + 052.7588E + 056.3651E + 052.8662E + 053.6839E + 057.6764E + 042.8127E + 055.0767E + 042.0925E + 056.2824E + 04
F181.1961E + 031.5248E + 032.4607E + 032.1853E + 03+1.8132E + 031.2299E + 03+1.2256E + 039.4432E + 02+7.4769E + 052.2645E + 05+
F191.5410E + 034.7608E + 022.7351E + 036.3348E + 02+3.8112E + 034.2019E + 02+3.8914E + 033.8269E + 02+3.7429E + 036.3432E + 02+
F203.2158E + 021.0689E + 015.6599E + 026.7899E + 01+8.7799E + 025.2718E + 01+9.1930E + 024.2430E + 01+1.9708E + 032.0516E + 02+
F211.3559E + 041.4261E + 031.5553E + 041.5635E + 03+1.6776E + 047.9950E + 02+1.6942E + 048.2121E + 02+1.7242E + 041.2287E + 03+
F226.3211E + 022.5925E + 018.6506E + 025.4717E + 01+1.8542E + 032.5189E + 02+3.0910E + 032.9055E + 02+3.2047E + 033.3174E + 02+
F239.5196E + 022.3225E + 011.1875E + 036.9890E + 01+1.5803E + 031.1990E + 02+1.2639E + 037.4874E + 01+4.0015E + 034.7499E + 02+
F247.7148E + 025.0315E + 017.8326E + 026.9530E + 011.2930E + 037.2166E + 01+8.3898E + 025.9144E + 01+7.4581E + 026.2279E + 01
F253.7054E + 032.6285E + 027.0410E + 031.3734E + 03+1.1306E + 032.2046E + 035.5245E + 021.8028E + 032.8362E + 042.3977E + 03+
F266.7645E + 022.5237E + 017.4514E + 024.9608E + 01+1.5618E + 031.5587E + 02+1.4526E + 032.1073E + 02+4.8403E + 031.5164E + 03+
F275.6282E + 023.2005E + 015.6685E + 023.4540E + 019.8260E + 021.0030E + 02+6.2466E + 022.4996E + 01+5.7055E + 023.1503E + 01
F281.6854E + 033.9166E + 023.1889E + 035.6692E + 02+4.8089E + 033.7728E + 02+4.4616E + 033.8781E + 02+5.7363E + 036.2445E + 02+
F298.2305E + 034.0830E + 031.7542E + 041.3038E + 04+1.4872E + 051.0947E + 05+9.2667E + 032.1515E + 03+3.2390E + 069.4365E + 05+
w/t/l totals: 24/3/2 | 24/1/4 | 20/3/6 | 23/2/4
Each row lists IEO's Mean and Std, followed by Mean, Std, and a w/t/l mark for each of four further comparison algorithms (continuation of Table 4).
F16.3560E + 037.0892E + 033.5258E + 061.0461E + 06+9.7549E + 041.8484E + 05+1.5315E+119.5837E + 09+3.9040E + 091.4402E + 09+
F29.4784E + 041.2626E + 048.7935E + 033.1767E + 031.4921E + 054.4282E + 04+2.8608E + 051.9950E + 04+1.1178E + 051.2081E + 04+
F32.0382E + 024.1646E + 012.8763E + 025.1955E + 01+3.0813E + 025.0506E + 01+2.5881E + 043.4226E + 03+8.7015E + 021.0630E + 02+
F41.7111E + 022.1521E + 018.2529E + 027.5971E + 01+3.9459E + 026.6578E + 01+1.3513E + 035.7633E + 01+5.6644E + 028.5188E + 01+
F52.0472E - 024.5042E - 026.5575E + 014.4224E + 00+1.4379E - 013.8274E - 02+8.8852E + 013.9499E + 00+2.7660E + 017.5904E + 00+
F62.4255E + 022.6057E + 012.4234E + 033.0967E + 02+7.4169E + 028.5497E + 01+2.6872E + 031.2032E + 02+1.1914E + 031.3883E + 02+
F71.4878E + 022.2124E + 019.3421E + 029.1831E + 01+3.7656E + 025.8945E + 01+1.4049E + 035.5571E + 01+5.9211E + 029.0295E + 01+
F88.8737E + 011.5820E + 022.6756E + 042.7532E + 03+8.9395E + 033.5167E + 03+6.6691E + 046.9060E + 03+1.1369E + 044.4620E + 03+
F91.2888E + 041.5988E + 031.5756E + 041.3042E + 03+1.1190E + 041.1669E + 033.0231E + 044.8705E + 02+1.9258E + 041.6613E + 03+
F105.3423E + 029.6716E + 011.2157E + 031.7182E + 02+3.2309E + 041.3534E + 04+6.9805E + 041.1019E + 04+4.5683E + 037.7965E + 02+
F111.3244E + 065.1110E + 051.1116E + 082.7313E + 07+3.5948E + 071.7463E + 07+5.3312E+107.1359E + 09+1.2962E + 092.8627E + 08+
F124.3951E + 034.7481E + 033.8614E + 041.2081E + 04+2.4195E + 041.9282E + 04+8.2239E + 091.3030E + 09+7.8207E + 064.6658E + 06+
F132.3920E + 055.8755E + 042.8571E + 051.1212E + 05+3.7686E + 063.6418E + 06+1.8012E + 076.9797E + 06+2.0354E + 068.1955E + 05+
F141.2903E + 031.3229E + 033.2329E + 041.6422E + 04+5.7930E + 034.9969E + 03+2.5663E + 096.8683E + 08+1.9148E + 061.6736E + 06+
F152.1082E + 036.1595E + 025.2675E + 038.3990E + 02+4.0851E + 037.4609E + 02+1.0805E + 046.0220E + 02+4.4914E + 035.9263E + 02+
F161.6771E + 034.4497E + 023.8389E + 035.2089E + 02+3.3910E + 034.8404E + 02+9.1346E + 031.2045E + 03+2.9074E + 034.9128E + 02+
F177.6071E + 052.7588E + 054.6030E + 051.4201E + 054.5520E + 064.2238E + 06+3.3404E + 071.1457E + 07+2.4666E + 061.2185E + 06+
F181.1961E + 031.5248E + 032.6760E + 061.4380E + 06+5.5198E + 035.4739E + 03+2.1113E + 096.4117E + 08+6.4761E + 063.8701E + 06+
F191.5410E + 034.7608E + 023.6192E + 034.0433E + 02+2.8771E + 035.3758E + 02+5.1156E + 032.6241E + 02+2.8994E + 034.7903E + 02+
F203.2158E + 021.0689E + 011.5897E + 032.5032E + 02+6.5392E + 025.6775E + 01+1.7783E + 036.6354E + 01+8.6488E + 029.7161E + 01+
F211.3559E + 041.4261E + 031.7115E + 041.5774E + 03+1.2518E + 041.1086E + 033.1304E + 046.2171E + 02+2.0671E + 041.3412E + 03+
F226.3211E + 022.5925E + 012.2624E + 036.1629E + 02+8.1287E + 024.5250E + 01+2.4681E + 038.3459E + 01+1.4996E + 031.4366E + 02+
F239.5196E + 022.3225E + 012.4558E + 037.6563E + 02+1.3755E + 036.7566E + 01+3.8987E + 031.7411E + 02+2.0000E + 031.7065E + 02+
F247.7148E + 025.0315E + 018.0231E + 026.0342E + 01+8.4538E + 025.7489E + 01+1.1454E + 041.5290E + 03+1.6724E + 031.0930E + 02+
F253.7054E + 032.6285E + 022.2574E + 045.5649E + 03+8.5552E + 035.9814E + 02+2.9198E + 041.4708E + 03+1.0961E + 041.0970E + 03+
F266.7645E + 022.5237E + 012.3814E + 038.5984E + 02+8.5467E + 025.9076E + 01+4.0216E + 033.7708E + 02+1.4208E + 031.3119E + 02+
F275.6282E + 023.2005E + 016.2416E + 024.0486E + 01+6.6577E + 024.0560E + 01+1.5468E + 041.5861E + 03+2.1511E + 033.5661E + 02+
F281.6854E + 033.9166E + 026.3818E + 037.2826E + 02+3.3961E + 035.6995E + 02+1.2581E + 041.2497E + 03+5.3223E + 036.4903E + 02+
F298.2305E + 034.0830E + 031.2333E + 073.9628E + 06+1.3182E + 059.1611E + 04+5.4903E + 091.0012E + 09+2.9884E + 087.8176E + 07+
w/t/l totals: 27/0/2 | 27/0/2 | 29/0/0 | 29/0/0
Table 5. Parameters of the DED problem.
E_min = [150, 135, 73, 60, 73, 57, 20, 47, 20]
E_max = [470, 460, 340, 300, 243, 160, 130, 120, 80]
Dimension = 9 × 24 (nine generating units scheduled over 24 h)
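Table 5 gives per-unit output limits for nine generators scheduled over 24 hours, so a candidate solution is a 9 × 24 matrix whose entries must stay within [E_min, E_max]. A minimal, hypothetical bound-repair step for such candidates might look like the sketch below (`clip_schedule` and the list layout are illustrative assumptions, not the paper's implementation; only the limit values come from Table 5):

```python
# Generator output limits from Table 5 (MW), one entry per unit.
E_MIN = [150, 135, 73, 60, 73, 57, 20, 47, 20]
E_MAX = [470, 460, 340, 300, 243, 160, 130, 120, 80]

def clip_schedule(schedule):
    """Clamp a 9x24 dispatch schedule (list of 9 rows of 24 hourly
    outputs) to the generator limits of Table 5. Illustrative only;
    a real DED solver also has to repair the power-balance constraint."""
    return [
        [min(max(p, lo), hi) for p in row]
        for row, lo, hi in zip(schedule, E_MIN, E_MAX)
    ]
```

Clamping keeps every search agent feasible with respect to the box constraints before the dispatch cost is evaluated.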
Table 6. Experimental results of the DED problem.
| Algorithm | Mean | Std |
| IEO | 1.7354E+07 | 3.5084E+04 |
| EO | 1.7510E+07 | 1.4663E+05 |
| GGSA | 2.5123E+07 | 7.7561E+05 |
| HGSA | 2.0501E+07 | 1.7684E+05 |
| RGBSO | 1.8138E+07 | 1.1955E+05 |
| CBSO | 1.8578E+07 | 2.2908E+05 |
| GLPSO | 3.5774E+07 | 7.9333E+05 |
| SCA | 4.8429E+07 | 7.0043E+05 |
| WFS | 2.4939E+07 | 4.7440E+05 |
Table 7. Experimental results of the STO problem.
| Algorithm | Mean | Std |
| IEO | 1.8666E+01 | 3.0406E+00 |
| EO | 1.8705E+01 | 3.9902E+00 |
| GGSA | 3.9056E+01 | 7.6486E+00 |
| HGSA | 3.7395E+01 | 5.8244E+00 |
| RGBSO | 3.0590E+01 | 6.4354E+00 |
| CBSO | 2.7055E+01 | 3.4890E+00 |
| GLPSO | 2.3800E+01 | 2.9912E+00 |
| SCA | 3.4923E+01 | 2.6531E+00 |
| WFS | 2.6464E+01 | 3.3944E+00 |
Table 8. Experimental results of DNM (dendritic neuron model) training.
| Algorithm | Mean | Std |
| IEO | 4.3465E-03 | 1.9451E-02 |
| EO | 6.6939E-03 | 2.5921E-02 |
| GGSA | 1.2922E-01 | 6.1563E-02 |
| HGSA | 7.3157E-02 | 6.3336E-02 |
| RGBSO | 3.5674E-02 | 4.6666E-02 |
| CBSO | 5.8530E-02 | 4.8938E-02 |
| GLPSO | 1.1154E-02 | 3.4852E-02 |
| SCA | 1.4400E-01 | 3.3928E-02 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Yang, L.; Xu, Z.; Liu, Y.; Tian, G. An Improved Equilibrium Optimizer with a Decreasing Equilibrium Pool. Symmetry 2022, 14, 1227. https://doi.org/10.3390/sym14061227

