Article

An Adaptive Dual-Population Collaborative Chicken Swarm Optimization Algorithm for High-Dimensional Optimization

1 School of Computer Science, Northwestern Polytechnical University, Xi’an 710072, China
2 School of Applied Science and Technology, Hainan University, Danzhou 571737, China
3 School of Computer Science, Shaanxi Normal University, Xi’an 710062, China
* Authors to whom correspondence should be addressed.
Biomimetics 2023, 8(2), 210; https://doi.org/10.3390/biomimetics8020210
Submission received: 29 April 2023 / Revised: 16 May 2023 / Accepted: 17 May 2023 / Published: 19 May 2023

Abstract

With the development of science and technology, many optimization problems in real life have developed into high-dimensional optimization problems. The meta-heuristic optimization algorithm is regarded as an effective method to solve high-dimensional optimization problems. However, considering that traditional meta-heuristic optimization algorithms generally have problems such as low solution accuracy and slow convergence speed when solving high-dimensional optimization problems, an adaptive dual-population collaborative chicken swarm optimization (ADPCCSO) algorithm is proposed in this paper, which provides a new idea for solving high-dimensional optimization problems. First, in order to balance the algorithm’s search abilities in terms of breadth and depth, the value of parameter G is given by an adaptive dynamic adjustment method. Second, in this paper, a foraging-behavior-improvement strategy is utilized to improve the algorithm’s solution accuracy and depth-optimization ability. Third, the artificial fish swarm algorithm (AFSA) is introduced to construct a dual-population collaborative optimization strategy based on chicken swarms and artificial fish swarms, so as to improve the algorithm’s ability to jump out of local extrema. The simulation experiments on the 17 benchmark functions preliminarily show that the ADPCCSO algorithm is superior to some swarm-intelligence algorithms such as the artificial fish swarm algorithm (AFSA), the artificial bee colony (ABC) algorithm, and the particle swarm optimization (PSO) algorithm in terms of solution accuracy and convergence performance. In addition, the ADPCCSO algorithm is also utilized in the parameter estimation problem of the Richards model to further verify its performance.

1. Introduction

High-dimensional optimization problems generally refer to problems whose dimension exceeds 100; they are often non-linear and highly complex. In real life, many problems can be expressed as high-dimensional optimization problems, such as large-scale job-shop-scheduling problems [1], vehicle-routing problems [2], feature selection [3], satellite autonomous observation mission planning [4], economic environmental dispatch [5], and parameter estimation. For such problems, the performance of an optimization algorithm often degrades greatly as the problem dimension increases, so it is extremely difficult to obtain the global optimal solution, which poses a technical challenge for solving many practical problems. Therefore, the study of high-dimensional optimization problems has important theoretical and practical significance [6,7].
The meta-heuristic optimization algorithm is a class of random search algorithms proposed by simulating biological intelligence in nature [8], and has been successfully applied in various fields, such as the Internet of Things [9], network information systems [10,11], multi-robot space exploration [12], and so on. At present, hundreds of algorithms have emerged, such as the particle swarm optimization (PSO) algorithm, the artificial bee colony (ABC) algorithm, the artificial fish swarm algorithm (AFSA), the bacterial foraging algorithm (BFA), the grey wolf optimizer (GWO) algorithm, and the sine cosine algorithm (SCA) [13]. These algorithms have become effective methods for solving high-dimensional optimization problems because of their simple structure and strong exploration and exploitation abilities. For example, Huang et al. proposed a hybrid optimization algorithm by combining the frog’s leaping optimization algorithm with the GWO algorithm and verified the performance of the algorithm on 10 high-dimensional complex functions [14]. Gu et al. proposed a hybrid genetic grey wolf algorithm for solving high-dimensional complex functions by combining the genetic algorithm and GWO and verified the performance of the algorithm on 10 high-dimensional complex test functions and 13 standard test functions [15]. Wang et al. improved the grasshopper optimization algorithm by introducing nonlinear inertia weight and used it to solve the optimization problem of high-dimensional complex functions. Experiments on nine benchmark test functions show that the algorithm has significantly improved convergence speed and convergence accuracy [16].
The chicken swarm optimization (CSO) algorithm is a meta-heuristic optimization algorithm proposed by Meng et al. in 2014, which simulates the foraging behavior of chickens in nature [17]. The algorithm realizes rapid optimization through information interaction and collaborative sharing among roosters, hens, and chicks. Because of its good solution accuracy and robustness, it has been widely used in network engineering [18,19], image processing [20,21,22], power systems [23,24], parameter estimation [25,26], and other fields. For example, Kumar et al. utilized the CSO algorithm to select the best peer in the P2P network and proposed an optimal load-balancing strategy. The experimental results show that it has better load balancing than other methods [18]. Cristin et al. applied the CSO algorithm to classify brain tumor severity in magnetic resonance imaging (MRI) images and proposed a brain-tumor image-classification method based on the fractional CSO algorithm. Experimental results show that this method has good performance in accuracy, sensitivity, and so on [20]. Liu et al. developed an improved CSO–extreme-learning machine model by improving the CSO algorithm, applied it to predict the photovoltaic power of a power system, and obtained satisfactory results [23]. Ayvaz applied the CSO algorithm to the parameter estimation of the proton exchange membrane fuel cell model, and it exhibited particularly good performance [25].
Although the CSO algorithm has been successfully applied to various fields and solved many practical problems, the above application examples are all aimed at low-dimensional optimization problems. With the increase in the dimensions of the optimization problems, the CSO algorithm is prone to premature convergence. Therefore, for the optimization problem of high-dimensional complex functions, Yang et al. constructed a genetic CSO algorithm by introducing the idea of a genetic algorithm into the CSO algorithm and verified the performance of the proposed algorithm on 10 benchmark functions [27]. Although the convergence speed and stability were improved, the solution accuracy is still unsatisfactory. Gu et al. realized the solution to high-dimensional complex function optimization problems by removing the chicks in the chicken swarm and introducing an inverted S-shaped inertial weight to construct an adaptive simplified CSO algorithm [28]. Although the proposed algorithm is significantly better than some other algorithms in solution accuracy, there is still room for improvement in convergence speed. By introducing the dissipative structure and differential mutation operation into the basic CSO algorithm, Han constructed a hybrid CSO algorithm to avoid premature convergence in solving high-dimensional complex problems, and verified the performance of the proposed algorithm on 18 standard functions [29]. Although its convergence performance was improved, the solution accuracy should be further enhanced.
To address the aforementioned issues, we propose an adaptive dual-population collaborative CSO (ADPCCSO) algorithm in this paper. The algorithm solves high-dimensional complex problems by using an adaptive adjustment strategy for parameter G, an improvement strategy for foraging behaviors, and a dual-population collaborative optimization strategy. Specifically, the main technical features and originality of this paper are given below.
(1) The value of parameter G is given using an adaptive dynamic adjustment method, so as to balance the breadth and depth of the search abilities of the algorithm.
(2) To improve the solution accuracy and depth optimization ability of the CSO algorithm, an improvement strategy for foraging behaviors is proposed by introducing an improvement factor and adding a kind of chick’s foraging behavior near the optimal value.
(3) A dual-population collaborative optimization strategy based on the chicken swarm and artificial fish swarm is constructed to enhance the global search ability of the whole algorithm.
The simulation experiments on the selected standard test functions and the parameter estimation problem of the Richards model show that the ADPCCSO algorithm is better than some other meta-heuristic optimization algorithms in terms of solution accuracy, convergence performance, etc.
The rest of this paper is arranged as follows. In Section 2, the principle and characteristics of the standard CSO algorithm are briefly introduced. Section 3 describes the proposed ADPCCSO algorithm in detail, including its improvement strategies and main implementation steps. Simulation experiments and analysis are presented in Section 4 to verify the performance of the proposed ADPCCSO algorithm. Finally, we conclude the paper in Section 5.

2. The Basic CSO Algorithm

The CSO algorithm is a random search algorithm based on the collective intelligent behavior of chicken swarms during foraging. In this algorithm, several randomly generated positions within the search range are regarded as chickens, and their fitness function values are regarded as food sources. According to the fitness function values, the whole chicken swarm is divided into roosters, hens, and chicks, where the roosters have the best fitness values, the hens take second place, and the chicks have the worst fitness values. The algorithm relies on the roosters, hens, and chicks to continually interact and share information, and finally finds the best food source [30,31]. Its characteristics are as follows:
(1) The whole chicken swarm is divided into several subgroups, and each subgroup is composed of a rooster, at least one hen and several chicks. The hens and chicks look for food under the leadership of the roosters in their subgroups, and they will also obtain food from other subgroups.
(2) In the basic CSO algorithm, once the hierarchical relationship and dominance relationship between roosters, hens, and chicks are determined, they will remain unchanged for a certain period until the role update condition is met. In this way, they achieve information interaction and find the best food source.
(3) The whole algorithm realizes parallel optimization through the cooperation between roosters, hens, and chicks. The formulas corresponding to their foraging behaviors are as follows:
The roosters’ foraging behavior:
X_{i,j}^{t+1} = X_{i,j}^{t} \times \left( 1 + Randn(0, \sigma^{2}) \right), \quad j \in \{1, 2, \ldots, Dim\} \quad (1)

\sigma^{2} = \begin{cases} 1, & f_i \le f_k \\ \exp\left( \dfrac{f_k - f_i}{\left| f_i \right| + \varepsilon} \right), & f_i > f_k \end{cases}, \quad k \ne i \quad (2)

where X_{i,j}^t stands for the position of the ith rooster at iteration t. Dim is the dimension of the problem to be solved. Randn(0, σ²) is a random number matrix with a mean value of 0 and a variance of σ². ε is the smallest positive normalized floating-point number in IEEE double precision. f_k is the fitness function value of any rooster, and k ≠ i.
The hens’ foraging behavior is described by
X_{i,j}^{t+1} = X_{i,j}^{t} + c_1 \times rand() \times \left( X_{r1,j}^{t} - X_{i,j}^{t} \right) + c_2 \times rand() \times \left( X_{r2,j}^{t} - X_{i,j}^{t} \right) \quad (3)

c_1 = \exp\left( \dfrac{f_i - f_{r1}}{\left| f_i \right| + \varepsilon} \right) \quad (4)

c_2 = \exp\left( f_{r2} - f_i \right) \quad (5)

where X_{i,j}^t is the individual position of the ith hen, X_{r1,j}^t is the position of the group-mate rooster of the ith hen, X_{r2,j}^t is the position of a randomly selected chicken, and r2 ≠ r1.
The chicks’ foraging behavior is described by
X_{i,j}^{t+1} = X_{i,j}^{t} + FL \times \left( X_{m,j}^{t} - X_{i,j}^{t} \right) \quad (6)

where i is an index of the chick, and m is an index of the ith chick’s mother. FL ∈ [0, 2] is a follow coefficient.
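For readers who prefer code, the following is a minimal NumPy sketch of the three position updates above. The function and variable names are ours rather than from the original paper, and the fitness bookkeeping around these updates is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
EPS = np.finfo(float).tiny  # smallest positive normalized IEEE double, the epsilon above

def rooster_update(x_i, f_i, f_k):
    # Gaussian perturbation of the rooster's own position; sigma^2 depends on
    # how its fitness f_i compares with that of another rooster f_k (k != i).
    sigma2 = 1.0 if f_i <= f_k else np.exp((f_k - f_i) / (abs(f_i) + EPS))
    return x_i * (1.0 + rng.normal(0.0, np.sqrt(sigma2), size=x_i.shape))

def hen_update(x_i, x_r1, x_r2, f_i, f_r1, f_r2):
    # A hen moves towards the rooster of its own subgroup (r1) and a randomly
    # chosen chicken (r2), weighted by c1 and c2.
    c1 = np.exp((f_i - f_r1) / (abs(f_i) + EPS))
    c2 = np.exp(f_r2 - f_i)
    return (x_i + c1 * rng.random(x_i.shape) * (x_r1 - x_i)
                + c2 * rng.random(x_i.shape) * (x_r2 - x_i))

def chick_update(x_i, x_m, fl):
    # A chick follows its mother hen with follow coefficient FL in [0, 2].
    return x_i + fl * (x_m - x_i)
```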

3. ADPCCSO Algorithm

To address the premature convergence of the basic CSO algorithm in solving high-dimensional optimization problems, an ADPCCSO algorithm is proposed. First, to balance the breadth and depth search abilities of the basic CSO algorithm, an S-shaped function is utilized to adaptively adjust the value of parameter G. Then, in order to improve the solution accuracy of the algorithm, inspired by the literature [32], an improvement factor is used to dynamically adjust the foraging behaviors of chickens. At the same time, when the role-update condition is met, the chicks are arranged to search for food near the global optimal value, which enhances the depth-optimization ability of the algorithm. Finally, because the AFSA has unique behavior patterns that help an algorithm quickly jump out of local optima when solving high-dimensional optimization problems, it is integrated with the CSO algorithm to construct a dual-population collaborative optimization strategy based on chicken swarms and artificial fish swarms, which enhances the global search ability and thus achieves rapid optimization.

3.1. The Improvement Strategy for Parameter G

In the basic CSO algorithm, the parameter G determines how often the hierarchical relationship and role assignment of the chicken swarm are updated. The setting of an appropriate parameter G plays a crucial role in balancing the breadth and depth search abilities of the algorithm. Too large a value of G means that the information interaction between individuals is slow, which is not conducive to improving the breadth search ability of the algorithm. Too small a value of G will make the information interaction between individuals too frequent, which is not beneficial to enhancing the depth-optimization ability of the algorithm. Considering that the value of parameter G is a constant in the basic CSO algorithm, it is not conducive to balancing the search abilities between breadth and depth. We use Equation (7) to adaptively adjust the value of the parameter G; that is, in the early stage of the algorithm iteration, let G take a smaller value to enhance the breadth optimization ability of the algorithm; in the late stage of iteration of the algorithm, let G take a larger value to enhance the depth-optimization ability of the algorithm.
G = round\left( 40 + 60 / \left( 1 + \exp(15 - 0.5t) \right) \right) \quad (7)

where t represents the current number of iterations and round(·) is a rounding function that rounds an element to the nearest integer.
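As a quick illustration (a sketch, not the authors' code), the schedule in Equation (7) can be computed as follows; G rises from about 40 in the early iterations towards 100 in the late iterations.

```python
import numpy as np

def adaptive_G(t):
    # Equation (7): t is the current iteration number.
    return int(np.round(40 + 60 / (1 + np.exp(15 - 0.5 * t))))

# For example, adaptive_G(1) = 40, adaptive_G(30) = 70, and adaptive_G(100) = 100.
```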

3.2. The Improvement Strategy for Foraging Behaviors

To improve the solution accuracy and depth-optimization ability of the algorithm, we construct an improvement strategy for foraging behaviors in this section; that is, an improvement factor is used in updating formulas of chickens. At the same time, in an effort to improve the depth optimization ability of CSO algorithm, the chicks’ foraging behavior near the optimal value is also added.

3.2.1. Improvement Factor

To enhance the optimization ability of the algorithm, a learning factor was integrated into the foraging formula of roosters in Reference [32], which can be shown as follows:
a(t) = t \times \left( \log(\omega_{\max}) - \log(\omega_{\min}) \right) / M - \log(\omega_{\max}) \quad (8)

\omega(t) = \exp\left( -a(t) \right) \quad (9)

where M is the maximum number of iterations and ω_max and ω_min are the maximum and minimum values of the learning factor, whose values are 0.9 and 0.4, respectively.
The method in Reference [32] improved the optimization ability of the algorithm to a certain degree, but it only modified the position update formula of roosters, which is not conducive to further optimization of the algorithm. Therefore, we slightly modified the learning factor in Reference [32] and named it the improvement factor; that is, through trial and error, we set the maximum and minimum values of the improvement factor to be 0.7 and 0.1, respectively, and then used them in the foraging formulas of roosters, hens, and chicks. The experimental results have demonstrated that the solution accuracy and convergence performance are significantly improved. The modified foraging formulas for roosters, hens, and chicks are shown in Equations (10)–(12):
X_{i,j}^{t+1} = \omega(t) \times X_{i,j}^{t} \times \left( 1 + Randn(0, \sigma^{2}) \right) \quad (10)

X_{i,j}^{t+1} = \omega(t) \times X_{i,j}^{t} + c_1 \times rand() \times \left( X_{r1,j}^{t} - X_{i,j}^{t} \right) + c_2 \times rand() \times \left( X_{best,j}(t) - X_{i,j}^{t} \right) \quad (11)

X_{i,j}^{t+1} = \omega(t) \times X_{i,j}^{t} + FL \times \left( X_{m,j}^{t} - X_{i,j}^{t} \right) + FL \times \left( X_{best,j}(t) - X_{i,j}^{t} \right) \quad (12)
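A small sketch of how the improvement factor and the modified chick update of Equation (12) might be implemented is given below. The sign convention inside ω(t) is our reading of Equations (8) and (9), chosen so that the factor decays from ω_max to ω_min over the run, and all names are illustrative.

```python
import numpy as np

W_MAX, W_MIN = 0.7, 0.1  # improvement-factor bounds used in this paper

def improvement_factor(t, M):
    # Equations (8)-(9): decays smoothly from W_MAX at t = 0 to W_MIN at t = M.
    a = t * (np.log(W_MAX) - np.log(W_MIN)) / M - np.log(W_MAX)
    return np.exp(-a)

def improved_chick_update(x_i, x_m, x_best, fl, t, M):
    # Equation (12): the chick is damped by w(t) and pulled towards both its
    # mother hen x_m and the global best position x_best.
    w = improvement_factor(t, M)
    return w * x_i + fl * (x_m - x_i) + fl * (x_best - x_i)
```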

3.2.2. Chicks’ Foraging Behavior near the Optimal Value

To enhance the depth optimization ability of the CSO algorithm, when the role update condition is met, chicks are allowed to search for food directly near the current optimal value. The corresponding formula is as follows:
X_{i,j}^{t+1} = lb + \left( ub - lb \right) \times rand() \quad (13)

lb = X_{best,j}(t) - X_{best,j}(t) \times rand() \quad (14)

ub = X_{best,j}(t) + X_{best,j}(t) \times rand() \quad (15)
where X_{best,j}(t) is the global optimal individual position at iteration t, and lb and ub are the lower and upper bounds of an interval set near the current optimal value.
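A direct translation of Equations (13)–(15) into NumPy might look like the following sketch (the names are ours). Note that Equation (13) always returns a point lying componentwise between lb and ub, whichever of the two happens to be larger.

```python
import numpy as np

rng = np.random.default_rng(0)

def chick_search_near_best(x_best):
    # Equations (14)-(15): a random interval built around the global best position.
    lb = x_best - x_best * rng.random(x_best.shape)
    ub = x_best + x_best * rng.random(x_best.shape)
    # Equation (13): re-sample the chick inside that interval.
    return lb + (ub - lb) * rng.random(x_best.shape)
```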

3.3. The Dual-Population Collaborative Optimization Strategy

To help the algorithm jump out of local extrema more quickly and thus converge rapidly to the global optimal value, and in view of the good robustness and global search ability of the AFSA, the AFSA is introduced to construct a dual-population collaborative optimization strategy based on the chicken swarm and the artificial fish swarm. With this strategy, the optimal individuals and several random individuals are exchanged between the two populations to break the equilibrium state within each population, so that the algorithm jumps out of local extrema. The flow chart of the dual-population collaborative optimization strategy is shown in Figure 1.
The main steps are as follows:
(1)
Population initialization. Randomly generate two initial populations with a population size of N: the chicken swarm and the artificial fish swarm.
(2)
Chicken swarm optimization. Calculate the fitness function values of the entire chicken swarm and record the optimal value.
(a)
Update the position of chickens.
(b)
Update the optimal value of the current chicken swarm.
(3)
Artificial fish swarm optimization. Calculate the fitness function values of the entire artificial fish swarm and record the optimal value.
(i)
Update the positions of the artificial fish swarm; that is, by simulating the fish behaviors of preying, swarming, and following, compare the fitness function values to find the best behavior and execute it. The corresponding formulas are as follows.
The preying behavior:
X_{i|next} = X_i + rand \times Step \times \dfrac{X_j - X_i}{\left\| X_j - X_i \right\|} \quad (16)

X_j = X_i + rand \times Visual \quad (17)
where Xi is the position of the ith artificial fish. Step and Visual represent the step length and visual field of an artificial fish, respectively.
The swarming behavior:
X_{i|next} = X_i + rand \times Step \times \dfrac{X_c - X_i}{\left\| X_c - X_i \right\|} \quad (18)

X_c = \dfrac{\sum_{i=1}^{n_f} X_{ci}}{n_f} \quad (19)
where nf represents the number of partners within the visual field of the artificial fish. Xc is the center position.
The following behavior:
X_{i|next} = X_i + rand \times Step \times \dfrac{X_{\max} - X_i}{\left\| X_{\max} - X_i \right\|} \quad (20)
where Xmax is the position of an artificial fish with the optimal food concentration that can be found within the current artificial fish’s visual field.
(ii)
Update the optimal value of the current artificial fish swarm.
(4)
Interaction. To realize information interaction and thus break the equilibrium state within the population, first, select the optimal individuals in the chicken swarm and artificial fish swarm for exchange, and then select the remaining Num (Num < N) individuals randomly generated in the two populations for exchange.
(5)
Repeat steps (2)–(4) until the specified maximum number of iterations is reached and the optimal value is output.
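The skeleton below sketches steps (1)–(5) in Python under some simplifying assumptions: `objective` is vectorized (it maps an (N, Dim) array of positions to N fitness values), `chicken_swarm_step` and `fish_swarm_step` are placeholders for one iteration of the CSO and AFSA updates described above, and the interaction of step (4) is read as swapping the best individuals followed by Num randomly chosen ones. This is a rough sketch, not the authors' implementation.

```python
import numpy as np

def dual_population_optimize(objective, dim, chicken_swarm_step, fish_swarm_step,
                             n=100, num=5, max_iter=1000, lower=-100.0, upper=100.0):
    rng = np.random.default_rng(0)
    chickens = rng.uniform(lower, upper, (n, dim))  # step (1): two random populations
    fish = rng.uniform(lower, upper, (n, dim))
    for t in range(1, max_iter + 1):
        chickens = chicken_swarm_step(chickens, objective, t)   # step (2)
        fish = fish_swarm_step(fish, objective, t)              # step (3)
        # step (4): exchange the best individuals, then Num randomly chosen ones
        f_c, f_f = objective(chickens), objective(fish)
        b_c, b_f = np.argmin(f_c), np.argmin(f_f)
        chickens[b_c], fish[b_f] = fish[b_f].copy(), chickens[b_c].copy()
        idx = rng.choice(n, size=num, replace=False)
        chickens[idx], fish[idx] = fish[idx].copy(), chickens[idx].copy()
    merged = np.vstack([chickens, fish])                         # step (5)
    return merged[np.argmin(objective(merged))]
```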

3.4. The Design and Implementation of the ADPCCSO Algorithm

To address the premature convergence issue encountered by the basic CSO algorithm in solving high-dimensional optimization problems, the ADPCCSO algorithm is proposed. Firstly, the algorithm adjusts the parameter G adaptively and dynamically to balance its breadth and depth search abilities. Then, the solution accuracy and depth-optimization ability of the algorithm are enhanced by using the improvement strategy for foraging behaviors described in Section 3.2. Finally, the dual-population collaborative optimization strategy is introduced to help the algorithm jump out of local extrema more quickly. The specific process is as follows:
(1)
Parameter initialization. The numbers of roosters, hens, and chicks are 0.2 × N, 0.6 × N, and N − 0.2 × N − 0.6 × N, respectively.
(2)
Population initialization. Initialize the two populations according to the method described in Section 3.3.
(3)
Chicken swarm optimization. Calculate the fitness function values of chickens and record the optimal value of the current population.
(4)
Conditional judgment. If t = 1, go to step (c); otherwise, execute step (a).
(a)
Judgment of the information interaction condition in the chicken swarm. If t%G = 1, execute step (b); otherwise, go to step (d).
(b)
Chicks’ foraging behavior near the optimal value. Chicks search for food according to Equations (13)–(15) in Section 3.2.2.
(c)
Information interaction. In light of the current fitness function values of the entire chicken swarm, the dominance relationship and hierarchical relationship of the whole population are updated to achieve information interaction.
(d)
Foraging behavior. The chickens with different roles search for food according to Equations (10)–(12).
(e)
Modification of the optimal value in the chicken swarm: after each iteration, the optimal value of the whole chicken swarm is updated.
(5)
Artificial fish swarm optimization. Calculate the fitness function values of the artificial fish swarm and record the optimal value of the current population.
(i)
In the artificial fish swarm, behaviors of swarming, following, preying, and random movement are executed to find the optimal food.
(ii)
Update the optimal value of the whole artificial fish swarm.
(6)
Exchange. This includes the exchange of the optimal individuals and the exchange of several other individuals in the two populations.
(7)
Judgment of ending condition for the algorithm. If the specified maximum number of iterations is reached, the optimal value will be output, and the program will be terminated. Otherwise, go to step (3).
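As a concrete illustration of the population split in step (1) and the role reassignment in step (c), the following sketch sorts a minimization fitness vector and assigns roles in the 0.2/0.6/0.2 proportions used here. The function name and interface are ours, not the paper's.

```python
import numpy as np

def assign_roles(fitness, r_percent=0.2, h_percent=0.6):
    # Best 20% become roosters, next 60% hens, and the remaining individuals chicks.
    n = len(fitness)
    order = np.argsort(fitness)          # ascending: best (smallest) fitness first
    n_r, n_h = int(r_percent * n), int(h_percent * n)
    roosters, hens, chicks = order[:n_r], order[n_r:n_r + n_h], order[n_r + n_h:]
    return roosters, hens, chicks

# With N = 100 this yields 20 roosters, 60 hens, and 20 chicks.
roosters, hens, chicks = assign_roles(np.random.default_rng(0).random(100))
```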

3.5. The Time Complexity Analysis of the ADPCCSO Algorithm

In the standard CSO algorithm, assume that the population size of the chicken swarm is N, the dimension of the solution space is d, the number of iterations of the entire algorithm is M, and the hierarchical relationship of the chicken swarm is updated every G iterations. The numbers of roosters, hens, and chicks in the chicken swarm are Nr, Nh, and Nc, respectively; that is, Nr + Nh + Nc = N. The calculation time of the fitness function value of each chicken is tf. Therefore, the time complexity of the CSO algorithm consists of two stages, namely, the initialization stage and the iteration stage [30,32].
In the initialization stage (including parameter initialization and population initialization), assume that the setting time of parameters is t1, the actual time required to generate a random number is t2, and the sorting time of the fitness function values is t3. Then, the time complexity of the initial stage is T1 = t1 + N × d × t2+ t3 + N × tf = O(N × d + N × tf).
In the iteration stage, let the time for each rooster, hen, and chick to update its position on each dimension be tr, th, and tc, respectively. The time it takes to compare the fitness function values between two individuals is t4, and the time it takes for the chickens to interact with information is t5. Therefore, the time complexity of this stage is as follows.
T2 = M × d × Nr × tr + M × d × Nh × th + M × d × Nc × tc + N × M × tf + M × N × t4 + (M/G) × t5 = M × d × (Nr × tr + Nh × th + Nc × tc) + N × M × (tf + t4) + (M/G) × t5 = O(N × M × d + N × M × tf).
Therefore, the time complexity of the standard CSO algorithm is as follows.
T′ = T1 + T2 = O(N × d + N × tf) + O(N × M × d + N × M × tf) = O(N × M × d + N × M × tf).
On the basis of the standard CSO algorithm, the ADPCCSO algorithm adds the improvement factor in the position update formula of the chicken swarm, the foraging behavior of chicks near the optimal value, and the optimization strategy of the artificial fish swarm. It is assumed that the population size of the artificial fish swarm is N, and the tentative number when performing foraging behavior is try_number. In the swarming and following behaviors, it is necessary to count friend_number times when calculating the values of nf and Xmax. The time to calculate the improvement factor is t6, and the time it takes to perform the foraging, swarming, and following behaviors are t7, t8, and t9, respectively.
Therefore, the time complexity of adding the improvement factor in the position-updating formula is T3 = M × N × t6 = O(M × N). The time complexity of the chicks’ foraging behavior near the optimal value is T4 = (M/G) × d × Nc × tc = O((M/G) × d × Nc).
The time complexity of the artificial fish swarm optimization strategy is mainly composed of three parts: foraging behavior, swarming behavior, and following behavior. Its time complexity is as follows [33].
T5 = M × N × try_number × t7 × d + M × N × Friend_number × t8 × d + M × N × Friend_number × t9 × d = O(M × N × try_number × d) + O(M × N × Friend_number × d) + O(M × N × Friend_number × d) = O(M × N × d).
Therefore, the time complexity of the ADPCCSO algorithm is as follows.
T = T′ + T3 + T4 + T5 = O(N × M × d + N × M × tf) + O(M × N) + O((M/G) × d × Nc) + O(M × N × d) = O(N × M × d + N × M × tf).
It can be seen that the time complexity of the ADPCCSO and standard CSO algorithms is still in the same order of magnitude.

4. Simulation Experiment and Analysis

4.1. The Experimental Setup

In this study, our experiments were conducted on a desktop computer with an Intel® Pentium® CPU G4500 @ 3.5 GHz processor, 12 GB RAM, the Windows 7 operating system, and MATLAB R2016a as the programming environment.
To verify the performance of the ADPCCSO algorithm in solving high-dimensional complex optimization problems, we selected 17 standard high-dimensional test functions in Reference [28] for experimental comparison, which are listed in Table 1. (Because the functions f18~f21 in Reference [28] are fixed low-dimensional functions, we only selected the functions f1~f17 for experimental comparison.) Here, the functions f1~f12 are unimodal functions. Because it is difficult to obtain the global optimal solution, they are often used to test the solution accuracy of the algorithms. The functions f13~f17 are multimodal functions, which are often used to verify the global optimization ability of the algorithms.
To fairly compare the performance of the various algorithms, all algorithms are given the same number of function evaluations (FEs). In our paper, FEs = the population size × the maximum number of iterations. Considering that the population size and the maximum number of iterations of GCSO [27] and DMCSO [29] are 100 and 1000, respectively, we also set these two parameters to 100 and 1000 for the remaining algorithms in the experiments. The experimental data in this paper are obtained by independently running all algorithms on each function 30 times. Other parameter settings are shown in Table 2.
In Table 2, c1 and c2 are two learning factors, and ωmin and ωmax are the lower and upper bounds of the inertial weight. hPercent and rPercent are the proportions of hens and roosters in the entire chicken swarm, respectively. Nc, Nre, and Ned represent the numbers of chemotactic, reproduction, and elimination-dispersal operations, respectively. Visual, Step, and try_number represent the visual field, step length, and maximum tentative number of the artificial fish swarm, respectively. Limit is a control parameter for bees to abandon their food sources. Pc and Pm are the crossover and mutation operators.
In Table 2, the parameters of AFSA are set after trial and error on the basis of the literature [31]. The parameter of ABC is set according to the study [34] where ABC has been proposed. The parameters of PSO, CSO, ASCSO-S [28], GCSO [27], and DMCSO [29] are set according to their corresponding references (namely the studies [27,28,29]), respectively.

4.2. The Effectiveness Test of Two Improvement Strategies

To verify the effectiveness of the two improvement strategies proposed in Section 3.1 and Section 3.3, we compared the ACSO, DCCSO, and CSO algorithms on the 17 test functions in terms of solution accuracy and convergence performance. Here, the ACSO algorithm is an adaptive CSO algorithm in which only Equation (7) is used to adaptively and dynamically adjust the parameter G of the CSO algorithm. The DCCSO algorithm applies only the dual-population collaborative optimization strategy described in Section 3.3 to the CSO algorithm.
The experimental results of the above three algorithms on 17 test functions are listed in Table 3, where the optimal results are marked in bold. In Table 3, “Dim” is the dimension of the problem to be solved, “Mean” is the mean value, and “Std” is the standard deviation. “↑”, “↓”, and “=”, respectively, signify that the operation results obtained by the ACSO and DCCSO algorithms are superior to, inferior to, and equal to those obtained by the basic CSO algorithm.
It can be seen from Table 3 that the optimization results of the ACSO and DCCSO algorithms on almost all benchmark test functions are far superior to those of the CSO algorithm (on only function f2, the optimization results of DCCSO algorithm are slightly inferior to those of CSO algorithm); in particular, the experimental data on functions f10 and f11 reached the theoretical optimal values. This shows the effectiveness of the two improvement strategies proposed in Section 3.1 and Section 3.3 in terms of solution accuracy.
To verify the effectiveness of ACSO and DCCSO algorithms compared with the CSO algorithm in terms of the aspect of convergence performance, the convergence curves of the above three algorithms on some functions are shown in Figure 2. For simplicity, we only list the convergence curves of the aforementioned algorithms on functions f1, f9, f13, and f16, where functions f1 and f9 are unimodal functions and functions f13 and f16 are multimodal functions. In addition, in order to make the convergence curves clearer, we take the logarithmic processing for the average fitness values.
As can be seen from Figure 2, the convergence performance of both ACSO and DCCSO algorithms is significantly superior to that of the CSO algorithm, which proves the effectiveness of the two improvement strategies proposed in this paper in terms of convergence performance.

4.3. The Effectiveness Test of Improvement Strategy for Foraging Behaviors

To test the effectiveness of the improvement strategy proposed in Section 3.2, the learning-factor-based foraging-behavior improvement strategy in the literature [32] is used for experimental comparison. At the same time, with the purpose of conducting the experimental comparison more objectively and fairly, we let the ADPCCSO algorithm use each of the above-mentioned improvement strategies on the 17 test functions to verify the performance of the improvement strategy in Section 3.2. The experimental results are listed in Table 4, where ADPCCSO [32] indicates that the improvement strategy for foraging behavior in the literature [32] is used in the ADPCCSO. In addition, the number of optimal results calculated by each algorithm based on the mean value is also shown in Table 4.
As can be seen from Table 4, the ADPCCSO [32] only obtained optimal values on 5 functions, while the ADPCCSO algorithm obtained optimal values on 16 functions and the theoretical optimal values were obtained on 13 functions. Only on function f5 were the results of ADPCCSO algorithm slightly inferior to those of the ADPCCSO [32]. This shows the effectiveness of the improvement strategy proposed in Section 3.2 in terms of solution accuracy.
To test the effectiveness of the improvement strategy proposed in Section 3.2 in terms of convergence performance, the convergence curves of the above two algorithms are also listed in this section. For simplicity, only their convergence curves on functions f9 and f15 are given, which are shown in Figure 3. It is worth noting that, in order to make the convergence curves more intuitive and clearer, we also take the logarithm of the average fitness values in this section.
It is obvious from Figure 3 that the convergence performance of the ADPCCSO algorithm is better than that of ADPCCSO [32] as a whole. Especially on function f15, the ADPCCSO algorithm has more obvious advantages in convergence performance, and it began to converge stably around the 18th generation.

4.4. Performance Comparison of Several Swarm Intelligence Algorithms

To test the advantages of the ADPCCSO algorithm proposed in this paper over other algorithms in solving high-dimensional optimization problems, in this section, it is compared with five other algorithms, namely ASCSO-S [28], ABC, AFSA, CSO, and PSO. Their best values, worst values, mean values, and standard deviations obtained on the 17 benchmark standard test functions are shown in Table 5, Table 6 and Table 7, where the best values are shown in bold. In addition, we also count the number of optimal values obtained by each algorithm based on the mean value, which are shown in Table 5, Table 6 and Table 7.
It is not difficult to see from Table 5, Table 6 and Table 7 that the ADPCCSO and ASCSO-S algorithms are far superior to the other four swarm intelligence algorithms in terms of solution accuracy and stability. Among them, the ADPCCSO algorithm has the best performance: in particular, when Dim = 500, it obtained the optimal values in all 17 functions, and the number of optimal results calculated by the ASCSO-S algorithm is 14. Additionally, on function f5, the operation results of the ADPCCSO algorithm at Dim = 100 and Dim = 500 are far better than those at Dim = 30, which also shows to a certain extent that the ADPCCSO algorithm is more suitable for handling higher-dimensional complex optimization problems.
As can be seen from Table 5, although the ABC algorithm obtained the optimal values in three functions, its optimization ability worsens as the dimension of the problem increases. On the contrary, AFSA shows a higher optimization ability (when Dim = 500, its optimization ability on 11 functions is much better than that of the ABC algorithm), which is one of the reasons why we constructed a dual-population collaborative optimization strategy based on a chicken swarm and an artificial fish swarm to solve high-dimensional optimization problems. It is noteworthy that the operation results of the PSO algorithm on function f8 are not given in Table 7. This is because when Dim = 500, its fitness function values often exceed the maximum positive value that the computer can represent, resulting in the algorithm being unable to obtain suitable operation results. This also shows that the PSO algorithm is not suitable for handling higher-dimensional complex optimization problems.
Below, we summarize why the solution accuracy of ADPCCSO and ASCSO-S algorithms is better than that of the other four algorithms. This may be due to the fact that both algorithms introduce an improvement factor (which is called an inertial weight) into the position update formula of the chicken swarm. The reason why the performance of the former in terms of solution accuracy is better than that of the latter may be because the former uses an improvement strategy for foraging behaviors, which not only improves the depth optimization ability of the algorithm but also improves its solution accuracy.
To verify the superiority of the ADPCCSO algorithm over other algorithms in terms of convergence performance, this paper presents the convergence curves of the above six algorithms on all 17 test functions with Dim = 100, which are shown in Figure 4. In Figure 4, the average fitness values of all ordinates are also logarithmic. In addition, in order to further present a clearer convergence effect, we have locally enlarged some convergence curves, which is why there are subgraphs in some convergence curves.
As can be seen from Figure 4, the ADPCCSO algorithm has the best convergence performance on 16 functions, but on only function f4, its convergence is slightly inferior to that of the ABC algorithm. ASCSO-S ranks second in terms of convergence performance, and AFSA and CSO are tied for third place. (This is another reason why we construct a dual-population collaborative optimization strategy based on the chicken swarm and artificial fish swarm).
Below, we summarize why the convergence performance of the ADPCCSO and ASCSO-S algorithms is better than that of the other four algorithms as a whole. This may be because both algorithms use adaptive dynamic adjustment strategies. The convergence performance of the former is superior to that of the latter, which may be due to the use of the dual-population collaborative optimization strategy in the ADPCCSO algorithm, which improves the convergence performance of the algorithm. In addition, by carefully observing Figure 4, it is not difficult to find that on functions f1–f3, f6–f8, f10–f14, and f16–f17, it seems that the convergence curves of the ADPCCSO and ASCSO-S algorithms in the late iteration stage are not fully presented. This is because both algorithms have found the theoretical optimal value of 0 in these functions, and 0 has no logarithm.

4.5. Friedman Test of Algorithms

The Friedman test, or Friedman’s method for randomized blocks, is a non-parametric test that does not require the sample to obey a normal distribution; it only uses ranks to judge whether there are significant differences among multiple population distributions. This method was proposed by Friedman in 1937. Because of its simple operation and lack of strict requirements on the data, it is often used to test the performance of algorithms [28,35].
To further test the performance of the ADPCCSO algorithm proposed in this paper, in this section, the Friedman test is utilized to compare the performance of the above six algorithms from a statistical perspective. For the minimum optimization problem, the smaller the average ranking of the algorithm is, the better the performance of the algorithm is. In this section, the SPSS software is used to calculate the average ranking values of all algorithms. The statistical results are shown in Table 8. It is obvious from Table 8 that the ADPCCSO algorithm has the lowest average ranking of 1.5 and therefore has the best performance.

4.6. Performance Comparison of Several Improved CSO Algorithms

To further verify the performance of ADPCCSO algorithm proposed in this paper, two improved CSO algorithms mentioned in the literature [27,29], namely GCSO [27] and DMCSO [29], have also been used to compare with the ADPCCSO algorithm. The experimental results are shown in Table 9. The experimental data of both algorithms are from the corresponding references. It is worth noting that the population size of the above three algorithms is 100, and the maximum number of iterations is 1000, which also facilitates a more fair and reasonable experimental comparison. Other parameter settings are shown in Table 2.
In Table 9, GCSO [27] reports results for 6 of the 17 test functions but only obtained the optimal values for the standard deviation of function f4 and the best values of functions f13 and f14. DMCSO [29] reports results for 12 of the 17 test functions and only obtained the optimal values on function f4. Overall, however, the operation results of the ADPCCSO algorithm are better than those of the above two algorithms; only on function f4 are the results of the ADPCCSO algorithm worse than those of DMCSO [29]. This shows the advantages of the ADPCCSO algorithm.

4.7. Performance Test of ADPCCSO Algorithm for Solving Higher-Dimensional Problems

To further verify the performance of the ADPCCSO algorithm in solving higher-dimensional optimization problems, the relevant experiments for the proposed algorithm on 17 benchmark test functions with Dim = 1000 are also presented in this section. The corresponding experimental results are shown in Table 10.
As can be seen from Table 10, even when the dimension of the optimization problem is adjusted to 1000, the proposed algorithm can still achieve satisfactory optimization accuracy on most test functions; only on functions f4, f5, and f9 do the experimental data fluctuate slightly. This indicates that when the dimension increases, the proposed algorithm will not be greatly affected, which fully demonstrates that the ADPCCSO algorithm still has a competitive advantage in dealing with higher-dimensional optimization problems.

4.8. Parameter Estimation Problem of Richards Model

To verify the performance of ADPCCSO algorithm in solving practical problems, it is applied to the parameter estimation problem of the Richards model in this section. The Richards model is a growth curve model with four unknown parameters, which can adequately simulate the whole process of biological growth. Its mathematical formula is as follows [28,36,37]:
y(t) = \alpha \left( 1 + e^{\beta - \gamma t} \right)^{-\frac{1}{\delta}} \quad (21)

where y(t) stands for the growth amount at time t, and α, β, γ, δ are four unknown parameters.
The core problem of applying the ADPCCSO algorithm to the parameter estimation of the Richards model is the design of the fitness function. In this paper, the fitness function design method mentioned in the studies [28,36] is adopted; that is, the sum of squares of the differences between the observed and predicted values is used as the fitness function. The mathematical formula is as follows:
fit(\alpha, \beta, \gamma, \delta) = \sum_{i=1}^{n} \left( y_i - \alpha \left( 1 + e^{\beta - \gamma t_i} \right)^{-\frac{1}{\delta}} \right)^2 \quad (22)
where yi is the actual growth amount observed at time i. In this section, the actual growth concentrations of glutamate listed in the studies [28,36] are used as the observation values, which are shown in Table 11. The optimal solutions obtained by different algorithms through 30 independent runs are listed in Table 12, where the experimental data of ASCSO-S [28] and VS-FOA [36] come from the corresponding references. The data in Table 13 are the growth concentration of glutamate calculated by using the data in Table 12 in Equation (21). In Table 13, “fit” represents the fitness function value.
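For concreteness, here is a small sketch (ours, with the sign conventions as reconstructed in Equations (21) and (22)) of the Richards curve and of the least-squares fitness that an optimizer such as ADPCCSO would minimize over the four-dimensional parameter vector.

```python
import numpy as np

def richards(t, alpha, beta, gamma, delta):
    # Reconstructed Richards curve: growth amount at time t.
    return alpha * (1.0 + np.exp(beta - gamma * t)) ** (-1.0 / delta)

def richards_fitness(params, t_obs, y_obs):
    # Sum of squared differences between observed and predicted growth amounts.
    alpha, beta, gamma, delta = params
    return float(np.sum((y_obs - richards(t_obs, alpha, beta, gamma, delta)) ** 2))

# Usage: minimize lambda p: richards_fitness(p, t_obs, y_obs), where t_obs and
# y_obs are the observation times and observed glutamate concentrations.
```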
To evaluate the effect of VS-FOA [36], ASCSO-S [28], and ADPCCSO in the parameter estimation of Richards model, we select the root mean square error, mean absolute error, and coefficient of determination as evaluation indexes to evaluate the performance of the above three algorithms. The formulas are as follows:
(1) The root mean square error:
RMSE = \sqrt{ \dfrac{ \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2 }{ n } }
where yi is the actual value observed and y ^ i is the predicted value at time i. n is the number of actual values observed. The root mean square error is used to measure the deviation between the predicted values and the observed values. The smaller its value is, the better the predicted value is.
(2) The mean absolute error:
MAE = \dfrac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|
The mean absolute error is the mean value of the absolute error. It reflects the actual situation of the error of the predicted value better. The smaller its value is, the more precise the predicted value is.
(3) The coefficient of determination:
R^2 = 1 - \dfrac{ \sum_{i=1}^{n} \left( \hat{y}_i - y_i \right)^2 }{ \sum_{i=1}^{n} \left( y_i - \bar{y} \right)^2 }
where y ¯ is the mean value of the actual values observed. The coefficient of determination is generally used to evaluate the conformity between the predicted and actual values. The closer its value is to 1, the better the prediction effect.
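The three indexes are straightforward to compute; a minimal sketch (with our own helper names) is given below.

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean square error: deviation between predicted and observed values.
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    # Mean absolute error.
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def r_squared(y_true, y_pred):
    # Coefficient of determination: closer to 1 means a better fit.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_pred - y_true) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```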
The comparison results of the above three algorithms in the three evaluation indexes are shown in Table 14, where the optimal values are marked in bold.
As can be seen from Table 14, the ADPCCSO algorithm has the optimal values in two of the three indexes. Although the ADPCCSO algorithm is slightly inferior to the other two algorithms in terms of the mean absolute error, its fitness function value is the best of the three algorithms, which can be seen from Table 13. This preliminarily shows that the ADPCCSO algorithm can solve the parameter estimation problem of the Richards model.

5. Conclusions

In view of the premature convergence that the basic CSO algorithm is prone to when solving high-dimensional complex optimization problems, an ADPCCSO algorithm is proposed in this paper. The algorithm first uses an adaptive dynamic adjustment method to set the value of parameter G, so as to balance the algorithm’s depth and breadth search abilities. Then, the solution accuracy and depth-optimization ability of the algorithm are improved by using a foraging-behavior-improvement strategy. Finally, a dual-population collaborative optimization strategy is constructed to improve the algorithm’s global search ability. The experimental results preliminarily show that the proposed algorithm has obvious advantages over the comparison algorithms in terms of solution accuracy and convergence performance. This provides new ideas for the study of high-dimensional optimization problems.
However, although the experimental results of the proposed algorithm on most given benchmark test functions have achieved obvious advantages over the comparison algorithms, there is still a gap between the actual optimal solutions obtained on several functions and their theoretical optimal solutions. Therefore, understanding how to improve the performance of the algorithm to better solve more complex large-scale optimization problems still needs further research. Moreover, in future research work, it is also a good choice to apply this algorithm to other fields, such as the constrained optimization problem, the multi-objective optimization problem, and the vehicle-routing problem.

Author Contributions

Conceptualization, J.L. and L.W.; methodology, J.L.; software, J.L.; validation, J.L., L.W. and M.M.; formal analysis, J.L.; investigation, J.L.; resources, J.L.; data curation, J.L.; writing—original draft preparation, J.L.; writing—review and editing, J.L.; visualization, J.L.; supervision, J.L.; project administration, J.L.; funding acquisition, L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Education Department of Hainan Province of China (Hnkyzc2023-3, Hnky2020-3), the Natural Science Foundation of Hainan Province of China (620QN230), and the National Natural Science Foundation of China (61877038).

Data Availability Statement

All data used to support the findings of this study are included within the article. Color versions of all figures in this paper are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yang, D.; Wu, M.; Li, D.; Xu, Y.; Zhou, X.; Yang, Z. Dynamic opposite learning enhanced dragonfly algorithm for solving large-scale flexible job shop scheduling problem. Knowl.-Based Syst. 2022, 238, 107815. [Google Scholar] [CrossRef]
  2. Li, J.; Duan, Y.; Hao, L.; Zhang, W. Hybrid optimization algorithm for vehicle routing problem with simultaneous delivery-pickup. J. Front. Comput. Sci. Technol. 2022, 16, 1623–1632. [Google Scholar]
  3. Tran, B.; Xue, B.; Zhang, M. Variable-Length Particle Swarm Optimization for Feature Selection on High-Dimensional Classification. IEEE Trans. Evol. Comput. 2019, 23, 473–487. [Google Scholar] [CrossRef]
  4. Gao, X.; Guo, Y.; Ma, G.; Zhang, H.; Li, W. Agile satellite autonomous observation mission planning using hybrid genetic algorithm. J. Harbin Inst. Technol. 2021, 53, 1–9. [Google Scholar]
  5. Larouci, B.; Ayad, A.N.E.I.; Alharbi, H.; Alharbi, T.E.; Boudjella, H.; Tayeb, A.S.; Ghoneim, S.S.; Abdelwahab, S.A.M. Investigation on New Metaheuristic Algorithms for Solving Dynamic Combined Economic Environmental Dispatch Problems. Sustainability 2022, 14, 5554. [Google Scholar] [CrossRef]
  6. Yang, Q.; Zhu, Y.; Gao, X.; Xu, D.; Lu, Z. Elite Directed Particle Swarm Optimization with Historical Information for High-Dimensional Problems. Mathematics 2022, 10, 1384. [Google Scholar] [CrossRef]
  7. Yang, Q.; Chen, W.N.; Gu, T.; Jin, H.; Mao, W.; Zhang, J. An Adaptive Stochastic Dominant Learning Swarm Optimizer for High-Dimensional Optimization. IEEE Trans. Cybern. 2022, 52, 1960–1976. [Google Scholar] [CrossRef]
  8. Pellerin, R.; Perrier, N.; Berthaut, F. A survey of hybrid metaheuristics for the resource-constrained project scheduling problem. Eur. J. Oper. Res. 2020, 280, 395–416. [Google Scholar] [CrossRef]
  9. Forestiero, A. Heuristic recommendation technique in Internet of Things featuring swarm intelligence approach. Expert Syst. Appl. 2022, 187, 115904. [Google Scholar] [CrossRef]
  10. Forestiero, A.; Mastroianni, C.; Spezzano, G. Antares: An ant-inspired P2P information system for a self-structured grid. In Proceedings of the 2007 2nd Bio-Inspired Models of Network, Information and Computing Systems, Budapest, Hungary, 10–13 December 2007. [Google Scholar]
  11. Forestiero, A.; Mastroianni, C.; Spezzano, G. Reorganization and discovery of grid information with epidemic tuning. Future Gener. Comput. Syst. 2008, 24, 788–797. [Google Scholar] [CrossRef]
  12. Gul, F.; Mir, A.; Mir, I.; Mir, S.; Islaam, T.U.; Abualigah, L.; Forestiero, A. A Centralized Strategy for Multi-Agent Exploration. IEEE Access 2022, 10, 126871–126884. [Google Scholar] [CrossRef]
  13. Brajević, I.; Stanimirović, P.S.; Li, S.; Cao, X.; Khan, A.T.; Kazakovtsev, L.A. Hybrid Sine Cosine Algorithm for Solving Engineering Optimization Problems. Mathematics 2022, 10, 4555. [Google Scholar] [CrossRef]
  14. Huang, C.; Wei, X.; Huang, D.; Ye, J. Shuffled frog leaping grey wolf algorithm for solving high dimensional complex functions. Control Theory Appl. 2020, 37, 1655–1666. [Google Scholar]
  15. Gu, Q.H.; Li, X.X.; Lu, C.W.; Ruan, S.L. Hybrid genetic grey wolf algorithm for high dimensional complex function optimization. Control Decis. 2020, 35, 1191–1198. [Google Scholar]
  16. Wang, Q.; Li, F. Two Novel Types of Grasshopper Optimization Algorithms for Solving High-Dimensional Complex Functions. J. Chongqing Inst. Technol. 2021, 35, 277–283. [Google Scholar]
  17. Meng, X.; Liu, Y.; Gao, X.; Zhang, H. A New Bio-Inspired Algorithm: Chicken Swarm Optimization; Springer International Publishing: Cham, Switzerland, 2014; pp. 86–94. [Google Scholar]
  18. Kumar, D.; Pandey, M. An optimal load balancing strategy for P2P network using chicken swarm optimization. Peer-Peer Netw. Appl. 2022, 15, 666–688. [Google Scholar] [CrossRef]
  19. Basha, A.J.; Aswini, S.; Aarthini, S.; Nam, Y.; Abouhawwash, M. Genetic-Chicken Swarm Algorithm for Minimizing Energy in Wireless Sensor Network. Comput. Syst. Sci. Eng. 2022, 44, 1451–1466. [Google Scholar] [CrossRef]
  20. Cristin, D.R.; Kumar, D.K.S.; Anbhazhagan, D.P. Severity Level Classification of Brain Tumor based on MRI Images using Fractional-Chicken Swarm Optimization Algorithm. Comput. J. 2021, 64, 1514–1530. [Google Scholar] [CrossRef]
  21. Bharanidharan, N.; Rajaguru, H. Improved chicken swarm optimization to classify dementia MRI images using a novel controlled randomness optimization algorithm. Int. J. Imaging Syst. Technol. 2020, 30, 605–620. [Google Scholar] [CrossRef]
  22. Liang, J.; Wang, L.; Ma, M. A new image segmentation method based on the ICSO-ISPCNN model. Multimed. Tools Appl. 2020, 79, 28131–28154. [Google Scholar] [CrossRef]
  23. Liu, Z.F.; Li, L.L.; Tseng, M.L.; Lim, M.K. Prediction short-term photovoltaic power using improved chicken swarm optimizer—Extreme learning machine model. J. Clean. Prod. 2020, 248, 119272. [Google Scholar] [CrossRef]
  24. Othman, A.M.; El-Fergany, A.A. Adaptive virtual-inertia control and chicken swarm optimizer for frequency stability in power-grids penetrated by renewable energy sources. Neural Comput. Appl. 2021, 33, 2905–2918. [Google Scholar] [CrossRef]
  25. Ayvaz, A. An improved chicken swarm optimization algorithm for extracting the optimal parameters of proton exchange membrane fuel cells. Int. J. Energy Res. 2022, 46, 15081–15098. [Google Scholar] [CrossRef]
  26. Maroufi, H.; Mehdinejadiani, B. A comparative study on using metaheuristic algorithms for simultaneously estimating parameters of space fractional advection-dispersion equation. J. Hydrol. 2021, 602, 126757. [Google Scholar] [CrossRef]
  27. Yang, X.; Xu, X.; Li, R. Genetic chicken swarm optimization algorithm for solving high-dimensional optimization problems. Comput. Eng. Appl. 2018, 54, 133–139. [Google Scholar]
  28. Gu, Y.; Lu, H.; Xiang, L.; Shen, W. Adaptive Simplified Chicken Swarm Optimization Based on Inverted S-Shaped Inertia Weight. Chin. J. Electron. 2022, 31, 367–386. [Google Scholar] [CrossRef]
  29. Han, M. Hybrid chicken swarm algorithm with dissipative structure and differential mutation. J. Zhejiang Univ. (Sci. Ed.) 2018, 45, 272–283. [Google Scholar]
  30. Zhang, K.; Zhao, X.; He, L.; Li, Z. A chicken swarm optimization algorithm based on improved X-best guided individual and dynamic hierarchy update mechanism. J. Beijing Univ. Aeronaut. Astronaut. 2021, 47, 2579–2593. [Google Scholar]
  31. Liang, J.; Wang, L.; Ma, M. An Improved Chicken Swarm Optimization Algorithm for Solving Multimodal Optimization Problems. Comput. Intell. Neurosci. 2022, 2022, 5359732. [Google Scholar] [CrossRef]
  32. Gu, Y.C.; Lu, H.Y.; Xiang, L.; Shen, W.Q. Adaptive Dynamic Learning Chicken Swarm Optimization Algorithm. Comput. Eng. Appl. 2020, 56, 36–45. [Google Scholar]
  33. Fei, T. Research on Improved Artificial Fish Swarm Algorithm and Its Application in Logistics Location Optimization; Tianjin University: Tianjin, China, 2016. [Google Scholar]
  34. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  35. Song, X.; Zhao, M.; Yan, Q.; Xing, S. A high-efficiency adaptive artificial bee colony algorithm using two strategies for continuous optimization. Swarm Evol. Comput. 2019, 50, 100549. [Google Scholar] [CrossRef]
  36. Wang, J. Parameter estimation of Richards model based on variable step size fruit fly optimization algorithm. Comput. Eng. Des. 2017, 38, 2402–2406. [Google Scholar]
  37. Smirnova, A.; Pidgeon, B.; Chowell, G.; Zhao, Y. The doubling time analysis for modified infectious disease Richards model with applications to COVID-19 pandemic. Math. Biosci. Eng. 2022, 19, 3242–3268. [Google Scholar] [CrossRef]
Figure 1. The flow chart of the dual-population collaborative optimization strategy.
Figure 1. The flow chart of the dual-population collaborative optimization strategy.
Biomimetics 08 00210 g001
Figure 2. The convergence curves of three algorithms on functions f1, f9, f13 and f16.
Figure 2. The convergence curves of three algorithms on functions f1, f9, f13 and f16.
Biomimetics 08 00210 g002aBiomimetics 08 00210 g002b
Figure 3. The convergence curves of two algorithms on functions f9 and f15.
Figure 3. The convergence curves of two algorithms on functions f9 and f15.
Biomimetics 08 00210 g003
Figure 4. Convergence curves of six swarm intelligence algorithms on 17 functions.
Table 1. The description of the test functions.
Type | Function | Name | Search range
Unimodal | $f_1(x) = \sum_{i=1}^{n} x_i^2$ | Sphere | [−100, 100]
Unimodal | $f_2(x) = \sum_{i=1}^{n} |x_i|^{\,i+1}$ | Sum of different powers | [−1, 1]
Unimodal | $f_3(x) = \sum_{i=1}^{n} i x_i^2$ | Sum squares | [−10, 10]
Unimodal | $f_4(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | Rosenbrock | [−5, 10]
Unimodal | $f_5(x) = (x_1 - 1)^2 + \sum_{i=2}^{n} i (2 x_i^2 - x_{i-1})^2$ | Dixon-Price | [−10, 10]
Unimodal | $f_6(x) = \sum_{i=1}^{n} \sum_{j=1}^{i} x_j^2$ | Rotated hyper-ellipsoid | [−65.536, 65.536]
Unimodal | $f_7(x) = \max_i |x_i|$ | Schwefel's P2.21 | [−100, 100]
Unimodal | $f_8(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | Schwefel's P2.22 | [−10, 10]
Unimodal | $f_9(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{rand}[0,1)$ | Quartic | [−1.28, 1.28]
Unimodal | $f_{10}(x) = \sum_{i=1}^{n} \left( \lfloor x_i + 0.5 \rfloor \right)^2$ | Step | [−100, 100]
Unimodal | $f_{11}(x) = 10^6 x_1^2 + \sum_{i=2}^{n} x_i^2$ | Discus | [−100, 100]
Unimodal | $f_{12}(x) = \sum_{i=1}^{n} x_i^2 + \left( \sum_{i=1}^{n} 0.5\, i\, x_i \right)^2 + \left( \sum_{i=1}^{n} 0.5\, i\, x_i \right)^4$ | Zakharov | [−5, 10]
Multimodal | $f_{13}(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\!\left( \frac{x_i}{\sqrt{i}} \right) + 1$ | Griewank | [−600, 600]
Multimodal | $f_{14}(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2 \pi x_i) + 10 \right]$ | Rastrigin | [−5.12, 5.12]
Multimodal | $f_{15}(x) = -20 \exp\!\left( -0.2 \sqrt{ \tfrac{1}{n} \sum_{i=1}^{n} x_i^2 } \right) - \exp\!\left( \tfrac{1}{n} \sum_{i=1}^{n} \cos(2 \pi x_i) \right) + 20 + e$ | Ackley | [−32, 32]
Multimodal | $f_{16}(x) = \sum_{i=1}^{n/4} \left[ (x_{4i-3} + 10 x_{4i-2})^2 + 5 (x_{4i-1} - x_{4i})^2 + (x_{4i-2} - 2 x_{4i-1})^4 + 10 (x_{4i-3} - x_{4i})^4 \right]$ | Powell | [−4, 5]
Multimodal | $f_{17}(x) = \sum_{i=1}^{n} | x_i \sin(x_i) + 0.1 x_i |$ | Alpine | [−10, 10]
Global minimum: 0
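For readers who want to reproduce the benchmark suite, the following sketch implements three representative functions from Table 1 (Sphere, Rastrigin, and Ackley) in their standard forms. It is a minimal NumPy illustration written for this presentation, with hypothetical helper names; it is not the code used for the reported experiments.

```python
import numpy as np

def sphere(x):
    # f1 (Sphere): sum of squared components, global minimum 0 at the origin.
    return np.sum(x ** 2)

def rastrigin(x):
    # f14 (Rastrigin): highly multimodal, global minimum 0 at the origin.
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def ackley(x):
    # f15 (Ackley): multimodal with a nearly flat outer region, global minimum 0 at the origin.
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

if __name__ == "__main__":
    dim = 100                 # one of the dimensions used in the experiments
    x0 = np.zeros(dim)        # known optimum of all three functions
    # Each value should be 0 up to floating-point error.
    print(sphere(x0), rastrigin(x0), ackley(x0))
```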
Table 2. The parameter settings of all algorithms.
Algorithms | Parameter settings
PSO | c1 = c2 = 2, ω_min = 0.4, ω_max = 0.9
CSO | rPercent = 0.2, hPercent = 0.6, G = 10
AFSA | Visual = 2.5, Step = 0.3, try_number = 5
ABC | Limit = 100
GCSO [27] | rPercent = 0.2, hPercent = 0.6, G = 10, Pc = 0.8, Pm = 0.2
ASCSO-S [28] | rPercent = 0.4, hPercent = 0.6, G = 100
DMCSO [29] | rPercent = 0.2, hPercent = 0.6, G = 10
ADPCCSO | rPercent = 0.2, hPercent = 0.6
Table 3. The experimental comparison of two improvement strategies with Dim = 100.
Functions | Results | ACSO | DCCSO | CSO
f1Mean1.2641 × 10−76.1971 × 10−80.0231
Std5.0764 × 10−73.3534 × 10−70.0724
f2Mean2.0509 × 10−334.9656 × 10−123.6821 × 10−17
Std1.1232 × 10−321.2131 × 10−112.0117 × 10−16
f3Mean1.5007 × 10−84.9472 × 10−80.5679
Std3.5575 × 10−81.7774 × 10−73.0256
f4Mean97.9332↑97.70929.3677 × 104
Std0.63170.38454.9125 × 104
f5Mean0.6671↑0.25002.6486 × 104
Std4.4377 × 10−40.14511.5023 × 104
f6Mean1.3338 × 10−92.2178 × 10−100.0374
Std6.4294 × 10−95.8728 × 10−100.1085
f7Mean26.3562↑22.932727.6541
Std2.87174.36033.0137
f8Mean8.3801 × 10−286.1715 × 10−271.8973 × 10−16
Std2.3832 × 10−271.3900 × 10−262.8673 × 10−16
f9Mean5.6485 × 10−40.0213↑2.7873
Std0.00160.02091.4489
f10Mean00208.3000
Std00608.1095
f11Mean0=0=0
Std000
f12Mean27.6333↑0.0036121.6425
Std8.77310.019631.6731
f13Mean1.7590 × 10−65.1184 × 10−90.0971
Std5.6806 × 10−61.6819 × 10−80.2474
f14Mean2.3473 × 10−111.6175 × 10−103.5819 × 10−7
Std7.0250 × 10−116.0763 × 10−101.3432 × 10−6
f15Mean1.3016 × 10−51.4443 × 10−44.2376
Std3.1445 × 10−56.7327 × 10−43.0389
f16Mean4.5474 × 10−51.8637 × 10−4124.7637
Std1.0932 × 10−47.2074 × 10−480.3793
f17Mean1.2160 × 10−258.6956 × 10−240.1021
Std4.4244 × 10−253.9926 × 10−230.0795
Better than CSO (+) | 16 | 15
Equal to CSO (=) | 1 | 1
Worse than CSO (−) | 0 | 1
Table 4. The experimental results of improvement strategy for foraging behaviors.
Functions | Dim | Results | ADPCCSO [32] | ADPCCSO
f1100Mean4.5239 × 10−290
Std1.5994 × 10−280
f2100Mean1.9065 × 10−1200
Std1.0442 × 10−1190
f3100Mean1.4728 × 10−290
Std5.7490 × 10−290
f4100Mean97.445797.3542
Std0.43940.1748
f5100Mean0.20800.2483
Std0.03600.1220
f6100Mean4.6786 × 10−270
Std2.4167 × 10−260
f7100Mean4.5660 × 10−90
Std2.2296 × 10−80
f8100Mean2.5173 × 10−180
Std1.2607 × 10−170
f9100Mean2.8945 × 10−45.0547 × 10−5
Std2.2222 × 10−44.0937 × 10−5
f10100Mean00
Std00
f11100Mean00
Std00
f12100Mean9.1771 × 10−80
Std4.5102 × 10−70
f13100Mean00
Std00
f14100Mean00
Std00
f15100Mean8.3743 × 10−138.8818 × 10−16
Std2.1757 × 10−120
f16100Mean2.1177 × 10−130
Std1.1599 × 10−120
f17100Mean6.1813 × 10−190
Std2.9778 × 10−180
The number of optimal values | 5 | 16
Table 5. The experimental results of several algorithms on the 17 test functions with Dim = 30.
Functions | Results | PSO | CSO | ABC | AFSA | ASCSO-S [28] | ADPCCSO
f1Best3.7892 × 10−71.9607 × 10−571.6335 × 10−105.6533 × 10300
Worst3.3136 × 10−53.5835 × 10−516.1183 × 10−91.1818 × 10400
Mean8.1646 × 10−62.0326 × 10−521.1637 × 10−98.6369 × 10300
Std7.3589 × 10−66.6738 × 10−521.2091 × 10−91.7458 × 10300
f2Best2.8786 × 10−253.4757 × 10−2291.0079 × 10−161.7090 × 10−1400
Worst1.1064 × 10−191.3980 × 10−1753.4341 × 10−121.0720 × 10−600
Mean7.0544 × 10−214.6607 × 10−1772.3088 × 10−138.7408 × 10−800
Std2.1066 × 10−2006.5800 × 10−132.2756 × 10−700
f3Best2.2371 × 10−85.1238 × 10−595.6509 × 10−129.4436 × 10−1000
Worst2.9967 × 10−66.0429 × 10−501.5907 × 10−102.5102 × 10−500
Mean6.2016 × 10−72.2297 × 10−514.7148 × 10−111.9605 × 10−600
Std6.9470 × 10−71.1016 × 10−503.8334 × 10−115.5333 × 10−600
f4Best16.299628.11790.017228.668228.094626.0670
Worst116.270228.80122.018828.695828.836126.4449
Mean40.434228.60450.420828.681528.575926.2860
Std28.14040.17800.48320.00640.18950.1059
f5Best0.15840.66670.00200.23650.66750.1264
Worst3.82480.66800.04130.88630.69450.6667
Mean1.05390.66680.01340.62390.67710.6152
Std0.81502.5860 × 10−40.01040.29490.00600.1572
f6Best2.6902 × 10−61.3140 × 10−571.9353 × 10−91.6776 × 10−600
Worst4.2475 × 10−45.2576 × 10−504.0498 × 10−82.1169 × 10300
Mean4.6277 × 10−51.9182 × 10−511.2558 × 10−8470.076500
Std7.5722 × 10−59.5710 × 10−518.6975 × 10−9554.158500
f7Best2.83024.8744 × 10−435.626223.843700
Worst11.697811.881969.591637.720800
Mean7.12011.721253.544632.114600
Std1.90842.50148.42482.865900
f8Best2.3485 × 10−51.2698 × 10−478.4305 × 10−72.6316 × 10−500
Worst5.9154 × 10−43.8306 × 10−393.6912 × 10−60.001900
Mean1.3335 × 10−41.4723 × 10−402.1448 × 10−64.9365 × 10−400
Std1.1816 × 10−46.9630 × 10−406.7603 × 10−74.5878 × 10−400
f9Best0.02145.0059 × 10−40.12070.06333.8672 × 10−66.7322 × 10−8
Worst0.06620.00630.22080.93401.2345 × 10−41.3725 × 10−5
Mean0.03880.00220.16180.48804.4905 × 10−55.4986 × 10−6
Std0.01070.00140.02490.23602.8655 × 10−53.7638 × 10−6
f10Best25,2420014,42000
Worst56,6470019,64700
Mean4.1697 × 104001.7400 × 10400
Std8.7103 × 103001.4435 × 10300
f11Best6.5543 × 10−11002.4514 × 10−170.236900
Worst1.5207 × 10−9205.3306 × 10−103.4087 × 10500
Mean7.7255 × 10−9405.5002 × 10−112.9144 × 10400
Std2.9853 × 10−9301.2907 × 10−107.8180 × 10400
f12Best7.43664.0903 × 10−10178.93702.9405 × 10−1000
Worst25.28810.0026297.28581.3546 × 10−600
Mean14.48491.6588 × 10−4258.72801.5486 × 10−700
Std5.14865.2388 × 10−426.65932.7336 × 10−700
f13Best6.8986 × 10−709.5128 × 10−11341.947400
Worst0.04180.03172.7115 × 10−6548.686300
Mean0.00730.00111.1044 × 10−7453.644600
Std0.01100.00584.9356 × 10−750.068200
f14Best24.900102.6751 × 10−103.8881 × 10−1100
Worst72.645600.99501.6855 × 10−400
Mean42.209800.03341.2826 × 10−500
Std9.736500.18163.4642 × 10−500
f15Best2.7359 × 10−44.4409 × 10−154.6745 × 10−62.3668 × 10−78.8818 × 10−168.8818 × 10−16
Worst0.00877.9936 × 10−152.0602 × 10−54.5716 × 10−58.8818 × 10−168.8818 × 10−16
Mean0.00125.1514 × 10−151.1813 × 10−59.9561 × 10−68.8818 × 10−168.8818 × 10−16
Std0.00191.4454 × 10−154.5981 × 10−61.0266 × 10−500
f16Best0.00362.2516 × 10−100.01971.9804 × 10−900
Worst0.05010.03440.06620.051300
Mean0.01580.00290.03980.002400
Std0.00940.00640.01160.009900
f17Best4.4072 × 10−53.6124 × 10−433.1203 × 10−59.5533 × 10−700
Worst0.00260.01350.00161.1938 × 10−400
Mean5.4095 × 10−44.4862 × 10−42.7629 × 10−41.8260 × 10−500
Std6.0504 × 10−40.00253.6245 × 10−42.4140 × 10−500
The number of optimal values | 0 | 3 | 3 | 0 | 14 | 15
Table 6. The experimental results of several algorithms on the 17 test functions with Dim = 100.
Functions | Results | PSO | CSO | ABC | AFSA | ASCSO-S [28] | ADPCCSO
f1Best331.63843.8575 × 10−104.6681 × 10−41.0976 × 10500
Worst1.5637 × 1030.31890.01151.3888 × 10500
Mean820.87950.02310.00371.2477 × 10500
Std306.56910.07240.00307.3483 × 10300
f2Best1.3001 × 10−95.4401 × 10−1188.9306 × 10−118.8725 × 10−1500
Worst2.9796 × 10−71.1019 × 10−151.7231 × 10−61.3526 × 10−700
Mean3.9194 × 10−83.6821 × 10−171.5964 × 10−76.9148 × 10−900
Std5.6393 × 10−82.0117 × 10−163.2834 × 10−72.5352 × 10−800
f3Best193.90702.8194 × 10−85.0620 × 10−41.3041 × 10−1200
Worst607.906116.58600.00804.3909 × 10−700
Mean359.47450.56790.00206.8692 × 10−800
Std119.46553.02560.00191.1022 × 10−700
f4Best1.7150 × 1031.7196 × 10310.320397.992598.423296.9640
Worst5.2518 × 1031.6970 × 105163.320597.997498.650597.7413
Mean3.1415 × 1039.3677 × 10464.725297.994898.534797.3542
Std969.86804.9125 × 10448.89320.00130.05420.1748
f5Best8.8700 × 1035.8244 × 1032.55200.25880.66970.1650
Worst4.6263 × 1045.8722 × 10432.20560.99570.71300.6670
Mean1.7986 × 1042.6486 × 10419.40810.67880.68240.2483
Std9.9693 × 1031.5023 × 1046.81180.36520.01000.1220
f6Best7.1656 × 1031.0137 × 10−80.01581.4010 × 10600
Worst2.7522 × 1040.46700.59921.7946 × 10600
Mean1.4753 × 1040.03740.11891.5276 × 10600
Std5.2502 × 1030.10850.11477.8348 × 10400
f7Best60.425420.001989.545164.400200
Worst74.009833.072494.958467.801600
Mean69.084527.654192.697766.412900
Std2.76113.01371.51260.829300
f8Best7.94222.4244 × 10−250.04341.8359 × 10−500
Worst31.83091.3145 × 10−151.83367.7035 × 10−400
Mean15.97171.8973 × 10−160.41742.4555 × 10−400
Std5.22352.8673 × 10−160.45292.1085 × 10−400
f9Best0.01320.84900.84100.05562.9430 × 10−65.9305 × 10−6
Worst0.05966.98531.93231.05841.8325 × 10−41.6123 × 10−4
Mean0.03892.78731.50580.39945.9754 × 10−55.0547 × 10−5
Std0.01211.44890.24310.24764.0570 × 10−54.0937 × 10−5
f10Best178,07700131,44900
Worst245,665325911150,60300
Mean2.0967 × 105208.30003.90001.4119 × 10500
Std1.6501 × 104608.10962.79595.5073 × 10300
f11Best1.5426 × 10−10400.00223.2837 × 10−400
Worst2.4719 × 10−910203.04412.0208 × 10500
Mean8.3044 × 10−93020.68291.0851 × 10400
Std4.5118 × 10−92043.52213.8186 × 10400
f12Best667.460057.72451.2511 × 1031.3428 × 10−900
Worst945.0711201.42141.5822 × 1034.1973 × 10−600
Mean795.7159121.64251.4465 × 1033.3379 × 10−700
Std71.256631.673174.63568.8407 × 10−700
f13Best5.36914.6892 × 10−86.4557 × 10−41.9224 × 10300
Worst15.89320.84680.15072.2859 × 10300
Mean8.39920.09710.03932.1567 × 10300
Std2.30100.24740.040172.420800
f14Best360.58595.4570 × 10−1244.07416.6412 × 10−900
Worst554.49537.1574 × 10−694.45516.2310 × 10−600
Mean457.66723.5819 × 10−776.21989.6717 × 10−700
Std53.25551.3432 × 10−611.87631.5725 × 10−600
f15Best5.15812.6698 × 10−42.70548.85928.8818 × 10−168.8818 × 10−16
Worst7.32608.09904.335611.33018.8818 × 10−168.8818 × 10−16
Mean6.19884.23763.301910.40078.8818 × 10−168.8818 × 10−16
Std0.63923.03890.38050.678900
f16Best222.646211.70000.40545.5729 × 10−1000
Worst430.1541356.00545.26183.7702 × 10−600
Mean300.4803124.76371.09243.6632 × 10−700
Std57.050480.37930.89727.3156 × 10−700
f17Best4.10145.5199 × 10−160.33101.5395 × 10−600
Worst15.77940.34832.89681.4224 × 10−400
Mean9.92030.10211.77833.1874 × 10−500
Std3.08570.07950.66983.4611 × 10−500
The number of optimal values | 0 | 1 | 1 | 0 | 14 | 16
Table 7. The experimental results of several algorithms on the 17 test functions with Dim = 500.
Functions | Results | PSO | CSO | ABC | AFSA | ASCSO-S [28] | ADPCCSO
f1Best3.4417 × 1052.5543 × 1033.6768 × 1051.0969 × 10600
Worst4.6704 × 1056.8327 × 1044.4583 × 1051.1766 × 10600
Mean3.8680 × 1052.5851 × 1044.1182 × 1051.1455 × 10600
Std2.8825 × 1042.1362 × 1042.1084 × 1041.7879 × 10400
f2Best2.9636 × 10−52.0082 × 10−200.00133.8515 × 10−2400
Worst5.8766 × 10−41.0013 × 10−80.09546.4414 × 10−2100
Mean2.4724 × 10−43.4055 × 10−100.02831.1455 × 10−2100
Std1.5991 × 10−41.8271 × 10−90.02221.7334 × 10−2100
f3Best6.8732 × 1051.5062 × 1038.3936 × 1057.8388 × 10−1100
Worst9.1084 × 1051.5161 × 1051.1140 × 1068.8722 × 10−700
Mean7.9392 × 1054.7686 × 1049.8574 × 1051.3798 × 10−700
Std5.4605 × 1043.9801 × 1046.4256 × 1042.4515 × 10−700
f4Best1.7194 × 1061.3918 × 1066.3634 × 106493.9575496.7487493.9299
Worst3.5606 × 1061.9726 × 1061.2479 × 107493.9587497.2473493.9573
Mean2.3993 × 1061.6391 × 1069.3641 × 106493.9579496.9662493.9538
Std3.9854 × 1051.3493 × 1051.4413 × 1062.6809 × 10−40.12790.0061
f5Best1.0013 × 1081.8404 × 1067.8591 × 1070.31840.66890.2500
Worst1.3784 × 1087.8611 × 1061.5676 × 1080.99990.70171.000
Mean1.1896 × 1082.8855 × 1061.2822 × 1080.97720.67930.4515
Std1.0854 × 1071.0721 × 1061.7802 × 1070.12440.00800.3363
f6Best2.9002 × 1072.2390 × 1053.5757 × 1079.9591 × 10700
Worst3.5667 × 1076.8730 × 1064.6993 × 1071.0592 × 10800
Mean3.3281 × 1072.7459 × 1064.1792 × 1071.0295 × 10800
Std1.7553 × 1061.9846 × 1062.6048 × 1061.5075 × 10600
f7Best86.950328.142298.371285.701700
Worst92.026637.395499.477386.604100
Mean90.068332.409099.003386.078700
Std1.09622.60200.30650.197800
f8Best7.2696 × 10−514.82512.5230 × 10−600
Worst0.027235.10576.2084 × 10−400
Mean0.003823.26321.6534 × 10−400
Std0.00525.81711.4938 × 10−400
f9Best6.8570 × 103109.68456.4870 × 1030.04054.3034 × 10−64.3945 × 10−6
Worst9.8138 × 103248.10321.1307 × 1041.09152.5897 × 10−42.3338 × 10−4
Mean8.0498 × 103165.47848.7373 × 1030.47478.6590 × 10−57.8670 × 10−5
Std713.703735.65371.3277 × 1030.32696.7364 × 10−56.1413 × 10−5
f10Best1,184,738696338,4071,090,29900
Worst1,420,60978,661438,6991,178,87800
Mean1.3025 × 1063.2412 × 1044.0323 × 1051.1436 × 10600
Std5.1289 × 1042.1558 × 1042.1198 × 1042.1114 × 10400
f11Best2.2971 × 10−1080238.19842.593500
Worst2.3336 × 10−9003.2393 × 1086.8893 × 101200
Mean8.4855 × 10−9202.4048 × 1073.5147 × 101100
Std4.2637 × 10−9106.8332 × 1071.3599 × 101200
f12Best7.4466 × 103423.56078.0224 × 1037.4597 × 10−1000
Worst9.0534 × 103957.89569.0259 × 1037.0508 × 10−500
Mean8.3756 × 103605.47798.5640 × 1034.8219 × 10−600
Std411.8127134.3923225.02381.5936 × 10−500
f13Best3.1602 × 1034.12793.3720 × 1031.2415 × 10400
Worst4.0106 × 103619.87094.1088 × 1031.3397 × 10400
Mean3.5174 × 103257.68463.7513 × 1031.2997 × 10400
Std207.0651196.7478178.7140245.861600
f14Best4.4677 × 1030.05543.4416 × 103000
Worst5.3152 × 10328.42603.9138 × 1039.2223 × 10−600
Mean4.9007 × 1031.91753.7437 × 1035.3898 × 10−700
Std223.79735.1479122.19011.6800 × 10−600
f15Best19.90679.362918.737819.48678.8818 × 10−168.8818 × 10−16
Worst20.329710.805319.168819.68998.8818 × 10−168.8818 × 10−16
Mean20.087410.188018.962119.61198.8818 × 10−168.8818 × 10−16
Std0.09560.29900.10580.053500
f16Best3.6786 × 1045.9190 × 1035.8397 × 1032.4219 × 10−1300
Worst5.9557 × 1041.1086 × 1046.9910 × 1042.3802 × 10−700
Mean4.8675 × 1047.6308 × 1034.6904 × 1042.7949 × 10−800
Std6.0856 × 1031.0778 × 1031.4820 × 1045.2233 × 10−800
f17Best579.01760.3822347.23006.4884 × 10−700
Worst846.706611.2094455.84001.1283 × 10−400
Mean651.81882.8344425.68863.0884 × 10−500
Std53.63872.475822.23232.7903 × 10−500
The number of optimal values | 0 | 1 | 0 | 0 | 14 | 17
Table 8. Friedman test results of algorithms.
Algorithms | Average ranking | Ranking
ADPCCSO | 1.50 | 1
ASCSO-S | 1.79 | 2
CSO | 4.06 | 3
AFSA | 4.24 | 4
ABC | 4.24 | 4
PSO | 5.18 | 5
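The average rankings in Table 8 follow the usual Friedman procedure: on each test function the algorithms are ranked by their mean results, and the per-function ranks are then averaged. The sketch below illustrates that computation with SciPy. The small `mean_errors` matrix is dummy data for demonstration only (for the full study it would hold one row per benchmark function), not the values from Tables 5-7.

```python
import numpy as np
from scipy.stats import rankdata, friedmanchisquare

algorithms = ["PSO", "CSO", "ABC", "AFSA", "ASCSO-S", "ADPCCSO"]

# Illustrative dummy matrix: one row per test function, one column per algorithm.
mean_errors = np.array([
    [8.2e-06, 2.0e-52, 1.2e-09, 8.6e+03, 0.0, 0.0],
    [1.2e-03, 6.7e-04, 1.6e-01, 4.9e-01, 4.5e-05, 5.5e-06],
    [1.2e-03, 5.2e-15, 1.2e-05, 1.0e-05, 8.9e-16, 8.9e-16],
])

# Rank the algorithms on every function (rank 1 = smallest error, ties share the
# average rank), then average the ranks over all functions, as reported in Table 8.
ranks = np.apply_along_axis(rankdata, 1, mean_errors)
avg_rank = ranks.mean(axis=0)
for name, r in sorted(zip(algorithms, avg_rank), key=lambda item: item[1]):
    print(f"{name:10s} average rank = {r:.2f}")

# Friedman test on the same data: one sample per algorithm across the functions.
stat, p_value = friedmanchisquare(*mean_errors.T)
print(f"Friedman statistic = {stat:.3f}, p-value = {p_value:.4f}")
```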
Table 9. The experimental results of three improved CSO algorithms with Dim = 100.
Functions | Results | GCSO [27] | DMCSO [29] | ADPCCSO
f1Best1.85 × 10−225.2267 × 10−140
Worst8.95 × 10−221.0984 × 10−20
Mean3.44 × 10−225.8629 × 10−40
Std1.49 × 10−222.0933 × 10−30
f2Best2.0470 × 10−520
Worst3.6166 × 10−160
Mean1.5245 × 10−170
Std6.7682 × 10−170
f3Best0
Worst0
Mean0
Std0
f4Best98.42.1671 × 10 −596.9640
Worst99.115.96697.7413
Mean98.41.191297.3542
Std0.16853.26480.1748
f5Best0.234330.1650
Worst0.950060.6670
Mean0.394950.2483
Std0.193410.1220
f6Best0
Worst0
Mean0
Std0
f7Best0
Worst0
Mean0
Std0
f8Best1.4932 × 10−290
Worst1.3772 × 10−250
Mean1.1269 × 10−260
Std2.7019 × 10−260
f9Best5.9305 × 10−6
Worst1.6123 × 10−4
Mean5.0547 × 10−5
Std4.0937 × 10−5
f10Best0
Worst0
Mean0
Std0
f11Best2.71 × 10−905.0821 × 10−160
Worst4.94 × 10−757.8047 × 10−40
Mean1.23 × 10−751.1682 × 10−40
Std1.89 × 10−752.0372 × 10−40
f12Best2.4456 × 10−60
Worst3.1745 × 1020
Mean1.2118 × 1020
Std1.1470 × 1020
f13Best01.0436 × 10−130
Worst3.33 × 10−161.5270 × 10−40
Mean2.78 × 10−171.5162 × 10−50
Std6.16 × 10−173.8028 × 10−50
f14Best02.2612 × 10−130
Worst1.95 × 10−148.8588 × 10−60
Mean2.72 × 10−156.6629 × 10−70
Std5.04 × 10−152.0084 × 10−60
f15Best5.21 × 10−241.3195 × 10−78.8818 × 10−16
Worst1.69 × 10−219.2963 × 10−28.8818 × 10−16
Mean9.08 × 10−249.5407 × 10−38.8818 × 10−16
Std2.44 × 10−242.2479 × 10−20
f16Best4.5274 × 10−50
Worst7.6947 × 10−20
Mean1.1517 × 10−20
Std1.9620 × 10−20
f17Best9.5471 × 10−300
Worst9.2151 × 10−20
Mean9.2379 × 10−30
Std1.9493 × 10−20
Table 10. The experimental results of the ADPCCSO algorithms with Dim = 1000.
Functions | Mean | Std | Best | Worst
f1 | 0 | 0 | 0 | 0
f2 | 0 | 0 | 0 | 0
f3 | 0 | 0 | 0 | 0
f4 | 988.9073 | 4.2390 × 10−4 | 988.9067 | 988.9089
f5 | 0.8900 | 0.2525 | 0.2505 | 1.0000
f6 | 0 | 0 | 0 | 0
f7 | 0 | 0 | 0 | 0
f8 | 0 | 0 | 0 | 0
f9 | 7.4907 × 10−5 | 7.1055 × 10−5 | 3.3858 × 10−7 | 2.8846 × 10−4
f10 | 0 | 0 | 0 | 0
f11 | 0 | 0 | 0 | 0
f12 | 0 | 0 | 0 | 0
f13 | 0 | 0 | 0 | 0
f14 | 0 | 0 | 0 | 0
f15 | 8.8818 × 10−16 | 0 | 8.8818 × 10−16 | 8.8818 × 10−16
f16 | 0 | 0 | 0 | 0
f17 | 0 | 0 | 0 | 0
Table 11. The observed growth concentration of glutamate.
Time (h) | Concentration (g/L) | Time (h) | Concentration (g/L)
2 | 0.321 | 12 | 0.869
3 | 0.353 | 13 | 0.878
4 | 0.369 | 14 | 0.879
5 | 0.408 | 15 | 0.893
6 | 0.581 | 16 | 0.894
7 | 0.640 | 17 | 0.900
8 | 0.742 | 18 | 0.901
9 | 0.781 | 19 | 0.902
10 | 0.824 | 20 | 0.903
11 | 0.855 | 21 | 0.903
Table 12. The experimental results of optimal solutions obtained by various algorithms.
Algorithms | α | β | γ | δ
VS-FOA [36] | 0.8965 | 4.8369 | 0.6079 | 3.0260
ASCSO-S [28] | 0.8973 | 5.5 | 0.6556 | 3.6327
ADPCCSO | 0.8949 | 6.5522 | 0.7533 | 4.4263
Table 13. The growth concentration of glutamate predicted by each algorithm.
Time (h) | VS-FOA [36] | ASCSO-S [28] | ADPCCSO
2 | 0.2686 | 0.2821 | 0.2858
3 | 0.3260 | 0.3366 | 0.3383
4 | 0.3935 | 0.4003 | 0.3997
5 | 0.4705 | 0.4731 | 0.4705
6 | 0.5542 | 0.5534 | 0.5499
7 | 0.6388 | 0.6363 | 0.6341
8 | 0.7161 | 0.7142 | 0.7155
9 | 0.7789 | 0.7792 | 0.7840
10 | 0.8244 | 0.8265 | 0.8328
11 | 0.8543 | 0.8572 | 0.8626
12 | 0.8725 | 0.8754 | 0.8789
13 | 0.8831 | 0.8856 | 0.8872
14 | 0.8891 | 0.8912 | 0.8912
15 | 0.8924 | 0.8941 | 0.8932
16 | 0.8943 | 0.8956 | 0.8941
17 | 0.8953 | 0.8964 | 0.8945
18 | 0.8958 | 0.8968 | 0.8947
19 | 0.8961 | 0.8971 | 0.8948
20 | 0.8963 | 0.8972 | 0.8949
21 | 0.8964 | 0.8972 | 0.8949
fit | 0.0097 | 0.0089 | 0.0087
Table 14. The comparison results of three algorithms.
Indexes | VS-FOA | ASCSO-S | ADPCCSO
RMSE | 0.0220 | 0.0211 | 0.0209
MAE | 0.0136 | 0.0135 | 0.0146
R2 | 0.9888 | 0.9896 | 0.9899
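The Richards-model results in Tables 12-14 can be cross-checked numerically. A Richards curve of the assumed form y(t) = α / (1 + e^(β − γt))^(1/δ), evaluated at the parameters in Table 12, is consistent with the predictions in Table 13 (spot-checked against several entries, with the "fit" row matching the residual sum of squares), and comparing those predictions with the observations in Table 11 recovers fit indices in line with Table 14. The sketch below performs this check; it is a verification aid written for this presentation under that assumed parameterization, not the authors' code.

```python
import numpy as np

# Observed glutamate concentrations (Table 11): hours 2-21.
t = np.arange(2, 22, dtype=float)
obs = np.array([0.321, 0.353, 0.369, 0.408, 0.581, 0.640, 0.742, 0.781, 0.824, 0.855,
                0.869, 0.878, 0.879, 0.893, 0.894, 0.900, 0.901, 0.902, 0.903, 0.903])

def richards(t, alpha, beta, gamma, delta):
    # Assumed Richards growth curve, consistent with Tables 12 and 13:
    # y(t) = alpha / (1 + exp(beta - gamma * t)) ** (1 / delta).
    return alpha / (1.0 + np.exp(beta - gamma * t)) ** (1.0 / delta)

# Fitted parameters (alpha, beta, gamma, delta) reported in Table 12.
params = {
    "VS-FOA":  (0.8965, 4.8369, 0.6079, 3.0260),
    "ASCSO-S": (0.8973, 5.5000, 0.6556, 3.6327),
    "ADPCCSO": (0.8949, 6.5522, 0.7533, 4.4263),
}

for name, p in params.items():
    pred = richards(t, *p)                    # corresponds to the columns of Table 13
    resid = obs - pred
    sse = np.sum(resid ** 2)                  # compare with the "fit" row of Table 13
    rmse = np.sqrt(np.mean(resid ** 2))       # Table 14, RMSE
    mae = np.mean(np.abs(resid))              # Table 14, MAE
    r2 = 1.0 - sse / np.sum((obs - obs.mean()) ** 2)   # Table 14, R2
    print(f"{name:8s} SSE={sse:.4f} RMSE={rmse:.4f} MAE={mae:.4f} R2={r2:.4f}")
```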