Article

Chaotic Multi-Objective Particle Swarm Optimization Algorithm Incorporating Clone Immunity

1
School of Computer Science and Information Engineering, Hefei University of Technology, Hefei 230009, China
2
Ningxia Province Key Laboratory of Intelligent Information and Data Processing, North Minzu University, Yinchuan 750021, China
3
School of Mathematics and Statistics, Ningxia University, Yinchuan 750021, China
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(2), 146; https://doi.org/10.3390/math7020146
Submission received: 22 November 2018 / Revised: 22 January 2019 / Accepted: 28 January 2019 / Published: 3 February 2019

Abstract
It is generally known that the balance between convergence and diversity is a key issue in solving multi-objective optimization problems. Thus, a chaotic multi-objective particle swarm optimization approach incorporating clone immunity (CICMOPSO) is proposed in this paper. First, the points in the non-dominated solution set are mapped to a parallel-cell coordinate system. Then, the status of the particles is evaluated by the Pareto entropy and difference entropy, and the algorithm parameters are adjusted using this feedback information. Because the local-search ability of the particle swarm still needs improvement in the late stage of the algorithm, logistic mapping and a neighboring immune operator are used to maintain and update the external archive. Experimental test results show that the convergence and diversity of the algorithm are improved.

1. Introduction

Most problems [1,2,3,4,5,6] in engineering and science are multi-objective optimization problems, and their objectives usually conflict with each other. The focus of academics and engineers is on finding optimal solutions to these problems. In multi-objective optimization, however, improving one objective often degrades the others, so a trade-off and compromise is usually made. A very effective way to solve such problems is with intelligent algorithms, which have received increasing attention in many areas [7,8,9,10] in recent years and have achieved good results. However, when solving multi-objective optimization problems with intelligent algorithms, three goals need to be satisfied: (1) particles must be as close to the Pareto optimal front as possible; (2) there must be a maximal number of particles on the Pareto optimal front; and (3) particles must be distributed as evenly as possible along the Pareto optimal front.
The particle swarm optimization (PSO) algorithm [11,12] is one of the most important and most studied paradigms in computational swarm intelligence. It was put forward by Kennedy and Eberhart in 1995. The simple form, few parameters, and rapid convergence of PSO have contributed to its rapid development, making it a research hotspot over the last 20 years. For multi-objective PSO algorithms, there are six research directions [13]: (1) Aggregating approaches, which combine all the objectives of the problem into a single one; in other words, the multi-objective problem is transformed into a single-objective problem. This is not a new idea, since aggregating functions can be derived from the well-known Kuhn-Tucker (K-T) conditions for non-dominated solutions. (2) Lexicographic ordering, in which the objectives are ranked in order of importance. The optimal solution is obtained by separately minimizing the objective functions, starting with the most important one and proceeding according to the order of importance of the objectives. Lexicographic ordering tends to be useful only when few objective functions are used, and it may be sensitive to the ordering of the objectives. (3) Subpopulation approaches, which use several subpopulations as single-objective optimizers; to balance the objectives, the subpopulations exchange information with each other. However, the information exchange between subpopulations is not well controlled, and it is difficult to ensure the concurrent evolution of each objective. (4) Pareto-based approaches, which use leader selection techniques based on Pareto dominance. This is the mainstream method, exemplified by the multi-objective particle swarm optimization (MOPSO) algorithm [14]. It is useful for multi-objective optimization problems with few objective functions, but as the number of objective functions increases, the selection pressure from Pareto dominance weakens. (5) Combined approaches, which combine the PSO algorithm with other algorithms; examples include the genetic algorithm (GA) [15], the cultural algorithm [16], and the differential evolution (DE) algorithm [17]. (6) Other approaches [18,19] that do not fall into the above five types, such as designing a threshold for the particle updating strategy [20].
The remainder of this paper is organized as follows. In Section 2, we describe the multi-objective optimization and PSO algorithm. Thereafter, in Section 3, we explain a computational method to improve the multi-objective PSO algorithm. Section 4 outlines the strategy and flow of this algorithm. Test problems, performance measures, and results are provided in Section 5, and conclusions are presented in Section 6.

2. Propaedeutics

2.1. Multi-Objective Optimization Problem

In general, a multi-objective optimization problem with $n$ decision variables and $M$ objective functions can be described as [21]:
$$\begin{aligned} \min \; y = f(x) &= [f_1(x), f_2(x), \ldots, f_M(x)] \\ \text{s.t.} \quad g_i(x) &\le 0, \quad i = 1, 2, \ldots, p \\ h_j(x) &= 0, \quad j = 1, 2, \ldots, q \end{aligned} \tag{1}$$
where $x = (x_1, x_2, \ldots, x_n) \in X$ is the decision vector, $X$ is the decision variable space, $y = (f_1, f_2, \ldots, f_M) \in Y$ is the objective vector, $Y$ is the objective function space, $g_i(x)$ is the $i$th inequality constraint, and $h_j(x)$ is the $j$th equality constraint.
The following definitions are useful for the conceptual framework of a multi-objective optimization problem:
Definition 1.
(Pareto dominance) Vector $x = (x_1, x_2, \ldots, x_n)$ dominates vector $x' = (x'_1, x'_2, \ldots, x'_n)$ if and only if the following statement is true:
$$\forall i \in \{1, 2, \ldots, M\}: f_i(x) \le f_i(x') \quad \text{and} \quad \exists i \in \{1, 2, \ldots, M\}: f_i(x) < f_i(x').$$
We denote this dominance as $x \prec x'$.
Definition 2.
(Pareto optimality) A solution $x^* \in D$ is a Pareto optimal solution if there is no other $x \in D$ such that $x \prec x^*$.
Definition 3.
(Pareto optimal set) The Pareto optimal set is defined as the set of all Pareto optimal solutions.
Definition 4.
(Pareto optimal front) The Pareto optimal front consists of the values of the objective functions at the solutions in the Pareto optimal set.
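Definitions 1-4 can be checked mechanically. The following is a minimal Python sketch (the function names are ours, not from the paper) of Pareto dominance for minimization and of extracting the non-dominated set:

```python
def dominates(fx, fy):
    """True if objective vector fx Pareto-dominates fy (minimization):
    fx is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(fx, fy)) and \
           any(a < b for a, b in zip(fx, fy))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors
    (Definitions 3 and 4 applied to a finite set)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

For example, `pareto_front([(1, 3), (2, 2), (3, 3)])` keeps `(1, 3)` and `(2, 2)` and discards `(3, 3)`, which is dominated by `(2, 2)`.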

2.2. Particle Swarm Optimization Algorithm

Let $n$ be the dimension of the search space, $x_i = (x_{i1}, x_{i2}, \ldots, x_{in})$ be the current position of the $i$th particle in the swarm, $p_{i,\text{best}} = (p_{i1,\text{best}}, p_{i2,\text{best}}, \ldots, p_{in,\text{best}})$ be the best position found so far by the $i$th particle, and $g_{\text{best}} = (g_{1,\text{best}}, g_{2,\text{best}}, \ldots, g_{n,\text{best}})$ be the best position that any particle in the entire swarm has visited. The velocity of the $i$th particle is denoted $v_i = (v_{i1}, v_{i2}, \ldots, v_{in})$. The position $x_{ij}^t$ of each particle and its velocity $v_{ij}^t$ are updated according to the following:
$$v_{ij}^{t+1} = w v_{ij}^{t} + c_1 r_1 (p_{ij,\text{best}}^{t} - x_{ij}^{t}) + c_2 r_2 (g_{j,\text{best}}^{t} - x_{ij}^{t}) \tag{2}$$
$$x_{ij}^{t+1} = x_{ij}^{t} + v_{ij}^{t+1}, \tag{3}$$
where $c_1, c_2$ are the learning factors, and $r_1, r_2$ are random numbers in $[0, 1]$. $w$ is the inertia weight, which is defined as follows [22]:
$$w = w_{\max} - t \, (w_{\max} - w_{\min}) / T_{\max}, \tag{4}$$
where $w_{\max}$ and $w_{\min}$ are the maximum and minimum of the inertia weight $w$, respectively, $t$ is the current iteration, and $T_{\max}$ is the maximum number of iterations.
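Equations (2)-(4) translate directly into code. A minimal Python sketch (ours, with illustrative default parameters) of one component-wise PSO update and the linearly decreasing inertia weight:

```python
import random

def pso_step(x, v, p_best, g_best, w, c1=1.5, c2=1.5):
    """One PSO update per Equations (2) and (3), applied component-wise."""
    new_v, new_x = [], []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()
        vj = w * v[j] + c1 * r1 * (p_best[j] - x[j]) + c2 * r2 * (g_best[j] - x[j])
        new_v.append(vj)
        new_x.append(x[j] + vj)
    return new_x, new_v

def inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight, Equation (4)."""
    return w_max - t * (w_max - w_min) / t_max
```

When the particle already sits at both best positions, the attraction terms vanish and the velocity simply decays by the factor $w$.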

3. Pareto Entropy and Difference Entropy-Based Improvements of PSO

3.1. Parallel Cell Coordinate System

The parallel cell coordinate system (PCCS) [23] maps the target vector of non-dominated solutions in the external archive to a two-dimensional plane by parallel coordinates, and then rounds these coordinate values. Its mathematical formula is as follows:
$$L_{k,m} = \left\lceil K \, \frac{f_{k,m} - f_m^{\min}}{f_m^{\max} - f_m^{\min}} \right\rceil, \tag{5}$$
where $\lceil \cdot \rceil$ is the ceiling function; $k = 1, 2, \ldots, K$, where $K$ is the external archive size in the current iteration; $m = 1, 2, \ldots, M$, where $M$ is the number of objective functions in the optimization problem; and $f_m^{\max} = \max_k f_{k,m}$, $f_m^{\min} = \min_k f_{k,m}$. When $f_m^{\min} = f_m^{\max}$, we set $L_{k,m} = 1$.
Consider Table 1 as an example, in which eight particles are mapped for three objectives. According to formula (5), we can calculate $L_{k,m}$ and draw Figure 1.
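Formula (5) can be sketched in a few lines of Python (ours, not from the paper; clamping the case $f_{k,m} = f_m^{\min}$ to cell 1 is our assumption, matching the convention that cell labels start at 1):

```python
from math import ceil

def pccs(archive):
    """Map K objective vectors (each of length M) to parallel cell
    coordinates per formula (5). Returns a K x M matrix of integer
    cell labels in 1..K."""
    K = len(archive)
    labels = []
    for f in archive:
        row = []
        for m in range(len(f)):
            col = [g[m] for g in archive]
            f_min, f_max = min(col), max(col)
            if f_min == f_max:
                row.append(1)  # degenerate column: all values equal
            else:
                row.append(max(1, ceil(K * (f[m] - f_min) / (f_max - f_min))))
        labels.append(row)
    return labels
```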
In this paper, we use the Pareto entropy to estimate the distribution uniformity of the Pareto front, which is calculated with the following formula:
$$Entropy(t) = -\sum_{k=1}^{K} \sum_{m=1}^{M} \frac{Cell_{k,m}(t)}{KM} \log \frac{Cell_{k,m}(t)}{KM} \tag{6}$$
where, when the approximate Pareto front is mapped to the PCCS, $Cell_{k,m}(t)$ is the number of cell coordinate components falling in the cell in the $k$th row and $m$th column.
According to Pareto entropy, we know the distribution of particles in the current iteration. However, in order to judge the change between this iteration and the last iteration, a difference entropy Δ E n t r o p y is proposed [23]:
$$\Delta Entropy(t) = Entropy(t) - Entropy(t-1) \tag{7}$$
Through the difference entropy, information about the external archive can be obtained, and we can dynamically adjust population using this information.
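Equations (6) and (7) can be sketched as follows (a minimal Python version of ours, using the convention $0 \log 0 = 0$ discussed in Section 3.2):

```python
from math import log

def pareto_entropy(cell, K, M):
    """Equation (6): entropy of the KxM cell-count matrix cell[k][m],
    skipping empty cells (the 0*log(0) = 0 convention)."""
    total = K * M
    return -sum(c / total * log(c / total)
                for row in cell for c in row if c > 0)

def difference_entropy(cell_t, cell_prev, K_t, K_prev, M):
    """Equation (7): change in Pareto entropy between two iterations."""
    return pareto_entropy(cell_t, K_t, M) - pareto_entropy(cell_prev, K_prev, M)
```

A fully uniform matrix (every cell count 1) gives the maximum entropy $\log(KM)$, while concentrating each column in one cell gives the minimum $\log M$.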

3.2. Difference Entropy Discussion

Previous work has proved that the maximum Pareto entropy is $\log(KM)$ and the minimum Pareto entropy is $\log M$ [23]. Here, we discuss a few special situations for the difference entropy:
(1) In the t t h iteration, the corresponding coordinate components in the PCCS of all objective vectors on the Pareto front are the most nonuniform, and after maintaining the external archive, the corresponding coordinate components of all objective vectors are the most uniform.
Proof. 
Assume that, in the $t$th iteration, the distribution of parallel cell coordinates is $Cell_{k,m}(t)$ and the external archive size is $K_1$, and that, in the $(t+1)$th iteration, the distribution is $Cell_{k,m}(t+1)$ and the external archive size is $K_2$. Suppose that, for each column $n$, all $K_1$ components concentrate in a single cell in some row $c$, so that $Cell_{c,n}(t) = K_1$ and $Cell_{k \ne c,n}(t) = 0$. From L'Hôpital's rule, $\lim_{x \to 0} x \log x = 0$, so we stipulate that $0 \log 0 = 0$. In the $(t+1)$th iteration, $Cell_{k,m}(t+1) = 1$ holds for every row $k$ and column $m$. We obtain the difference entropy through the following:
$$\begin{aligned} \Delta Entropy &= Entropy(t+1) - Entropy(t) \\ &= -\sum_{k=1}^{K_2} \sum_{m=1}^{M} \frac{Cell_{k,m}(t+1)}{K_2 M} \log \frac{Cell_{k,m}(t+1)}{K_2 M} + \sum_{k=1}^{K_1} \sum_{m=1}^{M} \frac{Cell_{k,m}(t)}{K_1 M} \log \frac{Cell_{k,m}(t)}{K_1 M} \\ &= -\sum_{k=1}^{K_2} \sum_{m=1}^{M} \frac{1}{K_2 M} \log \frac{1}{K_2 M} + \sum_{m=1}^{M} \frac{K_1}{K_1 M} \log \frac{K_1}{K_1 M} \\ &= \log (K_2 M) + \log \frac{1}{M} = \log K_2 \end{aligned}$$
 □
(2) In the $t$th iteration, the corresponding coordinate components in the PCCS of all objective vectors on the Pareto front are the most nonuniform, and after maintaining the external archive, the corresponding coordinate components of all objective vectors remain the most nonuniform.
Proof. 
From Case (1), we know $Cell_{c_1,n_1}(t) = K_1$ and $Cell_{k \ne c_1,n_1}(t) = 0$; furthermore, $Cell_{c_2,n_2}(t+1) = K_2$ and $Cell_{k \ne c_2,n_2}(t+1) = 0$. From Equation (7):
$$\begin{aligned} \Delta Entropy &= Entropy(t+1) - Entropy(t) \\ &= -\sum_{m=1}^{M} \frac{K_2}{K_2 M} \log \frac{K_2}{K_2 M} + \sum_{m=1}^{M} \frac{K_1}{K_1 M} \log \frac{K_1}{K_1 M} \\ &= \log M - \log M = 0 \end{aligned}$$
 □
(3) In the $t$th iteration, the corresponding coordinate components in the PCCS of all objective vectors on the Pareto front are the most uniform, and after maintaining the external archive, the corresponding coordinate components of all objective vectors remain the most uniform.
Proof. 
From Cases (1) and (2), we know that $Cell_{k,m}(t) = 1$ and $Cell_{k,m}(t+1) = 1$. So:
$$\begin{aligned} \Delta Entropy &= Entropy(t+1) - Entropy(t) \\ &= -\sum_{k=1}^{K_2} \sum_{m=1}^{M} \frac{1}{K_2 M} \log \frac{1}{K_2 M} + \sum_{k=1}^{K_1} \sum_{m=1}^{M} \frac{1}{K_1 M} \log \frac{1}{K_1 M} \\ &= \log (K_2 M) - \log (K_1 M) = \log \frac{K_2}{K_1} \end{aligned}$$
 □
(4) In the $t$th iteration, the corresponding coordinate components in the PCCS of all objective vectors on the Pareto front are the most uniform, but after maintaining the external archive, the corresponding coordinate components of all objective vectors are the most nonuniform.
Proof. 
From Case (1), we know $Cell_{k,m}(t) = 1$, $Cell_{c_2,n_2}(t+1) = K_2$, and $Cell_{k \ne c_2,n_2}(t+1) = 0$. So:
$$\begin{aligned} \Delta Entropy &= Entropy(t+1) - Entropy(t) \\ &= -\sum_{m=1}^{M} \frac{K_2}{K_2 M} \log \frac{K_2}{K_2 M} + \sum_{k=1}^{K_1} \sum_{m=1}^{M} \frac{1}{K_1 M} \log \frac{1}{K_1 M} \\ &= \log M - \log (K_1 M) = -\log K_1 \end{aligned}$$
 □
In general, there are only four cases when maintaining the external archive: nonuniform to nonuniform, nonuniform to uniform, uniform to nonuniform, and uniform to uniform. In every case, the difference entropy has a maximum and a minimum; that is to say, the difference entropy is bounded.

3.3. Sketch of State Inspection

The distribution of particles in the external archive is directly reflected by the Pareto entropy, and the status of the external archive can be indirectly inferred from the difference entropy. This status may be convergence, diversification, or stagnation; we can then adjust the parameters of the algorithm according to the feedback information from these particles. The primary question is thus how to detect the state of the particles. We directly use the threshold values from the literature [21]:
$$\delta_c = \frac{2}{H} \log 2 \tag{8}$$
$$\delta_s = \frac{2}{MK} \log 2, \tag{9}$$
where H is the number of elements in the external archive, K is the maximum capacity of the external archive, and M is the number of objective functions.
Now, the determining conditions of the environment are given as follows:
  • Convergence condition: $|\Delta Entropy| > \delta_c$ or $|H(t) - H(t-1)| > 0$
  • Diversification condition: $\delta_s < |\Delta Entropy| < \delta_c$ and $H(t) = H(t-1) = K$
  • Stagnation condition: $|\Delta Entropy| < \delta_s$ and $H(t) = H(t-1)$
Obviously, the determining conditions are sensitive to changes in $H$. Whenever $H$ increases or decreases, it is treated as a convergence condition, because a change in $H$ means that at least one non-dominated solution has entered or left the external archive. When the external archive changes, the PCCS may also change, because an increase or decrease in $H$ may change the maximum or minimum of one dimension of the objective vector.
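The three determining conditions can be sketched as a small classifier (a Python sketch of ours; reading the two thresholds as the fractions $\delta_c = (2/H)\log 2$ and $\delta_s = (2/(MK))\log 2$ is our interpretation of the garbled source):

```python
from math import log

def detect_state(d_entropy, H_t, H_prev, K, M):
    """Classify the swarm state from the difference entropy and the
    archive sizes of two consecutive iterations."""
    delta_c = 2.0 / H_t * log(2)        # convergence threshold
    delta_s = 2.0 / (M * K) * log(2)    # stagnation threshold
    if abs(d_entropy) > delta_c or abs(H_t - H_prev) > 0:
        return "convergence"
    if delta_s < abs(d_entropy) < delta_c and H_t == H_prev == K:
        return "diversification"
    return "stagnation"
```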

4. Multi-Objective Particle Swarm Optimization Algorithm Based on Clone Immunity

4.1. External Archive Update

In a multi-objective particle swarm optimization algorithm, the maintenance of the external archive plays a key role. First, the calculation method of the individual density is introduced [23]. When the particles are mapped to the PCCS, the individual density of $P_i$ ($i = 1, 2, \ldots, K$, where $K$ is the external archive size) is obtained according to the following formula:
$$Density(P_i) = \sum_{j=1, j \ne i}^{K} \frac{1}{PCD(P_i, P_j)^2}, \tag{10}$$
where $PCD(P_i, P_j)$ is the parallel cell distance between $P_i$ and $P_j$ ($j = 1, 2, \ldots, K$, $j \ne i$), which can be calculated by the following formula:
$$PCD(P_i, P_j) = \begin{cases} 0.5 & \text{if } \forall m, \; L_{i,m} = L_{j,m} \\ \sum_{m=1}^{M} |L_{i,m} - L_{j,m}| & \text{otherwise} \end{cases} \tag{11}$$
Then, the specific update steps are given in Algorithm 1 [23].
Algorithm 1: Improved external archive updating algorithm.
Input: (i) External archive A.
    (ii) Maximum size of the external archive K.
    (iii) New solution P obtained by the algorithm.
Output: Updated external archive A n e w .
Step 1: Determine $B = A \cup \{P\}$ and calculate the objective vector values of $B$.
Step 2: Find the non-dominated solution set $B'$ of $B$.
Step 3: If $|B'| \le K$, then $A_{new} = B'$; otherwise, go to Step 4.
Step 4: Calculate the individual density of all particles in $B'$.
Step 5: Sort the particles in ascending order of individual density, and take the first $K$ particles to compose $A_{new}$.
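The density-based truncation at the heart of Algorithm 1 can be sketched as follows (a minimal Python version of ours, with the parallel cell distance and density as defined above; function names are ours):

```python
def pcd(Li, Lj):
    """Parallel cell distance: 0.5 when the two cell coordinate
    vectors coincide, else their L1 distance."""
    if Li == Lj:
        return 0.5
    return sum(abs(a - b) for a, b in zip(Li, Lj))

def density(labels):
    """Individual density of each archive member from its cell labels."""
    K = len(labels)
    return [sum(1.0 / pcd(labels[i], labels[j]) ** 2
                for j in range(K) if j != i) for i in range(K)]

def truncate_archive(archive, labels, K_max):
    """Steps 4-5: keep the K_max least-crowded members."""
    d = density(labels)
    order = sorted(range(len(archive)), key=lambda i: d[i])
    return [archive[i] for i in order[:K_max]]
```

Members sharing a cell get the small distance 0.5, hence a large density, and are discarded first.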

4.2. Update Strategy of the Global Best Position and Personal Best Position

Particles are selected by the lattice dominance strength [23]; the specific steps are given in Algorithm 2.
Algorithm 2: Update strategy of the global best position.
Input: (i) External archive A and the number of objective functions M.
    (ii) Status of the external archive.
Output: Global best position.
Step 1: Calculate the lattice coordinate vector L k , m of particles in the external archive.
Step 2: Calculate the lattice dominant strength and individual density D e n s i t y ( i ) .
Step 3: Determine $a$ and $b$ by the status of the external archive:
    (i) convergence: $a = M - 1$, $b = M + 1$;
    (ii) diversification: $a = M + 1$, $b = M - 1$;
    (iii) stagnation: $a = M$, $b = M$.
Step 4: Sort the particles in descending order by the lattice dominant strength and in ascending order by the individual density.
Step 5: Take the first a particles by the individual density and the first b particles by the lattice dominant strength to store in C.
Step 6: Randomly select one particle from C as the global best position.
For the update strategy of the personal best position, we adopted the following method [24]:
$$p_{i,best}^{t} = \begin{cases} p_{i,best}^{t-1} & \text{if } f(p_{i,best}^{t-1}) \prec f(x_i^{t}) \\ x_i^{t} & \text{if } f(x_i^{t}) \prec f(p_{i,best}^{t-1}) \\ randselect(p_{i,best}^{t-1}, x_i^{t}) & \text{otherwise} \end{cases} \tag{12}$$
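This personal-best rule can be sketched directly (a Python sketch of ours; `f` maps a position to its objective vector, and the random tie-break mirrors the "otherwise" branch):

```python
import random

def dominates(fx, fy):
    """Pareto dominance for minimization (Definition 1)."""
    return all(a <= b for a, b in zip(fx, fy)) and \
           any(a < b for a, b in zip(fx, fy))

def update_pbest(p_best, x, f):
    """Keep p_best if it dominates x, replace it if x dominates p_best,
    otherwise pick one of the two at random."""
    if dominates(f(p_best), f(x)):
        return p_best
    if dominates(f(x), f(p_best)):
        return x
    return random.choice([p_best, x])
```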

4.3. Parameter Selection Strategy

The PSO algorithm has a simple form and can be easily implemented, but the particles always move toward the global and personal best positions and tend to become trapped in a local optimum after several iterations. To overcome this premature convergence, an adaptive inertia weight based on the feedback information from the status evaluation is used in this paper [25]:
$$w(t) = \begin{cases} w(t-1) & \text{stagnation} \\ w(t-1) - 2 \cdot Step_w \cdot (1 + |\Delta Entropy(t)|) & \text{convergence} \\ w(t-1) + 2 \cdot Step_w \cdot |\Delta Entropy(t)| & \text{diversification} \end{cases} \tag{13}$$
where S t e p w is the adjusting stepsize of w, and its formula is as follows:
$$Step_w = \frac{w_{\max} - w_{\min}}{T_{\max}}, \tag{14}$$
where w m a x = 0.9 , w m i n = 0.4 , and T m a x is the maximum number of iterations. The initial value of w is 0.9.
For $c_1$ and $c_2$, nonlinear functions of the inertia weight $w$ are used [23]:
$$c_1(t) = 1.167 \, w^2 - 0.1167 \, w + 0.66, \qquad c_2(t) = 3 - c_1(t). \tag{15}$$
Thus, the learning factors c 1 and c 2 also adjust dynamically with the adjustment of the inertia weight.
According to Figure 2, the inertia weight decreases approximately linearly, and the learning factors change accordingly with it.
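The adaptive inertia weight rule and the learning-factor functions above can be sketched together (a Python sketch of ours; reading $c_1 = 1.167w^2 - 0.1167w + 0.66$ with $c_2 = 3 - c_1$ is our interpretation of the garbled source, and is consistent with the initial values $c_1 = c_2 = 1.5$ at $w = 0.9$ used in the experiments):

```python
def adapt_parameters(w_prev, d_entropy, state, t_max, w_max=0.9, w_min=0.4):
    """Adjust the inertia weight by the detected state, then derive
    the learning factors from it."""
    step = (w_max - w_min) / t_max                    # adjusting stepsize
    if state == "convergence":
        w = w_prev - 2 * step * (1 + abs(d_entropy))  # shrink w faster
    elif state == "diversification":
        w = w_prev + 2 * step * abs(d_entropy)        # grow w slightly
    else:  # stagnation
        w = w_prev
    c1 = 1.167 * w ** 2 - 0.1167 * w + 0.66
    c2 = 3 - c1                                       # keep c1 + c2 = 3
    return w, c1, c2
```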

4.4. Clone Immune Strategy

With each iteration of the algorithm, the diversity of the population decreases. In this paper, clone, recombination, and mutation operations are used to address this problem.
(1) Clone: For the external archive A, the population is cloned according to the crowding distance [26]: the larger the crowding distance of a particle, the more clones it produces. The specific steps are given in Algorithm 3:
Algorithm 3: Clone algorithm.
Input: (i) External archive A and number of objective functions M.
    (ii) Maximum size of the clone population N C .
Output: Clone population A .
Step 1: Calculate the crowding degree of the active population: p a d i s .
Step 2: Calculate the number of clones of each particle with the following formula:
$$q_i = \left\lceil N_C \times \frac{padis_i}{\sum_{j=1}^{|A|} padis_j} \right\rceil.$$

Step 3: If the total number of cloned particles exceeds $N_C$, keep only the first $N_C$ particles in $A'$ and return $A'$.
Sparse areas receive more clone particles through the cloning operation, which can be performed several times.
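The proportional clone allocation of Algorithm 3 can be sketched as follows (a Python sketch of ours, assuming finite crowding-distance values):

```python
from math import ceil

def clone_counts(crowding, n_clone):
    """Step 2: clone each particle in proportion to its crowding
    distance, so sparse regions receive more clones."""
    total = sum(crowding)
    return [ceil(n_clone * d / total) for d in crowding]

def clone(archive, crowding, n_clone):
    """Build the clone population, truncated to n_clone members (Step 3)."""
    pop = []
    for p, q in zip(archive, clone_counts(crowding, n_clone)):
        pop.extend([p] * q)
    return pop[:n_clone]
```

For instance, with crowding distances `[1.0, 3.0]` and a clone budget of 4, the second (sparser) particle receives three of the four clones.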
(2) Recombination: The clone population $A'$ is recombined [26]; the specific steps are given in Algorithm 4:
Algorithm 4: Recombination algorithm.
Input: (i) Clone population A and External archive A.
    (ii) Lower bound of particles: x m i n , upper bound of particles: x m a x , the size of the clone population: N , and parameter ε , r 1 .
Output: Recombination population A .
Step 1: For each particle $x(i) \in A'$, randomly select a particle $x(t) \in A$.
Step 2: Generate a random number $rand \in [0, 1]$. If $rand < r_1$, calculate $\Delta = |x(i)_j - x(t)_j|$, $y_1 = \min(x(i)_j, x(t)_j)$, and $y_2 = \max(x(i)_j, x(t)_j)$.
Step 3: If $\Delta > \varepsilon$:
   Step 3.1 If $y_1 - x_{min}^j > x_{max}^j - y_2$, then $N_{beta} = 1 + 2(x_{max}^j - y_2)/(y_2 - y_1)$;
   Step 3.2 Else $N_{beta} = 1 + 2(y_1 - x_{min}^j)/(y_2 - y_1)$.
Step 4: Let $\beta = 1/N_{beta}$ and $\alpha = 2 - \beta^{ep}$.
Step 5: Generate a random number $r \in [0, 1]$; if $r < 1/\alpha$, then $\alpha = \alpha \times r$; else, $\alpha = 2 - \alpha \times r$.
Step 6: Let $ep = 1/16$ and $N_{beta} = \alpha^{ep}$.
Step 7: Calculate $chld_1 = 0.5[(y_1 + y_2) - N_{beta}(y_2 - y_1)]$ and $chld_2 = 0.5[(y_1 + y_2) + N_{beta}(y_2 - y_1)]$.
Step 8: Generate a random number $rand \in [0, 1]$; if $rand < r_2$, then $x(i)_j = \min(\max(chld_1, x_{min}^j), x_{max}^j)$; else, $x(i)_j = \min(\max(chld_2, x_{min}^j), x_{max}^j)$.
(3) Mutation: The main way to produce new particles is through mutation, which can enhance the diversity of the particles [26]; the specific steps are given in Algorithm 5.
Algorithm 5: Mutation algorithm.
Input: (i) Recombination population A .
    (ii) Mutation parameter p m and parameters e t a m , r 1 .
    (iii) Lower bound of particles: x m i n , upper bound of particles: x m a x , and size of clone population: N .
Output: Mutation population A .
Step 1: Generate a random number $rand \in [0, 1]$; if $rand < p_m$:
   Step 1.1 If $x(i)_j - x_{min}^j < x_{max}^j - x(i)_j$, then $\delta = \dfrac{x(i)_j - x_{min}^j}{x_{max}^j - x_{min}^j}$;
   Step 1.2 Else $\delta = \dfrac{x_{max}^j - x(i)_j}{x_{max}^j - x_{min}^j}$.
Step 2: Calculate $xy = 1 - \delta$ and $val = 2 r_1 + (1 - 2 r_1) \times xy^{\eta_m + 1}$.
Step 3: Generate a random number $rand \in [0, 1]$; if $rand < r_1$, then $\delta_q = val^{1/(\eta_m + 1)} - 1$; else, $\delta_q = 1 - val^{1/(\eta_m + 1)}$.
Step 4: Calculate $x(i)_j = x(i)_j + \delta_q (x_{max}^j - x_{min}^j)$.
The external archive is then updated with $A'''$ using Algorithm 1. The clone operation lets particles inherit good information, the recombination operation lets good genes be inherited more effectively, and the mutation operation increases the diversity of the particles. Together, these operations make the algorithm more effective.

4.5. Chaotic Strategy

In order to enhance the diversity and local-search ability of the algorithm in the late iterative stages, a local chaotic search is applied to the external archive; the specific steps are given in Algorithm 6.
Algorithm 6: Local chaotic algorithm.
Input: (i) External archive A.
    (ii) Maximum number of chaotic searching agents M .
Output: Clone population A c .
Step 1: Randomly generate a chaotic initial point $y_0 = rand$, and set the chaotic search counter $m = 1$.
Step 2: According to the logistic map of chaotic systems, generate the next element of the chaotic series: $y_{j+1} = 4 y_j (1 - y_j)$.
Step 3: Renew the positions based on the following equation: $x(i)_j = x(i)_j + rand(-1, 1) \cdot y_j$.
Step 4: If the termination criterion is satisfied, then output $A_c$. Otherwise, let $m = m + 1$ and return to Step 2.
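Algorithm 6 can be sketched for a single solution as follows (a Python sketch of ours; the perturbation form `x + scale * (2*rand - 1) * y` and the `scale` step-size parameter are our assumptions about the garbled Step 3):

```python
import random

def chaotic_search(x, n_steps, scale=0.1, seed_y=None):
    """Generate candidate positions around x by perturbing each
    coordinate along a logistic-map chaotic series
    y_{j+1} = 4 * y_j * (1 - y_j)."""
    y = random.random() if seed_y is None else seed_y
    candidates = []
    for _ in range(n_steps):
        y = 4.0 * y * (1.0 - y)  # logistic map stays in [0, 1]
        candidates.append([xi + scale * (2 * random.random() - 1) * y
                           for xi in x])
    return candidates
```

Because $y_j \in [0, 1]$, every perturbation is bounded by `scale`, keeping the search local around the archive member.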

4.6. Chaotic Multi-Objective Particle Swarm Optimization Algorithm Incorporating Clone Immunity (CICMOPSO)

Step 1: Initialize a population of particles $X_N$, each with a random position vector $x_i$ and velocity vector $v_i$. Further initialize the maximum size of the external archive $K$, the maximum size of the active population $N_A$, the maximum size of the clone population $N_C$, the lower bound of particles $x_{min}$, the upper bound of particles $x_{max}$, the maximum number of generations $T_{max}$, and the current generation number $T = 0$.
Step 2: Calculate the fitness of all particles in X N ( T ) .
Step 3: Initialize an update to the external archive A.
Step 4: Calculate the parallel cell coordinates of particles with Equation (5).
Step 5: Calculate the Pareto entropy and difference entropy of particles with Equations (6) and (7).
Step 6: Evaluate the states of particles by the judgement condition.
Step 7: Adjust the inertia weight by the states of particles and calculate the learning factors.
Step 8: Select the global best solution with Algorithm 2.
Step 9: Renew the positions and velocities of the particles based on Equations (2) and (3).
Step 10: For the external archive, perform the clone operation (Algorithm 3), the recombination operation (Algorithm 4), and the mutation operation (Algorithm 5); then update the external archive A.
Step 11: Conduct a chaotic search on the external archive A with Algorithm 6 to obtain the chaotic population; then update the external archive A.
Step 12: Calculate the fitness of the particles and renew every optimal position of the particles.
Step 13: Renew the external archive A with Algorithm 1.
Step 14: If the termination criterion is satisfied, then output the external archive A. Otherwise, let T = T + 1 and return to Step 4.

4.7. Computational Complexity

In this section, the computational complexity of the proposed algorithm is discussed. We consider the main steps of one generation in the main loop of the algorithm, where $M$ is the number of objectives, $N$ is the population size, and $K$ is the external archive size (in our algorithm, $K = N$):
(1) Computing the parallel cell coordinates of the particles requires $O(N \times M)$ time;
(2) the clone operation requires $O(N)$ time;
(3) the recombination operation requires $O(N_C)$ time, where $N_C$ is the maximum size of the clone population;
(4) the mutation operation requires $O(N_C)$ time;
(5) the chaotic operation requires $O(N)$ time;
(6) the external archive update requires $O(N^2 \times M)$ time.
In summary, the worst-case overall computational complexity of CICMOPSO within one generation is $O(N^2 \times M)$, which indicates that CICMOPSO is computationally efficient.

5. Numerical Experimentation

5.1. Benchmark Function and Parameter Setting

In order to test the performance of the CICMOPSO algorithm, the ZDT [27] (Table 2) and DTLZ [28] (Table 3) test functions were used in the experiments in this paper. We then compared the algorithm to NSGA-II [21], SPEA2 [29], and NICPSO [26]. For the parameters in this paper, the initial values of the learning factors were $c_1 = c_2 = 1.5$, the initial value of $w$ was 0.9, the recombination probability was $P_c = 0.8$, the mutation probability was $P_m = 1/n$ (the reciprocal of the decision variable dimension), the maximum size of the external archive was 100, the population size was 100, and the maximum number of generations was $T_{max} = 300$. Experiments were independently repeated 30 times. The optimization problems are described in Table 2.

5.2. Performance Indicators

(1) Convergence index [26]: The generational distance (GD) indicates the average distance between the obtained Pareto front and the true Pareto front. GD is defined as follows:
$$GD = \frac{1}{k} \sum_{i=1}^{k} d_i,$$
where $d_i = \sqrt{\sum_{j=1}^{n} (PF_{j} - PF_{j}^{t})^2}$ is the Euclidean distance in the $n$-dimensional objective space between the $i$th obtained Pareto solution $PF$ and the nearest point $PF^{t}$ on the true Pareto front, $k$ is the number of obtained Pareto solutions, and $n$ is the number of objective functions.
(2) Spacing index [26]: The spacing (S) specifies the spread of the obtained Pareto front. It is defined as follows:
$$S = \sqrt{\frac{1}{k-1} \sum_{p=2}^{k} (D_p - \bar{D})^2},$$
where $D_p = \sum_{i=1}^{n} |PF_{i,(p-1)} - PF_{i,p}|$ is the absolute difference between two consecutive solutions in the obtained Pareto front $PF$, and $\bar{D}$ is the average of all $D_p$.
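Both indicators are easy to compute from the obtained front. The following is a minimal Python sketch (ours; GD is taken as the mean nearest-point distance, and the front is sorted before taking consecutive gaps, which are our readings of the garbled formulas):

```python
from math import sqrt

def gd(front, true_front):
    """Generational distance: mean Euclidean distance from each
    obtained point to its nearest point on the true front."""
    def dist(a, b):
        return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(p, q) for q in true_front) for p in front) / len(front)

def spacing(front):
    """Spacing: standard deviation of the L1 gaps between
    consecutive points of the sorted obtained front."""
    pts = sorted(front)
    gaps = [sum(abs(a - b) for a, b in zip(pts[i - 1], pts[i]))
            for i in range(1, len(pts))]
    mean = sum(gaps) / len(gaps)
    return sqrt(sum((g - mean) ** 2 for g in gaps) / len(gaps))
```

A front lying exactly on the true front gives GD = 0, and an evenly spread front gives S = 0.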

5.3. Numerical Results

The convergence results are shown in Table 4. The CICMOPSO algorithm had better results than the NSGA-II and SPEA2 algorithms for all benchmark functions. For ZDT4, CICMOPSO was worse than NICPSO, but for other functions, CICMOPSO had the better performance. This finding indicates that the resulting Pareto fronts obtained by CICMOPSO are closer to the true Pareto fronts, and CICMOPSO can effectively improve convergence.
From Table 5, we can see that NICPSO and CICMOPSO produced better results for the spacing index (S) than NSGA-II and SPEA2. For ZDT1 and ZDT2, CICMOPSO performed better than NICPSO, with the minimum mean and variance, but for the other functions, NICPSO performed even better.
It is well known that multi-objective optimization problems become more difficult to solve as the number of objectives grows, especially with three or more objectives. The simulation results in Table 6 show that CICMOPSO can solve most of these problems.

6. Conclusions

A chaotic multi-objective particle swarm optimization algorithm incorporating clone immunity (CICMOPSO) was proposed in this paper to solve multi-objective problems. CICMOPSO uses the clone immunity strategy to maintain the external archive and avoid falling into a local optimum, and the Pareto entropy is used to dynamically adjust the algorithm parameters. The experimental results showed that CICMOPSO outperforms the compared algorithms on most of the test problems with respect to the two metrics. For two-objective problems, CICMOPSO performed well, but for problems with more objectives, the results are not yet satisfactory. We will improve CICMOPSO to make it suitable for more problems in the near future.

Author Contributions

All the authors have contributed equally to this paper. All the authors have read and approved the final manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (61561001) and First-Class Disciplines Foundation of NingXia (Grant No. NXYLXK2017B09).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PSO: particle swarm optimization
MOPSO: multi-objective particle swarm optimization
GA: genetic algorithm
DE: differential evolution
PCCS: parallel cell coordinate system
CICMOPSO: chaotic multi-objective particle swarm optimization incorporating clone immunity

References

  1. Sebt, M.H.; Afshar, M.R.; Alipouri, Y. Hybridization of genetic algorithm and fully informed particle swarm for solving the multi-mode resource-constrained project scheduling problem. Eng. Optim. 2017, 49, 513–530.
  2. Rezaei, F.; Safavi, H.R.; Mirchi, A.; Madani, K. f-MOPSO: An Alternative Multi-Objective PSO Algorithm for Conjunctive Water Use Management. J. Hydro-Environ. Res. 2017, 14, 1–18.
  3. Punnathanam, V.; Kotecha, P. Multi-objective optimization of Stirling engine systems using Front-based Yin-Yang-Pair Optimization. Energy Convers. Manag. 2017, 133, 332–348.
  4. Bendu, H.; Deepak, B.B.V.L.; Murugan, S. Multi-objective optimization of ethanol fuelled HCCI engine performance using hybrid GRNN-PSO. Appl. Energy 2017, 187, 601–611.
  5. Zhang, Y.; Jun, Y.; Wei, G.; Wu, L. Find multi-objective paths in stochastic networks via chaotic immune PSO. Expert Syst. Appl. 2010, 37, 1911–1919.
  6. Emary, E.; Zawbaa, H.M.; Hassanien, A.E.; Parv, B. Multi-objective retinal vessel localization using flower pollination search algorithm with pattern search. Adv. Data Anal. Classif. 2017, 11, 611–627.
  7. Zawbaa, H.M.; Emary, E.; Grosan, C.; Snasel, V. Large-dimensionality small-instance set feature selection: A hybrid bio-inspired heuristic approach. Swarm Evol. Comput. 2018, 42, 29–42.
  8. Zawbaa, H.M.; Szlek, J.; Grosan, C.; Jachowicz, R.; Mendyk, A. Computational Intelligence Modeling of the Macromolecules Release from PLGA Microspheres-Focus on Feature Selection. PLoS ONE 2016, 11, e0157610.
  9. Wang, S.; Zhang, Y.; Ji, G.; Yang, J.; Wu, J.; Wei, L. Fruit Classification by Wavelet-Entropy and Feedforward Neural Network Trained by Fitness-Scaled Chaotic ABC and Biogeography-Based Optimization. Entropy 2015, 17, 5711–5728.
  10. Zawbaa, H.M.; Schiano, S.; Perez-Gandarillas, L.; Grosan, C.; Michrafy, A.; Wu, C.Y. Computational intelligence modelling of pharmaceutical tabletting processes using bio-inspired optimization algorithms. Adv. Powder Technol. 2018, 29, 2966–2977.
  11. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the International Conference on Neural Networks (ICNN'95), Perth, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA, 1995; Volume 4, pp. 1942–1948.
  12. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS'95), Nagoya, Japan, 4–6 October 1995; IEEE: Piscataway, NJ, USA, 1995; pp. 39–43.
  13. Reyes-Sierra, M.; Coello Coello, C.A. Multi-Objective Particle Swarm Optimizers: A Survey of the State-of-the-Art. Int. J. Comput. Intell. Res. 2006, 2, 287–308.
  14. Coello Coello, C.A.; Pulido, G.T.; Lechuga, M.S. Handling multiple objectives with particle swarm optimization. IEEE Trans. Evol. Comput. 2004, 8, 256–279.
  15. Zhang, C.; Zhang, J.; Gu, X. The Application of Hybrid Genetic Particle Swarm Optimization Algorithm in the Distribution Network Reconfigurations Multi-Objective Optimization. In Proceedings of the Third International Conference on Natural Computation (ICNC 2007), Haikou, China, 24–27 August 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 455–459.
  16. Reynolds, R.G. An Introduction to Cultural Algorithms. In Proceedings of the 3rd Annual Conference on Evolutionary Programming, San Diego, CA, USA, 24–26 February 1994; World Scientific Press: Singapore, 1994; pp. 131–139.
  17. Su, Y.X.; Chi, R. Multi-objective particle swarm-differential evolution algorithm. Neural Comput. Appl. 2017, 28, 407–418.
  18. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization: An Overview. Swarm Intell. 2007, 1, 33–57.
  19. Sengupta, S.; Basak, S.; Peters, R. Particle Swarm Optimization: A Survey of Historical and Recent Developments with Hybridization Perspectives. Mach. Learn. Knowl. Extr. 2018, 1, 10.
  20. Wei, J.; Wang, Y.; Wang, H. A hybrid Particle Swarm Evolutionary Algorithm for Constrained Multi-Objective Optimization. Comput. Inf. 2010, 29, 701–718.
  21. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197.
  22. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the IEEE ICEC Conference, Anchorage, AK, USA, 4–9 May 1998; pp. 69–73.
  23. Hu, W.; Yen, G.G.; Zhang, X. Multiobjective Particle Swarm Optimization Based on Pareto Entropy. J. Softw. 2014, 25, 1025–1050.
  24. Daneshyari, M.; Yen, G.G. Cultural-based multiobjective particle swarm optimization. IEEE Trans. Syst. Man Cybern. Part B 2011, 41, 553–567.
  25. Zhao, Y.; Fang, Z. Particle swarm optimization algorithm with weight function's learning factor. J. Comput. Appl. 2013, 33, 2265–2268.
  26. Liu, J.H.; Gao, Y.L. Multi-objective Particle Swarm Optimization with Non-dominated Neighbor immune strategy. J. Taiyuan Univ. Technol. 2014, 45, 769–775.
  27. Zitzler, E.; Deb, K.; Thiele, L. Comparison of Multiobjective Evolutionary Algorithms: Empirical Results. Evol. Comput. 2000, 8, 173–195.
  28. Deb, K.; Thiele, L.; Laumanns, M.; Zitzler, E. Scalable multi-objective optimization test problems. In Proceedings of the 2002 Congress on Evolutionary Computation (CEC'02), Honolulu, HI, USA, 12–17 May 2002; IEEE: Piscataway, NJ, USA, 2002; pp. 825–830.
  29. Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the Strength Pareto Evolutionary Algorithm; TIK-Report 103; Swiss Federal Institute of Technology (ETH): Zurich, Switzerland, 2001.
Figure 1. Particle distribution in the parallel cell coordinate system.
Figure 2. The inertia weight and learning factors as a function of iteration number for ZDT1. (a) The curves of the adaptive inertia weight and the linearly decreasing inertia weight; (b) the curves of the adaptive learning factors $c_1$ and $c_2$.
Table 1. Parallel cell coordinate system of particles.

k    F1       F2       F3       L_{k,1}  L_{k,2}  L_{k,3}
1    0.5377   1.8339   −2.2588  4        8        1
2    0.8622   0.3188   −1.3077  5        5        2
3    −0.4336  0.3426   3.5784   2        5        8
4    2.7694   −1.3499  3.0349   8        1        8
5    0.7250   −0.0631  0.7147   4        4        5
6    −0.2050  −0.1241  1.4897   3        4        6
7    1.4090   1.4172   0.6715   6        7        5
8    −1.2075  0.7172   1.6302   1        6        6
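The cell coordinates in Table 1 follow the usual PCCS mapping $L_{k,j}=\lceil K\,(f_{k,j}-f_j^{\min})/(f_j^{\max}-f_j^{\min})\rceil$, with $K$ the archive size (8 here) and the column minimum placed in cell 1. A minimal sketch (the function name and interface are mine):

```python
import math

def cell_coordinates(F, K=None):
    """Map objective vectors to integer cell coordinates of a
    parallel cell coordinate system (PCCS).

    The k-th particle's j-th coordinate is
    ceil(K * (F[k][j] - f_j_min) / (f_j_max - f_j_min)),
    with the per-objective minimum placed in cell 1.
    K defaults to the number of archive members.
    """
    m = len(F[0])
    K = K or len(F)
    lo = [min(f[j] for f in F) for j in range(m)]   # per-objective minima
    hi = [max(f[j] for f in F) for j in range(m)]   # per-objective maxima
    return [[max(math.ceil(K * (f[j] - lo[j]) / (hi[j] - lo[j])), 1)
             for j in range(m)] for f in F]
```

Applying this to the eight objective vectors of Table 1 reproduces the listed cells, e.g. particle 1 maps to (4, 8, 1) and particle 4, which attains the maxima of F1 and F3 and the minimum of F2, maps to (8, 1, 8).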
Table 2. Benchmark functions: ZDT.

Name | Objective functions | D | Variable bounds
ZDT1 | $f_1(x)=x_1$; $f_2(x)=g(x)\left(1-\sqrt{x_1/g(x)}\right)$; $g(x)=1+9\sum_{i=2}^{n}x_i/(n-1)$ | 30 | $x_i\in[0,1]$
ZDT2 | $f_1(x)=x_1$; $f_2(x)=g(x)\left[1-(x_1/g(x))^2\right]$; $g(x)=1+9\left(\sum_{i=2}^{n}x_i\right)/(n-1)$ | 30 | $x_i\in[0,1]$
ZDT3 | $f_1(x)=x_1$; $f_2(x)=g(x)\left(1-\sqrt{x_1/g(x)}-(x_1/g(x))\sin(10\pi x_1)\right)$; $g(x)=1+9\sum_{i=2}^{n}x_i/(n-1)$ | 30 | $x_i\in[0,1]$
ZDT4 | $f_1(x)=x_1$; $f_2(x)=g(x)\left(1-\sqrt{x_1/g(x)}\right)$; $g(x)=1+10(n-1)+\sum_{i=2}^{n}\left[x_i^2-10\cos(4\pi x_i)\right]$ | 10 | $x_1\in[0,1]$, $x_i\in[-5,5]$, $i=2,\dots,n$
ZDT6 | $f_1(x)=1-\exp(-4x_1)\sin^6(6\pi x_1)$; $f_2(x)=g(x)\left[1-(f_1(x)/g(x))^2\right]$; $g(x)=1+9\left[\sum_{i=2}^{n}x_i/(n-1)\right]^{0.25}$ | 10 | $x_i\in[0,1]$
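As a sketch of how these benchmarks are evaluated, ZDT1 from the first row can be written as follows (the function name is mine):

```python
import math

def zdt1(x):
    """ZDT1 from Table 2: n decision variables, each in [0, 1].

    Returns (f1, f2) with g(x) = 1 + 9 * sum(x[1:]) / (n - 1)
    and f2 = g * (1 - sqrt(f1 / g)).
    """
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2
```

On the Pareto-optimal front $x_i=0$ for $i\ge 2$, so $g=1$ and $f_2=1-\sqrt{f_1}$; e.g. with $x_1=0.25$ the function returns $(0.25, 0.5)$.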
Table 3. Benchmark functions: DTLZ ($M$ is the number of objectives; $x_M$ denotes the last block of decision variables).

Name | Objective functions | D | Variable bounds
DTLZ1 | $f_1(x)=\frac{1}{2}(1+g(x_M))\prod_{i=1}^{M-1}x_i$; $f_m(x)=\frac{1}{2}(1+g(x_M))\left(\prod_{i=1}^{M-m}x_i\right)(1-x_{M-m+1})$, $m=2,\dots,M-1$; $f_M(x)=\frac{1}{2}(1+g(x_M))(1-x_1)$; $g(x_M)=100\left(|x_M|+\sum_{x_i\in x_M}\left[(x_i-0.5)^2-\cos(20\pi(x_i-0.5))\right]\right)$ | 30 | $x_i\in[0,1]$
DTLZ2 | $f_1(x)=(1+g(x_M))\prod_{i=1}^{M-1}\cos\frac{x_i\pi}{2}$; $f_m(x)=(1+g(x_M))\left(\prod_{i=1}^{M-m}\cos\frac{x_i\pi}{2}\right)\sin\frac{x_{M-m+1}\pi}{2}$, $m=2,\dots,M-1$; $f_M(x)=(1+g(x_M))\sin\frac{x_1\pi}{2}$; $g(x_M)=\sum_{x_i\in x_M}(x_i-0.5)^2$ | 30 | $x_i\in[0,1]$
DTLZ3 | As DTLZ2, but with $g(x_M)=100\left(|x_M|+\sum_{x_i\in x_M}\left[(x_i-0.5)^2-\cos(20\pi(x_i-0.5))\right]\right)$ | 30 | $x_i\in[0,1]$
DTLZ4 | As DTLZ2, but with each $x_i$ in the objectives replaced by $x_i^{a}$, $a=100$ | 30 | $x_i\in[0,1]$
DTLZ5 | As DTLZ2, but with $x_i$ ($i=2,\dots,M-1$) replaced by $\theta_i=\frac{\pi}{4(1+g(x_M))}\left(1+2g(x_M)x_i\right)$ | 30 | $x_i\in[0,1]$
DTLZ6 | As DTLZ5, but with $g(x_M)=\sum_{x_i\in x_M}x_i^{0.1}$ | 30 | $x_i\in[0,1]$
Table 4. Comparison of results for the convergence index (GD).

            NSGA-II        SPEA2          NICPSO         CICMOPSO
ZDT1  Aver  1.14 × 10^−3   3.82 × 10^−3   1.08 × 10^−3   2.95 × 10^−4
      Var   1.41 × 10^−4   4.91 × 10^−3   4.26 × 10^−5   4.23 × 10^−5
ZDT2  Aver  4.88 × 10^−1   8.61 × 10^−3   7.74 × 10^−4   8.29 × 10^−5
      Var   2.77 × 10^−2   2.60 × 10^−3   2.99 × 10^−5   2.86 × 10^−5
ZDT3  Aver  2.48 × 10^−3   9.72 × 10^−3   8.93 × 10^−4   6.75 × 10^−5
      Var   1.27 × 10^−4   5.23 × 10^−3   4.02 × 10^−5   4.03 × 10^−5
ZDT4  Aver  5.13 × 10^−1   9.98           1.37 × 10^−3   8.55 × 10^−2
      Var   1.18 × 10^−1   2.01 × 10^−1   7.42 × 10^−3   5.72 × 10^−2
ZDT6  Aver  7.58 × 10^−2   1.93 × 10^−2   3.08 × 10^−3   1.19 × 10^−4
      Var   6.08 × 10^−3   1.40 × 10^−3   1.33 × 10^−4   8.33 × 10^−6
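The convergence values above use the generational distance (GD), the average distance from each obtained point to the true Pareto front. One common definition (variants differ in the outer normalization) is $GD=\sqrt{\sum_i d_i^2}/n$, sketched below; the function name is mine:

```python
import math

def generational_distance(approx, reference):
    """Generational distance: GD = sqrt(sum_i d_i^2) / n, where d_i
    is the Euclidean distance from the i-th obtained point to its
    nearest point on the reference (true) Pareto front.
    GD = 0 means every obtained point lies on the reference front.
    """
    n = len(approx)
    total = 0.0
    for a in approx:
        d = min(math.dist(a, r) for r in reference)  # nearest reference point
        total += d * d
    return math.sqrt(total) / n
```

In practice the reference front is a dense sampling of the analytical Pareto front of each ZDT/DTLZ problem.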
Table 5. Comparison of results for the spacing index (S).

            NSGA-II        SPEA2          NICPSO         CICMOPSO
ZDT1  Aver  5.04 × 10^−1   2.96 × 10^−1   8.78 × 10^−2   3.33 × 10^−2
      Var   3.93 × 10^−2   1.09 × 10^−1   4.83 × 10^−2   1.76 × 10^−2
ZDT2  Aver  4.73 × 10^−1   5.05 × 10^−1   1.22 × 10^−2   1.37 × 10^−2
      Var   2.99 × 10^−2   1.84 × 10^−1   1.87 × 10^−2   2.09 × 10^−2
ZDT3  Aver  5.90 × 10^−1   5.03 × 10^−1   1.88 × 10^−2   1.74 × 10^−2
      Var   3.04 × 10^−2   9.73 × 10^−2   1.66 × 10^−2   3.39 × 10^−2
ZDT4  Aver  7.03 × 10^−1   8.70 × 10^−1   1.17 × 10^−2   2.23 × 10^−2
      Var   6.47 × 10^−2   1.01 × 10^−1   7.37 × 10^−3   5.36 × 10^−2
ZDT6  Aver  4.86 × 10^−1   2.49 × 10^−1   3.27 × 10^−3   2.87 × 10^−2
      Var   3.61 × 10^−2   4.97 × 10^−2   1.96 × 10^−4   3.59 × 10^−2
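The spacing index follows Schott's metric, which measures how evenly the obtained points are distributed along the front (S = 0 means perfectly uniform spacing). A sketch, with the function name assumed:

```python
import math

def spacing(front):
    """Schott's spacing metric S.

    d_i is the minimum Manhattan distance from point i to any other
    point of the front; S is the standard deviation of the d_i
    (with the 1/(n-1) normalization).
    """
    n = len(front)
    d = []
    for i, a in enumerate(front):
        d.append(min(sum(abs(ai - bi) for ai, bi in zip(a, b))
                     for j, b in enumerate(front) if j != i))
    mean = sum(d) / n
    return math.sqrt(sum((mean - di) ** 2 for di in d) / (n - 1))
```

For example, the evenly spaced front {(0, 2), (1, 1), (2, 0)} gives every $d_i = 2$ and hence $S = 0$.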
Table 6. Convergence index and spacing index for the DTLZ problems.

        Convergence                   Uniformity
        Aver          Var             Aver          Var
DTLZ1   3.54 × 10^−1  2.88 × 10^−1    3.22 × 10^−1  2.76 × 10^−1
DTLZ2   1.93 × 10^−1  1.62 × 10^−1    7.40 × 10^−1  5.43 × 10^−1
DTLZ3   5.42          8.84            1.74          5.00
DTLZ4   1.95          6.15            9.17          1.21
DTLZ5   1.37 × 10^−1  8.29 × 10^−2    4.51 × 10^−1  9.62 × 10^−2
DTLZ6   1.45          1.07            9.29          6.69

Sun, Y.; Gao, Y.; Shi, X. Chaotic Multi-Objective Particle Swarm Optimization Algorithm Incorporating Clone Immunity. Mathematics 2019, 7, 146. https://doi.org/10.3390/math7020146

