1. Introduction
The particle swarm optimization (PSO) algorithm, originally proposed by Kennedy and Eberhart in 1995, is a population-based metaheuristic that imitates the social behavior of bird flocking [1]. In PSO, each particle is treated as a potential solution, and all particles follow their own experience and the current optimal particle to fly through the solution space. Because it requires fewer parameters to adjust and can be easily implemented, PSO has been rapidly applied to real-world optimization problems, including circuit design [2], job scheduling [3], data mining [4], path planning [5,6] and protein-ligand docking [7]. In 1999, Moore and Chapman extended PSO to solve multi-objective problems (MOPs) for the first time [8]. Since then, great interest has arisen among researchers from different communities in tackling MOPs with PSO. For example, Coello and Lechuga [9] introduced a proposal for multi-objective PSO, denoted MOPSO, which determines particles' flight directions using the concept of Pareto dominance and adopts a global repository to store previously found nondominated solutions. Later, in 2004, Coello et al. [10] presented an enhanced version of MOPSO that employs a mutation operator and a constraint-handling mechanism to improve the performance of the original MOPSO. Raquel and Prospero [11] proposed the MOPSO-CD algorithm, which selects the global best and updates the external archive of nondominated solutions by calculating the crowding distance of particles. In 2008, Peng and Zhang developed a new MOPSO adopting a decomposition approach, called MOPSO/D [12]. It is based on the MOEA/D framework [13], which converts an MOP into a number of single-objective optimization sub-problems and then solves all these sub-problems simultaneously. In MOPSO/D, each particle's global best is defined by the solutions located within a certain neighborhood. Moubayed et al. [14] proposed a smart MOPSO based on decomposition (SDMOPSO) that exchanges information between neighboring particles with fewer objective function evaluations and stores the leaders of the whole swarm in a crowding archive. dMOPSO, proposed by Martinez and Coello [15], selects the global best from a set of solutions according to the decomposition approach and thus updates each particle's position. Moubayed et al. [16] developed D2MOPSO, which hybridizes the dominance and decomposition approaches and introduces an archiving technique based on crowding distance. Other MOPSOs proposed in recent years have also proved effective in solving complex MOPs, such as MPSO/D [17], MMOPSO [18], AgMOPSO [19], CMOPSO [20] and CMaPSO [21].
The quantum-behaved PSO (QPSO), proposed by Sun et al. [22], is a variant of PSO inspired by quantum mechanics and the trajectory analysis of PSO. The trajectory analysis clarified that each particle in PSO is in a bound state: it oscillates around and converges to its local attractor [23]. In QPSO, each particle is assumed to have quantum behavior and to be attracted by a quantum delta potential well centered on its local attractor. Additionally, the concept of the mean best position was defined and employed to update particles' positions. In terms of the update equation, which differs from that of PSO, QPSO maintains no velocity vector for each particle, and it requires fewer parameters to adjust [24]. Owing to these advantages, we incorporate QPSO into the original MOEA/D framework in order to obtain a more effective algorithm for solving MOPs than other decomposition-based MOPSOs that use the canonical PSO.
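To make the update mechanics concrete, the following is a minimal Python sketch of a QPSO position update along the lines described above. The function name and sampling details are our illustrative choices following the standard QPSO formulation (local attractor between personal and global best, jump length scaled by the contraction-expansion coefficient and the distance to the mean best position); the paper's Equation (4) should be taken as authoritative.

```python
import math
import random

def qpso_update(x, pbest, mbest, gbest, alpha):
    """One QPSO position update: no velocity vector is kept. Each
    dimension is sampled around the local attractor
    p = phi*pbest + (1-phi)*gbest, with a jump whose length is scaled
    by alpha and the distance to the mean best position mbest."""
    new_x = []
    for j in range(len(x)):
        phi = random.random()
        p = phi * pbest[j] + (1.0 - phi) * gbest[j]  # local attractor
        u = 1.0 - random.random()                    # u in (0, 1]
        step = alpha * abs(mbest[j] - x[j]) * math.log(1.0 / u)
        # the jump lands on either side of the attractor with equal probability
        new_x.append(p + step if random.random() < 0.5 else p - step)
    return new_x
```

Note that, unlike canonical PSO, no inertia weight or acceleration coefficients appear; alpha is the only control parameter of the update.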
QPSO and other PSO variants generally converge fast owing to the extensive information exchange among particles, which is why such algorithms solve optimization problems more efficiently than many other population-based random search algorithms. Fast convergence, however, means a rapid decline in diversity. This is desirable during the early stage of the search, as it allows the algorithm to find satisfactory solutions quickly, but a rapid diversity decline during the later stage causes particles to aggregate around the global best position and, in turn, the whole swarm to stagnate (i.e., premature convergence).
Diversity maintenance is also essential when extending PSO to solve MOPs. During the past decade, researchers have developed various techniques to maintain diversity in their MOPSOs. For example, Qiu et al. [25] introduced a global best selection method based on proportional distribution and the K-means algorithm to make particles converge to the Pareto front quickly while maintaining diversity. Cheng et al. [26] presented the IMOPSO-PS algorithm, in which a preference strategy is applied for optimal distributed generation (DG) integration into the distribution system. This algorithm uses a circular nondominated selection of particles from one iteration to the next and mutates particles to enhance the swarm diversity during the search process.
In this paper, we propose a multi-objective quantum-behaved particle swarm optimization algorithm based on decomposition, named DMO-QPSO, which integrates QPSO with the original MOEA/D framework and uses a diversity-control strategy. As in the literature [12,13], a neighborhood relationship is defined according to the distances between the weight vectors of different sub-problems, and each sub-problem is solved using information only from its neighboring sub-problems. However, as the number of iterations increases, the current best solutions to the neighbors of a sub-problem may approach each other. This can cause a diversity loss in the population produced in the next iteration, particularly at the later stage of the search. Therefore, DMO-QPSO does not adopt the neighborhood relationship described in the MOEA/D and MOPSO/D frameworks. Instead, we introduce a two-phase diversity-controlling mechanism that makes particles alternate between attraction and explosion states according to the swarm diversity: particles move through the search space in the attraction phase unless the swarm diversity declines to a threshold value, which triggers the explosion phase. Additionally, unlike MOPSO/D, in which the global best is updated according to a decomposition approach, the proposed DMO-QPSO uses a vector set to store a pre-defined number of nondominated solutions and then randomly picks one as the current global best, so every solution in this set has a chance to guide the movement of the whole swarm. The penalty-based boundary intersection (PBI) approach [27] is used in the algorithm owing to its advantage over other decomposition methods, including the weighted sum (WS) and the Tchebycheff (TCH) approaches [13].
The rest of this paper is organized as follows. Some preliminaries of MOPs, PSO, QPSO and the MOEA/D framework are given in Section 2. Section 3 describes the procedure of the proposed DMO-QPSO algorithm in detail. Section 4 presents the experimental results and analysis. Some further discussion of DMO-QPSO is given in Section 5. Finally, the paper is concluded in the last section.
3. The Proposed DMO-QPSO
In this section, we present the proposed multi-objective quantum-behaved particle swarm optimization algorithm based on decomposition, named DMO-QPSO, which integrates the QPSO algorithm with the MOEA/D framework and adopts a mechanism to control the swarm diversity during the search process, so as to avoid premature convergence and escape local optima with a higher probability.
At the beginning of DMO-QPSO, we define a set of well-distributed weight vectors and use a decomposition approach to convert the original MOP into a group of single-objective sub-problems. More precisely, let λ^1, ..., λ^N be the weight vectors; the PBI approach is employed in this paper owing to its advantage over other decomposition approaches.
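As an illustration of the decomposition step, the following Python sketch computes the PBI value of an objective vector for a given weight vector and ideal point. The function name and the default penalty factor are our illustrative choices; see [27] for the formal definition of PBI.

```python
import math

def pbi(fx, lam, z_star, theta=5.0):
    """Penalty-based boundary intersection (PBI) value of objective
    vector fx for weight vector lam and ideal point z_star:
    g = d1 + theta * d2, where d1 is the projection length of
    (fx - z_star) onto lam and d2 is the distance to that direction."""
    diff = [f - z for f, z in zip(fx, z_star)]
    norm_lam = math.sqrt(sum(l * l for l in lam))
    d1 = abs(sum(d * l for d, l in zip(diff, lam))) / norm_lam
    d2 = math.sqrt(sum((d - d1 * l / norm_lam) ** 2
                       for d, l in zip(diff, lam)))
    return d1 + theta * d2
```

A point lying exactly on the weight direction incurs no penalty (d2 = 0), while a point off the direction is penalized by theta * d2, which is what gives PBI its good distribution of solutions along the PF.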
In DMO-QPSO, the swarm S with N particles is randomly initialized. Each particle i has a position vector X_i and a personal best position P_i, where P_i is initially set equal to X_i. The mean best position of all particles can then be easily obtained according to Equation (6). The global best position is produced in a natural way according to the Pareto dominance relationships among the personal best positions. More specifically, we first define a vector set NS whose size is pre-set. Then, the fast nondominated sorting approach [33] is applied to sort the union of all the personal best positions and NS. The lower the nondomination rank of a solution is, the better the solution is. Therefore, we select only the solutions in the lowest nondomination ranks and store them in NS. All of the solutions in NS are regarded as candidates for the global best employed in the next iteration for updating particles' positions.
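The construction of the candidate set and the random global-best pick can be sketched as follows. This is a simple illustrative implementation (function names are ours): it peels off nondominated fronts until the set is full, rather than the full fast nondominated sorting procedure of [33].

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def build_ns(candidates, ns_size):
    """Keep candidates of the lowest nondomination ranks, up to
    ns_size: repeatedly extract the current nondominated front."""
    ns, remaining = [], list(candidates)
    while remaining and len(ns) < ns_size:
        front = [c for c in remaining
                 if not any(dominates(o, c) for o in remaining if o is not c)]
        remaining = [c for c in remaining if c not in front]
        ns.extend(front[:ns_size - len(ns)])
    return ns

def pick_gbest(ns):
    """Randomly pick one stored nondominated solution as the global best."""
    return random.choice(ns)
```

Because the global best is drawn uniformly from NS, every stored nondominated solution has a chance to guide the swarm, as stated above.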
It should be noted that the neighborhood in the original MOEA/D framework is formed according to the distances between the weight vectors of different sub-problems; that is, the neighborhood of a sub-problem consists of the sub-problems with the closest weight vectors. On the basis of this definition, solutions to neighboring sub-problems tend to be close in the decision space. This may enable the algorithm to converge faster at the early stage, but it brings the risk of diversity loss and premature convergence at the later stage. For this reason, DMO-QPSO does not adopt the neighborhood relationship of the original MOEA/D framework.
Furthermore, we measure the swarm diversity during the search process and make the swarm alternate between two phases, i.e., attraction and explosion, according to its diversity. At each iteration, the diversity of the particle swarm is calculated as

diversity(S) = (1 / (N · |L|)) Σ_{i=1}^{N} sqrt( Σ_{j=1}^{D} (x_{i,j} − x̄_j)² )

where D is the dimensionality of the problem, |L| is the length of the longest diagonal in the search space, S is the particle swarm, N is the population size, x_{i,j} is the j-th value of particle i and x̄_j is the j-th value of the average point. According to the literature [34,35], a particle converges when the contraction-expansion coefficient α is less than 1.778 and diverges otherwise. Therefore, we set a threshold, denoted d_low, on the swarm diversity. When the diversity drops below d_low (i.e., in the explosion phase), α is reset to a constant α₀ larger than 1.778 to make particles diverge and thus increase the swarm diversity. Otherwise, α decreases linearly within a predefined interval (i.e., in the attraction phase).
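The diversity measure and the two-phase control of the coefficient can be sketched in Python as follows. The helper names and the value alpha0 = 2.0 are our illustrative choices (any constant above 1.778 satisfies the explosion condition described above); the attraction-phase bounds follow the linear schedule described in the parameter settings.

```python
import math

def swarm_diversity(positions, diag_len):
    """Mean distance of the particles to the average point, normalised
    by the length of the longest diagonal of the search space."""
    n, dim = len(positions), len(positions[0])
    avg = [sum(p[j] for p in positions) / n for j in range(dim)]
    total = sum(math.sqrt(sum((p[j] - avg[j]) ** 2 for j in range(dim)))
                for p in positions)
    return total / (n * diag_len)

def next_alpha(diversity, d_low, t, t_max, alpha0=2.0,
               a_max=1.0, a_min=0.5):
    """Two-phase control: below the threshold d_low, reset alpha to a
    constant alpha0 > 1.778 (explosion phase); otherwise decrease it
    linearly from a_max to a_min over the run (attraction phase)."""
    if diversity < d_low:
        return alpha0
    return a_max - (a_max - a_min) * t / t_max
```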
Like MOEA/D, DMO-QPSO also uses an external population EP to store the nondominated solutions found during the search process. At each iteration, we check the Pareto dominance relationship between the newly generated solutions and the solutions in EP: solutions in EP dominated by a newly generated solution are removed from EP, and the newly generated solution is added to EP if no solution in EP dominates it. The main process of the DMO-QPSO algorithm is presented in Algorithm 2.
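The external-population update rule just described can be sketched directly (illustrative code; the function name is ours, and solutions are represented by their objective vectors under minimization):

```python
def update_ep(ep, new_sol):
    """Update the external population: drop members dominated by the
    new solution, then add it if nothing left in EP dominates it."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    if any(dominates(e, new_sol) for e in ep):
        return ep                      # the new solution is dominated
    kept = [e for e in ep if not dominates(new_sol, e)]
    kept.append(new_sol)
    return kept
```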
Algorithm 2 DMO-QPSO
Input: the number of sub-problems, i.e., the population size, N; the set of weight vectors, {λ^1, ..., λ^N}; the maximal number of iterations, T_max.
Output: the external population EP.
1: EP ← ∅;
2: for i = 1 to N do
3:   Randomly initialize the position vector X_i;
4:   Set the personal best position P_i ← X_i;
5:   Evaluate the fitness value F(X_i);
6: end for
7: Initialize NS by applying fast nondominated sorting to {P_1, ..., P_N};
8: Update EP with the nondominated solutions found so far;
9: for t = 1 to T_max do
10:   Compute the mean best position of all the particles according to Equation (6);
11:   Measure diversity(S);
12:   if diversity(S) < d_low then
13:     α ← α₀;
14:   else
15:     Set α linearly decreasing within the predefined interval;
16:   end if
17:   for i = 1 to N do
18:     Update the position vector X_i using Equation (4);
19:     Repair X_i;
20:     Evaluate F(X_i);
21:     for j = 1 to N do
22:       if g^pbi(X_i | λ^j) ≤ g^pbi(P_j | λ^j) then
23:         P_j ← X_i;
24:       end if
25:     end for
26:     if X_i is not dominated by any solution in NS then
27:       Add X_i to NS;
28:     end if
29:   end for
30:   Update NS;
31:   Update EP;
32: end for
4. Experimental Studies
This section presents the experiments conducted to investigate the performance of the proposed DMO-QPSO algorithm. First, we introduce the set of MOPs used as benchmark functions. Next, the parameter settings of the different algorithms and two performance metrics are described in detail. Finally, the comparison experiments and results analysis are presented. More precisely, we compared DMO-QPSO with two recently proposed multi-objective PSOs (i.e., MMOPSO and CMOPSO) and three other multi-objective optimization algorithms, namely, MOPSO, MOPSO/D and MOEA/D-DE [36]. The PBI approach is used in the four decomposition-based algorithms (i.e., DMO-QPSO, MOPSO/D, MOEA/D-DE and MMOPSO).
4.1. Test Functions
We selected 15 test functions whose PFs have different characteristics, including concavity, convexity, multi-frontality and disconnection. Twelve of these test functions are bi-objective (i.e., F1, F2, F3, F4, F5, F7, F8 and F9 from the F test set [36], and UF4, UF5, UF6 and UF7 from the UF test set [37]) and the rest are tri-objective (i.e., F6 from the F test set and UF9 and UF10 from the UF test set). As shown in [36,37], the F and UF test sets are two sets of test instances designed to study the ability of MOEAs to solve problems with complicated PS shapes. We used 30 decision variables for the UF test set, problems F1 to F5, and F9; problems F6 to F8 were tested with 10 decision variables.
4.2. Parameter Setting
The setting of the weight vectors is decided by an integer H [36]. More precisely, each individual weight in a weight vector takes a value from {0/H, 1/H, ..., H/H}. Therefore, the population size N is given by N = C(H+m−1, m−1), where m is the number of objectives. H was 299 for the bi-objective test functions and 33 for the tri-objective ones; consequently, the population size N was 300 for the bi-objective test functions and 595 for the tri-objective ones. The maximal number of iterations was 500, and each algorithm was run 30 independent times for each test function. The size of the external population EP was set to 100. Besides, the penalty factor θ in PBI was 5.0.
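The weight-vector construction can be sketched as follows: the code enumerates all vectors whose components are multiples of 1/H and sum to 1, and their count matches C(H+m−1, m−1) (function name ours; this is the standard simplex-lattice design used by decomposition-based MOEAs).

```python
def simplex_lattice_weights(H, m):
    """Generate all m-dimensional weight vectors whose components are
    multiples of 1/H and sum to 1; there are C(H+m-1, m-1) of them."""
    weights = []
    def rec(prefix, remaining, slots):
        if slots == 1:
            # last component takes whatever remains of the H units
            weights.append(tuple(v / H for v in prefix + [remaining]))
            return
        for v in range(remaining + 1):
            rec(prefix + [v], remaining - v, slots - 1)
    rec([], H, m)
    return weights
```

With H = 299 and m = 2 this yields 300 vectors, and with H = 33 and m = 3 it yields C(35, 2) = 595 vectors, matching the population sizes quoted above.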
The polynomial mutation [29] was employed in MOPSO, MOPSO/D, MOEA/D-DE, MMOPSO and CMOPSO. Its two parameters, the distribution index and the mutation probability, were set to 20 and 1/D (with D the number of decision variables), respectively.
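For reference, a standard polynomial mutation operator with these parameters can be sketched as below. This is the textbook form of the operator (names and the boundary clamping are ours), not necessarily the exact code used by the compared algorithms.

```python
import random

def polynomial_mutation(x, lower, upper, eta=20.0, pm=None):
    """Polynomial mutation with distribution index eta; each variable
    is mutated with probability pm (default 1/D)."""
    d = len(x)
    if pm is None:
        pm = 1.0 / d
    y = list(x)
    for j in range(d):
        if random.random() < pm:
            u = random.random()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
            # perturb and clamp to the variable bounds
            y[j] = min(max(y[j] + delta * (upper[j] - lower[j]),
                           lower[j]), upper[j])
    return y
```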
For DMO-QPSO, the size of was 10, and the contraction-expansion coefficient in the attraction phase varied linearly from 1.0 to 0.5. The value of the lower bound of diversity was set to be 0.05. When the diversity drops below , we set the parameter .
4.3. Performance Metrics
In our experiments, the following performance metrics were used.
The inverted generational distance (IGD) [36]: this metric estimates the distance between the elements of a set of nondominated vectors and those of the Pareto optimal set, and can be stated as

IGD(P*, A) = ( Σ_{v ∈ P*} d(v, A) ) / |P*|

where P* is a set of points evenly distributed in the objective space along the PF and A is an approximation to the PF. d(v, A) is the minimal Euclidean distance between v and the points in A, and |P*| is the size of P*. To obtain a low IGD value, A must be as close as possible to the PF of the test problem and cannot miss any part of the whole PF; hence, the metric reflects both the diversity and the convergence of the obtained solutions with respect to the real PF.
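The IGD computation is straightforward to implement; a minimal sketch (function name ours):

```python
import math

def igd(pf_ref, approx):
    """Inverted generational distance: average, over the reference
    points pf_ref on the true PF, of the minimal Euclidean distance
    to the approximation set approx."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(v, a) for a in approx) for v in pf_ref) / len(pf_ref)
```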
Coverage (C-metric) [13]: it can be stated as

C(A, B) = |{b ∈ B | ∃ a ∈ A such that a dominates b}| / |B|

where A and B are two approximations to the real PF of an MOP. C(A, B) is the proportion of the solutions in B that are dominated by at least one solution in A.
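The C-metric likewise admits a compact implementation (illustrative code; minimization is assumed):

```python
def coverage(a_set, b_set):
    """C(A, B): fraction of solutions in B dominated by at least one
    solution in A."""
    def dominates(p, q):
        return (all(x <= y for x, y in zip(p, q))
                and any(x < y for x, y in zip(p, q)))
    dominated = [b for b in b_set if any(dominates(a, b) for a in a_set)]
    return len(dominated) / len(b_set)
```

Note that C(A, B) and C(B, A) are not complementary in general, which is why both directions are usually reported.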
4.4. Results and Discussion
Table 2 and Table 3 present the average, minimum and standard deviation (SD) of the IGD values of the 30 final populations produced by MOPSO, MOPSO/D, MOEA/D-DE, MMOPSO, CMOPSO and DMO-QPSO on the different test functions. It is clear that the proposed DMO-QPSO performed better than the other five algorithms on most of the test problems: it yielded the best mean IGD values on all problems except F7, F8, UF4, UF6 and UF10. According to Table 2 and Table 3, both the mean and minimal IGD values on F7, F8, UF4 and UF10 obtained by DMO-QPSO are worse than those obtained by MOEA/D-DE. The performance of DMO-QPSO is still acceptable on UF6: its mean IGD value is slightly worse than that of MMOPSO, but DMO-QPSO achieved the lowest minimal IGD value among all of these algorithms. Besides, MOPSO/D clearly performed the worst on almost all of the test problems except F6 and UF4.
In addition, the Wilcoxon rank sum tests in Table 2 and Table 3 also indicate that DMO-QPSO outperformed the other five algorithms. MMOPSO was the second best on the F problems and the third best on the UF problems, while CMOPSO was the second worst on both the F and UF problems.
Table 4 illustrates the total ranks of these algorithms on F and UF problems and gives their final ranks in the last column of the table. As shown in this table, MOEA/D-DE is the second-best algorithm, followed by MMOPSO. In contrast, MOPSO/D is the worst algorithm, followed by CMOPSO.
The average C-metric values are shown in Table 5, Table 6, Table 7, Table 8 and Table 9, which confirm the results above. It can be seen from these tables that the final solutions obtained by DMO-QPSO are better than those obtained by MOPSO, MOPSO/D, MOEA/D-DE, MMOPSO and CMOPSO for most of the test functions.
For each algorithm (MOPSO, MOPSO/D, MOEA/D-DE, MMOPSO, CMOPSO and DMO-QPSO), the trial runs with the lowest IGD values on the 15 test functions were selected and plotted in Figure 1 and Figure 2. These figures show the evolution of the IGD values of the different algorithms versus the number of iterations on the F and UF problems; the results are consistent with those in Table 2 and Table 3. For the F problems, DMO-QPSO performed the best except on F6, F7 and F8. As can be seen in Figure 1, MOEA/D-DE was the best on problems F7 and F8, where its IGD values drop quickly at the early stage and then converge to values close to 0.08 and 0.20, respectively. MMOPSO obtained the second lowest minimal IGD value on F8, followed by DMO-QPSO, and its IGD value declines even faster than that of MOEA/D-DE during the first 200 iterations. On F6, the IGD value of DMO-QPSO fluctuates for about 400 iterations during the search process and then reaches a value (0.1634) only slightly larger than that of MMOPSO (0.1632), which may be related to the variation of the swarm diversity. MOPSO/D had the worst performance on all F problems except F6.
For the UF problems, DMO-QPSO again performed the best except on UF4 and UF10. MOEA/D-DE was the best on UF4, where its IGD value decreases rapidly to just below 0.1 while that of DMO-QPSO remains above 0.1. By contrast, MMOPSO had the worst performance on UF4, with its IGD value fluctuating significantly between 0.5 and 0.9. On UF10, MOEA/D-DE and MOPSO performed similarly: their IGD values decrease rapidly during the first 80 iterations and then gradually converge to values around 1.0. DMO-QPSO was the third best, followed by MMOPSO. MOPSO/D had the worst performance on all UF problems except UF4. Additionally, from all the plots in Figure 1 and Figure 2, we can observe that, within the same number of iterations, the IGD value of DMO-QPSO declines slowly at the early stage on several test problems compared to MOEA/D-DE but reaches a much smaller value at the later stage. With more iterations, DMO-QPSO may obtain even better results than the other compared algorithms.
In summary, DMO-QPSO has better performance on most of the test functions compared to other tested algorithms, and it is promising in solving MOPs with complicated PS shapes.
6. Conclusions
This paper has proposed a multi-objective quantum-behaved particle swarm optimization algorithm based on decomposition, named DMO-QPSO, which integrates the QPSO algorithm with the original MOEA/D framework and adopts a strategy to control the swarm diversity. Instead of using the neighborhood relationship defined in the MOEA/D framework, we employed a two-phase diversity-controlling mechanism to avoid premature convergence and help the algorithm escape sub-optimal solutions with a higher probability. In addition, we used a set of nondominated solutions to produce the global best for updating particles' positions. Comparison experiments were carried out among six algorithms, namely, MOPSO, MOPSO/D, MOEA/D-DE, MMOPSO, CMOPSO and DMO-QPSO, on 15 test functions with complicated PS shapes. The experimental results show that the proposed DMO-QPSO has an advantage over the other five algorithms on most of the test problems. It converges more slowly than MOEA/D-DE on some test problems at the early stage, but achieves a better balance between exploration and exploitation, finally obtaining better solutions to an MOP. In addition, we further investigated the impact of different decomposition approaches, i.e., the TCH and PBI approaches, as well as the polynomial mutation, on DMO-QPSO. It was shown that using PBI and the polynomial mutation can enhance the performance of DMO-QPSO, particularly on the tri-objective problems.
In the future, we will focus on studying new methods for generating a set of weight vectors that are as uniformly distributed as possible, modifying the mechanism of diversity control in the DMO-QPSO algorithm for dealing with more complicated test problems, and improving the quality of the solutions obtained. In addition, we will extend the proposed algorithm to problems having more than three objectives.