Article

Multi-Objective Quantum-Inspired Seagull Optimization Algorithm

1 College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China
2 Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences (UCAS), Shenzhen 518055, China
3 Faculty of Engineering and Technology, Future University in Egypt, New Cairo 11835, Egypt
* Authors to whom correspondence should be addressed.
Electronics 2022, 11(12), 1834; https://doi.org/10.3390/electronics11121834
Submission received: 24 May 2022 / Revised: 31 May 2022 / Accepted: 6 June 2022 / Published: 9 June 2022
(This article belongs to the Special Issue Intelligent Data Sensing, Processing, Mining, and Communication)

Abstract

The solutions of multi-objective optimization problems (MOPs) are required to balance convergence to and distribution along the Pareto front. This paper proposes a multi-objective quantum-inspired seagull optimization algorithm (MOQSOA) to optimize the convergence and distribution of solutions in multi-objective optimization problems. The proposed algorithm adopts opposition-based learning, the migration and attacking behavior of seagulls, grid ranking, and the superposition principles of quantum computing. To obtain a better initialized population in the absence of a priori knowledge, an opposition-based learning mechanism is used for initialization. The proposed algorithm uses nonlinear migration and attacking operations, simulating the behavior of seagulls for exploration and exploitation. Moreover, the real-coded quantum representation of the current optimal solution and a quantum rotation gate are adopted to update the seagull population. In addition, a grid mechanism including global grid ranking and grid density ranking provides a criterion for leader selection and archive control. The experimental results for the IGD and Spacing metrics on the ZDT, DTLZ, and UF test suites demonstrate the superiority of MOQSOA over NSGA-II, MOEA/D, MOPSO, IMMOEA, RVEA, and LMEA in enhancing the distribution and convergence performance of MOPs.

1. Introduction

Optimization is an indispensable and important area of engineering applications and scientific research. Practical problems usually require the simultaneous optimization of several objectives, and such problems can be described as multi-objective optimization problems (MOPs). Unlike single-objective optimization problems, multi-objective optimization problems aim to find the optimal vectors in the decision space, known as the Pareto optimal solutions (PS). No objective of a Pareto optimal solution can be improved without at least one other objective deteriorating. All the objective vectors of the PS form the Pareto optimal front (PF). Multi-objective optimization is one of the most difficult and popular problems in recent evolutionary computing research. Multi-objective optimization algorithms are designed to solve two key problems: (1) how to find solutions whose objective vectors lie on or near the PF; and (2) how to find solutions whose objective vectors are distributed as widely as possible along the PF.
Evolutionary computing has been widely used to solve MOPs with great success in many engineering problems. In recent years, many multi-objective optimization algorithms have been introduced by scholars. In particular, evolutionary computation and swarm intelligence algorithms have been applied to MOPs through a process known as evolutionary multi-objective optimization, such as the fast and elitist multi-objective genetic algorithm (NSGA-II) [1], the improved strength Pareto evolutionary algorithm for multi-objective optimization (SPEA2) [2], the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [3], and the non-dominated neighbor-based immune algorithm (NNIA) [4]. In addition, particle swarm optimization (PSO) is a type of swarm intelligence optimization algorithm. It has advantages such as fast convergence speed and strong robustness, and it has been successfully applied to single-objective optimization problems. Many scholars have attempted to use the PSO algorithm to solve complicated and large-scale MOPs. Since the multi-objective particle swarm optimization algorithm (MOPSO) [5] was first proposed by Coello Coello and Lechuga, many other improved and deep-learning-based MOPSO variants have been developed [6,7,8]. Cui et al. [9] proposed a multi-objective particle swarm optimization algorithm based on a two-archive mechanism (MOPSO_TA), which performs well in terms of both convergence and diversity. Abdel-Basset et al. [10] improved and extended the whale optimization algorithm (WOA) to multi-objective optimization problems (MIWOA). To solve increasingly complex multi-objective optimization problems in engineering practice, Wu et al. [11] proposed a multi-objective lion swarm optimization based on multi-agent systems (MOMALSO). Zheng et al. [12] presented a dynamic multi-objective particle swarm optimization algorithm based on adversarial decomposition and neighborhood evolution (ADNEPSO) to deal with dynamic problems. Gu et al. [13] presented a random forest-assisted adaptive multi-objective particle swarm optimization (RFMOPSO) algorithm for expensive constrained combinatorial optimization problems.
Nowadays, multi-objective optimization algorithms are common for various real-world problems. Evolutionary computation and swarm intelligence algorithms have become important optimization methods in the electronics field, for example, for the optimal integration of electrical units in distribution networks [14], energy management strategies for range-extended electric vehicles [15], RFID network planning [16], and multi-object fuse detection [17]. Multi-objective optimization algorithms also play a key role in cloud computing [18,19], supplier selection [20], and other combinatorial optimization problems [21].
Solving a multi-objective optimization problem generally leads to a set of Pareto non-dominated solutions. The optimization algorithm needs to find solutions as close as possible to the Pareto front while generating a solution set that covers the entire Pareto front as far as possible. Hence, multi-objective optimization algorithms need to balance the convergence of the algorithm with the distribution of Pareto optimal solutions. However, many multi-objective optimization algorithms are prone to local optima, leading to unbalanced convergence and distribution. In order to balance the convergence and distribution of Pareto optimal solutions, this paper proposes a multi-objective quantum-inspired seagull optimization algorithm (MOQSOA) to optimize the convergence and distribution of solutions in multi-objective optimization problems. This is particularly applicable to electronics fields such as circuit design, electronic component arrangement, and cost optimization. MOQSOA is a hybrid algorithm combining a quantum-inspired search algorithm and the seagull optimization algorithm (SOA). Quantum-inspired search algorithms adopt principles and concepts of quantum mechanics, including superposition, quantum gates, standing waves, and collapse, which makes it easier to escape local convergence during global searches. In addition, the SOA performs well in local searches, which helps maintain the balance between exploration and exploitation.
The contributions of this paper are summarized as follows. Firstly, an improved opposition-based learning strategy is used for the initialization of the seagull population to preserve distribution. Secondly, the current optimal solution is selected from the archive with global grid ranking and receives a real-coded quantum representation considered as a linear superposition of two probabilistic states, i.e., the positive and deceptive states. Thirdly, seagull individuals are updated with nonlinear migration, attacking operations, and quantum rotation gates for exploration and exploitation. In addition, the archive of non-dominated solutions is controlled with grid density ranking. The experimental results demonstrate the competitive performance of the proposed algorithm.
The remainder of the paper proceeds as follows: Section 2 surveys the theoretical background of multi-objective optimization problems, seagull optimization algorithm, and quantum computing. Section 3 details the proposed multi-objective quantum-inspired seagull optimization algorithm. Section 4 compares and discusses the performance of the proposed algorithm with several state-of-the-art metaheuristic algorithms. Finally, Section 5 draws the main conclusions and points out some possible future work.

2. Related Work

2.1. Multi-Objective Optimization Problems

Multi-objective optimization problems (MOPs), which involve more than one conflicting objective, can be described as follows [22]:
$\min F(X) = (f_1(X), f_2(X), \ldots, f_m(X))^T, \quad X = (x_1, x_2, \ldots, x_n)^T, \quad Y = F(X), \quad f_i : \Omega \subseteq \mathbb{R}^n \to \mathbb{R}\ (i = 1, 2, \ldots, m)$
where $X$ is a decision vector in the decision space $\Omega$, the objective function vector $F(X)$ includes $m$ ($m \ge 2$) objectives, $Y \in \mathbb{R}^m$ represents the objective space, and $F : \mathbb{R}^n \to \mathbb{R}^m$ is the objective mapping function.
Pareto dominance: Given two vectors $x, y \in \mathbb{R}^n$ and their corresponding objective vectors $F(x), F(y) \in \mathbb{R}^m$, $x$ dominates $y$ (denoted as $x \prec y$) if and only if $\forall i \in \{1, 2, \ldots, m\}$, $f_i(x) \le f_i(y)$ and $\exists j \in \{1, 2, \ldots, m\}$, $f_j(x) < f_j(y)$.
Pareto optimal solution: A decision vector $x \in \mathbb{R}^n$ is said to be Pareto optimal if and only if $\nexists\, y \in \mathbb{R}^n : y \prec x$.
Pareto optimal set: The set of Pareto optimal solutions (PS) is defined as $PS = \{x \in \mathbb{R}^n \mid \nexists\, y \in \mathbb{R}^n, y \prec x\}$.
Pareto optimal front: The Pareto optimal front (PF) is defined as $PF = \{F(x) \mid x \in PS\}$.
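To make these definitions concrete, the following minimal Python sketch (illustrative only; the function names are not taken from the paper) implements the dominance test and extracts the non-dominated subset of a set of objective vectors, assuming minimization of all objectives.

import numpy as np

def dominates(fx, fy):
    """Return True if objective vector fx Pareto-dominates fy (minimization)."""
    fx, fy = np.asarray(fx), np.asarray(fy)
    return bool(np.all(fx <= fy) and np.any(fx < fy))

def non_dominated(objs):
    """Return the indices of the non-dominated vectors in a list of objective vectors."""
    keep = []
    for i, fi in enumerate(objs):
        if not any(dominates(fj, fi) for j, fj in enumerate(objs) if j != i):
            keep.append(i)
    return keep

# Example: three bi-objective points; the third is dominated by the first.
objs = [(1.0, 4.0), (2.0, 3.0), (2.0, 5.0)]
print(non_dominated(objs))  # -> [0, 1]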

2.2. Seagull Optimization Algorithm

The seagull optimization algorithm (SOA) [23] is a novel swarm optimization algorithm, proposed by Dhiman and Kumar in 2019, which simulates the migration and attacking behavior of seagulls. An extension of the SOA has been developed in terms of MOPs by introducing dynamic archive, grid, and leader-selection mechanisms [24,25].
Seagulls typically live in colonies. They are able to locate and attack prey with their own knowledge. Migration and attacking actions are the most important actions of seagulls. They travel in groups during migration. Seagulls change their initial positions in order to prevent collisions. Seagulls fly in a group in the direction of the fittest seagull with the best likelihood of survival. Other seagulls update their initial positions based on the fittest seagull. Seagulls frequently attack migrating birds over the sea when they migrate from one place to another. They perform a spiral-shaped movement during the attack.
The SOA mainly uses migration and attacking operations to simulate the migration and attacking behaviors of seagulls. The migration operation simulates how the group of seagulls move from one position to another with the exploration capability of the SOA. The attacking operation simulates how groups of seagulls hunt their prey with the exploitation capability of the algorithm.

2.3. Quantum Computing

Quantum computing is the combination of quantum mechanics in physics and computer science, and is an emerging theory of computing. Quantum-inspired evolutionary computing (QIEC) is a method based on concepts and principles from quantum mechanics. Narayanan and Moore [26] firstly combined evolutionary computation (EA) and quantum-inspired computation to solve traveling salesman problems (TSPs). After that, a series of EAs inspired by quantum computation appeared, such as the quantum-inspired evolutionary algorithm (QEA) [27,28] and the quantum-inspired immune clonal algorithm (QICA) [29]. These kinds of algorithms are characterized by quantum bits and quantum gates. The quantum gates are used to change the quantum bits and generate new solutions through observing.
A quantum bit, known as qubit, is the smallest unit of information stored in quantum computing. A qubit may be in state “0” or state “1”, or in a superposition of the two states. The state representation of a qubit in the Dirac notation can be given as:
$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$
where α and β are complex numbers indicating the probability amplitudes of the respective states. $|\alpha|^2$ and $|\beta|^2$ denote the probabilities of observing the qubit in state $|0\rangle$ and state $|1\rangle$, respectively. The normalization of the states can be written as $|\alpha|^2 + |\beta|^2 = 1$.
Compared with classical bits, quantum bits can be in any superposition of two eigenstates of “0” and “1”. Moreover, the superposition amplitudes of the two states can interfere with each other during quantum operation, which is called quantum interference. The principle of quantum superposition suggests that the system in a superposition of all of its possible states is described by probability density amplitudes. Additionally, all states can be processed in parallel to optimize the objective function.
A qubit individual consisting of a string of m qubits can be described as follows:
$\begin{bmatrix} \alpha_1 & \alpha_2 & \cdots & \alpha_m \\ \beta_1 & \beta_2 & \cdots & \beta_m \end{bmatrix}$
where $|\alpha_i|^2 + |\beta_i|^2 = 1$, $i = 1, 2, \ldots, m$. Therefore, a quantum individual of length m is capable of representing $2^m$ states simultaneously based on probability. Because a quantum individual can represent the superposition of several quantum bit states, a small population of quantum individuals can correspond to a large population of individuals under a conventional representation.
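As an illustration of this probabilistic representation (not part of any cited implementation; all names and values below are assumptions), the following sketch collapses a binary-coded qubit individual by sampling each bit as "1" with probability $|\beta_i|^2$.

import numpy as np

def observe_qubit_individual(alpha, beta, rng=np.random.default_rng()):
    """Collapse a string of qubits (|alpha_i|^2 + |beta_i|^2 = 1) into a binary string.

    Each bit becomes 1 with probability |beta_i|^2, i.e., the amplitude of state |1>.
    """
    alpha, beta = np.asarray(alpha), np.asarray(beta)
    assert np.allclose(alpha**2 + beta**2, 1.0), "amplitudes must be normalized"
    return (rng.random(beta.shape) < beta**2).astype(int)

# A 4-qubit individual in an equal superposition represents all 16 binary strings.
alpha = np.full(4, np.sqrt(0.5))
beta = np.full(4, np.sqrt(0.5))
print(observe_qubit_individual(alpha, beta))  # e.g., [0 1 1 0]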
Layeb [30] presented a hybrid natural algorithm called the quantum-inspired harmony search algorithm (QIHSA), based on the harmony search algorithm (HSA) and quantum computing (QC). Another kind of quantum-inspired computation, the quantum-behaved particle swarm optimization algorithm (QPSO), was proposed by Sun et al. [31,32,33,34], inspired by the behavior of particles in a potential field. Particles are bounded by an attractor and may appear anywhere in the space with different probability densities. By setting a potential well and solving the Schrödinger equation, a new type of search space is built. Based on this idea, Li et al. [35] proposed an improved cooperative quantum-behaved particle swarm optimization method for real-parameter optimization and obtained good performance. QPSO improves on the particle swarm algorithm by applying the principles of quantum mechanics, and it has better convergence properties than the ordinary particle swarm optimization algorithm [36].
In recent years, the combination of quantum computing and multi-objective problems has been studied and a number of new quantum multi-objective optimization algorithms have been proposed. Guo et al. [37] proposed a novel quantum-behaved particle swarm optimization algorithm with a flexible single-/multi-population strategy and a multi-stage perturbation strategy. At the first stage, the main target of the perturbation is to broaden the search range. The second stage applies the univariate perturbation to raise the local search accuracy. You et al. [38] presented a novel algorithm called DMO-QPSO, combining the quantum-behaved particle swarm optimization (QPSO) algorithm with the MOEA/D framework in order to make the QPSO able to solve MOPs effectively. Fan et al. [39] established a bi-level optimization model based on the quantum evolutionary algorithm and multi-objective programming to solve the problem of regional integrated energy systems. Hesar et al. [40] proposed a quantum-inspired multi-objective harmony search algorithm to solve multi-objective optimization problems. In this algorithm, a new quantum mutation strategy is proposed, which is a combination of harmony improvisation operators and a quantum adaptive rotation gate. Dayana et al. [41] presented a Quantum Firefly Optimization-based Multi-Objective Secure Routing (QFO-MOSR) protocol for Fog-based WSN.
The QPSO algorithm has been applied to many real-life multi-objective problems through its numerous variants. These methods have been successfully used to solve combinatorial optimization problems such as scheduling problems [42,43], load forecasting [44], routing optimization [45], disease diagnosis [46], and optimal design [47]. In addition, algorithms based on QPSO play important roles in other multi-objective problems, including multi-carrier communication [48] and system control [49].
When solving a multi-objective optimization problem, it is expected that the obtained solutions can fully reflect the entire Pareto front. However, it frequently occurs that solutions are only concentrated near a part of the Pareto front when solving practical problems. The key to resolving the issue is to deal with the balance of the convergence and distribution of Pareto optimal solutions. To address the above issue, this paper suggests a hybrid algorithm called the multi-objective quantum-inspired seagull optimization algorithm, termed MOQSOA, for multi-objective problems. In the MOQSOA, opposition-based learning is applied for initialization to preserve distribution. The current optimal solution is selected with global grid ranking, and receives a real-coded quantum representation considered as a linear superposition of positive and deceptive states. Additionally, individuals are updated with nonlinear migration, attacking operations, and quantum rotation gates.

3. The Multi-Objective Quantum-Inspired Seagull Optimization Algorithm

The main concept of the multi-objective seagull optimization algorithm (MOSOA) is based on the natural behavior of seagull populations. Four components have been used to develop the extension of the SOA in terms of MOPs: an archive controller, a grid mechanism, a leader-selection mechanism, and an evolutionary operator.
The evolutionary strategy explored in the seagull optimization algorithm is similar to that of most swarm intelligence algorithms. The deceptive nature of local optimal solutions, the loss of diversity, and the weak causality present in the algorithm may cause it to fall into premature convergence. In this paper, a multi-objective quantum-inspired seagull optimization algorithm is presented for MOPs. The proposed algorithm combines opposition-based learning, the migration and attacking behavior of seagulls, grid ranking, and the superposition principles of quantum computing. The OBL mechanism is used to initialize the seagull population and thus obtain a better initialized population in the absence of a priori knowledge. To maintain a better balance between exploitation and exploration when searching for global optimal solutions, the real-coded quantum representation of the current optimal solution and a quantum rotation gate are adopted. Moreover, the algorithm contains the nonlinear migration and attacking operations of the SOA for exploration and exploitation. In addition, a grid mechanism with global grid ranking (GGR) and grid density ranking (GDR) provides a criterion for leader selection and archive control. The framework of the MOQSOA is shown in Figure 1, and its procedure and main steps are summarized in Algorithm 1.

3.1. Initialization with Opposition-Based Learning

In the absence of a priori knowledge, random initialization in traditional evolutionary algorithms reduces the probability of sampling better regions in population-based algorithms. In contrast, opposition-based learning (OBL) [50] can obtain more suitable initial candidate regions without a priori knowledge, thus increasing the probability of detecting better regions and offering the potential to improve fitness.
The opposite point of OBL can be defined as follows:
Algorithm 1 Multi-Objective Quantum-Inspired Seagull Optimization Algorithm (MOQSOA)
Input: Seagull population P
Output: Archive non-dominated optimal solutions.
Initialize P with opposition-based learning.
Calculate the corresponding objective values for each search agent.
Find all the non-dominated solutions and initialize these solutions to the archive of non-dominated optimal solutions.
Select the current optimal solution with global grid ranking and grid density ranking methods.
Encode the current optimal solution by real-coded quantum representation.
while (t < Maxiter) do
  for each search agent do
   Update the position of current search agent by nonlinear seagull migration and attacking operations.
  end for
  Apply mutation and crossover operators on these updated search agents.
  Calculate the objective values for all search agents.
  Find the non-dominated solutions from the updated search agents.
  Update the obtained non-dominated solutions to the archive.
  if archive is full then
   Remove one of the most crowded solutions in the archive with the grid density ranking method.
   Add the new solution to the archive.
  end if
  Adjust search agent if any one goes beyond the search space.
  Calculate the objective values for each non-dominated solution in the archive.
  Select the current optimal solution with global grid ranking and grid density ranking methods.
  Conduct quantum update operation depending on whether the current optimal solution has changed or not.
  t ← t + 1
end while
return archive of non-dominated optimal solutions
end MOQSOA
Let $P(x_1, x_2, \ldots, x_D)$ be a point in the D-dimensional space, where $x_1, x_2, \ldots, x_D$ are real numbers and $x_i \in [a_i, b_i]$, $i = 1, 2, \ldots, D$. Then, its opposite point $P'(x_1', x_2', \ldots, x_D')$ is defined by
$x_i' = a_i + b_i - x_i$
MOQSOA firstly divides the population into two parts. Then, one part is generated by random initialization and the other part is generated by OBL. Later, the dominated solutions in the two parts are deleted, and the deleted individuals are replaced by random strategies. This specific process can be shown in Figure 2, and the main steps are shown as follows:
Step 1: The initial population P N is divided into two parts, named as P 1 and P 2 . The individuals in the half population P 1 are randomly generated;
Step 2: The opposite points of individuals in P 1 are generated based on OBL and are added to P 2 ;
Step 3: After completing the construction of P 2 , P 1 and P 2 are combined;
Step 4: The dominated solutions in $P_1 \cup P_2$ are deleted;
Step 5: The deleted individuals are replaced by random strategies to generate the final initial population P * .
The improved OBL strategy in the proposed algorithm is more suitable for multi-objective optimization problems. On the one hand, the improved OBL strategy can obtain a better initialized population because the dominated solutions between the original individuals and the opposite individuals have been removed. On the other hand, the distribution of the population can be guaranteed, because the original solution and its opposite solution are completely symmetric in the decision space. There must be a solution closer to the optimal solution between the original solution and its opposite solution, so the OBL strategy can improve the distribution of the initial population without a priori knowledge.
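A minimal Python sketch of this initialization strategy is given below. It is an illustration only: the helper names, the toy bi-objective function, and the use of simple pairwise dominance checks are assumptions rather than the authors' implementation.

import numpy as np

def obl_initialize(pop_size, lower, upper, evaluate, rng=np.random.default_rng()):
    """Opposition-based initialization, a minimal sketch of the strategy in Section 3.1.

    `evaluate` maps a decision vector to an objective vector (minimization assumed).
    """
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim, half = lower.size, pop_size // 2

    # Steps 1-2: random half P1 and its opposite half P2 (x' = a + b - x).
    p1 = lower + rng.random((half, dim)) * (upper - lower)
    p2 = lower + upper - p1

    # Steps 3-4: merge the two halves and discard dominated individuals.
    merged = np.vstack([p1, p2])
    objs = [np.asarray(evaluate(x)) for x in merged]
    nd = [i for i, fi in enumerate(objs)
          if not any(np.all(fj <= fi) and np.any(fj < fi)
                     for j, fj in enumerate(objs) if j != i)]
    survivors = merged[nd]

    # Step 5: refill with random individuals up to the target population size.
    n_missing = pop_size - len(survivors)
    filler = lower + rng.random((max(n_missing, 0), dim)) * (upper - lower)
    return np.vstack([survivors, filler]) if n_missing > 0 else survivors[:pop_size]

# Toy bi-objective problem (illustrative): f1 = sum(x^2), f2 = sum((x - 1)^2).
f = lambda x: (float(np.sum(x**2)), float(np.sum((x - 1.0)**2)))
print(obl_initialize(10, [0, 0], [1, 1], f).shape)  # (10, 2)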

3.2. Selection of Current Optimal Solution

The grid mechanism is a very efficient mechanism for characterizing and maintaining convergence and distribution. In addition, the grid mechanism can be utilized to show not only the superiority and inferiority between solutions, but also the differences in the objective values between optimal solutions and other solutions. In this paper, the global grid ranking (GGR) [22] is utilized to enhance the convergence of the algorithm, and the grid density ranking (GDR) is used to improve the distribution of the solutions.
The GGR represents the sum of the number of individual grid coordinates that are superior to other individuals in each objective. GGR is denoted as:
$\mathrm{GGR}(x_i) = \sum_{d=1}^{N_m} \mathrm{count}\big(G_d(x_i) < G_d(x_j)\big)$
where $x_i$ and $x_j$ are two candidate solutions in the population with $i \neq j$; $G_d(x_i)$ is the grid coordinate of $x_i$ in the dth objective; $N_m$ denotes the number of problem objectives; and $\mathrm{count}(\cdot)$ counts the number of cases satisfying the condition in parentheses. The larger the value of GGR($x_i$), the more individuals are dominated by $x_i$ in the sub-objectives.
The GDR is mainly used to view the crowding level around a candidate solution. A large GDR value indicates that the candidate solution is densely distributed with other solutions. GDR is generated by:
$\mathrm{GDR}(x_i) = \mathrm{count}\left(\sum_{d=1}^{N_m} \big|G_d(x_i) - G_d(x_j)\big| < M\right)$
where $x_i$ and $x_j$ are two candidate solutions in the population with $i \neq j$; $G_d(x_i)$ is the grid coordinate of $x_i$ in the dth objective; $N_m$ denotes the number of problem objectives; $M$ is the number of current objectives; and $\mathrm{count}(\cdot)$ counts the number of cases satisfying the condition in parentheses.
In a MOP, comparing new solutions with the existing solutions in a given search space is a key problem. The MOQSOA uses grid ranking to help compare the merits of solutions and to select the best candidate solution. For candidate solutions in the archive, the algorithm prioritizes the candidate solution with a larger GGR value (i.e., one that dominates more solutions in the sub-objectives). If the GGR values of several solutions are the same, the solution with a smaller GDR value (i.e., a lower crowding level) is preferred as the current optimal solution to guide the position updating of other individuals. If there are multiple solutions with the largest GGR value and the same GDR value, a roulette wheel is used to select one of them as the current optimal solution.
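The following sketch illustrates one possible realization of the grid mechanism and leader selection described above. The grid construction (uniform division of the current objective ranges) and all function names are illustrative assumptions, not the authors' code.

import numpy as np

def grid_coordinates(objs, divisions=10):
    """Map objective vectors to integer grid coordinates (illustrative helper)."""
    objs = np.asarray(objs, float)
    lo, hi = objs.min(axis=0), objs.max(axis=0)
    width = np.where(hi > lo, hi - lo, 1.0)
    return np.minimum((divisions * (objs - lo) / width).astype(int), divisions - 1)

def ggr(grid):
    """Global grid ranking: for each solution, count coordinate-wise wins over the others."""
    n = len(grid)
    return np.array([sum(np.sum(grid[i] < grid[j]) for j in range(n) if j != i)
                     for i in range(n)])

def gdr(grid, m):
    """Grid density ranking: neighbours whose total grid distance is below m objectives."""
    n = len(grid)
    return np.array([sum(np.sum(np.abs(grid[i] - grid[j])) < m
                         for j in range(n) if j != i)
                     for i in range(n)])

# Leader selection sketch: prefer the largest GGR, break ties with the smallest GDR.
objs = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1), (0.48, 0.52)]
g = grid_coordinates(objs)
rank_g, rank_d = ggr(g), gdr(g, m=2)
best = max(range(len(objs)), key=lambda i: (rank_g[i], -rank_d[i]))
print(rank_g, rank_d, best)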

3.3. Real-Coded Quantum Representation of Current Optimal Solution

The deceptiveness of local optimal solutions is one of the main factors leading to premature convergence. The evolutionary strategy of the optimization algorithm obtains gradient-like information from the direction of convergence, and the reliability of this information directly determines the effect of global convergence. A positive global optimal solution can accelerate the search process, while a deceptive local optimal solution prevents the exploration of the global optimum.
In this paper, the current optimal solution is considered as a linear superposition of two probabilistic states, i.e., the positive and deceptive states inspired by QEA. In the evolutionary process, every seagull individual makes its own judgment on the question of whether to accept the current optimal solution as the global optimum. If the ith seagull believes that the current optimal solution is a positive global optimal solution, then it will take the current optimal solution as the direction of convergence. Otherwise, it rejects and randomly chooses another individual as the direction of convergence.
At the beginning, it is assumed that the probabilities that the current optimal solution is positive or deceptive are equal. After several iterations, the positive probability of the current optimal solution is enhanced if no changes have occurred, whereas if the current optimal solution is updated, the probability needs to be reset. The MOQSOA consists of two quantum operations, namely, the real-coded quantum representation of the current optimal solution and the quantum rotation gate for updating the probability amplitudes of the two states.
The real-coded quantum representation [51] of an individual has been developed through the study of QEA [27]. In a binary-coded QEA, the qubit is used to represent a linear superposition of state “0” and state “1” in a probabilistic manner. Similarly, a real continuous number is assumed to be in a deterministic state or a random state. In this paper, qubits are used to represent the global optimal solution and wave functions to calculate specific values.
A qubit can be represented by the state "0" (denoted as $|0\rangle$), the state "1" (denoted as $|1\rangle$), or a linear superposition of both. The state of a qubit can be given by:
$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$
where α and β represent the probability amplitudes of the two states, respectively, and satisfy $|\alpha|^2 + |\beta|^2 = 1$. $|\alpha|^2$ is the probability that the qubit is observed in state $|0\rangle$, and $|\beta|^2$ is the probability that it is observed in state $|1\rangle$.
In quantum theory, a quantum state can be completely described by a wave function $w(x, t)$, which is a composite function of coordinates and time. $|w(x, t)|^2$ is called the probability density, which gives the probability of the quantum state occurring at the corresponding location and time. Therefore, a normal wave function is introduced to calculate the observed values of the real-coded quantum representation as:
$|w(x_i)|^2 = \dfrac{1}{\sqrt{2\pi}\,\sigma_i} \exp\left(-\dfrac{(x_i - \mu_i)^2}{2\sigma_i^2}\right), \quad i = 1, 2, \ldots, n$
where μi is the expectation and σi is the standard deviation. Here, Equation (8) is used to generate the position of a quantum individual after quantum observation, μi is the mean position of the individual, and σi expresses the distribution range of the probability cloud around the mean position.
In this paper, the probability amplitude that the current optimal solution $P_{gb}^t$ is positive is defined as α, while the probability amplitude that it is deceptive is defined as β. Since the two probability amplitudes satisfy $\alpha^2 + \beta^2 = 1$, the real-coded quantum representation of the current optimal solution can be expressed as:
$P_{gb}^t = \begin{bmatrix} x_{gb,1} & x_{gb,2} & \cdots & x_{gb,n} \\ \alpha_1 & \alpha_2 & \cdots & \alpha_n \end{bmatrix}$
where n is the problem dimension, αi is the probability amplitude by which the optimal solution component is considered positive, and t is the current iteration.

3.4. Nonlinear Seagull Migration Operation

During migration, seagulls will move from their initial positions to the next positions within the group. The migration operation simulates this position movement process of the seagull population during exploration. The main concept of the MOQSOA is based on the SOA migration and attacking behaviors. Therefore, in the exploration phase, the seagull position movement process satisfies the following three steps: avoiding collisions, approaching the optimal neighbor’s direction, and moving to the optimal search agent.
In order to avoid collisions with surrounding seagulls, an additional variable A is employed to adjust the seagull’s position:
$C_s = A \times P_s(t)$
where Cs represents the direction of the search agent for no collisions with other search agents, Ps represents the current location of the search agent, t is the current iteration, and A indicates the migration behavior of the seagull in the search space. In the basic seagull optimization algorithm, the size of A is linearly decreased from parameter fc to 0 in the iteration:
$A = f_c - f_c \cdot \dfrac{t}{t_{\max}}$
where the value of fc is set to 2 in the basic seagull optimization algorithm [23].
However, in the actual optimization process, the search process shows a nonlinear downward trend. If the control variable A simulates the migration process of the seagull population in a purely linear decreasing manner, the actual search ability of the algorithm is affected. Therefore, this paper adopts a nonlinearly varying control variable A, which is more appropriate to the migration process of the actual seagull population:
$A = f_c \cdot (2\omega - 1)$
$\omega = e^{\frac{1}{t_{\max}}(t_{\max} - t)}$
where the value of fc is set to 2, t is the current iteration, and tmax is the maximum number of iterations.
The nonlinear variable A can accelerate the convergence ability of the algorithm by rapidly decreasing in the early stage, and can improve the search accuracy of the algorithm by slowly decreasing in the later stage.
After ensuring that no collisions occur between seagulls, the seagull agents approach the best seagull. Here, each seagull individual makes an independent judgment on whether to recognize the current optimal solution as the global optimal solution (positive) or not (deceptive). If the seagull believes that the current optimal solution is a positive global optimal solution, it will take the position of the current optimal solution as the direction of convergence. Otherwise, it randomly chooses the direction of convergence.
Since a real-coded quantum representation is used to express the current optimal solution, the convergence direction of each agent is generated by:
$M_s = B \times \big(\hat{P}_{gb}(t) - P_s(t)\big)$
where Ms denotes the convergence direction of individuals toward the best seagull, and B is varied as:
$B = 2 \times A^2 \times \mathrm{rand}(0, 1)$
$\hat{P}_{gb} = (\hat{x}_{gb,1}, \hat{x}_{gb,2}, \ldots, \hat{x}_{gb,n})$ is the observed position of the current optimal solution. It is calculated as:
$\hat{x}_{gb,i} = \mathrm{rn}\!\left(x_{gb,i},\ \sigma_i^2(\psi_i)\,(x_{i,\max} - x_{i,\min})\right)$
where $\mathrm{rn}\!\left(x_{gb,i}, \sigma_i^2(\psi_i)\right)$ denotes a random number generated according to the wave function of Equation (8), $x_{gb,i}$ is the expectation, and $\sigma_i^2(\psi_i)$ is defined as:
$\sigma_i^2(\psi_i) = \begin{cases} 1 - \alpha_i^2, & \text{if } \psi_i = 0 \\ \alpha_i^2, & \text{if } \psi_i = 1 \end{cases}$
where $\alpha_i$ is the probability amplitude by which the optimal solution component is considered to be positive. The observation of $\psi_i$ adheres to the following stochastic process:
$\psi_i = \begin{cases} 0, & \text{if } r_u \le \alpha_i^2 \\ 1, & \text{if } r_u > \alpha_i^2 \end{cases}$
where ru is a uniform random variable between 0 and 1.
The schematic diagram of a seagull agent approaching the current optimal individual is shown in Figure 3. If the seagull individual recognizes the current optimal solution as the global optimal solution (positive identification), the observed position generated according to the wave function of Equation (8) will be in the vicinity of the current optimal individual, while if the seagull individual doubts the current optimal solution (deceptive identification), the individual randomly chooses its own search direction.
Finally, after calculating the direction of convergence for each agent, the seagulls update the position toward this direction:
$D_s = C_s + M_s$
where Ds is the direction of migration for the seagulls composited by the direction for no collision (Cs) and the direction of the movement toward the best seagull (Ms).
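The migration step described in this subsection can be summarized by the following illustrative Python sketch. The exponential schedule for A and the range scaling of the observation spread follow the equations as reconstructed above and should be read as one plausible interpretation, not the authors' exact implementation; all names are assumptions.

import numpy as np

def migration_step(pos, gb_pos, alpha, t, t_max, bounds, fc=2.0,
                   rng=np.random.default_rng()):
    """One seagull migration step (Section 3.4), written as an illustrative sketch.

    `pos` is the agent position; `gb_pos` and `alpha` form the real-coded quantum
    representation of the current optimal solution.
    """
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)

    # Nonlinear control variable A: fast decrease early, slow decrease later.
    omega = np.exp((t_max - t) / t_max)
    A = fc * (2.0 * omega - 1.0)
    B = 2.0 * A**2 * rng.random()

    Cs = A * pos  # collision-avoidance direction

    # Quantum observation of the current optimal solution: each component is trusted
    # ("positive") with probability alpha_i^2 and doubted ("deceptive") otherwise.
    positive = rng.random(alpha.shape) <= alpha**2
    sigma2 = np.where(positive, 1.0 - alpha**2, alpha**2)
    gb_obs = rng.normal(gb_pos, np.sqrt(sigma2 * (hi - lo)))  # spread scaled by the range

    Ms = B * (gb_obs - pos)     # direction toward the observed best seagull
    return Cs + Ms, gb_obs      # composite migration direction D_s

# Tiny usage example with made-up values.
Ds, gb_obs = migration_step(np.array([0.2, 0.8]), np.array([0.5, 0.5]),
                            np.array([0.9, 0.9]), t=10, t_max=100,
                            bounds=([0.0, 0.0], [1.0, 1.0]))
print(Ds, gb_obs)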

3.5. Seagull Attacking Operation

Seagulls frequently attack other birds over the sea when migrating. They can maintain altitude during migration and constantly change their angle of attack and speed in flight. When it is necessary to launch an attack, the seagulls descend in a spiral through a three-dimensional space and move through the air by constantly changing their angle and radius. The attacking operation simulates the attacking process of the seagull population for exploitation.
The motion of the seagulls in the three-dimensional space is described as follows:
$x = r \times \cos(k), \quad y = r \times \sin(k), \quad z = r \times k, \quad r = u \times e^{kv}$
where k is a random number in the range [0, 2π], and r is the spiral radius controlled by u and v, which are usually taken as 1.
Combining the migration and attacking operations of the seagull, the overall seagull position is updated by:
$P_s(t) = D_s \times x \times y \times z + \hat{P}_{gb}(t)$
To obtain a better exploration and exploitation capability, the mutation and crossover operators, which are the same as those in NSGA-II [1], are employed in the MOQSOA.
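A short sketch of the attacking operation (the spiral motion and position update given above) follows; the random-number handling and parameter choices are illustrative assumptions.

import numpy as np

def attack_update(Ds, gb_obs, rng=np.random.default_rng()):
    """Spiral attack move (Section 3.5), an illustrative sketch.

    Ds is the migration direction from Section 3.4 and gb_obs the observed position of
    the current optimal solution; u and v control the spiral radius (both 1 here).
    """
    u, v = 1.0, 1.0
    k = rng.uniform(0.0, 2.0 * np.pi)   # random angle in [0, 2*pi]
    r = u * np.exp(k * v)               # spiral radius
    x, y, z = r * np.cos(k), r * np.sin(k), r * k
    return Ds * x * y * z + gb_obs      # new agent position

print(attack_update(np.array([0.1, -0.2]), np.array([0.5, 0.5])))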

3.6. Quantum Update Operation

The main update strategy in the QEA is the quantum rotation gate (QRG) [13]. The QRG is adopted as a quantum operator to update the probability amplitudes toward the direction with better fitness. The probability amplitudes updated from QRG are calculated by:
$\alpha_i(t+1) = \begin{bmatrix} \cos(\Delta\theta) & \sin(\Delta\theta) \end{bmatrix} \begin{bmatrix} \alpha_i(t) \\ \sqrt{1 - [\alpha_i(t)]^2} \end{bmatrix}$
where Δ θ is the rotation angle, which is equivalent to the step size that determines the rate of convergence toward the current best solution.
Unlike the traditional update strategy of the QEA, the QRG in this paper is an operator that enhances the probability amplitude of the positivity of the current optimal solution. For a given current optimal solution $P_{gb}^t$, the probability amplitudes of positivity and deceptiveness will be initialized to $\alpha_i = \beta_i = \sqrt{2}/2$, $i = 1, 2, \ldots, n$. After an iteration, if the current optimal solution remains optimal, the probability amplitude of positivity $\alpha_i$ will be increased by the QRG, which means that the current optimal solution $P_{gb}^t$ will be more likely to be considered the global optimal solution. Otherwise, the probability amplitudes will be re-initialized to remain vigilant to the deceptiveness of local optimal solutions.
To prevent the premature convergence of the quantum bits from falling into 0 or 1 (cannot escape the state by itself), a constant ε close to 0 is applied for correction in this paper. The specific correction can be given by:
$\alpha_i(t+1) = \begin{cases} \varepsilon, & \text{if } \alpha_i(t+1) < \varepsilon \\ \alpha_i(t+1), & \text{if } \varepsilon \le \alpha_i(t+1) \le 1 - \varepsilon \\ 1 - \varepsilon, & \text{if } \alpha_i(t+1) > 1 - \varepsilon \end{cases}$
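The quantum update operation can be sketched as follows. The rotation direction, the step size Δθ, and the reset rule are illustrative choices consistent with the description above (the amplitude grows toward 1 while the optimum is unchanged), not the authors' exact tuning.

import numpy as np

def rotate_alpha(alpha, delta_theta=0.01 * np.pi, eps=1e-3, improved=True):
    """Quantum rotation gate update of the positivity amplitudes (Section 3.6).

    If the current optimal solution survived the iteration, rotate each alpha_i toward 1;
    otherwise reset all amplitudes to sqrt(2)/2.
    """
    alpha = np.asarray(alpha, float)
    if not improved:
        return np.full_like(alpha, np.sqrt(2.0) / 2.0)
    beta = np.sqrt(1.0 - alpha**2)
    new_alpha = np.cos(delta_theta) * alpha + np.sin(delta_theta) * beta
    # Keep amplitudes away from 0 and 1 so the qubit can still change state.
    return np.clip(new_alpha, eps, 1.0 - eps)

alpha = np.full(3, np.sqrt(2.0) / 2.0)
for _ in range(5):
    alpha = rotate_alpha(alpha)
print(alpha)  # amplitudes drift toward 1 while the optimum stays unchanged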

3.7. Archive Controller

All obtained Pareto optimal solutions are saved in a storage space called the archive. The archive controller decides whether or not to include a particular solution in the archive. The algorithm compares the objective values of the new solution with those of the individuals in the archive. The archive is updated with the following rules (an illustrative sketch of this update logic is given after the list).
- If the archive is empty, the current solution will be accepted;
- If the new solution is dominated by an individual in the archive, this solution should be discarded;
- If solutions in the archive are dominated by the new solution, they are discarded from the archive, and the new solution will be accepted;
- If the new solution is not dominated by the solutions in the archive, the solution should be accepted and stored within the archive. If the archive is full, the solution with the largest GDR value is removed first and the new solution goes into the archive for storage.
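The sketch below applies these four rules to a single incoming solution. The crude density measure used in the usage example stands in for the GDR of Section 3.2; all names are illustrative assumptions.

import numpy as np

def dominates(fx, fy):
    fx, fy = np.asarray(fx), np.asarray(fy)
    return bool(np.all(fx <= fy) and np.any(fx < fy))

def update_archive(archive, new_obj, capacity, gdr_of):
    """Apply the archive rules of Section 3.7 (illustrative sketch).

    `archive` is a list of objective vectors and `gdr_of(archive)` returns a density
    rank for every member (a stand-in for the grid density ranking).
    """
    if any(dominates(a, new_obj) for a in archive):
        return archive                                              # rule 2: discard newcomer
    archive = [a for a in archive if not dominates(new_obj, a)]     # rule 3: prune dominated
    if len(archive) >= capacity:                                    # rule 4: archive is full
        densities = gdr_of(archive)
        archive.pop(int(np.argmax(densities)))                      # drop the most crowded member
    archive.append(new_obj)                                         # rules 1 and 4: accept newcomer
    return archive

# Toy usage with a crude density measure (number of close neighbours in objective space).
crowd = lambda arch: [sum(np.linalg.norm(np.subtract(a, b)) < 0.2 for b in arch if b is not a)
                      for a in arch]
arch = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]
print(update_archive(arch, (0.45, 0.45), capacity=3, gdr_of=crowd))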

3.8. Algorithm Complexity

The MOQSOA employs strategies such as the seagull operators and the real-coded quantum representation for finding the optimal solutions, and the computational complexity of the algorithm mainly comes from the archive controller of the non-dominated solutions. During the iteration, the complexity of the comparison between the non-dominated solutions in the archive is O(mN²), and the complexity of the grid ranking mechanism is O(mN²), where m is the number of objectives and N is the population size. Therefore, the complexity of the MOQSOA is O(mN²), which is equivalent to that of NSGA-II, MOPSO, SPEA2, and other multi-objective algorithms.

4. Experimental Results and Discussion

In this section, the performance metrics and benchmark test functions sets used in the experiments are described. Then, the proposed MOQSOA is compared with three well-known and three state-of-the-art MOEAs named NSGA-II [1], MOEA/D [3], MOPSO [5], IMMOEA [52], RVEA [53], and LMEA [54] in order to evaluate the performance.

4.1. Experimental Setting

To evaluate the performance of the proposed algorithm, the IGD [55] and Spacing [55] metrics were selected for the quantitative assessment of the optimization algorithms. The IGD metric measures the average distance from the points on the true Pareto front to the nearest solution in the approximate front obtained by the algorithm, in order to assess the convergence and distribution of the solutions. The smaller the IGD value is, the better the convergence and distribution of the solutions obtained by the algorithm are. The Spacing metric measures the range variance of neighboring solutions in the obtained non-dominated set by comparison with the solutions converged to the true Pareto front. The smaller the SP value is, the better the distribution of the solutions obtained by the algorithm is.
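For reference, the sketch below implements common textbook formulations of the IGD and Spacing metrics. The paper follows the definitions in [55], so this should be treated as an approximation rather than the exact evaluation code; the sampled front is illustrative.

import numpy as np

def igd(pareto_front, approx_front):
    """Inverted generational distance: mean distance from each reference point on the
    true Pareto front to its nearest obtained solution (smaller is better)."""
    P, A = np.asarray(pareto_front, float), np.asarray(approx_front, float)
    d = np.linalg.norm(P[:, None, :] - A[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def spacing(approx_front):
    """Spacing metric (Schott-style): deviation of nearest-neighbour Manhattan distances
    within the obtained non-dominated set (smaller means a more even spread)."""
    A = np.asarray(approx_front, float)
    d = np.abs(A[:, None, :] - A[None, :, :]).sum(axis=2)
    np.fill_diagonal(d, np.inf)
    nearest = d.min(axis=1)
    return float(np.sqrt(np.sum((nearest - nearest.mean()) ** 2) / (len(nearest) - 1)))

# Toy check on a sampled ZDT1-like front f2 = 1 - sqrt(f1).
f1 = np.linspace(0, 1, 100)
pf = np.column_stack([f1, 1 - np.sqrt(f1)])
approx = pf[::10]   # a sparse approximation of the same front
print(round(igd(pf, approx), 4), round(spacing(approx), 4))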
To evaluate the efficiency of the proposed MOQSOA, the proposed algorithm was validated with standard benchmark test problems including ZDT [56], DTLZ [57], and UF [58]. The characteristics of these test problems are shown in Table 1.
In the experiment, the size of the population and archive were set to 100. The maximum number of iterations in all cases was set to 1000. The parameters of the algorithms used in the experiments are presented in Table 2.
Thirty independent runs were executed for each test problem to avoid randomness. Moreover, the Wilcoxon signed-rank test [60] was adopted to compare the results obtained by the MOQSOA and the six compared algorithms in Table 3 and Table 4. The test used a significance level α = 0.05, and “+”, “−”, and “=” indicate that the algorithm is superior, inferior, or equal to the MOQSOA, respectively.

4.2. Evaluation Performance

The IGD metric results of the benchmark test functions for the MOQSOA, three well-known classical algorithms (NSGA-II, MOEA/D, and MOPSO), and three state-of-the-art algorithms (IMMOEA, RVEA, and LMEA) are presented in Table 3, including mean values and standard deviations. The best values of the IGD metric are in bold. The Pareto fronts obtained by each algorithm for ZDT3, ZDT4, DTLZ2, and DTLZ5 are shown in Figure 4, Figure 5, Figure 6 and Figure 7.
From the statistical results of the IGD metrics in Table 3, it can be seen that the MOQSOA performed well on problems ZDT1, ZDT2, ZDT4, ZDT6, DTLZ1, DTLZ8, and DTLZ9, and achieved the best values for these test problems. On problems DTLZ2, DTLZ3, DTLZ5, and DTLZ6, although the best values of the indicators were obtained by the LMEA algorithm, the performance of MOQSOA was not significantly different from LMEA according to the results of the Wilcoxon signed-rank test, and was significantly better than the results obtained by the other algorithms on these problems.
LMEA performed better on problems ZDT3 and UF4-UF10, but MOQSOA also showed a good performance and the results obtained rank in the top three when comparing all algorithms. For the IGD metric, the MOQSOA obtained a mediocre performance only on problems DTLZ4 and UF1-UF3. The Pareto fronts of each algorithm in Figure 4, Figure 5, Figure 6 and Figure 7 also showed the excellent performance of MOQSOA.
Comparing the performance of the IGD metrics for the two-objective and three-objective test problems, it can be seen that the MOQSOA outperformed NSGA-II, MOEA/D, MOPSO, IMMOEA, and RVEA on the two-objective test problem, and was basically equal to the LMEA algorithm. Additionally, for the three-objective test problem, the advantage over NSGA-II, MOEA/D, MOPSO, IMMOEA, and RVEA was obvious, but the algorithm was still slightly inferior to LMEA.
The Spacing metric results obtained for each algorithm on the benchmark test functions are presented in Table 4, where mean values and standard deviations of the results have been tabulated. Additionally, the best values of the Spacing metric for each test problem are shown in bold.
Compared to the other algorithms in Table 4, the MOQSOA exhibited a better performance with the Spacing metric. Specifically, for problems ZDT2, ZDT6, DTLZ4, DTLZ9, and UF6, the MOQSOA achieved the best values. For problems ZDT1, DTLZ8, UF1, and UF4, although the best values of the indicators were obtained by NSGA-II, MOEA/D, and RVEA, the results obtained by the MOQSOA did not differ significantly from the optimal values according to the Wilcoxon signed-rank test, which reflects the advantage of the proposed algorithm in the distribution of the solutions. In addition, for problems DTLZ5-DTLZ7, UF3, UF5, and UF7-UF10, the LMEA and MOEA/D algorithms performed better, but the MOQSOA also showed a good performance and the obtained results ranked in the top three when comparing all algorithms. For the Spacing metric, the MOQSOA only underperformed on problems ZDT3-ZDT4, DTLZ1-DTLZ3, and UF2.
When comparing the performance of the Spacing metric for the two-objective and three-objective test problems, it can be seen that the MOQSOA performed better in the distribution of the solutions than MOPSO, IMMOEA, and RVEA on the two-objective test problem, and was basically equal to NSGA-II and MOEA/D. Additionally, for the three-objective test problem, the advantage over MOPSO, IMMOEA and RVEA was more obvious, but the algorithm was still slightly inferior to LMEA.
Through statistics and the analysis of the experimental results, it has been proven that the proposed MOQSOA has good performance in dealing with multi-objective optimization problems. The MOQSOA significantly improved the convergence and distribution of solutions in the test problems compared to the other multi-objective optimization algorithms. Specifically, the convergence of the MOQSOA was significantly enhanced compared to the classical multi-objective optimization algorithms, and the distribution of the solutions was improved compared to the novel multi-objective optimization algorithm. In addition, the MOQSOA can balance the convergence and the distribution of solutions well.

4.3. The Influence of Strategies

4.3.1. The Influence of Real-Coded Quantum Representation

Inspired by the QEA, the proposed MOQSOA treats the current optimal solution as a linear superposition of two probabilistic states with a real-coded quantum representation. Each seagull individual makes its own judgment on whether to accept the current optimal solution as the global optimum during the iterations. To demonstrate the effectiveness of this strategy, the proposed MOQSOA was compared, in terms of the IGD and Spacing metrics, with the MOSOA [24], which adopts the basic seagull optimization algorithm, on the test functions ZDT1, ZDT2, ZDT3, ZDT6, DTLZ4, DTLZ6, and UF6. The population size and the maximum capacity of the archive were set to 100, the maximum number of iterations was set to 1000, and each algorithm was run for 30 independent runs. The means and standard deviations are presented in Table 5. The Wilcoxon signed-rank test [60] was performed with a significance level of α = 0.05, and "+", "−", and "=" indicate that the MOSOA is superior, inferior, or equal to the MOQSOA, respectively.
Based on the results shown in Table 5, it can be seen that for problems ZDT3, ZDT6, DTLZ4, DTLZ6, and UF6, the IGD values obtained by the MOQSOA were better than those of the MOSOA. In contrast, for problems ZDT1 and ZDT2, the performance of the MOQSOA was not significantly different from that of the MOSOA. As for the Spacing metric, the MOQSOA did not differ significantly from the MOSOA on the majority of problems according to the Wilcoxon signed-rank test. The experimental results illustrate that, by adding a real-coded quantum representation for the current optimal solution, the MOQSOA is able to improve the convergence of the algorithm without affecting the distribution of the solutions, and thus shows better performance.
Generally, the real-coded quantum representation strategy helps to improve the algorithm in searching for global optimal solutions and identifying local optimal stagnation.

4.3.2. The Influence of Nonlinear Migration Operation

Instead of adopting the linear descent approach for the additional variable A in the migration operation of the basic SOA, the proposed MOQSOA uses a nonlinearly varying variable A to better match the migration process of the actual seagull population, as well as to accelerate convergence and improve the search accuracy of the algorithm. To demonstrate the effectiveness of this strategy, the proposed MOQSOA was compared with a variant that adopts the linear descent of the control variable A from the basic SOA (denoted as MOQSOA-LD) on the test problems ZDT1, ZDT3, and DTLZ6. The experimental metrics were the IGD values at the 200th, 500th, and 1000th generations. The population size and the maximum capacity of the archive were set to 100, the maximum number of iterations was set to 1000, and each algorithm was run for 30 independent runs. The means and standard deviations are presented in Table 6. The Wilcoxon signed-rank test [60] was performed with a significance level of α = 0.05, and "+", "−", and "=" indicate that MOQSOA-LD is superior, inferior, or equal to the MOQSOA, respectively.
Based on the results illustrated in Table 6, it can be seen that the MOQSOA generally performs better than MOQSOA-LD for problems ZDT1, ZDT3, and DTLZ6 at the 200th generation. Although the effect produced by the nonlinear migration strategy is no longer apparent in the later stage of iteration, this strategy can improve exploitation in the early stage of iteration.
As a summary, the abilities of exploitation and convergence are emphasized due to the employed real-coded quantum representation and nonlinear migration operation strategies, which help the proposed MOQSOA to obtain a better performance for different kinds of problems.

5. Conclusions

Multi-objective optimization algorithms need to balance convergence with distribution. However, many multi-objective optimization algorithms are prone to local optima, leading to unbalanced convergence and distribution. In order to balance the convergence and distribution of Pareto optimal solutions in MOPs, a multi-objective quantum-inspired seagull optimization algorithm, termed MOQSOA, was proposed in this paper. The proposed algorithm combines opposition-based learning, the migration and attacking behavior of seagulls, grid ranking, and the superposition principles of quantum computing. To obtain a better initialized population in the absence of a priori knowledge, an OBL mechanism was used to initialize the seagull population. Furthermore, the algorithm contains the nonlinear migration and attacking operations of the SOA. To maintain a better balance between exploitation and exploration when searching for global optimal solutions, the proposed algorithm adopts the real-coded quantum representation of the current optimal solution and a quantum rotation gate. In addition, the grid mechanism with GGR and GDR provides a criterion for leader selection and archive control. To evaluate the performance of the proposed algorithm, NSGA-II, MOEA/D, MOPSO, IMMOEA, RVEA, and LMEA were selected as comparison algorithms. The results of the tests performed on the ZDT, DTLZ, and UF test suites demonstrated that the MOQSOA was able to enhance the distribution and convergence performance of MOPs.
The proposed MOQSOA showed effectiveness and efficiency on the MOP benchmark test problems. However, there is still much potential future work that deserves to be studied in depth. One desirable future investigation is to solve specific real-life engineering problems with the proposed algorithm, such as circuit design, electronic component arrangement, cost optimization, automatic navigation, and sustainable energy systems. It is also worth studying how to determine more scientifically whether an optimal solution is positive or deceptive. In addition, the potential capability of the MOQSOA to solve many-objective optimization problems should be demonstrated. Moreover, it will be interesting to investigate how to use the principles of quantum computing in other multi-objective optimization algorithms.

Author Contributions

Conceptualization, Y.W., W.W. and I.A.; methodology, Y.W. and I.A.; software, Y.W. and I.A.; validation, Y.W.; formal analysis, Y.W.; investigation, Y.W.; resources, Y.W., W.W. and I.A.; data curation, Y.W.; writing—original draft preparation, Y.W.; writing—review and editing, Y.W., W.W., I.A. and E.T.-E.; visualization, Y.W.; supervision, W.W.; project administration, W.W.; funding acquisition, W.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 61873240) and the Faculty of Engineering and Technology, Future University in Egypt, New Cairo 11835, Egypt.

Data Availability Statement

The data presented in this study are openly available in PlatEMO at https://doi.org/10.1109/MCI.2017.2742868 (accessed on 23 May 2022), reference number [59].

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive comments and suggestions.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

References

  1. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T.A.M.T. A fast and elitist multi-objective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef] [Green Version]
  2. Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the Strength Pareto Evolutionary Algorithm. Available online: https://www.research-collection.ethz.ch/bitstream/handle/20.500.11850/145755/eth-24689-01.pdf (accessed on 27 September 2001).
  3. Ahmad, I.; Liu, Y.; Javeed, D.; Ahmad, S. A decision-making technique for solving order allocation problem using a genetic algorithm. In Proceedings of the 2020 6th International Conference on Electrical Engineering, Control and Robotics, Xiamen, China, 10–12 January 2020. [Google Scholar]
  4. Gong, M.; Jiao, L.; Du, H.; Bo, L. Multiobjective immune algorithm with nondominated neighbor-based selection. Evol. Comput. 2008, 16, 225–255. [Google Scholar] [CrossRef] [PubMed]
  5. Coello, C.C.; Lechuga, M.S. MOPSO: A proposal for multiple objective particle swarm optimization. In Proceedings of the 2002 Congress on Evolutionary Computation (CEC 2002), Honolulu, HI, USA, 12–17 May 2002; Volume 2, pp. 1051–1056. [Google Scholar]
  6. Han, F.; Chen, W.; Ling, Q.; Han, H. Multi-objective particle swarm optimization with adaptive strategies for feature selection. Swarm Evol. Comput. 2021, 62, 100847. [Google Scholar] [CrossRef]
  7. Tufail, A.B.; Ullah, I.; Khan, W.U.; Asif, M.; Ahmad, I.; Ma, Y.-K.; Khan, R.; Kalimullah; Ali, M.S. Diagnosis of Diabetic Retinopathy through Retinal Fundus Images and 3D Convolutional Neural Networks with Limited Number of Samples. Wirel. Commun. Mob. Comput. 2021, 2021, 6013448. [Google Scholar] [CrossRef]
  8. Raza, A.; Ayub, H.; Khan, J.A.; Ahmad, I.; SSalama, A.; Daradkeh, Y.I.; Javeed, D.; Ur Rehman, A.; Hamam, H. A Hybrid Deep Learning-Based Approach for Brain Tumor Classification. Electronics 2022, 11, 1146. [Google Scholar] [CrossRef]
  9. Cui, Y.; Meng, X.; Qiao, J. A multi-objective particle swarm optimization algorithm based on two-archive mechanism. Appl. Soft Comput. 2022, 119, 108532. [Google Scholar] [CrossRef]
  10. Abdel-Basset, M.; Mohamed, R.; Mirjalili, S. A novel whale optimization algorithm integrated with Nelder–Mead simplex for multi-objective optimization problems. Knowl.-Based Syst. 2021, 212, 106619. [Google Scholar] [CrossRef]
  11. Wu, Z.; Xie, Z. A multi-objective lion swarm optimization based on multi-agent. J. Ind. Manag. Optim. 2022. [Google Scholar] [CrossRef]
  12. Zheng, J.; Zhang, Z.; Zou, J.; Yang, S.; Ou, J.; Hu, Y. A dynamic multi-objective particle swarm optimization algorithm based on adversarial decomposition and neighborhood evolution. Swarm Evol. Comput. 2022, 69, 100987. [Google Scholar] [CrossRef]
  13. Gu, Q.; Wang, Q.; Li, X.; Li, X. A surrogate-assisted multi-objective particle swarm optimization of expensive constrained combinatorial optimization problems. Knowl.-Based Syst. 2021, 223, 107049. [Google Scholar] [CrossRef]
  14. Adetunji, K.E.; Hofsajer, I.W.; Abu-Mahfouz, A.M.; Cheng, L. A review of metaheuristic techniques for optimal integration of electrical units in distribution networks. IEEE Access 2020, 9, 5046–5068. [Google Scholar] [CrossRef]
  15. Liu, H.; Lei, Y.; Fu, Y.; Li, X. A novel hybrid-point-line energy management strategy based on multi-objective optimization for range-extended electric vehicle. Energy 2022, 247, 123357. [Google Scholar] [CrossRef]
  16. Xie, X.; Zheng, J.; Feng, M.; He, S.; Lin, Z. Multi-Objective Mayfly Optimization Algorithm Based on Dimensional Swap Variation for RFID Network Planning. IEEE Sens. J. 2022, 22, 7311–7323. [Google Scholar] [CrossRef]
  17. Ahmad, I.; Ullah, I.; Khan, W.U.; Rehman, A.U.; Adrees, M.S.; Saleem, M.Q.; Cheikhrouhou, O.; Hamam, H.; Shafiq, M. Efficient algorithms for E-healthcare to solve multiobject fuse detection problem. J. Healthc. Eng. 2021, 2021, 9500304. [Google Scholar] [CrossRef]
  18. Javeed, D.; Gao, T.; Khan, M.T.; Shoukat, D. A Hybrid Intelligent Framework to Combat Sophisticated Threats in Secure Industries. Sensors 2022, 22, 1582. [Google Scholar] [CrossRef]
  19. Javeed, D.; Gao, T.; Khan, M.T. SDN-Enabled Hybrid DL-Driven Framework for the Detection of Emerging Cyber Threats in IoT. Electronics 2021, 10, 918. [Google Scholar] [CrossRef]
  20. Ahmad, I.; Liu, Y.; Javeed, D.; Shamshad, N.; Sarwr, D.; Ahmad, S. A review of artificial intelligence techniques for selection & evaluation. In Proceedings of the 2020 6th International Conference on Electrical Engineering, Control and Robotics, Xiamen, China, 10–12 January 2020. [Google Scholar]
  21. Javeed, D.; Gao, T.; Khan, M.T.; Ahmad, I. A Hybrid Deep Learning-Driven SDN Enabled Mechanism for Secure Communication in Internet of Things (IoT). Sensors 2021, 21, 4884. [Google Scholar] [CrossRef]
  22. Wang, W.L.; Li, W.K.; Wang, Z.; Li, L. Opposition-based multi-objective whale optimization algorithm with global grid ranking. Neurocomputing 2019, 341, 41–59. [Google Scholar] [CrossRef]
  23. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 2019, 165, 169–196. [Google Scholar] [CrossRef]
  24. Dhiman, G.; Singh, K.K.; Soni, M.; Nagar, A.; Dehghani, M.; Slowik, A.; Kaur, A.; Sharma, A.; Houssein, E.H.; Cengiz, K. MOSOA: A new multi-objective seagull optimization algorithm. Expert Syst. Appl. 2021, 167, 114150. [Google Scholar] [CrossRef]
  25. Dhiman, G.; Singh, K.K.; Slowik, A.; Chang, V.; Yildiz, A.R.; Kaur, A.; Garg, M. EMoSOA: A new evolutionary multi-objective seagull optimization algorithm for global optimization. Int. J. Mach. Learn. Cybern. 2021, 12, 571–596. [Google Scholar] [CrossRef]
  26. Narayanan, A.; Moore, M. Quantum-inspired genetic algorithms. In Proceedings of the IEEE International Conference on Evolutionary Computation (CEC 1996), Nagoya, Japan, 20–22 May 1996; pp. 61–66. [Google Scholar]
  27. Han, K.-H.; Kim, J.-H. Quantum-inspired evolutionary algorithm for a class of combinatorial optimization. IEEE Trans. Evol. Comput. 2002, 6, 580–593. [Google Scholar] [CrossRef] [Green Version]
  28. Da Cruz, A.V.A.; Barbosa, C.R.H.; Pacheco, M.A.C.; Vellasco, M.B.R. Quantum-inspired evolutionary algorithms and its application to numerical optimization problems. In Proceedings of the International Conference on Neural Information Processing, Siem Reap, Cambodia, 13–16 December 2018; Springer: Berlin/Heidelberg, Germany, 2004; pp. 212–217. [Google Scholar]
  29. Jiao, L.; Li, Y.; Gong, M.; Zhang, X. Quantum-inspired immune clonal algorithm for global numerical optimization. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2008, 38, 1234–1253. [Google Scholar] [CrossRef] [PubMed]
  30. Layeb, A. A hybrid quantum inspired harmony search algorithm for 0–1 optimization problems. J. Comput. Appl. Math. 2013, 253, 14–25. [Google Scholar] [CrossRef]
  31. Sun, J.; Feng, B.; Xu, W. Particle swarm optimization with particles having quantum behavior. In Proceedings of the 2004 Congress on Evolutionary Computation (CEC 2004), Portland, OR, USA, 19–23 June 2004; Volume 1, pp. 325–331. [Google Scholar]
  32. Sun, J.; Xu, W.; Feng, B. A global search strategy of quantum-behaved particle swarm optimization. In Proceedings of the IEEE Conference on Cybernetics and Intelligent Systems, Singapore, 1–3 December 2004; Volume 1, pp. 111–116. [Google Scholar]
  33. Sun, J.; Xu, W.; Feng, B. Adaptive parameter control for quantum-behaved particle swarm optimization on individual level. In Proceedings of the 2005 IEEE International Conference on Systems, Man and Cybernetics, Waikoloa, HI, USA, 12 October 2005; Volume 4, pp. 3049–3054. [Google Scholar]
  34. Sun, J.; Fang, W.; Palade, V.; Wu, X.; Xu, W. Quantum-behaved particle swarm optimization with Gaussian distributed local attractor point. Appl. Math. Comput. 2011, 218, 3763–3775. [Google Scholar] [CrossRef]
  35. Li, Y.; Xiang, R.; Jiao, L.; Liu, R. An improved cooperative quantum-behaved particle swarm optimization. Soft Comput. 2012, 16, 1061–1069. [Google Scholar] [CrossRef]
  36. Mezura-Montes, E.; Coello, C.A.C. Constraint-handling in nature-inspired numerical optimization: Past, present and future. Swarm Evol. Comput. 2011, 1, 173–194. [Google Scholar] [CrossRef]
  37. Guo, Y.; Chen, N.Z.; Mou, J.; Zhang, B. A quantum-behaved particle swarm optimization algorithm with the flexible single-/multi-population strategy and multi-stage perturbation strategy based on the characteristics of objective function. Soft Comput. 2020, 24, 6909–6956. [Google Scholar] [CrossRef]
  38. You, Q.; Sun, J.; Pan, F.; Palade, V.; Ahmad, B. Dmo-qpso: A multi-objective quantum-behaved particle swarm optimization algorithm based on decomposition with diversity control. Mathematics 2021, 9, 1959. [Google Scholar] [CrossRef]
  39. Fan, W.; Liu, Q.; Wang, M. Bi-Level Multi-Objective Optimization Scheduling for Regional Integrated Energy Systems Based on Quantum Evolutionary Algorithm. Energies 2021, 14, 4740. [Google Scholar] [CrossRef]
  40. Hesar, A.; Kamel, S.; Houshmand, M. A quantum multi-objective optimization algorithm based on harmony search method. Soft Comput. 2021, 25, 9427–9439. [Google Scholar] [CrossRef]
  41. Dayana, R.; Kalavathy, G. Quantum Firefly Secure Routing for Fog Based Wireless Sensor Networks. Intell. Autom. Soft Comput. 2022, 31, 1511–1528. [Google Scholar] [CrossRef]
  42. Hu, W.; Wang, H.; Qiu, Z.; Nie, C.; Yan, L. A quantum particle swarm optimization driven urban traffic light scheduling model. Neural Comput. Appl. 2018, 29, 901–911. [Google Scholar] [CrossRef]
  43. Xu, H.; Hu, Z.; Zhang, P.; Gu, F.; Wu, F.; Song, W.; Wang, C. Optimization and Experiment of Straw Back-Throwing Device of No-Tillage Drill Using Multi-Objective QPSO Algorithm. Agriculture 2021, 11, 986. [Google Scholar] [CrossRef]
  44. Zhang, L.; Xu, L. Multi-objective QPSO for short-term load forecast based on diagonal recursive neural network. J. Comput. Methods Sci. Eng. 2021, 21, 1113–1124. [Google Scholar] [CrossRef]
  45. Wang, L.; Liu, L.; Qi, J.; Peng, W. Improved quantum particle swarm optimization algorithm for offline path planning in AUVs. IEEE Access 2020, 8, 143397–143411. [Google Scholar] [CrossRef]
  46. Al-Wesabi, F.; Obayya, M.; Hilal, A.; Castillo, O.; Gupta, D.; Khanna, A. Multi-objective quantum tunicate swarm optimization with deep learning model for intelligent dystrophinopathies diagnosis. Soft Comput. 2022. [Google Scholar] [CrossRef]
  47. Grotti, E.; Mizushima, D.M.; Backes, A.D.; de Freitas Awruch, M.D.; Gomes, H.M. A novel multi-objective quantum particle swarm algorithm for suspension optimization. Comput. Appl. Math. 2020, 39, 105. [Google Scholar] [CrossRef]
  48. Hou, J.; Wang, W.; Zhang, Y.; Liu, X.; Xie, Y. Multi-objective quantum inspired evolutionary SLM scheme for PAPR reduction in multi-carrier modulation. IEEE Access 2020, 8, 26022–26029. [Google Scholar] [CrossRef]
  49. Hou, G.; Gong, L.; Yang, Z.; Zhang, J. Multi-objective economic model predictive control for gas turbine system based on quantum simultaneous whale optimization algorithm. Energy Convers. Manag. 2020, 207, 112498. [Google Scholar] [CrossRef]
  50. Mahdavi, S.; Rahnamayan, S.; Deb, K. Opposition based learning: A literature review. Swarm Evol. Comput. 2018, 39, 1–23. [Google Scholar] [CrossRef]
  51. Chen, B.; Lei, H.; Shen, H.; Liu, Y.; Lu, Y. A hybrid quantum-based PIO algorithm for global numerical optimization. Sci. China Inf. Sci. 2019, 62, 70203. [Google Scholar] [CrossRef] [Green Version]
  52. Cheng, R.; Jin, Y.; Narukawa, K.; Sendhoff, B. A multiobjective evolutionary algorithm using Gaussian process-based inverse modeling. IEEE Trans. Evol. Comput. 2015, 19, 838–856. [Google Scholar] [CrossRef]
  53. Cheng, R.; Jin, Y.; Olhofer, M.; Sendhoff, B. A reference vector guided evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 2016, 20, 773–791. [Google Scholar] [CrossRef] [Green Version]
  54. Zhang, X.; Tian, Y.; Cheng, R.; Jin, Y. A decision variable clustering based evolutionary algorithm for large-scale many-objective optimization. IEEE Trans. Evol. Comput. 2018, 22, 97–112. [Google Scholar] [CrossRef] [Green Version]
  55. García, S.; Fernández, A.; Luengo, J.; Herrera, F. A study of statistical techniques and performance measures for genetics-based machine learning: Accuracy and interpretability. Soft Comput. 2009, 13, 959. [Google Scholar] [CrossRef]
  56. Zitzler, E.; Deb, K.; Thiele, L. Comparison of multiobjective evolutionary algorithms: Empirical results. Evol. Comput. 2000, 8, 173–195. [Google Scholar] [CrossRef] [Green Version]
  57. Deb, K.; Thiele, L.; Laumanns, M.; Zitzler, E. Scalable test problems for evolutionary multiobjective optimization. In Evolutionary Multiobjective Optimization: Theoretical Advances and Applications; Abraham, A., Jain, L., Goldberg, R., Eds.; Springer: London, UK, 2005; pp. 105–145. [Google Scholar]
  58. Zhang, Q.; Zhou, A.; Zhao, S.; Suganthan, P.N.; Liu, W.; Tiwari, S. Multiobjective Optimization Test Instances for the CEC 2009 Special Session and Competition. Available online: https://www.al-roomi.org/multimedia/CEC_Database/CEC2009/MultiObjectiveEA/CEC2009_MultiObjectiveEA_TechnicalReport.pdf (accessed on 20 April 2009).
  59. Tian, Y.; Cheng, R.; Zhang, X.; Jin, Y. PlatEMO: A MATLAB platform for evolutionary multi-objective optimization [educational forum]. IEEE Comput. Intell. Mag. 2017, 12, 73–87. [Google Scholar]
  60. Rey, D.; Neuhäuser, M. Wilcoxon-signed-rank test. In International Encyclopedia of Statistical Science; Lovric, M., Ed.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 1658–1659. [Google Scholar]
Figure 1. Framework of MOQSOA.
Figure 2. Initialization based on OBL.
Figure 3. Schematic diagram of a seagull agent approaching the current optimal individual.
Figure 4. Pareto fronts of each algorithm for ZDT4.
Figure 5. Pareto fronts of each algorithm for DTLZ5.
Figure 6. Pareto fronts of each algorithm for ZDT3.
Figure 7. Pareto fronts of each algorithm for DTLZ2.
Table 1. Characteristics of benchmark test problems.

Test Problems | Properties | Number of Objectives
ZDT1 | Convex | 2
ZDT2 | Concave | 2
ZDT3 | Disconnected | 2
ZDT4 | Convex | 2
ZDT6 | Concave | 2
DTLZ1 | Linear | 3
DTLZ2 | Concave | 3
DTLZ3 | Concave | 3
DTLZ4 | Concave | 3
DTLZ5 | Concave | 3
DTLZ6 | Concave | 3
DTLZ7 | Disconnected | 3
DTLZ8 | Linear | 3
DTLZ9 | Concave | 3
UF1 | Convex | 2
UF2 | Convex | 2
UF3 | Convex | 2
UF4 | Concave | 2
UF5 | Disconnected | 2
UF6 | Disconnected | 2
UF7 | Linear | 2
UF8 | Concave | 3
UF9 | Disconnected | 3
UF10 | Concave | 3
All experiments were conducted in MATLAB R2016b with PlatEMO v2.9 [59], running on an Intel Core i7-4790 CPU @ 3.60 GHz under Windows 7 Ultimate Edition.
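For readers who wish to reproduce the evaluation outside PlatEMO, the two indicators reported in Tables 3–6 can be computed directly from the obtained objective vectors and a sampled reference front. The sketch below is a minimal illustration rather than the PlatEMO implementation: it uses the common formulation of IGD as the mean Euclidean distance from each reference point to its nearest obtained point, and of Spacing as the standard deviation of nearest-neighbour distances within the obtained set (some papers use L1 distances here). The array names `obtained` and `reference` and the toy data are assumptions for the example only.

```python
import numpy as np

def igd(obtained: np.ndarray, reference: np.ndarray) -> float:
    """Mean Euclidean distance from each reference (true-PF) point to its
    nearest obtained objective vector; lower values indicate better
    convergence and coverage."""
    dist = np.linalg.norm(reference[:, None, :] - obtained[None, :, :], axis=2)
    return float(dist.min(axis=1).mean())

def spacing(obtained: np.ndarray) -> float:
    """Standard deviation of each obtained point's distance to its nearest
    neighbour in objective space; lower values indicate a more uniform
    distribution. (Euclidean distances are assumed here.)"""
    dist = np.linalg.norm(obtained[:, None, :] - obtained[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)   # ignore each point's distance to itself
    return float(dist.min(axis=1).std(ddof=1))

if __name__ == "__main__":
    # Toy bi-objective example with a synthetic linear reference front.
    rng = np.random.default_rng(0)
    reference = np.stack([np.linspace(0, 1, 100), 1 - np.linspace(0, 1, 100)], axis=1)
    obtained = rng.random((50, 2))
    print(f"IGD = {igd(obtained, reference):.4e}, Spacing = {spacing(obtained):.4e}")
```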
Table 2. Parameters of the algorithms in the experiment.

Algorithm | Parameter | Value
NSGA-II | Crossover probability pc | 0.8
NSGA-II | Mutation probability pm | 0.1
MOEA/D | Number of neighbors T | 10
MOEA/D | Probability of selecting parents pp | 0.9
MOEA/D | Distribution index Di | 30
MOEA/D | Differential weight | 0.5
MOPSO | Number of grids nGrid | 10
MOPSO | Inertia weight w | 0.5
MOPSO | Personal coefficient c1 | 1
MOPSO | Social coefficient c2 | 2
IMMOEA | K | 10
RVEA | α | 2
RVEA | fr | 0.1
LMEA | nSel | 5
LMEA | nPer | 50
LMEA | nCor | 5
Table 3. IGD metric results.

Function | Metrics | NSGA-II | MOEA/D | MOPSO | IMMOEA | RVEA | LMEA | MOQSOA
ZDT1 | Average | 4.8116 × 10−3 (−) | 4.6968 × 10−3 (−) | 4.8735 × 10−3 (−) | 7.9935 × 10−3 (−) | 5.5528 × 10−3 (−) | 5.1070 × 10−3 (−) | 4.1101 × 10−3
ZDT1 | Std | 2.06 × 10−4 | 3.14 × 10−4 | 1.77 × 10−4 | 1.12 × 10−4 | 9.19 × 10−4 | 5.00 × 10−4 | 7.28 × 10−5
ZDT2 | Average | 4.8719 × 10−3 (−) | 5.4469 × 10−3 (−) | 5.2469 × 10−3 (−) | 1.1104 × 10−2 (−) | 7.2369 × 10−3 (−) | 4.6753 × 10−3 (−) | 3.8136 × 10−3
ZDT2 | Std | 1.81 × 10−4 | 4.94 × 10−4 | 3.24 × 10−4 | 1.67 × 10−4 | 1.74 × 10−3 | 1.51 × 10−4 | 5.67 × 10−5
ZDT3 | Average | 5.2780 × 10−3 (+) | 1.4329 × 10−2 (−) | 5.5320 × 10−3 (+) | 1.2438 × 10−2 (−) | 7.9570 × 10−3 (−) | 1.0940 × 10−2 (−) | 6.4169 × 10−3
ZDT3 | Std | 2.39 × 10−4 | 2.67 × 10−3 | 2.12 × 10−4 | 5.25 × 10−4 | 1.88 × 10−4 | 2.69 × 10−3 | 1.39 × 10−4
ZDT4 | Average | 5.1081 × 10−3 (−) | 7.2439 × 10−3 (−) | 4.8082 × 10−3 (−) | 4.8076 × 10−3 (−) | 5.4592 × 10−3 (−) | 4.7664 × 10−3 (−) | 4.1890 × 10−3
ZDT4 | Std | 6.53 × 10−4 | 1.69 × 10−3 | 3.00 × 10−4 | 9.77 × 10−5 | 1.44 × 10−3 | 3.22 × 10−4 | 2.64 × 10−4
ZDT6 | Average | 3.6160 × 10−3 (−) | 4.3208 × 10−3 (−) | 4.2740 × 10−3 (−) | 9.4301 × 10−1 (−) | 3.3918 × 10−3 (−) | 3.3078 × 10−3 (−) | 3.0046 × 10−3
ZDT6 | Std | 1.20 × 10−4 | 3.92 × 10−3 | 2.73 × 10−4 | 4.10 × 10−2 | 2.25 × 10−4 | 3.60 × 10−5 | 2.18 × 10−4
DTLZ1 | Average | 2.7850 × 10−2 (−) | 2.0565 × 10−2 (=) | 2.7105 × 10−2 (−) | 1.2940 (−) | 2.0558 × 10−2 (=) | 2.1025 × 10−2 (=) | 2.0557 × 10−2
DTLZ1 | Std | 2.03 × 10−3 | 1.07 × 10−5 | 5.59 × 10−4 | 6.58 × 10−2 | 6.79 × 10−5 | 2.55 × 10−4 | 1.92 × 10−4
DTLZ2 | Average | 6.8830 × 10−2 (−) | 5.4464 × 10−2 (=) | 6.9818 × 10−2 (−) | 7.9795 × 10−2 (−) | 5.4465 × 10−2 (=) | 5.3844 × 10−2 (=) | 5.4465 × 10−2
DTLZ2 | Std | 3.19 × 10−3 | 1.65 × 10−5 | 3.22 × 10−3 | 3.65 × 10−3 | 1.28 × 10−5 | 2.76 × 10−4 | 1.34 × 10−4
DTLZ3 | Average | 6.297 × 10−2 (−) | 5.5516 × 10−2 (=) | 3.8831 × 10−1 (−) | 2.8865 (−) | 5.5358 × 10−2 (=) | 5.4197 × 10−2 (=) | 5.4711 × 10−2
DTLZ3 | Std | 4.49 × 10−3 | 1.32 × 10−3 | 5.40 × 10−1 | 5.21 × 10−1 | 1.15 × 10−3 | 4.76 × 10−4 | 1.23 × 10−4
DTLZ4 | Average | 6.7988 × 10−2 (+) | 5.4464 × 10−2 (+) | 7.1277 × 10−2 (+) | 7.7431 × 10−2 (+) | 5.4465 × 10−2 (+) | 9.6411 × 10−2 (+) | 3.7873 × 10−1
DTLZ4 | Std | 4.16 × 10−3 | 4.87 × 10−4 | 1.60 × 10−3 | 3.48 × 10−3 | 4.01 × 10−4 | 7.37 × 10−2 | 2.81 × 10−1
DTLZ5 | Average | 5.4096 × 10−3 (=) | 3.3904 × 10−2 (−) | 6.3037 × 10−3 (−) | 2.0947 × 10−2 (−) | 6.2925 × 10−2 (−) | 4.6503 × 10−3 (=) | 5.0756 × 10−3
DTLZ5 | Std | 6.67 × 10−5 | 1.63 × 10−5 | 9.93 × 10−5 | 3.35 × 10−3 | 2.09 × 10−3 | 5.79 × 10−5 | 1.90 × 10−4
DTLZ6 | Average | 5.8399 × 10−3 (−) | 3.3926 × 10−2 (−) | 6.7322 × 10−3 (−) | 3.9896 (−) | 1.1591 × 10−1 (−) | 4.4731 × 10−3 (=) | 4.9789 × 10−3
DTLZ6 | Std | 6.36 × 10−5 | 3.15 × 10−5 | 8.78 × 10−4 | 7.10 × 10−2 | 6.35 × 10−4 | 9.76 × 10−5 | 3.38 × 10−5
DTLZ7 | Average | 8.0964 × 10−2 (+) | 1.5431 × 10−1 (=) | 9.0199 × 10−2 (+) | 3.2745 × 10−1 (−) | 1.0659 × 10−1 (+) | 5.8854 × 10−2 (+) | 1.6107 × 10−1
DTLZ7 | Std | 4.70 × 10−3 | 2.15 × 10−4 | 1.30 × 10−2 | 5.03 × 10−2 | 2.21 × 10−3 | 4.52 × 10−4 | 1.62 × 10−1
DTLZ8 | Average | 4.4234 × 10−2 (=) | NaN | NaN | NaN | 5.8818 × 10−2 (−) | NaN | 4.3927 × 10−2
DTLZ8 | Std | 3.83 × 10−3 | NaN | NaN | NaN | 1.43 × 10−3 | NaN | 2.44 × 10−3
DTLZ9 | Average | 5.9530 × 10−3 (−) | NaN | NaN | 4.4744 (−) | 2.6833 × 10−2 (−) | NaN | 5.1492 × 10−3
DTLZ9 | Std | 4.08 × 10−4 | NaN | NaN | 6.01 × 10−2 | 1.19 × 10−3 | NaN | 4.92 × 10−4
UF1 | Average | 9.7093 × 10−2 (=) | 2.6021 × 10−1 (−) | 7.8469 × 10−2 (+) | 6.7741 × 10−2 (+) | 8.2189 × 10−2 (+) | 1.6381 × 10−2 (+) | 1.4502 × 10−1
UF1 | Std | 3.41 × 10−3 | 1.08 × 10−1 | 7.31 × 10−2 | 6.25 × 10−3 | 5.02 × 10−3 | 3.87 × 10−3 | 4.59 × 10−2
UF2 | Average | 3.2343 × 10−2 (+) | 8.2332 × 10−2 (−) | 2.3067 × 10−2 (+) | 5.4120 × 10−2 (=) | 7.2716 × 10−2 (−) | 1.5039 × 10−2 (+) | 5.3761 × 10−2
UF2 | Std | 3.81 × 10−3 | 4.43 × 10−2 | 3.00 × 10−3 | 3.78 × 10−2 | 7.74 × 10−3 | 1.34 × 10−3 | 2.03 × 10−2
UF3 | Average | 1.8641 × 10−1 (+) | 3.1030 × 10−1 (=) | 1.1567 × 10−1 (+) | 1.1030 × 10−1 (+) | 3.1836 × 10−1 (=) | 1.6484 × 10−1 (+) | 2.8865 × 10−1
UF3 | Std | 1.19 × 10−2 | 4.77 × 10−2 | 2.14 × 10−2 | 1.60 × 10−2 | 2.26 × 10−3 | 5.07 × 10−3 | 1.40 × 10−2
UF4 | Average | 4.9113 × 10−2 (=) | 8.3692 × 10−2 (−) | 4.5962 × 10−2 (=) | 6.6832 × 10−2 (−) | 9.5207 × 10−2 (−) | 3.7822 × 10−2 (+) | 4.6945 × 10−2
UF4 | Std | 1.75 × 10−3 | 3.42 × 10−3 | 2.79 × 10−3 | 4.24 × 10−3 | 2.04 × 10−3 | 5.38 × 10−4 | 1.89 × 10−3
UF5 | Average | 3.9011 × 10−1 (−) | 5.8082 × 10−1 (−) | 6.7952 × 10−1 (−) | 6.6328 × 10−1 (−) | 3.4518 × 10−1 (=) | 2.1331 × 10−1 (+) | 3.2603 × 10−1
UF5 | Std | 1.19 × 10−1 | 8.90 × 10−2 | 1.64 × 10−1 | 9.79 × 10−2 | 8.50 × 10−2 | 3.20 × 10−2 | 9.33 × 10−2
UF6 | Average | 1.2527 × 10−1 (+) | 4.9777 × 10−1 (−) | 4.3455 × 10−1 (−) | 2.6280 × 10−1 (=) | 1.2725 × 10−1 (+) | 3.1444 × 10−1 (−) | 2.6237 × 10−1
UF6 | Std | 1.25 × 10−2 | 4.34 × 10−1 | 8.01 × 10−2 | 1.39 × 10−1 | 9.55 × 10−3 | 1.26 × 10−2 | 1.35 × 10−1
UF7 | Average | 1.7075 × 10−1 (=) | 4.3857 × 10−1 (−) | 6.2768 × 10−2 (+) | 1.5326 × 10−1 (=) | 1.3181 × 10−1 (=) | 1.1450 × 10−1 (=) | 1.4492 × 10−1
UF7 | Std | 1.56 × 10−1 | 1.87 × 10−1 | 7.63 × 10−2 | 1.55 × 10−1 | 1.70 × 10−1 | 6.78 × 10−2 | 1.32 × 10−1
UF8 | Average | 2.7066 × 10−1 (−) | 3.2370 × 10−1 (−) | 2.6221 × 10−1 (−) | 2.7670 × 10−1 (−) | 3.3376 × 10−1 (−) | 1.5603 × 10−1 (+) | 2.1688 × 10−1
UF8 | Std | 7.49 × 10−2 | 3.07 × 10−2 | 7.18 × 10−2 | 3.10 × 10−3 | 5.66 × 10−3 | 1.60 × 10−2 | 7.29 × 10−2
UF9 | Average | 2.6615 × 10−1 (=) | 3.4263 × 10−1 (−) | 2.8580 × 10−1 (−) | 3.0671 × 10−1 (−) | 3.6412 × 10−1 (−) | 9.0845 × 10−2 (+) | 2.3198 × 10−1
UF9 | Std | 9.57 × 10−2 | 8.27 × 10−3 | 2.09 × 10−2 | 1.22 × 10−1 | 1.89 × 10−2 | 3.04 × 10−2 | 5.95 × 10−2
UF10 | Average | 4.3683 × 10−1 (=) | 7.9220 × 10−1 (−) | 5.5064 × 10−1 (−) | 2.9879 × 10−1 (+) | 6.5234 × 10−1 (−) | 4.6990 × 10−1 (=) | 4.3228 × 10−1
UF10 | Std | 5.88 × 10−2 | 1.35 × 10−1 | 3.59 × 10−2 | 5.14 × 10−3 | 2.11 × 10−1 | 3.90 × 10−2 | 1.41 × 10−1
+/−/= |  | 6/11/7 | 1/16/5 | 7/14/1 | 4/16/3 | 4/14/6 | 9/6/7 |
Table 4. Spacing metric results.

Function | Metrics | NSGA-II | MOEA/D | MOPSO | IMMOEA | RVEA | LMEA | MOQSOA
ZDT1 | Average | 6.8090 × 10−3 (−) | 5.3019 × 10−3 (−) | 7.7359 × 10−3 (−) | 1.4474 × 10−2 (−) | 9.7893 × 10−3 (−) | 1.3206 × 10−2 (−) | 5.3225 × 10−3
ZDT1 | Std | 5.22 × 10−4 | 4.93 × 10−4 | 7.03 × 10−4 | 6.41 × 10−3 | 5.08 × 10−4 | 3.48 × 10−3 | 6.57 × 10−4
ZDT2 | Average | 7.5232 × 10−3 (−) | 5.0442 × 10−3 (=) | 7.7292 × 10−3 (−) | 9.2439 × 10−3 (−) | 7.1899 × 10−3 (−) | 5.0437 × 10−3 (=) | 4.4243 × 10−3
ZDT2 | Std | 8.84 × 10−4 | 5.62 × 10−4 | 4.86 × 10−4 | 4.41 × 10−4 | 2.32 × 10−3 | 1.07 × 10−3 | 1.27 × 10−4
ZDT3 | Average | 7.4689 × 10−3 (+) | 1.9846 × 10−2 (=) | 7.8757 × 10−3 (+) | 3.3663 × 10−2 (−) | 1.1866 × 10−2 (+) | 1.2563 × 10−2 (=) | 1.3825 × 10−2
ZDT3 | Std | 6.95 × 10−4 | 2.08 × 10−3 | 7.14 × 10−4 | 9.87 × 10−4 | 9.30 × 10−4 | 2.53 × 10−3 | 3.47 × 10−5
ZDT4 | Average | 7.1956 × 10−3 (+) | 5.6232 × 10−3 (+) | 7.1330 × 10−3 (+) | 7.4260 × 10−3 (+) | 9.7327 × 10−3 (+) | 1.4311 × 10−2 (−) | 1.0226 × 10−2
ZDT4 | Std | 5.70 × 10−4 | 1.18 × 10−3 | 5.23 × 10−4 | 5.01 × 10−4 | 3.54 × 10−4 | 1.59 × 10−3 | 6.01 × 10−4
ZDT6 | Average | 5.6569 × 10−3 (−) | 3.2179 × 10−3 (=) | 7.6746 × 10−3 (−) | 5.3819 × 10−2 (−) | 2.3782 × 10−3 (−) | 3.7229 × 10−3 (−) | 2.1262 × 10−3
ZDT6 | Std | 5.31 × 10−4 | 2.58 × 10−4 | 4.97 × 10−4 | 1.60 × 10−2 | 7.91 × 10−5 | 4.36 × 10−4 | 3.36 × 10−4
DTLZ1 | Average | 2.1135 × 10−2 (+) | 3.7899 × 10−5 (+) | 2.2926 × 10−2 (+) | 2.2280 (−) | 1.6424 × 10−4 (+) | 1.8593 × 10−2 (+) | 3.1105 × 10−2
DTLZ1 | Std | 1.33 × 10−3 | 7.94 × 10−5 | 1.98 × 10−3 | 8.39 × 10−1 | 6.07 × 10−4 | 1.74 × 10−3 | 1.15 × 10−3
DTLZ2 | Average | 5.7049 × 10−2 (+) | 5.7179 × 10−2 (+) | 6.0138 × 10−2 (+) | 8.6819 × 10−2 (=) | 5.7164 × 10−2 (+) | 2.7904 × 10−2 (+) | 8.4078 × 10−2
DTLZ2 | Std | 4.56 × 10−3 | 4.49 × 10−5 | 6.81 × 10−3 | 3.42 × 10−3 | 6.08 × 10−5 | 2.99 × 10−3 | 3.39 × 10−3
DTLZ3 | Average | 1.4015 × 10−1 (−) | 5.6364 × 10−2 (+) | 8.4753 × 10−2 (=) | 5.5285 (−) | 5.4953 × 10−2 (+) | 3.0975 × 10−2 (+) | 8.3840 × 10−2
DTLZ3 | Std | 1.47 × 10−1 | 1.51 × 10−3 | 3.47 × 10−2 | 1.63 | 3.93 × 10−3 | 1.06 × 10−3 | 2.00 × 10−3
DTLZ4 | Average | 5.4198 × 10−2 (−) | 5.7166 × 10−2 (−) | 6.3045 × 10−2 (−) | 7.1517 × 10−2 (−) | 5.7146 × 10−2 (−) | 5.8702 × 10−2 (−) | 3.2762 × 10−2
DTLZ4 | Std | 4.46 × 10−3 | 1.01 × 10−4 | 4.18 × 10−3 | 5.65 × 10−3 | 2.37 × 10−4 | 5.31 × 10−2 | 4.08 × 10−2
DTLZ5 | Average | 8.8231 × 10−3 (+) | 1.3776 × 10−2 (=) | 1.1250 × 10−2 (=) | 5.2354 × 10−2 (−) | 1.2206 × 10−1 (−) | 7.9189 × 10−3 (+) | 1.2907 × 10−2
DTLZ5 | Std | 1.53 × 10−4 | 8.58 × 10−5 | 2.95 × 10−4 | 2.41 × 10−3 | 1.43 × 10−2 | 4.14 × 10−4 | 6.24 × 10−4
DTLZ6 | Average | 1.1714 × 10−2 (=) | 1.2549 × 10−2 (=) | 1.1354 × 10−2 (=) | 4.6209 × 10−1 (−) | 1.0722 × 10−1 (−) | 7.0155 × 10−3 (+) | 1.2237 × 10−2
DTLZ6 | Std | 5.38 × 10−4 | 6.61 × 10−5 | 4.92 × 10−5 | 9.05 × 10−2 | 1.32 × 10−3 | 1.25 × 10−3 | 2.05 × 10−4
DTLZ7 | Average | 6.3775 × 10−2 (+) | 1.9627 × 10−1 (−) | 7.6841 × 10−2 (=) | 2.5247 × 10−1 (−) | 1.1524 × 10−1 (−) | 5.9592 × 10−2 (+) | 8.2865 × 10−2
DTLZ7 | Std | 9.14 × 10−3 | 9.65 × 10−4 | 4.31 × 10−3 | 3.51 × 10−2 | 1.25 × 10−3 | 7.30 × 10−3 | 2.81 × 10−2
DTLZ8 | Average | 3.6512 × 10−2 (=) | NaN | NaN | NaN | 3.3160 × 10−2 (=) | NaN | 3.8990 × 10−2
DTLZ8 | Std | 9.12 × 10−3 | NaN | NaN | NaN | 5.54 × 10−3 | NaN | 4.24 × 10−3
DTLZ9 | Average | 8.4354 × 10−3 (−) | NaN | NaN | 8.7077 × 10−2 (−) | 3.0606 × 10−2 (−) | NaN | 7.1642 × 10−3
DTLZ9 | Std | 6.68 × 10−4 | NaN | NaN | 2.46 × 10−2 | 5.96 × 10−3 | NaN | 1.29 × 10−3
UF1 | Average | 2.3718 × 10−3 (=) | 3.5533 × 10−3 (−) | 1.5285 × 10−2 (−) | 2.4196 × 10−2 (−) | 2.3252 × 10−2 (−) | 1.6681 × 10−2 (−) | 2.4022 × 10−3
UF1 | Std | 2.35 × 10−3 | 5.10 × 10−3 | 2.12 × 10−2 | 2.77 × 10−2 | 6.39 × 10−3 | 3.62 × 10−3 | 1.53 × 10−3
UF2 | Average | 5.2999 × 10−3 (+) | 8.4767 × 10−3 (+) | 5.9590 × 10−3 (+) | 1.0650 × 10−2 (+) | 1.1906 × 10−2 (=) | 1.3808 × 10−2 (=) | 1.4313 × 10−2
UF2 | Std | 7.03 × 10−4 | 5.03 × 10−3 | 3.62 × 10−4 | 8.10 × 10−4 | 8.61 × 10−4 | 4.06 × 10−3 | 1.12 × 10−2
UF3 | Average | 2.0528 × 10−2 (−) | 2.5562 × 10−3 (+) | 1.1321 × 10−2 (−) | 1.0894 × 10−2 (−) | 6.5446 × 10−4 (+) | 4.3037 × 10−2 (−) | 4.7750 × 10−3
UF3 | Std | 1.75 × 10−2 | 5.03 × 10−3 | 1.11 × 10−2 | 1.92 × 10−3 | 7.68 × 10−4 | 1.45 × 10−2 | 6.76 × 10−3
UF4 | Average | 6.6588 × 10−3 (=) | 9.0938 × 10−3 (−) | 7.2809 × 10−3 (=) | 1.1696 × 10−2 (−) | 1.8483 × 10−2 (−) | 1.0004 × 10−2 (−) | 6.9043 × 10−3
UF4 | Std | 8.69 × 10−4 | 1.51 × 10−3 | 5.88 × 10−4 | 1.18 × 10−3 | 5.22 × 10−3 | 1.44 × 10−3 | 6.55 × 10−4
UF5 | Average | 2.7938 × 10−2 (=) | 4.5153 × 10−4 (+) | 1.4962 × 10−2 (+) | 6.5039 × 10−2 (−) | 6.7016 × 10−2 (−) | 1.4595 × 10−1 (−) | 2.7739 × 10−2
UF5 | Std | 2.15 × 10−2 | 9.05 × 10−4 | 1.29 × 10−2 | 4.55 × 10−2 | 4.03 × 10−2 | 8.47 × 10−2 | 2.45 × 10−2
UF6 | Average | 6.5459 × 10−2 (−) | 5.4275 × 10−2 (−) | 1.0577 × 10−2 (−) | 2.4009 × 10−2 (−) | 2.3187 × 10−1 (−) | 9.4877 × 10−2 (−) | 5.8230 × 10−3
UF6 | Std | 6.07 × 10−2 | 7.86 × 10−2 | 1.83 × 10−2 | 1.45 × 10−2 | 2.95 × 10−1 | 4.76 × 10−2 | 5.38 × 10−3
UF7 | Average | 2.5451 × 10−3 (+) | 4.0496 × 10−3 (+) | 7.8085 × 10−3 (−) | 1.2477 × 10−2 (−) | 1.6111 × 10−2 (−) | 3.7577 × 10−2 (−) | 6.0753 × 10−3
UF7 | Std | 1.89 × 10−3 | 7.63 × 10−3 | 5.71 × 10−3 | 2.76 × 10−3 | 8.23 × 10−3 | 3.28 × 10−2 | 6.67 × 10−3
UF8 | Average | 1.3926 × 10−1 (=) | 2.1788 × 10−1 (−) | 1.0950 × 10−1 (=) | 1.5689 × 10−1 (−) | 2.6998 × 10−1 (−) | 6.3257 × 10−2 (+) | 1.2436 × 10−1
UF8 | Std | 1.65 × 10−2 | 7.52 × 10−2 | 2.95 × 10−2 | 3.80 × 10−2 | 5.11 × 10−2 | 5.47 × 10−3 | 3.87 × 10−2
UF9 | Average | 1.0959 × 10−1 (−) | 8.7028 × 10−2 (−) | 9.9912 × 10−2 (−) | 6.0917 × 10−1 (−) | 1.2816 × 10−1 (−) | 6.4226 × 10−2 (+) | 7.6352 × 10−2
UF9 | Std | 1.96 × 10−2 | 1.70 × 10−2 | 2.67 × 10−2 | 1.72 × 10−1 | 5.27 × 10−3 | 6.94 × 10−3 | 1.44 × 10−2
UF10 | Average | 2.1199 × 10−1 (−) | 2.8860 × 10−3 (+) | 1.7352 × 10−1 (−) | 3.2344 × 10−1 (−) | 5.5619 × 10−1 (−) | 9.4595 × 10−2 (+) | 1.3956 × 10−1
UF10 | Std | 9.62 × 10−2 | 2.66 × 10−3 | 2.98 × 10−2 | 3.67 × 10−2 | 4.79 × 10−1 | 7.18 × 10−2 | 1.61 × 10−1
+/−/= |  | 8/10/6 | 9/8/5 | 6/10/6 | 2/20/1 | 6/16/2 | 9/10/3 |
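The "+", "−", and "=" marks in Tables 3 and 4 record, for each test problem, whether a compared algorithm is significantly better than, worse than, or statistically indistinguishable from MOQSOA over the repeated runs; the bottom rows accumulate these counts per algorithm. The sketch below illustrates one way such marks can be produced from per-run metric samples. It assumes a Wilcoxon rank-sum test at a 0.05 significance level and a median-based direction rule, which may differ in detail from the procedure behind the tables (the paper cites the Wilcoxon test [60]); the function and variable names and the synthetic data are illustrative only.

```python
import numpy as np
from scipy.stats import ranksums

def mark(competitor_runs, moqsoa_runs, alpha=0.05):
    """Return '+', '-', or '=' for one competitor on one test problem.
    '+' : competitor significantly better (smaller IGD/Spacing) than MOQSOA,
    '-' : significantly worse, '=' : no significant difference."""
    _, p_value = ranksums(competitor_runs, moqsoa_runs)
    if p_value >= alpha:
        return "="
    # IGD and Spacing are both minimized, so smaller medians are better.
    return "+" if np.median(competitor_runs) < np.median(moqsoa_runs) else "-"

# Illustrative use with synthetic IGD samples from repeated independent runs.
rng = np.random.default_rng(1)
nsga2_igd = rng.normal(4.8e-3, 2.0e-4, size=30)
moqsoa_igd = rng.normal(4.1e-3, 7.0e-5, size=30)
print(mark(nsga2_igd, moqsoa_igd))   # expected '-' for this synthetic data
```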
Table 5. Results influenced by the real-coded quantum representation.

Function | Metrics | MOSOA IGD | MOQSOA IGD | MOSOA Spacing | MOQSOA Spacing
ZDT1 | Average | 4.0025 × 10−3 (=) | 4.1101 × 10−3 | 5.1220 × 10−3 (=) | 5.3225 × 10−3
ZDT1 | Std | 6.55 × 10−5 | 7.28 × 10−5 | 3.26 × 10−4 | 6.57 × 10−4
ZDT2 | Average | 3.8531 × 10−3 (=) | 3.8136 × 10−3 | 4.7874 × 10−3 (=) | 4.4243 × 10−3
ZDT2 | Std | 3.04 × 10−5 | 5.67 × 10−5 | 2.14 × 10−4 | 1.27 × 10−4
ZDT3 | Average | 7.4483 × 10−3 (−) | 6.4169 × 10−3 | 1.4084 × 10−2 (=) | 1.3825 × 10−2
ZDT3 | Std | 5.28 × 10−4 | 1.39 × 10−4 | 1.78 × 10−4 | 3.47 × 10−5
ZDT6 | Average | 3.9728 × 10−3 (−) | 3.0046 × 10−3 | 2.2837 × 10−3 (=) | 2.1262 × 10−3
ZDT6 | Std | 3.59 × 10−4 | 2.18 × 10−4 | 1.11 × 10−5 | 3.36 × 10−4
DTLZ4 | Average | 5.1377 × 10−1 (−) | 3.7873 × 10−1 | 3.1023 × 10−2 (=) | 3.2762 × 10−2
DTLZ4 | Std | 4.46 × 10−1 | 2.81 × 10−1 | 4.54 × 10−2 | 4.08 × 10−2
DTLZ6 | Average | 6.3906 × 10−3 (−) | 4.9789 × 10−3 | 1.2702 × 10−2 (=) | 1.2237 × 10−2
DTLZ6 | Std | 2.06 × 10−5 | 3.38 × 10−5 | 2.85 × 10−4 | 2.05 × 10−4
UF6 | Average | 3.6369 × 10−1 (−) | 2.6237 × 10−1 | 8.1922 × 10−3 (−) | 5.8230 × 10−3
UF6 | Std | 1.95 × 10−1 | 1.35 × 10−1 | 5.45 × 10−3 | 5.38 × 10−3
+/−/= |  | 0/5/2 |  | 0/1/6 |
Table 6. Results influenced by the nonlinear migration operation.

Function | Metrics | MOQSOA-LD 200th Iteration | MOQSOA 200th Iteration | MOQSOA-LD 500th Iteration | MOQSOA 500th Iteration | MOQSOA-LD 1000th Iteration | MOQSOA 1000th Iteration
ZDT1 | Average | 5.7792 × 10−3 (−) | 4.1651 × 10−3 | 3.8902 × 10−3 (=) | 3.8901 × 10−3 | 3.8882 × 10−3 (=) | 3.8881 × 10−3
ZDT1 | Std | 3.70 × 10−4 | 1.16 × 10−4 | 8.27 × 10−5 | 1.18 × 10−6 | 5.25 × 10−8 | 8.11 × 10−8
ZDT2 | Average | 9.7120 × 10−3 (−) | 6.6238 × 10−3 | 6.4306 × 10−3 (=) | 6.4205 × 10−3 | 6.4202 × 10−3 (=) | 6.4162 × 10−3
ZDT2 | Std | 4.52 × 10−3 | 1.61 × 10−4 | 2.00 × 10−5 | 1.07 × 10−5 | 2.72 × 10−6 | 7.51 × 10−6
DTLZ6 | Average | 5.0310 × 10−3 (=) | 4.9659 × 10−3 | 5.0066 × 10−3 (=) | 4.9540 × 10−3 | 4.9813 × 10−3 (=) | 4.9534 × 10−3
DTLZ6 | Std | 4.28 × 10−5 | 7.84 × 10−5 | 9.53 × 10−5 | 3.38 × 10−5 | 5.38 × 10−5 | 8.62 × 10−5
+/−/= |  | 0/2/1 |  | 0/0/3 |  | 0/0/3 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
