1. Introduction
Population-based intelligence algorithms have been widely used in many fields, such as UAV path planning [1,2,3], combinatorial optimization [4,5], and community detection [6,7], because of their simple principles, easy implementation, strong scalability, and high optimization efficiency. With the increase in the speed of intelligent computing and the development of artificial intelligence, many excellent intelligent algorithms have been proposed, such as the seagull optimization algorithm (SOA) [8], artificial bee colony (ABC) algorithm [9], and gray wolf optimization (GWO) algorithm [10]. In addition, several intelligent algorithms were proposed earlier and are relatively well developed, such as the particle swarm optimization (PSO) algorithm [11], genetic algorithm (GA) [12], ant colony optimization (ACO) algorithm [13], starling murmuration optimizer (SMO) [14], and simulated annealing (SA) algorithm [15].
In 2017, the BAS algorithm was proposed by Jiang [16]. The largest difference between the BAS algorithm and other intelligent algorithms is that the BAS algorithm needs only one beetle to search. Owing to its simple principle, few parameters, and low computational cost, it has been successfully applied in the following optimization fields. Khan et al. proposed an enhanced BAS with zeroing neural networks for solving constrained optimization problems online [17]. Sabahat et al. used the BAS algorithm to address the low positioning accuracy of sensors in Internet of Things applications [18]. Khan et al. optimized the trajectory of a five-link biped robot based on the longhorn BAS algorithm [19]. Jiang et al. implemented a dynamic attitude configuration of a wearable wireless body sensor network through a BAS strategy [20]. Khan et al. proposed a BAS-based strategy to search for the optimal control parameters of a complex nonlinear system [21].
Although the BAS algorithm exhibits unique advantages in computational cost and simplicity of principle, its optimization performance degrades drastically, and it even fails to search with high probability, when dealing with multidimensional (more than three-dimensional) problems. The reason is that the BAS algorithm is a single-individual search algorithm: during the search, the individual can only move toward one extreme point. Multidimensional problems often have more than one extreme point, so the beetle is likely to fall into a local extreme point. Moreover, the step size decreases exponentially during exploration, which means the beetle may be unable to jump out of local optima. For these reasons, the BAS algorithm is not equipped to handle complex problems with three or more dimensions.
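For illustration, the canonical BAS update described above can be sketched as follows. This is a minimal sketch assuming typical parameter choices (antenna sensing length d, geometric step decay eta), not the exact implementation from [16]; it makes the two weaknesses visible in code: a single search individual and an exponentially decaying step.

```python
import math
import random

def bas_minimize(f, x0, step=1.0, d=1.0, eta=0.95, iters=200):
    """Minimal Beetle Antennae Search: one beetle probes two antenna
    points along a random direction and walks away from the worse one;
    the step length decays geometrically each iteration."""
    x = list(x0)
    n = len(x)
    best_x, best_f = list(x), f(x)
    for _ in range(iters):
        # random unit direction for the antennae
        b = [random.gauss(0, 1) for _ in range(n)]
        norm = math.sqrt(sum(v * v for v in b)) or 1.0
        b = [v / norm for v in b]
        left = [x[i] + d * b[i] for i in range(n)]
        right = [x[i] - d * b[i] for i in range(n)]
        # move opposite the sign of the difference sensed by the antennae
        sgn = 1.0 if f(left) > f(right) else -1.0
        x = [x[i] - step * sgn * b[i] for i in range(n)]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = list(x), fx
        step *= eta  # exponential decay: the source of premature stagnation
        d = max(d * eta, 0.01)
    return best_x, best_f

# 2-D sphere function: BAS handles such low-dimensional problems well
x, fx = bas_minimize(lambda v: sum(t * t for t in v), [3.0, -2.0])
```

In higher dimensions the single random antenna direction samples the search space too sparsely, which is precisely the failure mode the QBSO addresses with a swarm.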
To remedy the BAS algorithm's inability to handle high-dimensional problems, a quantum-based beetle swarm optimization (QBSO) algorithm inspired by quantum evolution [22] is proposed in this paper. On the one hand, quantum bits are used to represent the current best solution as a linear superposition of the probability states of "0" and "1", improving the early exploration capability of the QBSO algorithm. On the other hand, the individual search is replaced with a swarm search, and a dynamic step adjustment strategy is introduced to improve the exploitation capability of the beetles. Our work makes two main contributions:
We overcame the BAS algorithm's inability to handle high-dimensional optimization problems; the designed QBSO algorithm performs excellently on 30-dimensional CEC benchmark functions.
We used the quantum representation to balance the population size against exploratory power and algorithmic speed, using fewer individuals to represent more information about the population.
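The quantum representation mentioned above can be sketched concretely. In quantum-inspired evolutionary computation, each bit of a solution is a qubit pair (α, β) with α² + β² = 1, and "observing" the qubit string collapses it into a classical 0/1 string; a single qubit individual therefore encodes a probability distribution over many classical solutions. This is a generic sketch of that standard scheme, not the paper's exact encoding.

```python
import math
import random

def observe(qubits):
    """Collapse a qubit string [(alpha, beta), ...] into a classical
    0/1 string: bit i becomes 1 with probability beta_i ** 2."""
    return [1 if random.random() < beta ** 2 else 0 for _, beta in qubits]

# One qubit individual initialized to the uniform superposition
# (alpha = beta = 1/sqrt(2)), so every classical 8-bit string is
# equally likely to be observed.
n_bits = 8
individual = [(1 / math.sqrt(2), 1 / math.sqrt(2))] * n_bits

sample = observe(individual)
```

Because one such individual implicitly covers the whole 0/1 solution space, a swarm of only eight qubit individuals can carry population information that a classical representation would need far more individuals to express.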
The structure of this article is as follows. Section 2 briefly describes the principle of the BAS algorithm, including the meanings of its parameters and its procedure. The innovations of the algorithm (i.e., the quantum representation (QR) and quantum rotation gate (QRG)) are presented in Section 3. A series of simulation tests is presented in Section 4, where the optimization performance of the QBSO algorithm is evaluated by solving four benchmark functions with three comparison algorithms under different population sizes. Section 5 concludes the paper.
2. Related Work
Although the BAS algorithm outperforms other swarm intelligence algorithms on some low-dimensional problems, as mentioned above, its performance on high-dimensional variable optimization problems is poor or even largely ineffective. To solve this problem, researchers have carried out related improvement work.
Khan [17] explained the inability of the BAS algorithm to handle high-dimensional optimization problems, claiming that the BAS algorithm has a "virtual particle" limitation: it computes the objective function three times per iteration. To overcome this problem, a continuous-time variant of the BAS was proposed in which the "virtual particle" limitation is eliminated. In this algorithm, a delay factor was introduced, making it critical to keep track of the previous states to determine the current state. Furthermore, the parallel processing nature of a zeroing neural network was integrated with the BAS to further boost its search for an optimal solution.
Wang [23] combined the population algorithm with a feedback-based step-size strategy, but this ignores the information interaction between individuals and the population and blindly expands the population size, which inevitably increases the computational cost. To accelerate convergence and avoid falling into local optima, adaptive moment estimation was introduced into the algorithm [24]; it adjusts the step in each dimension using ADAM update rules, replacing the single step size shared by all dimensions. However, that algorithm only performs well on nonconvex problems. Lin [25] added linearly decreasing inertia weights to the decay of the beetle's step size to guarantee that the late-stage step remains large enough to jump out of local optima; however, this also slows the algorithm's convergence in the later stages.
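The ADAM-based per-dimension step adaptation mentioned above can be sketched generically. This is the standard ADAM update rule, not the exact code of [24]; in a BAS variant, the per-dimension antenna difference can serve as the gradient surrogate `grad`.

```python
def adam_step(grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM-style update producing a per-dimension step vector.
    m and v are the running first and second moment estimates;
    t is the 1-based iteration counter used for bias correction."""
    m = [b1 * mi + (1 - b1) * g for mi, g in zip(m, grad)]
    v = [b2 * vi + (1 - b2) * g * g for vi, g in zip(v, grad)]
    m_hat = [mi / (1 - b1 ** t) for mi in m]  # bias correction
    v_hat = [vi / (1 - b2 ** t) for vi in v]
    step = [lr * mh / (vh ** 0.5 + eps) for mh, vh in zip(m_hat, v_hat)]
    return step, m, v

# First update: dimensions with equal |grad| receive equal |step| of about lr,
# so each dimension's effective step adapts independently over time.
step, m, v = adam_step([1.0, -1.0], [0.0, 0.0], [0.0, 0.0], t=1)
```

The point of the rule is that dimensions with persistently large antenna differences get proportionally damped steps, removing the need for one global exponentially decaying step size.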
Zhou [26] combined the BAS algorithm with the solid annealing process from the perspective of algorithm combination; the complementary advantages eliminate the BAS algorithm's inability to handle optimization problems in more than three dimensions. It seems that most current researchers cleverly circumvent the shortcomings of the BAS through the fusion of multiple intelligent optimization algorithms. Shao [27] proposed a beetle swarm algorithm that divides individuals into elite individuals and other individuals: each elite individual forms a unique clique, and the individuals in the group move toward the optimal solution under the guidance of the elite individuals. Yu [28] incorporated the BAS algorithm as a search strategy into the gray wolf optimization algorithm to retain the advantages of the BAS while avoiding high-dimensional divergence; however, this does not essentially solve the deficiency of the BAS. Lv [29] integrated mutation and crossover into the population evolution process to improve the global search, which is, in simple terms, a fusion of the BAS algorithm with several features of the genetic algorithm. All of the studies above share a similarity: they use group search to expand the search dimension and compensate for a single individual's lack of search ability in higher dimensions. However, this operation runs contrary to the essence of the BAS algorithm, which is to be "simple" and "rapid".
Quantum computing is based on quantum bits (qubits), which, unlike the classical 0/1 bits of a computer, can be in a linear superposition of the two states. Owing to the unique superposition, entanglement, and interference properties of quantum computing, quantum-based algorithms in the field of optimization have great potential to maintain population diversity and prevent falling into local optima [30].
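The quantum rotation gate, the variational operator used by quantum-inspired algorithms of this kind (and by the QBSO in Section 3), can be illustrated as follows. The 2x2 rotation matrix is standard; the rotation angle here (5 degrees toward the |1> state) is an arbitrary illustrative choice, since in practice the sign and magnitude of the angle are chosen by comparing each bit with the current best solution.

```python
import math

def rotate(alpha, beta, theta):
    """Apply the 2x2 quantum rotation gate to a qubit (alpha, beta).
    The gate is unitary, so alpha^2 + beta^2 = 1 is preserved exactly."""
    a = math.cos(theta) * alpha - math.sin(theta) * beta
    b = math.sin(theta) * alpha + math.cos(theta) * beta
    return a, b

# Rotate a uniform-superposition qubit toward the |1> state: after the
# rotation, the probability of observing a 1 (beta**2) exceeds 0.5.
alpha, beta = 1 / math.sqrt(2), 1 / math.sqrt(2)
alpha, beta = rotate(alpha, beta, math.radians(5))
```

Because the gate never collapses the qubit, repeated small rotations steer the sampling distribution toward promising bits while the remaining amplitude on the other state preserves diversity.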
Kundra [31] combined the FIRED algorithm with the cuckoo search optimization algorithm, using the quantum superposition state to ensure population diversity. Zamani [32] proposed a quantum-based algorithm for bird navigation optimization that extends the empirical and social learning of the PSO algorithm into short-term and long-term memory; using the 0-1 quantum representation for the crossover operation, cleverly combined with differential evolution, its quantum mutation and quantum crossover improve the probability of the algorithm jumping out of local optima. Inspired by [32], Nadimi-Shahraki extended the QANA algorithm to a binary representation for solving the feature selection problem on large medical datasets, with satisfactory results [33]. Zhou [34] introduced a truncated mean stabilization strategy based on the quantum particle swarm algorithm [35], while using quantum wave functions to locate the global optimal solution; the improved algorithm increases population diversity and fusion efficiency. Hao [36] designed a Hamiltonian mapping between the problem domain and the quantum domain and solved general locally constrained combinatorial optimization problems based on a quantum tensor network algorithm. Amaro [37] explored the use of causal cones to reduce the number of qubits required on a quantum computer and introduced a filtering variational quantum eigensolver to make combinatorial optimization more efficient. Fallahi [38] used quantum solitons instead of a wave function and combined them with the PSO algorithm to improve its performance. Soloviev [39] proposed a quantum approximate optimization algorithm for the problem of Bayesian network structure learning. Another study [40] introduced quantum computing mechanisms into the bat algorithm, incorporating a chaotic cloud mechanism to accelerate the convergence of positive individuals and a chaotic perturbation of negative individuals to increase population diversity; the algorithm's ability to handle complex optimization problems was verified through comparative experiments.
In summary, it can be concluded that the integration of population-based BAS with quantum theory is a feasible solution.
4. Experiment
Since the BAS algorithm cannot solve high-dimensional complex optimization problems, it cannot be used in simulation comparison experiments with the QBSO algorithm. Therefore, the pigeon-inspired optimization (PIO) algorithm, seagull optimization algorithm (SOA), gray wolf optimization (GWO) algorithm, and beetle swarm optimization (BSO) algorithm [41] were chosen as the comparison objects. To ensure the validity of the experimental results, the common parameter settings were identical across all algorithms, with the rotation angle in the QBSO set to −11° [22]. The other algorithm parameters remained the same as in the original literature. We used trial and error to select the number of iterations: with a population size of 30, the Griewank function was optimized with different numbers of iterations (see Figure 4).
When the number of iterations reaches 100, all algorithms basically converge to near the global optimal solution and are comparable with one another. Therefore, we set the number of iterations to 100.
To ensure that the PIO, SOA, and GWO algorithms explore and exploit well, researchers usually keep the population size between 30 and 100. Too small a population impairs the searching and convergence abilities of the algorithm; too large a population wastes population resources and increases the search time. To verify that the quantum representation can express richer population forms with fewer individuals, comparison experiments with the population size set to 8 and 30 were performed.
We conducted multiple comparison experiments on both unimodal and multimodal unconstrained optimization problems. A unimodal benchmark function has only one optimal solution and can be used to measure how quickly an algorithm converges to the vicinity of the optimum. A multimodal benchmark function has multiple optima and is used to measure an algorithm's ability to jump out of local optima.
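Two classic representatives of these benchmark families, which commonly appear in CEC-style test suites (we use them here only to illustrate the unimodal/multimodal distinction, not as the paper's exact Table 1 and Table 4 entries), are the Sphere and Rastrigin functions:

```python
import math

def sphere(x):
    """Unimodal: a single global minimum f(0) = 0; its smooth bowl
    shape tests how fast an algorithm converges toward the optimum."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Multimodal: the global minimum f(0) = 0 is surrounded by a
    regular grid of local minima (e.g., f([1]) = 1), which tests the
    ability to escape local optima."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

print(sphere([0.0] * 30), rastrigin([0.0] * 30))  # prints 0.0 0.0
```

In 30 dimensions, Rastrigin has a vast number of local minima, which is why single-individual searchers such as the BAS stagnate on it while swarm-based methods can still make progress.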
4.1. Unimodal Unconstrained Optimization
A unimodal function has only a single optimal solution; the benchmark problems are listed in Table 1 [42], where the decision variables of two of the functions are 2-dimensional and those of the other functions are 30-dimensional. The formulation of each function, its global minimum, and the values of the estimated variables are shown in Table 1.
To demonstrate that the QBSO algorithm can exhibit excellent optimization performance with a relatively small population, we conducted comparative experiments with population sizes of 8 and 30 on the unimodal optimization problems. Each algorithm was run independently 100 times. The best, worst, average, and variance of the results obtained by each algorithm were collected to verify its performance. The optimization results for the unimodal benchmark functions are shown in Table 2 and Table 3.
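The statistics collected over the 100 independent runs can be computed with the standard library; the `run_once` function below is a hypothetical placeholder for a single optimization run, included only to make the protocol sketch self-contained.

```python
import random
import statistics

def run_once(seed):
    """Placeholder for one independent optimization run; returns the
    best objective value found (hypothetical, for illustration only)."""
    random.seed(seed)
    return min(random.uniform(0.0, 1e-3) for _ in range(100))

# 100 independent runs, each with its own seed
results = [run_once(s) for s in range(100)]

best, worst = min(results), max(results)
mean = statistics.mean(results)
var = statistics.variance(results)  # sample variance across runs
```

Reporting the worst result and the variance alongside the best and mean is what distinguishes a robust algorithm from one that only occasionally finds the optimum.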
We randomly chose 1 of the 100 independent runs and plotted the optimization iteration process, as shown in Figure 4. Considering that, at a population size of eight, the optimization results of the PIO algorithm, SOA, and GWO algorithm differed so greatly from those of the QBSO that the QBSO curve would be compressed into an approximately horizontal line, we omit the iteration curve plot for the population size of eight.
4.2. Multimodal Unconstrained Optimization
Multimodal functions contain more than one optimal solution, which means an algorithm is more likely to fall into a local optimum when optimizing them. Population-based intelligence optimization algorithms have the upper hand on such functions, and this is the idea behind our improvement: collaborative search among multiple individuals is less likely to fall into local optima than single-individual algorithms such as the BAS. We solved these multimodal benchmark functions with the solution space dimension set to 30. The formulation of each function, its global minimum, and the values of the estimated variables are shown in Table 4.
Each algorithm was run independently 100 times. The optimization results for the multimodal benchmark functions with population sizes of 30 and 8 are shown in Table 5 and Table 6.
Similarly, we randomly selected 1 result from the 100 independent runs on the multimodal optimization problems; the iterative process is presented in Figure 5. For the reasons mentioned above, the iteration curve for a population size of eight is not shown.
For algorithm designers, the accuracy and time consumption of an algorithm are difficult to balance. For population-based optimization algorithms, the larger the dimensionality, the larger the population size that must be consumed, and computational complexity analysis makes clear that the time required to maintain accurate optimization results grows exponentially. Our goal is to trade off dimensionality and time for the decision maker and to handle high-dimensional optimization problems with the smallest possible population. We compared the performance of the algorithms in different dimensions on the Rastrigin function; the results are shown in Table 7.
4.3. Population Diversity Study
We introduced a population diversity metric to validate the diversity of the QBSO algorithm. The population diversity formula is as follows, where the reference point is the mean value of the individuals in the current generation. Considering that the BAS is a single-individual search algorithm and cannot constitute a population, we chose the BSO algorithm as the diversity comparison algorithm.
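The diversity formula itself is not reproduced in the extracted text. A commonly used definition, which we assume here, measures the mean Euclidean distance of the $N$ individuals $\mathbf{x}_i(t)$ from the population centroid $\bar{\mathbf{x}}(t)$ (the mean individual of the current generation mentioned above):

```latex
D(t) = \frac{1}{N} \sum_{i=1}^{N} \left\lVert \mathbf{x}_i(t) - \bar{\mathbf{x}}(t) \right\rVert
     = \frac{1}{N} \sum_{i=1}^{N} \sqrt{\sum_{d=1}^{D} \left( x_{i,d}(t) - \bar{x}_d(t) \right)^2},
\qquad
\bar{\mathbf{x}}(t) = \frac{1}{N} \sum_{i=1}^{N} \mathbf{x}_i(t)
```

Under this definition, $D(t)$ shrinks toward zero as the swarm collapses onto a single point, so a slower decay of $D(t)$ over the iterations indicates better-maintained population diversity.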
5. Discussion
It can be observed from Table 2 and Table 5 that the QBSO algorithm showed relatively excellent performance in handling both unimodal and multimodal optimization problems. This is because the quantum representation carries more population information and prevents the loss of diversity; at the same time, the quantum rotation gate, acting as a variational operator, helps the algorithm jump out of local optima. The SOA did not perform well because its attack radius does not decrease with iteration; this improves the probability of the SOA jumping out of local optima but sacrifices fast convergence. Therefore, the iteration budget appropriate for the QBSO algorithm may not be suitable for the SOA.
As shown by the data in Table 3 and Table 6, the PIO algorithm, SOA, GWO algorithm, and BSO algorithm cannot converge to the optimal solution when the population size is eight; in contrast, the QBSO algorithm continues to perform well. However, the QBSO algorithm still has several flaws. From the curves in Figure 5 and Figure 6, it can be seen that the QBSO algorithm seems unable to trade off accuracy against convergence speed. There are two reasons for this: first, the feedback-based step size adjustment strategy slows the algorithm's convergence; second, the variational operation of the quantum rotation gate maintains diversity but slightly sacrifices convergence speed. This will be the focus of the next phase of our research.
Our design attempted to handle the high-dimensional optimization problem with a minimal population. For further validation, we measured the population diversity and the effect of dimensionality on the performance of the algorithms.
Figure 7 shows that the QBSO algorithm had a significant advantage in maintaining population diversity when the number of iterations was below 40. This is due to the quantum representation of the QBSO algorithm, which enriches the population information, and the quantum rotation gate, which, as a variational operator, improves population variability.
Table 7 illustrates that, as the dimensionality increases with the population size unchanged, the QBSO algorithm shows the best adaptability, verifying the feasibility of the QBSO for high-dimensional optimization problems.