Article

Quantum Dynamics Framework with Quantum Tunneling Effect for Numerical Optimization

1 Chengdu Institute of Computer Applications, Chinese Academy of Sciences, Chengdu 610213, China
2 School of Electronic Information, Chengdu Jincheng College, Chengdu 611731, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
4 School of Computer and Artificial Intelligence, Southwest Minzu University, Chengdu 610225, China
* Author to whom correspondence should be addressed.
Entropy 2025, 27(2), 150; https://doi.org/10.3390/e27020150
Submission received: 19 November 2024 / Revised: 15 January 2025 / Accepted: 27 January 2025 / Published: 1 February 2025

Abstract: In recent years, optimization algorithms have developed rapidly, and those incorporating quantum ideas have performed particularly well. Inspired by quantum theory, this paper proposes a quantum dynamics framework (QDF) which converts optimization problems into the problem of the constrained ground state of a quantum system and analyzes optimization algorithms by simulating the dynamic evolution of physical optimization systems toward the ground state. Potential energy equivalence and Taylor expansion are applied to the objective function to obtain the basic iterative operations of optimization algorithms. Furthermore, a quantum dynamics framework based on the quantum tunneling effect (QDF-TE) is proposed, which adopts dynamic multiple-group collaborative sampling to strengthen the quantum tunneling effect of the QDF, thereby increasing population diversity and improving algorithm performance. The probability distribution of solutions can be observed visually through the evolution of the wave function, which also indicates that the QDF-TE strengthens the tunneling effect. The QDF-TE was evaluated on the CEC 2017 test suite and shown to be competitive with other heuristic optimization algorithms. The experimental results reveal the effectiveness of introducing a quantum mechanism to analyze and improve optimization algorithms.

1. Introduction

Since Holland introduced the genetic algorithm (GA) in the 1970s [1], optimization algorithms have undergone over half a century of development, giving rise to a plethora of classic algorithms, such as the ant colony optimization (ACO) algorithm [2] and the particle swarm optimization (PSO) algorithm [3]. In recent years, an extensive array of novel, enhanced, and hybrid algorithms has been proposed. However, exploring the fundamental nature of optimization algorithms, or discerning their commonalities in order to attain improved performance, remains an urgent open problem. It is imperative to derive the core iterative process of optimization algorithms so that sophisticated algorithms apt for different application scenarios can be constructed.
Some scholars have made efforts to extract the basic operations and iterative processes. Examples include the simplified version of PSO [4], the bare-bones differential evolution algorithm (BBDE) [5], and the bare-bones fireworks algorithm (FWA) [6]. These studies show that optimization algorithms adopt different models but still share similar basic operations, such as probability sampling, sampling step adjustment, and population strategies. Therefore, we apply quantum dynamics theory to optimization algorithms so that the similar operations which exist widely across different optimization algorithms are explained theoretically, and a simple but effective basic framework of optimization algorithms is introduced. From the perspective of quantum dynamics, the iterative process of optimization algorithms can be regarded as the evolution of a wave function over time, which is in essence a dynamic process.
There are two reasons why we chose quantum dynamics as a theoretical basis and constructed a basic theoretical framework of optimization algorithms.
First of all, quantum theory can describe optimization problems. The quantum describability of optimization problems was demonstrated indirectly by the emergence of quantum annealing theory. In 1994, Finnila proposed the potential energy equivalence and argued that the ground state of the system can be obtained through quantum annealing (QA) or the diffusion Monte Carlo (DMC) method [7]. In 2004, Sun J. proposed quantum particle swarm optimization (QPSO) by adopting a quantum wave function for the probabilistic sampling behavior [8].
Secondly, the migration of solutions of optimization algorithms is comparable to the movement of microscopic particles. The study of quantum stochastic dynamics began with Nelson’s derivation of the Schrödinger equation through Newton’s mechanics [9]. It systematically expounded the relationship between the microscopic random force action and the macro process of system evolution [10]. Based on this theory, the evolution process from the arbitrary initial state to the final state can be deduced by analyzing microscopic forces. When using quantum dynamics to observe the iterative process of optimization algorithms, the optimization process is the evolution of a probability wave of solutions based on the time-dependent Schrödinger equation. This interpretation for optimization algorithms is essentially different from classical conditions [9].
Based on the above contents, this paper adopts the Schrödinger equation to construct a dynamic time evolution equation for optimization algorithms and puts forward a basic iterative structure of optimization algorithms. Without loss of generality, the one-dimensional objective function $f(x)$ (assumed to be twice differentiable) is used as the study object in order to express the core ideas of this paper clearly and concisely.
The main contributions are as follows:
  • We establish the connection between quantum systems and optimization algorithms and propose the quantum dynamics framework (QDF) of optimization algorithms, which provides a profound and extensive physical explanation for the iterative evolution process of optimization algorithms.
  • A physical explanation is given for common operations of optimization algorithms, such as random group search, optimal solutions replacing inferior solutions, and multi-scale processes.
  • Based on the QDF and quantum tunneling effect, the quantum dynamics framework with a tunneling effect (QDF-TE) is proposed. The experimental results show that this method is accurate and stable, especially for high-dimensional complex optimization problems.
The remainder of this paper is organized as follows. Section 2 introduces some related work about the application of quantum theory in optimization. Section 3 introduces the relationship between optimization problems and the Schrödinger equation, and it proposes a QDF for optimization algorithms. Section 4 analyzes the quantum tunneling effect and introduces the QDF-TE. The experimental results for the QDF-TE are shown in Section 5. Section 6 concludes this paper.

2. Related Work

In 1994, Finnila regarded the objective function  f ( x )  of optimization problems as the potential energy  V ( x )  in the quantum system, which is called potential energy equivalence ( V ( x ) = f ( x ) ). Therefore, the ground state of the system was regarded as the solution to optimization problems and could be obtained through the quantum annealing process or diffusion Monte Carlo (DMC) method [7]. In 1998, Japanese scientist Kadowaki proposed a solution in which the Ising model could be used in a transverse magnetic field to realize the quantum annealing process [11]. In 1999, Brooke reported in Science that they successfully realized a simple quantum spin model using Ising magnets in a transverse magnetic field [12].
Using a quantum mechanism in computing has been an active research topic for the last few decades. The quantum mechanism brings a new philosophy to optimization due to its underlying concepts. Based on quantum mechanisms, numerous studies have been put forward to enhance both the efficiency and accuracy of typical optimization algorithms [13,14,15].
In 1996, Narayanan and Moore used the multiverse theory to propose the quantum-inspired genetic algorithm (QIGA) to solve the traveling salesman problem [16]. In 2000, Han et al. proposed the genetic quantum algorithm (GQA) at the 2000 IEEE Conference on Evolutionary Computation, in which the concept of qubits was used to improve the randomness and the global search capability of the algorithm to solve knapsack problems [17]. In 2002, Han proposed a complete version named the quantum-inspired evolutionary algorithm (QIEA) [14]. In 2007, quantum ant colony optimization (QACO) was proposed, which also uses the qubits mechanism [18]. Such research proves that quantum theory is a powerful tool for studying optimization theory. In this kind of research, some probabilistic properties such as the state superposition mechanism and DMC method in quantum mechanics are often applied to enhance the global search ability of algorithms by improving their performance.
However, these works are far from the quantum characteristics of optimization problems themselves. A quantum mechanism can be used to explain the iterative operation process of optimization algorithms essentially and give the basic framework of optimization algorithms to provide a theoretical basis for analyzing optimization algorithms and further proposing excellent optimization algorithms.

3. Quantum Dynamics Framework of Optimization Algorithms

3.1. Schrödinger Equation in Quantum Mechanics

The time-dependent Schrödinger equation, shown in Equation (1), is the most basic equation in quantum mechanics: a partial differential equation describing the movement of microscopic particles. Wave mechanics, built on this equation, is one of the three equivalent formulations of quantum mechanics, alongside the matrix mechanics proposed by Heisenberg and the path integral proposed by Feynman:
$$ i\hbar\frac{\partial\Psi(x,t)}{\partial t}=\left[-\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}}+V(x)\right]\Psi(x,t) \tag{1} $$
where $\hat{H}=-\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}}+V(x)$ is the Hamiltonian of the system, $-\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}}$ is the kinetic energy term of the Hamiltonian, and $V(x)$ is the potential term of the Hamiltonian. Using appropriate dimensions (setting $\hbar=1$), and by letting $D=\frac{\hbar}{2m}$, Equation (1) can be abbreviated as follows:
$$ i\frac{\partial\Psi(x,t)}{\partial t}=\left[-D\frac{\partial^{2}}{\partial x^{2}}+V(x)\right]\Psi(x,t) \tag{2} $$
In 1927, Born put forward a probability interpretation of the wave function within the context of the Schrödinger equation [19]. He posited that the modulus square of the wave function  | Ψ ( x , t ) | 2  represents the probability density of particles being present at a specific position  x  at time  t . Moreover, he asserted that the wave function fully determines the motion state of a microscopic particle.

3.2. Schrödinger Equation for Optimization Problems

The one-dimensional global optimization problem is defined as follows (for simplicity, only the global minimum is defined): find a global minimum $x_0 \in X$ of a function $f(x)$, where $X \subseteq \mathbb{R}$, such that $f(x_0) \le f(x)\ \forall x \in X$.
According to the definition of optimization problems, from a physical point of view, when the quantum system is in the ground state, particles appear near the lowest point of the potential well with maximum probability. When $V(x)=f(x)$, the probability distribution of the ground-state wave function is the probability distribution of the solution of the objective function. To derive the dynamic equation of optimization problems, two important transformations are needed:
  • The solution for the optimization problems is described by a probability distribution similar to the wave function  Ψ ( x , t ) .
  • The objective function of the optimization problems is regarded as the constrained potential energy of the quantum system, namely  V ( x ) = f ( x ) .
Accordingly, the dynamic equation of the optimization problems is as follows:
$$ i\frac{\partial\Psi(x,t)}{\partial t}=\left[-D\frac{\partial^{2}}{\partial x^{2}}+f(x)\right]\Psi(x,t) \tag{3} $$
To express the optimization problems more generally, Wick rotation [20] was employed for the time-dependent Schrödinger equation to transform the real time into an imaginary time  τ = i t . Thus, Equation (3) is rewritten as follows:
$$ \frac{\partial\Psi(x,\tau)}{\partial\tau}=\left[D\frac{\partial^{2}}{\partial x^{2}}-f(x)\right]\Psi(x,\tau) \tag{4} $$
This equation is a general form of the time-dependent Schrödinger equation of optimization problems. It is a dynamic equation which can theoretically describe evolutionary behavior for the solutions to optimization algorithms in an iterative process.

3.3. Taylor Approximation of Objective Function

For the objective function of the optimization problems, its analytic structure is unknown in most cases. Therefore, we can only use a sampling operation to obtain the information for the objective function.
We consider using Taylor expansion to expand the complex objective function in the neighborhood of a certain point into a simple polynomial, which simplifies the objective function as given by Equation (5). The objective function $f(x)$ is assumed to be continuously differentiable and is expanded as follows:
$$ f(x)=\sum_{n=0}^{\infty}\frac{f^{(n)}(x_0)}{n!}\,(x-x_0)^{n} \tag{5} $$
where $n!$ represents the factorial of $n$ and $f^{(n)}(x_0)$ is the $n$th derivative of $f(x)$ at the point $x_0$. The zero-order Taylor approximation of $f(x)$ is $f^{(0)}(x)=f(x_0)$, where $f(x_0)$ is a constant. Thus, by translation, $f^{(0)}(x)=0$. The first-order Taylor approximation of $f(x)$ is $f^{(1)}(x)=f^{(1)}(x_0)(x-x_0)$. If the point $x_0$ is an extreme point, then its first derivative $f^{(1)}(x_0)=0$, and thus the first-order Taylor approximation near the extreme point is $f^{(1)}(x)=0$. The second-order Taylor approximation is $f^{(2)}(x)=\frac{1}{2}f^{(2)}(x_0)(x-x_0)^{2}$, assuming that the higher-order terms are negligible when $(x-x_0)$ is small enough.
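As a quick numerical illustration of these approximation orders (a sketch, not from the paper; the test function, evaluation point, and step size are arbitrary choices), the first-order term vanishes at an extreme point while the second-order term captures the local shape:

```python
import math

# Near an extreme point x0, f'(x0) = 0, so the second-order Taylor term
# dominates the local behavior of f.
def f(x):
    return math.cos(x) + x**2 / 10.0

x0 = 0.0                       # extreme point: f'(0) = 0
h = 1e-3
d1 = (f(x0 + h) - f(x0 - h)) / (2 * h)              # central difference ~ f'(x0)
d2 = (f(x0 + h) - 2 * f(x0) + f(x0 - h)) / h**2     # central difference ~ f''(x0)

x = x0 + 0.05
taylor2 = f(x0) + d1 * (x - x0) + 0.5 * d2 * (x - x0) ** 2
print(abs(d1))                 # first-order term vanishes at the extremum
print(abs(f(x) - taylor2))     # second-order approximation error is tiny
```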

3.3.1. Dynamic Equation of Optimization Algorithms Under Zero-Order Taylor Approximation

The zero-order Taylor expansion of the objective function $f(x)$ is approximately a constant, which contains no guidance information. By translating the coordinate axis, the global optimal value of the function can be shifted to zero. Therefore, the objective function can be regarded as a constant equal to zero:
$$ f(x)=0 \tag{6} $$
The dynamic equation of the optimization problems, which is shown as Equation (4), can be recast as
$$ \frac{\partial\Psi(x,\tau)}{\partial\tau}=D\frac{\partial^{2}\Psi(x,\tau)}{\partial x^{2}} \tag{7} $$
Equation (7) is the dynamic equation obtained from the Schrödinger equation of the optimization problems under zero-order Taylor approximation. This dynamic equation does not contain any information about the objective function  f ( x ) . It represents the dynamic behavior of the random sampling search process of optimization algorithms. This equation is also the Schrödinger equation for free particles in quantum mechanics.
This equation is completely isomorphic with the ordinary diffusion equation, with a diffusion coefficient of  D . Thus, the sampling process of optimization algorithms can be described and explained by the diffusion equation under quantum theory. The evolution of the iterative process of optimization algorithms is essentially a diffusion process. The diffusion process is a group behavior in physics, which shows that the sampling process of optimization algorithms should adopt the group strategy, which has been proven by the construction practice of many optimization algorithms. At present, almost all the algorithms with the best performance adopt the group strategy.
According to the dynamic equation of optimization algorithms, a sampling probe in the population (a walker) with initial position $x$ diffuses with diffusion coefficient $D$. The probability of it appearing at position $x'$ after $\Delta t$ can be expressed by the Green's function corresponding to the diffusion equation:
$$ G(x',x,\Delta t)=\frac{1}{\sqrt{4\pi D\Delta t}}\,e^{-\frac{(x'-x)^{2}}{4D\Delta t}} \tag{8} $$
By letting $\sigma=\sqrt{2D\Delta t}$, Equation (8) can be recast as
$$ G(x',x,\Delta t)=\frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{(x'-x)^{2}}{2\sigma^{2}}} \tag{9} $$
where Green's function of the diffusion equation is transformed into a normal distribution; $\sigma$ is the scale parameter for the sampling, related to the diffusion coefficient $D$ and the time interval $\Delta t$ of the diffusion equation. Green's function $G(x',x,\Delta t)$ describes the Markov process of sampling. Let a walker's position at time $t$ be $x(t)$; then the position $x(t+\Delta t)$ is
$$ x(t+\Delta t)=x(t)+\sigma N(0,1) \tag{10} $$
where  N ( 0 , 1 )  is the standard normal distribution and  σ  is the sampling scale, which continuously decreases when the time interval  Δ t  decreases and can gradually improve the accuracy of the search. This corresponds to the multi-scale sampling behavior, which exists in the sampling process of a large number of optimization algorithms. It can be understood that normal sampling is the best choice without any prior knowledge.
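The walker update of Equation (10) together with the multi-scale reduction of $\sigma$ can be sketched as follows (the halving schedule and iteration counts are illustrative assumptions, not the paper's prescription):

```python
import random

def diffusion_step(x, sigma):
    """One zero-order QDF move, Eq. (10): x' = x + sigma * N(0, 1)."""
    return x + sigma * random.gauss(0.0, 1.0)

# Multi-scale sampling: sigma = sqrt(2 * D * dt) shrinks as the time
# interval dt shrinks; the halving schedule below is illustrative.
random.seed(0)
x, sigma = 0.0, 1.0
for scale in range(5):
    for _ in range(100):
        x = diffusion_step(x, sigma)
    sigma *= 0.5               # finer scale -> higher search accuracy
print(sigma)
```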

3.3.2. Dynamic Equation of Optimization Algorithms Under First-Order Taylor Approximation

The first-order Taylor approximation obtains the guidance information of the objective function near  x 0 , and its expansion is as follows:
$$ f^{(1)}(x)=f^{(1)}(x_0)(x-x_0) \tag{11} $$
The analytic form of $f(x)$ is unknown, and thus the guidance information can only be obtained from the positions and function values of double sampling. Letting $x_0$ be the current position and $x$ be the new sampling position, Equation (11) is recast as
$$ f^{(1)}(x)\approx\frac{f(x)-f(x_0)}{x-x_0}\,(x-x_0)=f(x)-f(x_0) \tag{12} $$
Equation (12) describes the process of optimization algorithms by sampling the neighborhood near  x 0  to obtain the guidance information of the objective function.
Let $\Delta f=f(x)-f(x_0)$, and substitute it into Equation (4) to obtain
$$ \frac{\partial\Psi(x,\tau)}{\partial\tau}=\left[D\frac{\partial^{2}}{\partial x^{2}}-\Delta f\right]\Psi(x,\tau) \tag{13} $$
Equation (13) is the dynamic equation obtained from the Schrödinger equation of the optimization problems under first-order Taylor approximation.
In order to express the influence of the guidance information of the objective function obtained in the sampling process, the Fokker–Planck equation is utilized, which introduces a drift term on the basis of the diffusion equation [21] as follows:
$$ \frac{\partial\Psi(x,\tau)}{\partial\tau}=\left[D\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial f(x)}{\partial x}\right]\Psi(x,\tau) \tag{14} $$
where $|\Psi(x,\tau)|^{2}$ is the probability distribution of the solutions at time $\tau$ in the iterative process of optimization algorithms, $D\frac{\partial^{2}}{\partial x^{2}}$ is the diffusion term, and $\frac{\partial f(x)}{\partial x}$ is the drift term. Equation (13) and the Fokker–Planck equation are isomorphic, and thus $\Delta f$ can be regarded as the drift term of the quantum dynamics equation under the first-order Taylor approximation.
Equation (13) shows that there are two core driving mechanisms in the iterative process of optimization algorithms: one is the normal random sampling mechanism driven by diffusion behavior, corresponding to the diffusion term $D\frac{\partial^{2}}{\partial x^{2}}$, and the other is the guiding mechanism driven by the objective function as a conservative force field, corresponding to the drift term $\Delta f$. When $\Delta f>0$, the current potential is increasing, and the algorithm tends to prevent the walker from moving to this point. When $\Delta f<0$, the current potential is decreasing, and the algorithm tends to push the walker toward this point. The drift term $\Delta f$ corresponds to the drag and guide effect of the objective function as a conservative force field in the search process, and $\Delta f$ corresponds to the first-order Taylor approximation of the objective function.
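One simple reading of these two mechanisms can be sketched as a normally sampled move filtered by the sign of $\Delta f$ (a sketch under the assumption that moves with $\Delta f>0$ are rejected outright; the paper describes the tendency, not a concrete acceptance rule):

```python
import random

def guided_step(x, f, sigma):
    """First-order QDF move: sample x' = x + sigma * N(0, 1), then keep it
    only if the potential decreases (drift term delta_f = f(x') - f(x) < 0).
    The hard accept/reject rule is an assumed, simplest reading of the
    drift mechanism described in the text."""
    x_new = x + sigma * random.gauss(0.0, 1.0)
    return x_new if f(x_new) - f(x) < 0 else x

random.seed(1)
f = lambda x: x * x            # toy objective with its minimum at 0
x = 5.0
for _ in range(2000):
    x = guided_step(x, f, 0.1)
print(abs(x))                  # the walker drifts toward the minimum
```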

3.3.3. Ground-State Wave Function of Optimization Algorithms Under Second-Order Taylor Approximation

Before the Wick rotation, the second-order Taylor approximation of the objective function near the extreme point  x 0  of the global optimal solution is as follows:
$$ f^{(2)}(x)=\frac{1}{2}f^{(2)}(x_0)(x-x_0)^{2} \tag{15} $$
Equation (4) can be obtained under the second-order Taylor approximation of the objective function as follows:
$$ \frac{\partial\psi(x,\tau)}{\partial\tau}=\left[D\frac{\partial^{2}}{\partial x^{2}}-\frac{1}{2}f^{(2)}(x_0)(x-x_0)^{2}\right]\psi(x,\tau) \tag{16} $$
Equation (16) is the (imaginary-time) Schrödinger equation of the harmonic oscillator in quantum theory. Generally, complex vibrations about an equilibrium position and the Lennard-Jones potential are approximated by the harmonic oscillator potential. The ground-state wave function of a quantum system constrained by a harmonic oscillator potential is a normal distribution. Therefore, the normal distribution can be used as the criterion for the population to have converged to the ground state at a certain scale in the iterative process.
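For reference, this is the standard textbook result: for the harmonic potential $V(x)=\frac{1}{2}k(x-x_0)^{2}$ with $k=f^{(2)}(x_0)$ and $\omega=\sqrt{k/m}$, the ground-state wave function is

$$ \psi_0(x)=\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}\exp\left[-\frac{m\omega\,(x-x_0)^{2}}{2\hbar}\right], $$

so $|\psi_0(x)|^{2}$ is a normal density with variance $\hbar/(2m\omega)$, which is why a normal distribution can serve as the ground-state convergence criterion.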

3.4. Algorithm of Quantum Dynamics Framework

As mentioned above, zero-order Taylor approximation of an objective function is a diffusion equation, which explains the random normal sampling search behavior of optimization algorithms. The first-order Taylor approximation of an objective function can be written as a Fokker–Planck equation, which explains the mechanism of obtaining the guidance information of an objective function in the sampling process. The second-order Taylor approximation of an objective function is a quantum harmonic oscillator, which indicates that a normal distribution can be used as the ground-state convergence criterion in the iterative process of optimization algorithms.
According to the above conclusions from quantum dynamics, the algorithm process of the QDF can be given as shown in Algorithm 1.
Algorithm 1: General framework of QDF.
(The algorithm listing appears as an image in the original article.)
In summary, the sampling process of optimization algorithms includes three basic strategies: (1) the population strategy; (2) normal distribution sampling strategy; and (3) multi-scale strategy. These three strategies and their variants are usually used in most optimization algorithms.
In addition to the three basic strategies mentioned above, utilization of the mean information in Algorithm 1 refers to the estimation of the optimal solution region using the current population distribution. Common methods include introducing an average optimal position vector into the position update equation in QPSO [22] and using the population mean coordinate to generate new individuals to replace those with the worst fitness in the multi-scale quantum harmonic oscillator optimization algorithm (MQHOA). Utilizing the mean information can effectively improve the accuracy of algorithms. Following the conclusion that the ground-state wave function of a harmonic oscillator is a normal distribution under the second-order Taylor approximation of the objective function, and taking the normal distribution as the approximate ground-state wave function, a variance of the $p$ walkers less than $\sigma$ is regarded as the approximate criterion for judging whether the system has reached the ground state.
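The structure of Algorithm 1, as summarized in the text, can be sketched in Python as follows. The population size, the 0.5 scale-reduction factor, the greedy replacement rule, and the loop bounds are illustrative assumptions rather than the paper's exact settings:

```python
import random
import statistics

def qdf(f, lo, hi, k=20, eps=1e-6, max_iter=10_000):
    """Sketch of the QDF iterative structure: population strategy, normal
    sampling (Eq. (10)), better solutions replacing inferior ones, use of
    the mean information, and the variance-based ground-state criterion
    with multi-scale reduction of sigma."""
    sigma = (hi - lo) / 2.0
    walkers = [random.uniform(lo, hi) for _ in range(k)]
    it = 0
    while sigma > eps and it < max_iter:
        for i, x in enumerate(walkers):
            x_new = x + sigma * random.gauss(0.0, 1.0)   # normal sampling
            if f(x_new) < f(x):                          # better replaces inferior
                walkers[i] = x_new
        # mean information: pull the worst walker toward the population mean
        mean = statistics.fmean(walkers)
        worst = max(range(k), key=lambda i: f(walkers[i]))
        if f(mean) < f(walkers[worst]):
            walkers[worst] = mean
        # ground-state criterion: population spread below the current scale
        if statistics.pstdev(walkers) < sigma:
            sigma *= 0.5                                 # move to a finer scale
        it += 1
    return min(walkers, key=f)

random.seed(42)
best = qdf(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
print(best)                    # near the minimum at x = 3
```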
The QDF is the core iterative process of optimization algorithms. This basic structure is highly similar to the bare-bones edition of some optimization algorithms [5,6]. An optimization algorithm with good performance could usually be formed by adding some strategies with prior knowledge of this basic iterative structure.

4. Quantum Dynamics Framework with the Tunneling Effect (QDF-TE)

In complex optimization problems, there are numerous local optimal regions separated by potential barriers, causing particles to become trapped and unable to find the global optimum. The quantum tunneling effect reflects that the particles in the quantum system can penetrate through the potential barrier and appear in the classically forbidden zone. In optimization problems, the particle (sampling point) has a probability of jumping from the local optimal to the global optimal region, which is called the quantum tunneling effect.
In the optimization field, the quantum tunneling effect first appeared in quantum annealing (QA) [7], which outperforms simulated annealing (SA) [23] in some cases. QA helps a system escape local minima by tunneling quantum-mechanically through barriers rather than thermally overcoming them, using an artificial and appropriate source of quantum fluctuations which is slowly switched off according to the annealing schedule [24]. During the annealing process, the system visits various local energy minima freely. But the annealing schedule is controlled by a specific function, such as a power law or linear function, and once the function is selected, the system can only iterate according to this fixed schedule. Consequently, the tunneling effect may not be adjusted automatically according to the actual state of the iteration.
Based on the QDF, this section further investigates the relationship between the probability of the tunneling effect and the fitness of the population, and it proposes the QDF-TE, which can dynamically adjust the probability of the tunneling effect based on the current fitness landscape to improve the optimization capability.

4.1. Motivation

According to path integration [25], the general solution of the Schrödinger equation can be written as follows:
$$ \psi(x,\tau)=\int dx_0\,K(x,\tau\,|\,x_0,0)\,\psi(x_0,0) \tag{17} $$
where $K(x,\tau\,|\,x_0,0)$ is the propagator, representing the probability of a particle migrating from position $x_0$ at $t=0$ to position $x$ at time $\tau$, as follows:
$$ K(x,\tau\,|\,x_0,0)=\lim_{N\to\infty}\int dx_1\cdots dx_{N-1}\left(\frac{m}{2\pi\Delta\tau}\right)^{\frac{N}{2}}\exp\left\{-\Delta\tau\sum_{j=1}^{N}\left[\frac{m\,(x_j-x_{j-1})^{2}}{2\Delta\tau^{2}}+V(x_j)-E_R\right]\right\} \tag{18} $$
where $\Delta\tau$ is the time interval; as $N\to\infty$, an accurate migration probability is obtained.
We define the kinetic energy term $P(x_n,x_{n-1})$ in Equation (19), which represents the Gaussian probability density of the random variable $x_n$ with mean $x_{n-1}$ and variance $\sigma^{2}=\Delta\tau/m$:
$$ P(x_n,x_{n-1})=\left(\frac{m}{2\pi\Delta\tau}\right)^{\frac{1}{2}}\exp\left[-\frac{m\,(x_n-x_{n-1})^{2}}{2\Delta\tau}\right] \tag{19} $$
We also define the weight factor $W(x_n)$ in Equation (20), which is related to the potential energy and the reference energy:
$$ W(x_n)=\exp\left[-\Delta\tau\left(V(x_n)-E_R\right)\right] \tag{20} $$
We then rewrite the solution of the imaginary time Schrödinger equation as
$$ \psi(x,\tau)=\lim_{N\to\infty}\int\prod_{j=0}^{N-1}dx_j\;\prod_{n=1}^{N}W(x_n)\,P(x_n,x_{n-1})\,\psi(x_0,0) \tag{21} $$
This formula can be solved analytically only for specific forms of $V(x)$. When the objective function is unknown, as in many optimization problems, the Monte Carlo method is used to solve it. The basic idea is to treat the wave function itself as a probability density.
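The Monte Carlo treatment of Equations (19)–(21) can be sketched as a diffusion-Monte-Carlo-style step: each walker makes a Gaussian move (the kinetic term $P$), then is replicated or killed according to the weight $W$. Taking $E_R$ as the current best potential value and rounding $W$ stochastically into copy counts are common DMC choices assumed here, not details given in this excerpt:

```python
import math
import random

def dmc_step(walkers, V, dtau, m=1.0):
    """One DMC-style update following Eqs. (19)-(20): a Gaussian move with
    variance dtau/m, then stochastic branching by the weight
    W(x) = exp(-dtau * (V(x) - E_R))."""
    E_R = min(V(x) for x in walkers)                  # reference energy
    sigma = math.sqrt(dtau / m)
    new_walkers = []
    for x in walkers:
        x_new = x + sigma * random.gauss(0.0, 1.0)    # sample from P(x_new, x)
        w = math.exp(-dtau * (V(x_new) - E_R))        # weight factor W(x_new)
        copies = int(w + random.random())             # stochastic rounding of w
        new_walkers.extend([x_new] * copies)
    return new_walkers if new_walkers else walkers    # keep the population alive

random.seed(3)
walkers = [random.uniform(-5.0, 5.0) for _ in range(50)]
for _ in range(200):
    walkers = dmc_step(walkers, lambda x: x * x, dtau=0.1)
print(sum(walkers) / len(walkers))   # population concentrates near the minimum
```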
The wave function modulus represents the probability distribution of particles in the feasible solution space. The wave function of the system at a certain moment  τ  is approximated as the superposition of all particle sampling functions. According to Equation (10), new solutions are obtained through normal sampling. We can calculate the probability distribution of a single particle using the following equation:
$$ \psi(x)=\int\frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{(x-x_1)^{2}}{2\sigma^{2}}}\,dx \tag{22} $$
where $x_1$ is the position of the first particle and also the mean. The probability distribution of $k$ particles can then be calculated using Equation (23):
$$ \psi_s(x)=\sum_{i=1}^{k}\int\frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{(x-x_i)^{2}}{2\sigma^{2}}}\,dx \tag{23} $$
where $\psi_s(x)$ is the system wave function and $x_i$ is the position (and the mean) of the $i$th particle. Correspondingly, the tunneling probability of a particle can be approximately calculated from the normal probability density as follows:
$$ P_{te}(x)=\int_{x_1-\Delta}^{x_1+\Delta}\frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{(x-x_1)^{2}}{2\sigma^{2}}}\,dx \tag{24} $$
where $P_{te}(x)$ is the probability of the tunneling effect of a particle at time $\tau$ within $x_1\pm\Delta$ on the horizontal axis, and $\Delta$ is positively correlated with the sampling step size $\sigma$.
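The integral in Equation (24) can be evaluated in closed form with the error function; this small sketch (the sample values of $\sigma$ and $\Delta$ are arbitrary) shows that the mass depends only on the ratio $\Delta/\sigma$, which is why $\Delta$ and $\sigma$ are linked:

```python
import math

def p_te(sigma, delta):
    """Evaluate the integral of Eq. (24): the probability mass of a normal
    sample N(x1, sigma^2) landing inside x1 +/- delta, via the error
    function. The mass depends only on the ratio delta / sigma."""
    return math.erf(delta / (sigma * math.sqrt(2.0)))

# With delta fixed, shrinking sigma concentrates the mass inside x1 +/- delta;
# with delta proportional to sigma (as the text notes), the mass is constant.
for sigma in (1.0, 0.1, 0.01):
    print(round(p_te(sigma, 0.1), 4))
```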
At the beginning of the iteration, the sampling step size $\sigma$ is relatively large, and $P_{te}(x)$ is correspondingly large, which meets the optimization requirements. But when $\sigma$ becomes small in the later stage of the iteration, $P_{te}(x)$ is small and the tunneling effect is weakened, making it difficult to escape from local optima for optimization problems with small local optimum structures.
According to Equation (24), the tunneling ability of a single particle is determined by the sampling scale, and increasing the number of sampling particles k can improve the tunneling ability of the system. However, if we only increase k, the performance of the algorithm declines because the sampling rate is affected. The sampling rate is the proportion of the number of sampling points to the size of the solution space, defined as
$$ \eta=\frac{k\times n}{(d_{\max}-d_{\min})/\varepsilon} \tag{25} $$
where $n$ is the total number of iterations for the function test, $d_{\max}$ and $d_{\min}$ are the upper and lower bounds of the domain, respectively, and $\varepsilon$ is the calculation accuracy. Smaller $\eta$ values yield higher efficiency. Merely increasing the total number of particles $k$ to improve the tunneling ability of the system is therefore not feasible. The multi-population imbalanced sampling method of the QDF-TE increases the tunneling effect while addressing this issue.
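Equation (25) is straightforward to evaluate; a small sketch with illustrative numbers (not values from the paper's experiments):

```python
def sampling_rate(k, n, d_max, d_min, eps):
    """Sampling rate eta of Eq. (25): number of sampled points relative to
    the solution space discretized at accuracy eps."""
    return (k * n) / ((d_max - d_min) / eps)

# Illustrative numbers: 50 walkers, 1000 iterations, domain [-100, 100],
# accuracy 1e-2 -> 5e4 samples over a space of 2e4 cells.
print(sampling_rate(50, 1000, 100.0, -100.0, 1e-2))
```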

4.2. Proposal of QDF-TE

In the QDF-TE, the k sampling points are divided into S subpopulations. The size of each subpopulation $X_i\ (i=1,2,\ldots,S)$ is then adjusted according to the fitness differences between subpopulations, with poorer fitness gaining more particles. We use the dynamic tunneling effect to allocate the number of particles per subpopulation for non-equilibrium sampling. Expanding the size of a poor subpopulation increases the probability of the population reaching the global optimal region through quantum tunneling, and thus the migration ability of the subpopulation toward the optimal region is improved. On the premise of ensuring search efficiency, this strategy concentrates more computing resources on regions of the solution space where subpopulation fitness is poor, improving the global search capability.
Indeed, this strategy has significant advantages from the perspective of exploration ability. On the one hand, these additional particles can increase the coverage of the search space. In complex optimization problems, the global optimal solution may be hidden in rather remote or difficult-to-reach areas, while populations with poor fitness may be in “barren zones” in the search space. By allocating more particles, the search range can be wider, and new potential areas are more likely to be discovered. On the other hand, from an evolutionary perspective, although these additional particles may not have shown significant advantages at present, they provide a guarantee for the diversity of the population. In the subsequent iteration process, as the environment and search state change, these seemingly useless particles may become key factors in finding the global optimal solution due to changes in certain conditions.
However, this strategy inevitably brings about a significant increase in computational costs, and many of these additional particles assigned to populations with poor fitness may not produce any useful results in the actual search process. A large amount of meaningless particle computing significantly prolongs the running time of the entire algorithm, especially for scenarios which have limitations on their computing times and resources.
In the QDF-TE, the total number of particles is fixed, and the computational cost will be relatively lower compared with the case where the total number of particles increases. At the same time, in the later stage of optimization, as the sampling scale decreases, particles gradually gather, and the distance between subpopulations gradually approaches zero. The size difference between subpopulations decreases, and the advantage of subpopulations with poor fitness obtaining more particles also decreases. Therefore, in the QDF-TE, the algorithm mainly consumes resources during early exploration to find the global optimum. In the future, we will further explore the computational resource cost problem of optimization algorithms.
Based on this situation, we need to further weigh and choose optimization algorithms in the real world [26]. In scenarios with strict requirements for the global optimal solution, or with relatively sufficient computing resources, this strategy is the better choice.
The QDF-TE mainly consists of three steps: acquisition of the reference energy $E_R$, determination of the subpopulation size, and Monte Carlo sampling.

4.2.1. Acquisition of Reference Energy $E_R$

The fitness value of the optimal solution in the current iterative population is taken as the reference energy of the system, as shown in the following equation:
$$E_R^g = f(x_b^g) = \min_i f(x_i^g)$$
where $E_R^g$ is the system reference energy for the gth iteration, $f(x_i^g)$ is the fitness value of the ith particle in the gth iteration, and $x_b^g$ is the reference solution. A potential barrier is formed from the optimal value of each subpopulation to the reference energy $E_R$, and the particles of each subpopulation are redistributed through the tunneling effect. The subpopulation with a larger distance to the reference solution $x_b^g$ (i.e., the subpopulation with poorer fitness values) will obtain more sampling particles. By using the tunneling effect, more sampling particles will appear in poor areas, which increases the global search capability of the algorithm.
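As a minimal sketch of this step (the function and variable names are illustrative, not from the authors' implementation), the reference energy is simply the best fitness in the current population:

```python
def reference_energy(fitness):
    """Reference energy of the current iteration: the minimum fitness
    value in the population; the minimizing particle is the reference
    solution x_b."""
    b = min(range(len(fitness)), key=lambda i: fitness[i])
    return fitness[b], b

# Hypothetical fitness values of a five-particle population
E_R, b = reference_energy([3.2, 0.7, 5.1, 0.9, 2.4])
```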

4.2.2. Determination of Subpopulation Size

Based on the distance between the optimal solution $x_{i\_b}^g$ of each subpopulation and the reference solution $x_b^g$, as shown in Equation (27), the size of the subpopulation is determined:
$$d(x_{i\_b}^g, x_b^g) = \left\| x_{i\_b}^g - x_b^g \right\| \quad (27)$$
where $x_{i\_b}^g$ represents the optimal solution of the ith subpopulation in the gth iteration. The calculation of the weight factors $W(x_n)$ in Equation (20) can lead to dynamic changes in the total number of particles in the population, which may result in the algorithm failing to converge or converging prematurely. Since the total number of particles k in this algorithm is fixed, adjusting the weight factor $W(x_n)$ of a subpopulation to change the number of particles it obtains will not cause the convergence problem mentioned above. The weight factor is adjusted through Equation (28) to satisfy $\sum_{i=1}^{S} W(x_{i\_b}^g) = 1$; $W(x)$ is the key to controlling the quantum tunneling of subpopulations:
$$W(x_{i\_b}^g) = \frac{d(x_{i\_b}^g, x_b^g)}{\sum_{i=1}^{S} d(x_{i\_b}^g, x_b^g)} \quad (28)$$
Correspondingly, the number of particles obtained by each subpopulation is
$$m_i^g = W(x_{i\_b}^g) \cdot k \quad (29)$$
where $m_i^g$ is the number of particles in the ith subpopulation of the gth iteration, i.e., the size of the ith subpopulation.
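Equations (27)-(29) can be sketched as follows; the integer rounding and the assignment of leftover particles are our own assumptions, since the paper leaves the discretization of Equation (29) unspecified:

```python
import numpy as np

def subpopulation_sizes(sub_best, x_best, k):
    """Allocate a fixed budget of k particles among subpopulations.

    sub_best : (S, D) array of each subpopulation's best solution x_{i_b}^g
    x_best   : (D,) reference solution x_b^g
    k        : fixed total number of particles
    """
    d = np.linalg.norm(sub_best - x_best, axis=1)   # Eq. (27): distances
    W = d / d.sum()                                 # Eq. (28): weights sum to 1
    m = np.floor(W * k).astype(int)                 # Eq. (29), floored (our choice)
    m[np.argmax(W)] += k - m.sum()                  # keep the total fixed at k
    return m

sizes = subpopulation_sizes(
    np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]]),  # hypothetical subpopulation bests
    np.array([0.0, 0.0]),                            # hypothetical reference solution
    k=100,
)
```

Note that the subpopulation farthest from the reference solution receives the most particles, which is exactly the tunneling-driven reallocation described above.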

4.2.3. Monte Carlo Sampling

During the migration process of particles, Monte Carlo sampling is used to determine the positions of new solutions. Equation (19) is the Monte Carlo sampling function.
For each subpopulation, regeneration proceeds as follows:
$$X_i^{g+1} = N(X_i^g, \mathrm{cov}^g) \quad (30)$$
$$\mathrm{cov}^g = \mathrm{diag}(\sigma_g^2, \ldots, \sigma_g^2) \quad (31)$$
$$\sigma_{g+1} = \sigma_g / C_r \quad (32)$$
where $X_i^g$ is the ith subpopulation of the gth iteration, $\sigma_g$ is the sampling scale of the gth iteration, and $C_r$ is a scale reduction coefficient, usually taken to be $C_r = 1.2$.
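A minimal sketch of this regeneration step, assuming the isotropic diagonal covariance above (function and variable names are illustrative):

```python
import numpy as np

def regenerate(center, sigma, m, rng):
    """Monte Carlo regeneration of one subpopulation: draw m particles
    from a normal distribution centred on `center`, with isotropic
    covariance diag(sigma^2, ..., sigma^2)."""
    center = np.asarray(center, dtype=float)
    return rng.normal(loc=center, scale=sigma, size=(m, center.size))

rng = np.random.default_rng(0)
sigma, Cr = 20.0, 1.2                  # initial scale and reduction coefficient
pop = regenerate([1.0, -2.0], sigma, m=50, rng=rng)
sigma = sigma / Cr                     # shrink the sampling scale each iteration
```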

4.3. Algorithm of QDF-TE

Algorithm 2 shows the complete process of the QDF-TE. Compared with the QDF, the QDF-TE adds Equations (29) and (30). Equation (29) dynamically adjusts the size of the subpopulation based on the fitness values and increases sampling in areas with poor fitness to improve particle tunneling effects, correspondingly increasing the population diversity. Equation (30) adopts the same scale for normal sampling of each particle in the subpopulation.
Algorithm 2: General framework of QDF-TE.
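Since Algorithm 2 is reproduced as an image in the published article, the following is a hedged Python sketch of the loop as we read Section 4.2; the way the population is split into subpopulations, the rounding of subpopulation sizes, and the boundary handling are our own assumptions, not details stated in the paper:

```python
import numpy as np

def qdf_te(f, lb, ub, dim, S=30, k=400, sigma=20.0, Cr=1.2, iters=60, seed=0):
    """Sketch of the QDF-TE loop: evaluate, split the fitness-sorted
    population into S subpopulations, reallocate the fixed budget of k
    particles by distance to the reference solution (Eqs. 27-29), resample
    each subpopulation normally (Eqs. 30-31), and shrink the scale (Eq. 32)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(k, dim))
    best_x, best_f = None, float("inf")
    for _ in range(iters):
        fit = np.array([f(x) for x in X])
        if fit.min() < best_f:                       # track the reference solution
            best_f = float(fit.min())
            best_x = X[int(np.argmin(fit))].copy()
        order = np.argsort(fit)
        subs = np.array_split(X[order], S)           # S subpopulations, best first
        sub_best = np.array([s[0] for s in subs])    # optimum of each subpopulation
        d = np.linalg.norm(sub_best - best_x, axis=1)
        W = d / d.sum() if d.sum() > 0 else np.full(S, 1.0 / S)
        m = np.maximum(1, (W * k).astype(int))       # sizes; rounding is our choice
        X = np.vstack([rng.normal(b, sigma, size=(mi, dim))
                       for b, mi in zip(sub_best, m)])
        X = np.clip(X, lb, ub)
        sigma /= Cr
    return best_x, best_f

x_best, f_best = qdf_te(lambda v: float(np.sum(v * v)), -10.0, 10.0, dim=2)
```

In practice one would pair such a loop with the CEC 2017 evaluation protocol; this sketch only mirrors the structure of the algorithm.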

5. Experimental Results and Discussion

5.1. Experiment for Quantum Tunneling Effect

The dynamic evolution of the wave function over time actually reflects the process of the sampling population of optimization algorithms migrating to the optimal solution region. The wave function represents the probability distribution of the solutions (sampling particles). To explore the quantum tunneling effect of the QDF-TE, a wave function tracking experiment was designed according to Algorithm 2 (QDF-TE).
We selected a double-well function (DWF) as the objective function [27]. The formula of the DWF is as follows:
$$f(x) = \sum_{i=1}^{N} \left( \frac{V_0 (x_i^2 - a^2)^2}{a^4} + \delta x_i \right)$$
where $i \le N$, N is the dimension of the DWF, and the extrema of $f(x)$ are located near $x = \pm a$. When $\delta > 0$, the global minimum $-a\delta$ is obtained near $x = -a$, and when $\delta = 0$, the DWF has $2^N$ symmetric global optimal solutions. The height difference between the minima can be adjusted by changing the value of $\delta$. $V_0$ is the barrier height of the DWF, which is attained at $x = 0$.
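A direct implementation of the DWF under the parameter names above (our own sketch, not the authors' code); note that the barrier term evaluates to $V_0$ at $x_i = 0$ and to zero at $x_i = \pm a$, consistent with the description:

```python
def dwf(x, V0=3.0, a=2.0, delta=0.3):
    """Double-well function: each coordinate contributes a quartic double
    well with barrier height V0 at x_i = 0, tilted by delta*x_i so that
    the well near x_i = -a becomes the global minimum when delta > 0."""
    return sum(V0 * (xi**2 - a**2)**2 / a**4 + delta * xi for xi in x)

# With delta > 0, the deeper well lies near x_i = -a in every coordinate
assert dwf([-2.0, -2.0]) < dwf([2.0, 2.0])
```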
In the experiment, a two-dimensional DWF was adopted which had four extreme points (one global optimum and two local optima). The function definition domain $[LB, UB]$ was set to $[-10, 10]$, the initial sampling scale was $\sigma = 20$, the number of sampling points was $k = 400$, and the number of subpopulations was $S = 30$. The evolution of the wave function with time can be mapped to the reduction of the algorithm's sampling scale $\sigma$, where the reduction coefficient of $\sigma$ was set to 1.2. The evolution process of the wave function over time is shown in Figure 1.
In Figure 1, t is the number of iterations, and subgraphs (a–f) show the evolution of the wave function from a high-energy state to the ground state. The particles which entered the global optimum are shown in red, while the others are black. In Figure 1a, the initial image of the wave function represents the random distribution of particles throughout the entire domain. Figure 1b–d shows particles crossing the potential barrier of the objective function and continuously approaching the four extreme points. Figure 1e contains three peaks, corresponding to two local optima and one global optimum, with significantly more particles clustered in the global optimum than in the local optima. As shown in Figure 1f, the wave function has a steep peak, indicating that the vast majority of the particles entered the global optimal region through the tunneling effect, and the system eventually evolved to the ground state.

5.2. Comparison with Other Typical Evolutionary Algorithms

5.2.1. Experimental Settings

In this subsection, the QDF-TE is compared with some classic and competitive swarm intelligence optimization algorithms, including ABC [28], dynamic search FWA (dynFWA) [29], loser-out tournament-based FWA (LOTFWA) [30], PVADE [31], and standard PSO 2011 (SPSO2011) [32]. The parameters of these algorithms were set to the values suggested in their original papers. The parameter settings of the QDF-TE were consistent with Section 5.1. All of these algorithms were tested under the same conditions on the CEC 2017 single-objective optimization benchmark suite [33], which includes unimodal functions (F1–F3), multimodal functions (F4–F10), hybrid functions (F11–F20), and composition functions (F21–F30).
Experiments were conducted on the 30D and 50D versions of each test function. The results of each experiment were obtained from 51 independent trial runs, and the maximum number of function evaluations (MaxFES) in each run was $10{,}000 \times N$, where N is the dimension of the problem. When the difference between the best solution found and the known optimum was $10^{-8}$ or less, the error was recorded as zero.
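The error-reporting rule can be stated compactly (a sketch; the function name is ours):

```python
def error_value(f_found, f_optimum, tol=1e-8):
    """Error reporting rule used in the experiments: differences of
    1e-8 or less from the known optimum are recorded as exactly zero."""
    err = f_found - f_optimum
    return 0.0 if err <= tol else err

assert error_value(100.0 + 5e-9, 100.0) == 0.0   # below threshold -> zero
```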

5.2.2. Comparative Experiments

The statistical results are listed in Table 1 and Table 2, where the best values for each function are highlighted. The average ranking (AR) of the mean errors (MeanErr) and the standard deviations (StdErr) of the six algorithms were calculated separately for the unimodal, multimodal, hybrid, and composition functions as well as all functions. The MeanErr values were adopted for comparison using the Wilcoxon signed-rank test (confidence level of 95%), where the symbols “+”, “≈”, and “−” indicate that the proposed algorithm performed better than, comparable to, or worse than the compared algorithm, respectively.
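For reference, the signed-rank statistic underlying this test can be computed as follows. This is a simplified sketch that ignores tied ranks and zero differences, and the fitness values are hypothetical, not taken from the tables:

```python
def signed_rank_statistic(a, b):
    """Wilcoxon signed-rank statistic for paired samples: rank the
    absolute paired differences, then take the smaller of the
    positive-rank and negative-rank sums (ties/zeros ignored here)."""
    d = [x - y for x, y in zip(a, b) if x != y]
    ranked = sorted(range(len(d)), key=lambda i: abs(d[i]))
    w_pos = w_neg = 0.0
    for rank, i in enumerate(ranked, start=1):
        if d[i] > 0:
            w_pos += rank
        else:
            w_neg += rank
    return min(w_pos, w_neg)

# Hypothetical per-function mean errors for QDF-TE and one competitor
qdfte = [0.12, 0.30, 1.50, 0.08, 2.10, 0.40, 0.95]
other = [0.20, 0.45, 1.40, 0.15, 2.80, 0.70, 1.20]
W = signed_rank_statistic(qdfte, other)  # compare W against the critical value
```

In practice a library routine such as `scipy.stats.wilcoxon` would also supply the p-value for the chosen confidence level.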
For unimodal functions, SPSO2011 showed excellent performance and ranked first for the 30D and 50D problems. DynFWA, LOTFWA, and PVADE were tied for second place for the 30D problems. The QDF-TE and ABC were ranked in fourth and fifth place, respectively. However, for the 50D problems, the QDF-TE ranked second, and for the StdErr, the QDF-TE ranked first, showing the best stability.
In terms of multimodal functions, the QDF-TE ranked first with an absolute advantage, especially on F5, F7, and F8 for both the 30D and 50D problems. LOTFWA and ABC achieved comparable performances, ranking second and third, respectively. DynFWA ranked last.
For hybrid functions, PVADE was obviously superior and far ahead of the other algorithms. Although not as good as PVADE, the QDF-TE performed better than the other four algorithms. Compared with the 30D problems, the AR of the QDF-TE significantly improved for the 50D problems.
With respect to the composition functions, ABC and the QDF-TE had comparable performances for the 30D problems, while for the 50D problems, the QDF-TE was slightly more accurate than ABC. In terms of the StdErr, the QDF-TE definitively ranked first for both 30D and 50D problems, indicating that the algorithm had improved stability.
Based on the above results in the two tables, the QDF-TE was the most competitive framework among the compared algorithms for solving multimodal and composition optimization problems, especially for high-dimensional functions. For hybrid functions, the QDF-TE performed slightly worse but was still competitive. Overall, because the quantum tunneling effect of the QDF-TE can cross potential barriers and guide particles from local optima to global optima, it performed well on high-dimensional objective functions with massive numbers of local optima. However, for unimodal functions, the tunneling ability of the QDF-TE had no significant effect, and thus its performance showed no significant improvement.
The last row of Table 1 and Table 2 provides the overall results for the Wilcoxon signed-rank test, from which it can be seen that the performance of the QDF-TE was slightly worse than PVADE in the 30D problems, while in the 50D problems, the QDF-TE obviously outperformed all five other algorithms; that is to say, the QDF-TE performed excellently in high-dimensional function optimization.

5.2.3. Convergence Discussion of QDF-TE

The convergence curve relates the fitness value to the number of iterations for all compared algorithms: it plots the best fitness value found at each iteration until the maximum number of iterations is reached. Figure 2 (30D, F1–F15), Figure 3 (30D, F16–F30), Figure 4 (50D, F1–F15), and Figure 5 (50D, F16–F30) display the convergence curves of ABC, dynFWA, SPSO2011, GWO, and the QDF-TE on the CEC 2017 benchmark suite.
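The running-best curve described here can be sketched as follows (our own illustrative helper, not part of the experimental code):

```python
def convergence_curve(best_fit_per_iter):
    """Running-best curve: the value plotted at iteration t is the best
    fitness found over iterations 0..t, so the curve is non-increasing."""
    curve, best = [], float("inf")
    for v in best_fit_per_iter:
        best = min(best, v)
        curve.append(best)
    return curve

# Hypothetical per-iteration best fitness values
curve = convergence_curve([5.0, 3.0, 4.0, 1.0])
```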
In Figure 2 and Figure 3, for the 30D problems, the QDF-TE was able to find the optimal solution with high accuracy on 12 functions (F5, F7, F8, F10, F16, F17, F18, F22, F23, F25, F27, and F28), ranking first among all six algorithms; its overall performance was strong. Because the tunneling effect is used to enhance the global search capability, the convergence speed of the QDF-TE was not the fastest at the beginning of the iterations. However, once it found the global optimal region, the subsequent convergence was fast.
Meanwhile, in Figure 4 and Figure 5, for the 50D problems, the QDF-TE was able to find the optimal solution with high accuracy on 15 functions, three more (F1, F10, and F24) than for the 30D problems, and it still ranked first among all six algorithms. Composition functions have a large number of local optima, and as the dimension increases, the number of local optima grows exponentially, which demands an algorithm with excellent performance. Due to the dynamic population and tunneling effect mechanism, the QDF-TE could explore the solution space more thoroughly and therefore performed excellently on the composition functions. From Figure 5, it can be seen that the QDF-TE ranked first on the composition functions F22, F23, F24, F25, F27, and F28, which shows that its overall performance was strong enough to achieve a smooth transition between exploration and exploitation.

6. Conclusions

This paper introduced the Schrödinger equation to obtain a quantum dynamics equation. The QDF was then constructed using potential energy equivalence and Taylor expansion, from which the basic operations of optimization algorithms were obtained. A wave function was introduced to simulate the distribution of solutions in optimization algorithms. Furthermore, the QDF-TE was proposed, which divides subpopulations based on fitness values and increases the sampling in disadvantaged areas to strengthen the tunneling effect. Theoretical analysis and experimental results showed that this method is accurate and stable. Moreover, the results of the tunneling experiment and a comparison with other well-known optimization algorithms showed that the QDF-TE has competitive performance on complex optimization problems. However, as mentioned in Section 4.2, further research may be needed in the future, such as how to improve the utilization efficiency of the additional particles and reduce unnecessary computational overhead while preserving the exploration capability.
The QDF-TE algorithm exhibited strong performance. This outcome suggests that incorporating quantum dynamics to explain and enhance optimization algorithms holds great promise. However, there remains substantial room for further improvement of the QDF algorithm. For instance, adaptive sampling scales across different dimensions and subpopulations could be employed to realize tunneling effects more precisely. Consequently, the potential of quantum dynamics can be explored in greater depth. Undoubtedly, this represents an intriguing and potentially fruitful area of research which merits further investigation in the future.

Author Contributions

Conceptualization, P.W.; methodology, P.W.; software, Q.T.; validation, Q.T.; investigation, Q.T.; resources, P.W.; data curation, Q.T.; writing—original draft preparation, Q.T.; writing—review and editing, Q.T.; visualization, Q.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Holland, J.H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
  2. Colorni, A. Distributed optimization by ant colonies. In Proceedings of the First European Conference on Artificial Life, Paris, France, 11–13 December 1991. [Google Scholar]
  3. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  4. Kennedy, J. Probability and dynamics in the particle swarm. In Proceedings of the 2004 Congress on Evolutionary Computation, Portland, OR, USA, 19–23 June 2004; Volume 1, pp. 340–347. [Google Scholar] [CrossRef]
  5. Omran, M.G.; Engelbrecht, A.P.; Salman, A. Bare bones differential evolution. Eur. J. Oper. Res. 2009, 196, 128–139. [Google Scholar] [CrossRef]
  6. Li, J.Z.; Tan, Y. The bare bones fireworks algorithm: A minimalist global optimizer. Appl. Soft Comput. 2018, 62, 454–462. [Google Scholar] [CrossRef]
  7. Finnila, A.B.; Gomez, M.A.; Sebenik, C.; Stenson, C.; Doll, J.D. Quantum annealing: A new method for minimizing multidimensional functions. Chem. Phys. Lett. 1994, 219, 343–348. [Google Scholar] [CrossRef]
  8. Sun, J.; Feng, B.; Xu, W.B. Particle swarm optimization with particles having quantum behavior. In Proceedings of the 2004 Congress on Evolutionary Computation (CEC2004), Portland, OR, USA, 19–23 June 2004; Volume 1, pp. 325–331. [Google Scholar] [CrossRef]
  9. Nelson, E. Derivation of the Schrödinger equation from Newtonian mechanics. Phys. Rev. 1966, 150, 1079. [Google Scholar] [CrossRef]
  10. Wang, M.S. Stochastic interpretation of quantum mechanics in complex space. Phys. Rev. Lett. 1997, 79, 3319. [Google Scholar] [CrossRef]
  11. Kadowaki, T.; Nishimori, H. Quantum annealing in the transverse Ising model. Phys. Rev. E 1998, 58, 5355. [Google Scholar] [CrossRef]
  12. Brooke, J.; Bitko, D.; Rosenbaum, T.F.; Aeppli, G. Quantum annealing of a disordered magnet. Science 1999, 284, 779–781. [Google Scholar] [CrossRef]
  13. Cheung, N.J.; Ding, X.M.; Shen, H.B. A nonhomogeneous cuckoo search algorithm based on quantum mechanism for real parameter optimization. IEEE Trans. Cybern. 2016, 47, 391–402. [Google Scholar] [CrossRef] [PubMed]
  14. Han, K.H.; Kim, J. Quantum-inspired evolutionary algorithm for a class of combinatorial optimization. IEEE Trans. Evol. Comput. 2002, 6, 580–593. [Google Scholar] [CrossRef]
  15. Draa, A.; Meshoul, S.; Talbi, H.; Batouche, M. A quantum-inspired differential evolution algorithm for solving the N-queens problem. Neural Netw. 2011, 1, 21–27. [Google Scholar]
  16. Narayanan, A.; Moore, M. Quantum-inspired genetic algorithms. In Proceedings of the IEEE International Conference on Evolutionary Computation, Nagoya, Japan, 20–22 May 1996; pp. 61–66. [Google Scholar] [CrossRef]
  17. Han, K.H.; Kim, J.H. Genetic quantum algorithm and its application to combinatorial optimization problem. In Proceedings of the 2000 Congress on Evolutionary Computation, La Jolla, CA, USA, 16–19 July 2000; Volume 2, pp. 1354–1360. [Google Scholar] [CrossRef]
  18. Wang, L.; Niu, Q.; Fei, M.R. A novel quantum ant colony optimization algorithm. In International Conference on Life System Modeling and Simulation; Springer: Berlin/Heidelberg, Germany, 2007; pp. 277–286. [Google Scholar] [CrossRef]
  19. Born, M. Statistical interpretation of quantum mechanics. Science 1955, 122, 675–679. [Google Scholar] [CrossRef] [PubMed]
  20. Wick, G.C. Properties of Bethe-Salpeter wave functions. Phys. Rev. 1954, 96, 1124. [Google Scholar] [CrossRef]
  21. Risken, H. Fokker-planck equation. In The Fokker-Planck Equation; Springer: Berlin/Heidelberg, Germany, 1996; pp. 63–95. [Google Scholar] [CrossRef]
  22. Sun, J.; Fang, W.; Wu, X.J.; Palade, V.; Xu, W.B. Quantum-behaved particle swarm optimization: Analysis of individual particle behavior and parameter selection. Evol. Comput. 2012, 20, 349–393. [Google Scholar] [CrossRef] [PubMed]
  23. Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equation of state calculations by fast computing machines. J. Chem. Phys. 1953, 21, 1087–1092. [Google Scholar] [CrossRef]
  24. Santoro, G.E.; Tosatti, E. Optimization using quantum mechanics: Quantum annealing through adiabatic evolution. J. Phys. A Math. Gen. 2006, 39, R393–R431. [Google Scholar] [CrossRef]
  25. Feynman, R.P.; Hibbs, A.R. Quantum Mechanics and Path Integrals; Sveučilište u Zagrebu: Zagreb, Croatia, 1965. [Google Scholar]
  26. Wolpert, D. The lack of A priori distinctions between learning algorithms. Neural Comput. 1996, 8, 1341–1390. [Google Scholar] [CrossRef]
  27. Grabert, H.; Weiss, U. Quantum tunneling rates for asymmetric double-well systems with Ohmic dissipation. Phys. Rev. Lett. 1985, 54, 1605. [Google Scholar] [CrossRef] [PubMed]
  28. Karaboga, D.; Basturk, B. On the performance of artificial bee colony (ABC) algorithm. Appl. Soft Comput. 2008, 8, 687–697. [Google Scholar] [CrossRef]
  29. Yu, C.; Kelley, L.C.; Tan, Y. Dynamic search fireworks algorithm with covariance mutation for solving the CEC 2015 learning based competition problems. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 1106–1112. [Google Scholar] [CrossRef]
  30. Li, J.; Tan, Y. Loser-out tournament-based fireworks algorithm for multimodal function optimization. IEEE Trans. Evol. Comput. 2017, 22, 679–691. [Google Scholar] [CrossRef]
  31. Coelho, L.D.S.; Ayala, H.V.H.; Freire, R.Z. Population’s variance-based Adaptive Differential Evolution for real parameter optimization. In Proceedings of the Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 1672–1677. [Google Scholar]
  32. Zambrano-Bigiarini, M.; Clerc, M.; Rojas, R. Standard particle swarm optimisation 2011 at cec-2013: A baseline for future pso improvements. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 2337–2344. [Google Scholar] [CrossRef]
  33. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-Parameter Optimization; Technical Report; National University of Defense Technology: Changsha, China; Kyungpook National University: Daegu, Republic of Korea; Nanyang Technological University: Singapore, 2017. [Google Scholar]
Figure 1. Two-dimensional schematic diagram of the wave function changing with time when $V_0 = 3.0$, $a = 2.0$, and $\delta = 0.3$.
Figure 2. Fitness–iteration comparison among ABC, dynFWA, SPSO2011, GWO, and QDF-TE for 30-dimensional function evaluations (F1–F15).
Figure 3. Fitness–iteration comparison among ABC, dynFWA, SPSO2011, GWO, and QDF-TE for 30-dimensional function evaluations (F16–F30).
Figure 4. Fitness–iteration comparison among ABC, dynFWA, SPSO2011, GWO, and QDF-TE for 50-dimensional function evaluations (F1–F15).
Figure 5. Fitness–iteration comparison among ABC, dynFWA, SPSO2011, GWO, and QDF-TE for 50-dimensional function evaluations (F16–F30).
Table 1. Comparison of statistical results of the six algorithms on the 30D CEC 2017 benchmark functions.
F | ABC: MeanErr, Win, StdErr | dynFWA: MeanErr, Win, StdErr | LOTFWA: MeanErr, Win, StdErr | PVADE: MeanErr, Win, StdErr | SPSO2011: MeanErr, Win, StdErr | QDF-TE: MeanErr, StdErr
1   2.28 × 10 2 +   6.13 × 10 4   4.12 × 10 3 +   1.99 × 10 7   1 . 17 × 10 2   3 . 36 × 10 4   3.15 × 10 3   1.35 × 10 7   3.36 × 10 3 +   9.89 × 10 6   1.70 × 10 3   1.51 × 10 6
2   5.82 × 10 9   2.67 × 10 20   7.38 × 10 1   3.43 × 10 4   4.81 × 10 9   1.47 × 10 20   2.36 × 10 4   4.47 × 10 9   6 . 29 × 10 5   1 . 39 × 10 9   5.11 × 10 8   7.91 × 10 17
3   1.11 × 10 5   2.55 × 10 8   6.16 × 10 12   1.74 × 10 22   2.87   7.47   5.74 × 10 6   7.97 × 10 11   0 . 00   0 . 00   6.65   1.48 × 10 1
AR.1-34.67 4.673.33 3.333.33 3.333.33 3.672.33 2.004.004.00
4   1 . 29 × 10 1 +   2 . 97 × 10 2   7.97 × 10 1 +   8.31 × 10 2   6.51 × 10 1   1.22 × 10 3   5.75 × 10 1   9.69 × 10 2   5.85 × 10 1   2.11 × 10 3   6.79 × 10 1   1.32 × 10 3
5   8.51 × 10 1   1.53 × 10 2   1.28 × 10 2 +   2.02 × 10 3   6.46 × 10 1   1.13 × 10 2   7.91 × 10 1   2.09 × 10 2   6.03 × 10 1   1.68 × 10 2   2 . 80 × 10 1   3 . 71 × 10 1
6   0 . 00 +   0 . 00   2.37 × 10 1 +   1.02 × 10 2   1.01 × 10 1 +   6.89 × 10 2   1.89 × 10 1 +   1.28 × 10 1   8.66 +   1.33 × 10 1   8.77 × 10 1   1.73 × 10 1
7   1.07 × 10 2   7.79 × 10 1   2.00 × 10 2 +   2.17 × 10 3   6.42 × 10 1   5.40 × 10 1   1.13 × 10 2 +   8.67 × 10 1   8.73 × 10 1   2.24 × 10 2   5 . 00 × 10 1   2 . 00 × 10 1
8   9.16 × 10 1   1.72 × 10 2   1.31 × 10 2 +   1.14 × 10 3   6.80 × 10 1   2.13 × 10 2   7.64 × 10 1   9.80 × 10 1   5.18 × 10 1   1.51 × 10 2   2 . 22 × 10 1   2 . 06 × 10 1
9   1.19 × 10 3   2.32 × 10 5   3.57 × 10 3 +   2.78 × 10 6   2.89 × 10 1   3.29 × 10 4   5 . 68   2 . 43 × 10 1   1.16 × 10 2   1.33 × 10 4   8.02   6.44 × 10 1
10   2.47 × 10 3   4 . 96 × 10 4   3.92 × 10 3 +   3.50 × 10 5   2 . 35 × 10 3   1.44 × 10 5   4.06 × 10 3 +   1.37 × 10 5   3.93 × 10 3 +   5.47 × 10 5   2.48 × 10 3   7.38 × 10 4
AR.4-103.29 2.575.71 5.292.57 3.293.57 3.003.43 4.572.432.29
11   7.90 × 10 2   2.50 × 10 5   1.24 × 10 2   1.94 × 10 3   1.51 × 10 2   1.46 × 10 3   6 . 09 × 10 1   1.17 × 10 3   1.13 × 10 2   1.31 × 10 3   7.13 × 10 1   3 . 08 × 10 2
12   1.01 × 10 6   1.49 × 10 11   9.13 × 10 5   8.57 × 10 11   8.35 × 10 6 +   3.06 × 10 13   8 . 38 × 10 3   4 . 97 × 10 7   4.01 × 10 5   8.11 × 10 10   4.16 × 10 5   7.37 × 10 10
13   1.56 × 10 4 +   6.95 × 10 7   2.35 × 10 4   2.71 × 10 8   1.52 × 10 4   4.56 × 10 7   2 . 97 × 10 2   2 . 09 × 10 5   5.79 × 10 4 +   9.56 × 10 8   2.32 × 10 4   3.12 × 10 7
14   1.61 × 10 5   1.02 × 10 10   5.15 × 10 3   1.21 × 10 7   2.28 × 10 3   4.11 × 10 6   5 . 78 × 10 1   3 . 06 × 10 2   1.68 × 10 3   4.16 × 10 6   5.68 × 10 2   1.23 × 10 5
15   3.30 × 10 3 +   7.06 × 10 6   1.26 × 10 4   1.38 × 10 8   7.61 × 10 3 +   1.32 × 10 7   9 . 57 × 10 1   3 . 84 × 10 3   2.92 × 10 4 +   2.28 × 10 8   1.11 × 10 4   1.30 × 10 7
16   6.36 × 10 2 +   1 . 32 × 10 4   9.78 × 10 2 +   8.68 × 10 4   4.95 × 10 2   1.52 × 10 4   4.94 × 10 2   7.34 × 10 4   7.19 × 10 2 +   1.01 × 10 5   4 . 87 × 10 2   1.34 × 10 4
17   1.99 × 10 2   2.81 × 10 3   4.08 × 10 2 +   3.26 × 10 4   1.68 × 10 2   3.36 × 10 3   1.06 × 10 2   3.62 × 10 3   1.85 × 10 2   6.41 × 10 3   1 . 05 × 10 2   1 . 26 × 10 3
18   3.09 × 10 5   2.15 × 10 10   1.48 × 10 5   8.93 × 10 9   4.15 × 10 4   4.29 × 10 8   1 . 72 × 10 2   4 . 49 × 10 3   6.41 × 10 4   9.65 × 10 8   4.90 × 10 4   4.35 × 10 8
19   9.36 × 10 3 +   3.63 × 10 7   1.64 × 10 4 +   2.45 × 10 8   1.32 × 10 4 +   1.31 × 10 8   5 . 21 × 10 1   1 . 02 × 10 3   2.35 × 10 4 +   9.76 × 10 7   5.04 × 10 4   5.55 × 10 8
20   2.68 × 10 2   4.50 × 10 3   4.51 × 10 2 +   3.75 × 10 4   2.80 × 10 2   4.48 × 10 3   1 . 46 × 10 2   1 . 68 × 10 3   2.74 × 10 2   6.69 × 10 3   2.67 × 10 2   3.64 × 10 3
AR.11-204.20 3.705.00 5.203.60 3.501.20 1.704.20 4.502.802.40
21   2.19 × 10 2   6.18 × 10 3   3.19 × 10 2 +   9.59 × 10 2   1 . 00 × 10 2   4 . 38 × 10 3   2.84 × 10 2 +   1.96 × 10 2   2.52 × 10 2 +   2.23 × 10 2   2.27 × 10 2   6.48 × 10 1
22   1.13 × 10 2   7.53 × 10 1   2.38 × 10 3 +   3.68 × 10 6   1 . 00 × 10 2   1.65 × 10 2   1.02 × 10 2   5.62   3.12 × 10 2 +   7.45 × 10 5   1 . 00 × 10 2   5 . 18 × 10 8
23   4.22 × 10 2   4.04 × 10 2   5.02 × 10 2 +   1.49 × 10 3   6.16 × 10 2 +   1.49 × 10 3   3.96 × 10 2   6.01 × 10 2   4.25 × 10 2 +   4.68 × 10 2   3 . 78 × 10 2   6 . 39 × 10 1
24   3.43 × 10 2 +   3.06 × 10 4   5.57 × 10 2 +   1.80 × 10 3   2 . 92 × 10 2   7.37 × 10 2   4.55 × 10 2 +   2.04 × 10 2   4.83 × 10 2 +   3.76 × 10 2   4.41 × 10 2   4 . 06 × 10 1
25   3.84 × 10 2 +   8 . 33 × 10 2   3.89 × 10 2 +   8.72 × 10 1   4.10 × 10 2 +   2.15 × 10 2   4.03 × 10 2 +   3.67 × 10 2   4.18 × 10 2 +   6.40 × 10 2   3 . 81 × 10 2 2.53
26   3 . 13 × 10 2 +   4 . 59 × 10 3   2.60 × 10 3 +   4.47 × 10 5   4.37 × 10 2 +   3.11 × 10 5   4.65 × 10 2 +   1.91 × 10 5   1.53 × 10 3 +   5.56 × 10 5   1.07 × 10 3   1.81 × 10 5
27   5 . 12 × 10 2 +   1 . 84 × 10 1   5.60 × 10 2 +   8.80 × 10 2   1.20 × 10 3 +   1.16 × 10 4   5.16 × 10 2 +   2.54 × 10 2   5.70 × 10 2 +   8.25 × 10 2   5.21 × 10 2   5.87 × 10 1
28   3.96 × 10 2   1 . 78 × 10 2   3.53 × 10 2   3.57 × 10 3   4.27 × 10 2 +   1.02 × 10 3   3.68 × 10 2   2.83 × 10 3   3.21 × 10 2   2.27 × 10 3   3 . 12 × 10 2   4.38 × 10 2
29   6.19 × 10 2 +   5.31 × 10 3   1.04 × 10 3 +   5.62 × 10 4   6.84 × 10 2 +   5.04 × 10 3   4 . 99 × 10 2   3.83 × 10 3   8.79 × 10 2 +   2.84 × 10 4   6.56 × 10 2   3 . 77 × 10 3
30   1.57 × 10 4 +   1.69 × 10 7   1.13 × 10 5 +   4.01 × 10 9   3.43 × 10 5   3.32 × 10 10   2 . 23 × 10 3   5 . 05 × 10 4   8.94 × 10 4 +   5.65 × 10 9   2.86 × 10 5   1.28 × 10 10
AR.21-302.40 2.804.90 5.003.90 3.802.90 3.104.40 4.402.501.90
AR.1-303.54 3.254.96 5.003.32 3.462.64 2.753.89 4.212.642.32
sum 13/3/14 21/3/6 10/5/15 8/6/16 16/3/11
Note: Bold values indicate the best results among the algorithms.
Table 2. Comparison of statistical results of the six algorithms on the 50D CEC 2017 benchmark functions.
F | ABC: MeanErr, Win, StdErr | dynFWA: MeanErr, Win, StdErr | LOTFWA: MeanErr, Win, StdErr | PVADE: MeanErr, Win, StdErr | SPSO2011: MeanErr, Win, StdErr | QDF-TE: MeanErr, StdErr
1   2.35 × 10 3 +   3.27 × 10 6   9.36 × 10 3 +   8.23 × 10 7   7 . 41 × 10 2   1.36 × 10 6   2.43 × 10 3 +   7.24 × 10 6   2.70 × 10 3 +   1.38 × 10 7   9.62 × 10 2   9 . 86 × 10 5
2   9.95 × 10 21 +   2.75 × 10 45   1.22 × 10 5   2.74 × 10 11   2.17 × 10 24 +   1.04 × 10 50   1.14 × 10 15 +   3.01 × 10 31   8 . 87 × 10 5   1 . 34 × 10 9   1.65 × 10 11   1.12 × 10 23
3   2.15 × 10 5 +   1.85 × 10 8   2 . 25 × 10 8   1 . 25 × 10 15   7.67 × 10 3 +   6.65 × 10 6   1.41 × 10 1   2.55 × 10 2   2.63 × 10 3   1.76 × 10 4   1.82 × 10 1   2.00 × 10 2
AR.1-34.67 4.673.00 3.004.00 4.333.67 4.002.67 2.673.002.33
| F | Alg. 1 (mean / var) | Alg. 2 (mean / var) | Alg. 3 (mean / var) | Alg. 4 (mean / var) | Alg. 5 (mean / var) | QDF-TE (mean / var) |
|---|---|---|---|---|---|---|
| 4 | **3.05×10^1** / **9.52×10^1** | 1.18×10^2 / 2.58×10^3 | 1.02×10^2 / 6.97×10^2 | 1.10×10^2 / 2.53×10^3 | 1.21×10^2 + / 3.44×10^3 | 1.01×10^2 / 2.12×10^3 |
| 5 | 2.05×10^2 + / 3.33×10^2 | 2.83×10^2 + / 3.81×10^3 | 1.57×10^2 + / 5.92×10^2 | 2.10×10^2 + / 1.88×10^3 | 1.40×10^2 + / 5.40×10^2 | **6.96×10^1** / **1.38×10^2** |
| 6 | **0.00** / **0.00** | 4.11×10^1 + / 7.57×10^1 | 9.30×10^1 / 1.34 | 1.05 / 9.72×10^1 | 1.80×10^1 + / 2.66×10^1 | 2.57 / 8.44×10^1 |
| 7 | 2.32×10^2 + / 3.71×10^2 | 4.52×10^2 + / 1.05×10^4 | 1.24×10^2 + / 2.52×10^2 | 2.83×10^2 + / 3.56×10^2 | 2.25×10^2 + / 1.98×10^3 | **1.16×10^2** / **1.37×10^2** |
| 8 | 2.12×10^2 + / 3.98×10^2 | 2.83×10^2 + / 6.31×10^3 | 1.76×10^2 + / 8.72×10^2 | 2.08×10^2 + / 1.43×10^3 | 1.47×10^2 + / 4.53×10^2 | **7.01×10^1** / **1.33×10^2** |
| 9 | 8.61×10^3 + / 4.21×10^6 | 1.10×10^4 + / 1.05×10^7 | 2.53×10^3 + / 2.33×10^6 | **4.04×10^1** / **2.33×10^3** | 1.45×10^3 + / 7.09×10^5 | 3.33×10^2 / 2.56×10^4 |
| 10 | 2.56×10^4 + / **1.27×10^5** | 6.45×10^3 + / 7.18×10^5 | 4.67×10^3 / 2.31×10^5 | 7.79×10^3 + / 2.51×10^6 | 7.39×10^3 + / 1.87×10^6 | **4.59×10^3** / 2.06×10^5 |
| AR. 4–10 | 3.29 / 2.29 | 5.57 / 5.57 | 2.71 / 3.29 | 4.00 / 3.86 | 3.71 / 4.29 | 1.71 / 1.71 |
| 11 | 2.49×10^3 + / 1.24×10^6 | 2.13×10^2 + / 3.45×10^3 | 3.72×10^2 + / 6.36×10^3 | 1.54×10^2 + / 1.55×10^3 | 1.78×10^2 + / 1.51×10^3 | **1.33×10^2** / **4.80×10^2** |
| 12 | 4.87×10^6 + / 2.90×10^12 | 5.01×10^6 + / 7.93×10^12 | 6.95×10^6 + / 1.03×10^13 | **1.38×10^4** / **7.94×10^7** | 2.19×10^6 / 1.24×10^12 | 2.20×10^6 / 6.59×10^11 |
| 13 | 9.25×10^3 / 3.98×10^7 | 1.01×10^5 + / 2.74×10^9 | 4.77×10^4 + / 3.28×10^8 | **1.91×10^3** / **7.90×10^6** | 7.20×10^4 + / 2.02×10^9 | 3.28×10^4 / 6.92×10^7 |
| 14 | 1.16×10^6 + / 2.49×10^11 | 2.98×10^4 + / 7.09×10^8 | 1.97×10^4 + / 2.19×10^8 | **2.04×10^2** / **2.09×10^3** | 8.68×10^3 / 6.24×10^7 | 7.80×10^3 / 2.86×10^7 |
| 15 | 1.59×10^4 + / 1.31×10^7 | 1.35×10^4 / 8.13×10^7 | 1.77×10^4 + / 6.08×10^7 | **3.73×10^2** / **1.34×10^4** | 2.80×10^4 + / 2.91×10^8 | 1.11×10^4 / 1.93×10^7 |
| 16 | 1.25×10^3 + / 4.25×10^4 | 1.66×10^3 + / 1.81×10^5 | 1.25×10^3 + / 5.29×10^4 | 6.77×10^3 / 5.24×10^4 | 1.25×10^3 + / 1.44×10^5 | **6.46×10^2** / **3.43×10^4** |
| 17 | 9.83×10^2 + / 1.84×10^4 | 1.50×10^3 + / 1.20×10^5 | **5.52×10^2** / 2.10×10^4 | 6.16×10^2 + / 2.87×10^4 | 1.11×10^3 + / 5.25×10^4 | **5.34×10^2** / 2.54×10^4 |
| 18 | 1.90×10^6 + / 5.14×10^11 | 1.90×10^5 + / 5.34×10^9 | 2.32×10^5 + / 1.11×10^10 | **5.47×10^2** / **1.98×10^5** | 6.84×10^4 / 5.49×10^8 | 7.38×10^4 / 5.79×10^8 |
| 19 | 2.16×10^4 / 3.97×10^7 | 1.74×10^4 / 2.15×10^8 | 1.72×10^5 / 1.59×10^10 | **1.18×10^2** / **1.46×10^3** | 6.04×10^4 / 2.89×10^8 | 1.20×10^5 / 1.99×10^9 |
| 20 | 8.16×10^2 + / 2.21×10^4 | 1.20×10^3 + / 1.13×10^5 | 5.72×10^2 + / 1.92×10^4 | 3.01×10^2 / 1.75×10^4 | 7.28×10^2 + / 4.41×10^4 | 2.44×10^2 / **1.71×10^4** |
| AR. 11–20 | 4.30 / 3.50 | 4.70 / 5.00 | 4.40 / 4.30 | 1.50 / 1.80 | 3.90 / 4.00 | 2.20 / 2.40 |
| 21 | 3.91×10^2 + / 5.07×10^3 | 4.74×10^2 + / 3.63×10^3 | **1.50×10^2** / **1.72×10^16** | 3.81×10^2 + / 2.82×10^3 | 3.30×10^2 + / 7.65×10^2 | 2.57×10^2 / 7.90×10^1 |
| 22 | 4.36×10^3 / 5.02×10^6 | 6.88×10^3 + / 8.75×10^5 | 1.50×10^2 / **5.22×10^21** | **1.02×10^2** / 7.35 | 6.12×10^3 + / 9.03×10^6 | 4.51×10^3 / 3.22×10^6 |
| 23 | 6.59×10^2 + / 1.61×10^3 | 7.66×10^2 + / 8.30×10^3 | 1.29×10^3 + / 2.85×10^4 | 5.55×10^2 + / 3.87×10^3 | 5.99×10^2 + / 2.22×10^3 | **5.04×10^2** / **2.26×10^2** |
| 24 | 9.64×10^2 + / 2.46×10^4 | 8.52×10^2 + / 9.89×10^3 | **3.26×10^2** / 2.25×10^4 | 6.20×10^2 + / 3.78×10^3 | 6.62×10^2 + / 1.03×10^3 | 5.61×10^2 / **1.86×10^2** |
| 25 | 5.02×10^2 / **5.04×10^2** | 5.13×10^2 / 1.55×10^3 | **5.00×10^2** / 1.13×10^3 | 5.60×10^2 + / 1.69×10^3 | 5.67×10^2 + / 1.47×10^3 | 5.08×10^2 / 1.19×10^3 |
| 26 | 1.34×10^3 / 1.51×10^6 | 4.51×10^3 + / 6.44×10^5 | **2.12×10^2** / **1.06×10^3** | 3.31×10^2 / 6.57×10^3 | 3.73×10^3 + / 1.47×10^6 | 2.05×10^3 / 4.30×10^4 |
| 27 | **6.50×10^2** / **6.19×10^2** | 8.98×10^2 + / 1.62×10^4 | 2.46×10^3 + / 6.62×10^4 | 6.98×10^2 + / 7.18×10^3 | 9.56×10^2 + / 1.33×10^4 | **6.50×10^2** / 7.99×10^2 |
| 28 | **4.73×10^2** / **8.16×10^1** | 4.92×10^2 / 7.58×10^2 | 5.00×10^2 + / 2.11×10^3 | 5.13×10^2 + / 1.12×10^3 | 5.02×10^2 + / 5.18×10^2 | 4.84×10^2 / 4.83×10^2 |
| 29 | 1.07×10^3 / **1.54×10^4** | 1.91×10^3 + / 1.53×10^5 | 1.52×10^3 + / 4.17×10^4 | **8.36×10^2** / 3.79×10^4 | 1.79×10^3 + / 1.42×10^5 | 1.04×10^3 / 2.34×10^4 |
| 30 | 8.52×10^5 / **5.86×10^9** | 1.18×10^7 / 6.50×10^12 | 3.36×10^6 / 2.60×10^12 | **7.06×10^5** / 1.77×10^10 | 5.46×10^6 / 6.45×10^11 | 1.54×10^7 / 3.55×10^12 |
| AR. 21–30 | 3.00 / 3.00 | 5.00 / 4.70 | 2.90 / 3.60 | 2.80 / 3.40 | 4.50 / 3.80 | 2.80 / 2.50 |
| AR. 1–30 | 3.75 / 3.32 | 4.79 / 4.75 | 3.46 / 3.82 | 2.86 / 3.07 | 3.89 / 3.86 | 2.25 / 2.18 |
| Sum (+/≈/−) | 19/4/7 | 22/4/4 | 18/5/7 | 14/3/13 | 23/3/4 | — |

Note: Each cell reports the mean and variance over the runs on function F of the CEC 2017 suite. The bold mark indicates the best result among the algorithms; "+" marks a result significantly worse than that of QDF-TE, and the "Sum" row counts the significantly worse/equivalent/better (+/≈/−) results of each comparison algorithm against QDF-TE. "AR." rows give the average ranks over the indicated functions.
Tang, Q.; Wang, P. Quantum Dynamics Framework with Quantum Tunneling Effect for Numerical Optimization. Entropy 2025, 27, 150. https://doi.org/10.3390/e27020150
