Article

Influence of Binomial Crossover on Approximation Error of Evolutionary Algorithms

1 School of Science, Wuhan University of Technology, Wuhan 430070, China
2 Department of Computer Science, Nottingham Trent University, Clifton Campus, Nottingham NG11 8NS, UK
3 School of Mathematics and Statistics, Wuhan University, Wuhan 430072, China
4 Computational Science Hubei Key Laboratory, Wuhan University, Wuhan 430072, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(16), 2850; https://doi.org/10.3390/math10162850
Submission received: 24 July 2022 / Revised: 6 August 2022 / Accepted: 9 August 2022 / Published: 10 August 2022
(This article belongs to the Special Issue Probability, Stochastic Processes and Optimization)

Abstract: Although differential evolution (DE) algorithms perform well on a large variety of complicated optimization problems, only a few theoretical studies are focused on the working principle of DE algorithms. To make the first attempt to reveal the function of binomial crossover, this paper aims to answer whether it can reduce the approximation error of evolutionary algorithms. By investigating the expected approximation error and the probability of not finding the optimum, we conduct a case study comparing two evolutionary algorithms with and without binomial crossover on two classical benchmark problems: OneMax and Deceptive. It is proven that using binomial crossover leads to the dominance of the transition matrices. As a result, the algorithm with binomial crossover asymptotically outperforms the one without crossover on both OneMax and Deceptive, and also outperforms it on OneMax for any iteration budget; however, this does not hold on Deceptive. Furthermore, an adaptive parameter strategy is proposed which can strengthen the superiority of binomial crossover on Deceptive.

1. Introduction

Evolutionary algorithms (EAs) are a family of randomized search heuristics inspired by biological evolution, and many empirical studies demonstrate that crossover, which combines genes of two parents to generate new offspring, can be helpful to the convergence of EAs [1,2,3]. Meanwhile, theoretical results on runtime analysis validate the promising function of crossover in EAs [4,5,6,7,8,9,10,11,12,13,14,15], whereas there are also some cases in which crossover cannot be helpful [16,17].
By exchanging components of target vectors with donor vectors, differential evolution (DE) algorithms implement crossover in a different way. Numerical results show that continuous DE algorithms can achieve competitive performance on a large variety of complicated problems [18,19,20,21], and their competitiveness is to a great extent attributed to the employed crossover operations [22]. However, the binary differential evolution (BDE) algorithm [23], which simulates the working mechanism of continuous DE, is not as competitive as its continuous counterpart. Analysis of its working principles indicates that the mutation and update strategies result in poor convergence of BDE [24], but no theoretical results have been reported on how crossover influences the performance of discrete-coded DE algorithms.
This paper is dedicated to investigating the influence of binomial crossover by introducing it into the (1+1)EA, excluding the impacts of the population and mutation strategies of DE. Although the expected hitting time/runtime is popularly investigated in the theoretical study of randomized search heuristics (RSHs), there is a gap between runtime analysis and practice, because the optimization time to reach an optimum is uncertain and could even be infinite in continuous optimization [25]. For this reason, optimization time is seldom used in computer simulations for evaluating the performance of EAs; instead, their performance is evaluated after running finitely many generations by solution quality, such as the mean and median of the fitness value or the approximation error [26]. In theory, solution quality can be measured for a given iteration budget by the expected fitness value [27] or the approximation error [28,29], which contributes to the analysis framework named fixed-budget analysis (FBA). An FBA of immune-inspired hypermutations led to theoretical results that differ markedly from those of runtime analysis but are consistent with empirical results, which demonstrates that the perspective of fixed-budget computations provides valuable information and additional insights into the performance of randomized search heuristics [30].
Accordingly, we evaluate the solution quality of an EA after running finitely many generations by the expected approximation error and the tail probability of the error. The former measures the fitness gap between a solution and the optimum; the latter is the probability distribution of the error over the non-optimal error levels, which measures the probability of not finding the optimum. An EA is said to outperform another if its error and tail probability are both smaller for every iteration budget; it is said to asymptotically outperform another if its error and tail probability are smaller after a sufficiently large number of generations.
The research question of this paper is whether the binomial crossover operator can help reduce the approximation error of EAs. As a pioneering work on this topic, we investigate a (1+1)EA_C that performs binomial crossover on an individual and an offspring generated by mutation, and we compare a (1+1)EA without crossover with its variant (1+1)EA_C on two classical problems, OneMax and Deceptive. By splitting the objective space into error levels, the analysis is performed based on Markov chain models [31,32]. The comparison of the performance of the two EAs is drawn from the comparison of their transition probabilities, which are estimated by investigating the bits preferred by the evolutionary operations. Under some conditions, (1+1)EA_C with binomial crossover outperforms (1+1)EA on OneMax, but not on Deceptive; however, by adding an adaptive parameter mechanism arising from the theoretical results, (1+1)EA_C with binomial crossover outperforms (1+1)EA on Deceptive, too.
This work presents the first study on how binomial crossover influences the expected approximation error and the tail probability of randomized search heuristics. Meanwhile, we also propose a feasible routine for deriving adaptive parameter settings of EAs from theoretical results. The rest of this paper is organized as follows. Section 2 reviews related theoretical work. Preliminaries for our theoretical analysis are presented in Section 3. Then, the influence of binomial crossover on transition probabilities is investigated in Section 4, and Section 5 analyzes the asymptotic performance of the EAs. To reveal how binomial crossover affects the performance of EAs over consecutive iterations, the OneMax problem and the Deceptive problem are investigated in Section 6 and Section 7, respectively. Finally, Section 8 presents the conclusions and discussions.

2. Related Work

2.1. Theoretical Analysis of Crossover in Evolutionary Algorithms

To understand how crossover influences the performance of EAs, Jansen and Wegener [4] proved that an EA using crossover can reduce the expected optimization time from super-polynomial to a polynomial of small degree on the function Jump. Kötzing et al. [5] investigated crossover-based EAs on the functions OneMax and Jump and showed a potential speedup by crossover, in terms of optimization time, when it is combined with a fitness-invariant bit-shuffling operator. For a simple GA without shuffling, they found that the crossover probability has a drastic impact on the performance on Jump. Corus and Oliveto [6] obtained an upper bound on the runtime of standard steady-state GAs hillclimbing the OneMax function and proved that the steady-state GAs are 25% faster than their mutation-only counterparts. Their analysis also suggests that larger populations may be faster than populations of size 2. Dang et al. [7] revealed that the interplay between crossover and mutation may result in a sudden burst of diversity on the Jump test function, reducing the expected optimization time compared to mutation-only algorithms such as the (1+1)EA. For royal road functions and OneMax, Sudholt [8] analyzed uniform crossover and k-point crossover and proved that crossover makes every (μ+λ) EA at least twice as fast as the fastest EA using only standard bit mutation. Pinto and Doerr [9] provided a simple proof that a crossover-based genetic algorithm (GA) outperforms any mutation-based black-box heuristic on the classic benchmark OneMax. Oliveto et al. [10] obtained a tight lower bound on the expected runtime of the (2+1) GA on OneMax. Lengler and Meier [11] studied the positive effect of larger population sizes and crossover on Dynamic BinVal.
For non-artificial problems, Lehre and Yao [12] proved that the use of crossover in the (μ+1) steady-state genetic algorithm may reduce the runtime from exponential to polynomial for some instance classes of the problem of computing unique input–output (UIO) sequences. Doerr et al. [13,14] analyzed EAs on the all-pairs shortest path problem; their results confirmed that the EA with a crossover operator is significantly faster in terms of the expected optimization time. Sutton [15] investigated the closest string problem and proved that a multi-start (μ+1) GA requires less randomized fixed-parameter tractable (FPT) time than the same algorithm with crossover disabled.
However, there is some evidence that crossover is not always helpful. Richter et al. [16] constructed the Ignoble Trail functions and proved that mutation-based EAs optimize them more efficiently than GAs with crossover; the latter need exponential optimization time. Antipov and Naumov [17] compared crossover-based algorithms on RealJump functions with a slightly shifted optimum, which increases the runtime of all considered algorithms; the hybrid GA fails to find the shifted optimum with high probability.

2.2. Theoretical Analysis of Differential Evolution Algorithms

Most existing theoretical studies on DE are focused on continuous variants [33]. By estimating the probability density function of generated individuals, Zhou et al. [34] demonstrated that the selection mechanism of DE, which chooses mutually different parents for the generation of donor vectors, sometimes does not work positively on the performance of DE. Zaharie [35,36] and Zaharie and Micota [37] investigated the influence of the crossover rate both on the distribution of the number of mutated components and on the probability for a component to be taken from the mutant vector, as well as the influence of mutation and crossover on the diversity of the intermediate population. Wang and Huang [38] reduced DE to a one-dimensional stochastic model and investigated how the probability distribution of the population is connected to the mutation, selection, and crossover operations of DE. Opara and Arabas [39] compared several variants of the differential mutation using characteristics of the expected distribution of mutants, which demonstrated that the classic mutation operators yield similar search directions and differ primarily in the mutation range. Furthermore, they formalized the contour fitting notion and derived an analytical model that links the differential mutation operator with the adaptation of the range and direction of search [40].
By investigating the expected runtime of BDE, Doerr and Zheng [24] performed a first fundamental analysis of the working principles of discrete-coded DE. It was shown that BDE optimizes the important decision variables quickly, but finds it hard to optimize decision variables with a small influence on the objective function. Since BDE generates trial vectors by implementing a binary variant of binomial crossover accompanied by the mutation operation, it has characteristics significantly different from classic EAs or estimation-of-distribution algorithms.

2.3. Fixed-Budget Analysis and Approximation Error

To bridge the wide gap between theory and application, Jansen and Zarges [27] proposed an FBA framework for RSHs, by which the fitness values of random local search and the (1+1)EA were investigated for given iteration budgets. Under the framework of FBA, Jansen and Zarges [41] analyzed the any-time performance of EAs and artificial immune systems on a proposed dynamic benchmark problem. Nallaperuma et al. [42] considered the well-known traveling salesperson problem (TSP) and derived lower bounds on the expected fitness gain for a specified number of generations. Based on the Markov chain model of RSHs, Wang et al. [29] constructed a general framework for FBA, by which they found an analytic expression of the approximation error instead of asymptotic results on expected fitness values. Doerr et al. [43] built a bridge between runtime analysis and FBA, by which a huge body of work and a large collection of tools for the analysis of the expected optimization time can meet the new challenges introduced by the fixed-budget perspective.
Noting that hypermutations tend to be inferior on typical example functions in terms of runtime, Jansen and Zarges [30] conducted an FBA to explain why artificial immune systems are popular in spite of these proven drawbacks. It was shown that the inversely fitness-proportional mutation (IFPM) and the somatic contiguous hypermutation (CHM) can perform better than single-point mutation on OneMax when the FBA considers different starting points and varied iteration budgets. This indicates that the traditional perspective of expected optimization time may be unable to explain observed good performance, which is due to the limited length of runs; the perspective of fixed-budget computations therefore provides valuable information and additional insights.

3. Preliminaries

3.1. Problems

Considering a maximization problem
\[ \max f(x), \quad x = (x_1, \ldots, x_n) \in \{0, 1\}^n, \]
denote its optimal solution by x* and the optimal objective value by f*. The quality of a solution x is evaluated by its approximation error e(x) := f* − f(x). The error e(x) takes finitely many values, called error levels:
\[ e(x) \in \{e_0, e_1, \ldots, e_L\}, \quad 0 = e_0 < e_1 < \cdots < e_L, \]
where L is a non-negative integer. x is said to be at level i if e(x) = e_i, i ∈ {0, 1, …, L}. The collection of solutions at level i is denoted by X_i.
We investigate optimization problems of the form
\[ \max f(|x|), \tag{1} \]
where |x| := \sum_{i=1}^{n} x_i. The error levels of (1) take only n + 1 values. Two instances, the uni-modal OneMax problem and the multi-modal Deceptive problem, are considered in this paper.
Problem 1 (OneMax).
\[ \max f(x) = \sum_{i=1}^{n} x_i, \quad x = (x_1, \ldots, x_n) \in \{0, 1\}^n. \]
Problem 2 (Deceptive).
\[ \max f(x) = \begin{cases} \sum_{i=1}^{n} x_i, & \text{if } \sum_{i=1}^{n} x_i > n-1, \\[2pt] n - 1 - \sum_{i=1}^{n} x_i, & \text{otherwise,} \end{cases} \qquad x = (x_1, \ldots, x_n) \in \{0, 1\}^n. \]
For the OneMax problem, both exploration and exploitation are helpful for the convergence of EAs to the optimum, because exploration accelerates the convergence process and exploitation refines the precision of approximate solutions. However, for the Deceptive problem, local exploitation leads to convergence to the local optimum, which in turn increases the difficulty of jumping to the global optimum. That is, exploitation hinders convergence to the global optimum of the Deceptive problem, and thus the performance of EAs is dominantly influenced by their exploration ability.
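To make the two benchmark problems concrete, here is a minimal Python sketch (the function names and the list-of-bits encoding are our own illustrative choices, not part of the paper) that evaluates both fitness functions and the approximation error e(x) = f* − f(x).

```python
def one_max(x):
    """OneMax: f(x) equals the number of '1'-bits; the all-ones string is optimal."""
    return sum(x)

def deceptive(x):
    """Deceptive: f(x) = |x| if |x| > n - 1 (i.e., |x| = n), and n - 1 - |x| otherwise."""
    n, ones = len(x), sum(x)
    return ones if ones > n - 1 else n - 1 - ones

def error(x, f, f_star):
    """Approximation error e(x) = f* - f(x); e(x) = 0 iff x is optimal."""
    return f_star - f(x)

x = [1, 0, 1, 1]                 # n = 4, |x| = 3
print(error(x, one_max, 4))      # 1: one bit away from the optimum
print(error(x, deceptive, 4))    # 4: the highest error level e_n
```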

3.2. Evolutionary Algorithms

For the sake of analyzing binomial crossover while excluding the influence of the population and mutation strategies, the (1+1)EA presented in Algorithm 1 is taken as the baseline algorithm in our study. Its candidate solutions are generated by bitwise mutation with probability p_m. Appending binomial crossover to (1+1)EA yields (1+1)EA_C, illustrated in Algorithm 2: it first performs bitwise mutation with probability q_m, and then applies binomial crossover with rate CR to generate a candidate solution for selection.
The EAs investigated in this paper can be modeled as homogeneous Markov chains [31,32]. Given the error vector
\[ \tilde{e} = (e_0, e_1, \ldots, e_L) \]
and the initial distribution
\[ \tilde{q}^{[0]} = \left(q_0^{[0]}, q_1^{[0]}, \ldots, q_L^{[0]}\right), \]
the transition matrix of (1+1)EA and (1+1)EA_C for the optimization problem (1) can be written in the form
\[ \tilde{R} = (r_{i,j})_{(L+1) \times (L+1)}, \]
where
\[ r_{i,j} = \Pr\{x_{t+1} \in X_i \mid x_t \in X_j\}, \quad i, j = 0, \ldots, L. \]
Algorithm 1 (1+1)EA
1: counter t = 0;
2: randomly generate a solution x_0 = (x_1, …, x_n);
3: while the stopping criterion is not satisfied do
4:    generate the mutant y_t = (y_1, …, y_n) by bitwise mutation:
\[ y_i = \begin{cases} 1 - x_i, & \text{if } rnd_i < p_m, \\ x_i, & \text{otherwise,} \end{cases} \qquad rnd_i \sim U[0,1], \ i = 1, \ldots, n; \tag{5} \]
5:    if f(y_t) ≥ f(x_t) then
6:       x_{t+1} = y_t;
7:    else
8:       x_{t+1} = x_t;
9:    end if
10:   t = t + 1;
11: end while
Algorithm 2 (1+1)EA_C
1: counter t = 0;
2: randomly generate a solution x_0 = (x_1, …, x_n);
3: while the stopping criterion is not satisfied do
4:    generate the mutant v = (v_1, …, v_n) by bitwise mutation:
\[ v_i = \begin{cases} 1 - x_i, & \text{if } rnd1_i < q_m, \\ x_i, & \text{otherwise,} \end{cases} \qquad rnd1_i \sim U[0,1], \ i = 1, \ldots, n; \tag{6} \]
5:    sample rnd ∼ U{1, 2, …, n};
6:    generate the offspring y = (y_1, …, y_n) by performing binomial crossover on v:
\[ y_i = \begin{cases} v_i, & \text{if } i = rnd \ \text{or} \ rnd2_i < CR, \\ x_i, & \text{otherwise,} \end{cases} \qquad rnd2_i \sim U[0,1], \ i = 1, \ldots, n; \tag{7} \]
7:    if f(y) ≥ f(x_t) then
8:       x_{t+1} = y;
9:    else
10:      x_{t+1} = x_t;
11:   end if
12:   t = t + 1;
13: end while
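For readers who prefer executable code to pseudocode, the following Python sketch mirrors Algorithms 1 and 2; the function names, the bit-list encoding, and the explicit iteration budget are illustrative assumptions rather than parts of the original algorithms.

```python
import random

def mutate(x, rate):
    """Bitwise mutation: flip each bit independently with the given rate."""
    return [1 - xi if random.random() < rate else xi for xi in x]

def binomial_crossover(x, v, cr):
    """Binomial crossover (7): y_i = v_i at one forced index or when rnd2_i < CR."""
    forced = random.randrange(len(x))
    return [v[i] if i == forced or random.random() < cr else x[i]
            for i in range(len(x))]

def one_plus_one_ea(f, n, budget, p_m=None, q_m=None, cr=None):
    """(1+1)EA when cr is None, and (1+1)EA_C otherwise, with elitist selection."""
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(budget):
        if cr is None:
            y = mutate(x, p_m)                  # Algorithm 1, step 4
        else:
            v = mutate(x, q_m)                  # Algorithm 2, step 4
            y = binomial_crossover(x, v, cr)    # Algorithm 2, step 6
        if f(y) >= f(x):                        # elitist selection
            x = y
    return x
```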
Recalling that the solutions are updated by elitist selection, we know that R̃ is an upper triangular matrix that can be partitioned as
\[ \tilde{R} = \begin{pmatrix} 1 & r_0 \\ 0 & R \end{pmatrix}, \]
where the row vector r_0 collects the probabilities of transferring from non-optimal statuses to the optimal status, and R is the transition submatrix describing the transitions between non-optimal statuses.

3.3. Transition Probabilities

Transition probabilities can be determined by considering the generation of a candidate y with f(y) ≥ f(x), which is achieved if "l preferred bits" of x are flipped. If there are multiple solutions better than x, there could be multiple choices for both the number l and the locations of the "l preferred bits".
Example 1. For the OneMax problem, e(x) equals the number of '0'-bits in x. Denoting e(x) = j and e(y) = i, we know that y replaces x if and only if i ≤ j. Then, to generate a candidate y replacing x, the "l preferred bits" can be determined as follows.
  • If i = j, the "l preferred bits" consist of l/2 '1'-bits and l/2 '0'-bits, where l is an even number not greater than min{2j, 2(n−j)}.
  • While i < j, the "l preferred bits" could be any combination of (j−i+k) '0'-bits and k '1'-bits (l = j−i+2k), where 0 ≤ k ≤ min{i, n−j}. Here, k is not greater than i, because j−i+k cannot exceed j, the number of '0'-bits in x; meanwhile, k does not exceed n−j, the number of '1'-bits in x.
If an EA flips each bit with an identical probability, the probability of flipping l given bits depends only on l and is independent of their locations. Denoting by P(l) the probability of flipping a given set of l bits, we can determine the connection between the transition probability r_{i,j} and P(l).
As presented in Example 1, a transition from level j to level i (i < j) results from flipping (j−i+k) '0'-bits and k '1'-bits. Then, the transition probabilities for OneMax are
\[ r_{i,j} = \sum_{k=0}^{M} C_{n-j}^{k} C_{j}^{k+j-i} P(2k+j-i), \tag{8} \]
where M = min{n−j, i}, 0 ≤ i < j ≤ n.
According to the definition of the Deceptive problem, we get the following map from |x| to e(x):
\[ \begin{array}{cccccc} |x|: & 0 & 1 & \cdots & n-1 & n \\ e(x): & 1 & 2 & \cdots & n & 0 \end{array} \]
A transition from level j to level i (0 ≤ i < j ≤ n) is attributed to one of the following cases:
  • if i ≥ 1, the number of '1'-bits decreases from j−1 to i−1; this transition results from flipping (j−i+k) '1'-bits and k '0'-bits, where 0 ≤ k ≤ min{n−j+1, i−1};
  • if i = 0, all of the n−j+1 '0'-bits are flipped, and all of the '1'-bits stay unchanged.
Accordingly, the transition probabilities for Deceptive are
\[ r_{i,j} = \begin{cases} \sum_{k=0}^{M} C_{n-j+1}^{k} C_{j-1}^{k+j-i} P(2k+j-i), & i \ge 1, \\[2pt] P(n-j+1), & i = 0, \end{cases} \tag{10} \]
where M = min{n−j+1, i−1}.
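Equations (8) and (10) are straightforward to evaluate numerically. The sketch below (the function names are ours, and P may be any per-set flip probability, such as P_1 defined in Section 4) computes r_{i,j} for both problems.

```python
from math import comb

def r_onemax(i, j, n, P):
    """Transition probability from level j to level i < j for OneMax, Eq. (8)."""
    M = min(n - j, i)
    return sum(comb(n - j, k) * comb(j, k + j - i) * P(2 * k + j - i)
               for k in range(M + 1))

def r_deceptive(i, j, n, P):
    """Transition probability from level j to level i < j for Deceptive, Eq. (10)."""
    if i == 0:
        return P(n - j + 1)
    M = min(n - j + 1, i - 1)
    return sum(comb(n - j + 1, k) * comb(j - 1, k + j - i) * P(2 * k + j - i)
               for k in range(M + 1))

n, p = 10, 0.1
P1 = lambda l: p**l * (1 - p)**(n - l)   # (1+1)EA with mutation rate p
print(r_onemax(2, 4, n, P1), r_deceptive(0, 4, n, P1))
```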

3.4. Performance Metrics

To evaluate the performance of EAs for a given iteration budget, we propose two metrics, the expected approximation error (EAE) and the tail probability (TP) of the error after t consecutive iterations.
Definition 1. Let {x_t, t = 1, 2, …} be the individual sequence of an individual-based EA.
(1) The expected approximation error (EAE) after t consecutive iterations is
\[ e^{[t]} = E[e(x_t)] = \sum_{i=0}^{L} e_i \Pr\{e(x_t) = e_i\}. \tag{11} \]
(2) Given i > 0, the tail probability (TP) of the approximation error, i.e., the probability that e(x_t) is greater than or equal to e_i, is
\[ p^{[t]}(e_i) = \Pr\{e(x_t) \ge e_i\}. \tag{12} \]
The EAE is the expected fitness gap between a solution and the optimum; it measures the solution quality after running t generations. The TP describes the distribution of the approximation error over the non-optimal levels i > 0; in particular, p^{[t]}(e_1) is the probability of not finding the optimum.
Given two EAs A and B , if both EAE and TP of Algorithm A are smaller than those of Algorithm B for any iteration budget, we say Algorithm A outperforms Algorithm B on problem (1).
Definition 2. Let A and B be two EAs applied to problem (1).
1. Algorithm A outperforms B, denoted by A ⪰ B, if it holds that
  • e_A^{[t]} − e_B^{[t]} ≤ 0, ∀ t > 0;
  • p_A^{[t]}(e_i) − p_B^{[t]}(e_i) ≤ 0, ∀ t > 0, 0 < i ≤ L.
2. Algorithm A asymptotically outperforms B on problem (1), denoted by A ⪰_a B, if it holds that
  • lim_{t→+∞} (e_A^{[t]} − e_B^{[t]}) ≤ 0;
  • lim_{t→+∞} (p_A^{[t]}(e_i) − p_B^{[t]}(e_i)) ≤ 0.
The asymptotic outperformance is weaker than the outperformance.
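Both metrics can be computed exactly from the Markov chain model. The following sketch (a direct rendering of Definition 1 with NumPy; the function name is our own) obtains e^[t] and all tail probabilities from the transition matrix R̃ and an initial level distribution.

```python
import numpy as np

def eae_and_tp(R_tilde, q0, e_levels, t):
    """EAE e^[t] and tail probabilities p^[t](e_i), from q^[t] = R̃^t q^[0].

    R_tilde[i][j] = Pr{x_{t+1} in X_i | x_t in X_j}; columns sum to one.
    """
    q_t = np.linalg.matrix_power(R_tilde, t) @ q0
    eae = float(e_levels @ q_t)
    tails = np.cumsum(q_t[::-1])[::-1]   # tails[i] = Pr{e(x_t) >= e_i}
    return eae, tails[1:]                # tail probabilities for i = 1, ..., L
```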

4. Comparison of Transition Probabilities of Two EAs

In this section, we compare the transition probabilities of (1+1)EA and (1+1)EA_C. According to the connection between r_{i,j} and P(l), a comparison of transition probabilities can be conducted by considering the probabilities of flipping "l preferred bits".

4.1. Probabilities of Flipping Preferred Bits

Denote the probabilities of (1+1)EA and (1+1)EA_C flipping "l preferred bits" by P_1(l, p_m) and P_2(l, CR, q_m), respectively. By (5), we know
\[ P_1(l, p_m) = (p_m)^{l} (1-p_m)^{n-l}. \tag{13} \]
Since the mutation and the binomial crossover in Algorithm 2 are mutually independent, we can obtain P_2 by considering the crossover first. When (1+1)EA_C flips "l preferred bits", there are l+k (0 ≤ k ≤ n−l) bits of y set to v_i by (7), and the probability that a given set of l+k positions is taken from the mutant v is
\[ P_C(l+k, CR) = \frac{l+k}{n} (CR)^{l+k-1} (1-CR)^{n-l-k}. \]
If only the "l preferred bits" are flipped, we know
\[ P_2(l, CR, q_m) = \sum_{k=0}^{n-l} C_{n-l}^{k} P_C(l+k, CR) (q_m)^{l} (1-q_m)^{k} = \frac{1}{n}\left[l + (n-l)CR - n q_m CR\right] (CR)^{l-1} (q_m)^{l} (1-q_m CR)^{n-l-1}. \tag{14} \]
Note that (1+1)EA_C degrades to (1+1)EA when CR = 1, and (1+1)EA becomes random search when p_m = 1. Thus, we assume that p_m, CR, and q_m are all located in (0, 1). A fair comparison of transition probabilities is investigated under the identical parameter setting
\[ p_m = CR \cdot q_m = p, \quad 0 < p < 1. \tag{15} \]
Then we know q_m = p/CR, and Equation (14) implies
\[ P_2(l, CR, p/CR) = \frac{1}{n}\left[(n-l) + \frac{l - np}{CR}\right] p^{l} (1-p)^{n-l-1}. \tag{16} \]
Subtracting (13) from (16), we have
\[ P_2(l, CR, p/CR) - P_1(l, p) = \frac{1}{n}\left[(n-l) + \frac{l-np}{CR} - n(1-p)\right] p^{l} (1-p)^{n-l-1} = \left(\frac{1}{CR} - 1\right) \frac{l-np}{n}\, p^{l} (1-p)^{n-l-1}. \tag{17} \]
Since 0 < CR < 1, we conclude that P_2(l, CR, p/CR) is greater than P_1(l, p) if and only if l > np; that is, the introduction of binomial crossover enhances the exploration ability of (1+1)EA_C. For the case p ≤ 1/n, we get the following theorem.
Theorem 1. While 0 < p ≤ 1/n, it holds for all 1 ≤ l ≤ n that P_1(l, p) ≤ P_2(l, CR, p/CR).
Proof. The result is obtained directly from Equation (17) by setting p ≤ 1/n, since then l ≥ 1 ≥ np. □
For the popular setting where the mutation probability of the (1+1)EA is 1/n, the introduction of binomial crossover does increase the probability of generating new candidate solutions. We now investigate how this improvement contributes to the change of the transition probabilities.
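Theorem 1 is easy to verify numerically. The sketch below (with assumed illustrative values n = 20 and CR = 0.5) evaluates the closed forms (13) and (14) and confirms P_1(l, p) ≤ P_2(l, CR, p/CR) for all l when p = 1/n.

```python
n, CR = 20, 0.5
p = 1.0 / n

def P1(l):
    return p**l * (1 - p)**(n - l)                      # Eq. (13)

def P2(l):
    qm = p / CR                                         # identical setting (15)
    return ((l + (n - l) * CR - n * qm * CR) / n
            * CR**(l - 1) * qm**l * (1 - qm * CR)**(n - l - 1))   # Eq. (14)

assert all(P2(l) >= P1(l) - 1e-15 for l in range(1, n + 1))
print("Theorem 1 confirmed numerically for p = 1/n")
```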

4.2. Comparison of Transition Probabilities

To validate that an algorithm A is more efficient than an algorithm B, it is required that the probability of A transferring to promising statuses is not smaller than that of B, which is formalized as the following dominance relation.
Definition 3. Let A and B be two EAs with an identical initialization mechanism, and let Ã = (a_{i,j}) and B̃ = (b_{i,j}) be the transition matrices of A and B, respectively. Ã is said to dominate B̃, denoted by Ã ⪰ B̃, if it holds that
1. a_{i,j} ≥ b_{i,j} for all 0 ≤ i < j ≤ L;
2. a_{i,j} > b_{i,j} for some 0 ≤ i < j ≤ L.
Denote the transition probabilities of (1+1)EA and (1+1)EA_C by p_{i,j} and s_{i,j}, respectively. For the OneMax problem and the Deceptive problem, we get the relation of transition dominance on the premise that p_m = CRq_m = p ≤ 1/n.
Theorem 2. For (1+1)EA and (1+1)EA_C, denote their transition matrices by P̃ and S̃, respectively. On the condition that p_m = CRq_m = p ≤ 1/n, it holds for problem (1) that S̃ ⪰ P̃.
Proof. Denote the collection of all solutions at level k by S(k), k = 0, 1, …, n. We prove the result by considering the transition probabilities
\[ r_{i,j} = \Pr\{y \in S(i) \mid x \in S(j)\}, \quad i < j. \]
Since the function values of solutions are related merely to the number of '1'-bits, the probability of generating a solution y ∈ S(i) by performing mutation on x ∈ S(j) depends on the Hamming distance l = H(x, y). Given x ∈ S(j), S(i) is partitioned as S(i) = \cup_{l=1}^{L} S_l(i), where S_l(i) = {y ∈ S(i) | H(x, y) = l} and L is a positive integer not greater than n.
Accordingly, the probability of transferring from level j to level i is
\[ r_{i,j} = \sum_{l=1}^{L} \Pr\{y \in S_l(i) \mid x \in S(j)\} = \sum_{l=1}^{L} |S_l(i)| P(l), \]
where |S_l(i)| is the size of S_l(i) and P(l) is the probability of flipping "l preferred bits". Then,
\[ p_{i,j} = \sum_{l=1}^{L} |S_l(i)| P_1(l, p), \tag{19} \]
\[ s_{i,j} = \sum_{l=1}^{L} |S_l(i)| P_2(l, CR, p/CR). \tag{20} \]
Since p ≤ 1/n, Theorem 1 implies that
\[ P_1(l, p) \le P_2(l, CR, p/CR), \quad 1 \le l \le n. \]
Combining it with (19) and (20), we know
\[ p_{i,j} \le s_{i,j}, \quad 0 \le i < j \le n. \]
Then, we get the result by Definition 3. □
Example 2 (Comparison of transition probabilities for the OneMax problem). Let p_m = CRq_m = p ≤ 1/n. By (8), we have
\[ p_{i,j} = \sum_{k=0}^{M} C_{n-j}^{k} C_{j}^{k+j-i} P_1(2k+j-i, p), \tag{21} \]
\[ s_{i,j} = \sum_{k=0}^{M} C_{n-j}^{k} C_{j}^{k+j-i} P_2(2k+j-i, CR, p/CR), \tag{22} \]
where M = min{n−j, i}. Since p ≤ 1/n, Theorem 1 implies that
\[ P_1(2k+j-i, p) \le P_2(2k+j-i, CR, p/CR), \]
and by (21) and (22) we have p_{i,j} ≤ s_{i,j}, 0 ≤ i < j ≤ n.
Example 3 (Comparison of transition probabilities for the Deceptive problem). Let p_m = CRq_m = p ≤ 1/n. Equation (10) implies that
\[ p_{i,j} = \begin{cases} \sum_{k=0}^{M} C_{n-j+1}^{k} C_{j-1}^{k+j-i} P_1(2k+j-i, p), & i \ge 1, \\[2pt] P_1(n-j+1, p), & i = 0, \end{cases} \tag{23} \]
\[ s_{i,j} = \begin{cases} \sum_{k=0}^{M} C_{n-j+1}^{k} C_{j-1}^{k+j-i} P_2(2k+j-i, CR, p/CR), & i \ge 1, \\[2pt] P_2(n-j+1, CR, p/CR), & i = 0, \end{cases} \tag{24} \]
where M = min{n−j+1, i−1}. Similar to the analysis of Example 2, we conclude that p_{i,j} ≤ s_{i,j}, 0 ≤ i < j ≤ n.
These results demonstrate that when p ≤ 1/n, the introduction of binomial crossover leads to transition dominance of (1+1)EA_C over (1+1)EA. In the following section, we answer whether transition dominance leads to outperformance of (1+1)EA_C over (1+1)EA.

5. Analysis of Asymptotic Performance

In this section, we prove that (1+1)EA_C asymptotically outperforms (1+1)EA, using the average convergence rate [25,32].
Definition 4. The average convergence rate (ACR) of an EA for t generations is
\[ R_{EA}(t) = 1 - \left(e^{[t]}/e^{[0]}\right)^{1/t}. \]
Lemma 1 ([32], Theorem 1). Let R be the transition submatrix associated with a convergent EA. Under random initialization (i.e., the EA may start from any initial status with a positive probability), it holds that
\[ \lim_{t \to +\infty} R_{EA}(t) = 1 - \rho(R), \]
where ρ(R) is the spectral radius of R.
Lemma 1 presents the asymptotic characteristics of the ACR, by which we get the result on the asymptotic performance of EAs.
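As a numeric illustration of Lemma 1, the following sketch builds a small artificial upper-triangular submatrix R (an assumed toy chain, not one from the paper) and shows the ACR approaching 1 − ρ(R), where ρ(R) equals the largest diagonal entry.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 4
R = np.triu(rng.uniform(0, 0.03, (L, L)), k=1) + np.diag([0.5, 0.6, 0.7, 0.8])
e = np.array([1.0, 2.0, 3.0, 4.0])    # error levels e_1, ..., e_L
q = np.full(L, 0.25)                  # a random-initialization distribution

e0 = e @ q
for t in (10, 100, 1000):
    e_t = e @ np.linalg.matrix_power(R, t) @ q
    print(t, 1 - (e_t / e0) ** (1 / t))   # tends to 1 - rho(R) = 0.2
```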
Proposition 1. If Ã ⪰ B̃, there exists T > 0 such that
1. e_A^{[t]} ≤ e_B^{[t]}, ∀ t > T;
2. p_A^{[t]}(e_i) ≤ p_B^{[t]}(e_i), ∀ t > T, 1 ≤ i ≤ L.
Proof. By Lemma 1, for any ϵ > 0 there exists T > 0 such that
\[ e^{[0]}\left(\rho(R) - \epsilon\right)^{t} < e^{[t]} < e^{[0]}\left(\rho(R) + \epsilon\right)^{t}, \quad \forall t > T. \tag{27} \]
From the fact that the transition submatrix R of an elitist RSH is upper triangular, we conclude that
\[ \rho(R) = \max\{r_{1,1}, \ldots, r_{L,L}\}. \tag{28} \]
Denote
\[ \tilde{A} = (a_{i,j}) = \begin{pmatrix} 1 & a_0 \\ 0 & A \end{pmatrix}, \quad \tilde{B} = (b_{i,j}) = \begin{pmatrix} 1 & b_0 \\ 0 & B \end{pmatrix}. \]
While Ã ⪰ B̃, it holds that
\[ a_{j,j} = 1 - \sum_{i=0}^{j-1} a_{i,j} < 1 - \sum_{i=0}^{j-1} b_{i,j} = b_{j,j}, \quad 1 \le j \le L. \]
Then, Equation (28) implies that
\[ \rho(A) < \rho(B). \tag{29} \]
Applying (29) to (27) with ϵ < (ρ(B) − ρ(A))/2, we have
\[ e_A^{[t]} < e^{[0]}\left(\rho(A) + \epsilon\right)^{t} < e^{[0]}\left(\rho(B) - \epsilon\right)^{t} < e_B^{[t]}, \]
which proves the first conclusion.
Noting that the tail probability p^{[t]}(e_i) can be taken as the expected approximation error of an optimization problem with the error vector
\[ e = (\underbrace{0, \ldots, 0}_{i}, 1, \ldots, 1), \]
by (29) we have
\[ p_A^{[t]}(e_i) \le p_B^{[t]}(e_i), \quad \forall t > T, \ 1 \le i \le L. \]
The second conclusion is proven. □
By Definition 2 and Proposition 1, we get the following theorem comparing the asymptotic performance of (1+1)EA and (1+1)EA_C.
Theorem 3. If p_m = CRq_m = p ≤ 1/n, then (1+1)EA_C asymptotically outperforms (1+1)EA on problem (1).
Proof. The proof is completed by applying Theorem 2 and Proposition 1. □
On the condition that p_m = CRq_m = p ≤ 1/n, Theorem 3 indicates that, after a sufficiently large number of iterations, (1+1)EA_C performs better than (1+1)EA on problem (1). A further question is whether (1+1)EA_C outperforms (1+1)EA for finite t. We answer this question in the next two sections.

6. Comparison of the Two EAs on OneMax

In this section, we show that the outperformance introduced by binomial crossover can be established for the uni-modal OneMax problem, based on the following lemma [29].
Lemma 2 ([29], Theorem 3). Let
\[ \tilde{e} = (e_0, e_1, \ldots, e_L), \quad \tilde{v} = (v_0, v_1, \ldots, v_L)^{T}, \]
where 0 ≤ e_{i−1} ≤ e_i, i = 1, …, L, and v_i > 0, i = 0, 1, …, L. If the transition matrices R̃ and S̃ satisfy
\[ s_{j,j} \le r_{j,j}, \quad 1 \le j \le L, \tag{30} \]
\[ \sum_{l=0}^{i-1} (r_{l,j} - s_{l,j}) \le 0, \quad 0 \le i < j \le L, \tag{31} \]
\[ \sum_{l=0}^{i} (s_{l,j-1} - s_{l,j}) \ge 0, \quad 0 \le i < j-1 < L, \tag{32} \]
then it holds that
\[ \tilde{e}\tilde{R}^{t}\tilde{v} \ge \tilde{e}\tilde{S}^{t}\tilde{v}. \]
For the EAs investigated in this study, conditions (30)–(32) are satisfied thanks to the monotonicity of the transition probabilities.
Lemma 3. When p ≤ 1/n (n ≥ 3), P_1(l, p) and P_2(l, CR, p/CR) are monotonically decreasing in l.
Proof. When p ≤ 1/n, Equations (13) and (14) imply that
\[ \frac{P_1(l+1, p)}{P_1(l, p)} = \frac{p}{1-p} \le \frac{1}{n-1}, \tag{33} \]
\[ \frac{P_2(l+1, CR, p/CR)}{P_2(l, CR, p/CR)} = \frac{(l+1)(1-CR) + nCR(1-p/CR)}{l(1-CR) + nCR(1-p/CR)} \cdot \frac{p}{1-p} \le \frac{l+1}{l} \cdot \frac{p}{1-p} \le \frac{l+1}{l} \cdot \frac{1}{n-1}, \tag{34} \]
all of which are not greater than 1 when n ≥ 3. Thus, P_1(l, p) and P_2(l, CR, p/CR) are monotonically decreasing in l. □
Lemma 4. For the OneMax problem, p_{i,j} and s_{i,j} are monotonically decreasing in j.
Proof. We validate the monotonicity of p_{i,j} for (1+1)EA; that of s_{i,j} can be confirmed in a similar way.
Let 0 ≤ i < j < n. By (21) we know
\[ p_{i,j+1} = \sum_{k=0}^{M'} C_{n-j-1}^{k} C_{j+1}^{k+j+1-i} P_1(2k+j+1-i, p), \tag{35} \]
\[ p_{i,j} = \sum_{k=0}^{M} C_{n-j}^{k} C_{j}^{k+j-i} P_1(2k+j-i, p), \tag{36} \]
where M' = min{n−j−1, i} and M = min{n−j, i}. Moreover, (33) implies that
\[ \frac{C_{j+1}^{k+j+1-i} P_1(2k+j+1-i, p)}{C_{j}^{k+j-i} P_1(2k+j-i, p)} = \frac{j+1}{(j+1)-(i-k)} \cdot \frac{p}{1-p} \le \frac{j+1}{2} \cdot \frac{1}{n-1} < 1, \tag{37} \]
and we know
\[ C_{j+1}^{k+j+1-i} P_1(2k+j+1-i, p) < C_{j}^{k+j-i} P_1(2k+j-i, p). \]
Note that
\[ \min\{n-j-1, i\} \le \min\{n-j, i\}, \quad C_{n-j-1}^{k} < C_{n-j}^{k}. \tag{38} \]
From (35)–(38) we conclude that
\[ p_{i,j+1} < p_{i,j}, \quad 0 \le i < j < n. \]
Similarly, we can validate that
\[ s_{i,j+1} < s_{i,j}, \quad 0 \le i < j < n. \]
In conclusion, p_{i,j} and s_{i,j} are monotonically decreasing in j. □
Theorem 4. On the condition that p_m = CRq_m = p ≤ 1/n, it holds for the OneMax problem that
\[ (1+1)EA_C \succeq (1+1)EA. \]
Proof. Given the initial distribution q̃^{[0]} and the transition matrix R̃, the level distribution at iteration t is
\[ \tilde{q}^{[t]} = \tilde{R}^{t} \tilde{q}^{[0]}. \tag{39} \]
Denote
\[ \tilde{e} = (e_0, e_1, \ldots, e_L), \quad \tilde{o}_i = (\underbrace{0, \ldots, 0}_{i}, 1, \ldots, 1). \]
Premultiplying (39) by ẽ and õ_i, respectively, we get
\[ e^{[t]} = \tilde{e}\tilde{R}^{t}\tilde{q}^{[0]}, \tag{40} \]
\[ p^{[t]}(e_i) = \Pr\{e(x_t) \ge e_i\} = \tilde{o}_i \tilde{R}^{t} \tilde{q}^{[0]}. \tag{41} \]
Meanwhile, by Theorem 2 we have
\[ s_{j,j} \le p_{j,j}, \quad 1 \le j \le L, \tag{42} \]
\[ \sum_{l=0}^{i-1} (p_{l,j} - s_{l,j}) \le 0, \quad 0 \le i < j \le L, \tag{43} \]
and Lemma 4 implies
\[ \sum_{l=0}^{i} (s_{l,j-1} - s_{l,j}) \ge 0, \quad \sum_{l=0}^{i} (p_{l,j-1} - p_{l,j}) \ge 0, \quad 0 \le i < j-1 < L. \tag{44} \]
Then, (42)–(44) validate the satisfaction of conditions (30)–(32), and by Lemma 2 we know
\[ \tilde{e}\tilde{S}^{t}\tilde{q}^{[0]} \le \tilde{e}\tilde{P}^{t}\tilde{q}^{[0]}, \ \forall t > 0; \quad \tilde{o}_i\tilde{S}^{t}\tilde{q}^{[0]} \le \tilde{o}_i\tilde{P}^{t}\tilde{q}^{[0]}, \ \forall t > 0, \ 1 \le i < n. \]
Then, we get the conclusion by Definition 2. □
The above theorem demonstrates that the dominance of transition matrices introduced by the binomial crossover operator leads to the outperformance of (1+1)EA_C on the uni-modal OneMax problem.

7. Comparison of the Two EAs on Deceptive

In this section, we show that the outperformance of (1+1)EA_C over (1+1)EA may not always hold on Deceptive. Then, we propose an adaptive strategy of parameter setting arising from the theoretical analysis, with which (1+1)EA_C performs better in terms of the tail probability.

7.1. Numerical Demonstration for Inconsistency between the Transition Dominance and the Algorithm Outperformance

For the Deceptive problem, we first present a counterexample to show that even if the transition matrix of one EA dominates that of another, we cannot conclude that the former EA outperforms the latter.
Example 4. We construct two artificial Markov chains as models of two EAs. Let EA_R and EA_S be two EAs starting from the identical initial distribution
\[ \tilde{p}^{[0]} = \left(0, \tfrac{1}{n}, \ldots, \tfrac{1}{n}\right)^{T}, \]
and let the respective transition matrices be
\[ \tilde{R} = \begin{pmatrix} 1 & \frac{1}{n^3} & \frac{2}{n^3} & \cdots & \frac{n}{n^3} \\ & 1-\frac{1}{n^3} & \frac{1}{n^2} & & \\ & & 1-\frac{1}{n^2}-\frac{2}{n^3} & \ddots & \\ & & & \ddots & \frac{n-1}{n^2} \\ & & & & 1-\frac{1}{n} \end{pmatrix} \]
and
\[ \tilde{S} = \begin{pmatrix} 1 & \frac{2}{n^3} & \frac{4}{n^3} & \cdots & \frac{2n}{n^3} \\ & 1-\frac{2}{n^3} & \frac{1}{n^2}+\frac{1}{2n} & & \\ & & 1-\frac{n^2+2n+8}{2n^3} & \ddots & \\ & & & \ddots & \frac{n-1}{n^2}+\frac{n-1}{2n} \\ & & & & 1-\frac{n^2+n+2}{2n^2} \end{pmatrix}, \]
that is, r_{0,j} = j/n^3 and r_{j−1,j} = (j−1)/n^2, while s_{0,j} = 2j/n^3 and s_{j−1,j} = (j−1)(1/n^2 + 1/(2n)), with all diagonal entries fixed by the column sums. Obviously, it holds that S̃ ⪰ R̃. Through computer simulation, we get the curve of the EAE difference of the two EAs in Figure 1a and the curve of the TP difference in Figure 1b. From Figure 1b, it is clear that EA_S does not always outperform EA_R, because the difference of TPs is negative at the early stage of the iteration process but positive later.
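The counterexample can be replayed numerically. The sketch below builds the two matrices for n = 10 (as reconstructed above, which is our reading of the original display), verifies the dominance S̃ ⪰ R̃, and prints the TP difference at level 1 over several budgets; according to Figure 1b, this difference changes sign during the iteration process.

```python
import numpy as np

n = 10
R = np.zeros((n + 1, n + 1)); S = np.zeros((n + 1, n + 1))
R[0, 0] = S[0, 0] = 1.0
for j in range(1, n + 1):
    R[0, j] = j / n**3                       # direct jumps to the optimum
    S[0, j] = 2 * j / n**3
    if j >= 2:                               # one-level improvements
        R[j - 1, j] = (j - 1) / n**2
        S[j - 1, j] = (j - 1) * (1 / n**2 + 1 / (2 * n))
    R[j, j] = 1 - R[:j, j].sum()             # diagonals fixed by column sums
    S[j, j] = 1 - S[:j, j].sum()

iu = np.triu_indices(n + 1, k=1)
assert (S[iu] >= R[iu]).all()                # dominance of S̃ over R̃
q0 = np.r_[0.0, np.full(n, 1.0 / n)]         # uniform start over non-optimal levels
for t in (1, 10, 100, 1000, 10000):
    tp_R = 1 - (np.linalg.matrix_power(R, t) @ q0)[0]
    tp_S = 1 - (np.linalg.matrix_power(S, t) @ q0)[0]
    print(t, tp_R - tp_S)
```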
Now we turn to discussing (1+1)EA and (1+1)EA_C on Deceptive. We demonstrate that (1+1)EA_C may not outperform (1+1)EA over all generations, although the transition matrix of (1+1)EA_C dominates that of (1+1)EA.
Example 5. In (1+1)EA and (1+1)EA_C, set p_m = CRq_m = 1/n; for (1+1)EA_C, let q_m = 1/2 and CR = 2/n. The numerical simulation results of the EAEs and TPs over 5000 independent runs are depicted in Figure 2. They show that, for n ≥ 9, both the EAE and the TP of (1+1)EA can be smaller than those of (1+1)EA_C. This indicates that the dominance of the transition matrix does not always guarantee the outperformance of the corresponding algorithm.
With p_m = CRq_m = p ≤ 1/n, although binomial crossover leads to transition dominance of (1+1)EA_C over (1+1)EA, the enhancement of exploitation plays a governing role in the iteration process. Thus, the imbalance of exploration and exploitation leads to poor performance of (1+1)EA_C at some stage of the iteration process. As the previous two examples show, the outperformance of (1+1)EA_C cannot be drawn from the dominance of transition matrices alone.
The fitness landscape of Deceptive implies that the global convergence of EAs on Deceptive is principally attributed to the direct transition from level j to level 0, quantified by the transition probability r_{0,j}. By investigating the impact of binomial crossover on r_{0,j}, we arrive at an adaptive strategy for regulating the mutation rate and the crossover rate, by which the performance of both (1+1)EA and (1+1)EA_C is enhanced.

7.2. Comparisons on the Probabilities to Transfer from Non-Optimal Statuses to the Optimal Status

A comparison between p_{0,j} and s_{0,j} is performed by investigating their monotonicity. Substituting (13) and (14) into (23) and (24), respectively, we have
\[ p_{0,j} = P_1(n-j+1, p_m) = (p_m)^{n-j+1} (1-p_m)^{j-1}, \tag{45} \]
\[ s_{0,j} = P_2(n-j+1, CR, q_m) = \frac{1}{n}\left[(n-j+1)(1-CR) + nCR(1-q_m)\right] (CR)^{n-j} (q_m)^{n-j+1} (1-q_m CR)^{j-2}. \tag{46} \]
We first investigate the maximum value of p_{0,j}, which gives the ideal performance of (1+1)EA on the Deceptive problem.
Theorem 5. While
\[ p_m = \frac{n-j+1}{n}, \tag{47} \]
p_{0,j} attains its maximum value
\[ p_{0,j}^{max} = \left(\frac{n-j+1}{n}\right)^{n-j+1} \left(\frac{j-1}{n}\right)^{j-1}. \]
Proof. By (45), we know
\[ \frac{\partial p_{0,j}}{\partial p_m} = \left(n-j+1 - n p_m\right) (p_m)^{n-j} (1-p_m)^{j-2}. \]
While p_m = (n−j+1)/n, p_{0,j} attains its maximum value
\[ p_{0,j}^{max} = P_1\left(n-j+1, \frac{n-j+1}{n}\right) = \left(\frac{n-j+1}{n}\right)^{n-j+1} \left(\frac{j-1}{n}\right)^{j-1}. \] □
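A quick numerical check of Theorem 5 (with assumed illustrative values n = 10 and j = 4) scans p_m over a grid and confirms that p_{0,j} peaks at p_m = (n − j + 1)/n.

```python
import numpy as np

n, j = 10, 4
grid = np.linspace(0.01, 0.99, 9999)
p0j = grid**(n - j + 1) * (1 - grid)**(j - 1)   # Eq. (45)
print(grid[np.argmax(p0j)], (n - j + 1) / n)    # both approximately 0.7
```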
The influence of binomial crossover on s_{0,j} is investigated under the condition p_m = q_m. By regulating CR, we compare p_{0,j} with the maximum value s_{0,j}^{max} of s_{0,j}.
Theorem 6. On the condition that p_m = q_m, the following results hold.
1. p_{0,1} = s_{0,1}^{max}.
2. If q_m > (n−1)/n, then p_{0,2} < s_{0,2}^{max}; otherwise, p_{0,2} = s_{0,2}^{max}.
3. For all j ∈ {3, …, n−1}, p_{0,j} < s_{0,j}^{max} if q_m > (n−j)/(n−1); otherwise, s_{0,j}^{max} = p_{0,j}.
4. If q_m > 1/n, then p_{0,n} < s_{0,n}^{max}; otherwise, s_{0,n}^{max} = p_{0,n}.
Proof. Note that (1+1)EA_C degrades to (1+1)EA when CR = 1. Hence, if the maximum value s_{0,j}^{max} of s_{0,j} is attained by setting CR = 1, we have s_{0,j}^{max} = p_{0,j}; otherwise, it holds that s_{0,j}^{max} > p_{0,j}.
(1) For the case j = 1, Equation (46) implies
\[ s_{0,1} = (q_m)^{n} (CR)^{n-1}. \]
Obviously, s_{0,1} is monotonically increasing in CR and attains its maximum at CR = 1. Then, by (45) we get s_{0,1}^{max} = p_{0,1}.
(2) While j = 2, by (46) we have
\[ \frac{\partial s_{0,2}}{\partial CR} = \frac{n-1}{n} (q_m)^{n-1} (CR)^{n-3} \left[(n-2) + (1 - n q_m) CR\right]. \]
  • If 0 < q_m ≤ (n−1)/n, s_{0,2} is monotonically increasing in CR and attains its maximum at CR = 1. For this case, we know s_{0,2}^{max} = p_{0,2}.
  • While (n−1)/n < q_m < 1, s_{0,2} attains its maximum value s_{0,2}^{max} by setting
    \[ CR = \frac{n-2}{n q_m - 1}. \]
    Then, we have s_{0,2}^{max} > p_{0,2}.
(3) For the case 3 ≤ j ≤ n−1, we write
\[ s_{0,j} = \frac{n-j+1}{n} (q_m)^{n-j+1} I_1 + \frac{(j-1)(1-q_m)}{n} (q_m)^{n-j+1} I_2, \tag{48} \]
where
\[ I_1 = (CR)^{n-j} (1 - q_m CR)^{j-1}, \quad I_2 = (CR)^{n-j+1} (1 - q_m CR)^{j-2}. \]
Then,
\[ \frac{\partial I_1}{\partial CR} = (CR)^{n-j-1} (1 - q_m CR)^{j-2} \left[(n-j) - (n-1) q_m CR\right], \quad \frac{\partial I_2}{\partial CR} = (CR)^{n-j} (1 - q_m CR)^{j-3} \left[(n-j+1) - (n-1) q_m CR\right]. \]
  • While 0 < q_m ≤ (n−j)/(n−1), both I_1 and I_2 are increasing in CR. For this case, s_{0,j} attains its maximum at CR = 1, and we have s_{0,j}^{max} = p_{0,j}.
  • If (n−j+1)/(n−1) ≤ q_m ≤ 1, I_1 attains its maximum at CR = (n−j)/((n−1)q_m), and I_2 attains its maximum at CR = (n−j+1)/((n−1)q_m). Then, s_{0,j} attains its maximum value s_{0,j}^{max} at some
    \[ CR \in \left[\frac{n-j}{(n-1)q_m}, \frac{n-j+1}{(n-1)q_m}\right]. \tag{49} \]
    Accordingly, we know s_{0,j}^{max} > p_{0,j}.
  • If (n−j)/(n−1) < q_m < (n−j+1)/(n−1), I_1 attains its maximum at CR = (n−j)/((n−1)q_m), and I_2 is monotonically increasing in CR. Then, s_{0,j} attains its maximum value s_{0,j}^{max} at some
    \[ CR \in \left[\frac{n-j}{(n-1)q_m}, 1\right], \tag{50} \]
    and we know s_{0,j}^{max} > p_{0,j}.
(4) While j = n, Equation (46) implies that
\[ \frac{\partial s_{0,n}}{\partial CR} = \frac{(n-1) q_m}{n} (1 - q_m CR)^{n-3} \left[(1 - 2 q_m) - (n - 1 - n q_m) q_m CR\right]. \]
Denoting
\[ g(q_m, CR) = (1 - 2 q_m) - (n - 1 - n q_m) q_m CR, \]
we can confirm the sign of ∂s_{0,n}/∂CR by considering
\[ \frac{\partial g(q_m, CR)}{\partial CR} = -(n - 1 - n q_m) q_m. \]
  • While 0 < q_m ≤ (n−1)/n, g(q_m, CR) is monotonically decreasing in CR, and its minimum value is
    \[ g(q_m, 1) = (n q_m - 1)(q_m - 1). \]
    The maximum value of g(q_m, CR) is
    \[ g(q_m, 0) = 1 - 2 q_m. \]
    (a) If 0 < q_m ≤ 1/n, we have
    \[ g(q_m, CR) \ge g(q_m, 1) \ge 0. \]
    Thus ∂s_{0,n}/∂CR ≥ 0, and s_{0,n} is increasing in CR. For this case, s_{0,n} attains its maximum at CR = 1, and we have s_{0,n}^{max} = p_{0,n}.
    (b) If 1/n < q_m ≤ 1/2, s_{0,n} attains its maximum value s_{0,n}^{max} at
    \[ CR = \frac{1 - 2 q_m}{q_m (n - 1 - n q_m)}. \]
    Thus, s_{0,n}^{max} > p_{0,n}.
    (c) If 1/2 < q_m ≤ (n−1)/n, g(q_m, 0) < 0, and then s_{0,n} is decreasing in CR. Its maximum is obtained by setting CR = 0, and we know s_{0,n}^{max} > p_{0,n}.
  • While (n−1)/n < q_m ≤ 1, g(q_m, CR) is increasing in CR, and its maximum value is
    \[ g(q_m, 1) = (n q_m - 1)(q_m - 1) < 0. \]
    Then, s_{0,n} is monotonically decreasing in CR, and its maximum is obtained by setting CR = 0. Accordingly, we know s_{0,n}^{max} > p_{0,n}.
In summary, s_{0,n}^{max} > p_{0,n} while q_m > 1/n; otherwise, s_{0,n}^{max} = p_{0,n}. □
Theorems 5 and 6 present the "best" settings that maximize the transition probabilities from non-optimal statuses to the optimal level, from which we derive a parameter adaptive strategy that greatly enhances the exploration of the compared EAs.
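The case distinction of Theorem 6 can also be observed numerically. The sketch below (with assumed illustrative values n = 10, j = 5, and q_m = 0.8) scans CR and compares the maximum of s_{0,j} over CR with p_{0,j}; since q_m > (n − j)/(n − 1) here, the maximum exceeds p_{0,j}, in line with case 3 of the theorem.

```python
import numpy as np

n, j, qm = 10, 5, 0.8
l = n - j + 1
CR = np.linspace(1e-3, 1.0, 100000)
s0j = ((l * (1 - qm * CR) + (j - 1) * CR * (1 - qm)) / n
       * CR**(l - 1) * qm**l * (1 - qm * CR)**(j - 2))   # Eq. (46)
p0j = qm**l * (1 - qm)**(j - 1)                          # Eq. (45)
print(s0j.max(), p0j, bool(s0j.max() > p0j))             # True: s exceeds p here
```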

7.3. Parameter Adaptive Strategy to Enhance Exploration of EAs

Since the improvement of the level index obtained by replacing x with y is bounded by the Hamming distance between them, a transition from level j to level i (i < j) satisfies j − i ≤ H(x, y). Hence, while local exploitation leads to a transition from level j to a non-optimal level i, a practical adaptive strategy for the parameters can be obtained from the Hamming distance between x and y.
When (1+1)EA is located at a solution x at level j, Equation (47) implies that the "best" setting of the mutation rate is p_m(j) = (n−j+1)/n. Once it transfers to a solution y at level i (i < j), the "best" setting changes to p_m(i) = (n−i+1)/n. The difference between the two "best" settings is (j−i)/n, bounded from above by H(x, y)/n. Accordingly, whenever x is replaced by y, the mutation rate of (1+1)EA can be updated to
\[ p_m' = p_m + \frac{H(x, y)}{n}. \tag{51} \]
For (1+1)EA_C, the parameter q_m is adapted by the strategy consistent with that of p_m in order to focus on the influence of CR, that is,
\[ q_m' = q_m + \frac{H(x, y)}{n}. \tag{52} \]
Since s_{0,j} exhibits different monotonicity at different levels, one cannot get an identical strategy for the adaptive setting of CR. As a compromise, we consider the case 3 ≤ j ≤ n−1, which is obtained by random initialization with overwhelming probability.
According to the proof of Theorem 6, CR should be set as large as possible in the case q_m ∈ (0, (n−j)/(n−1)]; while q_m ∈ ((n−j)/(n−1), 1], the maximizing CR lies in intervals whose boundary values are (n−j)/((n−1)q_m) and (n−j+1)/((n−1)q_m), given by (49) and (50), respectively. Then, while q_m is updated by (52), the update of CR can be chosen to satisfy
\[ CR' q_m' = CR\, q_m + \frac{H(x, y)}{n-1}. \]
Accordingly, the adaptive setting of CR is
\[ CR' = \left(CR\, q_m + \frac{H(x, y)}{n-1}\right) \Big/ q_m', \tag{53} \]
where q_m' is updated by (52).
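A minimal sketch of the resulting update rules (51)–(53) is given below; the function names are ours, and clamping the rates at 1 is an added safeguard not specified above.

```python
def hamming(x, y):
    """Hamming distance H(x, y): the number of differing bits."""
    return sum(xi != yi for xi, yi in zip(x, y))

def adapt_ea(p_m, x, y, n):
    """(1+1)EA rule (51): raise the mutation rate by H(x, y)/n after x -> y."""
    return min(p_m + hamming(x, y) / n, 1.0)    # clamp at 1 (our safeguard)

def adapt_ea_c(q_m, cr, x, y, n):
    """(1+1)EA_C rules (52)-(53): raise q_m by H(x, y)/n and reset CR so that
    CR' * q_m' = CR * q_m + H(x, y)/(n - 1)."""
    h = hamming(x, y)
    q_new = min(q_m + h / n, 1.0)
    cr_new = min((cr * q_m + h / (n - 1)) / q_new, 1.0)
    return q_new, cr_new
```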
By incorporating the adaptive strategy (51) into (1+1)EA, we compare the performance of its adaptive variant with the adaptive (1+1)EA_C that regulates its mutation rate and crossover rate by (52) and (53), respectively. For 13- to 20-dimensional Deceptive problems, numerical simulation of the tail probability is implemented with 10,000 independent runs. The initial value of p_m is set to 1/n. To investigate the sensitivity of the adaptive strategy to the initial value of q_m, the mutation rate q_m in (1+1)EA_C is initialized with the values 1/n, 3/(2n), and 2/n, and the corresponding variants are denoted by (1+1)EA_C1, (1+1)EA_C2, and (1+1)EA_C3, respectively.
The converging curves of the averaged TPs are illustrated in Figure 3. Compared to the EAs with parameters fixed during the evolution process, the performance of the adaptive EAs on Deceptive is significantly improved. Furthermore, the converging curves of the adaptive (1+1)EA_C are not sensitive to the initial mutation rate. Although transition dominance does not necessarily lead to outperformance of (1+1)EA_C over (1+1)EA, the proposed adaptive strategy greatly enhances the global exploration of (1+1)EA_C, and consequently we get an improved adaptive (1+1)EA_C that is not sensitive to the initial mutation rate.

8. Conclusions and Discussions

Under the framework of fixed-budget analysis, we conduct a pioneering analysis of the influence of binomial crossover on the approximation error of EAs. The performance of EAs after running finitely many generations is measured by two metrics, the expected approximation error and the tail probability of the error, by which we make a case study comparing the performance of (1+1)EA and of (1+1)EA_C with binomial crossover.
Starting from the comparison of the probabilities of flipping "l preferred bits", it is proven that, under proper conditions, the incorporation of binomial crossover leads to the dominance of transition probabilities; that is, the probability of transferring to any promising status is improved. Accordingly, the asymptotic performance of (1+1)EA_C is superior to that of (1+1)EA.
It is found that the dominance of transition probabilities guarantees that (1+1)EA_C outperforms (1+1)EA on OneMax in terms of both the expected approximation error and the tail probability. However, this dominance does not lead to outperformance on Deceptive. This means that using binomial crossover may improve the performance on some problems but not on others.
For Deceptive, an adaptive strategy of parameter setting is proposed based on the monotonicity analysis of the transition probabilities. Numerical simulations demonstrate that it can significantly improve the exploration ability of both (1+1)EA_C and (1+1)EA, and the superiority of binomial crossover is further strengthened by the adaptive strategy. Thus, a problem-specific adaptive strategy is helpful for improving the performance of EAs.
Our future work will focus on a further study of the adaptive setting of the crossover rate in population-based EAs on more complex problems, as well as the development of adaptive EAs improved by the introduction of binomial crossover.

Author Contributions

Conceptualization, J.H. and X.Z.; formal analysis, C.W.; writing—original draft preparation, C.W.; writing—review and editing, Y.C. and J.H.; funding acquisition, Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Fundamental Research Funds for the Central Universities grant number WUT:2020IB006.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tam, H.H.; Leung, M.F.; Wang, Z.; Ng, S.C.; Cheung, C.C.; Lui, A.K. Improved adaptive global replacement scheme for MOEA/D-AGR. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 2153–2160.
  2. Tam, H.H.; Ng, S.C.; Lui, A.K.; Leung, M.F. Improved activation schema on automatic clustering using differential evolution algorithm. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), San Sebastian, Spain, 5–8 June 2017; pp. 1749–1756.
  3. Gao, W.; Li, G.; Zhang, Q.; Luo, Y.; Wang, Z. Solving nonlinear equation systems by a two-phase evolutionary algorithm. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 5652–5663.
  4. Jansen, T.; Wegener, I. The analysis of evolutionary algorithms—A proof that crossover really can help. Algorithmica 2002, 34, 47–66.
  5. Kötzing, T.; Sudholt, D.; Theile, M. How crossover helps in pseudo-boolean optimization. In Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, Dublin, Ireland, 12–16 July 2011; pp. 989–996.
  6. Corus, D.; Oliveto, P.S. Standard steady state genetic algorithms can hillclimb faster than mutation-only evolutionary algorithms. IEEE Trans. Evol. Comput. 2017, 22, 720–732.
  7. Dang, D.C.; Friedrich, T.; Kötzing, T.; Krejca, M.S.; Lehre, P.K.; Oliveto, P.S.; Sudholt, D.; Sutton, A.M. Escaping local optima using crossover with emergent diversity. IEEE Trans. Evol. Comput. 2017, 22, 484–497.
  8. Sudholt, D. How crossover speeds up building block assembly in genetic algorithms. Evol. Comput. 2017, 25, 237–274.
  9. Pinto, E.C.; Doerr, C. A simple proof for the usefulness of crossover in black-box optimization. In Proceedings of the International Conference on Parallel Problem Solving from Nature, Coimbra, Portugal, 8–12 September 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 29–41.
  10. Oliveto, P.S.; Sudholt, D.; Witt, C. A tight lower bound on the expected runtime of standard steady state genetic algorithms. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference, Cancun, Mexico, 8–12 July 2020; pp. 1323–1331.
  11. Lengler, J.; Meier, J. Large population sizes and crossover help in dynamic environments. In Proceedings of the International Conference on Parallel Problem Solving from Nature, Leiden, The Netherlands, 5–9 September 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 610–622.
  12. Lehre, P.K.; Yao, X. Crossover can be constructive when computing unique input output sequences. In Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, Melbourne, Australia, 7–10 December 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 595–604.
  13. Doerr, B.; Happ, E.; Klein, C. Crossover can provably be useful in evolutionary computation. Theor. Comput. Sci. 2012, 425, 17–33.
  14. Doerr, B.; Johannsen, D.; Kötzing, T.; Neumann, F.; Theile, M. More effective crossover operators for the all-pairs shortest path problem. Theor. Comput. Sci. 2013, 471, 12–26.
  15. Sutton, A.M. Fixed-parameter tractability of crossover: Steady-state GAs on the closest string problem. Algorithmica 2021, 83, 1138–1163.
  16. Richter, J.N.; Wright, A.; Paxton, J. Ignoble trails-where crossover is provably harmful. In Proceedings of the International Conference on Parallel Problem Solving from Nature, Dortmund, Germany, 13–17 September 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 92–101.
  17. Antipov, D.; Naumov, S. The effect of non-symmetric fitness: The analysis of crossover-based algorithms on RealJump functions. In Proceedings of the 16th ACM/SIGEVO Conference on Foundations of Genetic Algorithms, Virtual, 6–8 September 2021; pp. 1–15.
  18. Das, S.; Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput. 2011, 15, 4–31.
  19. Das, S.; Mullick, S.S.; Suganthan, P. Recent advances in differential evolution—An updated survey. Swarm Evol. Comput. 2016, 27, 1–30.
  20. Sepesy Maučec, M.; Brest, J. A review of the recent use of differential evolution for large-scale global optimization: An analysis of selected algorithms on the CEC 2013 LSGO benchmark suite. Swarm Evol. Comput. 2019, 50, 100428.
  21. Pant, M.; Zaheer, H.; Garcia-Hernandez, L.; Abraham, A. Differential evolution: A review of more than two decades of research. Eng. Appl. Artif. Intell. 2020, 90, 103479.
  22. Lin, C.; Qing, A.; Feng, Q. A comparative study of crossover in differential evolution. J. Heuristics 2011, 17, 675–703.
  23. Gong, T.; Tuson, A.L. Differential evolution for binary encoding. In Soft Computing in Industrial Applications; Saad, A., Dahal, K., Sarfraz, M., Roy, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 251–262.
  24. Doerr, B.; Zheng, W. Working principles of binary differential evolution. Theor. Comput. Sci. 2020, 801, 110–142.
  25. Chen, Y.; He, J. Average convergence rate of evolutionary algorithms in continuous optimization. Inf. Sci. 2021, 562, 200–219.
  26. Xu, T.; He, J.; Shang, C. Helper and equivalent objectives: Efficient approach for constrained optimization. IEEE Trans. Cybern. 2022, 52, 240–251.
  27. Jansen, T.; Zarges, C. Performance analysis of randomised search heuristics operating with a fixed budget. Theor. Comput. Sci. 2014, 545, 39–58.
  28. He, J. An analytic expression of relative approximation error for a class of evolutionary algorithms. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 4366–4373.
  29. Wang, C.; Chen, Y.; He, J.; Xie, C. Error analysis of elitist randomized search heuristics. Swarm Evol. Comput. 2021, 63, 100875.
  30. Jansen, T.; Zarges, C. Reevaluating immune-inspired hypermutations using the fixed budget perspective. IEEE Trans. Evol. Comput. 2014, 18, 674–688.
  31. He, J.; Yao, X. Towards an analytic framework for analysing the computation time of evolutionary algorithms. Artif. Intell. 2003, 145, 59–97.
  32. He, J.; Lin, G. Average convergence rate of evolutionary algorithms. IEEE Trans. Evol. Comput. 2016, 20, 316–321.
  33. Opara, K.R.; Arabas, J. Differential evolution: A survey of theoretical analyses. Swarm Evol. Comput. 2019, 44, 546–558.
  34. Zhou, Y.; Yi, W.; Gao, L.; Li, X. Analysis of mutation vectors selection mechanism in differential evolution. Appl. Intell. 2016, 44, 904–912.
  35. Zaharie, D. Influence of crossover on the behavior of differential evolution algorithms. Appl. Soft Comput. 2009, 9, 1126–1138.
  36. Zaharie, D. Statistical properties of differential evolution and related random search algorithms. In COMPSTAT 2008: Proceedings in Computational Statistics; Brito, P., Ed.; Physica: Heidelberg, Germany, 2008; pp. 473–485.
  37. Zaharie, D.; Micota, F. Revisiting the analysis of population variance in differential evolution algorithms. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), San Sebastian, Spain, 5–8 June 2017; pp. 1811–1818.
  38. Wang, L.; Huang, F.Z. Parameter analysis based on stochastic model for differential evolution algorithm. Appl. Math. Comput. 2010, 217, 3263–3273.
  39. Opara, K.R.; Arabas, J. Comparison of mutation strategies in differential evolution—A probabilistic perspective. Swarm Evol. Comput. 2018, 39, 53–69.
  40. Opara, K.R.; Arabas, J. The contour fitting property of differential mutation. Swarm Evol. Comput. 2019, 50, 100441.
  41. Jansen, T.; Zarges, C. Evolutionary algorithms and artificial immune systems on a bi-stable dynamic optimisation problem. In Proceedings of the 16th Annual Conference on Genetic and Evolutionary Computation, Vancouver, BC, Canada, 12–16 July 2014; pp. 975–982.
  42. Nallaperuma, S.; Neumann, F.; Sudholt, D. Expected fitness gains of randomized search heuristics for the traveling salesperson problem. Evol. Comput. 2017, 25, 673–705.
  43. Doerr, B.; Jansen, T.; Witt, C.; Zarges, C. A method to derive fixed budget results from expected optimisation times. In Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation, Amsterdam, The Netherlands, 6–10 July 2013; pp. 1581–1588.
Figure 1. Simulation results on the difference of EAEs and TPs for the counterexample. (a) Difference of expected approximation errors (EAEs). (b) Difference of tail probabilities (TPs).
Figure 2. Numerical comparison of (1+1)EA and (1+1)EA_C applied to the Deceptive problem, where n refers to the problem dimension. (a) Numerical comparison of expected approximation errors (EAEs). (b) Numerical comparison of tail probabilities (TPs).
Figure 3. Numerical comparison on tail probabilities (TPs) of the adaptive (1+1)EA and (1+1)EA_C applied to the Deceptive problem, where n is the problem dimension. (1+1)EA_C1, (1+1)EA_C2, and (1+1)EA_C3 are three variants of (1+1)EA_C with q_m initialized as 1/n, 3/(2n), and 2/n, respectively.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
