Article

A Reward Population-Based Differential Genetic Harmony Search Algorithm

Yang Zhang, Jiacheng Li and Lei Li
1 Department of Applied Informatics, Faculty of Science and Engineering, Hosei University, Tokyo 184-8584, Japan
2 Department of Electrical, Electronics and Information Engineering, Faculty of Engineering, Kanagawa University, Yokohama 221-8686, Japan
* Author to whom correspondence should be addressed.
Algorithms 2022, 15(1), 23; https://doi.org/10.3390/a15010023
Submission received: 30 November 2021 / Revised: 10 January 2022 / Accepted: 11 January 2022 / Published: 14 January 2022
(This article belongs to the Special Issue Metaheuristics and Machine Learning: Theory and Applications)

Abstract: To overcome the shortcomings of the harmony search algorithm, such as its slow convergence rate and poor global search ability, a reward population-based differential genetic harmony search algorithm is proposed. In this algorithm, the population is divided into four ordinary sub-populations and one reward sub-population, each of which evolves with the differential genetic harmony search strategy. After each evolution, the sub-population with the optimal average fitness is combined with the reward population to produce a new reward population. In the experiments, tests were first conducted to determine suitable values of the harmony memory size (HMS) and the harmony memory consideration rate (HMCR), followed by an analysis of the effect of these values on the performance of the proposed algorithm. Then, six benchmark functions were selected, and the calculation results of the standard harmony search algorithm, the reward population harmony search algorithm, the differential genetic harmony search algorithm, and the reward population-based differential genetic harmony search algorithm were compared. The results suggest that the reward population-based differential genetic harmony search algorithm has the merits of a strong global search ability, high solving accuracy, and satisfactory stability.

1. Introduction

With the development of big data, cloud computing, artificial intelligence, and other technologies, the volume of data in networks has grown rapidly; as a result, it has become increasingly common to face optimization problems on structures resembling traffic networks, such as vehicle route planning, spacecraft design, and wireless sensor layout [1]. Usually, these optimization problems can be expressed in mathematical programming form. For simple optimization problems, mathematical programming and iterative algorithms may be used; for complex, large-scale optimization problems, however, it is quite difficult to find a global or approximately optimal solution within a reasonable time with these traditional methods. Therefore, heuristic algorithms were proposed to solve complex, large-scale optimization problems and have received increasing attention in recent decades [2].
In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, or select a heuristic that may provide a sufficiently good solution to an optimization problem, especially with incomplete or imperfect information or limited computation capacity [3,4]. Common metaheuristic algorithms include the genetic algorithm, the simulated annealing algorithm, and particle swarm optimization [5,6]. Unlike metaheuristics, which are independent of the problem, heuristic algorithms depend on a specific problem. Heuristic algorithms are essentially trial-and-error methods [7]: in seeking the optimal solution, they can change their search path according to individual or global experience. When it is impossible or difficult to find the exact optimal solution, a heuristic algorithm is an efficient way to obtain a feasible solution [8]. Harmony search (HS) is a heuristic global search algorithm introduced in 2001 by Zong Woo Geem, Joong Hoon Kim, and G. V. Loganathan [9]. Inspired by the improvisation of musicians, it simulates the process by which players repeatedly adjust their pitches to achieve subtle harmonies. The HS algorithm is characterized by a simple structure, few parameters, a high convergence rate, and strong robustness, which facilitates its application to optimizing a system's continuous and discrete variables [10]. We hope that the HS algorithm can effectively solve some optimization problems; in recent years, scholars have constantly improved the algorithm and made notable progress [11,12,13,14].
Mahdavi et al. used a new method for generating new solution vectors and made the pitch adjusting rate and distance bandwidth change dynamically, improving the accuracy and convergence rate of the HS algorithm [15]. Khalili et al. changed all key parameters of the HS algorithm into a dynamic mode without predefining any of them; this algorithm, referred to as the global dynamic harmony search (GDHS), shows outstanding performance on both unimodal and multimodal functions [16]. In view of the drawbacks of HS in solving complex problems, such as its low convergence rate and poor solution accuracy, Ouyang et al. proposed an adaptive global modified harmony search (MHS) that fuses local search [17]. Pan et al. introduced the concepts of the maximum and minimum harmony memory consideration rates and the maximum and minimum regulation bandwidths on the basis of the standard HS algorithm, and they achieved automatic regulation of the relevant parameters by adapting the harmony memory consideration rate at each iteration and changing the bandwidth with the number of iterations [18]. To improve the efficiency of the HS algorithm and remedy its tendency to get trapped in local search, Zhang et al. improved the generation of the initial harmony memory and proposed an improved harmony search algorithm, ALHS [19]. Given the low evolution efficiency of single populations and the scarcity of studies on multi-population improvement, Zhao et al. combined dynamic multi-swarm particle swarm optimization (DMS-PSO) with the HS algorithm into the dynamic multi-swarm particle swarm optimization harmony search (DMS-PSO-HS), in which sub-populations are frequently recombined and information is exchanged among particles in the entire population; compared with DMS-PSO and HS, DMS-PSO-HS is improved on multimodal and composition test problems [20]. For the problems the HS algorithm faces in solving high-dimensional multi-objective optimization problems, Zhang et al. proposed an improved differential evolution harmony search algorithm in which mutation and crossover substitute for the original pitch adjustment of HS, improving the global search ability [21]. To effectively solve integrated process planning and scheduling (IPPS) problems, Wu et al. proposed a nested method for single-objective IPPS problems: on the external layer, the HS algorithm determines the manufacturing feature processing sub-path, with static and dynamic scoring methods guiding the external search, while a genetic algorithm on the internal layer determines machine allocation and operation sequences; the validity of the combined method was proved through a test of benchmark instances [22].
Inspired by the above, improvements to the HS algorithm mainly focus on parameter tuning and fusion with other algorithms. In this paper, to address the main problems of the algorithm, namely its poor global search ability and poor solution accuracy, a reward population-based differential genetic harmony search algorithm is proposed. The highlights of the proposed algorithm are as follows. (a) We hope that by using a multiple-population reward mechanism, population diversity can be increased and the algorithm can avoid falling into local optima as much as possible. (b) In the step of generating a new harmony, excellent individuals in the existing populations are exploited, improving the convergence rate; at the same time, on the basis of differential evolution and genetic operations, the differences among individuals are used flexibly to guide the search, improving the search ability. Finally, the validity of the algorithm is verified through comparison in a numerical experiment.

2. Harmony Search Algorithm

The harmony search algorithm is a novel intelligent optimization algorithm. Like the genetic algorithm, which simulates biological evolution, the simulated annealing algorithm, which simulates physical annealing, and the particle swarm optimization algorithm, which simulates flocks of birds, the harmony algorithm simulates the principle of concert performances [23].
Let us assume that a band consists of $n$ players, each playing a specific musical instrument. All the sounds played together then correspond to a group of harmonies, $X = (x_1, x_2, \ldots, x_n)$. As the tone of the initial harmony is not necessarily the best, the band needs to reach the best harmony through continuous cooperation and rehearsal. For convenience of comparison, a mathematical function $f(x)$ can be used to measure how good a harmony sounds in the whole process; $f(x)$ serves as the general director as well as the general judgment criterion. If the requirements are not met, the performance is adjusted continually until a satisfactory harmony is achieved. This is the optimization process of the harmony search algorithm.

2.1. Parameters Involved in Harmony Search Algorithm

In short, for the harmony search algorithm (HS), all solution vectors (decision variable sets) are stored in harmony memory (HM). The main parameters of the HS algorithm are the harmony memory size (HMS), harmony memory consideration rate (HMCR), pitch adjusting rate (PAR), distance bandwidth (BW), and number of improvisations or stopping criterion (NI).
Harmony memory size (HMS): As the music of each musical instrument has a specific range, a solution space may be constructed on the basis of such a performance range. Then, the solution space is used to generate a harmony memory randomly. Therefore, the size of the harmony memory should be designated first.
Harmony memory consideration rate (HMCR): In each iteration, a set of harmonies is extracted from the harmony memory at a specific rate. After fine-tuning this set, a set of new harmonies is obtained, and a judgment is made on whether the new harmonies are better than the worst harmony in the harmony memory; this judgment is a comparison based on the function $f(x)$ mentioned above. Therefore, a harmony memory consideration rate should be specified.
Pitch adjusting rate (PAR): A set of harmonies should be selected from the harmony memory at a specific consideration rate for pitch adjustment.
Tone pitch adjusting bandwidth (BW): As mentioned above, a set of harmonies will be selected from the memory for pitch adjustment at a specific consideration rate. Bandwidth (BW) refers to the adjustment amplitude.
Maximum times of creation ($T_{max}$): This refers to the maximum number of improvisations by the performers, namely the number of times the whole harmony adjustment process is repeated.

2.2. Flow of Harmony Search Algorithm

(1) Initialize the problem and algorithm parameters
In general, the global optimization problem can be summarized as follows: minimize $f(x)$ subject to
$$x_i \in X_i, \quad i = 1, 2, \ldots, N,$$
where $f(x)$ is the objective function, $x$ is the set of the decision variables $x_i$, $N$ is the number of decision variables, and $X_i$ is the set of possible values for decision variable $i$. With $UB(i)$ and $LB(i)$ denoting the upper and lower bounds of each decision variable, $X_i = [LB(i), UB(i)]$.
Section 2.1 introduced the five parameters involved in harmony search. At the very beginning of the algorithm, these values should be set properly.
(2) Initialize harmony memory
A harmony memory with a size of HMS is generated on the basis of the solution space, as follows.
$$HM = \begin{bmatrix} x_1^1 & x_2^1 & \cdots & x_{N-1}^1 & x_N^1 \\ x_1^2 & x_2^2 & \cdots & x_{N-1}^2 & x_N^2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ x_1^{HMS-1} & x_2^{HMS-1} & \cdots & x_{N-1}^{HMS-1} & x_N^{HMS-1} \\ x_1^{HMS} & x_2^{HMS} & \cdots & x_{N-1}^{HMS} & x_N^{HMS} \end{bmatrix}$$
Each decision variable is generated as $x_i^j = LB(i) + (UB(i) - LB(i)) \cdot r$ for $i = 1, 2, \ldots, N$ and $j = 1, 2, \ldots, HMS$, where $r$ is a random number between 0 and 1.
(3) Generation of new harmony
A new harmony vector is generated by three rules: (a) memory consideration, (b) pitch adjustment, and (c) random selection [24]. First, a random number $r_1$ in the range [0, 1] is generated and compared with the initialized HMCR. If $r_1 < HMCR$, a variable is taken randomly from the initialized HM; this is called memory consideration. Otherwise, the variable is obtained by random selection (i.e., generated randomly between the search boundaries). Finally, if the new harmony variable was produced by memory consideration, it may need to be adjusted: a random number $r_2$ in [0, 1] is generated, and if $r_2 < PAR$, the variable is adjusted on the basis of the initialized BW to give a new variable; this is called pitch adjustment. The pitch adjustment rule is given as:
$$x_i^{new} = x_i^{new} \pm r \cdot BW$$
where r is a random number between 0 and 1.
(4) Update of harmony memory
The newly obtained harmony is calculated with f ( x ) . If the new harmony has a better objective function solution than the poorest solution in the above initialized harmony memory (HM), the new harmony will substitute the poorest harmony in the harmony memory.
(5) Judgment on termination
Check whether the current number of improvisations has reached the initialized maximum $T_{max}$. If not, Steps 3–4 are repeated until the maximum number is reached.
The complete steps of the harmony search algorithm are shown in Algorithm 1.
Algorithm 1: HS Algorithm
1: Initialize the algorithm parameters HMS, HMCR, PAR, BW, T_max
2: Initialize the harmony memory
3: Repeat
4:    Improvise a new harmony as follows:
5:    for all i, do
6:       x_i^new <- memory consideration with probability HMCR; random selection with probability 1 - HMCR
7:       if x_i^new came from the harmony memory, then
8:          x_i^new <- x_i^new ± r * BW with probability PAR; x_i^new unchanged with probability 1 - PAR
9:       end if
10:   end for
11:   if the new harmony vector is better than the worst one in the original HM, then
12:      update the harmony memory
13:   end if
14: Until T_max is fulfilled
15: Return best harmony
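To make the flow above concrete, the following is a minimal Python sketch of Algorithm 1. The paper's experiments were run in MATLAB; this re-implementation, including the names `objective`, `lb`, and `ub` and the default parameter values, is our own illustration, not the authors' code.

```python
# Minimal HS sketch following Algorithm 1; minimization is assumed.
import numpy as np

def harmony_search(objective, lb, ub, hms=30, hmcr=0.9, par=0.3, bw=0.05, t_max=1000):
    rng = np.random.default_rng()
    n = len(lb)
    # Step 2: fill the harmony memory uniformly at random inside [lb, ub].
    hm = lb + (ub - lb) * rng.random((hms, n))
    fit = np.array([objective(x) for x in hm])
    for _ in range(t_max):
        new = np.empty(n)
        for i in range(n):
            if rng.random() < hmcr:                    # memory consideration
                new[i] = hm[rng.integers(hms), i]
                if rng.random() < par:                 # pitch adjustment: x ± r*BW
                    new[i] += (2 * rng.random() - 1) * bw
            else:                                      # random selection
                new[i] = lb[i] + (ub[i] - lb[i]) * rng.random()
        new = np.clip(new, lb, ub)
        f_new = objective(new)
        worst = np.argmax(fit)
        if f_new < fit[worst]:                         # replace the worst harmony
            hm[worst], fit[worst] = new, f_new
    best = np.argmin(fit)
    return hm[best], fit[best]

# Usage: minimize the Sphere function in 10 dimensions.
x_best, f_best = harmony_search(lambda x: np.sum(x**2),
                                lb=np.full(10, -100.0), ub=np.full(10, 100.0))
```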

3. Improved Harmony Search Algorithm

Although the harmony search algorithm is simple to implement, there are still drawbacks to be overcome, such as a poor global search ability, inadequate accuracy on high-dimensional optimization problems, and difficulties in pitch adjustment [25]. In this paper, a reward population-based differential genetic harmony search algorithm (RDEGA-HS) is proposed. Furthermore, the reward population harmony search algorithm (RHS) and the differential genetic harmony search algorithm (DEGA-HS) are designed to facilitate comparison of the results in the experiment.

3.1. Reward Population Harmony Search Algorithm (RHS)

In this paper, we propose a multi-population model based on reward population. First, the initialized harmony memory is divided into multiple sub-memories, including several common sub-memories and one reward memory. The size of each memory is subject to the parameter λ . Namely,
$$Popsize_i = Popsize \cdot \lambda_i, \quad i = 1, 2, \ldots, N,$$
$$Pop = \bigcup_i Pop_i,$$
$$\sum_i \lambda_i = 1,$$
where $Pop$ and $Pop_i$ refer to the total harmony memory and the sub-harmony memories, respectively, $Popsize$ and $Popsize_i$ refer to their numbers of harmonies, $\lambda_i$ refers to the proportion of each sub-harmony memory, and $i$ indexes the sub-harmony memories.
Each harmony memory evolves separately. Furthermore, after each evolution, the sub-harmony memory with the optimal average fitness is combined with the reward harmony memory to form a new reward harmony memory. At the same time, poor harmony factors are eliminated to guarantee stability in the number of newly generated harmony memories. In this paper, four sub-harmony memories and one reward memory are set.
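A sketch of this bookkeeping is given below, assuming a NumPy matrix holds the total harmony memory; the helper names and the use of mean fitness follow the description above, not any published code.

```python
# Split the total memory by the proportions λ_i, then rebuild the reward
# memory from the best common sub-memory after an evolution round.
import numpy as np

def split_memory(hm, lambdas):
    """Split the harmony memory into sub-memories of sizes Popsize * λ_i."""
    sizes = (np.asarray(lambdas) * len(hm)).astype(int)
    return np.split(hm, np.cumsum(sizes)[:-1])

def new_reward_memory(best_sub, reward, objective):
    """Merge the best common sub-memory with the reward memory, sort by
    fitness, and drop the worst rows so the size stays unchanged."""
    merged = np.vstack([best_sub, reward])
    order = np.argsort([objective(x) for x in merged])   # minimization
    return merged[order[:len(reward)]]
```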

3.2. Differential Genetic Harmony Search Algorithm (DEGA-HS)

3.2.1. Mutation of Differential Evolution

The differential evolution algorithm (DE) is a population-based heuristic search algorithm [26]. The flows of evolution are quite similar to those of the genetic algorithm. The operators of both are selection, crossover, and mutation. For the genetic algorithm, crossover goes before mutation. For the DE algorithm, however, mutation goes before crossover. Furthermore, with regard to the specific definition of these operations, there is a difference between the two algorithms.
Let us assume that a population consists of NP $N$-dimensional vector individuals. Then, the population can be expressed as $Pop = (x_1, x_2, \ldots, x_{NP})$. Generally, after the initialization of a population, the differential mutation operation generates mutation vectors. The mutation strategy of the DE algorithm is advantageous in overcoming premature convergence. Therefore, we considered introducing the mutation operation of the DE algorithm into the harmony search algorithm to overcome its premature convergence, thus improving the convergence rate.
There are many forms of mutation operation; common ones include DE/rand/1/bin, DE/best/1/bin, DE/rand-to-best/1/bin, and DE/best/2/bin. The naming convention DE/base/num/cross encodes the choice of basic vector, the number of differential vectors, and the crossover method. If "best" is selected for the base part, the optimal individual of the population serves as the basic vector; if "rand" is selected, the basic vector is chosen randomly. If "bin" is selected for the cross part, binomial crossover is adopted. The results show that selecting "rand" for the base part favors maintaining population diversity, while selecting "best" stresses the convergence rate [27].
For each individual x i in a population, a mutation vector of the differential evolution algorithm is generated as per the following formula.
$$v_i = x_{r_1} + F \cdot (x_{r_2} - x_{r_3})$$
Here, $r_1$, $r_2$, and $r_3$ are selected randomly and differ from each other. $x_{r_1}$ is referred to as the basic vector, $(x_{r_2} - x_{r_3})$ is referred to as the differential vector, and $F \in (0, 1)$ is a scaling factor used to control the size of the differential vector. It is worth noting that an excessively large value of $F$ makes it difficult for the population to converge, while an excessively small value reduces the convergence speed.
In this paper, two mutation strategies of DE are fused into the proposed algorithm. "DE/rand/1/bin" selects individuals randomly during iteration without using the current optimal solution; its convergence rate is therefore relatively low, but its global search ability is strong. "DE/best/1/bin" utilizes the current optimal solution during iteration and is thus better able to improve the convergence rate. On this basis, "DE/best/1/bin" is used in the memory consideration of the HS algorithm, while "DE/rand/1/bin" substitutes for pitch adjustment. The two mutation generation strategies are as follows.
DE/best/1/bin: $v_i = x_{best} + F \cdot (x_{r_1} - x_{r_2})$
DE/rand/1/bin: $v_i = x_{rand} + F \cdot (x_{r_1} - x_{r_2})$
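In Python, the two strategies can be sketched as follows; this is illustrative code in which `pop` is a NumPy population matrix and `fitness` holds its objective values, both names being our own.

```python
# The two DE mutation strategies fused into the proposed algorithm.
import numpy as np

rng = np.random.default_rng()

def de_best_1(pop, fitness, F=0.5):
    """DE/best/1: v = x_best + F * (x_r1 - x_r2), exploiting the current best."""
    r1, r2 = rng.choice(len(pop), size=2, replace=False)
    return pop[np.argmin(fitness)] + F * (pop[r1] - pop[r2])

def de_rand_1(pop, F=0.5):
    """DE/rand/1: v = x_rand + F * (x_r1 - x_r2), favoring global exploration."""
    r0, r1, r2 = rng.choice(len(pop), size=3, replace=False)
    return pop[r0] + F * (pop[r1] - pop[r2])
```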

3.2.2. Partheno-Genetic Algorithm

To increase the diversity of harmony factors, the Partheno-genetic algorithm (PGA) is introduced as an additional means of generating new harmonies. A highly random individual is subjected to a shift operation, an inversion operation, and a permutation operation, producing three new harmonies; the harmony with the best fitness is then selected as the new individual introduced by the genetic operation.
The PGA is an improved GA that was put forward by Li and Tong in 1999 [28]. PGA is a genetic method for reproduction through gene shift, gene permutation, and gene inversion. Next, we will introduce genetic operators, namely, the permutation operator, shift operator, and inversion operator, which are characterized by a simple genetic operation, decreasing the requirement for diversity in the initial population and avoiding “premature convergence” [29].
(1) Permutation Operator.
The permutation operator is a process where genes at specific locations of a chromosome are exchanged at a certain probability, with the locations of the exchanged genes chosen randomly. Permutation operators may be divided into single-point and multi-point permutation. In single-point permutation, two genes on a chromosome are exchanged; in multi-point permutation, multiple genes on a chromosome are exchanged. The process of the two permutation operators is shown in Figure 1, where B and B′ are the single-point and multi-point permutation results for chromosome A, respectively.
(2) Shift Operator.
The shift operator is a process in which the genes in some substrings of a chromosome are shifted backward successively, while the last gene in each substring is shifted to the foremost location. The substrings whose genes are shifted and their lengths are selected randomly. The gene shift can be divided into single-point and multi-point shift. In single-point shift, only one substring of a chromosome is selected for gene shift; in multi-point shift, multiple substrings are selected. The process of the shift operation is shown in Figure 2. H is a chromosome comprising multiple genes $k_i$ ($i \in \{1, 2, \ldots, N\}$); I and I′ are the chromosomes obtained by applying single-point and multi-point shift operations to H, respectively.
(3) Inversion Operator.
The inversion operation is a process where the genes in some substrings of a chromosome are inverted successively at a specific probability. The substrings for gene inversion and their lengths are selected randomly. Gene inversion can also be divided into single-point and multi-point inversion. In single-point inversion, only one substring of a chromosome is selected for gene inversion; in multi-point inversion, multiple substrings are selected. The operation is shown in Figure 3. We assume O is a chromosome comprising multiple genes $l_i$ ($i \in \{1, 2, \ldots, N\}$); applying single-point and multi-point inversions to chromosome O yields the chromosomes R and R′.
The multi-point genetic operators are usually used when the chromosome is long, while the single-point operators are used when it is short. Therefore, single-point permutation, single-point shift, and single-point inversion are used here for the genetic operations and applied to the random-selection branch of the general harmony search algorithm.
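The three single-point operators can be sketched in Python as below; the substring endpoints are drawn at random, as described above, and the function names are our own.

```python
# Single-point versions of the three Partheno-genetic operators of Section 3.2.2.
import numpy as np

rng = np.random.default_rng()

def permutation(chrom):
    """Exchange two randomly chosen genes."""
    c = chrom.copy()
    i, j = rng.choice(len(c), size=2, replace=False)
    c[i], c[j] = c[j], c[i]
    return c

def shift(chrom):
    """Shift a random substring backward by one; its last gene moves to the front."""
    c = chrom.copy()
    i, j = sorted(rng.choice(len(c), size=2, replace=False))
    c[i:j + 1] = np.roll(c[i:j + 1], 1)
    return c

def inversion(chrom):
    """Reverse the genes of a random substring."""
    c = chrom.copy()
    i, j = sorted(rng.choice(len(c), size=2, replace=False))
    c[i:j + 1] = c[i:j + 1][::-1]
    return c
```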

3.2.3. Flow of DEGA-HS Algorithm

As stated above, the mutation strategy of the differential mutation algorithm has the advantage of overcoming premature convergence and degeneration. Therefore, we considered introducing the mutation strategy of DE into the HS algorithm to overcome the premature convergence of HS, thus improving the convergence rate. At the same time, there is excessively high randomness in the generation of a new harmony in the general harmony search algorithm. High randomness brings both advantages and disadvantages. To balance this contradiction, the Partheno-genetic algorithm is also introduced.
The flow of the improved DEGA algorithm is as follows.
Step 1. Initialization of parameters.
In the new algorithm, the original tone pitch adjusting bandwidth (BW) will be canceled. Therefore, only the harmony memory size (HMS), harmony memory consideration rate (HMCR), pitch adjusting rate (PAR), and maximum times of creation ( T m a x ) will be initialized.
Step 2. Random initialization of harmony memory.
Step 3. Generation of a new harmony.
For the general harmony search algorithm, there are mainly three operations to generate a new harmony at this step: memory consideration, pitch adjustment, and random selection. However, the operations of the proposed algorithm differ from them. The flow is as follows.
First, a random number $rand(1)$ is generated in (0, 1). If $rand(1)$ is smaller than or equal to HMCR, the mutation strategy of "DE/best/1/bin" is used to generate a new harmony, namely $x_j^{new} = x_j^{best} + F \cdot (x_j^k - x_j^l)$. Then, a second random number $rand(2)$ is generated in (0, 1). If $rand(2)$ is smaller than or equal to PAR, the mutation strategy of "DE/rand/1/bin" is applied instead, namely $x_j^{new} = x_j^{rand} + F \cdot (x_j^k - x_j^l)$; otherwise, no adjustment is made.
If $rand(1)$ is greater than HMCR, a new harmony is generated randomly within the search boundary. Next, a third random number $rand(3)$ is generated in (0, 1). If $rand(3)$ is smaller than or equal to PAR, a new harmony is generated by the Partheno-genetic operation; otherwise, no adjustment is made. (This whole improvisation step is sketched in code after Step 5 below.)
Step 4. Update of harmony memory.
The newly generated harmony is assessed. If it is better than the poorest harmony in the harmony memory, substitution is made.
Step 5. Judgment on termination.
Check whether the current number of improvisations has reached the set maximum. If not, Steps 3–4 are repeated until the maximum number is reached.
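The improvisation logic of Step 3 can be sketched as follows, reusing the illustrative helpers `de_best_1`, `de_rand_1`, `permutation`, `shift`, and `inversion` defined earlier; this is a reading of the text above, not the authors' code.

```python
# Step 3 of DEGA-HS: DE mutation inside memory consideration, PGA operators
# inside random selection.
import numpy as np

def improvise(pop, fitness, objective, lb, ub, hmcr=0.9, par=0.6, F=0.5):
    rng = np.random.default_rng()
    if rng.random() <= hmcr:
        new = de_best_1(pop, fitness, F)            # x_new = x_best + F(x_k - x_l)
        if rng.random() <= par:
            new = de_rand_1(pop, F)                 # x_new = x_rand + F(x_k - x_l)
    else:
        new = lb + (ub - lb) * rng.random(len(lb))  # random within the bounds
        if rng.random() <= par:
            # apply the three PGA operators and keep the fittest result
            new = min((permutation(new), shift(new), inversion(new)), key=objective)
    return np.clip(new, lb, ub)
```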

3.3. Reward Population-Based Differential Genetic Harmony Search Algorithm (RDEGA-HS)

We hope the combination of the multi-population reward mechanism with a differential genetic harmony algorithm will increase the diversity of harmony and improve the convergence rate.
The specific flow of the algorithm is as follows.
Step 1. Initialization of algorithm parameters.
Step 2. Random initialization of harmony memory.
Step 3. The total harmony memory is divided into Q + 1 (Q is an integer greater than 0) sub-harmony memories, including Q common sub-harmony memories and one reward sub-memory.
Step 4. As to all sub-harmony memories including the reward sub-memory, new harmonies are generated on the basis of differential genetic operations.
Step 5. Update of each sub-harmony memory.
Step 6. Judge whether the maximum number T m a x 1 of iterations has been reached. If no, Steps 4 and 5 are repeated. Otherwise, move on to Step 7.
Step 7. Find the common sub-harmony memory with the best evolution effect at present, and combine it with the original reward harmony memory. Then, sort them from good to bad according to the fitness. On the premise of keeping the number of harmonies in the newly generated harmony memory unchanged, eliminate bad harmonies, and form a new reward harmony memory to replace the original reward harmony memory.
Step 8. Judge whether the upper limit T m a x 2 of iteration has been reached. If yes, then exit the loop and output the optimal solution. Otherwise, return to Step 4.
The pseudo-code of the algorithm is shown in Algorithm 2.
Algorithm 2: RDEGA-HS Algorithm
1: Initialize the algorithm parameters
2: Initialize the total harmony memory and divide it into the sub-harmony memories Pop_i
3: While iter_2 < T_max2, do
4:    While iter_1 < T_max1, do
5:       For each sub-harmony memory, improvise a new harmony as follows:
6:       For all j, do
7:          x_j^new <- x_j^best + F(x_j^k - x_j^l) with probability HMCR; random selection with probability 1 - HMCR
8:          If memory consideration was used, then
9:             x_j^new <- x_j^rand + F(x_j^k - x_j^l) with probability PAR; x_j^new unchanged with probability 1 - PAR
10:         Else
11:            x_j^new <- Partheno-genetic operation with probability PAR; x_j^new unchanged with probability 1 - PAR
12:         End If
13:      End For
14:      If the new harmony vector is better than the worst one in its sub-memory, then
15:         update the sub-harmony memory
16:      End If
17:   End While
18:   Update the reward harmony memory
19: End While
20: Return best harmony
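Putting the pieces together, a compact end-to-end sketch of Algorithm 2 might look as follows. It is built from the illustrative helpers above; the parameter defaults mirror Section 4, while everything else is our own scaffolding rather than the authors' MATLAB implementation.

```python
# RDEGA-HS driver: four common sub-memories plus one reward sub-memory (last).
import numpy as np

def rdega_hs(objective, lb, ub, hms=80, lambdas=(0.2, 0.2, 0.2, 0.2, 0.2),
             t_max1=10, t_max2=100):
    rng = np.random.default_rng()
    hm = lb + (ub - lb) * rng.random((hms, len(lb)))
    subs = split_memory(hm, lambdas)          # common sub-memories + reward
    for _ in range(t_max2):
        for _ in range(t_max1):
            for sub in subs:                  # evolve every sub-memory
                fit = np.array([objective(x) for x in sub])
                new = improvise(sub, fit, objective, lb, ub)
                worst = np.argmax(fit)
                if objective(new) < fit[worst]:
                    sub[worst] = new
        # merge the best common sub-memory into the reward memory
        means = [np.mean([objective(x) for x in s]) for s in subs[:-1]]
        subs[-1] = new_reward_memory(subs[int(np.argmin(means))], subs[-1], objective)
    best = min(np.vstack(subs), key=objective)
    return best, objective(best)
```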

4. Experiments

To carry out an in-depth study of the performance of RDEGA-HS, two experiments were performed: one to determine the influence of the values of HMS and HMCR on the performance of the algorithm, and one to compare the standard HS algorithm with the three proposed variants. The function names, formulations, and search ranges are listed in Table 1. In this paper, we used six benchmark functions to verify the ability of the RDEGA-HS algorithm to find the optimal solution, so as to judge the strengths and weaknesses of the improved algorithm. A comparison was then made among the results of HS, RHS, DEGA-HS, and RDEGA-HS.
The Ackley function is a widely used multimodal test function. The Griewank function has many widely distributed local minima. The Rastrigin function is a high-dimensional multimodal function in which frequent local minima are produced by cosine modulation, with the locations of the minima generally scattered. In addition, the Rosenbrock, Sphere, and Schwefel benchmark functions are also used in the experiments, as defined below.
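For reference, the six benchmarks can be written in Python as below; these are the standard textbook definitions, reconstructed to match the formulas in Table 1.

```python
# Standard definitions of the six benchmark functions in Table 1 (x: 1-D array).
import numpy as np

def ackley(x):
    n = len(x)
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20.0 + np.e)

def griewank(x):
    i = np.arange(1, len(x) + 1)
    return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

def rastrigin(x):
    return np.sum(x**2 - 10.0 * np.cos(2 * np.pi * x) + 10.0)

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

def sphere(x):
    return np.sum(x**2)

def schwefel_2_22(x):
    # Schwefel 2.22 variant, matching the search range [-10, 10] in Table 1.
    return np.sum(np.abs(x)) + np.prod(np.abs(x))
```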

4.1. Effects of HMS and HMCR on Performance of RDEGA-HS

In this section, the six benchmark functions in Table 1 are used for studying the influence of HMS and HMCR on the performance of the algorithm. The dimensions of the six functions were set to 40. Each case was independently run 50 times. Table 2 and Table 3 report the average value and standard deviation obtained with different values of HMS and HMCR, respectively. In this experiment, we set the termination based on the maximum number of iterations. T m a x 1 is set to 10 and T m a x 2 is set to 100, so the maximum number of iterations is 1000.
It can be seen from Table 2 that HMS influenced the result of the algorithm. When HMS was 80, the algorithm achieved the best optimization accuracy and stability. Therefore, HMS was set to 80 in all test experiments herein.
It can be seen from Table 3 that the value of HMCR obviously influenced the optimization result of the proposed algorithm. A larger HMCR improves the local search ability and thus the performance of the algorithm, while a smaller HMCR degrades it. When HMCR was 0.9, the proposed algorithm demonstrated the highest convergence rate. Therefore, in this paper, the HMCR value was chosen to be 0.9.

4.2. Comparative Study on RDEGA-HS and HS Variants

To study the scalability of the algorithm, six different variable dimensions were used in each function test: 30, 40, 50, 60, 70, and 80. In the simulation experiment, the common parameters were set as follows: HMS = 80, HMCR = 0.9, PAR = 0.6, and BW = 0.05. As in the previous experiment, the maximum number of iterations was used as the termination criterion. Since the proposed algorithm has two nested loops, for fairness when comparing with the other algorithms we set $T_{max1}$ to 10 and $T_{max2}$ to 100, so the maximum number of iterations is 1000; correspondingly, the maximum number of iterations of the other three algorithms was also set to 1000. All optimization algorithms were implemented in MATLAB R2021a.
To measure the convergence accuracy, robustness, convergence rate, and other aspects of the different optimization algorithms, each function was run independently 100 times. The average optimal value and standard deviation were then taken as the final evaluation indexes, and the curves of the average optimal fitness are given.
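This evaluation protocol can be summarized in a few lines, assuming each optimizer returns a `(best_x, best_f)` pair as in the sketches above.

```python
# Run an optimizer repeatedly and report the mean and standard deviation of
# the best fitness, the two indexes used in Tables 4-9.
import numpy as np

def evaluate(optimizer, objective, lb, ub, runs=100):
    bests = [optimizer(objective, lb, ub)[1] for _ in range(runs)]
    return float(np.mean(bests)), float(np.std(bests))

# Example: RDEGA-HS on the 40-dimensional Ackley function.
lb, ub = np.full(40, -32.0), np.full(40, 32.0)
mean_best, std_best = evaluate(rdega_hs, ackley, lb, ub)
```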
(1) Ackley Function
For the case where n = 40, the change in the average optimal fitness of the Ackley function with the number of iterations is shown in Figure 4.
(2) Griewank Function
For the case where n = 40, the change in the average optimal fitness value of the Griewank function with the number of iterations is shown in Figure 5.
(3) Rastrigin Function
For the case where n = 40, the change in the average optimal fitness value of the Rastrigin function with the number of iterations is shown in Figure 6.
(4) Rosenbrock Function
For the case where n = 40, the change in the average optimal fitness value of the Rosenbrock function with the number of iterations is shown in Figure 7.
(5) Sphere Function
For the case where n = 40, the change in the average optimal fitness value of the Sphere function with the number of iterations is shown in Figure 8.
(6) Schwefel Function
For the case where n = 40, the change in the average optimal fitness value of the Schwefel function with the number of iterations is shown in Figure 9.
It can be seen from the comparison of the optimization results of the test functions in Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9 and the changes in average fitness in Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9 that the proposed algorithm has a strong optimization ability. Upon a general review of each algorithm’s optimization process for the above six functions, the dimensions of the functions had a significant influence on the optimal value. As the dimensions increased, the optimal value of each algorithm also changed. For example, for the Ackley function, the variable dimensions had a significant influence on each algorithm. In terms of a low-dimensional function, RDEGA-HS and the other algorithms were all able to achieve the global optimum; however, in terms of a high-dimensional function, the RDEGA-HS algorithm still achieved a satisfactory result.
As to a comparison of the algorithms' stability, the experiments were repeated 100 times. In addition to the optimization ability, which receives the most attention, an algorithm's stability is also an index worth considering when evaluating performance. After 100 repetitions, the variance of the experimental results was computed to check the fluctuation in each algorithm's optimal value. For the six functions, the RDEGA-HS algorithm maintained satisfactory stability, while the other algorithms were quite unstable.
Since five sub-harmony memories are designed and computed in RDEGA-HS, its running time is longer than that of an algorithm with a single harmony memory. However, as the diversity of the harmony memories is increased, local optimal solutions are avoided, thus improving the solving accuracy.

5. Conclusions

In this paper, on the basis of the standard harmony search (HS) algorithm, a reward population mechanism, differential mutation, and a Partheno-genetic operation were used to improve the basic harmony search algorithm, whereby the diversity and accuracy of the harmony vectors were improved. With the introduction of the reward population mechanism, the harmony memory was divided into four common sub-harmony memories and one reward harmony memory, each of which adopts the evolution strategy of the differential genetic harmony search. After each iteration, the sub-harmony memory with the best average fitness was combined with the reward harmony memory. The mutation strategy of the differential evolution (DE) algorithm is able to overcome premature convergence; by introducing it into the HS algorithm, premature convergence was overcome and the convergence rate was improved. Meanwhile, the Partheno-genetic algorithm was introduced into the generation of new harmony vectors. Finally, the six frequently used test functions of Ackley, Griewank, Rastrigin, Rosenbrock, Sphere, and Schwefel were used to verify the validity and convergence of the algorithms. Then, a comparison was made on the calculation results of the standard harmony search algorithm, the reward population-based harmony search algorithm, the differential genetic harmony search algorithm, and the reward population-based differential genetic harmony search algorithm. The results suggest that the reward population-based differential genetic harmony search algorithm has advantages such as a strong global search ability, high solving accuracy, and satisfactory stability.

Author Contributions

Conceptualization, Y.Z.; methodology, Y.Z.; software, Y.Z. and J.L.; validation, Y.Z. and J.L.; investigation, Y.Z.; resources, L.L.; data curation, Y.Z.; writing—original draft preparation, Y.Z.; writing—review and editing, J.L.; supervision, L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Snyman, J.A. Practical Mathematical Optimization; Springer: Boston, MA, USA, 2005; pp. 113–167. [Google Scholar]
  2. Blum, C.; Puchinger, J.; Raidl, G.R.; Roli, A. Hybrid metaheuristics in combinatorial optimization: A survey. Appl. Soft Comput. 2011, 11, 4135–4151. [Google Scholar] [CrossRef] [Green Version]
  3. Balamurugan, R.; Natarajan, A.M.; Premalatha, K. Stellar-mass black hole optimization for biclustering microarray gene expression data. Appl. Artif. Intell. 2015, 29, 353–381. [Google Scholar] [CrossRef]
  4. Bianchi, L.; Dorigo, M.; Gambardella, L.M.; Gutjahr, W.J. A survey on metaheuristics for stochastic combinatorial optimization. Nat. Comput. 2009, 8, 239–287. [Google Scholar] [CrossRef] [Green Version]
  5. Yang, X.S. Nature-Inspired Metaheuristic Algorithms; Luniver Press: Frome, UK, 2010; pp. 4–5. [Google Scholar]
  6. Beheshti, Z.; Shamsuddin, S.M. A review of population-based meta-heuristic algorithms. Int. J. Adv. Soft Comput. Appl. 2013, 5, 1–35. [Google Scholar]
  7. Yang, X.S.; Chien, S.F.; Ting, T.O. Bio-inspired computation and optimization: An overview. In Bio-Inspired Computation in Telecommunications; Yang, X.S., Chien, S.F., Ting, T.O., Eds.; Elsevier Inc.: New York, NY, USA, 2015; pp. 1–21. [Google Scholar]
  8. Halim, A.H.; Ismail, I. Combinatorial optimization: Comparison of heuristic algorithms in travelling salesman problem. Arch. Computat. Methods Eng. 2019, 26, 367–380. [Google Scholar] [CrossRef]
  9. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  10. Zhao, F.Q.; Qin, S.; Yang, G.Q.; Ma, W.M.; Zhang, C.; Song, H.B. A differential-based harmony search algorithm with variable neighborhood search for job shop scheduling problem and its runtime analysis. IEEE Access 2018, 6, 76313–76330. [Google Scholar] [CrossRef]
  11. Geem, Z.W. Optimal cost design of water distribution networks using harmony search. Eng. Optimiz. 2006, 38, 259–277. [Google Scholar] [CrossRef]
  12. Geem, Z.W. Harmony search applications in industry. In Soft Computing Applications in Industry. Studies in Fuzziness and Soft Computing; Prasad, B., Ed.; Springer: Berlin/Heidelberg, Germany, 2008; Volume 226, pp. 117–134. [Google Scholar]
  13. Hasanipanah, M.; Keshtegar, B.; Thai, D.K.; Troung, N.T. An ANN-adaptive dynamical harmony search algorithm to approximate the flyrock resulting from blasting. Eng. Comput. 2020, 1–13. [Google Scholar] [CrossRef]
  14. Al-Omoush, A.A.; Alsewari, A.A.; Alamri, H.S.; Zamli, K.Z. Comprehensive review of the development of the harmony search algorithm and its applications. IEEE Access 2019, 7, 14233–14245. [Google Scholar] [CrossRef]
  15. Mahdavi, M.; Fesanghary, M.; Damangir, E. An improved harmony search algorithm for solving optimization problems. Appl. Math. Comput. 2007, 188, 1567–1579. [Google Scholar] [CrossRef]
  16. Khalili, M.; Kharrat, R.; Salahshoor, K.; Sefat, M.H. Global dynamic harmony search algorithm: GDHS. Appl. Math. Comput. 2014, 228, 195–219. [Google Scholar] [CrossRef]
  17. Ouyang, H.B.; Gao, L.Q.; Kong, X.Y.; Guo, L. Application of MHS algorithm to structural design problems. J. Northeast. Univ. 2013, 34, 1687–1690. [Google Scholar] [CrossRef]
  18. Pan, Q.K.; Suganthan, P.N.; Tasgetiren, M.F.; Liang, J.J. A self-adaptive global best harmony search algorithm for continuous optimization problems. Appl. Math. Comput. 2010, 216, 830–848. [Google Scholar] [CrossRef]
  19. Zhang, K.L.; Chen, S.Y.; Shao, Z.Z. The improvement of harmony search algorithm. Artif. Intell. Rob. Res. 2015, 4, 32–39. [Google Scholar] [CrossRef]
  20. Zhao, S.Z.; Liang, J.J.; Suganthan, P.N.; Tasgetiren, M.F. Dynamic multi-swarm particle swarm optimizer with harmony search. Expert Syst. Appl. 2011, 38, 3735–3742. [Google Scholar] [CrossRef]
  21. Zhang, T.; Xu, X.Q.; Li, Z.H.; Abu-Siada, A.; Guo, Y.T. Optimum location and parameter setting of STATCOM based on improved differential evolution harmony search algorithm. IEEE Access 2020, 8, 87810–87819. [Google Scholar] [CrossRef]
  22. Wu, X.L.; Li, J. Two layered approaches integrating harmony search with genetic algorithm for the integrated process planning and scheduling problem. Comput. Ind. Eng. 2021, 155, 107194. [Google Scholar] [CrossRef]
  23. Moh’d Alia, O.; Mandava, R. The variants of the harmony search algorithm: An overview. Artif. Intell. Rev. 2011, 36, 49–68. [Google Scholar] [CrossRef]
  24. Zhang, Y.; Li, J.C.; Li, L. An improved clustering-based harmony search algorithm (IC-HS). In Proceedings of the SAI Intelligent Systems Conference, Amsterdam, The Netherlands, 2–3 September 2021; Arai, K., Ed.; Springer: Cham, Switzerland, 2021; Volume 295, pp. 115–124. [Google Scholar]
  25. Manjarres, D.; Landa-Torres, I.; Gil-Lopez, S.; Del Ser, J.; Bilbao, M.N.; Salcedo-Sanz, S.; Geem, Z.W. A survey on applications of the harmony search algorithm. Eng. Appl. Artif. Intel. 2013, 26, 1818–1831. [Google Scholar] [CrossRef]
  26. Deng, W.; Shang, S.F.; Cai, X.; Zhao, H.M.; Song, Y.J.; Xu, J.J. An improved differential evolution algorithm and its application in optimization problem. Soft Comput. 2021, 25, 5277–5298. [Google Scholar] [CrossRef]
  27. Jain, S.; Sharma, V.K.; Kumar, S. Robot path planning using differential evolution. In Proceedings of the Advances in Computing and Intelligent Systems, Beijing, China, 14–16 December 2019; Sharma, H., Govindan, K., Poonia, R., Kumar, S., El-Medany, W., Eds.; Springer: Singapore, 2020; pp. 531–537. [Google Scholar]
  28. Li, M.J.; Tong, T.S. A partheno-genetic algorithm and analysis on its global convergence. Acta Autom. Sin. 1999, 25, 68–72. [Google Scholar] [CrossRef]
  29. Yang, Z.; Li, J.C.; Li, L. Time-dependent theme park routing problem by Partheno-genetic algorithm. Mathematics 2020, 8, 2193. [Google Scholar] [CrossRef]
Figure 1. Permutation operator operation.
Figure 2. Shift operator operation.
Figure 3. Inversion operator operation.
Figure 4. Change in average optimal fitness value of Ackley function with number of iterations.
Figure 5. Change in average optimal fitness value of Griewank function with number of iterations.
Figure 6. Change in average optimal fitness value of Rastrigin function with number of iterations.
Figure 7. Change in average optimal fitness value of Rosenbrock function with number of iterations.
Figure 8. Change in average optimal fitness value of Sphere function with number of iterations.
Figure 9. Change in average optimal fitness value of the Schwefel function with the number of iterations.
Table 1. Standard test functions and parameter settings.

| Function Name | Formulation | Search Range |
|---|---|---|
| Ackley | $f(x) = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right) + 20 + e$ | $x_i \in [-32, 32]$ |
| Griewank | $f(x) = 1 + \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right)$ | $x_i \in [-600, 600]$ |
| Rastrigin | $f(x) = \sum_{i=1}^{n}\left[x_i^2 - 10\cos(2\pi x_i) + 10\right]$ | $x_i \in [-5.12, 5.12]$ |
| Rosenbrock | $f(x) = \sum_{i=1}^{n-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + \left(1 - x_i\right)^2\right]$ | $x_i \in [-30, 30]$ |
| Sphere | $f(x) = \sum_{i=1}^{n} x_i^2$ | $x_i \in [-100, 100]$ |
| Schwefel | $f(x) = \sum_{i=1}^{n} \lvert x_i \rvert + \prod_{i=1}^{n} \lvert x_i \rvert$ | $x_i \in [-10, 10]$ |
Table 2. Effects of HMS on performance of RDEGA-HS (HMCR = 0.9); for each function, the first row gives the average value over 50 runs and the second row the standard deviation.

| Function | HMS = 100 | HMS = 80 | HMS = 50 | HMS = 20 | HMS = 10 | HMS = 5 |
|---|---|---|---|---|---|---|
| Ackley | 9.26 × 10^-3 | 5.44 × 10^-3 | 1.24 × 10^-2 | 5.24 × 10^-1 | 1.02 × 10^1 | 8.84 |
| | 1.20 × 10^-2 | 5.57 × 10^-3 | 8.45 × 10^-2 | 1.12 × 10^-1 | 9.54 | 2.84 × 10^1 |
| Griewank | 5.16 × 10^-10 | 1.35 × 10^-11 | 2.46 × 10^-8 | 9.54 × 10^-6 | 8.89 × 10^-5 | 5.12 × 10^-2 |
| | 9.25 | 1.68 × 10^-1 | 5.12 | 1.26 × 10^1 | 5.84 × 10^1 | 6.25 × 10^1 |
| Rastrigin | 1.91 × 10^-7 | 1.91 × 10^-8 | 1.91 × 10^-6 | 1.91 × 10^-4 | 1.91 × 10^-2 | 1.91 |
| | 3.21 × 10^-3 | 5.57 × 10^-3 | 1.05 × 10^-3 | 5.41 × 10^-2 | 4.89 × 10^-1 | 6.27 |
| Rosenbrock | 9.99 × 10^-2 | 8.68 × 10^-2 | 1.38 × 10^-1 | 7.54 × 10^-1 | 3.63 | 8.74 |
| | 7.45 × 10^-1 | 1.12 × 10^-1 | 4.21 | 2.84 × 10^1 | 9.79 × 10^1 | 1.84 × 10^2 |
| Sphere | 3.25 × 10^-37 | 1.03 × 10^-40 | 5.84 × 10^-35 | 6.84 × 10^-34 | 7.54 × 10^-32 | 8.47 × 10^-30 |
| | 6.45 × 10^-29 | 5.14 × 10^-30 | 7.48 × 10^-29 | 6.42 × 10^-26 | 3.41 × 10^-24 | 5.49 × 10^-20 |
| Schwefel | 5.84 × 10^-34 | 2.51 × 10^-35 | 4.98 × 10^-32 | 7.69 × 10^-31 | 3.98 × 10^-30 | 4.16 × 10^-28 |
| | 5.68 × 10^-30 | 4.87 × 10^-31 | 7.98 × 10^-28 | 1.36 × 10^-27 | 8.54 × 10^-26 | 2.65 × 10^-24 |
Table 3. Effects of HMCR on performance of RDEGA-HS (HMS = 80); for each function, the first row gives the average value over 50 runs and the second row the standard deviation.

| Function | HMCR = 0.9 | HMCR = 0.8 | HMCR = 0.6 | HMCR = 0.5 | HMCR = 0.3 | HMCR = 0.1 |
|---|---|---|---|---|---|---|
| Ackley | 4.26 × 10^-3 | 8.54 × 10^-3 | 9.98 × 10^-3 | 3.41 × 10^-2 | 9.88 × 10^-2 | 6.54 |
| | 9.25 × 10^-4 | 4.51 × 10^-3 | 5.89 × 10^-2 | 5.68 × 10^-1 | 1.25 | 6.66 × 10^1 |
| Griewank | 2.54 × 10^-10 | 7.84 × 10^-8 | 1.54 × 10^-6 | 5.65 × 10^-5 | 6.58 × 10^-4 | 2.58 × 10^-3 |
| | 7.33 × 10^-2 | 9.98 × 10^-2 | 1.54 × 10^-1 | 6.85 | 8.98 | 4.65 × 10^1 |
| Rastrigin | 9.91 × 10^-9 | 7.54 × 10^-8 | 8.54 × 10^-7 | 3.69 × 10^-5 | 9.58 × 10^-4 | 7.54 × 10^-1 |
| | 2.54 × 10^-4 | 8.96 × 10^-4 | 2.56 × 10^-3 | 6.66 × 10^-3 | 2.58 × 10^-2 | 8.95 × 10^-2 |
| Rosenbrock | 7.56 × 10^-2 | 8.75 × 10^-2 | 2.45 × 10^-1 | 9.87 × 10^-1 | 2.6 | 8.74 × 10^1 |
| | 1.58 × 10^-1 | 6.87 × 10^-1 | 8.95 | 6.54 × 10^1 | 1.26 × 10^2 | 9.68 × 10^2 |
| Sphere | 2.54 × 10^-39 | 8.54 × 10^-38 | 5.64 × 10^-35 | 9.87 × 10^-34 | 6.84 × 10^-31 | 5.36 × 10^-29 |
| | 7.84 × 10^-30 | 5.42 × 10^-28 | 6.46 × 10^-28 | 3.38 × 10^-25 | 5.10 × 10^-23 | 3.86 × 10^-21 |
| Schwefel | 5.98 × 10^-36 | 6.04 × 10^-33 | 9.40 × 10^-31 | 3.20 × 10^-30 | 1.65 × 10^-28 | 9.69 × 10^-27 |
| | 2.31 × 10^-30 | 3.94 × 10^-29 | 4.48 × 10^-27 | 9.45 × 10^-25 | 4.31 × 10^-23 | 1.41 × 10^-22 |
Table 4. Results of the Ackley function under different optimization algorithms.

| Algorithm | Dimensions | Theoretical Optimum | Mean Optimum | Standard Deviation |
|---|---|---|---|---|
| HS | 30 | 0 | 3.66 × 10^-2 | 2.35 |
| | 40 | 0 | 3.55 × 10^-1 | 3.51 |
| | 50 | 0 | 1.25 | 4.21 |
| | 60 | 0 | 3.35 | 3.23 × 10^1 |
| | 70 | 0 | 8.70 × 10^1 | 5.64 × 10^2 |
| | 80 | 0 | 9.49 × 10^2 | 6.43 × 10^2 |
| RHS | 30 | 0 | 2.41 × 10^-2 | 5.33 × 10^-2 |
| | 40 | 0 | 1.02 × 10^-1 | 6.25 × 10^-1 |
| | 50 | 0 | 1.01 | 4.22 |
| | 60 | 0 | 8.25 | 1.10 × 10^1 |
| | 70 | 0 | 6.59 × 10^1 | 4.56 × 10^2 |
| | 80 | 0 | 1.88 × 10^2 | 3.12 × 10^3 |
| DEGA-HS | 30 | 0 | 8.20 × 10^-5 | 2.03 × 10^-2 |
| | 40 | 0 | 1.22 × 10^-2 | 6.01 × 10^-2 |
| | 50 | 0 | 1.25 × 10^-1 | 8.85 × 10^-1 |
| | 60 | 0 | 7.22 | 6.43 |
| | 70 | 0 | 6.96 × 10^1 | 7.80 × 10^1 |
| | 80 | 0 | 1.69 × 10^2 | 9.72 × 10^3 |
| RDEGA-HS | 30 | 0 | 2.67 × 10^-5 | 1.22 × 10^-3 |
| | 40 | 0 | 5.44 × 10^-3 | 3.25 × 10^-3 |
| | 50 | 0 | 9.56 × 10^-2 | 5.26 × 10^-1 |
| | 60 | 0 | 2.29 × 10^-1 | 4.14 |
| | 70 | 0 | 9.61 × 10^-1 | 2.34 × 10^1 |
| | 80 | 0 | 5.77 | 1.22 × 10^2 |
Table 5. Results of the Griewank function under different optimization algorithms.

| Algorithm | Dimensions | Theoretical Optimum | Average Optimum | Standard Deviation |
|---|---|---|---|---|
| HS | 30 | 0 | 4.00 × 10^-11 | 1.00 |
| | 40 | 0 | 6.02 × 10^-9 | 1.50 |
| | 50 | 0 | 1.38 × 10^-8 | 2.64 |
| | 60 | 0 | 4.86 × 10^-7 | 7.16 |
| | 70 | 0 | 5.99 × 10^-7 | 5.99 × 10^1 |
| | 80 | 0 | 6.98 × 10^-6 | 7.94 × 10^1 |
| RHS | 30 | 0 | 3.13 × 10^-10 | 5.19 × 10^-1 |
| | 40 | 0 | 1.66 × 10^-9 | 7.73 × 10^-1 |
| | 50 | 0 | 7.84 × 10^-9 | 1.15 |
| | 60 | 0 | 2.06 × 10^-8 | 7.52 |
| | 70 | 0 | 6.34 × 10^-8 | 7.93 |
| | 80 | 0 | 4.54 × 10^-7 | 1.87 × 10^1 |
| DEGA-HS | 30 | 0 | 8.82 × 10^-11 | 2.15 × 10^-1 |
| | 40 | 0 | 2.08 × 10^-10 | 5.25 × 10^-1 |
| | 50 | 0 | 1.08 × 10^-9 | 7.96 × 10^-1 |
| | 60 | 0 | 8.90 × 10^-8 | 8.11 × 10^-1 |
| | 70 | 0 | 8.22 × 10^-7 | 4.48 |
| | 80 | 0 | 7.91 × 10^-6 | 5.21 |
| RDEGA-HS | 30 | 0 | 1.13 × 10^-12 | 1.24 × 10^-2 |
| | 40 | 0 | 1.35 × 10^-11 | 1.68 × 10^-1 |
| | 50 | 0 | 1.35 × 10^-10 | 2.29 × 10^-1 |
| | 60 | 0 | 7.44 × 10^-9 | 7.67 × 10^-1 |
| | 70 | 0 | 4.93 × 10^-8 | 9.45 × 10^-1 |
| | 80 | 0 | 8.01 × 10^-7 | 1.36 |
Table 6. Results of the Rastrigin function under different optimization algorithms.

| Algorithm | Dimensions | Theoretical Optimum | Average Optimum | Standard Deviation |
|---|---|---|---|---|
| HS | 30 | 0 | 1.39 × 10^-7 | 3.32 × 10^-2 |
| | 40 | 0 | 8.52 × 10^-7 | 4.98 |
| | 50 | 0 | 2.86 × 10^-5 | 8.75 |
| | 60 | 0 | 5.20 × 10^-5 | 6.11 × 10^1 |
| | 70 | 0 | 5.38 × 10^-4 | 4.85 × 10^2 |
| | 80 | 0 | 3.59 × 10^-3 | 8.09 × 10^2 |
| RHS | 30 | 0 | 4.02 × 10^-7 | 1.72 × 10^-1 |
| | 40 | 0 | 8.30 × 10^-7 | 2.56 |
| | 50 | 0 | 3.12 × 10^-5 | 3.80 |
| | 60 | 0 | 4.32 × 10^-5 | 8.60 |
| | 70 | 0 | 2.73 × 10^-4 | 4.56 × 10^1 |
| | 80 | 0 | 5.91 × 10^-3 | 5.87 × 10^2 |
| DEGA-HS | 30 | 0 | 9.46 × 10^-8 | 7.12 × 10^-2 |
| | 40 | 0 | 2.09 × 10^-7 | 1.74 × 10^-1 |
| | 50 | 0 | 1.03 × 10^-5 | 2.64 |
| | 60 | 0 | 4.81 × 10^-4 | 7.34 |
| | 70 | 0 | 9.85 × 10^-3 | 1.67 × 10^1 |
| | 80 | 0 | 6.95 × 10^-2 | 5.74 × 10^1 |
| RDEGA-HS | 30 | 0 | 1.46 × 10^-8 | 4.12 × 10^-3 |
| | 40 | 0 | 1.91 × 10^-8 | 5.57 × 10^-3 |
| | 50 | 0 | 1.06 × 10^-7 | 7.60 × 10^-1 |
| | 60 | 0 | 8.34 × 10^-7 | 4.21 |
| | 70 | 0 | 4.63 × 10^-6 | 7.84 |
| | 80 | 0 | 9.85 × 10^-6 | 8.73 |
Table 7. Results of the Rosenbrock function under different optimization algorithms.

| Algorithm | Dimensions | Theoretical Optimum | Average Optimum | Standard Deviation |
|---|---|---|---|---|
| HS | 30 | 0 | 8.43 × 10^-2 | 6.36 × 10^-2 |
| | 40 | 0 | 4.82 × 10^-1 | 4.65 × 10^-1 |
| | 50 | 0 | 2.86 | 6.80 |
| | 60 | 0 | 7.14 × 10^2 | 9.45 × 10^2 |
| | 70 | 0 | 7.14 × 10^2 | 9.45 × 10^2 |
| | 80 | 0 | 5.23 × 10^3 | 7.35 × 10^3 |
| RHS | 30 | 0 | 4.08 × 10^-2 | 2.77 × 10^-2 |
| | 40 | 0 | 4.68 × 10^-1 | 8.47 × 10^-1 |
| | 50 | 0 | 6.93 | 3.72 |
| | 60 | 0 | 9.63 | 5.71 × 10^1 |
| | 70 | 0 | 3.72 × 10^1 | 6.62 × 10^2 |
| | 80 | 0 | 7.52 × 10^2 | 5.04 × 10^3 |
| DEGA-HS | 30 | 0 | 7.77 × 10^-3 | 7.98 × 10^-1 |
| | 40 | 0 | 2.67 × 10^-1 | 2.91 |
| | 50 | 0 | 6.12 × 10^-1 | 6.84 |
| | 60 | 0 | 1.55 | 8.15 × 10^1 |
| | 70 | 0 | 7.15 × 10^1 | 3.43 × 10^2 |
| | 80 | 0 | 7.02 × 10^2 | 1.48 × 10^3 |
| RDEGA-HS | 30 | 0 | 2.21 × 10^-3 | 9.98 × 10^-2 |
| | 40 | 0 | 9.04 × 10^-2 | 4.56 × 10^-2 |
| | 50 | 0 | 2.53 × 10^-1 | 9.10 × 10^-1 |
| | 60 | 0 | 5.90 | 5.78 |
| | 70 | 0 | 1.31 × 10^1 | 1.74 × 10^1 |
| | 80 | 0 | 8.33 × 10^1 | 4.77 × 10^2 |
Table 8. Results of the Sphere function under different optimization algorithms.

| Algorithm | Dimensions | Theoretical Optimum | Average Optimum | Standard Deviation |
|---|---|---|---|---|
| HS | 30 | 0 | 8.72 × 10^-38 | 3.26 × 10^-28 |
| | 40 | 0 | 9.21 × 10^-36 | 1.25 × 10^-28 |
| | 50 | 0 | 4.11 × 10^-36 | 3.67 × 10^-28 |
| | 60 | 0 | 8.42 × 10^-34 | 1.63 × 10^-27 |
| | 70 | 0 | 8.16 × 10^-34 | 1.09 × 10^-26 |
| | 80 | 0 | 2.91 × 10^-33 | 1.28 × 10^-26 |
| RHS | 30 | 0 | 2.45 × 10^-38 | 2.56 × 10^-28 |
| | 40 | 0 | 6.49 × 10^-37 | 2.06 × 10^-28 |
| | 50 | 0 | 1.14 × 10^-35 | 1.63 × 10^-28 |
| | 60 | 0 | 4.80 × 10^-36 | 3.32 × 10^-27 |
| | 70 | 0 | 7.41 × 10^-35 | 2.46 × 10^-27 |
| | 80 | 0 | 3.56 × 10^-34 | 2.33 × 10^-27 |
| DEGA-HS | 30 | 0 | 9.17 × 10^-38 | 3.63 × 10^-29 |
| | 40 | 0 | 3.90 × 10^-38 | 1.40 × 10^-28 |
| | 50 | 0 | 6.56 × 10^-37 | 3.06 × 10^-28 |
| | 60 | 0 | 8.62 × 10^-35 | 3.05 × 10^-28 |
| | 70 | 0 | 1.07 × 10^-33 | 1.73 × 10^-27 |
| | 80 | 0 | 1.14 × 10^-32 | 2.39 × 10^-27 |
| RDEGA-HS | 30 | 0 | 7.00 × 10^-40 | 6.58 × 10^-31 |
| | 40 | 0 | 2.66 × 10^-39 | 5.84 × 10^-30 |
| | 50 | 0 | 2.77 × 10^-38 | 2.27 × 10^-29 |
| | 60 | 0 | 3.92 × 10^-36 | 3.14 × 10^-28 |
| | 70 | 0 | 1.01 × 10^-34 | 3.69 × 10^-28 |
| | 80 | 0 | 3.89 × 10^-34 | 8.51 × 10^-29 |
Table 9. Results of the Schwefel function under different optimization algorithms.

| Algorithm | Dimensions | Theoretical Optimum | Average Optimum | Standard Deviation |
|---|---|---|---|---|
| HS | 30 | 0 | 3.93 × 10^-35 | 7.20 × 10^-27 |
| | 40 | 0 | 3.36 × 10^-34 | 3.55 × 10^-25 |
| | 50 | 0 | 7.14 × 10^-35 | 3.82 × 10^-26 |
| | 60 | 0 | 2.59 × 10^-33 | 2.93 × 10^-24 |
| | 70 | 0 | 1.58 × 10^-32 | 3.07 × 10^-23 |
| | 80 | 0 | 5.24 × 10^-31 | 3.24 × 10^-23 |
| RHS | 30 | 0 | 2.44 × 10^-36 | 6.47 × 10^-28 |
| | 40 | 0 | 3.70 × 10^-35 | 1.69 × 10^-26 |
| | 50 | 0 | 1.60 × 10^-34 | 1.23 × 10^-25 |
| | 60 | 0 | 4.13 × 10^-33 | 3.01 × 10^-25 |
| | 70 | 0 | 2.17 × 10^-32 | 1.79 × 10^-23 |
| | 80 | 0 | 4.43 × 10^-32 | 3.38 × 10^-23 |
| DEGA-HS | 30 | 0 | 4.63 × 10^-38 | 9.70 × 10^-29 |
| | 40 | 0 | 9.97 × 10^-36 | 1.24 × 10^-27 |
| | 50 | 0 | 3.25 × 10^-35 | 8.49 × 10^-27 |
| | 60 | 0 | 1.13 × 10^-33 | 8.16 × 10^-26 |
| | 70 | 0 | 1.97 × 10^-32 | 3.24 × 10^-24 |
| | 80 | 0 | 5.93 × 10^-32 | 2.29 × 10^-22 |
| RDEGA-HS | 30 | 0 | 1.52 × 10^-39 | 2.19 × 10^-29 |
| | 40 | 0 | 5.42 × 10^-36 | 8.45 × 10^-29 |
| | 50 | 0 | 3.76 × 10^-35 | 3.19 × 10^-26 |
| | 60 | 0 | 6.41 × 10^-35 | 2.42 × 10^-25 |
| | 70 | 0 | 1.09 × 10^-34 | 1.53 × 10^-25 |
| | 80 | 0 | 3.20 × 10^-33 | 2.06 × 10^-24 |
