1. Introduction
Optimization refers to the study of problems that have multiple feasible solutions and to the selection of the best one [1]. Correspondingly, an optimization algorithm is a procedure for finding the best scheme among many candidate schemes under given conditions, such as finding the optimal hyperparameters of a neural network so that the network achieves its best performance. Intelligent optimization algorithms are widely used for optimization problems in various fields. For example, the dynamic differential annealed optimization algorithm and an improved moth-flame optimization algorithm were used to solve mathematical and engineering optimization problems [2,3], the sunflower optimization algorithm was proposed for the optimal selection of the parameters of the circuit-based PEMFC model [4], and a novel meta-heuristic equilibrium algorithm was used to find the optimal threshold value for a grayscale image [5]. Intelligent optimization algorithms fall mainly into four categories, namely, natural simulation optimization algorithms, evolutionary algorithms, plant growth simulation algorithms, and swarm intelligence optimization algorithms, of which swarm intelligence optimization algorithms are the most important [6]. Researchers observed how animals forage and communicate with each other, described the characteristics of these behaviors algorithmically, and thereby developed the first swarm intelligence optimization algorithm.
Common swarm intelligence optimization algorithms include the Cuckoo Search Algorithm (CSA) [7], the Particle Swarm Optimization algorithm (PSO) [8], the Ant Colony Optimization algorithm (ACO) [9], and the Firefly Algorithm (FA) [10]. Early on, scholars proposed the Artificial Fish Swarm algorithm (AFS) [11], which was inspired by the food-seeking, gathering, and following behaviors of fish communities. The AFS algorithm converges well in the early stage, but it suffers from a slow convergence speed and low optimization accuracy in the later stage [12]. Accordingly, scholars have sought solutions to this problem. Zhang et al. [13] dynamically adjusted the visual field and step size of the artificial fish and improved the position-updating strategy of the artificial fish. Liu et al. [14] used a chaotic transformation to initialize the positions of the individual fish so that the fish were distributed more evenly in a limited area; in addition, a physical transformation model was built based on the relationship between motion and physical fitness, which effectively improved the convergence speed and the optimization accuracy. Li et al. [15] used the steepest descent method to update the artificial fish with the best fitness values and guided the other artificial fish through the exchange of information among them, thereby accelerating the convergence of the AFS algorithm. Later, Liu et al. [16] created a hybrid algorithm that introduces the movement operator of the particle swarm algorithm to adjust the movement direction and position of the artificial fish and enhance their ability to escape local optima. Li et al. [17] introduced the gene exchange behavior of the genetic algorithm (GA) into the AFS algorithm to enhance its ability to escape local optima and improve its search efficiency. These methods show that the advantages of one algorithm can compensate for the disadvantages of another.
In recent years, many optimization algorithms have been proposed. Among them, Jiang was inspired by the foraging and mating behaviors of beetles and proposed a new intelligent optimization algorithm, the beetle antennae search algorithm (BAS) [18]. Since the BAS algorithm has lower time and space complexity than swarm intelligence algorithms and is more efficient, it is also widely used in other fields, such as medicine [19], engineering design [20], and image processing [21]. In addition, the BAS algorithm has an obvious advantage over other algorithms in terms of convergence speed, which addresses the slow late-stage convergence of the AFS algorithm. However, because the BAS algorithm employs a single beetle for optimization, it is prone to converging to a local extremum and has poor optimization stability. It has therefore been improved by many scholars. Wang et al. [22] expanded the single beetle into a population of beetles, optimized the step-size update, and proposed the beetle swarm antennae search algorithm (BSAS), which combines the swarm intelligence algorithm with a feedback-based step-size update strategy. Zhao et al. [23] combined the BAS and the GA to address the BAS algorithm's susceptibility to local extrema when optimizing multimodal complex functions. Khan [24] used the ADAM update rule to adaptively adjust the step size in each iteration of the BAS algorithm.
To address the instability and low optimization accuracy of the AFS and BAS algorithms in high dimensions, this paper proposes a hybrid algorithm of the artificial fish swarm and beetle antennae search algorithms (AFS-MMSBAS). The algorithm combines the advantages of the BAS and the AFS, where the BAS algorithm is improved with a beetle mutation strategy and a multi-step detection strategy to increase optimization accuracy. In the early stage, the AFS algorithm is used for a global search until it has converged to a certain extent; the improved BAS algorithm is then used to accelerate convergence. The simulation results show that the AFS-MMSBAS algorithm has significantly improved stability and optimization accuracy compared with the AFS, MDBAS, and AFS-MDBAS algorithms.
2. Hybrid Algorithm of AFS and Improved BAS
To address the above problems, this section proposes the following:
- (1) Improved BAS algorithm: a mutation strategy and a multi-step detection strategy for the BAS are proposed to allow the beetle to escape local extrema and to accelerate the convergence speed.
- (2) Hybrid of the AFS and BAS algorithms: the respective advantages of the AFS and BAS algorithms are combined to improve the optimization stability of the algorithm in high dimensions.
2.1. Mutation Selection Strategy of BAS
The BAS algorithm is prone to converging to a local extremum. In order to allow the beetle to escape a local extremum, this paper proposes applying a mutation strategy to the beetle. In multi-dimensional problems, if every dimension is changed at the same time, then the higher the dimension, the lower the probability that the newly generated individual is superior to the current individual. Therefore, this paper proposes a method in which each dimension is changed separately. If the beetle with the changed dimension is better than the unchanged one, the current beetle is updated; otherwise, the mutation operation continues in the next dimension, and so on, until the last dimension, as shown in Figure 1.
In addition, in order to obtain better results, the single mutation is extended to multiple mutations: the beetle with the best fitness among the multiple mutant beetles is selected, and the next step is initiated. The mutation process is shown in Figure 2.
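To make the procedure concrete, the following is a minimal Python sketch of the mutation selection strategy described above. It is an illustration under stated assumptions, not the authors' implementation: the uniform resampling used as the per-dimension mutation operator, the assumption of a minimization problem, and the names `mutate_per_dimension`, `mutation_selection`, and `n_mutants` are all introduced here.

```python
import numpy as np

def mutate_per_dimension(x, fitness, lower, upper, rng):
    """Mutate the beetle one dimension at a time (cf. Figure 1).

    One dimension is resampled uniformly within its bounds (an assumed
    mutation operator); the change is kept only if it improves the fitness,
    otherwise the next dimension is tried, up to the last dimension.
    """
    best = np.array(x, dtype=float)
    best_fit = fitness(best)
    for d in range(len(best)):
        candidate = best.copy()
        candidate[d] = rng.uniform(lower[d], upper[d])  # assumed mutation operator
        cand_fit = fitness(candidate)
        if cand_fit < best_fit:          # minimization assumed
            best, best_fit = candidate, cand_fit
            break                        # the current beetle is updated; stop mutating
    return best, best_fit

def mutation_selection(x, fitness, lower, upper, n_mutants=5, rng=None):
    """Generate several mutant beetles (cf. Figure 2) and keep the best one."""
    rng = np.random.default_rng() if rng is None else rng
    candidates = [mutate_per_dimension(x, fitness, lower, upper, rng)
                  for _ in range(n_mutants)]
    candidates.append((np.array(x, dtype=float), fitness(x)))  # keep the unmutated beetle too
    return min(candidates, key=lambda c: c[1])                 # (position, fitness) pair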
2.2. Multi-Step Detection Strategy
The choice of the initial step size has a great impact on the performance of the BAS algorithm. If the initial step size is too large, the algorithm converges slowly; if it is too small, the algorithm easily falls into a local extremum and struggles to escape it. The initial step size is usually set manually and is then gradually reduced by a decreasing factor so that the search approaches the optimal value. Inspired by the multi-directional detection strategy [23], this paper proposes a multi-step detection strategy. This strategy dynamically increases or decreases the step size during the optimization process and reduces the impact of the initial step size on the algorithm's performance. The basic idea is to shrink and to expand the current step size by certain multiples, move forward with the different step sizes, and select the step size with the best fitness value for the update. This method accelerates convergence and increases the possibility of escaping local extrema.
Figure 3 is a schematic diagram of the multi-step detection of the beetles.
Let the current iteration be $t$ and the current step size be $\delta^t$. The step size is scaled according to Formula (1), where $\delta_s^t$ denotes the shortened step size, $c$ is the reduction factor, $\delta_l^t$ denotes the extended step size, and $k$ is the amplification factor. Let $f_\delta$ denote the fitness of the current beetle after moving a step of length $\delta$ in a certain direction. Both $\delta_s^t$ and $\delta_l^t$ are executed at the same time, and the step size corresponding to the optimal fitness $f_{best}$ is then selected and updated according to Formula (2), where $\eta$ is the step-decreasing factor. The pseudocode of the multi-step detection strategy is shown in Algorithm 1.
Algorithm 1. Multi-step detection pseudocode
- 1. Inputs: current beetle position $x^t$ and current step size $\delta^t$
- 2. for each candidate step size $\delta \in \{\delta_s^t, \delta_l^t\}$ do
- 3. compute the fitness $f_\delta$ of the beetle after moving with step size $\delta$
- 4. end for
- 5. select the step size corresponding to the best fitness $f_{best}$
- 6. do Formula (2)
- 7. return the new step size
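As an illustration of Algorithm 1, the following Python sketch implements one multi-step detection step under stated assumptions: Formula (1) is taken to shrink the step by dividing by the reduction factor $c$ and to extend it by multiplying by the amplification factor $k$, Formula (2) is taken to multiply the selected step by the step-decreasing factor $\eta$, and a minimization problem is assumed; the function name and default parameter values are placeholders.

```python
import numpy as np

def multi_step_detection(x, direction, step, fitness, c=2.0, k=2.0, eta=0.95):
    """Try the shortened and the extended step along `direction` and keep
    the one with the best fitness (cf. Algorithm 1).

    c  : reduction factor       (assumed to act by division, Formula (1))
    k  : amplification factor   (assumed to act by multiplication, Formula (1))
    eta: step-decreasing factor (assumed form of Formula (2))
    """
    candidates = [step / c, step * k]                 # shortened and extended step sizes
    moves = [(x + s * direction, s) for s in candidates]
    fits = [fitness(pos) for pos, _ in moves]
    best = int(np.argmin(fits))                       # minimization assumed
    new_x, best_step = moves[best]
    new_step = eta * best_step                        # Formula (2), assumed form
    return new_x, new_step, fits[best]
```

Including the unscaled current step size as a third candidate would be another reasonable variant of the same idea.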
In addition, this paper initializes the step size of the beetle according to Formula (3) to avoid manual interference, where $x^0$ denotes the position of the initial beetle, $\eta$ denotes the step-decreasing factor, and $\epsilon$ is a constant that prevents the initial step size from equaling 0.
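Since Formula (3) itself is not reproduced here, the following one-line sketch only illustrates the idea of deriving the initial step size from the initial beetle position, scaled by the step-decreasing factor, with a small constant keeping it nonzero; the specific form chosen (largest absolute coordinate of $x^0$) is purely an assumption.

```python
import numpy as np

def init_step(x0, eta, eps=1e-2):
    """Assumed form of Formula (3): the scale of the initial beetle position
    times the step-decreasing factor, plus a small constant so that the
    initial step size is never 0."""
    return eta * float(np.abs(np.asarray(x0, dtype=float)).max()) + eps
```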
2.3. Hybrid Strategy of the AFS-MMSBAS Algorithm
The AFS-MMSBAS algorithm combines the BAS algorithm and the AFS algorithm through a simple hybrid strategy. In the early stage, the AFS algorithm is used for global optimization and fast convergence; in the later stage, the improved BAS algorithm replaces it and continues the optimization. The hybrid algorithm reduces, to a certain extent, the instability of the BAS algorithm caused by the random initialization of a single individual, accelerates the convergence speed, and improves the optimization accuracy. The most important issue in the hybrid algorithm is deciding when to terminate the AFS algorithm. If it terminates too early, the global search ability of the AFS algorithm is not fully exploited; if it terminates too late, convergence is too slow and time is wasted. Therefore, according to the characteristics of the AFS algorithm, this paper terminates the AFS algorithm when the optimal value remains unchanged for several iterations or when its convergence becomes significantly slow. The optimal individual obtained by the AFS algorithm is taken as the initial individual of the BAS algorithm. The termination rule of the AFS algorithm is shown in Algorithm 2:
Algorithm 2. AFS algorithm termination rule pseudocode
- 1. Input: parameters p and q
- 2. if the best individual has remained unchanged for p consecutive iterations, or the number of AFS iterations exceeds q, then
- 3. initial individual of the BAS = the optimal individual of the AFS
- 4. end if
- 5. do the improved BAS
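A minimal Python sketch of the termination rule in Algorithm 2 is given below, assuming the best fitness value of each AFS iteration is recorded in a list; the bookkeeping and the function name `afs_should_stop` are illustrative.

```python
def afs_should_stop(iteration, best_history, p, q):
    """Return True when the AFS phase should hand over to the improved BAS
    (cf. Algorithm 2): the best value has stayed the same for p consecutive
    iterations, or the AFS has already run q iterations."""
    if iteration >= q:
        return True
    if len(best_history) > p and all(b == best_history[-1] for b in best_history[-(p + 1):]):
        return True
    return False
```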
The parameters p and q are set manually, where p is the number of consecutive iterations for which the best value remains unchanged, and q is the maximum number of iterations of the AFS algorithm within the hybrid algorithm. When the fish swarm has iterated p times and the optimal position has remained unchanged, the AFS algorithm ends; this means that the current fish population may have found the best value or fallen into a local extremum. In addition, because the AFS algorithm converges slowly in the later period, q is set as a further termination condition to avoid wasting time in case condition p is never met. The implementation steps of the AFS-MMSBAS algorithm are as follows:
Step 1. Set the initial parameters of the algorithm, including the populations, visual properties, attempt number, crowding factor, steps of the fish, antenna number, antenna length, and variation rate.
Step 2. Use the AFS algorithm to continuously optimize until the AFS iteration termination condition is met.
Step 3. Take the best individual in the optimization stage of the AFS algorithm as the initial position for the optimization of the beetle.
Step 4. Judge whether there is variation. If there is, go to step 5; if not, go to step 6.
Step 5. Allow the dimension to be mutated, until the end of the mutation.
Step 6. Perform multi-directional and multistep detection on the beetle; then, select the best beetle to continue to move forward and update the current beetle step length and whisker length.
Step 7. If the termination conditions are met, the algorithm is ended; if not, go to step 4.
Figure 4 provides a flow chart of the AFS-MMSBAS algorithm depicted according to the above steps.
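To tie Steps 1-7 together, the following schematic Python driver shows how the pieces sketched earlier could be wired up. It assumes that `mutation_selection`, `multi_step_detection`, `init_step`, and `afs_should_stop` from the previous sketches are in scope, that `afs_iterate` is a placeholder performing one AFS iteration and returning the current best individual, and that the parameter values are illustrative; for brevity it samples only a single random direction per iteration, whereas the full algorithm performs multi-directional detection and also updates the antenna length.

```python
import numpy as np

def afs_mmsbas(fitness, lower, upper, dim, afs_iterate,
               p=10, q=200, max_iter=1000, mutation_rate=0.1, seed=0):
    """Schematic AFS-MMSBAS driver following Steps 1-7 (see Figure 4)."""
    rng = np.random.default_rng(seed)

    # Steps 1-2: AFS phase until the termination rule of Algorithm 2 fires.
    best_history, best = [], None
    for it in range(q):
        best = afs_iterate()                      # one AFS iteration (assumed interface)
        best_history.append(fitness(best))
        if afs_should_stop(it, best_history, p, q):
            break

    # Step 3: the best AFS individual seeds the beetle.
    x = np.asarray(best, dtype=float)
    step = init_step(x, eta=0.95)

    # Steps 4-7: improved BAS phase.
    for _ in range(max_iter):
        if rng.random() < mutation_rate:          # Steps 4-5: mutate with some probability
            x, _ = mutation_selection(x, fitness, lower, upper, rng=rng)
        direction = rng.standard_normal(dim)      # Step 6: one random direction for brevity
        direction /= np.linalg.norm(direction)
        x, step, _ = multi_step_detection(x, direction, step, fitness)
    return x, fitness(x)
```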
4. Conclusions
In view of the advantages and disadvantages of the AFS and BAS algorithms, this paper proposes a hybrid algorithm, AFS-MMSBAS, which combines the artificial fish swarm algorithm with an improved beetle antennae search algorithm. The hybrid algorithm uses a mutation strategy to increase the probability that a beetle escapes a local extremum and changes course toward a better direction. It also introduces a multi-step detection strategy that improves the convergence speed of the algorithm by adaptively changing the step size. Finally, a simple method is used to combine the AFS algorithm with the improved BAS algorithm. To verify that the AFS-MMSBAS algorithm is superior to its two constituent algorithms, it was compared with the AFS, BAS, and AFS-BAS algorithms at different dimensions. To further verify the advantages of the algorithm for high-dimensional problems, the AFS-MMSBAS algorithm was also compared with similar algorithms, namely, AAFSA-GE and ADSAFS-PSO. The experimental results show that the AFS-MMSBAS algorithm alleviates the poor optimization accuracy and instability of the AFS and BAS algorithms at high dimensions and converges faster in the later period. In summary, the AFS-MMSBAS algorithm performs well on high-dimensional problems.
The algorithm proposed in this paper performs well for high-dimensional optimization problems but performs poorly on some functions, such as step functions. In addition, its ability to solve low-dimensional problems is limited, leaving considerable room for improvement. In future work, the authors therefore plan to improve the algorithm, specifically its ability to optimize such functions in high dimensions and to handle low-dimensional problems, and to apply it to engineering practice.