Article

A Multiple Mechanism Enhanced Arithmetic Optimization Algorithm for Numerical Problems

College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
* Author to whom correspondence should be addressed.
Biomimetics 2023, 8(4), 348; https://doi.org/10.3390/biomimetics8040348
Submission received: 18 July 2023 / Revised: 1 August 2023 / Accepted: 1 August 2023 / Published: 6 August 2023

Abstract: The Arithmetic Optimization Algorithm (AOA) is a meta-heuristic algorithm inspired by mathematical operators, but it may stagnate when facing complex optimization problems, reducing its convergence speed and accuracy. In this paper, an AOA variant called ASFAOA is proposed by integrating a double-opposition learning mechanism, an adaptive spiral search strategy, an offset distribution estimation strategy, and a modified cosine acceleration function into the original AOA, aiming to improve its local exploitation and global exploration capabilities. In the proposed ASFAOA, the double-opposition learning strategy enhances population diversity by searching the problem space more thoroughly. The spiral search strategy of tuna swarm optimization is introduced into the addition and subtraction operators of AOA to strengthen its ability to escape local optima. The offset distribution estimation strategy exploits the information of the dominant population to guide individual evolution in the right direction. In addition, an adaptive cosine acceleration function is proposed to better balance the exploitation and exploration capabilities of the AOA. To demonstrate the superiority of the proposed ASFAOA, two experiments are conducted against existing state-of-the-art algorithms. First, the CEC 2017 benchmark functions are used to evaluate the performance of ASFAOA through mean analysis, convergence analysis, stability analysis, the Wilcoxon signed-rank test, and the Friedman test. The proposed ASFAOA is then applied to the wireless sensor coverage problem, and its performance is illustrated on two sets of coverage problems with different dimensions. The results and discussion show that ASFAOA outperforms the original AOA and the other comparison algorithms. Therefore, ASFAOA is a useful technique for practical optimization problems.

1. Introduction

With the rapid development of various fields, real-world optimization problems are becoming more and more complicated. Traditional optimization methods require too much time and are too expensive for these emerging problems. In most cases, relatively accurate solutions are acceptable, meaning that a well-estimated near-optimal solution is sufficient in production practice. Metaheuristic algorithms are an emerging class of optimization techniques with the advantages of high operational efficiency, flexibility, stability, simplicity of implementation, parallelism, and ease of combination with other algorithms [1]. Therefore, many optimization algorithms have been proposed in recent decades to solve these nonconvex, nonlinearly constrained, and complex optimization problems, and they have proven very effective in practice.
As one of the novel algorithms, AOA was initially applied to numerical optimization problems and engineering design problems. Due to its uncomplicated structure and excellent performance, AOA has covered many areas such as support vector regression (SVR) parameter optimization [2], tuning PID controllers [3,4], fuel cell parameter extraction [5], DNA sequence optimization design [6], clustering optimization [7,8], power system stabilizer design [9], feature selection [10], photovoltaic parameter optimization [11,12,13], robot path planning [14], wireless sensor network location and deployment [15], IoT workflow scheduling [16], image segmentation [17], etc.
Although AOA performs well, its convergence rate decreases and it tends to fall into local optima when facing optimization problems with complex structures. Therefore, the convergence accuracy and convergence speed of AOA have been improved through various mechanisms. For example, Dhawale et al. used the Levy flight strategy to enhance the exploitation and exploration capabilities of AOA [18]. Zhang et al. proposed a hybrid AOA that introduces the energy parameter of Harris hawks optimization to balance exploitation and exploration [19]. Izci et al. proposed a hybrid arithmetic optimization algorithm incorporating a Nelder–Mead simplex search for the optimal design of automotive cruise control systems [20]. Chen et al. proposed an improved arithmetic optimization algorithm based on a population control strategy that classifies populations and adaptively controls the number of individuals in subpopulations, effectively using information about each individual to improve the accuracy of the solution [21]. Davut et al. modified the basic opposition learning mechanism and applied it to enhance the population diversity of arithmetic optimization algorithms [22]. Fang et al. used dynamic inertia weights to improve the exploration and exploitation capabilities of the algorithm and introduced dynamic variance probability coefficients and triangular variance strategies to help the algorithm avoid local optima. Zhang et al. used a differential variance ranking strategy to improve the local exploitation capability of AOA [23]. Abualigah et al. hybridized AOA with the sine cosine algorithm to enhance the local search performance of the algorithm [24]. Celik et al. introduced a Gaussian distribution and a quasi-opposition learning strategy to remedy the slow convergence of AOA [25]. Ozmen et al. presented an augmented arithmetic optimization algorithm integrating pattern search and elite opposition learning mechanisms [26]. Zheng et al. instead used stochastic math-optimizer probabilities to increase population diversity and proposed a forced switching mechanism to help populations escape local optima [27]. An improved arithmetic optimization algorithm combining a logarithmic spiral mechanism and a greedy selection mechanism was proposed and employed for solving PID control problems by Ekinci et al. [28].
This work presents a variant of AOA called ASFAOA for numerical optimization and wireless sensor coverage problems, obtained by integrating a double-opposition learning mechanism, an adaptive spiral search strategy, an offset distribution estimation strategy, and a modified cosine acceleration function into the original AOA. The contributions of this work are summarized as follows:
- The double-opposition learning strategy is used to enhance population diversity, improving the global exploration capability of the method.
- The adaptive spiral search strategy is used to adequately search the space around each individual, further improving the method's ability to avoid local optima.
- The offset distribution estimation strategy is used to efficiently exploit the dominant population information to guide individuals toward correct evolution, further improving the accuracy of the method.
- The adaptive cosine acceleration function is used to balance the exploitation and exploration abilities of the algorithm, accelerating the convergence speed and accuracy of the population.
- ASFAOA is evaluated on the CEC2017 benchmark functions to validate its global optimization capability.
- ASFAOA is used to solve the wireless sensor coverage problem.
The remainder of the article is organized as follows: Section 2 gives an overview of the original AOA. Section 3 describes the implementation of the improved method in detail. The results and discussion of comparing ASFAOA with other algorithms on the CEC2017 function tests and wireless sensor coverage problems are presented in Section 4. Finally, Section 5 gives the conclusions of this work and future plans.

2. Arithmetic Optimization Algorithm

The AOA is a recent meta-heuristic method proposed by Abualigah in 2021 [29]. It utilizes the four traditional arithmetic operators to build its position update formulas, which are presented as follows:

2.1. Initialization Phase

In AOA, the initial population is generated randomly in the search space with the following equation:
$$X_i^{init} = rand \times (ub - lb) + lb, \quad i = 1, 2, \ldots, N_p$$
where $X_i^{init}$ is the $i$-th initial individual, and $ub$ and $lb$ are the upper and lower boundaries of the search space. $N_p$ is the population size. $rand$ is a uniformly distributed random vector ranging from 0 to 1.
After initializing the population, Math Optimizer Accelerated (MOA) is computed to choose whether to perform exploitation or exploration behavior.
$$MOA = \min + t \times \frac{\max - \min}{t_{\max}}$$
where $t$ and $t_{\max}$ denote the current iteration and the maximum number of iterations, and $\max$ and $\min$ are set to 0.9 and 0.2, respectively.

2.2. Exploration Phase

When $r_1 > MOA$ (where $r_1$ is a random number in [0, 1]; see Algorithm 1), the search space is explored globally using the multiplication and division operators. The mathematical model is as follows:
$$X_i^{t+1} = \begin{cases} X_b^t \div (MOP + \epsilon) \times ((ub - lb) \times \mu + lb), & rand < 0.5 \\ X_b^t \times MOP \times ((ub - lb) \times \mu + lb), & rand \ge 0.5 \end{cases}$$
where $X_b^t$ is the global best agent, $\epsilon$ is a small value that keeps the denominator nonzero, and $\mu$ is a constant with value 0.499.
The Math Optimizer probability (MOP) is as follows:
$$MOP = 1 - \left(\frac{t}{t_{\max}}\right)^{0.2}$$

2.3. Exploitation Phase

When $r_1 \le MOA$, exploitation is performed using the subtraction ("−") and addition ("+") operators. The mathematical model is as follows:
$$X_i^{t+1} = \begin{cases} X_b^t - MOP \times ((ub - lb) \times \mu + lb), & rand < 0.5 \\ X_b^t + MOP \times ((ub - lb) \times \mu + lb), & rand \ge 0.5 \end{cases}$$
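As a concrete illustration, one update step combining the equations above can be sketched in NumPy as follows. The function name and the greedy clipping of out-of-range positions are our simplifications, not part of the original AOA description:

```python
import numpy as np

def aoa_step(X, X_best, t, t_max, lb, ub,
             mu=0.499, moa_min=0.2, moa_max=0.9, eps=1e-12):
    """One position update of the basic AOA (illustrative sketch)."""
    Np, D = X.shape
    moa = moa_min + t * (moa_max - moa_min) / t_max   # linear MOA schedule
    mop = 1.0 - (t / t_max) ** 0.2                    # math optimizer probability
    scale = (ub - lb) * mu + lb                       # shared scaling term
    X_new = np.empty_like(X)
    for i in range(Np):
        r1, r2, r3 = np.random.rand(3)
        if r1 > moa:                                  # exploration: division / multiplication
            if r2 < 0.5:
                X_new[i] = X_best / (mop + eps) * scale
            else:
                X_new[i] = X_best * mop * scale
        else:                                         # exploitation: subtraction / addition
            if r3 < 0.5:
                X_new[i] = X_best - mop * scale
            else:
                X_new[i] = X_best + mop * scale
    return np.clip(X_new, lb, ub)                     # simple bound handling
```

Note that every branch is centered on the best agent `X_best`, which is exactly the behavior criticized in Section 3 as a source of premature convergence.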

3. The Proposed ASFAOA

The basic arithmetic optimization algorithm is simple and well structured and has some ability to search for the best individual, but it still has several weaknesses. First, in terms of population initialization, AOA initializes the population randomly in the search space and does not cover the entire search space well. Second, all four update rules of AOA focus only on the best individual of the group and generate random positions around it. There is a lack of information exchange between individuals: each agent "looks" only at the current optimal position of the group and ignores the other search agents, so individual search efficiency is extremely low. This inevitably causes the search agents to over-concentrate in the vicinity of the current optimal position, so that population diversity cannot be maintained effectively and the algorithm falls into a local optimum.
AOA selects exploitation or exploration by controlling the change in the acceleration function MOA. The larger the MOA, the greater the global search capability of the algorithm; the smaller the MOA, the greater its local exploitation capability. In the basic AOA, MOA grows linearly, which means the global search capability of the algorithm increases linearly. This is inconsistent with the common search strategy of swarm intelligence optimization algorithms, in which the algorithm focuses on global exploration in the early stage of the search and on local exploitation in the later stage. Moreover, the evolutionary search of AOA is nonlinear, and the linear growth of MOA cannot accurately track the actual iterative process, which makes it difficult for AOA to balance exploitation and exploration.
To remedy the shortcomings of the basic arithmetic optimization algorithm and enhance its performance, this paper proposes an AOA variant called ASFAOA. The optimization performance of AOA is enhanced by the following approaches. First, population diversity is enhanced by initializing the population with double-opposition learning and by using the double-opposition learning strategy to search the problem space more effectively. Second, the spiral search strategy of the tuna swarm optimization algorithm is introduced into the addition and subtraction operators of AOA to search the space around each individual more effectively, thus enhancing the ability of AOA to escape local optima. Third, an adaptive cosine acceleration function is proposed to better balance the exploitation and exploration capabilities of the algorithm. Fourth, an offset distribution estimation strategy is used to effectively exploit the dominant population information to guide individuals toward correct evolution. Fifth, a stochastic boundary control strategy is proposed to increase the search range of each individual. The details of these improvement strategies are described as follows.

3.1. Double-Opposition Learning Strategy (DOL)

The opposition learning strategy is a technique that has emerged in the field of optimization in recent years. It enhances population diversity and prevents the algorithm from falling into local optima by generating the opposite position of each individual, evaluating both the original and the opposite individual, and retaining the better one in the next generation. The specific formula is as follows:
$$X_i^{o} = lb + ub - X_i^t$$
where $X_i^{o}$ is the opposite solution corresponding to $X_i^t$. To further enhance population diversity and overcome the deficiency that the opposite solution generated by basic opposition learning is not necessarily better than the current solution, this paper combines tent chaotic mapping with the opposition learning strategy and proposes a tent opposition learning mechanism; the tent map is random and ergodic, which helps generate new solutions and enhances population diversity [30]. The specific mathematical model is as follows:
$$X_i^{To} = lb + ub - \lambda_i \cdot X_i^t$$
$$\lambda_{i+1} = \begin{cases} 2\lambda_i, & 0 \le \lambda_i \le 0.5 \\ 2(1 - \lambda_i), & 0.5 < \lambda_i \le 1 \end{cases}$$
where $X_i^{To}$ denotes the solution generated by tent opposition learning for the $i$-th individual in the population, and $\lambda_i$ is the corresponding tent chaotic mapping value. In addition to the tent opposition learning strategy, a lens opposition learning strategy is also proposed. This strategy builds a mathematical model from the property that an object outside the focal length of a convex lens forms an inverted real image on the other side of the lens, as follows:
$$X_i^{*} = \frac{lb + ub}{2} + \frac{lb + ub}{2k} - \frac{X_i^t}{k}$$
where $k$ is the scaling factor, whose value affects the quality of the generated opposite solution. The smaller the value of $k$, the larger the range of the generated opposite solution; the larger the value of $k$, the smaller that range. Considering that the algorithm performs a more global search in the early stage and more precise exploitation in the later stage, a dynamically adjustable scaling factor is proposed as follows:
$$k = \left(1 + \left(\frac{t}{t_{\max}}\right)^{1/3}\right)^{c}$$
where c is a constant with a value of 3.
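The two opposition operators above can be sketched in NumPy as follows; the function names are ours, and the tent map state is carried explicitly so the sketch stays self-contained:

```python
import numpy as np

def tent_opposition(X, lam, lb, ub):
    """Tent opposition learning: x_op = lb + ub - lambda_i * x_i.

    lam is the vector of per-individual tent-map values; the updated
    values for the next call are returned alongside the opposites."""
    X_op = lb + ub - lam[:, None] * X
    lam_next = np.where(lam <= 0.5, 2 * lam, 2 * (1 - lam))  # tent chaos map
    return X_op, lam_next

def lens_opposition(X, t, t_max, lb, ub, c=3):
    """Lens (convex-lens) opposition with the time-varying scaling factor k.

    k grows with t, shrinking the opposite-solution range as the search
    moves from global exploration to precise exploitation."""
    k = (1 + (t / t_max) ** (1 / 3)) ** c
    return (lb + ub) / 2 + (lb + ub) / (2 * k) - X / k
```

In the full ASFAOA, the original individual and its opposite would both be evaluated and the better one kept, as described at the start of this subsection.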

3.2. Adaptive Spiral Search Strategy (ASS)

In the local exploitation phase of AOA, positions are updated randomly around the optimal individual, which benefits fast convergence; however, when the optimal solution falls into a local optimum, the other individuals easily follow it there. To protect AOA from falling into local optima, a spiral foraging strategy inspired by the tuna swarm algorithm [31] is introduced into the addition and subtraction operations of AOA. At each update, AOA randomly selects either the original strategy or the spiral foraging strategy to update the individual positions. The spiral foraging strategy is specified as follows:
$$X_i^{t+1} = \begin{cases} \alpha_1 \cdot (X_{br}^t + \beta \cdot |X_{br}^t - X_i^t|) + (1 - \alpha_1) \cdot X_i^t, & i = 1 \\ \alpha_1 \cdot (X_{br}^t + \beta \cdot |X_{br}^t - X_i^t|) + (1 - \alpha_1) \cdot X_{i-1}^t, & i = 2, 3, \ldots, N_p \end{cases}$$
$$\alpha_1 = a + (1 - a) \cdot \frac{t}{t_{\max}}$$
$$\beta = e^{bl} \cdot \cos(2\pi b)$$
$$l = e^{3\cos\left(\left(t_{\max} + \frac{1}{t} - 1\right)\pi\right)}$$
where $a$ is a constant with value 0.7 and $b$ is a random number uniformly distributed in [0, 1]. $X_{br}^t$ denotes either the optimal individual or a randomly generated individual in the search space. In the global exploration phase of AOA, the spiral search strategy needs to cover a wider space, so $X_{br}^t$ is a randomly generated individual; in the later local exploitation phase, the spiral search focuses around the optimal individual, so $X_{br}^t$ takes the position of the optimal individual. At each iteration, each individual chooses between the original search strategy and the spiral search strategy according to the probability $p_r = rand$.
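A NumPy sketch of the spiral update above, assuming iterations are counted from $t = 1$ (the name `spiral_search` and the vectorized layout are ours):

```python
import numpy as np

def spiral_search(X, X_ref, t, t_max, a=0.7):
    """Tuna-style spiral position update around a reference point X_ref.

    X_ref is a random point in the early (exploration) phase and the
    best individual in the later (exploitation) phase."""
    Np, _ = X.shape
    alpha1 = a + (1 - a) * t / t_max                 # weight grows with t
    b = np.random.rand()                             # shared random number
    l = np.exp(3 * np.cos((t_max + 1 / max(t, 1) - 1) * np.pi))
    beta = np.exp(b * l) * np.cos(2 * np.pi * b)     # spiral coefficient
    spiral = X_ref + beta * np.abs(X_ref - X)        # spiral term toward X_ref
    X_new = np.empty_like(X)
    X_new[0] = alpha1 * spiral[0] + (1 - alpha1) * X[0]
    for i in range(1, Np):                           # chain to previous agent
        X_new[i] = alpha1 * spiral[i] + (1 - alpha1) * X[i - 1]
    return X_new
```

The chaining to the previous individual ($X_{i-1}^t$ in the equation) is what distinguishes this from a purely best-centered update and injects inter-individual information exchange.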

3.3. Adaptive Cosine Acceleration Function (ACA)

When AOA uses MOA to switch between global exploration and local exploitation, the probability of local exploitation in the later stage of the search is lower than that of global exploration, which weakens the algorithm's late-stage exploitation ability and is not conducive to optimization. Moreover, the evolutionary search of AOA is nonlinear, and the linear growth of MOA cannot accurately track the actual iterative process. Introducing a cosine control factor makes the change of MOA nonlinear, matching the actual iterative process of the algorithm more closely:
$$MOA = \min + (\max - \min) \times \cos^2\left(\frac{\pi t}{2 t_{\max}}\right)$$
Figure 1 compares the original MOA with the improved MOA. The improved MOA maintains a large value at the beginning of the iteration, which allows the algorithm to perform an adequate global search; in the later part of the iteration, MOA rapidly decreases to a small value, which increases the probability of local exploitation and improves the convergence speed of the algorithm.
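The two schedules compared in Figure 1 can be expressed directly (function names are ours):

```python
import numpy as np

def moa_linear(t, t_max, lo=0.2, hi=0.9):
    """Original linearly increasing MOA of the basic AOA."""
    return lo + t * (hi - lo) / t_max

def moa_cosine(t, t_max, lo=0.2, hi=0.9):
    """Proposed adaptive cosine MOA: large early in the run,
    rapidly decreasing toward lo in the later iterations."""
    return lo + (hi - lo) * np.cos(np.pi * t / (2 * t_max)) ** 2
```

At $t = 0$ the cosine schedule starts at 0.9 and it decays monotonically to 0.2 at $t = t_{\max}$, whereas the linear schedule does the opposite.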

3.4. Offset Distribution Estimation Strategy (ODE)

Analysis of the basic AOA shows that each individual mainly follows the optimal individual for position updating. When the optimal individual falls into a local optimum, the rest of the individuals fall into the same local optimum, and there is a lack of information exchange between individuals. To enhance population diversity, strengthen the information exchange among individuals, and improve the algorithm's ability to find the optimum, this paper introduces the offset distribution estimation strategy. Distribution estimation represents the relationships between individuals through a probabilistic model [32,33]. Assuming that the problem obeys a multivariate Gaussian distribution, the distribution model is computed from half of the individuals of the current population, and new offspring are sampled from it to drive the optimization process. The basic computational process can be divided into the following four steps:
(1) Set the algorithm parameters and initialize the population;
(2) Evaluate the solutions according to the objective function values;
(3) Select the partially optimal solutions to compute the Gaussian probability distribution model;
(4) Sample the new population according to the updated probability model, and repeat from step (2) until the termination condition is satisfied.
The strategy uses the current dominant population to calculate the probability distribution model, generates new offspring populations by sampling from it, and finally obtains the optimal solution through continuous iteration. In this paper, the better-performing half of the population is selected for the model, and the mathematical model of this strategy is described as follows:
$$newX_i^{t+1} = m + randn \cdot (m - X_i^t)$$
$$m = \frac{X_b^t + X_{mean}^t + X_i^t}{3}$$
$$Cov = \frac{2}{N_p} \sum_{i=1}^{N_p/2} (X_i^t - X_{mean}^t)(X_i^t - X_{mean}^t)^T$$
$$X_{mean}^t = \sum_{i=1}^{N_p/2} \omega_i \times X_i^t$$
$$\omega_i = \frac{\ln(N_p/2 + 0.5) - \ln(i)}{\sum_{i=1}^{N_p/2} \left(\ln(N_p/2 + 0.5) - \ln(i)\right)}$$
where $X_{mean}^t$ is the weighted mean position of the dominant population, $\omega_i$ is the weighting coefficient of the dominant population sorted by fitness value (best first), and $Cov$ is the weighted covariance matrix of the dominant population. By combining the optimal individual's information, the weighted information of the dominant population, and the individual's own information, the evolutionary direction of the population is corrected to improve the search performance of the algorithm.
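A simplified NumPy sketch of the sampling step above, assuming minimization (smaller fitness is better). The name `ode_update` is ours, and the covariance matrix of the full model is omitted here because the sampling equation shown uses only the blended mean $m$:

```python
import numpy as np

def ode_update(X, fitness, X_best):
    """Offset distribution estimation sketch: sample new positions around a
    mean m that blends the best individual, the weighted mean of the
    dominant half of the population, and each individual itself."""
    Np, _ = X.shape
    half = max(Np // 2, 1)
    order = np.argsort(fitness)                  # minimization: best first
    elite = X[order[:half]]                      # dominant (better) half
    ranks = np.arange(1, half + 1)
    w = np.log(half + 0.5) - np.log(ranks)       # log-rank weights
    w /= w.sum()
    X_mean = w @ elite                           # weighted mean position
    m = (X_best + X_mean + X) / 3                # per-individual offset mean
    return m + np.random.randn(Np, 1) * (m - X)  # newX = m + randn * (m - X)
```

Because higher-ranked elites carry larger weights, $X_{mean}$ is pulled toward the best-performing region without collapsing onto the single best individual.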

3.5. Boundary Control Strategy

When an agent's position exceeds the search space, the usual remedy is to reset the individual at the boundary, but this tends to accumulate multiple agents at the boundary, which is not conducive to exploration of the whole search space. To increase the search range of each agent, this paper proposes a randomized boundary control strategy: when a dimension of an agent exceeds the search boundary, that dimension is regenerated randomly within the whole search space. The specific mathematical model is as follows:
$$X_{i,j}^t = \begin{cases} lb + rand \cdot (ub - lb), & \text{if } X_{i,j}^t > ub \ \text{or} \ X_{i,j}^t < lb \\ X_{i,j}^t, & \text{if } lb \le X_{i,j}^t \le ub \end{cases}$$
where j denotes the j-th dimension of each individual.
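The boundary rule above is a one-liner in NumPy (the function name is ours):

```python
import numpy as np

def random_boundary_control(X, lb, ub):
    """Randomized boundary control: any dimension that leaves [lb, ub]
    is re-drawn uniformly over the whole search range, instead of being
    clamped to the violated boundary."""
    X = X.copy()
    out = (X > ub) | (X < lb)                        # mask of violated entries
    X[out] = lb + np.random.rand(int(out.sum())) * (ub - lb)
    return X
```

Unlike clamping, this keeps escaped dimensions spread over the full range, so agents do not pile up on the boundary.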

3.6. Pseudo-Code of ASFAOA

The pseudo code of the ASFAOA is shown in Algorithm 1.
Algorithm 1 Pseudo-code of the ASFAOA algorithm.
1  Initialize the Arithmetic Optimization Algorithm parameters α, µ
2  Initialize the parameter a
3  Initialize the solutions' positions randomly (solutions: i = 1, ..., Np)
4  while (t < tmax) do
5    Calculate the fitness function for the given solutions
6    Find the best solution (determined best so far)
7    Update the MOA value using Equation (15)
8    Update the MOP value using Equation (4)
9    Update the k value using Equation (10)
10   Update the Cov value using Equations (18)–(20)
11   for (i = 1 to Solutions) do
12     Update positions by Equations (7) and (9)
13     Generate random values in [0, 1] (r1, r2, and r3)
14     if r1 > MOA then
15       if r2 > 0.5 then
16         Update positions by Equation (3)
17       else
18         Update positions by Equation (16)
19       end if
20     else
21       if r3 > 0.5 then
22         Update positions by Equation (5)
23       else
24         Update positions by Equation (11)
25       end if
26     end if
27     Update positions by Equation (21)
28   end for
29   t = t + 1
30 end while
31 Return the best solution.
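To make the control flow of Algorithm 1 concrete, the following simplified, runnable sketch reproduces its loop structure using only the basic AOA operators, the proposed cosine MOA, and the random boundary control; the DOL, ASS, and ODE strategies are indicated by comments where they would slot in. All names are illustrative, and the greedy replacement is our simplification:

```python
import numpy as np

def asfaoa_sketch(obj, lb, ub, Np=30, D=10, t_max=200,
                  mu=0.499, lo=0.2, hi=0.9, eps=1e-12):
    """Simplified main loop mirroring Algorithm 1 (strategies elided)."""
    X = lb + np.random.rand(Np, D) * (ub - lb)       # line 3: random init
    fit = np.array([obj(x) for x in X])
    best = X[fit.argmin()].copy()
    best_f = fit.min()
    for t in range(1, t_max + 1):
        moa = lo + (hi - lo) * np.cos(np.pi * t / (2 * t_max)) ** 2  # cosine MOA
        mop = 1 - (t / t_max) ** 0.2                                  # MOP
        scale = (ub - lb) * mu + lb
        # lines 9-12: the DOL opposition and ODE model updates would go here
        for i in range(Np):
            r1, r2, r3 = np.random.rand(3)
            if r1 > moa:                             # exploration branch
                Xi = best / (mop + eps) * scale if r2 > 0.5 else best * mop * scale
            else:                                    # exploitation branch
                # the ASS spiral update would replace one sub-branch here
                Xi = best - mop * scale if r3 > 0.5 else best + mop * scale
            out = (Xi < lb) | (Xi > ub)              # random boundary control
            Xi = np.where(out, lb + np.random.rand(D) * (ub - lb), Xi)
            fi = obj(Xi)
            if fi < fit[i]:                          # greedy replacement (our choice)
                X[i], fit[i] = Xi, fi
            if fi < best_f:
                best, best_f = Xi.copy(), fi
    return best, best_f
```

Even this stripped-down loop exhibits the structure analyzed in Section 3.7: one fitness evaluation and one D-dimensional update per individual per iteration.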

3.7. The Computational Complexity of ASFAOA

The time complexity of the basic AOA, as reported in the literature [25], is as follows:
$$O(AOA) = O\left(T \times \left(O(\text{Exploration Phase}) + O(\text{Exploitation Phase})\right)\right)$$
$$O(AOA) = O\left(T \times (N_p \cdot D + N_p \cdot D)\right) = O(T \cdot N_p \cdot D)$$
In this paper, five improvement strategies are proposed; ASS and ACA do not change the time complexity. The time complexity of the covariance matrix computation of ODE is $O(T \cdot (N_p/2) \cdot D^2)$, and that of DOL is $O(T \cdot N_p \cdot D)$. Therefore, the time complexity of ASFAOA is as follows:
$$O(ASFAOA) = O\left(T \times \left(O(\text{Exploration Phase} + ODE) + O(\text{Exploitation Phase} + ASS)\right) + O(DOL)\right)$$
$$O(ASFAOA) = O\left(T \cdot \left(\tfrac{N_p}{2} D + \tfrac{N_p}{2} D\right) + T \cdot \left(\tfrac{N_p}{2} D + \tfrac{N_p}{2} D^2\right) + T \cdot N_p D\right) = O\left(T \cdot \tfrac{N_p}{2} D^2 + T \cdot N_p D\right)$$

4. Experimental Results and Discussion

To demonstrate the superiority of the proposed ASFAOA, two different experiments are conducted in this section: the CEC2017 benchmark function test and wireless sensor coverage optimization. In these experiments, the proposed approach is compared with well-known current techniques. All experimental results and discussions validate the competitive performance of the proposed ASFAOA in solving various optimization problems.

4.1. CEC2017 Benchmark Functions Test

In this section, the performance of the proposed ASFAOA is evaluated on the CEC2017 test functions. Many meta-heuristic algorithms design their update formulas around classical test functions and consequently achieve good optimization performance on them. Compared with those functions, CEC2017 has a more complex structure and is more difficult to solve, which allows better validation of the algorithm's performance. The details of the test functions are presented first. Then, six metaheuristics are used to illustrate the outstanding performance of the proposed improved algorithm. The convergence accuracy of the algorithms is analyzed in terms of the mean and standard deviation: the mean is the average solution obtained over the runs, and the standard deviation reflects the dispersion of the optimal solutions. In addition, statistical methods such as the Wilcoxon signed-rank test and the Friedman ranking test are used to confirm significant differences between ASFAOA and the other algorithms. Convergence curves and box plots are used for a visual description of the optimization effect.

4.1.1. Experimental Settings

The IEEE CEC2017 test suite used here includes 28 benchmark functions whose feasible range is [−100, 100]. The specific content of CEC2017 is shown in Table 1, where $F_i^*$ denotes the theoretical optimum of each function.
In this test, six typical swarm intelligence algorithms are selected for the comparison: the whale optimization algorithm (WOA) [34], sine cosine algorithm (SCA) [35], Harris hawks optimization (HHO) [36], sparrow search algorithm (SSA) [37], tunicate swarm algorithm (TSA) [38], and butterfly optimization algorithm (BOA) [39]. The parameters of these algorithms are displayed in Table 2.
To ensure a fair comparison, the number of iterations and the population size were set to 500 and 600, respectively. Each algorithm was run 51 times independently to obtain reliable statistics.

4.1.2. 30D Functions Test Results and Analysis

In this study, ASFAOA was compared with six swarm intelligence algorithms on the CEC2017 functions with D = 30. The specific results of the experiment are shown in Table 3. These data were calculated from the optimal values obtained by solving each function 51 times. On each function, all algorithms are ranked: the smaller the obtained value, the better the rank. Table 3 mainly includes the mean value, the standard deviation, and the ranking based on these two indicators; the last two rows indicate the average and final ranking of all algorithms. The proposed ASFAOA obtained the best average ranking value of 1.04, placing it first. HHO obtained the next-best ranking value of 2.71, while SSA had the worst ranking of 6.86. It is worth noting that AOA scored 5.61, much worse than ASFAOA. Specifically, ASFAOA outperformed AOA on all functions and outperformed the rest of the comparison algorithms on 27 of the 28 functions.
Table 4 lists the p-values calculated by the Wilcoxon signed-rank test for each algorithm on each function. A value less than 0.05 indicates a significant difference between ASFAOA and the competitor; otherwise, there is no significant difference. ASFAOA is significantly different from the other algorithms on most functions. The last row of Table 4 summarizes the comparison of ASFAOA with the other methods: ASFAOA performs worse than BOA on F19 and better than the other methods on 27 functions. It is noteworthy that ASFAOA outperforms AOA on all functions. Furthermore, the average rankings obtained by the various methods according to the Friedman test are shown in Figure 2. ASFAOA obtained the best average ranking value of 1.03, HHO ranked second with a value of 2.71, WOA and SCA followed HHO, and AOA ranked sixth. The results of the Friedman test further prove that ASFAOA outperforms the other algorithms by a significant margin.
The convergence curves of all algorithms on different functions are shown in Figure 3. The enhanced ASFAOA performs best compared with the other methods. Specifically, ASFAOA has the fastest convergence speed and better convergence accuracy on F1–F5, F7, F9–F12, F15–F18, and F20–F28. On the remaining functions, ASFAOA achieves higher convergence accuracy in the later phase, although its convergence speed is slower in the early phase. Overall, the convergence performance of ASFAOA is better than that of the comparison algorithms, especially AOA. This can be attributed to the following factors: on the one hand, the offset distribution estimation strategy guides the search agents to evolve toward more promising regions at a faster rate; on the other hand, the double-opposition learning strategy and the spiral search strategy help further improve both the accuracy of the solutions and the population diversity. In addition, the modification of MOA better balances the algorithm's exploitation and exploration capabilities.
To analyze the distribution characteristics of the solutions found by ASFAOA, box plots were drawn from the results of the 51 independent runs of each algorithm, as shown in Figure 4. For each algorithm, the center mark of each box indicates the median of the 51 results, the bottom and top edges of the box indicate the first and third quartiles, and the symbol "+" marks outliers that fall outside the box. As Figure 4 shows, ASFAOA has no outliers on nine of the test functions (F1, F2, F8, F11, F14, F16, F24–F26), which indicates that the solutions found by ASFAOA are highly concentrated. For the other test functions with outliers (F2, F4–F5, F7–F8, F12–F13, F15–F18, F20–F21, F24, F26), ASFAOA has a smaller median, which indicates that the quality of its solutions is relatively better. Therefore, the improved algorithm proposed in this paper is robust.

4.1.3. Analysis of ASFAOA Improvement Strategies

In this paper, the proposed improvement method for basic AOA consists of four parts: offset distribution estimation strategy (ODE), adaptive cosine acceleration function (ACA), adaptive spiral search strategy (ASS), and double-opposition learning strategy (DOL). To evaluate the effectiveness of different modification strategies, we present four variants of ASFAOA using different modification strategies as shown in Table 5. ASFAOA-1 utilizes DOL strategy to improve the algorithmic performance. ASFAOA-2 serves for evaluating the effectiveness of the ASS strategy. ASFAOA-3 utilizes the ACA strategy to balance algorithm exploitation and exploration capabilities. ASFAOA-4 incorporates the ODE strategy. The performance of the six algorithms was compared using the CEC2017 test suite. Each function was run independently 51 times. Table 6 lists the average error results for each algorithm, and the last row gives the Friedman test results for the six algorithms.
Notably, ASFAOA with the complete set of improvement strategies performed best, with a Friedman ranking of 1.04. The four derived algorithms, each equipped with one strategy, also ranked better than the basic AOA, with rankings of 4.04 (ASFAOA-1), 3.29 (ASFAOA-2), 3.00 (ASFAOA-3), and 4.07 (ASFAOA-4). Hence, the impact of the four modifications on performance, in descending order, is ODE > ASS > DOL > ACA. ASFAOA-3 performs best among the four derived algorithms, showing that the ODE strategy effectively improves performance: it generates offspring using the overall distribution information of the dominant population, which avoids the defect of the population merely following the optimal individual and falling into a local optimum. ASFAOA-2 performs close behind and ranks second among the derived variants. This is due to the ASS strategy, which randomly selects an individual as the reference point in the early stage, effectively broadening the search range and enhancing the ability to solve multimodal functions; in the later stage, the optimal individual is selected as the reference point and the search range is adaptively narrowed to ensure convergence efficiency. ASFAOA-1 strengthens population diversity by generating opposite individuals, and the experimental results illustrate that this strategy is effective. ASFAOA-4 achieves improved performance by simply modifying the control parameter of the original algorithm, suggesting that the ACA method strikes a better balance between exploitation and exploration behaviors.
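The Friedman average rankings reported above are obtained by ranking all algorithms on each test function and averaging the per-function ranks. A minimal sketch of that bookkeeping (with made-up error values for two hypothetical functions, not the paper's data; ties are ignored for simplicity, whereas the full test assigns mid-ranks):

```python
def friedman_average_ranks(errors):
    """errors: one row per function, one error value per algorithm.

    Returns each algorithm's average rank (rank 1 = smallest error).
    """
    n_algs = len(errors[0])
    totals = [0.0] * n_algs
    for row in errors:
        # Rank the algorithms on this function by increasing error.
        order = sorted(range(n_algs), key=lambda i: row[i])
        for rank, i in enumerate(order, start=1):
            totals[i] += rank
    return [t / len(errors) for t in totals]

# Two hypothetical functions, three algorithms.
print(friedman_average_ranks([[0.1, 5.0, 2.0],
                              [0.3, 4.0, 9.0]]))  # [1.0, 2.5, 2.5]
```

An average rank near 1, as ASFAOA obtains (1.04), means the algorithm was ranked first on almost every function.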

4.2. Wireless Sensor Coverage Optimization Test

In this section, the performance of the proposed ASFAOA is evaluated on the wireless sensor coverage optimization problem. The details of the wireless sensor coverage problem are first presented. Then, the superior performance of the proposed algorithm is illustrated against the better-performing comparison algorithms from Section 4.1. The convergence accuracy of each algorithm is analyzed in terms of the optimal value, the mean value, and the standard deviation.

4.2.1. Mathematical Models

In the wireless sensor network, the set of homogeneous wireless sensor nodes is $S = \{s_1, s_2, s_3, \ldots, s_i, \ldots, s_N\}$; the sensing radius is $R_s$ and the monitoring area is a rectangular region of size $L \times W$. For ease of calculation, the rectangular area is discretized into $L \times W$ grids of equal area, with a monitoring point located at the geometric center of each grid. If the distance between a monitoring point and any node is less than or equal to the sensing radius $R_s$, the monitoring point is considered covered by the wireless sensor network. The set of monitoring points is $M = \{m_1, m_2, m_3, \ldots, m_j, \ldots, m_{L \times W}\}$. $(x_i, y_i)$ and $(x_j, y_j)$ denote the two-dimensional coordinates of $s_i$ and $m_j$, respectively. The Euclidean distance between the two nodes is:
$$d(s_i, m_j) = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}$$
The probability of monitoring point $m_j$ being sensed by node $s_i$ is defined as:
$$p_{cov}(s_i, m_j) = \begin{cases} 1, & \text{if } d(s_i, m_j) \le R_s \\ 0, & \text{otherwise} \end{cases}$$
The wireless sensor coverage problem can be solved using either a multi-objective or a single-objective optimization algorithm, depending on the factors considered [40,41]. In this paper, we focus on verifying the superiority of the proposed algorithm and hence solve the problem with coverage as the sole objective. We define the area coverage $C_r$ of all sensor nodes in the target monitoring environment as the ratio of the area covered by the set of sensor nodes to the area of the monitoring region:
$$C_r = \frac{\sum_{j=1}^{L \times W} p_{cov}(S, m_j)}{L \times W}$$

where $p_{cov}(S, m_j) = 1$ if monitoring point $m_j$ is sensed by at least one node in $S$, and 0 otherwise.
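The coverage model above can be sketched numerically. The following snippet (a minimal illustration, not the authors' implementation; the node coordinates are arbitrary assumptions) discretizes the monitoring area into unit grids and evaluates $C_r$ under the binary sensing model:

```python
import math

def coverage_ratio(sensors, L, W, Rs):
    """Fraction of grid-center monitoring points covered by >= 1 node (binary disc model)."""
    covered = 0
    for j in range(L):
        for k in range(W):
            # Monitoring point at the geometric center of grid (j, k).
            mx, my = j + 0.5, k + 0.5
            # p_cov(S, m) = 1 if any node s_i lies within the sensing radius Rs.
            if any(math.hypot(mx - sx, my - sy) <= Rs for sx, sy in sensors):
                covered += 1
    return covered / (L * W)

# One node in the middle of a 2 m x 2 m area with Rs = 2 covers every grid center.
print(coverage_ratio([(1.0, 1.0)], L=2, W=2, Rs=2.0))  # 1.0
```

The optimizer's decision variables are the 2N node coordinates, and this ratio is the fitness to be maximized.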

4.2.2. Simulation and Analysis

Two sets of experiments were designed to verify the performance of ASFAOA. To make the experimental data more convincing, the simulation experiments for each algorithm were conducted 30 times independently, and the optimal value, mean, and standard deviation were taken as comparison statistics. Within each experiment set, all parameter settings are identical for the four methods. The algorithms involved in the simulations are ASFAOA, AOA, HHO, and WOA.

Case 1

In Case 1, the monitoring area is a 10 m × 10 m two-dimensional square plane. The number of sensor nodes is 25, the sensing radius is 1 m, and the communication radius is 2 m. Table 7 shows the statistics of the optimization results of each algorithm, Figure 5 shows the iterative convergence curves of coverage optimization, and Figure 6 shows the WSN node deployment after optimization by each algorithm.
As can be seen from Table 7, ASFAOA achieves a 15.7%, 8.26%, and 8.26% improvement in coverage over the optimized results of AOA, HHO, and WOA, respectively, and also exhibits better stability. From Figure 5, it can be observed that ASFAOA does not fall into a local optimum and ultimately achieves a better coverage solution, although its convergence is slow in the early phase. From the optimized node deployments in Figure 6, the deployment produced by AOA leaves larger coverage blind areas, whereas ASFAOA distributes the sensor nodes more uniformly, which verifies the effectiveness of the improved strategy.

Case 2

In Case 2, the experimental setting is a two-dimensional plane with a monitoring area of 50 m × 50 m. The number of sensor nodes is 35, the sensing radius is 2.5 m, and the communication radius is 5 m. Table 8 records the optimization results for Case 2. The coverage convergence curve of each algorithm is shown in Figure 7, and the node deployment scheme of each algorithm is shown in Figure 8.
According to the analysis of Table 8, ASFAOA maintains its high performance in Case 2 and achieves a best coverage rate of 83.93%. Compared with the optimized results of AOA, HHO, and WOA, the coverage rate is improved by 14.53%, 5.11%, and 5.11%, respectively. In addition, the highest mean value of ASFAOA indicates its better overall performance. As can be seen from Figure 7, ASFAOA effectively avoids falling into a local optimum and achieves a better coverage solution. Figure 8 shows that the optimal coverage solution obtained by ASFAOA has a more uniform distribution of nodes. The solution finally given by ASFAOA improves the coverage rate from 65% to 83.93%, which indicates that the algorithm proposed in this paper has excellent search capability.

5. Conclusions

In this paper, we have proposed an improved variant of AOA, named ASFAOA, for global optimization problems. The convergence accuracy and convergence speed of ASFAOA are supported by the dual-opposition learning strategy, the adaptive spiral search strategy, the offset distribution estimation strategy, and the adaptive cosine acceleration function. To validate and analyze the superiority of ASFAOA, a large number of experiments were conducted, including mean analysis, convergence analysis, stability analysis, and statistical tests on the CEC 2017 test suite, as well as two sets of wireless sensor coverage problems with different dimensions. The results and discussion validate the rationality and usability of the improved strategies, and the application of ASFAOA to wireless sensor coverage problems verifies its capability in solving practical optimization problems.
In the future, we will focus our attention on two directions: one is to further investigate the internal mechanisms of ASFAOA, aiming to reduce its computational complexity and improve its performance; the other is to develop a multi-objective version of ASFAOA for more practical problems such as robot path planning, optimal control of electric vehicle composite braking, multilevel thresholding image segmentation, and UAV mission planning.

Author Contributions

Conceptualization, S.Y.; methodology, S.Y.; software, S.Y. and L.Z.; validation, X.Y., J.S. and W.D.; formal analysis, S.Y.; investigation, X.Y.; resources, L.Z.; data curation, J.S.; writing—original draft preparation, S.Y.; writing—review and editing, L.Z.; visualization, W.D.; supervision, S.Y.; project administration, S.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tang, A.D.; Tang, S.Q.; Han, T.; Zhou, H.; Xie, L. A Modified Slime Mould Algorithm for Global Optimization. Comput. Intell. Neurosci. 2021, 2021, 2298215. [Google Scholar] [CrossRef] [PubMed]
  2. Fang, H.; Fu, X.; Zeng, Z.; Zhong, K.; Liu, S. An Improved Arithmetic Optimization Algorithm and Its Application to Determine the Parameters of Support Vector Machine. Mathematics 2022, 10, 2875. [Google Scholar] [CrossRef]
  3. Ghith, E.S.; Tolba, F.A. Tuning PID controllers based on Hybrid Arithmetic optimization algorithm and Artificial Gorilla troop optimization for Micro-Robotics systems. IEEE Access 2023, 11, 27138–27154. [Google Scholar] [CrossRef]
  4. Issa, M. Enhanced Arithmetic Optimization Algorithm for Parameter Estimation of PID Controller. Arab. J. Sci. Eng. 2023, 48, 2191–2205. [Google Scholar] [CrossRef] [PubMed]
  5. Sharma, A.; Khan, R.A.; Sharma, A.; Kashyap, D.; Rajput, S. A novel opposition-based arithmetic optimization algorithm for parameter extraction of PEM fuel cell. Electronics 2021, 10, 2834. [Google Scholar] [CrossRef]
  6. Xie, L.; Wang, S.; Zhu, D.; Hu, G.; Zhou, C. DNA Sequence Optimization Design of Arithmetic Optimization Algorithm Based on Billiard Hitting Strategy. Interdiscip. Sci. Comput. Life Sci. 2023, 15, 231–248. [Google Scholar] [CrossRef]
  7. Abualigah, L.; Elaziz, M.A.; Yousri, D.; Al-qaness, M.A.A.; Ewees, A.A.; Zitar, R.A. Augmented arithmetic optimization algorithm using opposite-based learning and lévy flight distribution for global optimization and data clustering. J. Intell. Manuf. 2022, 1–39. [Google Scholar] [CrossRef]
  8. Abualigah, L.; Almotairi, K.H.; Al-qaness, M.A.A.; Ewees, A.A.; Yousri, D.; Elaziz, M.A.; Nadimi-Shahraki, M.H. Efficient text document clustering approach using multi-search Arithmetic Optimization Algorithm. Knowl. Based Syst. 2022, 248, 108833. [Google Scholar] [CrossRef]
  9. Izci, D. A novel modified arithmetic optimization algorithm for power system stabilizer design. Sigma J. Eng. Nat. Sci. Sigma Mühendislik Fen Bilim. Derg. 2022, 40, 529–541. [Google Scholar] [CrossRef]
  10. Ibrahim, R.A.; Abualigah, L.; Ewees, A.A.; Al-Qaness, M.A.A.; Yousri, D.; Alshathri, S.; Elaziz, M.A. An electric fish-based arithmetic optimization algorithm for feature selection. Entropy 2021, 23, 1189. [Google Scholar] [CrossRef]
  11. Montoya, O.D.; Giral-Ramírez, D.A.; Hernández, J.C. Efficient Integration of PV Sources in Distribution Networks to Reduce Annual Investment and Operating Costs Using the Modified Arithmetic Optimization Algorithm. Electronics 2022, 11, 1680. [Google Scholar] [CrossRef]
  12. Mohammed Ridha, H.; Hizam, H.; Mirjalili, S.; Lutfi Othman, M.; Effendy Ya’acob, M.; Ahmadipour, M. Novel parameter extraction for Single, Double, and three diodes photovoltaic models based on robust adaptive arithmetic optimization algorithm and adaptive damping method of Berndt-Hall-Hall-Hausman. Sol. Energy 2022, 243, 35–61. [Google Scholar] [CrossRef]
  13. Abbassi, A.; Ben Mehrez, R.; Bensalem, Y.; Abbassi, R.; Kchaou, M.; Jemli, M.; Abualigah, L.; Altalhi, M. Improved Arithmetic Optimization Algorithm for Parameters Extraction of Photovoltaic Solar Cell Single-Diode Model. Arab. J. Sci. Eng. 2022, 47, 10435–10451. [Google Scholar] [CrossRef]
  14. Wang, R.B.; Wang, W.F.; Xu, L.; Pan, J.S.; Chu, S.C. An Adaptive Parallel Arithmetic Optimization Algorithm for Robot Path Planning. J. Adv. Transp. 2021, 13, 889–945. [Google Scholar] [CrossRef]
  15. Bhat, S.J.; Santhosh, K.V. A localization and deployment model for wireless sensor networks using arithmetic optimization algorithm. Peer-to-Peer Netw. Appl. 2022, 15, 1473–1485. [Google Scholar] [CrossRef]
  16. Abd Elaziz, M.; Abualigah, L.; Ibrahim, R.A.; Attiya, I. IoT Workflow Scheduling Using Intelligent Arithmetic Optimization Algorithm in Fog Computing. Comput. Intell. Neurosci. 2021, 2021, 9114113. [Google Scholar] [CrossRef] [PubMed]
  17. Abualigah, L.; Diabat, A.; Sumari, P.; Gandomi, A.H. A novel evolutionary arithmetic optimization algorithm for multilevel thresholding segmentation of covid-19 ct images. Processes 2021, 9, 1155. [Google Scholar] [CrossRef]
  18. Dhawale, P.G.; Kamboj, V.K.; Bath, S.K. A levy flight based strategy to improve the exploitation capability of arithmetic optimization algorithm for engineering global optimization problems. Trans. Emerg. Telecommun. Technol. 2023, 34, e4739. [Google Scholar] [CrossRef]
  19. Zhang, Y.J.; Yan, Y.X.; Zhao, J.; Gao, Z.M. AOAAO: The Hybrid Algorithm of Arithmetic Optimization Algorithm With Aquila Optimizer. IEEE Access 2022, 10, 10907–10933. [Google Scholar] [CrossRef]
  20. Izci, D.; Ekinci, S.; Kayri, M.; Eker, E. A novel improved arithmetic optimization algorithm for optimal design of PID controlled and Bode’s ideal transfer function based automobile cruise control system. Evol. Syst. 2022, 13, 453–468. [Google Scholar] [CrossRef]
  21. Chen, M.; Zhou, Y.; Luo, Q. An Improved Arithmetic Optimization Algorithm for Numerical Optimization Problems. Mathematics 2022, 10, 2152. [Google Scholar] [CrossRef]
  22. Izci, D.; Ekinci, S.; Eker, E.; Abualigah, L. Opposition-Based Arithmetic Optimization Algorithm with Varying Acceleration Coefficient for Function Optimization and Control of FES System. In Proceedings of the International Joint Conference on Advances in Computational Intelligence, Online, 23–24 October 2022. [Google Scholar]
  23. Zhang, J.; Zhang, G.; Huang, Y.; Kong, M. A Novel Enhanced Arithmetic Optimization Algorithm for Global Optimization. IEEE Access 2022, 10, 75040–75062. [Google Scholar] [CrossRef]
  24. Abualigah, L.; Ewees, A.A.; Al-qaness, M.A.A.; Elaziz, M.A.; Yousri, D.; Ibrahim, R.A.; Altalhi, M. Boosting arithmetic optimization algorithm by sine cosine algorithm and levy flight distribution for solving engineering optimization problems. Neural Comput. Appl. 2022, 34, 8823–8852. [Google Scholar] [CrossRef]
  25. Çelik, E. IEGQO-AOA: Information-Exchanged Gaussian Arithmetic Optimization Algorithm with Quasi-opposition learning. Knowl. Based Syst. 2023, 260, 110169. [Google Scholar] [CrossRef]
  26. Özmen, H.; Ekinci, S.; Izci, D. Boosted arithmetic optimization algorithm with elite opposition-based pattern search mechanism and its promise to design microstrip patch antenna for WLAN and WiMAX. Int. J. Model. Simul. 2023, 1–16. [Google Scholar] [CrossRef]
  27. Zheng, R.; Jia, H.; Abualigah, L.; Liu, Q.; Wang, S. An improved arithmetic optimization algorithm with forced switching mechanism for global optimization problems. Math. Biosci. Eng. 2022, 19, 473–512. [Google Scholar] [CrossRef]
  28. Ekinci, S.; Izci, D.; Al Nasar, M.R.; Abu Zitar, R.; Abualigah, L. Logarithmic spiral search based arithmetic optimization algorithm with selective mechanism and its application to functional electrical stimulation system control. Soft Comput. 2022, 26, 12257–12269. [Google Scholar] [CrossRef]
  29. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The Arithmetic Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  30. Tang, A.; Zhou, H.; Han, T.; Xie, L. A Chaos Sparrow Search Algorithm with Logarithmic Spiral and Adaptive Step for Engineering Problems. Comput. Model. Eng. Sci. 2021, 130, 331–364. [Google Scholar] [CrossRef]
  31. Xie, L.; Han, T.; Zhou, H.; Zhang, Z.-R.; Han, B.; Tang, A. Tuna Swarm Optimization: A Novel Swarm-Based Metaheuristic Algorithm for Global Optimization. Comput. Intell. Neurosci. 2021, 2021, 9210050. [Google Scholar] [CrossRef]
  32. Tang, A.; Zhou, H.; Han, T.; Xie, L. A modified manta ray foraging optimization for global optimization problems. IEEE Access 2021, 9, 128702–128721. [Google Scholar] [CrossRef]
  33. Tang, A.D.; Han, T.; Zhou, H.; Xie, L. An improved equilibrium optimizer with application in unmanned aerial vehicle path planning. Sensors 2021, 21, 1814. [Google Scholar] [CrossRef]
  34. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  35. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  36. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Futur. Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  37. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  38. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
  39. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  40. Sun, Z.; Zhang, Y.; Nie, Y.; Wei, W.; Lloret, J.; Song, H. CASMOC: A novel complex alliance strategy with multi-objective optimization of coverage in wireless sensor networks. Wirel. Networks 2017, 23, 1201–1222. [Google Scholar] [CrossRef]
  41. Wang, Z.; Xie, H.; He, D.; Chan, S. Wireless sensor network deployment optimization based on two flower pollination algorithms. IEEE Access 2019, 7, 180590–180608. [Google Scholar] [CrossRef]
Figure 1. MOA dynamic variation curve.
Figure 2. Friedman ranking of different approaches on CEC2017 30D test.
Figure 3. The mean value curves on CEC2017 functions.
Figure 4. Box plot analysis for CEC2017 functions.
Figure 5. Coverage curves for case 1.
Figure 6. Node deployment in case 1.
Figure 7. Coverage curves for case 2.
Figure 8. Node deployment in case 2.
Table 1. Descriptions of CEC 2017 test suite.
| Type | No. | Description | Fi* |
| Unimodal functions | 1 | Shifted and Rotated Bent Cigar Function | 300 |
| Multimodal functions | 2 | Shifted and Rotated Rosenbrock’s Function | 400 |
|  | 3 | Shifted and Rotated Rastrigin’s Function | 500 |
|  | 4 | Shifted and Rotated Expanded Scaffer’s F6 Function | 600 |
|  | 5 | Shifted and Rotated Lunacek Bi-Rastrigin Function | 700 |
|  | 6 | Shifted and Rotated Non-Continuous Rastrigin’s Function | 800 |
|  | 7 | Shifted and Rotated Levy Function | 900 |
|  | 8 | Shifted and Rotated Schwefel’s Function | 1000 |
| Hybrid functions | 9 | Hybrid Function 1 (n = 3) | 1100 |
|  | 10 | Hybrid Function 2 (n = 3) | 1200 |
|  | 11 | Hybrid Function 3 (n = 3) | 1300 |
|  | 12 | Hybrid Function 4 (n = 4) | 1400 |
|  | 13 | Hybrid Function 5 (n = 4) | 1500 |
|  | 14 | Hybrid Function 6 (n = 4) | 1600 |
|  | 15 | Hybrid Function 6 (n = 5) | 1700 |
|  | 16 | Hybrid Function 6 (n = 5) | 1800 |
|  | 17 | Hybrid Function 6 (n = 5) | 1900 |
|  | 18 | Hybrid Function 6 (n = 6) | 2000 |
| Composite functions | 19 | Composition Function 1 (n = 3) | 2100 |
|  | 20 | Composition Function 2 (n = 3) | 2200 |
|  | 21 | Composition Function 3 (n = 4) | 2300 |
|  | 22 | Composition Function 4 (n = 4) | 2400 |
|  | 23 | Composition Function 5 (n = 5) | 2500 |
|  | 24 | Composition Function 6 (n = 5) | 2600 |
|  | 25 | Composition Function 7 (n = 6) | 2700 |
|  | 26 | Composition Function 8 (n = 6) | 2800 |
|  | 27 | Composition Function 9 (n = 3) | 2900 |
|  | 28 | Composition Function 10 (n = 3) | 3000 |
Table 2. Parameter settings of compared algorithms.
| Methods | Parameters |
| WOA | b = 1, a = 2 (linearly decreased over iterations) |
| SCA | a = 2 (linearly decreased over iterations) |
| HHO | β = 1.5, E0 ∈ [−1, 1] |
| SSA | P = 0.2, C = 0.2 |
| TSA | x ∈ (1, 4) |
| BOA | p = 0.6, a = 0.1, c = 0.01 |
| AOA | Mop_max = 1, Mop_min = 0.2, C = 1, α = 5, μ = 0.499 |
Table 3. Results obtained with the methods for the CEC2017 test at D = 30.
| Function | Items | ASFAOA | WOA | SCA | HHO | SSA | TSA | BOA | AOA |
| F1 | Mean | 4.51 × 10^−6 | 1.02 × 10^5 | 3.61 × 10^4 | 1.68 × 10^3 | 8.40 × 10^4 | 3.83 × 10^4 | 3.82 × 10^4 | 6.91 × 10^4 |
|  | Std | 1.46 × 10^−6 | 4.67 × 10^4 | 6.47 × 10^3 | 7.95 × 10^2 | 6.59 × 10^3 | 1.19 × 10^4 | 6.97 × 10^3 | 1.15 × 10^4 |
|  | Rank | 1 | 8 | 3 | 2 | 7 | 5 | 4 | 6 |
| F2 | Mean | 1.93 × 10^1 | 1.46 × 10^2 | 1.02 × 10^3 | 1.23 × 10^2 | 1.44 × 10^3 | 1.62 × 10^3 | 9.33 × 10^3 | 7.61 × 10^3 |
|  | Std | 2.75 × 10^1 | 3.64 × 10^1 | 2.61 × 10^2 | 3.33 × 10^1 | 1.09 × 10^3 | 1.40 × 10^3 | 1.29 × 10^3 | 2.45 × 10^3 |
|  | Rank | 1 | 3 | 4 | 2 | 5 | 6 | 8 | 7 |
| F3 | Mean | 6.55 × 10^1 | 2.57 × 10^2 | 2.76 × 10^2 | 2.05 × 10^2 | 3.50 × 10^2 | 2.76 × 10^2 | 3.49 × 10^2 | 2.95 × 10^2 |
|  | Std | 1.88 × 10^1 | 4.88 × 10^1 | 2.28 × 10^1 | 3.62 × 10^1 | 4.40 × 10^1 | 4.09 × 10^1 | 2.16 × 10^1 | 3.20 × 10^1 |
|  | Rank | 1 | 3 | 4 | 2 | 8 | 5 | 7 | 6 |
| F4 | Mean | 1.96 × 10^−1 | 6.03 × 10^1 | 4.84 × 10^1 | 5.62 × 10^1 | 8.06 × 10^1 | 6.15 × 10^1 | 6.63 × 10^1 | 6.21 × 10^1 |
|  | Std | 4.13 × 10^−1 | 9.43 × 10^0 | 5.55 × 10^0 | 5.92 × 10^0 | 8.84 × 10^0 | 1.43 × 10^1 | 5.76 × 10^0 | 6.71 × 10^0 |
|  | Rank | 1 | 4 | 2 | 3 | 8 | 5 | 7 | 6 |
| F5 | Mean | 1.90 × 10^2 | 4.76 × 10^2 | 4.24 × 10^2 | 4.98 × 10^2 | 7.12 × 10^2 | 4.83 × 10^2 | 5.57 × 10^2 | 6.00 × 10^2 |
|  | Std | 6.05 × 10^1 | 7.72 × 10^1 | 3.36 × 10^1 | 6.57 × 10^1 | 6.85 × 10^1 | 7.78 × 10^1 | 3.17 × 10^1 | 5.66 × 10^1 |
|  | Rank | 1 | 3 | 2 | 5 | 8 | 4 | 6 | 7 |
| F6 | Mean | 6.82 × 10^1 | 1.88 × 10^2 | 2.54 × 10^2 | 1.40 × 10^2 | 2.72 × 10^2 | 2.34 × 10^2 | 2.93 × 10^2 | 2.25 × 10^2 |
|  | Std | 2.07 × 10^1 | 4.52 × 10^1 | 1.89 × 10^1 | 2.13 × 10^1 | 4.31 × 10^1 | 3.99 × 10^1 | 1.54 × 10^1 | 2.67 × 10^1 |
|  | Rank | 1 | 3 | 6 | 2 | 7 | 5 | 8 | 4 |
| F7 | Mean | 9.71 × 10^1 | 6.83 × 10^3 | 4.22 × 10^3 | 4.69 × 10^3 | 9.35 × 10^3 | 8.57 × 10^3 | 6.82 × 10^3 | 4.50 × 10^3 |
|  | Std | 2.63 × 10^2 | 2.35 × 10^3 | 9.97 × 10^2 | 8.28 × 10^2 | 1.85 × 10^3 | 3.01 × 10^3 | 8.69 × 10^2 | 7.24 × 10^2 |
|  | Rank | 1 | 6 | 2 | 4 | 8 | 7 | 5 | 3 |
| F8 | Mean | 3.29 × 10^3 | 4.82 × 10^3 | 7.20 × 10^3 | 4.35 × 10^3 | 7.05 × 10^3 | 5.55 × 10^3 | 7.33 × 10^3 | 5.51 × 10^3 |
|  | Std | 5.39 × 10^2 | 8.20 × 10^2 | 3.00 × 10^2 | 7.25 × 10^2 | 7.45 × 10^2 | 6.07 × 10^2 | 2.85 × 10^2 | 5.83 × 10^2 |
|  | Rank | 1 | 3 | 7 | 2 | 6 | 5 | 8 | 4 |
| F9 | Mean | 1.93 × 10^1 | 4.55 × 10^2 | 9.42 × 10^2 | 1.61 × 10^2 | 3.91 × 10^3 | 2.23 × 10^3 | 2.19 × 10^3 | 1.72 × 10^3 |
|  | Std | 1.70 × 10^1 | 1.40 × 10^2 | 2.14 × 10^2 | 4.86 × 10^1 | 1.64 × 10^3 | 1.69 × 10^3 | 6.72 × 10^2 | 9.74 × 10^2 |
|  | Rank | 1 | 3 | 4 | 2 | 8 | 7 | 6 | 5 |
| F10 | Mean | 1.00 × 10^3 | 3.05 × 10^7 | 1.26 × 10^9 | 7.61 × 10^6 | 4.69 × 10^8 | 8.88 × 10^8 | 2.08 × 10^9 | 6.27 × 10^9 |
|  | Std | 2.76 × 10^2 | 2.19 × 10^7 | 3.15 × 10^8 | 4.21 × 10^6 | 3.76 × 10^8 | 1.07 × 10^9 | 7.43 × 10^8 | 2.56 × 10^9 |
|  | Rank | 1 | 3 | 6 | 2 | 4 | 5 | 7 | 8 |
| F11 | Mean | 5.51 × 10^1 | 1.14 × 10^5 | 4.09 × 10^8 | 1.51 × 10^5 | 8.55 × 10^7 | 1.75 × 10^8 | 3.15 × 10^8 | 3.80 × 10^4 |
|  | Std | 1.42 × 10^1 | 8.65 × 10^4 | 1.50 × 10^8 | 9.05 × 10^4 | 4.66 × 10^8 | 4.14 × 10^8 | 2.10 × 10^8 | 1.71 × 10^4 |
|  | Rank | 1 | 3 | 8 | 4 | 5 | 6 | 7 | 2 |
| F12 | Mean | 3.52 × 10^1 | 5.12 × 10^5 | 1.47 × 10^5 | 3.82 × 10^4 | 1.50 × 10^6 | 3.73 × 10^5 | 1.19 × 10^5 | 5.72 × 10^4 |
|  | Std | 6.14 × 10^0 | 5.25 × 10^5 | 8.14 × 10^4 | 4.25 × 10^4 | 1.21 × 10^6 | 6.73 × 10^5 | 7.62 × 10^4 | 4.92 × 10^4 |
|  | Rank | 1 | 7 | 5 | 2 | 8 | 6 | 4 | 3 |
| F13 | Mean | 3.36 × 10^1 | 8.15 × 10^4 | 1.29 × 10^7 | 6.86 × 10^4 | 1.83 × 10^7 | 2.48 × 10^7 | 1.82 × 10^6 | 2.35 × 10^4 |
|  | Std | 1.18 × 10^1 | 3.82 × 10^4 | 1.07 × 10^7 | 4.86 × 10^4 | 2.37 × 10^7 | 7.80 × 10^7 | 1.46 × 10^6 | 1.22 × 10^4 |
|  | Rank | 1 | 4 | 6 | 3 | 7 | 8 | 5 | 2 |
| F14 | Mean | 5.82 × 10^2 | 1.79 × 10^3 | 2.01 × 10^3 | 1.55 × 10^3 | 2.74 × 10^3 | 1.43 × 10^3 | 3.18 × 10^3 | 1.98 × 10^3 |
|  | Std | 2.51 × 10^2 | 4.36 × 10^2 | 2.98 × 10^2 | 3.56 × 10^2 | 5.38 × 10^2 | 2.92 × 10^2 | 4.12 × 10^2 | 5.09 × 10^2 |
|  | Rank | 1 | 4 | 6 | 3 | 7 | 2 | 8 | 5 |
| F15 | Mean | 8.75 × 10^1 | 7.32 × 10^2 | 7.16 × 10^2 | 7.48 × 10^2 | 1.20 × 10^3 | 6.06 × 10^2 | 1.22 × 10^3 | 9.12 × 10^2 |
|  | Std | 4.91 × 10^1 | 2.68 × 10^2 | 1.75 × 10^2 | 2.19 × 10^2 | 3.85 × 10^2 | 2.30 × 10^2 | 2.49 × 10^2 | 2.67 × 10^2 |
|  | Rank | 1 | 4 | 3 | 5 | 7 | 2 | 8 | 6 |
| F16 | Mean | 3.27 × 10^1 | 1.84 × 10^6 | 3.93 × 10^6 | 6.90 × 10^5 | 1.51 × 10^7 | 2.08 × 10^6 | 9.60 × 10^5 | 1.29 × 10^6 |
|  | Std | 2.83 × 10^0 | 2.09 × 10^6 | 3.32 × 10^6 | 8.77 × 10^5 | 1.51 × 10^7 | 4.09 × 10^6 | 6.22 × 10^5 | 1.60 × 10^6 |
|  | Rank | 1 | 5 | 7 | 2 | 8 | 6 | 3 | 4 |
| F17 | Mean | 2.39 × 10^1 | 1.60 × 10^6 | 2.56 × 10^7 | 1.46 × 10^5 | 4.23 × 10^7 | 1.11 × 10^7 | 4.61 × 10^6 | 1.08 × 10^6 |
|  | Std | 3.26 × 10^0 | 1.36 × 10^6 | 1.31 × 10^7 | 1.42 × 10^5 | 1.23 × 10^8 | 3.45 × 10^7 | 4.06 × 10^6 | 1.39 × 10^5 |
|  | Rank | 1 | 4 | 7 | 2 | 8 | 6 | 5 | 3 |
| F18 | Mean | 1.87 × 10^2 | 7.03 × 10^2 | 6.05 × 10^2 | 6.71 × 10^2 | 8.59 × 10^2 | 7.24 × 10^2 | 7.29 × 10^2 | 6.94 × 10^2 |
|  | Std | 8.81 × 10^1 | 1.96 × 10^2 | 1.32 × 10^2 | 2.01 × 10^2 | 2.42 × 10^2 | 2.09 × 10^2 | 9.88 × 10^1 | 1.54 × 10^2 |
|  | Rank | 1 | 5 | 2 | 3 | 8 | 6 | 7 | 4 |
| F19 | Mean | 2.44 × 10^2 | 4.40 × 10^2 | 4.48 × 10^2 | 4.06 × 10^2 | 5.06 × 10^2 | 4.68 × 10^2 | 1.97 × 10^2 | 4.87 × 10^2 |
|  | Std | 1.35 × 10^1 | 4.86 × 10^1 | 1.97 × 10^1 | 3.51 × 10^1 | 5.36 × 10^1 | 4.96 × 10^1 | 3.01 × 10^1 | 5.23 × 10^1 |
|  | Rank | 2 | 4 | 5 | 3 | 8 | 6 | 1 | 7 |
| F20 | Mean | 1.00 × 10^2 | 3.13 × 10^3 | 4.85 × 10^3 | 2.39 × 10^3 | 4.18 × 10^3 | 4.47 × 10^3 | 4.71 × 10^2 | 5.13 × 10^3 |
|  | Std | 7.09 × 10^−6 | 2.44 × 10^3 | 2.94 × 10^3 | 2.37 × 10^3 | 1.88 × 10^3 | 2.09 × 10^3 | 7.76 × 10^1 | 1.21 × 10^3 |
|  | Rank | 1 | 4 | 7 | 3 | 5 | 6 | 2 | 8 |
| F21 | Mean | 3.86 × 10^2 | 7.09 × 10^2 | 6.84 × 10^2 | 7.05 × 10^2 | 8.60 × 10^2 | 7.86 × 10^2 | 6.97 × 10^2 | 9.68 × 10^2 |
|  | Std | 1.57 × 10^1 | 8.66 × 10^1 | 3.49 × 10^1 | 7.35 × 10^1 | 1.00 × 10^2 | 8.15 × 10^1 | 5.59 × 10^1 | 9.10 × 10^1 |
|  | Rank | 1 | 5 | 2 | 4 | 7 | 6 | 3 | 8 |
| F22 | Mean | 4.41 × 10^2 | 7.30 × 10^2 | 7.51 × 10^2 | 8.26 × 10^2 | 8.99 × 10^2 | 8.47 × 10^2 | 1.10 × 10^3 | 1.14 × 10^3 |
|  | Std | 2.72 × 10^1 | 7.30 × 10^1 | 2.52 × 10^1 | 7.42 × 10^1 | 1.37 × 10^2 | 8.08 × 10^1 | 1.68 × 10^2 | 1.09 × 10^2 |
|  | Rank | 1 | 2 | 3 | 4 | 6 | 5 | 7 | 8 |
| F23 | Mean | 3.88 × 10^2 | 4.66 × 10^2 | 6.99 × 10^2 | 4.11 × 10^2 | 7.84 × 10^2 | 7.61 × 10^2 | 1.75 × 10^3 | 1.67 × 10^3 |
|  | Std | 3.75 × 10^0 | 3.26 × 10^1 | 5.73 × 10^1 | 1.87 × 10^1 | 1.30 × 10^2 | 3.02 × 10^2 | 2.01 × 10^2 | 4.55 × 10^2 |
|  | Rank | 1 | 3 | 4 | 2 | 6 | 5 | 8 | 7 |
| F24 | Mean | 1.20 × 10^3 | 4.44 × 10^3 | 4.24 × 10^3 | 3.94 × 10^3 | 6.30 × 10^3 | 5.01 × 10^3 | 5.21 × 10^3 | 6.40 × 10^3 |
|  | Std | 5.80 × 10^2 | 1.11 × 10^3 | 2.93 × 10^2 | 1.10 × 10^3 | 1.11 × 10^3 | 8.76 × 10^2 | 1.49 × 10^3 | 7.22 × 10^2 |
|  | Rank | 1 | 4 | 3 | 2 | 7 | 5 | 6 | 8 |
| F25 | Mean | 4.84 × 10^2 | 6.47 × 10^2 | 7.03 × 10^2 | 6.05 × 10^2 | 9.56 × 10^2 | 7.30 × 10^2 | 8.14 × 10^2 | 1.34 × 10^3 |
|  | Std | 1.30 × 10^1 | 8.42 × 10^1 | 3.63 × 10^1 | 4.00 × 10^1 | 1.65 × 10^2 | 9.92 × 10^1 | 9.81 × 10^1 | 2.14 × 10^2 |
|  | Rank | 1 | 3 | 4 | 2 | 7 | 5 | 6 | 8 |
| F26 | Mean | 3.30 × 10^2 | 5.13 × 10^2 | 1.04 × 10^3 | 4.62 × 10^2 | 1.06 × 10^3 | 1.27 × 10^3 | 3.28 × 10^3 | 2.95 × 10^3 |
|  | Std | 5.09 × 10^1 | 3.27 × 10^1 | 1.23 × 10^2 | 2.60 × 10^1 | 3.22 × 10^2 | 4.52 × 10^2 | 3.99 × 10^2 | 6.15 × 10^2 |
|  | Rank | 1 | 3 | 4 | 2 | 5 | 6 | 8 | 7 |
| F27 | Mean | 5.82 × 10^2 | 1.88 × 10^3 | 1.70 × 10^3 | 1.32 × 10^3 | 2.64 × 10^3 | 1.58 × 10^3 | 3.04 × 10^3 | 2.43 × 10^3 |
|  | Std | 6.25 × 10^1 | 4.08 × 10^2 | 2.31 × 10^2 | 2.56 × 10^2 | 6.35 × 10^2 | 4.08 × 10^2 | 4.72 × 10^2 | 5.22 × 10^2 |
|  | Rank | 1 | 5 | 4 | 2 | 7 | 3 | 8 | 6 |
| F28 | Mean | 2.01 × 10^3 | 7.04 × 10^6 | 7.41 × 10^7 | 1.01 × 10^6 | 4.85 × 10^7 | 1.33 × 10^7 | 3.98 × 10^7 | 1.47 × 10^7 |
|  | Std | 3.41 × 10^1 | 4.69 × 10^6 | 3.63 × 10^7 | 6.08 × 10^5 | 3.75 × 10^7 | 1.07 × 10^7 | 2.31 × 10^7 | 1.01 × 10^7 |
|  | Rank | 1 | 3 | 8 | 2 | 7 | 4 | 6 | 5 |
| Average ranking |  | 1.04 | 3.96 | 4.57 | 2.71 | 6.86 | 5.25 | 6.00 | 5.61 |
| Total ranking |  | 1 | 3 | 4 | 2 | 8 | 5 | 7 | 6 |
Table 4. The p-value results on the CEC 2017 30D test obtained with the Wilcoxon signed-rank test.
| ASFAOA vs. (p-Value) | WOA | SCA | HHO | SSA | TSA | BOA | AOA |
| F1 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F2 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F3 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F4 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F5 | 5.15 × 10^−10 | 5.46 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.46 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F6 | 5.46 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F7 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F8 | 1.32 × 10^−9 | 5.15 × 10^−10 | 4.17 × 10^−8 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F9 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F11 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F12 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F13 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F14 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 9.87 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F15 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F16 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F17 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F18 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.46 × 10^−10 |
| F19 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 8.18 × 10^−9 (+) | 5.15 × 10^−10 |
| F20 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F21 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F22 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F23 | 5.15 × 10^−10 | 5.15 × 10^−10 | 1.86 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F24 | 7.35 × 10^−10 | 5.15 × 10^−10 | 8.27 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F25 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F26 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F27 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| F28 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10 |
| +/−/= | 28/0/0 | 28/0/0 | 28/0/0 | 28/0/0 | 28/0/0 | 27/0/1 | 28/0/0 |
Table 5. ASFAOA variants with different improvement strategies.
| Algorithm | DOL | ASS | ACA | ODE |
| ASFAOA-1 | Yes | No | No | No |
| ASFAOA-2 | No | Yes | No | No |
| ASFAOA-3 | No | No | No | Yes |
| ASFAOA-4 | No | No | Yes | No |
| ASFAOA | Yes | Yes | Yes | Yes |
Table 6. Statistics of the results in CEC2017 30D test using ASFAOA variants.
FunctionItemsASFAOAASFAOA-1ASFAOA-2ASFAOA-3ASFAOA-4AOA
F1Mean4.51 × 10−67.36 × 1045.76 × 1041.13 × 10−58.03 × 1046.91 × 104
Std1.46 × 10−69.22 × 1039.54 × 1031.35 × 10−69.21 × 1031.15 × 104
Rank153264
F2Mean1.93 × 101.41 × 1035.77 × 1022.66 × 101.79 × 1037.61 × 103
Std2.75 × 109.71 × 1023.31 × 1023.55 × 101.11 × 1032.45 × 103
Rank143256
F3Mean6.55 × 102.70 × 1022.49 × 1022.40 × 1022.75 × 1022.95 × 102
Std1.88 × 103.92 × 104.32 × 104.34 × 103.35 × 103.20 × 10
Rank143256
F4Mean1.96 × 10−13.59 × 105.76 × 105.21 × 103.43 × 106.21 × 10
Std4.13 × 10−14.71 × 1009.30 × 1007.33 × 1004.74 × 1006.71 × 100
Rank135426
F5Mean1.90 × 1024.38 × 1025.89 × 1025.78 × 1024.32 × 1026.00 × 102
(Alg. 2–6 denote the five comparison algorithms, in the same column order as the original table; exponents are restored from the stripped superscripts.)

Function | Metric | ASFAOA | Alg. 2 | Alg. 3 | Alg. 4 | Alg. 5 | Alg. 6
---------|--------|--------|--------|--------|--------|--------|-------
F5 | Std | 6.05 × 10^1 | 5.90 × 10^1 | 6.33 × 10^1 | 4.90 × 10^1 | 6.87 × 10^1 | 5.66 × 10^1
F5 | Rank | 1 | 3 | 5 | 4 | 2 | 6
F6 | Mean | 6.82 × 10^1 | 2.18 × 10^2 | 1.84 × 10^2 | 1.69 × 10^2 | 2.16 × 10^2 | 2.25 × 10^2
F6 | Std | 2.07 × 10^1 | 2.95 × 10^1 | 3.44 × 10^1 | 3.01 × 10^1 | 2.46 × 10^1 | 2.67 × 10^1
F6 | Rank | 1 | 5 | 3 | 2 | 4 | 6
F7 | Mean | 9.71 × 10^1 | 4.56 × 10^3 | 4.62 × 10^3 | 4.70 × 10^3 | 4.43 × 10^3 | 4.50 × 10^3
F7 | Std | 2.63 × 10^2 | 6.71 × 10^2 | 5.02 × 10^2 | 1.28 × 10^3 | 5.56 × 10^2 | 7.24 × 10^2
F7 | Rank | 1 | 4 | 5 | 6 | 2 | 3
F8 | Mean | 3.29 × 10^3 | 5.02 × 10^3 | 3.81 × 10^3 | 4.14 × 10^3 | 4.93 × 10^3 | 5.51 × 10^3
F8 | Std | 5.39 × 10^2 | 4.20 × 10^2 | 5.78 × 10^2 | 5.74 × 10^2 | 4.53 × 10^2 | 5.83 × 10^2
F8 | Rank | 1 | 5 | 2 | 3 | 4 | 6
F9 | Mean | 1.93 × 10^1 | 2.37 × 10^3 | 4.92 × 10^2 | 8.94 × 10^1 | 2.65 × 10^3 | 1.72 × 10^3
F9 | Std | 1.70 × 10^1 | 1.00 × 10^3 | 3.88 × 10^2 | 2.69 × 10^1 | 1.44 × 10^3 | 9.74 × 10^2
F9 | Rank | 1 | 5 | 3 | 2 | 6 | 4
F10 | Mean | 1.00 × 10^3 | 1.08 × 10^9 | 3.22 × 10^7 | 2.29 × 10^3 | 1.11 × 10^9 | 6.27 × 10^9
F10 | Std | 2.76 × 10^2 | 8.19 × 10^8 | 3.98 × 10^7 | 1.58 × 10^3 | 8.13 × 10^8 | 2.56 × 10^9
F10 | Rank | 1 | 4 | 3 | 2 | 5 | 6
F11 | Mean | 5.51 × 10^1 | 2.33 × 10^7 | 1.99 × 10^4 | 1.18 × 10^3 | 2.95 × 10^7 | 3.80 × 10^4
F11 | Std | 1.42 × 10^1 | 3.57 × 10^7 | 1.33 × 10^4 | 6.98 × 10^2 | 4.15 × 10^7 | 1.71 × 10^4
F11 | Rank | 1 | 5 | 3 | 2 | 6 | 4
F12 | Mean | 3.52 × 10^1 | 5.32 × 10^5 | 7.15 × 10^4 | 8.18 × 10^1 | 4.71 × 10^5 | 5.72 × 10^4
F12 | Std | 6.14 × 10^0 | 5.15 × 10^5 | 6.52 × 10^4 | 1.81 × 10^1 | 4.02 × 10^5 | 4.92 × 10^4
F12 | Rank | 1 | 6 | 4 | 2 | 5 | 3
F13 | Mean | 3.36 × 10^1 | 5.78 × 10^3 | 8.96 × 10^3 | 2.35 × 10^2 | 6.13 × 10^3 | 2.35 × 10^4
F13 | Std | 1.18 × 10^1 | 5.79 × 10^3 | 6.46 × 10^3 | 9.44 × 10^1 | 6.06 × 10^3 | 1.22 × 10^4
F13 | Rank | 1 | 3 | 5 | 2 | 4 | 6
F14 | Mean | 5.82 × 10^2 | 1.90 × 10^3 | 1.44 × 10^3 | 1.28 × 10^3 | 1.83 × 10^3 | 1.98 × 10^3
F14 | Std | 2.51 × 10^2 | 3.36 × 10^2 | 3.66 × 10^2 | 2.92 × 10^2 | 3.83 × 10^2 | 5.09 × 10^2
F14 | Rank | 1 | 5 | 3 | 2 | 4 | 6
F15 | Mean | 8.75 × 10^1 | 6.81 × 10^2 | 5.81 × 10^2 | 6.48 × 10^2 | 6.52 × 10^2 | 9.12 × 10^2
F15 | Std | 4.91 × 10^1 | 2.10 × 10^2 | 2.07 × 10^2 | 2.46 × 10^2 | 2.23 × 10^2 | 2.67 × 10^2
F15 | Rank | 1 | 5 | 2 | 3 | 4 | 6
F16 | Mean | 3.27 × 10^1 | 9.02 × 10^5 | 6.42 × 10^5 | 6.03 × 10^1 | 9.57 × 10^5 | 1.29 × 10^6
F16 | Std | 2.83 × 10^0 | 5.80 × 10^5 | 1.32 × 10^6 | 6.55 × 10^1 | 7.72 × 10^5 | 1.60 × 10^6
F16 | Rank | 1 | 4 | 3 | 2 | 5 | 6
F17 | Mean | 2.39 × 10^1 | 5.26 × 10^3 | 9.59 × 10^3 | 5.60 × 10^1 | 5.28 × 10^3 | 1.08 × 10^6
F17 | Std | 3.26 × 10^0 | 1.02 × 10^4 | 1.05 × 10^4 | 2.36 × 10^1 | 9.15 × 10^3 | 1.39 × 10^5
F17 | Rank | 1 | 3 | 5 | 2 | 4 | 6
F18 | Mean | 1.87 × 10^2 | 5.92 × 10^2 | 5.27 × 10^2 | 5.78 × 10^2 | 5.69 × 10^2 | 6.94 × 10^2
F18 | Std | 8.81 × 10^1 | 1.74 × 10^2 | 1.74 × 10^2 | 1.60 × 10^2 | 1.79 × 10^2 | 1.54 × 10^2
F18 | Rank | 1 | 5 | 2 | 4 | 3 | 6
F19 | Mean | 2.44 × 10^2 | 3.78 × 10^2 | 3.97 × 10^2 | 4.03 × 10^2 | 3.98 × 10^2 | 4.87 × 10^2
F19 | Std | 1.35 × 10^1 | 1.11 × 10^2 | 4.06 × 10^1 | 4.69 × 10^1 | 1.00 × 10^2 | 5.23 × 10^1
F19 | Rank | 1 | 2 | 3 | 5 | 4 | 6
F20 | Mean | 1.00 × 10^2 | 3.18 × 10^3 | 1.27 × 10^3 | 3.29 × 10^3 | 4.06 × 10^3 | 5.13 × 10^3
F20 | Std | 7.09 × 10^−6 | 1.81 × 10^3 | 1.04 × 10^3 | 2.06 × 10^3 | 1.81 × 10^3 | 1.21 × 10^3
F20 | Rank | 1 | 3 | 2 | 4 | 5 | 6
F21 | Mean | 3.86 × 10^2 | 6.91 × 10^2 | 6.65 × 10^2 | 8.05 × 10^2 | 6.73 × 10^2 | 9.68 × 10^2
F21 | Std | 1.57 × 10^1 | 6.34 × 10^1 | 6.69 × 10^1 | 7.90 × 10^1 | 5.88 × 10^1 | 9.10 × 10^1
F21 | Rank | 1 | 4 | 2 | 5 | 3 | 6
F22 | Mean | 4.41 × 10^2 | 9.00 × 10^2 | 7.37 × 10^2 | 9.72 × 10^2 | 9.21 × 10^2 | 1.14 × 10^3
F22 | Std | 2.72 × 10^1 | 7.59 × 10^1 | 6.56 × 10^1 | 9.31 × 10^1 | 7.68 × 10^1 | 1.09 × 10^2
F22 | Rank | 1 | 3 | 2 | 5 | 4 | 6
F23 | Mean | 3.88 × 10^2 | 7.62 × 10^2 | 5.98 × 10^2 | 4.35 × 10^2 | 7.83 × 10^2 | 1.67 × 10^3
F23 | Std | 3.75 × 10^0 | 1.47 × 10^2 | 7.47 × 10^1 | 2.28 × 10^1 | 2.12 × 10^2 | 4.55 × 10^2
F23 | Rank | 1 | 4 | 3 | 2 | 5 | 6
F24 | Mean | 1.20 × 10^3 | 4.18 × 10^3 | 4.15 × 10^3 | 4.78 × 10^3 | 3.88 × 10^3 | 6.40 × 10^3
F24 | Std | 5.80 × 10^2 | 1.25 × 10^3 | 1.25 × 10^3 | 2.29 × 10^3 | 1.30 × 10^3 | 7.22 × 10^2
F24 | Rank | 1 | 4 | 3 | 5 | 2 | 6
F25 | Mean | 4.84 × 10^2 | 7.02 × 10^2 | 7.85 × 10^2 | 9.52 × 10^2 | 7.19 × 10^2 | 1.34 × 10^3
F25 | Std | 1.30 × 10^1 | 7.30 × 10^1 | 9.63 × 10^1 | 1.52 × 10^2 | 8.94 × 10^1 | 2.14 × 10^2
F25 | Rank | 1 | 2 | 4 | 5 | 3 | 6
F26 | Mean | 3.30 × 10^2 | 1.13 × 10^3 | 7.79 × 10^2 | 3.09 × 10^2 | 1.16 × 10^3 | 2.95 × 10^3
F26 | Std | 5.09 × 10^1 | 2.65 × 10^2 | 1.95 × 10^2 | 3.14 × 10^1 | 3.17 × 10^2 | 6.15 × 10^2
F26 | Rank | 2 | 4 | 3 | 1 | 5 | 6
F27 | Mean | 5.82 × 10^2 | 1.56 × 10^3 | 1.59 × 10^3 | 1.44 × 10^3 | 1.55 × 10^3 | 2.43 × 10^3
F27 | Std | 6.25 × 10^1 | 3.45 × 10^2 | 3.01 × 10^2 | 2.78 × 10^2 | 3.66 × 10^2 | 5.22 × 10^2
F27 | Rank | 1 | 4 | 5 | 2 | 3 | 6
F28 | Mean | 2.01 × 10^3 | 6.24 × 10^6 | 5.05 × 10^5 | 2.84 × 10^3 | 5.72 × 10^6 | 1.47 × 10^7
F28 | Std | 3.41 × 10^1 | 5.88 × 10^6 | 1.30 × 10^6 | 4.69 × 10^2 | 6.03 × 10^6 | 1.01 × 10^7
F28 | Rank | 1 | 5 | 3 | 2 | 4 | 6
Average ranking | | 1.04 | 4.04 | 3.29 | 3.00 | 4.07 | 5.57
Total ranking | | 1 | 4 | 3 | 2 | 5 | 6
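The "Average ranking" and "Total ranking" rows are straightforward aggregations of the per-function ranks: each algorithm's ranks are averaged over all benchmark functions, and the algorithms are then ordered by that average. A minimal sketch of this aggregation, using the ranks of three sample functions from the table as input (the full table would supply all 28 rank rows):

```python
# Per-function ranks for six algorithms (sample rows transcribed from the table).
per_function_ranks = [
    [1, 3, 5, 4, 2, 6],   # F5
    [1, 5, 3, 2, 4, 6],   # F6
    [1, 4, 5, 6, 2, 3],   # F7
]

n_algorithms = len(per_function_ranks[0])

# Average rank of each algorithm over the benchmark functions.
avg_ranks = [
    sum(row[i] for row in per_function_ranks) / len(per_function_ranks)
    for i in range(n_algorithms)
]

# Total (overall) ranking: position of each algorithm when sorted by average rank.
order = sorted(range(n_algorithms), key=lambda i: avg_ranks[i])
total_ranking = [0] * n_algorithms
for place, i in enumerate(order, start=1):
    total_ranking[i] = place
```

With all 28 functions included, this procedure reproduces the 1.04 / 4.04 / 3.29 / 3.00 / 4.07 / 5.57 averages and the 1-4-3-2-5-6 overall ordering reported above.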
Table 7. Comparison of WSN performance in case 1.

Algorithm | Best | Mean | Std
----------|------|------|----
ASFAOA | 75.21% | 67.05% | 0.03
AOA | 59.50% | 56.75% | 0.01
HHO | 66.94% | 61.46% | 0.02
WOA | 66.94% | 62.40% | 0.03
Table 8. Comparison of WSN performance in case 2.

Algorithm | Best | Mean | Std
----------|------|------|----
ASFAOA | 83.93% | 79.72% | 0.03
AOA | 69.40% | 65.14% | 0.02
HHO | 78.82% | 78.58% | 0.01
WOA | 78.82% | 75.95% | 0.02
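The coverage percentages in Tables 7 and 8 are the objective being maximized in the wireless sensor coverage problem. A common way to evaluate such a coverage rate is the binary (Boolean) disk perception model: the monitoring area is discretized into grid points, and a point counts as covered when it lies within the sensing radius of at least one node. The sketch below illustrates this model only; the area size, sensing radius, grid resolution, and node positions are illustrative assumptions, not the case settings used in the paper.

```python
import math

def coverage_rate(nodes, width, height, radius, grid=50):
    """Fraction of grid-cell centers covered by at least one sensor
    under a binary disk perception model."""
    covered = 0
    for gx in range(grid):
        for gy in range(grid):
            # Center of each grid cell.
            px = (gx + 0.5) * width / grid
            py = (gy + 0.5) * height / grid
            if any(math.hypot(px - x, py - y) <= radius for x, y in nodes):
                covered += 1
    return covered / (grid * grid)

# Illustrative deployment: 4 non-overlapping sensors in a 10 x 10 area.
nodes = [(2.5, 2.5), (2.5, 7.5), (7.5, 2.5), (7.5, 7.5)]
rate = coverage_rate(nodes, width=10, height=10, radius=2, grid=50)
```

In an optimizer such as ASFAOA, each candidate solution would encode the node coordinates, and a function like `coverage_rate` would serve as the fitness to be maximized.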

Yang, S.; Zhang, L.; Yang, X.; Sun, J.; Dong, W. A Multiple Mechanism Enhanced Arithmetic Optimization Algorithm for Numerical Problems. Biomimetics 2023, 8, 348. https://doi.org/10.3390/biomimetics8040348

