1. Introduction
With the rapid development of various fields, real-world optimization problems are becoming increasingly complicated. When dealing with these emerging problems, traditional optimization methods require excessive time and computational cost. In most cases, near-optimal solutions are acceptable, meaning that a sufficiently good estimated solution is acceptable in production practice. Metaheuristic algorithms are an emerging class of optimization techniques with the advantages of high operational efficiency, flexibility, stability, simplicity of implementation, parallelism, and ease of combination with other algorithms [1]. Therefore, many optimization algorithms have been proposed in recent decades to solve such nonconvex, nonlinearly constrained, and complex optimization problems, and they have proven to be very effective in practice.
As one of these novel algorithms, AOA was initially applied to numerical optimization and engineering design problems. Due to its uncomplicated structure and excellent performance, AOA has been applied in many areas, such as support vector regression (SVR) parameter optimization [2], PID controller tuning [3,4], fuel cell parameter extraction [5], DNA sequence optimization design [6], clustering optimization [7,8], power system stabilizer design [9], feature selection [10], photovoltaic parameter optimization [11,12,13], robot path planning [14], wireless sensor network localization and deployment [15], IoT workflow scheduling [16], and image segmentation [17].
Although AOA performs well, its convergence rate decreases and it tends to fall into local optima when facing optimization problems with complex structures. Therefore, many studies have improved the convergence accuracy and convergence speed of AOA by adopting various mechanisms. For example, Dhawale et al. used the Levy flight strategy to enhance the exploitation and exploration capabilities of AOA [18]. Zhang et al. proposed a hybrid AOA that introduces the energy parameter of Harris hawks optimization to balance exploitation and exploration [19]. Izci et al. proposed a hybrid arithmetic optimization algorithm incorporating a Nelder–Mead simplex search for the optimal design of automotive cruise control systems [20]. Chen et al. proposed an improved arithmetic optimization algorithm based on a population control strategy that classifies populations and adaptively controls the number of individuals in subpopulations, effectively using information about each individual to improve the accuracy of the solution [21]. Davut et al. modified the basic opposition learning mechanism and applied it to enhance the population diversity of arithmetic optimization algorithms [22]. Fang et al. used dynamic inertia weights to improve the exploration and exploitation capability of the algorithm and introduced dynamic variance probability coefficients and triangular variance strategies to help the algorithm avoid local optima. Zhang et al. used a differential variance ranking strategy to improve the local exploitation capability of AOA [23]. Abualigah et al. combined AOA with the sine cosine algorithm to enhance the local search performance of the algorithm [24]. Celik et al. introduced a Gaussian distribution and a quasi-opposition learning strategy to remedy the slow convergence of AOA [25]. Ozmen et al. presented an augmented arithmetic optimization algorithm integrating pattern search and elite opposition learning mechanisms [26]. Zheng et al. used stochastic math optimizer probabilities to increase population diversity and proposed a forced switching mechanism to help populations jump out of local optima [27]. An improved arithmetic optimization algorithm combining a logarithmic spiral mechanism and a greedy selection mechanism was proposed and employed for solving PID control problems by Ekinci et al. [28].
This work presents a variant of AOA called ASFAOA for numerical optimization and wireless sensor coverage problems, integrating a double-opposition learning mechanism, an adaptive spiral search strategy, an offset distribution estimation strategy, and a modified cosine acceleration function into the original AOA. The contributions of this work are summarized as follows:
- The double-opposition learning strategy is used to enhance population diversity, improving the global exploration capability of the method.
- The adaptive spiral search strategy is used to adequately search the space around each individual, further improving the local optimum avoidance of the method.
- The offset distribution estimation strategy is used to efficiently utilize dominant population information to guide individuals towards correct evolution, further improving the accuracy of the method.
- The adaptive cosine acceleration function is used to balance the exploitation and exploration abilities of the algorithm, accelerating the convergence speed and accuracy of the population.
- ASFAOA is evaluated on the CEC2017 benchmark functions to validate its global optimization capability.
- ASFAOA is used to solve the wireless sensor coverage problem.
The structure of the article is as follows: Section 2 gives an overview of the original AOA. Section 3 describes the implementation of the improved method. The results and discussion of comparing ASFAOA with other algorithms on the CEC2017 function tests and wireless sensor coverage problems are shown in Section 4. Finally, Section 5 presents the conclusions of the proposed work and future plans.
2. Arithmetic Optimization Algorithm
The AOA is a recent meta-heuristic method proposed by Abualigah in 2021 [29]. The AOA utilizes four traditional arithmetic operators to build its position update formulas, which are presented as follows:
2.1. Initialization Phase
In AOA, the initial population is generated randomly in the search space with the following equation:

X_{i,j} = lb_j + rand × (ub_j − lb_j), i = 1, 2, …, N, j = 1, 2, …, D

where X_i is the i-th initial individual, ub and lb are the upper and lower boundaries of the search space, N is the population size, and rand is a uniformly distributed random vector ranging from 0 to 1.
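As an illustration, this initialization step can be sketched in Python with NumPy (function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def initialize_population(n_pop, dim, lb, ub, rng=None):
    """Randomly initialize n_pop agents inside [lb, ub] (Sec. 2.1)."""
    rng = np.random.default_rng() if rng is None else rng
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    # X_ij = lb_j + rand * (ub_j - lb_j), rand ~ U(0, 1)
    return lb + rng.random((n_pop, dim)) * (ub - lb)
```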
After initializing the population, the Math Optimizer Accelerated (MOA) function is computed to choose whether to perform exploitation or exploration behavior:

MOA(t) = MOA_min + t × (MOA_max − MOA_min)/T

where t and T denote the current iteration and the maximum iteration, and MOA_max and MOA_min are 0.9 and 0.2, respectively.
2.2. Exploration Phase
When r1 > MOA (r1 being a random number in [0, 1]), the problem is explored globally using the multiplication and division operators. The mathematical model is as follows:

x_{i,j}(t+1) = x_best,j ÷ (MOP + ε) × ((ub_j − lb_j) × µ + lb_j), if r2 < 0.5
x_{i,j}(t+1) = x_best,j × MOP × ((ub_j − lb_j) × µ + lb_j), otherwise

where x_best is the global best agent, ε is a minimal value that guarantees that the denominator is not zero, and µ is a constant with a value of 0.499. The Math Optimizer Probability (MOP) is as follows:

MOP(t) = 1 − (t/T)^{1/α}

where α is a sensitivity parameter (set to 5 in the original AOA).
2.3. Exploitation Phase
When r1 < MOA, the exploitation is performed using the subtraction (“−”) and addition (“+”) operators. The mathematical model is as follows:

x_{i,j}(t+1) = x_best,j − MOP × ((ub_j − lb_j) × µ + lb_j), if r3 < 0.5
x_{i,j}(t+1) = x_best,j + MOP × ((ub_j − lb_j) × µ + lb_j), otherwise
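The four-operator update of the basic AOA can be sketched as follows (a minimal NumPy sketch following the original AOA paper's formulas; parameter names and the final clipping to the bounds are our simplifications):

```python
import numpy as np

def aoa_update(X, best, t, T, lb, ub, alpha=5.0, mu=0.499,
               moa_min=0.2, moa_max=0.9, eps=1e-12, rng=None):
    """One iteration of the basic AOA position update (Secs. 2.2-2.3)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    moa = moa_min + t * (moa_max - moa_min) / T       # linear MOA schedule
    mop = 1.0 - (t / T) ** (1.0 / alpha)              # math optimizer probability
    scale = (ub - lb) * mu + lb
    Xn = X.copy()
    for i in range(n):
        for j in range(d):
            r1, r2, r3 = rng.random(3)
            if r1 > moa:                              # exploration: division / multiplication
                if r2 < 0.5:
                    Xn[i, j] = best[j] / (mop + eps) * scale
                else:
                    Xn[i, j] = best[j] * mop * scale
            else:                                     # exploitation: subtraction / addition
                if r3 < 0.5:
                    Xn[i, j] = best[j] - mop * scale
                else:
                    Xn[i, j] = best[j] + mop * scale
    return np.clip(Xn, lb, ub)                        # keep agents inside the bounds
```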
3. The Proposed ASFAOA
The basic arithmetic optimization algorithm has a simple structure and some ability to search for the best individual, but it still has several weaknesses. First, in terms of population initialization, AOA randomly initializes the population in the search space, which does not spread over the entire search space well. Second, all four update methods of AOA focus only on the best individual of the group and update random positions around it. Because there is a lack of information exchange between individuals, each agent "looks" only at the current optimal position of the group regardless of the rest of the search agents, so the individual search efficiency is extremely low. This inevitably causes the search agents to over-cluster in the vicinity of the current optimal position, so population diversity cannot be effectively maintained and the algorithm falls into local optima.
AOA selects exploitation or exploration by controlling the change of the acceleration function MOA. The larger the MOA, the greater the global search capability of the algorithm; the smaller the MOA, the greater the local exploitation capability. In the basic AOA, MOA grows linearly, meaning the global search capability of the algorithm changes linearly. This is inconsistent with the common search strategy of swarm intelligence optimization algorithms, in which the algorithm focuses on global exploration in the early stage of the search and on local exploitation in the later stage. On the other hand, the evolutionary exploration of AOA is nonlinear, and the linear growth of MOA cannot accurately approximate the actual iterative process, which makes it difficult for AOA to balance exploitation and exploration.
To remedy these shortcomings of the basic arithmetic optimization algorithm and enhance its performance, this paper proposes an AOA variant called ASFAOA. The optimization performance of AOA is enhanced by the following approaches. First, population diversity is enhanced by initializing the population using double-opposition learning and by searching the problem space more effectively with the double-opposition learning strategy during iteration. Second, the spiral search strategy of the tuna swarm optimization algorithm is introduced into the addition and subtraction operations of AOA to search the space around each individual more effectively, thus enhancing the ability of AOA to jump out of local optima. Third, an adaptive cosine acceleration function is proposed to better balance the exploitation and exploration capabilities of the algorithm. Fourth, an offset distribution estimation strategy is used to effectively utilize dominant population information to guide individuals to evolve correctly. Fifth, a stochastic boundary control strategy is proposed to increase the search range of each individual. The details of the improvement strategies are described as follows.
3.1. Double-Opposition Learning Strategy (DOL)
The opposition learning strategy is a technique that has emerged in the field of optimization computing in recent years. It mainly enhances population diversity and prevents the algorithm from falling into local optima by generating the opposite position of each individual and evaluating both the original and the opposite individuals so as to retain the dominant individual in the next generation. The specific formula is as follows:

x̂_i = lb + ub − x_i

where x̂_i is the corresponding opposite solution of x_i. In order to further enhance population diversity and overcome the deficiency that the opposite solution generated by the basic opposition learning strategy is not necessarily better than the current solution, and considering that the tent chaotic map has the characteristics of randomness and ergodicity, which can help generate new solutions and enhance population diversity [30], this paper combines the tent chaotic map with the opposition learning strategy and proposes a tent opposition learning mechanism, in which the opposite solution is perturbed by the tent chaotic map value. Here, x̃_i denotes the solution generated by tent opposition learning corresponding to the i-th individual in the population, and z_i is the corresponding tent chaotic map value. In addition to the tent opposition learning strategy, a lens opposition learning strategy is also proposed. This strategy builds a mathematical model using the property that an object beyond the focal point of a convex lens forms an inverted real image on the other side of the lens, as follows:

x*_i = (lb + ub)/2 + (lb + ub)/(2k) − x_i/k

where k is the scaling factor, whose value affects the quality of the generated opposite solution: the smaller the value of k, the larger the range of the generated opposite solution; the larger the value of k, the smaller the range that can be provided. Considering that the algorithm performs a more global search in the early stage and more exact exploitation in the later stage, a dynamically adjustable scaling factor formula is proposed, in which the exponent constant is set to 3.
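The three opposition operators above can be sketched as follows (a minimal sketch; the tent-map parameter 0.7 is a typical choice, not taken from the paper, and the paper's dynamic schedule for k is not reproduced):

```python
import numpy as np

def basic_opposition(x, lb, ub):
    """Classical opposition-based learning: x_opp = lb + ub - x."""
    return lb + ub - x

def tent_map(z, beta=0.7):
    """One step of the tent chaotic map (beta = 0.7 is illustrative)."""
    return z / beta if z < beta else (1.0 - z) / (1.0 - beta)

def lens_opposition(x, lb, ub, k):
    """Lens-imaging opposition; larger k shrinks the region of the opposite
    solution, and k = 1 reduces to basic opposition."""
    mid = (lb + ub) / 2.0
    return mid + mid / k - x / k
```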
3.2. Adaptive Spiral Search Strategy (ASS)
In the local exploitation phase of AOA, positions are updated randomly around the optimal individual, which is beneficial to fast convergence; however, when the optimal solution falls into a local optimum, it easily drags the other individuals into the same local optimum. To protect AOA from falling into local optima, a spiral foraging strategy, inspired by the tuna swarm optimization algorithm [31], is introduced into the addition and subtraction operations of AOA. The algorithm randomly selects either the original strategy or the spiral foraging strategy to update each individual's position. In the spiral foraging strategy, a constant with a value of 0.7 and a random number uniformly distributed from 0 to 1 are used, and the reference point denotes either the optimal individual or a randomly generated individual in the search space: in the global exploration phase of AOA, the spiral search needs to cover a wider space, so the reference point is a randomly generated individual; in the later local exploitation phase, the spiral search focuses around the optimal individual, so the reference point takes the position of the optimal individual. Each individual chooses either the original search strategy or the spiral search strategy at each iteration according to a fixed probability.
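The spiral move can be sketched using the tuna swarm optimization formulas that the paper cites [31] (a sketch of the strategy, not the paper's exact equations; `x_ref` is the best or a random individual, and `a = 0.7` is the TSO weight constant):

```python
import numpy as np

def spiral_update(x_i, x_ref, t, T, a=0.7, rng=None):
    """TSO-style spiral search around a reference point x_ref."""
    rng = np.random.default_rng() if rng is None else rng
    alpha1 = a + (1.0 - a) * t / T            # weight toward the reference point
    alpha2 = (1.0 - a) - (1.0 - a) * t / T    # weight on the individual itself
    b = rng.random()
    # spiral shape parameter l and coefficient beta, as in TSO
    l = np.exp(3.0 * np.cos(((T + 1.0 / max(t, 1)) - 1.0) * np.pi))
    beta = np.exp(b * l) * np.cos(2.0 * np.pi * b)
    return alpha1 * (x_ref + beta * np.abs(x_ref - x_i)) + alpha2 * x_i
```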
3.3. Adaptive Cosine Acceleration Function (ACA)
When AOA uses MOA to switch between "global exploration" and "local exploitation", the probability of local exploitation in the later stage of the search is lower than that of global exploration, which weakens the local exploitation ability of the algorithm in the later stage and is not conducive to optimization. On the other hand, the AOA is nonlinear in the evolutionary exploration process, and the linear growth of MOA cannot accurately approximate the actual iterative process, so a cosine control factor is introduced to make the change of MOA nonlinear, which matches the actual iterative process of the algorithm more closely.
Figure 1 shows the comparison between the original MOA and the improved MOA. From the figure, we can see that the improved MOA maintains a large value at the beginning of the iteration, which enables the algorithm to perform an adequate global search; in the later part of the iteration, the MOA rapidly decreases to a smaller value, which increases the local exploitation probability and improves the convergence speed of the algorithm.
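The contrast in Figure 1 can be reproduced with a simple cosine-shaped schedule (illustrative only; the paper's exact Equation (15) is not reproduced here, but this form matches the described behavior of staying high early and dropping quickly late):

```python
import numpy as np

def moa_linear(t, T, lo=0.2, hi=0.9):
    """Original linearly increasing MOA of the basic AOA."""
    return lo + t * (hi - lo) / T

def moa_cosine(t, T, lo=0.2, hi=0.9):
    """A cosine-shaped schedule: equals hi at t = 0 and decays to lo at t = T,
    staying large early and decreasing rapidly late in the run."""
    return lo + (hi - lo) * np.cos(np.pi * t / (2.0 * T))
```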
3.4. Offset Distribution Estimation Strategy (ODE)
Analyzing the basic AOA, we can see that each individual mainly follows the optimal individual for position updating. When the optimal individual falls into a local optimum, the rest of the individuals fall into the same local optimum, and there is a lack of mutual information exchange between individuals. In order to enhance population diversity, strengthen the information exchange among individuals, and improve the algorithm's ability to find the optimum, this paper introduces the offset distribution estimation strategy. The distribution estimation strategy represents the relationship between individuals through a probabilistic model [32,33]. Assuming that the problem model obeys a multivariate Gaussian probability distribution, the distribution model is computed using half of the individuals of the current population, and new offspring are sampled to drive the optimization process. The basic computational process can be divided into the following four steps:
- (1)
Set the algorithm parameters and initialize the population;
- (2)
Evaluate the solutions according to the objective function values;
- (3)
Select the partially optimal solutions to compute the Gaussian probability distribution model;
- (4)
Sample the new population according to the updated probability model; repeat step 2 until the end condition is satisfied.
The strategy uses the current dominant population to calculate the probability distribution model, generates new offspring populations by sampling from this model, and finally obtains the optimal solution through continuous iteration. In this paper, the better-performing half of the population is selected for the estimation, and the mathematical model of this strategy is described as follows.
where Cov is the weighted covariance matrix of the dominant population, w_i is the weighting coefficient of the dominant population in descending order of fitness value, and X_mean is the weighted mean position of the dominant population. By considering the optimal individual information, the weighted information of the dominant population, and its own information, the evolutionary direction of the population is corrected to improve the search performance of the algorithm.
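The four steps above can be sketched as follows (a sketch under stated assumptions: the rank-based log weights are borrowed from common estimation-of-distribution practice, not taken from the paper's exact weighting):

```python
import numpy as np

def ode_sample(X, fitness, rng=None):
    """Fit a weighted Gaussian to the better half of the population and
    sample one offspring per parent (offset distribution estimation sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    order = np.argsort(fitness)                # minimization: best first
    elite = X[order[: n // 2]]                 # dominant half of the population
    m = elite.shape[0]
    # rank-based weights, larger for better-ranked individuals (assumed form)
    w = np.log(m + 0.5) - np.log(np.arange(1, m + 1))
    w /= w.sum()
    mean = w @ elite                           # weighted mean position
    diff = elite - mean
    cov = (w[:, None] * diff).T @ diff + 1e-10 * np.eye(d)  # weighted covariance
    return rng.multivariate_normal(mean, cov, size=n)
```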
3.5. Boundary Control Strategy
When an agent's position goes beyond the search space, the usual practice is to reinitialize the individual at the boundary, but this tends to pile up multiple agents at the boundary position, which is not conducive to exploration of the whole search space. In order to increase the search range of each agent, this paper proposes a randomized boundary control strategy, which randomly regenerates the dimensional information in the whole search space when a dimension of the agent exceeds the search boundary. The specific mathematical model is as follows:

x_{i,j} = lb_j + rand × (ub_j − lb_j), if x_{i,j} < lb_j or x_{i,j} > ub_j

where x_{i,j} denotes the j-th dimension of individual i.
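This boundary rule can be sketched in a few lines (function name is illustrative):

```python
import numpy as np

def random_boundary_control(X, lb, ub, rng=None):
    """Re-draw only the out-of-range dimensions uniformly in [lb, ub]
    instead of clamping them to the boundary (Sec. 3.5)."""
    rng = np.random.default_rng() if rng is None else rng
    X = np.asarray(X, dtype=float).copy()
    out = (X < lb) | (X > ub)                  # mask of violating dimensions
    X[out] = lb + rng.random(out.sum()) * (ub - lb)
    return X
```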
3.6. Pseudo-Code of ASFAOA
The pseudo code of the ASFAOA is shown in Algorithm 1.
Algorithm 1 Pseudo-code of the ASFAOA algorithm. |
1 | Initialize the Arithmetic Optimization Algorithm parameters α, µ |
2 | Initialize the parameters a |
3 | Initialize the solutions’ positions randomly. (Solutions: i = 1,...,Np) |
4 | while (t < tmax) do |
5 | Calculate the Fitness Function for the given solutions |
6 | Find the best solution (Determined best so far). |
7 | Update the MOA value using Equation (15). |
8 | Update the MOP value using Equation (4). |
9 | Update the k value using Equation (10). |
10 | Update the Cov value using Equations (18)~(20). |
11 | for (i = 1 to Solutions) do |
12 | Update positions by Equations (7) and (9) |
13 | Generate random values in [0, 1] (r1, r2, and r3) |
14 | if r1 > MOA then |
15 | if r2 > 0.5 then |
16 | Update positions by Equation (3) |
17 | else |
18 | Update positions by Equation (16) |
19 | end if |
20 | else |
21 | if r3 > 0.5 then |
22 | Update positions by Equation (5) |
23 | else |
24 | Update positions by Equation (11) |
25 | end if |
26 | end if |
27 | Update positions by Equation (21) |
28 | end for |
29 | t = t + 1 |
30 | end while |
31 | Return the best solution. |
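The control flow of Algorithm 1 can be sketched as a runnable skeleton (a structural sketch only: the individual update equations are replaced by simplified AOA-style moves with an illustrative cosine MOA, so this mirrors the loop structure, not ASFAOA's exact equations):

```python
import numpy as np

def asfaoa_sketch(f, dim, lb, ub, n_pop=30, t_max=200, seed=0):
    """Structural sketch of the ASFAOA main loop in Algorithm 1."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((n_pop, dim)) * (ub - lb)
    fit = np.apply_along_axis(f, 1, X)
    best, best_fit = X[fit.argmin()].copy(), fit.min()
    for t in range(1, t_max + 1):
        moa = 0.2 + 0.7 * np.cos(np.pi * t / (2 * t_max))  # stand-in for Eq. (15)
        mop = 1.0 - (t / t_max) ** (1.0 / 5.0)             # Eq. (4)
        scale = (ub - lb) * 0.499 + lb
        for i in range(n_pop):
            r1, r2, r3 = rng.random(3)
            if r1 > moa:                      # exploration branch
                if r2 > 0.5:
                    cand = best / (mop + 1e-12) * scale
                else:
                    cand = best * mop * scale * rng.random(dim)
            else:                             # exploitation branch
                if r3 > 0.5:
                    cand = best - mop * scale * rng.random(dim)
                else:
                    cand = best + mop * scale * rng.random(dim)
            # random boundary control (Sec. 3.5)
            outb = (cand < lb) | (cand > ub)
            cand[outb] = lb + rng.random(outb.sum()) * (ub - lb)
            cf = f(cand)
            if cf < fit[i]:                   # greedy replacement
                X[i], fit[i] = cand, cf
            if cf < best_fit:
                best, best_fit = cand.copy(), cf
    return best, best_fit
```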
3.7. The Computational Complexity of ASFAOA
The time complexity of AOA, as given in the literature [25], is O(N × D × T). Of the five improvement strategies proposed in this paper, ASS and ACA do not change the time complexity. The covariance matrix computation of ODE has a time complexity of O(N × D^2) per iteration, and DOL has O(N × D). Therefore, the overall time complexity of ASFAOA is O(T × N × D^2).
4. Experimental Results and Discussion
To demonstrate the superiority of the proposed ASFAOA, two different experiments are conducted in this section, including CEC2017 benchmark function test and wireless sensor coverage optimization. During the experiments, the proposed approach is also compared with the current well-known techniques. All experimental results and discussions validate the competitive performance of the proposed ASFAOA in solving various optimization problems.
4.1. CEC2017 Benchmark Functions Test
In this section, the performance of the proposed ASFAOA is evaluated on the CEC2017 test functions. Many meta-heuristic algorithms design their update formulas around classical test functions and thereby achieve better optimization performance on them. Compared with these functions, CEC2017 has a more complex structure and is more difficult to solve, which allows a better validation of the algorithm's performance. The details of the test functions are presented first. Then, six metaheuristics are used to illustrate the performance of the proposed improved algorithm. The convergence accuracy of the algorithms is analyzed in two respects: the mean and the standard deviation. The mean value is the average solution obtained from these tests; the standard deviation reflects the dispersion of the optimal solutions. In addition, statistical methods such as the Wilcoxon signed-rank test and the Friedman ranking test were used to confirm significant differences between ASFAOA and the other algorithms. Convergence curves and box plots are used for a visual description of the optimization effect.
4.1.1. Experimental Settings
The IEEE CEC2017 test suite includes 28 benchmark functions whose feasible range is [−100, 100]. The specific content of CEC2017 is shown in Table 1, where Fi* denotes the theoretical optimum of each function.
In this test, six typical swarm intelligence algorithms are selected for the comparison: the whale optimization algorithm (WOA) [34], sine cosine algorithm (SCA) [35], Harris hawks optimization (HHO) [36], sparrow search algorithm (SSA) [37], tunicate swarm algorithm (TSA) [38], and butterfly optimization algorithm (BOA) [39]. The parameters of these algorithms are displayed in Table 2.
To perform a proper comparison, the number of iterations and population size were set to 500 and 600, respectively. Each algorithm was tested 51 times independently to obtain reliable statistical results.
4.1.2. 30D Functions Test Results and Analysis
In this study, ASFAOA was compared with six swarm intelligence algorithms on the CEC2017 functions with D = 30. The specific results of the experiment are shown in Table 3. These data were calculated from the optimal values obtained by solving each function 51 times. On each function, all algorithms are ranked: the smaller the obtained optimal value, the better the ranking. The table mainly includes the mean value, the standard deviation, and the ranking based on these two indicators. The last two rows of Table 3 give the average and final ranking of all algorithms. The proposed ASFAOA obtained the best average ranking value of 1.04. HHO obtained the next-best ranking value of 2.71, and SSA had the worst ranking of 6.86. It is worth noting that AOA scored 5.61, much worse than ASFAOA. Specifically, ASFAOA outperformed AOA on all functions and outperformed the rest of the comparison algorithms on 27 of the 28.
Table 4 illustrates the p-values calculated by the Wilcoxon signed-rank test for each function and each algorithm. If the value is less than 0.05, there is a significant difference between ASFAOA and the other competitor; otherwise, there is no significant difference. It can be seen that ASFAOA is significantly different from the other algorithms for most functions. The last row of Table 4 summarizes the comparisons of ASFAOA with the other methods. ASFAOA performs worse than BOA on F19 and better than the other methods on the remaining 27 functions. It is noteworthy that ASFAOA outperforms AOA on all functions. Furthermore, the average ranking results obtained by the various methods according to the Friedman test are shown in Figure 2. As shown in Figure 2, ASFAOA obtained the best average ranking value of 1.03, HHO ranked second with a value of 2.71, WOA and SCA followed HHO, and AOA ranked sixth. The results of the Friedman test further prove that ASFAOA outperforms the other algorithms with significant advantages.
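The Wilcoxon signed-rank comparison used here can be sketched as follows (a self-contained normal-approximation implementation with synthetic run data for illustration; the paper's actual measurements are not reproduced, and ties are handled without rank averaging):

```python
import numpy as np
from math import erf, sqrt

def wilcoxon_signed_rank(x, y):
    """Two-sided Wilcoxon signed-rank test via the normal approximation,
    adequate for 51 paired runs; zero differences are dropped."""
    d = np.asarray(x, float) - np.asarray(y, float)
    d = d[d != 0.0]
    n = d.size
    ranks = np.argsort(np.argsort(np.abs(d))) + 1.0   # ranks of |d|
    w = min(ranks[d > 0].sum(), ranks[d < 0].sum())   # smaller signed-rank sum
    mu = n * (n + 1) / 4.0
    sigma = sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w - mu) / sigma                              # z <= 0 by construction
    p = 2.0 * 0.5 * (1.0 + erf(z / sqrt(2.0)))        # two-sided p-value
    return w, min(p, 1.0)
```

A p-value below 0.05 is then read as a significant difference between the two algorithms on that function.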
The convergence curves of all algorithms on the different functions are shown in Figure 3. It can be seen that the enhanced ASFAOA performs the best compared to the other methods. Specifically, ASFAOA has the fastest convergence speed and better convergence accuracy on F1–F5, F7, F9–F12, F15–F18, and F20–F28. On the remaining functions, ASFAOA achieves higher convergence accuracy in the later phase although its convergence speed is slower in the early phase. Generally, the convergence performance of ASFAOA is better than the comparison algorithms, especially AOA. This can be attributed to the following factors: on the one hand, the offset distribution estimation strategy guides the search agents to evolve towards more promising regions at a faster rate; on the other hand, the double-opposition learning strategy and the spiral search strategy help to further improve the accuracy of the solution as well as the population diversity. In addition, the modification of MOA better balances the exploitation and exploration capabilities of the algorithm.
To analyze the distribution characteristics of the solutions found by ASFAOA, box plots were drawn based on the results of the 51 independent runs of each algorithm, as shown in Figure 4. For each algorithm, the center mark of each box indicates the median of the 51 results, the bottom and top edges of the box indicate the first and third quartiles, and the symbol "+" indicates outliers that lie outside the box. We can see from Figure 4 that ASFAOA has no outliers on nine of the test functions (F1, F2, F8, F11, F14, F16, F24–F26), which indicates that the solutions obtained by ASFAOA are very concentrated. On the other test functions with outliers (F2, F4–F5, F7–F8, F12–F13, F15–F18, F20–F21, F24, F26), ASFAOA has a smaller median, which indicates that the quality of its solutions is relatively better. Therefore, the improved algorithm proposed in this paper is robust.
4.1.3. Analysis of ASFAOA Improvement Strategies
In this paper, the proposed improvement of the basic AOA consists of four parts: the offset distribution estimation strategy (ODE), the adaptive cosine acceleration function (ACA), the adaptive spiral search strategy (ASS), and the double-opposition learning strategy (DOL). To evaluate the effectiveness of the different modification strategies, we construct four variants of ASFAOA, each using one modification strategy, as shown in Table 5. ASFAOA-1 utilizes the DOL strategy to improve algorithmic performance. ASFAOA-2 serves to evaluate the effectiveness of the ASS strategy. ASFAOA-3 incorporates the ODE strategy. ASFAOA-4 utilizes the ACA strategy to balance the exploitation and exploration capabilities. The performance of the six algorithms was compared using the CEC2017 test suite, with each function run independently 51 times.
Table 6 lists the average error results for each algorithm, and the last row gives the Friedman test results for the six algorithms.
Significantly, ASFAOA with the complete improvement strategy performed the best, with a Friedman ranking of 1.04. The four derived algorithms, each having one of the strategies, also ranked better than the basic AOA, with rankings of 4.04, 3.29, 3.00, and 4.07, respectively. Hence, it can be concluded that the impact of the four modifications on performance, in descending order, is: ODE > ASS > DOL > ACA. ASFAOA-3 performs the best among the four derived algorithms, showing that ODE can effectively improve performance: it generates offspring by utilizing the overall distribution information of the dominant population, which effectively avoids the defect that the population only follows the optimal individual and falls into local optima. ASFAOA-2 performs similarly to ASFAOA-3 and ranks next. This is due to the ASS strategy, which randomly selects an individual as a reference point in the early stage, effectively broadening the search range and enhancing the algorithm's ability to solve multimodal functions; in the later period, the optimal individual is selected as the reference point, and the search range is adaptively narrowed to ensure convergence efficiency. ASFAOA-1 strengthens population diversity by generating opposite individuals, and the experimental results illustrate that this strategy is effective. ASFAOA-4 achieves improved performance by simply modifying the control parameters of the original algorithm, suggesting that the ACA strategy strikes a certain balance between exploitation and exploration behaviors.
4.2. Wireless Sensor Coverage Optimization Test
In this section, the performance of the proposed ASFAOA is evaluated on the wireless sensor coverage optimization problem. The details of the problem are presented first. Then, the superior performance of the proposed algorithm is illustrated against the better-performing comparison algorithms from Section 4.1. The convergence accuracy of the algorithm is analyzed in terms of the optimal value, the mean value, and the standard deviation.
4.2.1. Mathematical Models
In the wireless sensor network, the set of homogeneous wireless sensor nodes is S = {s1, s2, …, sn}; the sensing radius is Rs, and the monitoring area is a rectangular region of L × W. For calculation purposes, the rectangular area is discretized into m × n grids of equal area, with a monitoring point located at the geometric center of each grid. If the distance between a monitoring point and any node is less than or equal to the sensing radius Rs, the monitoring point is considered to be covered by the wireless sensor network. The set of monitoring points is P = {p1, p2, …, p_{m×n}}. (xi, yi) and (xj, yj) correspond to the two-dimensional spatial coordinates of node si and monitoring point pj, respectively. The Euclidean distance between the two is:

d(si, pj) = sqrt((xi − xj)^2 + (yi − yj)^2)

The probability of monitoring point pj being sensed by node si is defined as:

p(si, pj) = 1 if d(si, pj) ≤ Rs, and 0 otherwise
The wireless sensor coverage problem can be solved using either a multi-objective or a single-objective optimization algorithm, depending on the factors considered [40,41]. In this paper, we focus on verifying the superiority of the proposed algorithm and hence solve the problem with coverage as the sole objective. We define the area coverage Rcov of all sensor nodes in the target monitoring environment as the ratio of the area covered by the set of sensor nodes to the area of the monitoring region.
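The grid-based coverage objective described above can be sketched as follows (function name and grid resolution are illustrative; the binary disc sensing model follows the definition in this section):

```python
import numpy as np

def coverage_ratio(nodes, rs, width, height, grid=100):
    """Fraction of grid-cell centres covered by at least one sensor:
    a centre counts as covered if some node lies within radius rs."""
    xs = (np.arange(grid) + 0.5) * width / grid
    ys = (np.arange(grid) + 0.5) * height / grid
    gx, gy = np.meshgrid(xs, ys)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)   # monitoring points
    # pairwise distances between monitoring points and sensor nodes
    d = np.linalg.norm(pts[:, None, :] - nodes[None, :, :], axis=2)
    covered = (d <= rs).any(axis=1)
    return covered.mean()
```

This is the fitness function the compared algorithms would maximize by moving the node coordinates.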
4.2.2. Simulation and Analysis
Two sets of experiments were designed to verify the ASFAOA performance. To make the experimental data more convincing, the simulation experiments for each algorithm were conducted 30 times independently, and the optimal value, mean and standard deviation were taken as statistics for comparison. In each experiment set, all parameter settings are the same for all four methods. The algorithms involved in the simulations include ASFAOA, AOA, HHO, and WOA.
Case 1

In Case 1, the monitoring area is a 10 m × 10 m two-dimensional square plane. The number of sensor nodes is 25, the sensing radius is 1 m, and the communication radius is 2 m. Table 7 shows the statistics of the optimization results of each algorithm. Figure 5 shows the iterative convergence curves of coverage optimization, and Figure 6 shows the WSN node deployment after optimization by each algorithm.
As can be seen from Table 7, ASFAOA achieves a 15.7%, 8.26%, and 8.26% improvement in coverage compared with the optimized results of AOA, HHO, and WOA, respectively. Additionally, ASFAOA has better stability. From Figure 5, it can be observed that ASFAOA does not fall into a local optimum and achieves a better coverage solution in the end, although its convergence speed is slow in the early phase. From the optimized node deployments in Figure 6, the node deployment after AOA optimization has large coverage blind areas, while the result of ASFAOA optimization distributes the sensor nodes more uniformly, which verifies the effectiveness of the improved strategy.
Case 2
In Case 2, the monitoring area is a 50 m × 50 m two-dimensional plane. The number of sensor nodes is 35, the sensing radius is 2.5 m, and the communication radius is 5 m. Table 8 records the optimization results for Case 2. The coverage convergence curve of each algorithm is shown in Figure 7, and the node deployment scheme of each algorithm is shown in Figure 8.
According to the analysis of Table 8, ASFAOA maintains its high performance in Case 2, finally achieving an average coverage rate of 83.93%. Compared with the optimization results of AOA, HHO, and WOA, its coverage improves by 14.53%, 5.11%, and 5.11%, respectively. In addition, the smallest standard deviation among the compared algorithms indicates the better stability of ASFAOA. As can be seen from Figure 7, ASFAOA effectively avoids falling into local optima and achieves a better coverage solution. Figure 8 shows that the optimal coverage solution obtained by ASFAOA distributes the nodes more uniformly. The final solution given by ASFAOA improves the coverage rate from 65% to 83.93%, which indicates that the algorithm proposed in this paper has excellent search capability.
5. Conclusions
In this paper, we have proposed an improved variant of AOA, named ASFAOA, for global optimization problems. The convergence accuracy and convergence speed of ASFAOA are supported by the dual-opposition learning strategy, the adaptive spiral search strategy, and the offset distribution estimation strategy. To validate and analyze the superiority of ASFAOA, a large number of experiments were conducted, including mean analysis, convergence analysis, stability analysis, and statistical tests on the CEC 2017 test suite, as well as two wireless sensor coverage problems of different scales. The results and discussion validate the rationality and usability of the improved strategies, and the application of ASFAOA to the wireless sensor coverage problem verifies its capability to solve practical optimization problems.
In the future, we will focus on two directions. One is to further investigate the internal mechanisms of ASFAOA with the aim of reducing its computational complexity and improving its performance. The other is to develop a multi-objective version of ASFAOA for solving more practical problems, such as robot path planning, optimal control of electric vehicle composite braking, multilevel thresholding image segmentation, and UAV mission planning.
Author Contributions
Conceptualization, S.Y.; methodology, S.Y.; software, S.Y. and L.Z.; validation, X.Y., J.S. and W.D.; formal analysis, S.Y.; investigation, X.Y.; resources, L.Z.; data curation, J.S.; writing—original draft preparation, S.Y.; writing—review and editing, L.Z.; visualization, W.D.; supervision, S.Y.; project administration, S.Y. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Data Availability Statement
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Tang, A.D.; Tang, S.Q.; Han, T.; Zhou, H.; Xie, L. A Modified Slime Mould Algorithm for Global Optimization. Comput. Intell. Neurosci. 2021, 2021, 2298215. [Google Scholar] [CrossRef] [PubMed]
- Fang, H.; Fu, X.; Zeng, Z.; Zhong, K.; Liu, S. An Improved Arithmetic Optimization Algorithm and Its Application to Determine the Parameters of Support Vector Machine. Mathematics 2022, 10, 2875. [Google Scholar] [CrossRef]
- Ghith, E.S.; Tolba, F.A. Tuning PID controllers based on Hybrid Arithmetic optimization algorithm and Artificial Gorilla troop optimization for Micro-Robotics systems. IEEE Access 2023, 11, 27138–27154. [Google Scholar] [CrossRef]
- Issa, M. Enhanced Arithmetic Optimization Algorithm for Parameter Estimation of PID Controller. Arab. J. Sci. Eng. 2023, 48, 2191–2205. [Google Scholar] [CrossRef] [PubMed]
- Sharma, A.; Khan, R.A.; Sharma, A.; Kashyap, D.; Rajput, S. A novel opposition-based arithmetic optimization algorithm for parameter extraction of PEM fuel cell. Electronics 2021, 10, 2834. [Google Scholar] [CrossRef]
- Xie, L.; Wang, S.; Zhu, D.; Hu, G.; Zhou, C. DNA Sequence Optimization Design of Arithmetic Optimization Algorithm Based on Billiard Hitting Strategy. Interdiscip. Sci. Comput. Life Sci. 2023, 15, 231–248. [Google Scholar] [CrossRef]
- Abualigah, L.; Elaziz, M.A.; Yousri, D.; Al-qaness, M.A.A.; Ewees, A.A.; Zitar, R.A. Augmented arithmetic optimization algorithm using opposite-based learning and lévy flight distribution for global optimization and data clustering. J. Intell. Manuf. 2022, 1–39. [Google Scholar] [CrossRef]
- Abualigah, L.; Almotairi, K.H.; Al-qaness, M.A.A.; Ewees, A.A.; Yousri, D.; Elaziz, M.A.; Nadimi-Shahraki, M.H. Efficient text document clustering approach using multi-search Arithmetic Optimization Algorithm. Knowl. Based Syst. 2022, 248, 108833. [Google Scholar] [CrossRef]
- Izci, D. A novel modified arithmetic optimization algorithm for power system stabilizer design. Sigma J. Eng. Nat. Sci. Sigma Mühendislik Fen Bilim. Derg. 2022, 40, 529–541. [Google Scholar] [CrossRef]
- Ibrahim, R.A.; Abualigah, L.; Ewees, A.A.; Al-Qaness, M.A.A.; Yousri, D.; Alshathri, S.; Elaziz, M.A. An electric fish-based arithmetic optimization algorithm for feature selection. Entropy 2021, 23, 1189. [Google Scholar] [CrossRef]
- Montoya, O.D.; Giral-Ramírez, D.A.; Hernández, J.C. Efficient Integration of PV Sources in Distribution Networks to Reduce Annual Investment and Operating Costs Using the Modified Arithmetic Optimization Algorithm. Electronics 2022, 11, 1680. [Google Scholar] [CrossRef]
- Mohammed Ridha, H.; Hizam, H.; Mirjalili, S.; Lutfi Othman, M.; Effendy Ya’acob, M.; Ahmadipour, M. Novel parameter extraction for Single, Double, and three diodes photovoltaic models based on robust adaptive arithmetic optimization algorithm and adaptive damping method of Berndt-Hall-Hall-Hausman. Sol. Energy 2022, 243, 35–61. [Google Scholar] [CrossRef]
- Abbassi, A.; Ben Mehrez, R.; Bensalem, Y.; Abbassi, R.; Kchaou, M.; Jemli, M.; Abualigah, L.; Altalhi, M. Improved Arithmetic Optimization Algorithm for Parameters Extraction of Photovoltaic Solar Cell Single-Diode Model. Arab. J. Sci. Eng. 2022, 47, 10435–10451. [Google Scholar] [CrossRef]
- Wang, R.B.; Wang, W.F.; Xu, L.; Pan, J.S.; Chu, S.C. An Adaptive Parallel Arithmetic Optimization Algorithm for Robot Path Planning. J. Adv. Transp. 2021, 13, 889–945. [Google Scholar] [CrossRef]
- Bhat, S.J.; Santhosh, K.V. A localization and deployment model for wireless sensor networks using arithmetic optimization algorithm. Peer-to-Peer Netw. Appl. 2022, 15, 1473–1485. [Google Scholar] [CrossRef]
- Abd Elaziz, M.; Abualigah, L.; Ibrahim, R.A.; Attiya, I. IoT Workflow Scheduling Using Intelligent Arithmetic Optimization Algorithm in Fog Computing. Comput. Intell. Neurosci. 2021, 2021, 9114113. [Google Scholar] [CrossRef] [PubMed]
- Abualigah, L.; Diabat, A.; Sumari, P.; Gandomi, A.H. A novel evolutionary arithmetic optimization algorithm for multilevel thresholding segmentation of covid-19 ct images. Processes 2021, 9, 1155. [Google Scholar] [CrossRef]
- Dhawale, P.G.; Kamboj, V.K.; Bath, S.K. A levy flight based strategy to improve the exploitation capability of arithmetic optimization algorithm for engineering global optimization problems. Trans. Emerg. Telecommun. Technol. 2023, 34, e4739. [Google Scholar] [CrossRef]
- Zhang, Y.J.; Yan, Y.X.; Zhao, J.; Gao, Z.M. AOAAO: The Hybrid Algorithm of Arithmetic Optimization Algorithm With Aquila Optimizer. IEEE Access 2022, 10, 10907–10933. [Google Scholar] [CrossRef]
- Izci, D.; Ekinci, S.; Kayri, M.; Eker, E. A novel improved arithmetic optimization algorithm for optimal design of PID controlled and Bode’s ideal transfer function based automobile cruise control system. Evol. Syst. 2022, 13, 453–468. [Google Scholar] [CrossRef]
- Chen, M.; Zhou, Y.; Luo, Q. An Improved Arithmetic Optimization Algorithm for Numerical Optimization Problems. Mathematics 2022, 10, 2152. [Google Scholar] [CrossRef]
- Izci, D.; Ekinci, S.; Eker, E.; Abualigah, L. Opposition-Based Arithmetic Optimization Algorithm with Varying Acceleration Coefficient for Function Optimization and Control of FES System. In Proceedings of the International Joint Conference on Advances in Computational Intelligence, Online, 23–24 October 2022. [Google Scholar]
- Zhang, J.; Zhang, G.; Huang, Y.; Kong, M. A Novel Enhanced Arithmetic Optimization Algorithm for Global Optimization. IEEE Access 2022, 10, 75040–75062. [Google Scholar] [CrossRef]
- Abualigah, L.; Ewees, A.A.; Al-qaness, M.A.A.; Elaziz, M.A.; Yousri, D.; Ibrahim, R.A.; Altalhi, M. Boosting arithmetic optimization algorithm by sine cosine algorithm and levy flight distribution for solving engineering optimization problems. Neural Comput. Appl. 2022, 34, 8823–8852. [Google Scholar] [CrossRef]
- Çelik, E. IEGQO-AOA: Information-Exchanged Gaussian Arithmetic Optimization Algorithm with Quasi-opposition learning. Knowl. Based Syst. 2023, 260, 110169. [Google Scholar] [CrossRef]
- Özmen, H.; Ekinci, S.; Izci, D. Boosted arithmetic optimization algorithm with elite opposition-based pattern search mechanism and its promise to design microstrip patch antenna for WLAN and WiMAX. Int. J. Model. Simul. 2023, 1–16. [Google Scholar] [CrossRef]
- Zheng, R.; Jia, H.; Abualigah, L.; Liu, Q.; Wang, S. An improved arithmetic optimization algorithm with forced switching mechanism for global optimization problems. Math. Biosci. Eng. 2022, 19, 473–512. [Google Scholar] [CrossRef]
- Ekinci, S.; Izci, D.; Al Nasar, M.R.; Abu Zitar, R.; Abualigah, L. Logarithmic spiral search based arithmetic optimization algorithm with selective mechanism and its application to functional electrical stimulation system control. Soft Comput. 2022, 26, 12257–12269. [Google Scholar] [CrossRef]
- Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The Arithmetic Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
- Tang, A.; Zhou, H.; Han, T.; Xie, L. A Chaos Sparrow Search Algorithm with Logarithmic Spiral and Adaptive Step for Engineering Problems. Comput. Model. Eng. Sci. 2021, 130, 331–364. [Google Scholar] [CrossRef]
- Xie, L.; Han, T.; Zhou, H.; Zhang, Z.-R.; Han, B.; Tang, A. Tuna Swarm Optimization: A Novel Swarm-Based Metaheuristic Algorithm for Global Optimization. Comput. Intell. Neurosci. 2021, 2021, 9210050. [Google Scholar] [CrossRef]
- Tang, A.; Zhou, H.; Han, T.; Xie, L. A modified manta ray foraging optimization for global optimization problems. IEEE Access 2021, 9, 128702–128721. [Google Scholar] [CrossRef]
- Tang, A.D.; Han, T.; Zhou, H.; Xie, L. An improved equilibrium optimizer with application in unmanned aerial vehicle path planning. Sensors 2021, 21, 1814. [Google Scholar] [CrossRef]
- Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
- Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
- Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Futur. Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
- Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
- Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
- Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
- Sun, Z.; Zhang, Y.; Nie, Y.; Wei, W.; Lloret, J.; Song, H. CASMOC: A novel complex alliance strategy with multi-objective optimization of coverage in wireless sensor networks. Wirel. Networks 2017, 23, 1201–1222. [Google Scholar] [CrossRef]
- Wang, Z.; Xie, H.; He, D.; Chan, S. Wireless sensor network deployment optimization based on two flower pollination algorithms. IEEE Access 2019, 7, 180590–180608. [Google Scholar] [CrossRef]
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).