Article

An Improved Arithmetic Optimization Algorithm for Numerical Optimization Problems

1
College of Artificial Intelligence, Guangxi University for Nationalities, Nanning 530006, China
2
Guangxi Key Laboratories of Hybrid Computation and IC Design Analysis, Nanning 530006, China
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(12), 2152; https://doi.org/10.3390/math10122152
Submission received: 28 May 2022 / Revised: 16 June 2022 / Accepted: 17 June 2022 / Published: 20 June 2022
(This article belongs to the Special Issue Optimization Algorithms: Theory and Applications)

Abstract: The arithmetic optimization algorithm (AOA) is a recently proposed metaheuristic algorithm. In this paper, an improved arithmetic optimization algorithm (IAOA) based on a population control strategy is introduced to solve numerical optimization problems. By classifying the population and adaptively controlling the number of individuals in each subpopulation, the information of every individual can be used effectively, which accelerates convergence to the optimal value, avoids premature convergence to local optima, and improves solution accuracy. The performance of the proposed IAOA is evaluated on six systems of nonlinear equations, ten numerical integrations, and engineering problems. The results show that the proposed algorithm outperforms the comparison algorithms in terms of convergence speed, convergence accuracy, stability, and robustness.

1. Introduction

In practical scientific and engineering calculations, many mathematical problems arise, such as nonlinear equation systems (NESs) and numerical integration. Numerous methods exist for solving NESs, including traditional techniques and intelligent optimization algorithms. Traditional techniques use gradient information [1], such as Newton's method [2,3], the quasi-Newton method [4], and the steepest descent method. Because they depend on the choice of initial point and are prone to falling into local optima, these methods cannot obtain high-quality solutions for some problems. Metaheuristic algorithms, in contrast, place few requirements on the initial point and offer a wide search range, high efficiency, and robustness, breaking through the limitations of the traditional methods. In recent years, metaheuristic algorithms have made great contributions to solving NESs (Karr et al. [5]; Ouyang et al. [6]; Jaberipour et al. [7]; Pourjafari et al. [8]; Jia et al. [9]; Ren et al. [10]; Cai et al. [11]; Abdollahi et al. [12]; Hirsch et al. [13]; Sacco et al. [14]; Gong et al. [15]; Ariyaratne et al. [16]; Gong et al. [17]; Ibrahim et al. [18]; Liao et al. [19]; Ning et al. [20]; Rizk-Allah et al. [21]; Ji et al. [22]; Turgut et al. [23]).
Numerical integration is a very basic computational problem. It is well known that, when the antiderivative of the integrand can easily be written down, a definite integral can be solved by the Newton-Leibniz formula. However, this approach has many limitations: in many practical problems, the original function of the integrand cannot be expressed, or the calculation is too complicated, so the definite integral is instead approximated by a suitable finite sum. Traditional numerical integration methods include the trapezoidal method, rectangle method, Romberg method, Gauss method, Simpson's method, Newton's method, etc. These methods all divide the integration interval into equal parts, and their computational efficiency is not high. It is therefore of great significance to find a new technique with fast convergence, high precision, and strong robustness for numerical integration. Zhou et al. [24] solved numerical integration based on the evolutionary strategy method. Wei et al. [25] researched a numerical integration method based on particle swarm optimization. Wei et al. [26] solved numerical integration based on functional networks. Deng et al. [27] solved numerical integration problems based on the differential evolution algorithm. Xiao et al. [28] applied an improved bat algorithm to numerical integration. The quality of the solutions obtained by these techniques is higher than that of the traditional methods.
Engineering optimization problems have long been a popular area of research. Owing to their great practical significance, metaheuristic algorithms have been widely applied to engineering optimization problems, such as the automatic tuning of controller coefficients (Szczepanski et al. [29]; Hu et al. [30]), system identification (Szczepanski et al. [31]; Liu et al. [32]), global path planning (Szczepanski et al. [33]; Brand et al. [34]), and robotic arm scheduling (Szczepanski et al. [35]; Kolakowska et al. [36]).
The Arithmetic Optimization Algorithm (AOA) [37] is a novel metaheuristic algorithm proposed by Abualigah et al. in 2021. AOA is a mathematical-model technique that simulates the behavior of the arithmetic operators (Multiplication, Division, Subtraction, and Addition) and their influence on the best local solution. Several improvements and practical applications of the algorithm have been made. Premkumar et al. [38] proposed a multi-objective arithmetic optimization algorithm (MOAOA) for solving real-world multi-objective CEC-2021-constrained optimization problems. Bansal et al. [39] used a binary arithmetic optimization algorithm for integrated features and feature selection. Agushaka et al. [40] introduced an advanced arithmetic optimization algorithm for solving mechanical engineering design problems. Abualigah et al. [41] presented a novel evolutionary arithmetic optimization algorithm for multilevel thresholding segmentation. Xu et al. [42] hybridized an extreme learning machine with a developed version of the arithmetic optimization algorithm for model identification of proton exchange membrane fuel cells. Izci et al. [43] introduced an improved arithmetic optimization algorithm for the optimal design of PID controllers. Khatir et al. [44] proposed an improved artificial neural network using the arithmetic optimization algorithm for damage assessment.
The basic AOA still has some drawbacks: because the position update is based on the optimal value, it easily falls into local optima, and it suffers from premature convergence and low solution accuracy. To address these issues and to seek a more efficient way to solve numerical problems, this paper proposes an improved arithmetic optimization algorithm (IAOA) based on a population control strategy. By classifying the population and adaptively controlling the number of individuals in each subpopulation, the information of every individual can be used effectively while increasing population diversity. In the early iterations, more individuals perform a large-scale search, which avoids falling into local optima; in the later iterations, more individuals search around the optimal value, which speeds up convergence and improves solution accuracy. The performance of the proposed IAOA is evaluated on six systems of nonlinear equations, ten integrations, and engineering problems. The results show that the proposed algorithm outperforms the other algorithms in terms of convergence speed, convergence accuracy, stability, and robustness.
The main structure of this paper is as follows. Section 2 reviews the relevant knowledge for the nonlinear equation systems, integration, and basic arithmetic optimization algorithm (AOA). Section 3 introduces the proposed IAOA in detail. Section 4 presents experimental results, comparisons, and analyses. Section 5 concludes the work and proposes future research directions.

2. Preliminaries

2.1. Nonlinear Equation Systems

Generally, a nonlinear equation system can be formulated as follows.
NES:
f_1(x_1, x_2, …, x_D) = 0
…
f_i(x_1, x_2, …, x_D) = 0
…
f_n(x_1, x_2, …, x_D) = 0
where x is a D-dimensional decision variable, and n is the number of equations. Some of the equations may be linear and the others nonlinear. If x* satisfies fi(x*) = 0 for i = 1, …, n, then x* is a root of the system of equations.
Before using the optimization algorithm to solve the NES, first is to convert it into a single-objective optimization problem [17] as follows.
min f(x) = Σ_{i=1}^{n} f_i²(x),  x = (x_1, x_2, …, x_i, …, x_D)
Finding the minimum of an optimization problem is equivalent to finding the root of the NES.
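As a concrete illustration, the conversion of Equation (2) takes only a few lines; the two-equation system below is a hypothetical example, not one of the paper's benchmark problems.

```python
def nes_objective(equations, x):
    """f(x) = sum_i f_i(x)^2 -- zero exactly at a root of the system."""
    return sum(f(x) ** 2 for f in equations)

# Example system: x0 + x1 - 3 = 0,  x0 * x1 - 2 = 0  (roots: (1, 2) and (2, 1))
eqs = [lambda x: x[0] + x[1] - 3,
       lambda x: x[0] * x[1] - 2]

print(nes_objective(eqs, [1.0, 2.0]))  # 0.0 at a root
print(nes_objective(eqs, [0.0, 0.0]))  # 13.0 away from a root
```

Any minimizer driving this objective to zero is a root of the original system, which is why the optimization algorithms below can be applied directly.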

2.2. Numerical Integration

The definite integral is a very basic mathematical calculation problem, formulated as follows.
∫_a^b f(x) dx
where f(x) represents the integrand function, and a and b represent the lower and upper integration bounds, respectively.
Usually, firstly, we find the original function F(x) of the integrand when finding a definite integral and then use the Newton-Leibniz formula as follows:
∫_a^b f(x) dx = F(b) − F(a),  (F′(x) = f(x))
However, in many cases, it is difficult to obtain the original function F(x), so the Newton-Leibniz formula cannot be used.
In addition, the remaining numerical quadrature methods are based on quadrature formulas that divide the interval into equidistant nodes and sum, or they require the equidistant nodes to remain unchanged throughout the calculation, as shown in Figure 1a. Many nodes are then needed to obtain high accuracy. However, the best segmentation is generally not a set of predetermined equidistant points, as shown in Figure 1b. Randomly generated subintervals have unequal widths that follow the concave and convex changes of the function curve, so the obtained value is more accurate than with the traditional methods. Based on this idea, there is another integration method based on non-equidistant partitioning [24]: first, generate some points randomly on the integration interval; then, use an optimization algorithm to optimize these split points; finally, a higher-accuracy value is obtained. This method can compute not only the definite integral of an ordinary function but also the integral of singular functions and oscillatory functions [27]. The flow of the numerical integration algorithm based on unequal partitioning is as follows [24].
(1)
Randomly initialize the population in the search space S.
(2)
Sort the nodes of each individual in the integral interval in ascending order. The interval then has n (n = D + 2) nodes and n − 1 segments. Calculate the distance hi between each pair of adjacent nodes and the function value f(xk) at each of the D + 2 nodes, as well as the function value at the midpoint of each segment. For each segment, find the minimum value wj and the maximum value Wj (j = 1, 2, …, D + 1) among the function values at its left endpoint, midpoint, and right endpoint.
(3)
Calculate the fitness value: F(i) = (1/2) Σ_{j=1}^{D+1} h_j |W_j − w_j|.
(4)
Update individuals through an optimization algorithm.
(5)
Repeat step 4 until reaching the stop condition.
(6)
Get the accuracy and integral values.
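Steps (2)-(3) above can be sketched directly; the quadratic integrand and the node lists in the example are illustrative choices, not the paper's test cases.

```python
def integration_fitness(f, nodes):
    """Steps (2)-(3): for each segment, take the min (w_j) and max (W_j) of f
    at the left endpoint, midpoint and right endpoint, then accumulate
    F = (1/2) * sum_j h_j * |W_j - w_j|."""
    xs = sorted(nodes)
    F = 0.0
    for left, right in zip(xs, xs[1:]):
        h = right - left
        vals = (f(left), f((left + right) / 2), f(right))
        F += 0.5 * h * (max(vals) - min(vals))
    return F

f = lambda x: x * x
coarse = [0.0, 0.5, 1.0]                 # few nodes -> large error bound
fine = [i / 100 for i in range(101)]     # many nodes -> small error bound
print(integration_fitness(f, coarse), integration_fitness(f, fine))
```

The fitness shrinks as the segmentation adapts, which is exactly the quantity the optimizer in step (4) drives toward zero.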
The numerical integration method based on Hermite interpolation only needs the function values at the integration nodes and has high precision. However, it is based on equidistant segmentation. In this paper, the adaptivity of unequally spaced partitioning is combined with the Hermite-interpolation-based numerical integration method to solve the numerical integration problem, and the formula is as follows:
∫_a^b f(x) dx = Σ_{k=1}^{n−1} (h_k/2) [f(x_k) + f(x_{k+1})]
        − (25/144) Σ_{i=1}^{n−1} h_i [f(a) + f(b)] / (n − 1)
        + (1/3) Σ_{i=1}^{n−1} h_i [f(a + h_i) + f(b − h_i)] / (n − 1)
        − (1/4) Σ_{i=1}^{n−1} h_i [f(a + 2h_i) + f(b − 2h_i)] / (n − 1)
        + (1/9) Σ_{i=1}^{n−1} h_i [f(a + 3h_i) + f(b − 3h_i)] / (n − 1)
        − (1/48) Σ_{i=1}^{n−1} h_i [f(a + 4h_i) + f(b − 4h_i)] / (n − 1)
where n is the number of random split points, hi is the distance between two adjacent points, and f(x) is the integrand function. The advantage of this method is that it does not need to calculate the derivative value and only needs to provide the node function value. Before using the optimization algorithm to solve the integration, the first step is to convert it into a single-objective optimization problem as follows:
min F(x) = | ∫_a^b f(x) dx − E |
where ∫_a^b f(x) dx is obtained by Equation (5), and E denotes the exact value.
Combine the optimization algorithm with Equation (5), and the whole solution process is as follows.
(1)
Randomly initialize the population in the search space S.
(2)
Sort the nodes of each individual in the integral interval in ascending order. The interval then has n (n = D + 2) nodes and n − 1 segments. Calculate the distance hi between adjacent nodes and the function value f(xk) at each node, and then substitute them into Equation (5).
(3)
Calculate the fitness value by Equation (6).
(4)
Update individuals through an optimization algorithm.
(5)
Repeat step 4 until reaching the stop condition.
(6)
Get the accuracy and integral values.
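Equation (5) can be sketched in code. The extracted formula leaves some indices ambiguous, so the reading below — leading term as the trapezoid sum over the n − 1 segments, each correction sum running over the segment widths h_i and divided by n − 1 — is an assumption; the test function is illustrative.

```python
import math

def hermite_quadrature(f, nodes):
    """A sketch of Equation (5) on a sorted node list x_1..x_n (assumed
    index conventions, see the lead-in)."""
    xs = sorted(nodes)
    a, b, n = xs[0], xs[-1], len(xs)
    hs = [r - l for l, r in zip(xs, xs[1:])]             # h_i, i = 1 .. n-1
    # leading term: trapezoid sum over the segments
    total = sum(h / 2 * (f(l) + f(r)) for h, l, r in zip(hs, xs, xs[1:]))
    # correction terms of Equation (5): (coefficient, offset multiple k)
    for coef, k in ((-25 / 144, 0), (1 / 3, 1), (-1 / 4, 2), (1 / 9, 3), (-1 / 48, 4)):
        total += coef * sum(h * (f(a + k * h) + f(b - k * h)) for h in hs) / (n - 1)
    return total

# for equally spaced nodes on [0, 1] the result agrees closely with 1 - cos(1)
print(hermite_quadrature(math.sin, [i / 200 for i in range(201)]))
```

Under this reading, the correction terms cancel the leading trapezoid error using only endpoint-neighborhood function values, which matches the text's claim that no derivative values are needed.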

2.3. The Arithmetic Optimization Algorithm (AOA)

The AOA algorithm is a population-based metaheuristic algorithm to solve optimization problems by utilizing mathematical operators (Multiplication (“×”), Division (“÷”), Subtraction (“−”), and Addition (“+”)). The specific description is as follows.

2.3.1. Initialization Phase

Generate a candidate solution matrix randomly.
X = [ x_{1,1}    …  x_{1,j}    …  x_{1,n−1}    x_{1,n}
      x_{2,1}    …  x_{2,j}    …  x_{2,n−1}    x_{2,n}
      ⋮              ⋮               ⋮             ⋮
      x_{N−1,1}  …  x_{N−1,j}  …  x_{N−1,n−1}  x_{N−1,n}
      x_{N,1}    …  x_{N,j}    …  x_{N,n−1}    x_{N,n} ]
After the initialization step, calculate the Math Optimizer Accelerated (MOA) function and use it to choose between exploration and exploitation. The function is as follows:
MOA(t) = Min + t × (Max − Min) / T
where Max = 0.9 denotes the maximum and Min = 0.2 the minimum of the function value, MOA(t) represents the function value at the current iteration, and T and t represent the maximum number of iterations and the current iteration, respectively.

2.3.2. Exploration Phase

During the exploration phase, the operators Multiplication ("×") and Division ("÷") are used to explore the search space randomly when r1 > MOA (see Algorithm 1). The mathematical model is as follows:
x_{i,j}(t + 1) = best(x_j) ÷ (MOP + ε) × ((UB_j − LB_j) × μ + LB_j),   r2 < 0.5
x_{i,j}(t + 1) = best(x_j) × MOP × ((UB_j − LB_j) × μ + LB_j),         otherwise
where r2 is a random number, x_{i,j}(t + 1) represents the jth position of the ith solution in the (t + 1)th iteration, best(x_j) denotes the jth position in the global optimal solution, ε is a small positive number that avoids division by zero, UB_j and LB_j represent the upper and lower bounds of each dimension, respectively, and μ is equal to 0.5. The Math Optimizer probability (MOP) is as follows:
MOP(t) = 1 − t^(1/α) / T^(1/α)
where MOP(t) represents the function value at the current iteration, and α is a sensitive parameter equal to 5.
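The two control schedules of Equations (8) and (10) are easy to evaluate; the helper names below are illustrative, not from the paper.

```python
def moa(t, T, mn=0.2, mx=0.9):
    # Equation (8): grows linearly from about mn toward mx over the run
    return mn + t * (mx - mn) / T

def mop(t, T, alpha=5):
    # Equation (10): decays from about 0.65 down to 0 as t approaches T
    return 1 - t ** (1 / alpha) / T ** (1 / alpha)

T = 200
print(moa(1, T), moa(T, T))   # rising schedule: exploitation becomes more likely
print(mop(1, T), mop(T, T))   # falling schedule: step sizes shrink over time
```

Because MOA rises while MOP falls, the algorithm probabilistically shifts from exploration to exploitation while simultaneously annealing its step size.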

2.3.3. Exploitation Phase

During the exploitation phase, the operators Subtraction ("−") and Addition ("+") are used to execute the exploitation when r1 ≤ MOA (see Algorithm 1). The mathematical model is as follows:
x_{i,j}(t + 1) = best(x_j) − MOP × ((UB_j − LB_j) × μ + LB_j),   r3 < 0.5
x_{i,j}(t + 1) = best(x_j) + MOP × ((UB_j − LB_j) × μ + LB_j),   otherwise
where r3 is a random number. The pseudo-code of the AOA is as follows (Algorithm 1) [37].
Algorithm 1 AOA
1. Set up the initial parameters α, μ.
2. Initialize the population randomly.
3. for t = 1: T
4.   Calculate the fitness function and select the best solution.
5.   Update the MOA (using Equation (8)) and MOP (using Equation (10)).
6.   for i = 1: N
7.     for j = 1: Dim
8.       Generate random values r1, r2, r3 in [0, 1].
9.       if r1 > MOA
10.        if r2 > 0.5
11.          Update the position of the individual by the Division operator of Equation (9).
12.        else
13.          Update the position of the individual by the Multiplication operator of Equation (9).
14.        end
15.      else
16.        if r3 > 0.5
17.          Update the position of the individual by the Subtraction operator of Equation (11).
18.        else
19.          Update the position of the individual by the Addition operator of Equation (11).
20.        end
21.      end
22.    end
23.  end
24. end
25. Return the best solution (x).
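Algorithm 1 can be condensed into a short sketch. The bounds, test function, clipping step, and the per-dimension random draws are illustrative implementation choices, not prescriptions from the paper.

```python
import random

def aoa(obj, dim, lb, ub, N=30, T=200, alpha=5, mu=0.5, eps=1e-12):
    """Minimal AOA following Algorithm 1 and Equations (8)-(11)."""
    X = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(N)]
    best = min(X, key=obj)[:]
    best_fit = obj(best)
    for t in range(1, T + 1):
        moa = 0.2 + t * (0.9 - 0.2) / T                  # Equation (8)
        mop = 1 - t ** (1 / alpha) / T ** (1 / alpha)    # Equation (10)
        scale = (ub - lb) * mu + lb
        for i in range(N):
            for j in range(dim):
                r1, r2, r3 = random.random(), random.random(), random.random()
                if r1 > moa:                             # exploration, Equation (9)
                    X[i][j] = best[j] / (mop + eps) * scale if r2 > 0.5 \
                              else best[j] * mop * scale
                else:                                    # exploitation, Equation (11)
                    X[i][j] = best[j] - mop * scale if r3 > 0.5 \
                              else best[j] + mop * scale
                X[i][j] = min(max(X[i][j], lb), ub)      # keep within [lb, ub]
            fit = obj(X[i])
            if fit < best_fit:
                best, best_fit = X[i][:], fit
    return best, best_fit

sphere = lambda x: sum(v * v for v in x)   # illustrative test function
best, fit = aoa(sphere, dim=5, lb=-5.0, ub=10.0)
```

Note how every update is anchored at best(x): this is exactly the behavior that Section 3 identifies as the cause of diversity loss and stagnation.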

3. Our Proposed IAOA

3.1. Motivation for Improving the AOA

In AOA, the population is updated based on the global optimal solution. Once the algorithm falls into a local optimum, the entire population stagnates, and premature convergence occurs in some cases [33]. In addition, the algorithm does not fully utilize the information of the individuals in the population. Therefore, to make full use of individual information and address these weaknesses, the improved arithmetic optimization algorithm (IAOA) is proposed in this paper.

3.2. Population Control Mechanism

In the basic arithmetic optimization algorithm (AOA), the operators (Multiplication ("×"), Division ("÷"), Subtraction ("−"), and Addition ("+")) search randomly around an optimal solution, which leads to a loss of population diversity. It is therefore necessary to classify the population.

3.2.1. The First Subpopulation

Sort the population according to the fitness value and select the first num_best individuals as the first subpopulation:
num_best = round(0.1N + 0.5N(1 − t/T))
where N is the number of individuals, and t and T represent the current iteration and the maximum number of iterations, respectively. These individuals then update their positions by exchanging information with each other. The mathematical model is as follows:
x_best_i(t + 1) = x_best_i(t) + rand × (best(x) − (x_best_i(t) + x_best_j(t))/2) × ω
x_best_j(t + 1) = x_best_j(t) + rand × (best(x) − (x_best_i(t) + x_best_j(t))/2) × ω
where x_best_i(t + 1) denotes the position of the ith individual in the next iteration (likewise for x_best_j(t + 1)), best(x) represents the global optimum found after t iterations, x_best_j is selected randomly from the first subpopulation, and ω is the information acquisition rate, which takes the value 1 or 2.

3.2.2. The Second Subpopulation

Select num_middle individuals from the population as the second subpopulation.
num_middle = round(0.3 × N)
These individuals fall between num_best and num_worst in the population. Then, these individuals update their position, and the updated model is as follows:
x_mid_i(t + 1) = x_mid_i(t) + Levy × (best(x) − x_mid_j)
where xmid_i(t + 1) denotes the position of ith individual in the next iteration, Levy is the Levy distribution function [45,46], and xmid_j is selected from the second class randomly.
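The paper cites the Levy distribution [45,46] without giving a sampling formula; the Mantegna algorithm below is one common way to draw Levy-flight steps and is an assumption here, not necessarily the authors' exact choice.

```python
import math, random

def levy_step(beta=1.5):
    """One Levy-flight step via the Mantegna algorithm (assumed, beta = 1.5):
    mostly small moves with occasional long jumps."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

# Second-subpopulation update of Equation (16):
# x_mid_i(t+1) = x_mid_i(t) + levy_step() * (best(x) - x_mid_j)
print([round(levy_step(), 3) for _ in range(5)])
```

The heavy tail is the point: small steps refine promising areas while the rare long jumps let the second subpopulation escape toward new regions.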

3.2.3. The Third Subpopulation

Select num_worst individuals from the population as the final subpopulation.
num_worst = N − (num_best + num_middle)
In the final class, the individuals update their position by the following equation:
x_worst_i(t + 1) = x_worst_i + (t/T) × (best(x) − x_worst_j)
where xworst_i(t + 1) denotes the position of ith individual in the next iteration, and best(x) represents the global optimum that has been found through individuals after t iterations.
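The three group sizes of Equations (12), (15), and (17) can be tabulated directly; note that the remainder rule of Equation (17) guarantees the groups always cover the whole population.

```python
def subpopulation_sizes(N, t, T):
    """Group sizes per Equations (12), (15) and (17)."""
    num_best = round(0.1 * N + 0.5 * N * (1 - t / T))   # Equation (12)
    num_middle = round(0.3 * N)                         # Equation (15)
    num_worst = N - (num_best + num_middle)             # Equation (17)
    return num_best, num_middle, num_worst

N, T = 50, 200
print(subpopulation_sizes(N, 1, T))   # early: the first group is largest
print(subpopulation_sizes(N, T, T))   # late: the first group shrinks to 0.1*N
```

With N = 50 and T = 200 the first group shrinks from 30 to 5 individuals while the third grows from 5 to 30, which is the adaptive shift described in the next paragraph.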
In the early iterations of the IAOA, the first subpopulation contains more individuals, which speeds up the update of the global optimum. In the later iterations, the number of individuals in the first subpopulation decreases, which relieves the crowding of operators near the optimum, while the number of individuals in the third subpopulation increases, which effectively prevents the population from falling into a local optimum. The second subpopulation uses Levy flights for small-step updates to find more promising areas. This strategy effectively overcomes the weaknesses of the traditional AOA and improves its performance. The pseudo-code of the IAOA is given in Algorithm 2, and Figure 2 is the flowchart of the IAOA.
Algorithm 2 IAOA
1. Set up the initial parameters α, μ.
2. Initialize the population randomly.
3. for t = 1: T
4.   Calculate the fitness function and select the best solution.
5.   Calculate the number of individuals in the first subpopulation by Equation (12).
6.   Update the first subpopulation by Equations (13) and (14).
7.   Calculate the number of individuals in the second subpopulation by Equation (15).
8.   Update the second subpopulation by Equation (16).
9.   Calculate the number of individuals in the third subpopulation by Equation (17).
10.  Update the third subpopulation by Equation (18).
11.  Update the MOA (using Equation (8)) and MOP (using Equation (10)).
12.  for i = 1: N
13.    for j = 1: Dim
14.      Generate random values r1, r2, r3 in [0, 1].
15.      if r1 > MOA
16.        if r2 > 0.5
17.          Update the position of the individual by the Division operator of Equation (9).
18.        else
19.          Update the position of the individual by the Multiplication operator of Equation (9).
20.        end
21.      else
22.        if r3 > 0.5
23.          Update the position of the individual by the Subtraction operator of Equation (11).
24.        else
25.          Update the position of the individual by the Addition operator of Equation (11).
26.        end
27.      end
28.    end
29.  end
30. end
31. Return the best solution (x).
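Algorithm 2 can be sketched end to end by wiring the three subpopulation updates in front of the AOA operators. The Levy draw (Mantegna, β = 1.5), the per-dimension random numbers, the bounds, the test function, and drawing partners from within each group are assumptions filled in for illustration; the paper leaves these details open.

```python
import math, random

def levy(beta=1.5):
    # Mantegna-style Levy draw -- an assumed implementation (see Section 3.2.2)
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)

def iaoa(obj, dim, lb, ub, N=50, T=200, alpha=5, mu=0.5, eps=1e-12):
    """Sketch of Algorithm 2; assumes N is large enough that no group is empty."""
    clip = lambda v: min(max(v, lb), ub)
    X = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(N)]
    best = min(X, key=obj)[:]
    for t in range(1, T + 1):
        X.sort(key=obj)                                  # best individuals first
        nb = round(0.1 * N + 0.5 * N * (1 - t / T))      # Equation (12)
        nm = round(0.3 * N)                              # Equation (15)
        for i in range(N):
            if i < nb:                                   # Equations (13)/(14)
                j = random.randrange(nb)
                w = random.choice((1, 2))
                X[i] = [clip(X[i][d] + random.random()
                             * (best[d] - (X[i][d] + X[j][d]) / 2) * w)
                        for d in range(dim)]
            elif i < nb + nm:                            # Equation (16)
                j = nb + random.randrange(nm)
                X[i] = [clip(X[i][d] + levy() * (best[d] - X[j][d]))
                        for d in range(dim)]
            else:                                        # Equation (18)
                j = nb + nm + random.randrange(N - nb - nm)
                X[i] = [clip(X[i][d] + t / T * (best[d] - X[j][d]))
                        for d in range(dim)]
            if obj(X[i]) < obj(best):
                best = X[i][:]
        moa = 0.2 + t * 0.7 / T                          # Equation (8)
        mop = 1 - t ** (1 / alpha) / T ** (1 / alpha)    # Equation (10)
        scale = (ub - lb) * mu + lb
        for i in range(N):                               # AOA operators, Eqs (9)/(11)
            for d in range(dim):
                r1, r2, r3 = (random.random() for _ in range(3))
                if r1 > moa:
                    X[i][d] = best[d] / (mop + eps) * scale if r2 > 0.5 \
                              else best[d] * mop * scale
                else:
                    X[i][d] = best[d] - mop * scale if r3 > 0.5 \
                              else best[d] + mop * scale
                X[i][d] = clip(X[i][d])
            if obj(X[i]) < obj(best):
                best = X[i][:]
    return best, obj(best)

sphere = lambda x: sum(v * v for v in x)   # illustrative test function
best, fit = iaoa(sphere, dim=5, lb=-5.0, ub=10.0)
```

The sort at the top of each iteration is what assigns individuals to the three groups, so the group an individual belongs to changes from iteration to iteration with its fitness rank.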

4. Numerical Experiments and Analysis

4.1. Parameter Settings

Here, six groups of NESs and ten integration problems are used to demonstrate the efficiency of the IAOA. The IAOA is compared with several popular algorithms and two improved arithmetic optimization algorithms: the Arithmetic Optimization Algorithm (AOA) [37], Sine Cosine Algorithm (SCA) [47], Whale Optimization Algorithm (WOA) [48], Grey Wolf Optimizer (GWO) [49], Harris Hawks Optimization (HHO) [50], Slime Mould Algorithm (SMA) [51], Differential Evolution (DE) [52], Cuckoo Search Algorithm (CSA) [53], advanced arithmetic optimization algorithm (nAOA) [40], and a developed version of the Arithmetic Optimization Algorithm (dAOA) [42]. The parameters of these algorithms are all taken from their original versions. The algorithms are evaluated from four aspects: the average value, the optimal value, the worst value, and the standard deviation. All algorithms are implemented in MATLAB 2021a, running on a computer with a Windows 10 operating system, an Intel(R) Core(TM) i7-9700 CPU @ 3.00 GHz, and 16 GB of Random Access Memory (RAM), and each is run 30 times independently on all test problems. The flowchart of how the IAOA handles these problems is shown in Figure 3.

4.2. Application in Solving NESs

Solving nonlinear problems in practical applications often requires high-precision solutions. In this section, six nonlinear systems of equations are chosen to evaluate the performance of the IAOA. The characteristics of these systems differ from each other: problem01 [54] describes an interval arithmetic problem, problem02 [55] describes a multiple-steady-states problem, and problem06 [56] describes a molecular conformation; these problems come from real-world applications. For fairness, the population size is set to 50 and the maximum number of iterations to 200. Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 show all the test results for the NESs. Best represents the best value, Worst the worst value, Mean the mean value, Std the standard deviation, and p-value the Wilcoxon rank-sum test in Table 7. The Wilcoxon rank-sum test is used to verify whether there is a significant difference between two sets of data.
Problem 01. The description of the system is as follows [54]:
x1 − 0.25428722 − 0.18324757 x4 x3 x9 = 0
x2 − 0.37842197 − 0.16275449 x1 x10 x6 = 0
x3 − 0.27162577 − 0.16955071 x1 x2 x10 = 0
x4 − 0.19807914 − 0.15585316 x7 x1 x6 = 0
x5 − 0.44166728 − 0.19950920 x7 x6 x3 = 0
x6 − 0.14654113 − 0.18922793 x8 x5 x10 = 0
x7 − 0.42937161 − 0.21180486 x2 x5 x8 = 0
x8 − 0.07056438 − 0.17081208 x1 x7 x6 = 0
x9 − 0.34504906 − 0.19612740 x10 x6 x8 = 0
x10 − 0.42651102 − 0.21466544 x4 x8 x1 = 0
There are ten equations in the system, where xi ∈ [−2, 2], i = 1, …, n, and n = 10. The aim is to obtain a high-precision solution x = (x1, …, xn) with the proposed optimization method, and the results are recorded in Table 1. The IAOA performs better than the compared algorithms; the WOA ranks second, and the rest obtain competitive results. The convergence curve for this problem is shown in Figure 4a.
Problem 02. The description of the system is as follows [55]:
(1 − R)(D/10 − (1 + β1) x1) exp(10 x1 / (1 + 10 x1 / γ)) − x1 = 0
(1 − R)(D/10 − β1 x1 − (1 + β2) x2) exp(10 x2 / (1 + 10 x2 / γ)) + x1 − (1 + β2) x2 = 0
There are two equations in the system, where xi ∈ [0, 1], i = 1, …, n, and n = 2. The experimental results in Table 2 show that the proposed IAOA outperforms the other methods. The DE ranks second, and the rest obtain competitive results: the AOA, WOA, GWO, HHO, and CSA are in the third echelon, and the remainder are in the fourth echelon. The convergence curve for this problem is shown in Figure 4b.
Problem 03. The description of the system is as follows [13]:
sin(x1³) − 3 x1 x2² − 1 = 0
cos(3 x1² x2) − x2³ + 1 = 0
There are two equations in the system, where xi ∈ [−2, 2], i = 1, …, n, and n = 2. The simulation results for this problem, shown in Table 3, reveal that the IAOA is better than the other algorithms. The DE, CSA, and SMA are in the second echelon, and the rest are in the third echelon. The convergence curve for this problem is shown in Figure 4c.
Problem 04. The description of the system is as follows [54]:
x2 + 2x6 + x9 + 2x10 − 10⁻⁵ = 0
x3 + x8 − 3 × 10⁻⁵ = 0
x1 + x3 + 2x5 + 2x8 + x9 + x10 − 5 × 10⁻⁵ = 0
x4 + 2x7 − 10⁻⁵ = 0
0.5140437 × 10⁻⁷ x5 − x1² = 0
0.1006932 × 10⁻⁶ x6 − 2 x2² = 0
0.7816278 × 10⁻¹⁵ x7 − x4² = 0
0.1496236 × 10⁻⁶ x8 − x1 x3 = 0
0.6194411 × 10⁻⁷ x9 − x1 x2 = 0
0.2089296 × 10⁻¹⁴ x10 − x1 x2² = 0
There are ten equations in the system, where xi ∈ [−10, 10], i = 1, …, n, and n = 10. Table 4 shows that the IAOA outperforms the others, while the AOA, HHO, SMA, and nAOA obtain competitive results. The convergence curve for this problem is shown in Figure 4d.
Problem 05. The description of the system is as follows [17]:
0.5 sin(x1 x2) − 0.25 x2/π − 0.5 x1 = 0
(1 − 0.25/π)(exp(2 x1) − e) + e x2/π − 2 e x1 = 0
There are two equations in the system, where x1 ∈ [0.25, 1] and x2 ∈ [1.5, 2π]. In Table 5, the IAOA obtains the optimal solution, the DE obtains a suboptimal solution, and the rest of the algorithms obtain competitive results. The convergence curve for this problem is shown in Figure 4e.
Problem 06. The description of the system is as follows [56]:
β11 + β12 x2² + β13 x3² + β14 x2 x3 + β15 x2² x3² = 0
β21 + β22 x3² + β23 x1² + β24 x3 x1 + β25 x3² x1² = 0
β31 + β32 x1² + β33 x2² + β34 x1 x2 + β35 x1² x2² = 0
There are three equations in the system, where the details of βij can be found in the literature [56], xi ∈ [−20, 20], i = 1, …, n, and n = 3. In Table 6, the proposed IAOA outperforms the other algorithms, while the GWO, SMA, and DE obtain competitive results. The convergence curve for this problem is shown in Figure 4f.
The statistical results in Table 7 show that the IAOA outperforms all the other algorithms on the remaining problems, demonstrating that it has a stronger search ability and higher stability when solving nonlinear systems of equations. In Figure 4, for problem01, the IAOA converges more slowly than the others before the 110th iteration, but it then maintains a high convergence speed and reaches the optimum at the 200th iteration. For problem02 and problem03, the IAOA is the fastest throughout, reaching the optimum at the 120th iteration and before the 120th iteration, respectively. For problem04, the IAOA is slower than the other algorithms before 70 iterations, but it continues to converge afterwards and obtains the optimal value by 200 iterations. For problem05, the IAOA and DE have similar convergence rates, but the IAOA obtains a better value. For problem06, the IAOA converges more slowly than the others before 20 iterations, after which it achieves the fastest convergence rate. All the experimental results show that the proposed algorithm features fast convergence, high convergence accuracy, high solution quality, good stability, and strong robustness when handling nonlinear systems of equations. The p-values of almost all test functions in the table are less than 0.05, indicating that the IAOA is significantly different from the other algorithms.

4.3. Numerical Integration

The performance of the proposed method is evaluated in this section using the ten numerical integration problems in Table 8, where F08 is a singular integral and F10 is an oscillatory integral. The IAOA is compared with traditional methods and population-based algorithms on these cases. Table 9, Table 10, Table 11 and Table 12 show the best integral values obtained over 30 independent runs, where the R-method, T-method, S-method, H-method, G32, and 2n × L5 represent the traditional methods (rectangle method, trapezoid method, Simpson method, Hermite interpolation method, 32-point Gaussian formula, and 5-point Gauss-Roberto-Legendre formula). The rest are swarm intelligence approaches applied to numerical integration (the evolutionary strategy method [24], particle swarm optimization [25], the differential evolution algorithm [27], and the improved bat algorithm [28]). The population size and the maximum number of iterations are set to 30 and 200, respectively. In Table 9, for F01, the solution accuracy of the IAOA is higher than that of the other methods, while the S-method, FN, ES, DEBA, PSO, and DE obtain close results; for F02, the IAOA achieves the best result, and the FN, ES, DEBA, PSO, and DE are in the second echelon; for F03, the IAOA achieves a better result than the FN, ES, and PSO, with the MBFES, DEBA, and DE ranking third. In Table 10, for F04, the IAOA obtains an excellent result, and the FN, ES, DEBA, PSO, and DE obtain similar values; for F05, the IAOA ranks first, and the FN, ES, DEBA, PSO, and DE rank second; for F06, the IAOA, FN, and DE achieve competitive results. In Table 11, for F07–F09, the IAOA obtains the best values, and the FN, ES, and DEBA rank second. The traditional methods (R-method, T-method, and S-method) fail to solve F10; therefore, G32 and 2n × L5 are used for this problem. In Table 12, the IAOA and DEBA obtain similar values and rank first.
Table 13 and Table 14 present the statistical results obtained by the swarm intelligence algorithms on the numerical integration problems (F01–F10). For F01–F09, the IAOA is better than the other algorithms on all the assessment criteria (the best value, the worst value, the mean value, and the standard deviation). For F10, however, the IAOA is best only in terms of the best value; on the remaining criteria, the DEBA obtains the best results. As Figure 5 shows, the method proposed in this paper has the fastest convergence speed and the highest convergence accuracy on all problems except F10. The above experimental results demonstrate that the IAOA has a fast convergence speed, high solution accuracy, and strong robustness. These qualities make the IAOA well suited to numerical integration; applying it to the integration problems arising in practical engineering applications is therefore a worthwhile direction.

4.4. Solving an Engineering Problem

Compared with three-dimensional motion, planar motion restricts the robot to a single plane and is simpler to compute; moreover, most robot mechanisms can be simplified to planar mechanisms for analysis. The robotic arm now plays an increasingly important role and has attracted extensive attention from researchers. Improving the working efficiency of the robotic arm under the premise of low energy consumption is a challenging problem facing the industrial field [57]. The kinematics of the robotic arm comprises forward kinematics and inverse kinematics: the former determines the pose of the end effector from the rotation angle of each joint relative to the base coordinates, while the latter takes the end joint as the starting point and works back to the base coordinates to recover the joint angles. The inverse kinematics problem is essentially a nonlinear equation problem. In practical applications, the tasks performed by the robotic arm are usually described in its base coordinate system; the inverse kinematics solution is therefore particularly important in the field of control. The robotic arm model [58] is shown in Figure 6a, and its mathematical model in coordinates is shown in Figure 6b. The nonlinear equation system for this model is as follows.
$$
\begin{cases}
10{,}000\,\bigl(a\sin(A_2) - b\sin(A_2 + B_2) + c\sin(A_2 + B_2 + C_2) - X\bigr)^2 = 0\\
10{,}000\,\bigl(h - a\cos(A_2) - b\cos(A_2 + B_2) + c\cos(A_2 + B_2 + C_2) - Y\bigr)^2 = 0\\
(A_2 - A_1) + (B_2 - B_1) + (C_2 - C_1) = 0
\end{cases}
$$
where a = 16.5 cm, b = 7.9 cm, c = 5.3 cm, and h = 7.4 cm; ($A_1$ = 150°, $B_1$ = 132.7026°, $C_1$ = 127.0177°) are the initial angles of the three joints; (X = 10 cm, Y = 10 cm) is the coordinate of the end effector; and ($A_2$, $B_2$, $C_2$) are the three joint angles to be determined in the final stage. The first two equations of the system require that the end effector reach the target position (X, Y), and the third ensures that the change in the joint angles is the smallest, meeting the requirement of saving energy.
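The system above can be collapsed into the scalar fitness that a metaheuristic such as the IAOA minimizes. The sketch below is a minimal illustration, not the paper's implementation: folding the third (angle-change) equation into the sum of squares is an assumption, as are the sign conventions, since Equation (25) itself lies outside this excerpt; the random probe merely stands in for the optimizer.

```python
import math
import random

# Paper's constants: link lengths / base height (cm), initial joint angles,
# and the target end-effector position.
a, b, c, h = 16.5, 7.9, 5.3, 7.4
A1, B1, C1 = (math.radians(v) for v in (150.0, 132.7026, 127.0177))
X, Y = 10.0, 10.0

def fitness(A2, B2, C2):
    """Sum-of-squares objective; it is 0 exactly when all three equations hold."""
    r1 = a * math.sin(A2) - b * math.sin(A2 + B2) + c * math.sin(A2 + B2 + C2) - X
    r2 = h - a * math.cos(A2) - b * math.cos(A2 + B2) + c * math.cos(A2 + B2 + C2) - Y
    r3 = (A2 - A1) + (B2 - B1) + (C2 - C1)
    return 10_000 * r1 ** 2 + 10_000 * r2 ** 2 + r3 ** 2

# A metaheuristic just minimizes this scalar; a crude random probe of the
# search space stands in for the IAOA here.
random.seed(1)
best = min(
    fitness(*(random.uniform(-math.pi, math.pi) for _ in range(3)))
    for _ in range(5000)
)
print(best >= 0.0)  # a weighted sum of squares is never negative
```

Any candidate joint configuration can be scored this way, so the inverse kinematics task reduces to the same black-box minimization the IAOA performs on the other benchmarks.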
Table 15, Table 16, Table 17 and Table 18 demonstrate that the IAOA obtains the results closest to the initial angles compared with the PSO, GA, and PSSA when solving the inverse kinematics problem of the robotic arm. This shows that the method proposed in this paper allows the robotic arm to consume less energy during movement. In Table 19, f represents the fitness value obtained by Equation (25), reported together with the difference between the final and initial angle of each joint. The IAOA clearly achieves the best results on both evaluations. It is therefore of great significance for the stability, operating efficiency, operating accuracy, and energy consumption of robotic arm trajectory control, and it provides a new method for the inverse kinematics solution that makes up for the deficiencies of the traditional methods.

5. Conclusions and Future Works

In this paper, the shortcomings of the traditional AOA are analyzed, and an improved AOA based on a population control strategy is proposed to overcome them. The algorithm finds the global optimum faster by classifying the population and adaptively controlling the number of individuals in each subpopulation. This method effectively enhances the strength of information sharing between individuals, searches the space more thoroughly, avoids falling into local optima, accelerates the convergence process, and improves the optimization accuracy. The AOA, the IAOA, and several other algorithms are compared on 6 nonlinear systems of equations, 10 numerical integrations, and an engineering problem. The experimental results show that the IAOA solves these problems well and outperforms the other algorithms. In the future, the IAOA can be applied to more nonlinear problems in practical engineering applications; it can also be extended to multiple integrals, and the algorithm's performance can be further improved.

Author Contributions

Conceptualization and methodology, M.C. and Y.Z.; software, M.C.; writing—original draft preparation, M.C.; writing—review and editing, Y.Z. and Q.L.; and funding acquisition, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, Grant Nos. U21A20464 and 62066005.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Broyden, C.G. A class of methods for solving nonlinear simultaneous equations. Math. Comput. 1965, 19, 577–593. [Google Scholar] [CrossRef]
  2. Ramos, H.; Monteiro, M.T.T. A new approach based on the Newton's method to solve systems of nonlinear equations. J. Comput. Appl. Math. 2017, 318, 3–13. [Google Scholar] [CrossRef]
  3. Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Modified Newton's method for systems of nonlinear equations with singular Jacobian. J. Comput. Appl. Math. 2009, 224, 77–83. [Google Scholar] [CrossRef] [Green Version]
  4. Luo, Y.Z.; Tang, G.J.; Zhou, L.N. Hybrid approach for solving systems of nonlinear equations using chaos optimization and quasi-Newton method. Appl. Soft Comput. 2008, 8, 1068–1073. [Google Scholar] [CrossRef]
  5. Karr, C.L.; Weck, B.; Freeman, L.M. Solutions to systems of nonlinear equations via a genetic algorithm. Eng. Appl. Artif. Intell. 1998, 11, 369–375. [Google Scholar] [CrossRef]
  6. Ouyang, A.J.; Zhou, Y.Q.; Luo, Q.F. Hybrid particle swarm optimization algorithm for solving systems of nonlinear equations. In Proceedings of the 2009 IEEE International Conference on Granular Computing, Nanchang, China, 17–19 August 2009; pp. 460–465. [Google Scholar]
  7. Jaberipour, M.; Khorram, E.; Karimi, B. Particle swarm algorithm for solving systems of nonlinear equations. Comput. Math. Appl. 2011, 62, 566–576. [Google Scholar] [CrossRef] [Green Version]
  8. Pourjafari, E.; Mojallali, H. Solving nonlinear equations systems with a new approach based on invasive weed optimization algorithm and clustering. Swarm Evol. Comput. 2012, 4, 33–43. [Google Scholar] [CrossRef]
  9. Jia, R.M.; He, D.X. Hybrid artificial bee colony algorithm for solving nonlinear system of equations. In Proceedings of the 2012 Eighth International Conference on Computational Intelligence and Security, Guangzhou, China, 17–18 November 2012; pp. 56–60. [Google Scholar]
  10. Ren, H.M.; Wu, L.; Bi, W.H.; Argyros, I.K. Solving nonlinear equations system via an efficient genetic algorithm with symmetric and harmonious individuals. Appl. Math. Comput. 2013, 219, 10967–10973. [Google Scholar] [CrossRef]
  11. Cai, R.Z.; Yue, G.L. A novel firefly algorithm of solving nonlinear equation group. Appl. Mech. Mater. 2013, 389, 918–923. [Google Scholar]
  12. Abdollahi, M.; Isazadeh, A.; Abdollahi, D. Imperialist competitive algorithm for solving systems of nonlinear equations. Comput. Math. Appl. 2013, 65, 1894–1908. [Google Scholar] [CrossRef]
  13. Hirsch, M.J.; Pardalos, P.M.; Resende, M.G.C. Solving systems of nonlinear equations with continuous GRASP. Nonlinear Anal. Real World Appl. 2009, 10, 2000–2006. [Google Scholar] [CrossRef]
  14. Sacco, W.F.; Henderson, N. Finding all solutions of nonlinear systems using a hybrid metaheuristic with fuzzy clustering means. Appl. Soft Comput. 2011, 11, 5424–5432. [Google Scholar] [CrossRef]
  15. Gong, W.Y.; Wang, Y.; Cai, Z.H.; Yang, S. A weighted bi-objective transformation technique for locating multiple optimal solutions of nonlinear equation systems. IEEE Trans. Evol. Comput. 2017, 21, 697–713. [Google Scholar] [CrossRef] [Green Version]
  16. Ariyaratne, M.K.A.; Fernando, T.G.I.; Weerakoon, S. Solving systems of nonlinear equations using a modified firefly algorithm (MODFA). Swarm Evol. Comput. 2019, 48, 72–92. [Google Scholar] [CrossRef]
  17. Gong, W.Y.; Wang, Y.; Cai, Z.H.; Wang, L. Finding multiple roots of nonlinear equation systems via a repulsion-based adaptive differential evolution. IEEE Trans. Syst. Man Cybern. Syst. 2020, 50, 1499–1513. [Google Scholar] [CrossRef] [Green Version]
  18. Ibrahim, A.M.; Tawhid, M.A. A hybridization of differential evolution and monarch butterfly optimization for solving systems of nonlinear equations. J. Comput. Des. Eng. 2019, 6, 354–367. [Google Scholar] [CrossRef]
  19. Liao, Z.W.; Gong, W.Y.; Wang, L. Memetic niching-based evolutionary algorithms for solving nonlinear equation system. Expert Syst. Appl. 2020, 149, 113261. [Google Scholar] [CrossRef]
  20. Ning, G.Y.; Zhou, Y.Q. Application of improved differential evolution algorithm in solving equations. Int. J. Comput. Intell. Syst. 2021, 14, 199. [Google Scholar] [CrossRef]
  21. Rizk-Allah, R.M. A quantum-based sine cosine algorithm for solving general systems of nonlinear equations. Artif. Intell. Rev. 2021, 54, 3939–3990. [Google Scholar] [CrossRef]
  22. Ji, J.Y.; Man, L.W. An improved dynamic multi-objective optimization approach for nonlinear equation systems. Inf. Sci. 2021, 576, 204–227. [Google Scholar] [CrossRef]
  23. Turgut, O.E.; Turgut, M.S.; Coban, M.T. Chaotic quantum behaved particle swarm optimization algorithm for solving nonlinear system of equations. Comput. Math. Appl. 2014, 68, 508–530. [Google Scholar] [CrossRef]
  24. Zhou, Y.Q.; Zhang, M.; Zhao, B. Numerical integration of arbitrary functions based on evolutionary strategy method. Chin. J. Comput. 2008, 21, 196–206. [Google Scholar]
  25. Wei, X.Q.; Zhou, Y.Q. Research on numerical integration method based on particle swarm optimization. Microelectron. Comput. 2009, 26, 117–119. [Google Scholar]
  26. Wei, X.X.; Zhou, Y.Q.; Lan, X.L. Research on a numerical integration method based on functional networks. Comput. Sci. 2009, 36, 224–226. [Google Scholar]
  27. Deng, Z.X.; Huang, F.D.; Liu, X.J. A differential evolution algorithm for solving numerical integration problems. Comput. Eng. 2011, 37, 206–207. [Google Scholar]
  28. Xiao, H.H.; Duan, Y.M. Application of improved bat algorithm in numerical integration. J. Intell. Syst. 2014, 9, 364–371. [Google Scholar]
  29. Szczepanski, R.; Kaminski, M.; Tarczewski, T. Auto-tuning process of state feedback speed controller applied for two-mass system. Energies 2020, 13, 3067. [Google Scholar] [CrossRef]
  30. Hu, H.B.; Hu, Q.B.; Lu, Z.Y.; Xu, D. Optimal PID controller design in PMSM servo system via particle swarm optimization. In Proceedings of the 31st Annual Conference of IEEE Industrial Electronics Society, IECON 2005, Raleigh, NC, USA, 6–10 November 2005; p. 5. [Google Scholar]
  31. Szczepanski, R.; Tarczewski, T.; Niewiara, L.J.; Stojic, D. Identification of mechanical parameters in servo-drive system. In Proceedings of the 2021 IEEE 19th International Power Electronics and Motion Control Conference (PEMC), Gliwice, Poland, 25–29 April 2021; pp. 566–573. [Google Scholar]
  32. Liu, L.; Cartes, D.A.; Liu, W. Particle Swarm Optimization Based Parameter Identification Applied to PMSM. In Proceedings of the 2007 American Control Conference, New York, NY, USA, 9–13 July 2007; pp. 2955–2960. [Google Scholar]
  33. Szczepanski, R.; Tarczewski, T. Global path planning for mobile robot based on artificial bee colony and Dijkstra’s algorithms. In Proceedings of the 2021 IEEE 19th International Power Electronics and Motion Control Conference (PEMC), Gliwice, Poland, 25–29 April 2021; pp. 724–730. [Google Scholar]
  34. Brand, M.; Masuda, M.; Wehner, N.; Yu, X.H. Ant colony optimization algorithm for robot path planning. In Proceedings of the 2010 International Conference on Computer Design and Applications, Qinhuangdao, China, 25–27 June 2010; pp. 436–440. [Google Scholar]
  35. Szczepanski, R.; Erwinski, K.; Tejer, M.; Bereit, A.; Tarczewski, T. Optimal scheduling for palletizing task using robotic arm and artificial bee colony algorithm. Eng. Appl. Artif. Intell. 2022, 113, 104976. [Google Scholar] [CrossRef]
  36. Kolakowska, E.; Smith, S.F.; Kristiansen, M. Constraint optimization model of a scheduling problem for a robotic arm in automatic systems. Robot. Auton. Syst. 2014, 62, 267–280. [Google Scholar] [CrossRef]
  37. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  38. Premkumar, M.; Jangir, P.; Kumar, D.S.; Sowmya, R.; Alhelou, H.H.; Abualigah, L.; Yildiz, A.R.; Mirjalili, S. A new arithmetic optimization algorithm for solving real-world multi-objective CEC-2021 constrained optimization problems: Diversity analysis and validations. IEEE Access 2021, 9, 84263–84295. [Google Scholar] [CrossRef]
  39. Bansal, P.; Gehlot, K.; Singhal, A.; Gupta, A. Automatic detection of osteosarcoma based on integrated features and feature selection using binary arithmetic optimization algorithm. Multimed. Tools Appl. 2022, 81, 8807–8834. [Google Scholar] [CrossRef]
  40. Agushaka, J.O.; Ezugwu, A.E. Advanced arithmetic optimization algorithm for solving mechanical engineering design problems. PLoS ONE 2021, 16, e0255703. [Google Scholar]
  41. Abualigah, L.; Diabat, A.; Sumari, P.; Gandomi, A. A novel evolutionary arithmetic optimization algorithm for multilevel thresholding segmentation of COVID-19 CT images. Processes 2021, 9, 1155. [Google Scholar] [CrossRef]
  42. Xu, Y.P.; Tan, J.W.; Zhu, D.J.; Ouyang, P.; Taheri, B. Model identification of the proton exchange membrane fuel cells by extreme learning machine and a developed version of arithmetic optimization algorithm. Energy Rep. 2021, 7, 2332–2342. [Google Scholar] [CrossRef]
  43. Izci, D.; Ekinci, S.; Kayri, M.; Eker, E. A novel improved arithmetic optimization algorithm for optimal design of PID controlled and Bode’s ideal transfer function-based automobile cruise control system. Evol. Syst. 2021, 13, 453–468. [Google Scholar] [CrossRef]
  44. Khatir, S.; Tiachacht, S.; Thanh, C.L.; Ghandourah, E.; Mirjalili, S.; Wahab, M.A. An improved artificial neural network using arithmetic optimization algorithm for damage assessment in FGM composite plates. Compos. Struct. 2021, 273, 114287. [Google Scholar] [CrossRef]
  45. Viswanathan, G.M.; Afanasyev, V.; Buldyrev, S.; Murphy, E.J.; Prince, P.A.; Stanley, H.E. Lévy flight search patterns of wandering albatrosses. Nature 1996, 381, 413–415. [Google Scholar] [CrossRef]
  46. Humphries, N.E.; Queiroz, N.; Dyer, J.R.; Pade, N.G.; Musyl, M.K.; Schaefer, K.M.; Fuller, D.W.; Brunnschweiler, J.M.; Doyle, T.K.; Houghton, J.D.; et al. Environmental context explains Lévy and Brownian movement patterns of marine predators. Nature 2010, 465, 1066–1069. [Google Scholar] [CrossRef] [Green Version]
  47. Mirjalili, S. A sine cosine Algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  48. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  49. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  50. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  51. Li, S.M.; Chen, H.L.; Wang, M.J.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  52. Price, K.V. Differential evolution: A fast and simple numerical optimizer. In Proceedings of the North American Fuzzy Information Processing, Berkeley, CA, USA, 19–22 June 1996; pp. 524–527. [Google Scholar]
  53. Gandomi, A.H.; Yang, X.S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 17–35. [Google Scholar] [CrossRef]
  54. Grosan, C.; Abraham, A. A new approach for solving nonlinear equations systems. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2008, 38, 698–714. [Google Scholar] [CrossRef]
  55. Floudas, C.A. Recent advances in global optimization for process synthesis, design and control: Enclosure of all solutions. Comput. Chem. Eng. 1999, 23, S963–S973. [Google Scholar] [CrossRef]
  56. Nikkhah-Bahrami, M.; Oftadeh, R. An effective iterative method for computing real and complex roots of systems of nonlinear equations. Appl. Math. Comput. 2009, 215, 1813–1820. [Google Scholar] [CrossRef]
  57. Ding, X. Robot Control Research; Zhejiang University Press: Hangzhou, China, 2006; pp. 37–38. [Google Scholar]
  58. Xiang, Z.H.; Zhou, Y.Q.; Luo, Q.F.; Wen, C. PSSA: Polar coordinate salp swarm algorithm for curve design problems. Neural Process Lett. 2020, 52, 615–645. [Google Scholar] [CrossRef]
Figure 1. Two methods of segmentation when solving numerical integrals: (a) equidistant division and (b) non-equidistant division.
Figure 2. Flowchart of the IAOA.
Figure 3. Flowchart for handling issues.
Figure 4. Convergence curve for tackling the NES (problem01–06 (af)).
Figure 5. Convergence curve for the numerical integrations (F01–F10 (al)).
Figure 6. (a) The model of a robotic arm, and (b) a mathematical model for a robotic arm.
Table 1. Comparison of the experimental results for problem01.

| Variable | AOA | IAOA | SCA | WOA |
|---|---|---|---|---|
| x1 | 0.006361583402960 | 0.257838650825518 | 0.186732591196869 | 0.260832096649832 |
| x2 | 0.005731653837062 | 0.381098185347242 | 0.399818814038728 | 0.381680691118263 |
| x3 | 0.010586282003880 | 0.278742562628776 | 0.008959145137085 | 0.258353295805450 |
| x4 | 0.002593989505334 | 0.200665586275865 | 0.227237103605413 | 0.215307146397956 |
| x5 | 0.033520558095432 | 0.445255928027431 | 0.003829239926320 | 0.448797960971748 |
| x6 | 0.076424218265631 | 0.149188813621332 | 0.185905381801968 | 0.147397359179682 |
| x7 | 0.038862694473151 | 0.432010769672038 | 0.368813050526818 | 0.442390776062597 |
| x8 | −0.000004007877210 | 0.073406152818720 | 0.037739989370997 | 0.137586270569043 |
| x9 | 0.029054432130685 | 0.345966262513093 | 0.206476235144125 | 0.342058064566263 |
| x10 | 0.013690425703394 | 0.427324518269459 | 0.363350844915327 | 0.401475021739693 |
| f | 8.45665838921712 × 10−1 | 4.73405913551646 × 10−10 | 1.22078391539763 × 10−1 | 9.59544885085295 × 10−4 |

| Variable | GWO | HHO | DE | CSO |
|---|---|---|---|---|
| x1 | 0.256851024248810 | 0.324317023967532 | 2.000000000000000 | 0.089951372914250 |
| x2 | 0.383565743620699 | 0.303967192642514 | 1.948157453190990 | 0.309487131659014 |
| x3 | 0.278312335483674 | 0.216191961411362 | 2.000000000000000 | 0.456410156556233 |
| x4 | 0.198737300040942 | 0.305260974230829 | 1.815308511546580 | 0.356392775439902 |
| x5 | 0.446311619177502 | 0.325255783591842 | 2.000000000000000 | 0.476086684751138 |
| x6 | 0.145894138632280 | 0.223020351676054 | 2.000000000000000 | 0.078921332097133 |
| x7 | 0.145894138632280 | 0.323185143014029 | 2.000000000000000 | 0.499580490394335 |
| x8 | −0.007832029555062 | 0.327973609353822 | 1.915762141824520 | 0.197756675883883 |
| x9 | 0.343654620394334 | 0.333430854648433 | 2.000000000000000 | 0.228228833675487 |
| x10 | 0.425902664080806 | 0.324142888370713 | 2.000000000000000 | 0.470195948900759 |
| f | 1.25544451911646 × 10−3 | 7.79220329211044 × 10−2 | 7.96261500819178 × 10−2 | 6.61705221934444 × 10−2 |

| Variable | SMA | nAOA | dAOA |
|---|---|---|---|
| x1 | 0.249900132290417 | 0.035430633051580 | 1.840704485033870 |
| x2 | 0.375428314977531 | 0.053983062784772 | 1.213421005935260 |
| x3 | 0.272448580296318 | 0.072735305166021 | 1.203555993641700 |
| x4 | 0.199698265955405 | 0.021399042985613 | −0.393935624266822 |
| x5 | 0.425934189445810 | 0.064655913970964 | −0.249476549706985 |
| x6 | 0.057699959645613 | 0.012570281350831 | 0.459915310960444 |
| x7 | 0.431865275874618 | 0.057639809639213 | −0.675754718182326 |
| x8 | 0.015005640000641 | 0.005520004765830 | −0.895856414267328 |
| x9 | 0.347986992756388 | 0.041229484511092 | 0.359139808282465 |
| x10 | 0.415304164782275 | 0.079595719921909 | 1.529188120361250 |
| f | 4.47411205566240 × 10−3 | 6.74563715208325 × 10−1 | 1.91503507134915 |
Table 2. Comparison of the experimental results for problem02.

| Variable | AOA | IAOA | SCA | WOA |
|---|---|---|---|---|
| x1 | 0.040781958181860 | 0.042124781715274 | 0.000000000000000 | 0.041561373108785 |
| x2 | 0.268625655728691 | 0.061754610138946 | 0.266593748985495 | 0.268697327813652 |
| f | 2.01752031872803 × 10−7 | 9.24446373305873 × 10−34 | 8.82826387279195 × 10−5 | 6.92247231102962 × 10−9 |

| Variable | GWO | HHO | DE | CSO |
|---|---|---|---|---|
| x1 | 0.265622854930434 | 0.267855297066815 | 0.266589101862370 | 0.266620164671422 |
| x2 | 0.178718146817611 | 0.458749279058429 | 0.327275026016101 | 0.178514261126008 |
| f | 1.13985864694418 × 10−7 | 6.55986405733090 × 10−8 | 1.31654979128584 × 10−18 | 1.49504500886345 × 10−9 |

| Variable | SMA | nAOA | dAOA |
|---|---|---|---|
| x1 | 0.021419624272050 | 0.000000000000000 | 0.236558250181286 |
| x2 | 0.048075232460874 | 0.719124811309122 | 0.508933311549167 |
| f | 2.89316821274146 × 10−5 | 3.07109081317222 × 10−5 | 3.22387407689191 × 10−4 |
Table 3. Comparison of the experimental results for problem03.

| Variable | AOA | IAOA | SCA | WOA |
|---|---|---|---|---|
| x1 | 1.990744078311880 | −0.947268146986263 | −0.225974226141413 | −1.424482905343090 |
| x2 | 0.220001522814532 | −0.785020015568289 | 1.245763361231140 | −0.543544840817441 |
| f | 5.61739095968327 × 10−3 | 4.02151576372412 × 10−32 | 7.95691890654021 × 10−4 | 1.06331568826728 × 10−3 |

| Variable | GWO | HHO | DE | CSO |
|---|---|---|---|---|
| x1 | −1.794053112053940 | −1.495480498807310 | −1.791308474954350 | −0.212779003619775 |
| x2 | −0.303905803005920 | −0.420394691864127 | 0.301889327351144 | −1.257141525856050 |
| f | 2.77808608355359 × 10−5 | 6.12298193031725 × 10−5 | 1.84881969881973 × 10−9 | 6.26348225916795 × 10−7 |

| Variable | SMA | nAOA | dAOA |
|---|---|---|---|
| x1 | −1.791387180972800 | −1.475077261850100 | −1.580085715978880 |
| x2 | −0.302157020359872 | −0.454673564762598 | 0.465148476848022 |
| f | 5.47910691165820 × 10−8 | 2.17709293383390 × 10−4 | 5.12705019470938 × 10−2 |
Table 4. Comparison of the experimental results for problem04.

| Variable | AOA | IAOA | SCA | WOA |
|---|---|---|---|---|
| x1 | −0.000266868453558 | −0.000000091835793 | −0.120898772911816 | −0.310246574315981 |
| x2 | −0.000267036157051 | 0.000013971597535 | 0.491167568359585 | 0.467564824328878 |
| x3 | −0.000267036274281 | 0.000030454051416 | 10.000000000000000 | 1.071469773086650 |
| x4 | 0.000000025430197 | 0.000010000404353 | −0.178108600809833 | −0.404219784214681 |
| x5 | −0.000267039311495 | 0.000011275918099 | 5.423242568753400 | 3.552125620609660 |
| x6 | −0.000267036127224 | 0.000000019800029 | −0.049710980654501 | −1.834136698070800 |
| x7 | 0.000000000091855 | −0.000000000138437 | 0.445662462511328 | 0.286050311387620 |
| x8 | 0.000267036101457 | −0.000000454282127 | −10.000000000000000 | −2.931846497771810 |
| x9 | 0.000267033832224 | 0.000000000736505 | −0.144419405019169 | −4.812450845354100 |
| x10 | 0.000267043884482 | −0.000002006069864 | −0.518105971932846 | 3.756426716000660 |
| f | 1.08498006397337 × 10−9 | 7.03339003909689 × 10−16 | 4.13237426374674 × 10−1 | 6.47066501369328 × 10−1 |

| Variable | GWO | HHO | DE | CSO |
|---|---|---|---|---|
| x1 | 0.044653752694561 | −0.000047703379713 | 0.160723693838569 | −0.009650846541198 |
| x2 | −0.259567674882923 | 0.000075691075249 | 0.431923139718368 | 0.147278561202585 |
| x3 | −1.777013199398760 | −0.000029713372367 | 0.072922517980119 | −3.148557575646470 |
| x4 | 0.042606334458592 | −0.000050184914825 | 0.447403957744849 | −0.512428980703464 |
| x5 | −4.935286036663600 | 0.000033675529531 | −0.197972459731190 | −4.175819684412100 |
| x6 | −8.146156623785810 | 0.000067989452634 | 1.490110445009050 | −7.123183974281880 |
| x7 | −0.108125274969201 | 0.000031288762826 | 0.472265426079125 | 1.268663892956760 |
| x8 | 1.747052457418910 | 0.000048491290536 | 0.509493705510866 | 3.198230908839320 |
| x9 | −0.311997778279745 | 0.000063892452193 | 1.142101578993260 | −4.763105818868310 |
| x10 | 8.430357427064680 | −0.000123055431652 | −2.110335475212350 | 9.463108408596410 |
| f | 7.56734706927375 × 10−3 | 6.11971561041781 × 10−10 | 9.87501536049260 × 10−1 | 2.18295386757873 |

| Variable | SMA | nAOA | dAOA |
|---|---|---|---|
| x1 | −0.000000000028677 | 0.000020144848903 | −0.934997016811202 |
| x2 | 0.000014644312649 | −0.000060200695401 | −1.295640443505010 |
| x3 | 0.000038790339140 | −0.000020118018817 | −5.634966911723890 |
| x4 | −0.000000000221797 | −0.000060200956330 | −4.825343892476190 |
| x5 | 0.000000055701981 | −0.000020122803817 | 0.269511140973028 |
| x6 | −0.000000030051237 | −0.000020134693956 | −7.253398121182340 |
| x7 | 0.000000595936232 | 0.000020123341500 | 7.557747336452660 |
| x8 | −0.000000000025333 | 0.000020925519435 | −5.520361069927860 |
| x9 | 0.000000799504725 | 0.000043615727680 | −4.709534880735350 |
| x10 | 0.000000000012983 | 0.000020120622373 | 8.954470788407880 |
| f | 1.30095438660555 × 10−10 | 1.50696700666871 × 10−9 | 2.07190542503982 × 102 |
Table 5. Comparison of the experimental results for problem05.

| Variable | AOA | IAOA | SCA | WOA |
|---|---|---|---|---|
| x1 | 0.371964486871792 | 0.500000000000000 | 0.471178994397267 | 0.503978268408352 |
| x2 | 2.990337880814430 | 3.141592653589790 | 3.118271172186020 | 3.142976305563530 |
| f | 1.89048835343036 × 10−4 | 1.85873810048745 × 10−28 | 3.41504906318340 × 10−5 | 2.00099014478417 × 10−7 |

| Variable | GWO | HHO | DE | CSO |
|---|---|---|---|---|
| x1 | 0.495722089382004 | 0.503332577729795 | 0.299448692445072 | 0.500482294032500 |
| x2 | 3.143566564341090 | 3.142753305279310 | 2.836927770362990 | 3.142098043614560 |
| f | 1.12835512797232 × 10−6 | 1.16071617155615 × 10−7 | 6.25300383824133 × 10−23 | 2.13609775136897 × 10−8 |

| Variable | SMA | nAOA | dAOA |
|---|---|---|---|
| x1 | 0.298949061647857 | 0.354640044143990 | 2.956994389007600 |
| x2 | 2.835691250750600 | 2.956994389007600 | 1.890717921128260 |
| f | 1.05189651760469 × 10−8 | 1.59376404093113 × 10−4 | 3.65946616757579 × 10−3 |
Table 6. Comparison of the experimental results for problem06.

| Variable | AOA | IAOA | SCA | WOA |
|---|---|---|---|---|
| x1 | 0.953663829653960 | −0.779548045079158 | 11.147659127176500 | 1.516510183032980 |
| x2 | 0.663112382731748 | −0.779548045079158 | 0.900762400732728 | 0.694394649388567 |
| x3 | 0.729782844271910 | −0.779548045079158 | 0.919816117314499 | 10.556407054559600 |
| f | 3.35330112498813 × 10−1 | 1.00553388370096 × 10−20 | 2.75666643131973 | 8.65817545834561 |

| Variable | GWO | HHO | DE | CSO |
|---|---|---|---|---|
| x1 | 0.781303537791760 | −0.782460718139219 | −0.779277448448367 | −0.765447632695953 |
| x2 | 0.777872878718449 | −0.789339702437282 | −0.779700789186745 | −0.784775197498564 |
| x3 | 0.779780469890485 | −0.766810453292313 | −0.780020611467694 | −0.735052686517780 |
| f | 5.49159538279891 × 10−4 | 1.00882211687459 × 10−2 | 6.71295836563811 × 10−6 | 2.92512803990831 × 10−1 |

| Variable | SMA | nAOA | dAOA |
|---|---|---|---|
| x1 | −0.779731780102931 | −0.437772635064718 | −1.056395480177350 |
| x2 | −0.779371556451744 | −7.659741643877890 | 6.893981344148980 |
| x3 | −0.779303513685515 | −2.620897335617900 | −1.876924860155790 |
| f | 1.03517116885362 × 10−5 | 1.49720612584788 | 2.61017698945353 × 104 |
Table 7. Statistical results for the NES.

| Algorithms | Metric | problem01 | problem02 | problem03 | problem04 | problem05 | problem06 |
|---|---|---|---|---|---|---|---|
| AOA | best | 7.02711 × 10−1 | 1.20198 × 10−8 | 8.30574 × 10−12 | 2.99534 × 10−10 | 5.32587 × 10−6 | 1.60969 × 10−8 |
| | worst | 9.05980 × 10−1 | 7.47231 × 10−7 | 9.55457 × 10−3 | 3.58264 × 10−9 | 5.96026 × 10−4 | 1.00599 × 10 |
| | mean | 8.45666 × 10−1 | 2.01752 × 10−7 | 3.18486 × 10−4 | 1.08498 × 10−9 | 1.89049 × 10−4 | 3.35330 × 10−1 |
| | std | 4.40686 × 10−2 | 1.78065 × 10−7 | 1.74442 × 10−3 | 8.49280 × 10−10 | 1.40374 × 10−4 | 1.83668 |
| | p-value | 3.01986 × 10−11 | 1.01490 × 10−11 | 1.07516 × 10−11 | 3.01986 × 10−11 | 1.49399 × 10−11 | 3.01230 × 10−11 |
| IAOA | best | 1.05462 × 10−10 | 0.00000 | 4.93038 × 10−32 | 2.97972 × 10−19 | 0.00000 | 1.81191 × 10−30 |
| | worst | 1.25230 × 10−9 | 3.08149 × 10−33 | 2.09541 × 10−31 | 5.52546 × 10−15 | 5.57614 × 10−27 | 2.98754 × 10−19 |
| | mean | 4.73406 × 10−10 | 9.24446 × 10−34 | 7.27231 × 10−32 | 7.03339 × 10−16 | 1.85874 × 10−28 | 1.00553 × 10−20 |
| | std | 2.84371 × 10−10 | 1.43626 × 10−33 | 4.02152 × 10−32 | 1.22291 × 10−15 | 1.01806 × 10−27 | 5.45273 × 10−20 |
| SCA | best | 4.64629 × 10−2 | 1.20156 × 10−8 | 8.29788 × 10−6 | 7.08592 × 10−4 | 7.53679 × 10−9 | 1.19890 × 10−1 |
| | worst | 2.98744 × 10−1 | 8.60445 × 10−4 | 3.13588 × 10−3 | 2.83503 | 2.00649 × 10−4 | 3.29896 × 10 |
| | mean | 1.22078 × 10−1 | 8.82826 × 10−5 | 5.47683 × 10−4 | 4.13237 × 10−1 | 3.41505 × 10−5 | 2.75667 |
| | std | 5.72692 × 10−2 | 2.61875 × 10−4 | 7.59630 × 10−4 | 6.58494 × 10−1 | 4.69615 × 10−5 | 6.25475 |
| | p-value | 3.01986 × 10−11 | 1.01490 × 10−11 | 1.07516 × 10−11 | 3.01986 × 10−11 | 1.49399 × 10−11 | 3.01230 × 10−11 |
| WOA | best | 1.87873 × 10−4 | 6.72146 × 10−14 | 6.18945 × 10−13 | 4.04945 × 10−6 | 2.16928 × 10−11 | 1.76476 × 10−5 |
| | worst | 5.56233 × 10−3 | 1.30541 × 10−7 | 4.48907 × 10−2 | 4.99725 | 4.78904 × 10−6 | 7.91148 × 10 |
| | mean | 9.59545 × 10−4 | 6.92247 × 10−9 | 4.26773 × 10−3 | 6.47067 × 10−1 | 2.00099 × 10−7 | 8.65818 |
| | std | 1.06419 × 10−3 | 2.49080 × 10−8 | 1.24385 × 10−2 | 1.07197 | 8.71177 × 10−7 | 2.24136 × 10 |
| | p-value | 3.01986 × 10−11 | 1.01490 × 10−11 | 1.07516 × 10−11 | 3.01986 × 10−11 | 1.49399 × 10−11 | 3.01230 × 10−11 |
| GWO | best | 2.65480 × 10−6 | 2.31886 × 10−12 | 1.77817 × 10−8 | 1.01688 × 10−6 | 2.21126 × 10−9 | 9.05730 × 10−5 |
| | worst | 6.59898 × 10−3 | 1.73256 × 10−6 | 9.94266 × 10−2 | 5.57604 × 10−2 | 1.70979 × 10−5 | 1.58625 × 10−3 |
| | mean | 1.25544 × 10−3 | 1.13986 × 10−7 | 3.33932 × 10−3 | 7.56735 × 10−3 | 1.12836 × 10−6 | 5.49160 × 10−4 |
| | std | 2.25868 × 10−3 | 4.16137 × 10−7 | 1.81481 × 10−2 | 1.36923 × 10−2 | 3.33417 × 10−6 | 3.69947 × 10−4 |
| | p-value | 3.01986 × 10−11 | 1.01490 × 10−11 | 1.07516 × 10−11 | 3.01986 × 10−11 | 1.49399 × 10−11 | 3.01230 × 10−11 |
| HHO | best | 2.03768 × 10−2 | 8.99794 × 10−31 | 4.93038 × 10−32 | 1.21192 × 10−11 | 7.70372 × 10−34 | 3.83242 × 10−5 |
| | worst | 1.33302 × 10−1 | 1.91904 × 10−6 | 5.78702 × 10−4 | 1.00491 × 10−9 | 3.34700 × 10−6 | 7.08247 × 10−2 |
| | mean | 7.79220 × 10−2 | 6.55986 × 10−8 | 4.12782 × 10−5 | 6.11972 × 10−10 | 1.16072 × 10−7 | 1.00882 × 10−2 |
| | std | 2.90524 × 10−2 | 3.50117 × 10−7 | 1.19896 × 10−4 | 2.78236 × 10−10 | 6.10656 × 10−7 | 1.45023 × 10−2 |
| | p-value | 3.01986 × 10−11 | 1.01490 × 10−11 | 5.56066 × 10−8 | 3.01986 × 10−11 | 1.30542 × 10−10 | 3.01230 × 10−11 |
| DE | best | 6.05782 × 10−3 | 8.15969 × 10−28 | 2.49399 × 10−20 | 2.59514 × 10−1 | 2.59615 × 10−31 | 4.23182 × 10−11 |
| | worst | 9.69921 × 10−1 | 1.19322 × 10−17 | 5.91181 × 10−7 | 2.58615 | 6.37964 × 10−22 | 1.17012 × 10−4 |
| | mean | 7.96262 × 10−2 | 1.31655 × 10−18 | 3.33313 × 10−8 | 9.87502 × 10−1 | 6.25300 × 10−23 | 6.71296 × 10−6 |
| | std | 2.40157 × 10−1 | 2.91169 × 10−18 | 1.26981 × 10−7 | 6.21653 × 10−1 | 1.66035 × 10−22 | 2.15862 × 10−5 |
| | p-value | 3.01986 × 10−11 | 1.01490 × 10−11 | 1.07516 × 10−11 | 3.01986 × 10−11 | 6.22236 × 10−11 | 3.01230 × 10−11 |
| CSO | best | 2.82411 × 10−2 | 7.30711 × 10−11 | 2.92752 × 10−9 | 6.03864 × 10−1 | 2.67109 × 10−10 | 2.27267 × 10−2 |
| | worst | 1.34962 × 10−1 | 7.15408 × 10−9 | 2.57784 × 10−6 | 4.34942 | 1.32416 × 10−7 | 1.31894 |
| | mean | 6.61705 × 10−2 | 1.49505 × 10−9 | 6.53698 × 10−7 | 2.18295 | 2.13610 × 10−8 | 2.92513 × 10−1 |
| | std | 2.71383 × 10−2 | 1.66707 × 10−9 | 5.69101 × 10−7 | 1.05318 | 3.36401 × 10−8 | 3.41112 × 10−1 |
| | p-value | 3.01986 × 10−11 | 1.01490 × 10−11 | 1.07516 × 10−11 | 3.01986 × 10−11 | 1.49399 × 10−11 | 3.01230 × 10−11 |
| SMA | best | 5.18988 × 10−4 | 1.26496 × 10−7 | 2.37253 × 10−11 | 2.08208 × 10−11 | 6.22359 × 10−11 | 3.95601 × 10−7 |
| | worst | 1.17331 × 10−2 | 2.46549 × 10−4 | 5.80093 × 10−7 | 2.89907 × 10−10 | 5.94920 × 10−8 | 4.75099 × 10−5 |
| | mean | 4.47411 × 10−3 | 2.89317 × 10−5 | 5.98652 × 10−8 | 1.30095 × 10−10 | 1.05190 × 10−8 | 1.03517 × 10−5 |
| | std | 3.00476 × 10−3 | 5.64857 × 10−5 | 1.28713 × 10−7 | 7.25135 × 10−11 | 1.30068 × 10−8 | 1.04158 × 10−5 |
| | p-value | 3.01986 × 10−11 | 1.01490 × 10−11 | 1.07516 × 10−11 | 3.01986 × 10−11 | 1.49399 × 10−11 | 3.01230 × 10−11 |
| nAOA | best | 4.73537 × 10−1 | 1.16733 × 10−9 | 3.11364 × 10−12 | 3.28064 × 10−10 | 2.13953 × 10−5 | 7.56334 × 10−8 |
| | worst | 7.39125 × 10−1 | 9.06936 × 10−4 | 8.22290 × 10−1 | 2.69391 × 10−9 | 4.30978 × 10−4 | 4.49162 × 10 |
| | mean | 6.74564 × 10−1 | 3.07109 × 10−5 | 2.77064 × 10−2 | 1.50697 × 10−9 | 1.59376 × 10−4 | 1.49721 |
| | std | 5.68300 × 10−2 | 1.65502 × 10−4 | 1.50077 × 10−1 | 6.31248 × 10−10 | 7.06193 × 10−5 | 8.20053 |
| | p-value | 3.01986 × 10−11 | 1.01490 × 10−11 | 1.07516 × 10−11 | 3.01986 × 10−11 | 1.49399 × 10−11 | 3.01230 × 10−11 |
| dAOA | best | 2.01052 × 10−1 | 8.99368 × 10−9 | 2.54429 × 10−4 | 3.09426 × 10−10 | 5.69606 × 10−6 | 8.50407 × 10−4 |
| | worst | 6.87872 | 1.28121 × 10−3 | 4.68145 × 10−1 | 9.87499 × 102 | 1.56431 × 10−2 | 3.78263 × 105 |
| | mean | 1.91504 | 3.22387 × 10−4 | 6.56368 × 10−2 | 2.07191 × 102 | 3.65947 × 10−3 | 2.61018 × 104 |
| | std | 2.16147 | 3.20053 × 10−4 | 1.21675 × 10−1 | 2.92259 × 102 | 5.26309 × 10−3 | 8.07193 × 104 |
| | p-value | 3.01986 × 10−11 | 1.01490 × 10−11 | 1.07516 × 10−11 | 3.01986 × 10−11 | 1.49399 × 10−11 | 3.01230 × 10−11 |
Table 8. Details of the integrations F01–F10.

| Integration | Details | Range |
|---|---|---|
| F01 | f(x) = x² | [0, 2] |
| F02 | f(x) = x⁴ | [0, 2] |
| F03 | f(x) = √(1 + x²) | [0, 2] |
| F04 | f(x) = 1/(1 + x) | [0, 2] |
| F05 | f(x) = sin x | [0, 2] |
| F06 | f(x) = e^x | [0, 2] |
| F07 | f(x) = √(1 + cos² x) | [0, 48] |
| F08 | f(x) = e^(−x), 0 ≤ x < 1; e^(−x/2), 1 ≤ x < 2; e^(−x/3), 2 ≤ x ≤ 3 | [0, 3] |
| F09 | f(x) = e^(−x²) | [0, 1] |
| F10 | f(x) = x cos x sin(mx), (m = 10, 20, 30) | [0, 2π] |
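The exact values quoted in the tables below can be cross-checked with a composite Simpson rule; in the sketch that follows, each integrand from Table 8 reproduces its tabulated exact value (this is a verification sketch, not the paper's code):

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson's rule with n (even) subintervals on [a, b]."""
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    total += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return total * h / 3

# (integrand, a, b, exact value tabulated in Tables 9-11)
cases = [
    (lambda x: x ** 2, 0, 2, 2.66666667),                        # F01
    (lambda x: x ** 4, 0, 2, 6.40000000),                        # F02
    (lambda x: math.sqrt(1 + x ** 2), 0, 2, 2.95788572),         # F03
    (lambda x: 1 / (1 + x), 0, 2, 1.09861229),                   # F04
    (math.sin, 0, 2, 1.41614684),                                # F05
    (math.exp, 0, 2, 6.38905610),                                # F06
    (lambda x: math.sqrt(1 + math.cos(x) ** 2), 0, 48, 58.47046915),  # F07
    (lambda x: math.exp(-x ** 2), 0, 1, 0.74682413),             # F09
]
for f, a, b, exact in cases:
    assert abs(simpson(f, a, b) - exact) < 1e-6

# F08 is piecewise: integrating each branch separately reproduces 1.54603603.
f08 = (simpson(lambda x: math.exp(-x), 0, 1)
       + simpson(lambda x: math.exp(-x / 2), 1, 2)
       + simpson(lambda x: math.exp(-x / 3), 2, 3))
assert abs(f08 - 1.54603603) < 1e-6
```

Integrating F08 branch by branch avoids placing Simpson nodes across the jump discontinuities at x = 1 and x = 2.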
Table 9. Comparison of the experimental results for F01–F03.

| Methods | F01 | F02 | F03 |
|---|---|---|---|
| R-method | 2.000 | 2.000 | 2.828 |
| T-method | 4.000 | 16.000 | 3.236 |
| S-method | 2.667 | 6.667 | 2.964 |
| H-method | 2.830 | 7.066 | 3.048 |
| FN [26] | 2.667 | 6.3995 | 2.95789 |
| MBFES [24] | 2.659 | 6.338 | 2.956 |
| ES [24] | 2.666 | 6.398 | 2.9577 |
| DEBA [28] | 2.66698573 | 6.401201 | 2.958169 |
| PSO [25] | 2.666 | 6.398 | 2.9578 |
| DE [27] | 2.667 | 6.3995 | 2.958 |
| AOA | 2.61006134 | 6.20147125 | 2.94004382 |
| IAOA | 2.66661710 | 6.40000000 | 2.95788286 |
| Exact | 2.66666667 | 6.40000000 | 2.95788572 |
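Reading the R-, T-, and S-method rows as the one-interval midpoint (rectangle), trapezoidal, and Simpson rules reproduces all nine of their entries in Table 9; the reading is ours, since the table does not define the abbreviations. A short sketch:

```python
import math

def one_interval(f, a, b):
    """Single-interval midpoint, trapezoidal, and Simpson approximations."""
    mid = (b - a) * f((a + b) / 2)                            # R-method?
    trap = (b - a) * (f(a) + f(b)) / 2                        # T-method?
    simp = (b - a) * (f(a) + 4 * f((a + b) / 2) + f(b)) / 6   # S-method?
    return mid, trap, simp

for f in (lambda x: x ** 2,                  # F01
          lambda x: x ** 4,                  # F02
          lambda x: math.sqrt(1 + x ** 2)):  # F03
    print(tuple(round(v, 3) for v in one_interval(f, 0, 2)))
# (2.0, 4.0, 2.667)
# (2.0, 16.0, 6.667)
# (2.828, 3.236, 2.964)
```

The printed triples match the R/T/S columns of Table 9 row by row.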
Table 10. Comparison of the experimental results for F04–F06.

| Methods | F04 | F05 | F06 |
|---|---|---|---|
| R-method | 1.000 | 1.683 | 5.437 |
| T-method | 1.333 | 0.909 | 8.389 |
| S-method | 1.111 | 1.425 | 6.421 |
| H-method | 1.112 | 1.452 | 6.691 |
| FN [26] | 1.0986 | 1.416 | 6.389 |
| MBFES [24] | 1.090 | 1.419 | 6.390 |
| ES [24] | 1.098 | 1.416 | 6.388 |
| DEBA [28] | 1.098754 | 1.416082 | 6.388921 |
| PSO [25] | 1.0985 | 1.416 | 6.3887 |
| DE [27] | 1.099 | 1.416 | 6.389 |
| AOA | 1.08923818 | 1.40101546 | 6.29531692 |
| IAOA | 1.09861229 | 1.41613957 | 6.38901606 |
| Exact | 1.09861229 | 1.41614684 | 6.38905610 |
Table 11. Comparison of the experimental results for F07–F09.

| Methods | F07 | F08 | F09 |
|---|---|---|---|
| R-method | 52.13975183 | 1.51349542 | 0.77782078 |
| T-method | 62.43737140 | 1.61179305 | 0.74621972 |
| S-method | 117.61490334 | 2.48720505 | 0.74683657 |
| H-method | 58.99776108 | 1.56164258 | 0.75403569 |
| FN [26] | 58.4705 | 1.54604 | 0.746823 |
| MBFES [24] | 58.48828 | 1.5455 | 0.74652 |
| ES [24] | 58.47065 | 1.5459805 | 0.74683 |
| DEBA [28] | 58.470505372351 | 1.5460388345767 | 0.7468269544604 |
| PSO | 56.80139775 | 1.52897330 | 0.74328459 |
| DE | 56.04598085 | 1.52425900 | 0.74202909 |
| AOA | 56.17497970 | 1.52641514 | 0.74223182 |
| IAOA | 58.47046915 | 1.54603603 | 0.74682413 |
| Exact | 58.47046915 | 1.54603603 | 0.74682413 |
Table 12. Comparison of the experimental results for F10.

| Methods | F10 (m = 10) | F10 (m = 20) | F10 (m = 30) |
|---|---|---|---|
| G32 | −0.6340207 | −1.2092524 | −1.5822272 |
| 2n × L5 | −0.55875940 | −0.27789620 | −0.18508448 |
| H-method | −0.21043575 | 0.17309499 | −0.02945756 |
| MBFES [24] | −0.68134052 | −0.37280425 | −0.17305621 |
| ES [24] | −0.65034080 | −0.30583435 | −0.23556815 |
| DEBA | −0.63466518 | −0.31494663 | −0.20967248 |
| PSO | −1.50150183 | −1.33949737 | −1.10170197 |
| DE [27] | −0.63982173 | −0.31035906 | −0.21438251 |
| AOA | −3.07253909 | −0.56489050 | −0.42642997 |
| IAOA | −0.63466518 | −0.31494663 | −0.20967248 |
| Exact | −0.63466518 | −0.31494663 | −0.20967248 |
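The Exact row of Table 12 has a closed form: with cos x · sin(mx) = [sin((m+1)x) + sin((m−1)x)]/2 and ∫₀^{2π} x sin(kx) dx = −2π/k, the F10 integral reduces to −π(1/(m+1) + 1/(m−1)). A sketch verifying the three tabulated values:

```python
import math

def f10_exact(m):
    """Closed form of ∫_0^{2π} x·cos(x)·sin(m·x) dx = -π(1/(m+1) + 1/(m-1))."""
    return -math.pi * (1 / (m + 1) + 1 / (m - 1))

for m in (10, 20, 30):
    print(m, round(f10_exact(m), 8))
# 10 -0.63466518
# 20 -0.31494663
# 30 -0.20967248
```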
Table 13. Statistical results for the numerical integrations (F01–F06).

| Algorithms | | F01 | F02 | F03 | F04 | F05 | F06 |
|---|---|---|---|---|---|---|---|
| AOA | best | 5.660532 × 10−2 | 1.985287 × 10−1 | 1.784189 × 10−2 | 9.374106 × 10−3 | 1.513137 × 10−2 | 9.373918 × 10−2 |
| | worst | 6.785842 × 10−2 | 2.466178 × 10−1 | 2.112411 × 10−2 | 1.103594 × 10−2 | 1.827849 × 10−2 | 1.105054 × 10−1 |
| | mean | 6.196485 × 10−2 | 2.238141 × 10−1 | 1.970905 × 10−2 | 1.041648 × 10−2 | 1.679104 × 10−2 | 1.013200 × 10−1 |
| | std | 2.473863 × 10−3 | 1.277362 × 10−2 | 6.790772 × 10−4 | 4.381854 × 10−4 | 7.886715 × 10−4 | 3.985235 × 10−3 |
| IAOA | best | 4.956295 × 10−5 | 0.000000 | 2.855397 × 10−6 | 0.000000 | 7.267277 × 10−6 | 4.004088 × 10−5 |
| | worst | 1.070986 × 10−4 | 9.632589 × 10−6 | 1.471988 × 10−5 | 7.241931 × 10−6 | 3.035345 × 10−5 | 1.136393 × 10−4 |
| | mean | 7.267766 × 10−5 | 9.617999 × 10−7 | 6.357033 × 10−6 | 1.274560 × 10−6 | 1.595556 × 10−5 | 7.989662 × 10−5 |
| | std | 1.561025 × 10−5 | 2.672207 × 10−6 | 2.828416 × 10−6 | 1.942626 × 10−6 | 5.989208 × 10−6 | 2.032255 × 10−5 |
| PSO [25] | best | 3.966996 × 10−2 | 1.282142 × 10−1 | 1.263049 × 10−2 | 6.772669 × 10−3 | 1.115352 × 10−2 | 6.495427 × 10−2 |
| | worst | 5.467546 × 10−2 | 1.880821 × 10−1 | 1.614274 × 10−2 | 9.112184 × 10−3 | 1.385859 × 10−2 | 9.718717 × 10−2 |
| | mean | 4.406724 × 10−2 | 1.593799 × 10−1 | 1.405265 × 10−2 | 7.745239 × 10−3 | 1.208230 × 10−2 | 7.327404 × 10−2 |
| | std | 3.262431 × 10−3 | 1.528260 × 10−2 | 9.707823 × 10−4 | 6.532329 × 10−4 | 7.146743 × 10−4 | 6.698801 × 10−3 |
| DE [27] | best | 5.444535 × 10−2 | 1.776272 × 10−1 | 1.740389 × 10−2 | 9.410606 × 10−3 | 1.537737 × 10−2 | 9.229490 × 10−2 |
| | worst | 6.223208 × 10−2 | 1.992612 × 10−1 | 1.943564 × 10−2 | 1.043440 × 10−2 | 1.668422 × 10−2 | 1.003285 × 10−1 |
| | mean | 5.887766 × 10−2 | 1.887098 × 10−1 | 1.881844 × 10−2 | 1.003350 × 10−2 | 1.606658 × 10−2 | 9.665791 × 10−2 |
| | std | 1.717478 × 10−3 | 5.056921 × 10−3 | 4.230737 × 10−4 | 2.412656 × 10−4 | 3.636407 × 10−4 | 1.886442 × 10−3 |
| DEBA [28] | best | 5.858312 × 10−2 | 1.958779 × 10−1 | 1.797733 × 10−2 | 9.632554 × 10−3 | 1.541447 × 10−2 | 9.078063 × 10−2 |
| | worst | 6.805128 × 10−2 | 2.566962 × 10−1 | 2.194973 × 10−2 | 1.144459 × 10−2 | 1.824156 × 10−2 | 1.096576 × 10−1 |
| | mean | 6.306158 × 10−2 | 2.287206 × 10−1 | 2.005007 × 10−2 | 1.048558 × 10−2 | 1.700868 × 10−2 | 1.008133 × 10−1 |
| | std | 2.059708 × 10−3 | 1.384008 × 10−2 | 8.428458 × 10−4 | 4.319549 × 10−4 | 7.193521 × 10−4 | 4.457879 × 10−3 |
| ES [24] | best | 3.634854 × 10−2 | 1.053634 × 10−1 | 1.178783 × 10−2 | 6.152581 × 10−3 | 9.742411 × 10−3 | 6.028495 × 10−2 |
| | worst | 3.704455 × 10−2 | 1.076016 × 10−1 | 1.197536 × 10−2 | 6.272540 × 10−3 | 9.921388 × 10−3 | 6.120127 × 10−2 |
| | mean | 3.662145 × 10−2 | 1.064150 × 10−1 | 1.189432 × 10−2 | 6.206519 × 10−3 | 9.813727 × 10−3 | 6.070549 × 10−2 |
| | std | 1.618502 × 10−4 | 4.726931 × 10−4 | 4.687831 × 10−5 | 2.718416 × 10−5 | 4.560503 × 10−5 | 2.303572 × 10−4 |
Table 14. Statistical results for numerical integrations (F07–F10).

| Algorithms | | F07 | F08 | F09 | F10 (m = 10) | F10 (m = 20) | F10 (m = 30) |
|---|---|---|---|---|---|---|---|
| AOA | best | 2.295489 | 1.962088 × 10−2 | 4.592313 × 10−3 | 2.437873 | 2.499438 × 10−1 | 2.167574 × 10−1 |
| | worst | 2.524012 | 2.400262 × 10−2 | 5.421672 × 10−3 | 3.611012 | 3.429053 | 3.115022 |
| | mean | 2.424997 | 2.226327 × 10−2 | 5.031127 × 10−3 | 3.225836 | 1.617425 | 9.721188 × 10−1 |
| | std | 5.634089 × 10−2 | 1.017542 × 10−3 | 2.167135 × 10−4 | 2.620454 × 10−1 | 9.081448 × 10−1 | 7.417795 × 10−1 |
| IAOA | best | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| | worst | 4.285648 × 10−4 | 9.665730 × 10−6 | 7.650313 × 10−9 | 4.941453 × 10−4 | 8.932970 × 10−4 | 4.121824 × 10−4 |
| | mean | 5.817808 × 10−5 | 1.079836 × 10−6 | 1.094646 × 10−9 | 6.843408 × 10−5 | 9.159354 × 10−5 | 6.487479 × 10−5 |
| | std | 9.331558 × 10−5 | 2.377176 × 10−6 | 2.051844 × 10−9 | 1.219906 × 10−4 | 1.972260 × 10−4 | 9.370544 × 10−5 |
| PSO [25] | best | 1.093717 | 1.499542 × 10−2 | 3.212480 × 10−3 | 5.688245 × 10−1 | 1.024550 | 8.920294 × 10−1 |
| | worst | 2.077297 | 2.010782 × 10−2 | 4.674802 × 10−3 | 1.599995 | 1.485451 | 1.953066 |
| | mean | 1.669071 | 1.706272 × 10−2 | 3.539538 × 10−3 | 8.668366 × 10−1 | 1.219538 | 1.489201 |
| | std | 2.419795 × 10−1 | 1.205259 × 10−3 | 3.409595 × 10−4 | 2.759571 × 10−1 | 1.216184 × 10−1 | 2.065585 × 10−1 |
| DE [27] | best | 2.255785 | 2.091958 × 10−2 | 4.575317 × 10−3 | 2.543013 | 3.461794 | 3.889322 |
| | worst | 2.522405 | 2.254710 × 10−2 | 5.009106 × 10−3 | 3.236645 | 4.684467 | 5.201887 |
| | mean | 2.424488 | 2.177702 × 10−2 | 4.795040 × 10−3 | 3.015091 | 4.242609 | 4.687029 |
| | std | 5.766110 × 10−2 | 4.602533 × 10−4 | 1.146454 × 10−4 | 1.967397 × 10−1 | 2.313007 × 10−1 | 2.923496 × 10−1 |
| DEBA [28] | best | 2.361570 × 10−1 | 2.057410 × 10−2 | 4.776881 × 10−3 | 6.043389 × 10−14 | 1.208677 × 10−13 | 5.319404 × 10−13 |
| | worst | 2.468831 | 2.474051 × 10−2 | 5.441200 × 10−3 | 6.043389 × 10−14 | 1.208677 × 10−13 | 5.319404 × 10−13 |
| | mean | 1.163514 | 2.294436 × 10−2 | 5.157892 × 10−3 | 6.043389 × 10−14 | 1.208677 × 10−13 | 5.319404 × 10−13 |
| | std | 6.919695 × 10−1 | 9.765442 × 10−4 | 1.475304 × 10−4 | 3.851264 × 10−29 | 7.702528 × 10−29 | 3.081011 × 10−28 |
| ES [24] | best | 1.298269 | 1.319474 × 10−2 | 3.051746 × 10−3 | 1.460773 | 1.634373 | 1.152204 |
| | worst | 1.321623 | 1.341748 × 10−2 | 3.121709 × 10−3 | 1.665912 | 2.355153 | 2.380726 |
| | mean | 1.308546 | 1.331615 × 10−2 | 3.081151 × 10−3 | 1.568781 | 1.869004 | 1.719830 |
| | std | 5.523404 × 10−3 | 5.640941 × 10−5 | 1.521690 × 10−5 | 4.627499 × 10−2 | 1.831224 × 10−1 | 2.898513 × 10−1 |
Table 15. The results obtained by the IAOA for the engineering problem.

| Algorithm | | A2 | B2 | C2 |
|---|---|---|---|---|
| IAOA | initial angle | 150 | 132.7026 | 127.0177 |
| | result | 145.7291 | 139.0180 | 123.9864 |

Table 16. The results obtained by the PSO for the engineering problem.

| Algorithm | | A2 | B2 | C2 |
|---|---|---|---|---|
| PSO | initial angle | 150 | 132.7026 | 127.0177 |
| | result | 139.6534 | 68.2235 | 96.4886 |

Table 17. The results obtained by the GA for the engineering problem.

| Algorithm | | A2 | B2 | C2 |
|---|---|---|---|---|
| GA | initial angle | 150 | 132.7026 | 127.0177 |
| | result | 129.8653 | 118.9625 | 52.6691 |

Table 18. The results obtained by the PSSA for the engineering problem.

| Algorithm | | A2 | B2 | C2 |
|---|---|---|---|---|
| PSSA [58] | initial angle | 150 | 132.7026 | 127.0177 |
| | result | 147.1015 | 92.5371 | 89.5116 |
Table 19. Comparison of the experimental results for the IAOA, PSO, GA, and PSSA.

| Objective Functions | IAOA | PSO | GA | PSSA |
|---|---|---|---|---|
| f | 1.3618 × 10 | 3.0608 × 10⁶ | 3.2329 × 10⁶ | 2.0199 × 10⁵ |
| \|A2 − A1\| + \|B2 − B1\| + \|C2 − C1\| | 13.6176 | 105.3548 | 118.2234 | 80.5701 |
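The second objective in Table 19 is the total absolute joint-angle deviation between the initial configuration and each algorithm's result (Tables 15–18). A minimal sketch recomputing two of the entries from the tabulated angles (function and variable names are ours):

```python
def total_deviation(initial, result):
    """Sum of absolute joint-angle changes: |A2-A1| + |B2-B1| + |C2-C1|."""
    return sum(abs(r - i) for i, r in zip(initial, result))

initial = (150.0, 132.7026, 127.0177)   # initial angles (Tables 15-18)
iaoa = (145.7291, 139.0180, 123.9864)   # IAOA result (Table 15)
pssa = (147.1015, 92.5371, 89.5116)     # PSSA result (Table 18)

print(round(total_deviation(initial, iaoa), 4))  # 13.6176, as in Table 19
print(round(total_deviation(initial, pssa), 4))  # 80.5701
```

The small deviation is the point of the comparison: the IAOA reaches the target pose while moving the joints far less than the other algorithms.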
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Chen, M.; Zhou, Y.; Luo, Q. An Improved Arithmetic Optimization Algorithm for Numerical Optimization Problems. Mathematics 2022, 10, 2152. https://doi.org/10.3390/math10122152
