Article

Modified Sand Cat Swarm Optimization Algorithm for Solving Constrained Engineering Optimization Problems

1 School of Education and Music, Sanming University, Sanming 365004, China
2 School of Information Engineering, Sanming University, Sanming 365004, China
3 School of Computer Science and Technology, Hainan University, Haikou 570228, China
4 Hourani Center for Applied Scientific Research, Al-Ahliyya Amman University, Amman 19328, Jordan
5 Faculty of Information Technology, Middle East University, Amman 11831, Jordan
6 Faculty of Information Technology, Applied Science Private University, Amman 11931, Jordan
7 School of Computer Sciences, Universiti Sains Malaysia, Pulau Pinang 11800, Malaysia
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(22), 4350; https://doi.org/10.3390/math10224350
Submission received: 24 October 2022 / Revised: 13 November 2022 / Accepted: 17 November 2022 / Published: 19 November 2022
(This article belongs to the Special Issue Computational Intelligence Methods in Bioinformatics)

Abstract

The sand cat swarm optimization algorithm (SCSO) is a recently proposed metaheuristic optimization algorithm. It simulates the hunting behavior of the sand cat, which attacks or searches for prey according to sound frequency; each sand cat aims to catch better prey and therefore searches for a better location. In the SCSO algorithm, each sand cat gradually approaches its prey, which gives the algorithm a strong exploitation ability. However, in the later stage of the SCSO algorithm, each sand cat is prone to falling into a local optimum, making it unable to find a better position. To improve the mobility of the sand cat and the exploration ability of the algorithm, this paper proposes a modified sand cat swarm optimization (MSCSO) algorithm. The MSCSO algorithm adds a wandering strategy: when attacking or searching for prey, the sand cat walks to find a better position. The wandering strategy enhances the mobility of the sand cat and gives the algorithm a stronger global exploration ability. In addition, a lens opposition-based learning strategy is added to enhance the global property of the algorithm so that it can converge faster. To evaluate the MSCSO algorithm, we used the 23 standard benchmark functions and the CEC2014 benchmark functions to assess its optimization performance. In the experiments, we analyzed the statistical results, convergence curves, Wilcoxon rank sum tests, and box plots. The experiments show that the MSCSO algorithm with the wandering strategy and lens opposition-based learning has a stronger exploration ability. Finally, the MSCSO algorithm was used to test seven engineering problems, which also verified the engineering practicability of the proposed algorithm.

1. Introduction

With the development of science and technology, many complex problems are difficult to describe and deal with. For such problems, the solution cannot be described in detail, and as application scenarios change, the solutions also differ. Meta-heuristic algorithms (MAs) are constructed based on intuition or experience; they provide a feasible solution to an optimization problem within an acceptable computing time or space, although this feasible solution cannot be predicted in advance. Many engineering optimization problems require the optimal solution in a complex and colossal search space. Because of the complexity, nonlinearity, constraints, and modeling difficulties of practical engineering problems, seeking efficient optimization algorithms has become an important research direction.
Inspired by human intelligence, the social behavior of biological groups, and the laws of natural phenomena, scholars have invented many MAs to solve complex optimization problems, and MAs have been used to solve various complex problems in the past few years. MAs are mainly classified into the following four categories: (1) swarm-based algorithms, (2) evolutionary-based algorithms, (3) physical-based algorithms, and (4) human-based algorithms, as shown in Figure 1. The first category mainly simulates the social behavior of population organisms such as birds, ants, and wolves. Particle Swarm Optimization (PSO) [1] simulates the foraging behavior of birds. Ant Colony Optimization (ACO) [2] simulates the behavior of ants searching for food. Grey Wolf Optimization (GWO) [3] simulates wolves' hunting and leadership behavior. In addition, there are many other excellent swarm-based algorithms, for example, the Remora Optimization Algorithm (ROA) [4], the Ant Lion Optimizer (ALO) [5], the Whale Optimization Algorithm (WOA) [6], and Moth Flame Optimization (MFO) [7]. The second category simulates the evolution process of organisms. The Genetic Algorithm (GA) [8] is inspired by Darwin's theory of evolution and simulates the evolution process of organisms; it is one of the most representative evolutionary-based algorithms. Similar algorithms include Genetic Programming (GP) [9], the Biogeography-Based Optimizer (BBO) [10], the Virulence Optimization Algorithm (VOA) [11], Evolutionary Programming (EP) [12], and Differential Evolution (DE) [13]. The third category simulates the laws of physics. Simulated Annealing (SA) [14] simulates the principle of annealing: it starts from a higher initial temperature and converges as the temperature parameter gradually decreases.
Algorithms based on this principle include the Sine Cosine Algorithm (SCA) [15], Multi-Verse Optimization (MVO) [16], the Gravitational Search Algorithm (GSA) [17], the Black Hole Algorithm (BH) [18], Thermal Exchange Optimization (TEO) [19], and Ray Optimization (RO) [20]. The last category simulates human behavior. Teaching Learning Based Optimization (TLBO) [21] simulates the teaching process of the class. The Group Teaching Optimization Algorithm (GTOA) [22] simulates group learning behavior and divides students in the class into groups for teaching. Similar algorithms include Harmony Search (HS) [23], Social Group Optimization (SGO) [24], and the Exchanged Market Algorithm (EMA) [25]. These algorithms are representative of MAs. They have a good effect on solving optimization problems.
The Sand Cat Swarm Optimization (SCSO) [26] is a meta-heuristic optimization algorithm proposed in 2022. It is based on the idea of a swarm algorithm. The SCSO algorithm simulates the hunting behavior of the sand cat. Each sand cat is sensitive to sound frequency: according to the sound frequency of the prey, the sand cat will choose to attack or search for prey. In hunting, the sand cat keeps close to its prey, which causes it to fall into a local optimum in the later stage and reduces the optimization performance of the algorithm. Li et al. proposed a sand cat swarm optimization algorithm based on stochastic variation and elite collaboration (SE-SCSO) [27]. The SE-SCSO algorithm adds a randomly changing elite cooperation strategy, which enables the algorithm to break away from local extrema and improves its optimization accuracy and convergence speed. Jovanovic et al. proposed feature selection by an improved sand cat swarm optimizer for intrusion detection [28]. They used an extreme learning machine to test the improved sand cat swarm optimization algorithm (HSCSO), which achieved good results in feature selection. The SCSO algorithm has insufficient convergence ability and quickly falls into the local optimum. This paper proposes a modified sand cat swarm optimization algorithm (MSCSO) for the above problem. In the MSCSO algorithm, each sand cat adopts a different wandering strategy when hunting in order to find a better position. When attacking prey, the sand cat walks according to the Levy flight walk (LFW) strategy. When searching for prey, the sand cat wanders using the triangle walk (TW) strategy: it judges the distance between itself and its prey, uses a Roulette Wheel selection algorithm to choose the direction of walking, and finally obtains a new position according to the trigonometric calculation principle.
Each sand cat searches for a better location through its walk strategy, which enhances the mobility of the algorithm and makes the MSCSO algorithm have a stronger global exploration ability. After that, the global exploration capability of the MSCSO algorithm is further enhanced by lens opposition-based learning (LOBL).
Through these two strategies, the global capability of the MSCSO algorithm is enhanced and the MSCSO algorithm can converge better. In the experimental part, we used the 23 standard and CEC2014 benchmark functions to verify the optimization effect of the MSCSO algorithm, and the result tables, convergence curves, box plots, and Wilcoxon rank sum tests of the benchmark functions were analyzed. Finally, in order to verify the engineering practicability of the MSCSO algorithm, we selected seven engineering problems to test its optimization performance. The results illustrate that the MSCSO algorithm also performs well in solving these optimization problems.
The main contributions of this paper are as follows:
  • The original SCSO algorithm is improved by the wandering strategy and the optimization performance of the original SCSO algorithm is enhanced.
  • When searching for prey, the triangle walk (TW) strategy is added to expand the search scope of the SCSO algorithm and improve the global exploration ability of the algorithm.
  • When attacking prey, the Levy flight walk (LFW) strategy is added to enable the sand cat to walk around the prey, so that the sand cat can find a better position and improve the optimization performance of the algorithm.
  • Adding lens opposition-based learning (LOBL) to the MSCSO algorithm enhances the global exploration ability of the algorithm.
  • The MSCSO algorithm is tested and compared with the other eight algorithms, which proves that the MSCSO algorithm has a better optimization effect.
The structure of this article is as follows: The second part introduces the related work. The third part briefly introduces the SCSO algorithm. The fourth part describes the improvement strategies of the MSCSO algorithm. The fifth and sixth parts give the experimental results of the MSCSO algorithm on benchmark functions and engineering problems. Finally, a summary is given in the seventh part.

2. Related Work

Meta-heuristic algorithms (MAs) are improvements of heuristic algorithms, combining a random algorithm with a local search algorithm. They mainly solve for the optimal solution by simulating nature and human intelligence. The core of MAs is to balance the exploration and exploitation capabilities of the algorithm. MAs are widely used in various optimization fields because of their simplicity, easy implementation, and high solution accuracy [29].
However, according to the NFL theorem [30], no MAs can solve all optimization problems. For this reason, improving the known MAs to better solve different optimization problems has become the research direction of many scholars. Many scholars have conceived many good methods [31]. For the defects of different algorithms, many scholars have proposed excellent solutions. For example, Mohammad H. Nadimi-Shahraki et al. proposed a multi-trial vector-based differential evolution algorithm (MTDE). The MTDE is distinguished by introducing an adaptive movement step designed based on a new multi-trial vector approach (MTV), which combines different search strategies in the form of trial vector producers (TVPs). The article uses the MTV method in the MTDE algorithm through three TVPs and verifies that the MTDE algorithm is more effective in dealing with different complex problems [32]. The Salp Swarm Algorithm (SSA) [33] simulated the foraging behavior of the salp swarm. Each salp will follow the best Salp for foraging. The algorithm has high convergence and coverage, and can approximate the optimal solution for the population. However, being too close to the optimal solution leads to the decline of the exploration ability of the SSA, which makes the algorithm difficult to converge in the later period. Hongliang Zhang et al. proposed the ensemble mutation-driven salp swarm algorithm with a restart mechanism (CMSRSSSA). The algorithm adds an ensemble mutation strategy. In this strategy, they adopt mutation schemes based on DE rand local mutation methods in Adaptive CoDE [34]. The exploration ability of the SSA was enhanced by strengthening the communication between different salps. Secondly, a restart mechanism is added, which enables individuals trapped in the local optimum to jump out of the local optimum to obtain a better position. 
These two mechanisms greatly improve the exploration ability of the SSA algorithm [35]. The GWO algorithm lacks population diversity, and it is difficult to balance exploitation and exploration, leading to premature convergence of the algorithm. Mohammad H. Nadimi-Shahraki et al. proposed an improved Grey Wolf Optimizer (I-GWO). The I-GWO algorithm benefits from a new movement strategy named the dimension learning-based hunting (DLH) search strategy, inherited from the individual hunting behavior of wolves in nature. The I-GWO algorithm uses the DLH strategy to build a neighborhood for each grey wolf so that neighboring grey wolves can share information. This strategy balances the exploration and exploitation abilities of the GWO algorithm and enhances the diversity of the population [36]. The idea of the Remora Optimization Algorithm (ROA) is that the remora depends on powerful marine organisms to forage; different organisms forage in different situations, which is novel but lacks autonomy. Zheng et al. proposed an autonomous foraging mechanism [37]: the remora not only depends on powerful marine organisms to find food but can also find food independently, which is more in line with its biological characteristics and achieves good optimization results. Mohammad H. Nadimi-Shahraki proposed a multi-trial vector-based moth-flame optimization (MTV-MFO) algorithm. In this algorithm, the MFO movement strategy is substituted by the multi-trial vector (MTV) approach, which uses a combination of different movement strategies, each adjusted to accomplish a particular behavior. The MTV-MFO algorithm uses three different search strategies to improve the global search ability, maintain the balance between exploration and exploitation, and prevent the original MFO from premature convergence in the optimization process [38].

3. The Sand Cat Swarm Optimization Algorithm (SCSO)

3.1. Initialize Population

Each sand cat is a 1 × dim array in the dim dimension optimization problem. It represents the solution to the problem, as shown in Figure 2. In a set of variable values (Pos1, Pos2, …, Posdim), each Pos must lie between the lower and upper boundary. In the initialization algorithm, an initialization matrix is created according to the size of the problem (N × dim). In addition, the corresponding solution will be output in each iteration. The current solution will be replaced if the next output value is better. If no better solution is found in the next iteration, the solution of this iteration will not be stored.

3.2. Search for Prey (Exploration Stage)

The position of each sand cat is expressed as Posi. The SCSO algorithm benefits from the hearing ability of sand cats in low-frequency detection: each sand cat can sense low frequencies below 2 kHz. Therefore, in the mathematical model, the sensitivity rG is defined by Formula (1), so that the sensitivity range of the sand cat decreases from 2 to 0 kHz. In addition, the parameter R is obtained according to Formula (2), which controls the exploration and exploitation ability of the algorithm.
rG = SM − (SM × t / T)    (1)
R = 2 × rG × rand(0,1) − rG    (2)
where SM is 2, t is the current iteration number, and T is the maximum iteration number.
Each sand cat will randomly find a new location within the sensitivity range when searching for prey. This is more conducive to the exploration and exploitation of algorithms. To avoid falling into the local optimum, each sand cat’s sensitivity range (r) is different. As shown in Formula (3).
r = rG × rand(0,1)    (3)
where rG guides each sand cat's sensitivity range r.
Each sand cat will search for the position of prey according to the optimal candidate position (Posbc), current position (Posc(t)), and its sensitivity range (r). The specific Formula is shown in (4).
Pos(t + 1) = r × (Posbc(t) − rand(0,1) × Posc(t))    (4)
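The exploration step above (Formulas (1)–(4)) can be sketched in a few lines of NumPy. This is a minimal reading of the update rule, not the authors' implementation; the function name `scso_explore` and the `rng` argument are our own.

```python
import numpy as np

def scso_explore(pos_c, pos_bc, t, T, s_M=2.0, rng=np.random.default_rng()):
    """One exploration step of SCSO, a sketch of Formulas (1)-(4).

    pos_c  : current position of the sand cat (length-dim array)
    pos_bc : best candidate position found so far
    t, T   : current and maximum iteration numbers
    """
    r_G = s_M - (s_M * t / T)      # Formula (1): decays linearly from 2 to 0
    r = r_G * rng.random()         # Formula (3): this cat's sensitivity range
    # Formula (4): move relative to a randomly weighted best candidate
    return r * (pos_bc - rng.random() * pos_c)
```

As rG decays, r shrinks and the step narrows; by Formula (2), a small rG also makes |R| ≤ 1 more likely, switching the algorithm toward exploitation.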

3.3. Attack Prey (Exploitation Stage)

The distance (Posrnd) between the sand cat and its prey is given by Formula (5) to simulate the process of the sand cat attacking prey. Assume that the sensitivity range of the sand cat is a circle; the direction of movement is selected as a random angle (α) using the Roulette Wheel selection algorithm. Since the random angle is between 0° and 360°, its cosine value is between −1 and 1. In this way, each sand cat can move in different circumferential directions in the search space, as shown in Figure 3. Then, the prey is attacked according to Formula (6). In this way, the sand cat can approach the prey position faster.
Posrnd = |rand(0,1) × Posb(t) − Posc(t)|    (5)
Pos(t + 1) = Posb(t) − r × Posrnd × cos(α)    (6)
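A matching sketch of the exploitation step (Formulas (5) and (6)); the uniform random angle below stands in for the roulette-wheel angle selection described above, and the names are ours.

```python
import numpy as np

def scso_attack(pos_c, pos_b, r, rng=np.random.default_rng()):
    """One exploitation step of SCSO, a sketch of Formulas (5)-(6).

    pos_c : current position; pos_b : best (prey) position;
    r     : sensitivity range from Formula (3)
    """
    # Formula (5): random distance between the sand cat and its prey
    pos_rnd = np.abs(rng.random() * pos_b - pos_c)
    # Random circumferential direction (0 to 360 degrees), cos(alpha) in [-1, 1]
    alpha = 2.0 * np.pi * rng.random()
    return pos_b - r * pos_rnd * np.cos(alpha)   # Formula (6)
```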

3.4. Implementation of the SCSO Algorithm

The SCSO algorithm regulates exploration and exploitation by controlling the adaptive parameters rG and R. Formula (1) shows that rG decreases linearly from 2 to 0 during the iterations. Therefore, the parameter R is a random value in [−4, 4]. The sand cat will attack prey when |R| is less than or equal to 1; otherwise, the sand cat will search for prey, as shown in Formula (7).
Pos(t + 1) =
  { r × (Posbc(t) − rand(0,1) × Posc(t)),   |R| > 1 (exploration)
  { Posb(t) − r × Posrnd × cos(α),          |R| ≤ 1 (exploitation)    (7)
Formula (7) shows the location update of each sand cat during the exploration and exploitation stages. When |R| ≤ 1, the sand cat attacks its prey; otherwise, the task of the sand cat is to find new prey in the global area. The pseudo-code is shown in Algorithm 1.
Algorithm 1. Sand Cat Swarm Optimization Algorithm Pseudo-Code
Initialize the population
Calculate the fitness function based on the objective function
Initialize the r, rG, and R
While (t ≤ maximum iteration)
 For each search agent
  Obtain a random angle based on the Roulette Wheel Selection (0° ≤ α ≤ 360°)
  If (abs(R) > 1)
   Update the search agent position based on Formula (4)
  Else
   Update the search agent position based on Formula (6)
 End
t = t + 1
End

4. The Modified Sand Cat Swarm Optimization Algorithm (MSCSO)

4.1. Wandering Strategy

4.1.1. Triangle Walk Strategy

The triangle walk strategy lets the sand cat wander as it approaches its prey. First, obtain the distance L1 between the sand cat and its prey. Then, obtain the step size L2 of the sand cat and define the sand cat's walking direction (β) according to Formula (10). L1 and L2 are shown in Formulas (8) and (9). After that, calculate the offset P between the position obtained by walking and the prey with Formula (11); see Figure 4a for details. Finally, the new position of the sand cat is obtained by Formula (12).
L1 = Posb(t) − Posc(t)    (8)
L2 = rand() × L1    (9)
β = 2 × π × rand()    (10)
P = √(L1² + L2² − 2 × L1 × L2 × cos(β))    (11)
Posnew = Posb(t) + r × P    (12)
Among them, Posnew is the position obtained through the walking strategy.
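Formulas (8)–(12) can be condensed into one short routine. This is a hedged sketch applied element-wise per dimension (our reading of the vector formulas), not the authors' code.

```python
import numpy as np

def triangle_walk(pos_c, pos_b, r, rng=np.random.default_rng()):
    """Triangle walk (TW) step, a sketch of Formulas (8)-(12)."""
    L1 = pos_b - pos_c                  # Formula (8): distance to the prey
    L2 = rng.random() * L1              # Formula (9): step within that distance
    beta = 2.0 * np.pi * rng.random()   # Formula (10): walking direction
    # Formula (11): law of cosines gives the walk offset P (non-negative,
    # since L1^2 + L2^2 - 2*L1*L2*cos(beta) >= (L1 - L2)^2 >= 0)
    P = np.sqrt(L1**2 + L2**2 - 2.0 * L1 * L2 * np.cos(beta))
    return pos_b + r * P                # Formula (12): new candidate position
```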

4.1.2. Levy Flight Walk Strategy

When attacking prey, the sand cat is very close to its prey. Levy flight is a very effective mathematical method for providing random factors, and it provides a walking pattern that conforms to the Levy distribution. However, the step length of a Levy flight is sometimes too long. To better match the behavior of sand cats attacking prey, the Levy flight step is multiplied by a constant C = 0.35. This allows the sand cat to walk as close to its prey as possible, as shown in Figure 4b. The Levy flight walk strategy is shown in Formula (13).
Posnew = Posb(t) + (Posb(t) − Posc(t)) × C × Levy    (13)
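A sketch of the LFW step. The paper does not spell out its Levy-step generator, so Mantegna's algorithm with beta = 1.5 (a common choice in the literature) is assumed here; everything else follows Formula (13).

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=np.random.default_rng()):
    """Levy-distributed step via Mantegna's algorithm (assumed generator)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def levy_flight_walk(pos_c, pos_b, C=0.35, rng=np.random.default_rng()):
    """Levy flight walk (LFW), Formula (13): a short hop around the prey."""
    return pos_b + (pos_b - pos_c) * C * levy_step(len(pos_b), rng=rng)
```

The constant C = 0.35 damps the occasionally very long Levy steps so the new position stays near the prey.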

4.2. Lens Opposition-Based Learning

The main idea of lens opposition-based learning comes from the principle of convex lens imaging. The search range is expanded by generating a reverse position based on the current coordinates [39], which can be seen in Figure 5. In two-dimensional coordinates, the search range of the x-axis is (a, b) and the y-axis represents a convex lens. Suppose that the projection of object A on the x-axis is x and the height is h. Through lens imaging, the image on the other side is A*, A* is projected on the x-axis as x*, and the height is h*. Through the above analysis, we can calculate the reverse projection x* of x.
In Figure 5, x takes o as the base point to obtain its corresponding reverse point x*, which can be obtained from the lens imaging principle.
((a + b)/2 − x) / (x* − (a + b)/2) = h / h*    (14)
Let k = h/h* to obtain Formula (15) for lens opposition-based learning.
xj* = (aj + bj)/2 + (aj + bj)/(2k) − xj/k    (15)
where xj is the individual's position in the jth dimension and xj* is the inverse solution of xj; aj and bj are the minimum and maximum boundaries of dimension j in the search space.
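Formula (15) is a one-liner in code. A minimal sketch (names ours); with k = 1 it reduces to plain opposition-based learning, x* = a + b − x.

```python
import numpy as np

def lobl(x, a, b, k=1.0):
    """Lens opposition-based learning, Formula (15), applied per dimension.

    x    : position component(s) x_j
    a, b : lower and upper bounds of the dimension
    k    : scaling factor k = h / h* of the lens
    """
    return (a + b) / 2.0 + (a + b) / (2.0 * k) - x / k
```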

4.3. Implementation of the MSCSO Algorithm

Initialization: In the initialization phase, initialize the population size N, dimension dim, iteration number T, and initialize the population as shown in Formula (16).
posi,j = (ubj − lbj) × rand + lbj    (16)
where ubj is the upper bound of individual i in dimension j, lbj is the lower bound of individual i in dimension j, and rand is a random number in [0, 1].
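Formula (16) is the usual uniform initialization; a minimal sketch, assuming the bounds may be given per dimension:

```python
import numpy as np

def init_population(N, dim, lb, ub, rng=np.random.default_rng()):
    """Population initialization, Formula (16): uniform sampling in [lb, ub].

    lb, ub may be scalars or length-dim arrays of per-dimension bounds.
    """
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (dim,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (dim,))
    return (ub - lb) * rng.random((N, dim)) + lb
```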
Search for prey: The hunting behavior of the sand cat is affected by the parameter R. When |R| is greater than 1, it means that the prey is far away. At this time, the sand cat will search for prey according to the sensitivity range, as shown in Formula (4).
Triangle walk strategy (TW): While searching for its prey, the sand cat does not rely only on its sensitivity range. Through the triangle walk strategy, the sand cat can choose a walking angle to randomly obtain a new position. The update is shown in Formula (12).
Attack prey: When |R| is less than or equal to 1, the sand cat attacks its prey. Sand cats attack by selecting an angle through the Roulette Wheel selection algorithm within the sensitivity range (r), as shown in Formula (6).
Levy flight walk strategy (LFW): In the stage of attacking prey, the sand cat is close to the optimal solution, which tends to make the population concentrate on the local optimal solution and fail to find a better one. The Levy flight provides a walking pattern that conforms to the Levy distribution and makes the sand cat more mobile. The specific implementation is shown in Formula (13).
Lens opposition-based learning (LOBL): To further enhance the global exploration ability of the MSCSO algorithm, lens opposition-based learning is applied when updating the location, as shown in Formula (15).
Update population position: The location is updated by comparing fitness values. When the fitness value obtained from the update is better, the original individual is replaced; otherwise, the original individual is retained.
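The greedy replacement described above can be written as a small helper (a sketch; minimization is assumed, as in the benchmark functions):

```python
import numpy as np

def greedy_update(pos, fit, pos_new, objective):
    """Keep the new position only if its fitness is better (minimization)."""
    fit_new = objective(pos_new)
    if fit_new < fit:
        return pos_new, fit_new       # new individual replaces the original
    return pos, fit                   # original individual is retained
```

For example, pairing it with the sphere function f(x) = Σx² keeps whichever candidate is closer to the origin.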
The pseudo-code of the MSCSO algorithm is shown in Algorithm 2.
Algorithm 2. The Modified Sand Cat Swarm Optimization Algorithm Pseudo-Code
Initialize the population according to Formula (16)
Calculate the fitness function based on the objective function
Initialize the r, rG, and R
While (t ≤ maximum iteration)
 For each search agent
  Obtain a random angle based on the Roulette Wheel Selection (0° ≤ α ≤ 360°).
  If (abs(R) > 1)
   Update the search agent position based on Formula (4)
   Use Formula (12) for the triangle walk strategy to obtain a new position
  Else
   Update the search agent position based on Formula (6)
   Use Formula (13) to carry out the Levy flight walk strategy to obtain a new position
 End
Conduct the lens opposition-based learning strategy according to Formula (15)
t = t + 1
End
The flow chart of the MSCSO algorithm is shown in Figure 6:

4.4. Complexity Analysis

The time complexity depends on the population size of the sand cat (N), the dimension of the given problem (dim), the number of iterations of the algorithm (T), and the evaluation cost required to solve the function (C). Therefore, the time complexity of the MSCSO algorithm is shown in Formula (17).
O(MSCSO) = O(define parameters) + O(population initialization) + O(function evaluation cost) + O(location update)    (17)
The specific definitions of each complexity are:
(1)
The initialization parameter time is O(1).
(2)
Initialization of population position time O(N × dim).
(3)
Time required for sand cats to prey O(T × N × dim).
(4)
Time required for position update of lens opposition-based learning O(T × N × dim).
(5)
The cost time of the calculation function includes the calculation time cost of the algorithm itself O(T × N × C), the calculation time cost of walk strategy O(T × N × C), and the calculation time cost of lens opposition-based learning O(T × N × C). Total O(3 × T × N × C).
Therefore, the time complexity of the MSCSO algorithm is:
O(MSCSO) = O(1 + N × dim + 3 × T × N × C + 2 × T × N × dim)    (18)
Because 1 << T × N × C, 1 << T × N × dim, N × dim << T × N × C, and N × dim << T × N × dim, Formula (18) can be simplified to Formula (19).
O(MSCSO) ≈ O(3 × T × N × C + 2 × T × N × dim)    (19)

5. Experimental Results and Discussion

All the experiments in this paper were completed on a computer with an 11th Gen Intel(R) Core(TM) i7-11700 processor with a base frequency of 2.50 GHz, 16 GB of memory, and a 64-bit Windows 11 operating system, using MATLAB R2021a.
To verify the optimization effect of the MSCSO algorithm, this paper uses 23 standard benchmark functions and CEC2014 benchmark functions to verify the performance of the MSCSO algorithm. To better show the optimization effect, the MSCSO algorithm is compared with Sand Cat Swarm Optimization (SCSO) [26], the Arithmetic Optimization Algorithm (AOA) [40], Bald Eagle Search (BES) [41], the Whale Optimization Algorithm (WOA) [6], the Remora Optimization Algorithm (ROA) [4], the Sine Cosine Algorithm (SCA) [15], the Sooty Tern Optimization Algorithm (STOA) [42], and Genetic Algorithms (GA) [8]. The parameter settings of these algorithms are shown in Table 1.

5.1. Experiments on the 23 Standard Benchmark Functions

The 23 standard benchmark functions are shown in Table 2. This benchmark set contains seven unimodal, six multimodal, and ten fixed-dimension multimodal functions, where F is the mathematical function, dim is the dimension, Range is the interval of the search space, and Fmin is the optimal value the corresponding function can achieve, as seen in Figure 7. In this experiment, the population size was set to N = 30, the spatial dimension to dim = 30/500, and the maximum number of iterations to T = 500. The MSCSO algorithm and the eight comparison algorithms were independently run thirty times to obtain each algorithm's best fitness, average fitness, and standard deviation.

5.1.1. Result Statistics and Convergence Curve Analysis of the 23 Standard Reference Functions

Table 3 shows the statistical results of the nine algorithms on the twenty-three standard benchmark functions. In the table, the MSCSO algorithm obtains the theoretical optimal values on F1–F4. The BES obtains the theoretical optimal value on F1, and the ROA also shows good convergence ability on F1. In 30 dimensions, the AOA achieves the best result on F2. On F5–F6, the MSCSO algorithm's best and mean values are second only to the BES. On F7, the MSCSO algorithm obtains the optimal fitness value and is very stable. On F8, the MSCSO algorithm is inferior to the WOA, the ROA, and the BES but superior to the other comparison algorithms. The MSCSO algorithm achieves the theoretical optimum on F9–F11, a dramatic improvement over the SCSO algorithm. On F12–F13, the MSCSO algorithm does not obtain the best fitness value, but it still achieves a good one. In the 30 dimensions of F13, the GA obtains better results, indicating that the GA also has a good optimization effect. The functions F14–F23 are relatively simple, and it is easy to find a good fitness value, but they still test the optimization ability of an algorithm; the MSCSO algorithm obtains the optimal fitness value on these composite functions. The above analysis shows that the SCSO equipped with the TW, LFW, and LOBL strategies, i.e., the MSCSO algorithm, has a better optimization effect.
The analysis of Table 3 alone cannot fully demonstrate the optimization effect of the MSCSO algorithm on the 23 standard benchmark functions. To better understand it, Figure 8, Figure 9 and Figure 10 show the convergence curves of each algorithm. It can be seen from the figures that the MSCSO algorithm has a strong convergence ability on F1–F4 and finds the optimal value quickly. There is only a small gap between the algorithms on F5. On F6 and F12 in Figure 8, the MSCSO algorithm can jump out of the local optimum in the later stage so that it converges better: the wandering strategy gives the sand cat group stronger mobility, and the exploration ability of the MSCSO algorithm is further enhanced by lens opposition-based learning. It can be concluded that the MSCSO algorithm has a better optimization effect than the SCSO algorithm on these functions. On F7, the MSCSO algorithm can quickly find an excellent fitness value, which shows that its enhanced exploration ability can find better solutions. On F9–F11, the MSCSO algorithm finds the optimal value more quickly than the other comparison algorithms. On F14–F23, all the algorithms can find good fitness values, but F14, F15, F21, F22, and F23 show that the MSCSO algorithm is more excellent. According to the comprehensive analysis of the tables and figures, the MSCSO algorithm is more stable and can find better values.

5.1.2. Analysis of the Wilcoxon Rank Sum Test Results

The Wilcoxon rank sum test is a nonparametric statistical test that makes no assumptions about the underlying data distribution. Table 3 gives the best fitness value, average value, and standard deviation of each algorithm but does not statistically compare the result distributions of the algorithms; therefore, the Wilcoxon rank sum test is required for further verification. Table 4 shows the test results of the MSCSO algorithm against the eight other algorithms over thirty runs on the twenty-three standard benchmark functions. The significance level is 5%; a p-value below 5% indicates a significant difference between the two algorithms. It can be seen from the table that most test results are below 5%, but some are above. Many results equal one on F9–F11 because many algorithms can find the optimal value there, so the final optimal fitness values coincide. The MSCSO and BES algorithms have many results above 5% on the unimodal functions, which shows that both algorithms have good convergence ability on these functions. Many algorithms can find a good value on F14 because the function is relatively simple. On the remaining functions, the MSCSO algorithm differs significantly from the other algorithms. Overall, the MSCSO algorithm achieved good results in the Wilcoxon rank sum test.
The above experimental analysis shows that the MSCSO algorithm has a good optimization effect in the 23 standard benchmark functions. Compared with the SCSO algorithm, it has excellent improvement. Compared with other comparison algorithms, it also has more significant advantages.

5.2. Experiments on the CEC2014 Benchmark Function

The 23 standard benchmark functions are simple test functions and are insufficient to fully prove the optimization performance of the MSCSO algorithm. To verify it thoroughly, this section uses the CEC2014 benchmark functions for testing; Table 5 introduces them in detail. The population size of each algorithm is N = 30, the maximum number of iterations is T = 500, and the dimension is dim = 10. Each algorithm runs thirty times independently to obtain its best value, average value, and standard deviation.
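The per-algorithm statistics reported in the tables (best, average, and standard deviation over thirty independent runs) can be collected as follows; `toy_run` is a hypothetical stand-in for one complete optimizer run, where a real experiment would instead run MSCSO with T = 500 iterations and N = 30 search agents:

```python
import random
import statistics

def run_stats(one_run, runs=30):
    """Best, mean, and sample standard deviation over independent runs,
    i.e., the three statistics reported per algorithm in the tables."""
    finals = [one_run(random.Random(seed)) for seed in range(runs)]
    return min(finals), statistics.mean(finals), statistics.stdev(finals)

def toy_run(rng, iters=500):
    # Hypothetical stand-in: returns the final best fitness of one run.
    return min(rng.uniform(0.0, 1.0) for _ in range(iters))

best, avg, std = run_stats(toy_run)
assert best <= avg and std >= 0.0
```

Seeding each run separately keeps the thirty runs independent yet reproducible.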

5.2.1. The CEC2014 Benchmark Function Results Statistics and Image Analysis

Table 6 shows the statistical results of the MSCSO algorithm and the eight comparison algorithms on the CEC2014 benchmark functions; the comparison data are taken from the literature [43]. The table shows that the MSCSO algorithm achieved good results on the CEC2014 benchmark functions. On CEC1–CEC3, the MSCSO algorithm obtains better fitness values than the comparison algorithms; only on CEC2 is its stability inferior to that of the WOA. On CEC4–CEC8, the MSCSO algorithm obtains better fitness values but is less stable: the walking strategy may find a better solution without guaranteeing a better fitness value, although the solutions found are generally superior to those of the other algorithms. On CEC9, the STOA algorithm obtains a better fitness value. On CEC10–CEC16, the MSCSO algorithm obtains better fitness values, and only the standard deviation on some of these functions is insufficient. On CEC17–CEC30, the MSCSO algorithm has a very significant optimization effect: the standard deviations on CEC22 and CEC27 are inferior to those of the SCA and ROA, and the average fitness value and standard deviation on CEC24 are insufficient, but on the other functions the MSCSO algorithm achieves the optimal value. The analysis of Table 6 shows that adding the walking strategy and lens opposition-based learning improves the exploration ability of the algorithm and gives the MSCSO algorithm stronger optimization ability.
Figure 11 shows the convergence curves of the MSCSO algorithm and the eight comparison algorithms on the CEC2014 benchmark functions. The MSCSO algorithm has better convergence ability. On the unimodal functions CEC1–CEC3, the MSCSO algorithm finds a better location and converges continuously, whereas the SCSO algorithm tends to fall into the local optimum and the convergence curves of the other comparison algorithms remain inferior. On the simple multimodal functions, the MSCSO algorithm also has better global optimization capability. The convergence curves of CEC4–CEC17 show that the MSCSO algorithm finds a better position on many functions and converges quickly: with the TW, LFW, and LOBL strategies, it has stronger exploration ability and can jump out of local optima to obtain better fitness values. On CEC17–CEC30, many algorithms are trapped in local optima and cannot converge well, but the MSCSO algorithm finds a better location on CEC17, CEC18, CEC20, CEC21, CEC24, and CEC25, which lets it converge closer to the best solution.

5.2.2. Analysis of Box Plot Results

A box plot is a statistical chart that describes data using five statistics: the minimum, lower quartile, median, upper quartile, and maximum. The top and bottom line segments represent the maximum and minimum values of the data, the upper and lower edges of the box represent the third and first quartiles, respectively, and the thick line in the middle represents the median. A box plot intuitively displays outliers, the degree of dispersion, and the symmetry of the data. Figure 12 shows the box plots obtained after thirty independent runs of the nine algorithms. The boxes of the MSCSO algorithm are very narrow and remain at the lowest point. Compared with the SCSO algorithm, the MSCSO algorithm obtains lower boxes, and compared with the GA, it has a better optimization effect. Some box plots differ little because good values are easy to find on those functions, resulting in small variance. In general, the MSCSO algorithm achieves better box plot results.
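The five statistics listed above can be computed directly. The following sketch, with made-up fitness values, also applies the common 1.5 × IQR whisker rule that box plots use to flag outliers:

```python
import statistics

def five_number_summary(data):
    """The five statistics a box plot draws: minimum, first quartile (Q1),
    median, third quartile (Q3), and maximum."""
    q1, median, q3 = statistics.quantiles(data, n=4, method="inclusive")
    return min(data), q1, median, q3, max(data)

# Hypothetical fitness values from eight runs; 3.5 plays the role of an outlier.
runs = [0.9, 0.95, 1.0, 1.05, 1.1, 1.15, 1.2, 3.5]
lo, q1, med, q3, hi = five_number_summary(runs)
iqr = q3 - q1
# The usual whisker rule: points beyond 1.5 * IQR from the box edges
# are drawn individually as outliers.
outliers = [v for v in runs if v < q1 - 1.5 * iqr or v > q3 + 1.5 * iqr]
```

A narrow box with a low median, as described for MSCSO in Figure 12, corresponds to a small IQR around a good fitness value.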

5.2.3. Analysis of the Wilcoxon Rank Sum Test Results

The above analysis shows that the MSCSO algorithm achieved good results on the CEC2014 benchmark functions. Table 7 shows that the similarity between the MSCSO algorithm and the eight comparison algorithms is low: most p-values are below 0.05. However, on Hybrid Function 1 and some of the Composition Functions, the p-values exceed 0.05, meaning that the fitness values obtained by the comparison algorithms on these functions do not differ significantly from those of the MSCSO algorithm. Many p-values equal to 1 occur for CEC23 and CEC28, meaning that the MSCSO algorithm achieves the same fitness values as the comparison algorithms there. A few other functions also yield p-values above 0.05, indicating that the difference between the values obtained by the MSCSO algorithm and the comparison algorithms is small on those functions. Nevertheless, most p-values are below 0.05, which indicates that the MSCSO algorithm differs significantly from the comparison algorithms on most functions.

6. Constrained Engineering Design Problems

Section 5 verified the optimization performance of the MSCSO algorithm; this section verifies its practical effect on engineering problems. Seven engineering problems are selected for testing, with the specific experimental results given below.

6.1. Pressure Vessel Design Problem

The purpose of pressure vessel design is to minimize the total cost of a cylindrical pressure vessel. The schematic diagram of the pressure vessel is shown in Figure 13. The design variables are the shell thickness Ts, head thickness Th, inner radius R, and vessel length L. The minimum cost of the pressure vessel is obtained subject to the constraints.
Consider:
x = [x1 x2 x3 x4] = [Ts Th R L]
Objective function:
f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3^2 + 3.1661 x1^2 x4 + 19.84 x1^2 x3
Subject to:
g1(x) = −x1 + 0.0193 x3 ≤ 0
g2(x) = −x2 + 0.00954 x3 ≤ 0
g3(x) = −π x3^2 x4 − (4/3) π x3^3 + 1,296,000 ≤ 0
g4(x) = x4 − 240 ≤ 0
Variable range:
0 ≤ x1 ≤ 99, 0 ≤ x2 ≤ 99, 10 ≤ x3 ≤ 200, 10 ≤ x4 ≤ 200
The results of the pressure vessel design problem are shown in Table 8, which shows that the MSCSO algorithm solves this engineering problem well. The MSCSO algorithm obtains Ts = 0.742406, Th = 0.370292, R = 40.31962, and L = 200, resulting in a minimum cost of 5734.915. Among the other comparison algorithms, eight achieve cost values greater than 6000 and four less than 6000; all of the resulting costs are greater than that of the MSCSO algorithm.
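The formulation above is straightforward to evaluate in code. The sketch below scores a design with a static penalty for constraint violations; the paper does not state which constraint-handling scheme it uses, so the penalty term here is an illustrative assumption, and the point evaluated is a classical feasible design from the literature on this problem rather than the Table 8 solution:

```python
import math

def vessel_cost(x):
    # Objective f(x) from the formulation above (total cost of the vessel).
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def vessel_constraints(x):
    # g1..g4, each required to satisfy g(x) <= 0.
    x1, x2, x3, x4 = x
    return [-x1 + 0.0193 * x3,
            -x2 + 0.00954 * x3,
            -math.pi * x3**2 * x4 - (4.0 / 3.0) * math.pi * x3**3 + 1296000.0,
            x4 - 240.0]

def penalized_cost(x, penalty=1e6):
    """Static-penalty fitness: an illustrative way a metaheuristic can
    fold the constraints into the objective it minimizes."""
    violation = sum(max(0.0, g) for g in vessel_constraints(x))
    return vessel_cost(x) + penalty * violation

# A classical design for this problem, quoted from the literature.
x = (0.8125, 0.4375, 42.0984, 176.6366)
cost = vessel_cost(x)  # roughly 6060
```

A population-based optimizer would minimize `penalized_cost` directly, so infeasible candidates are heavily discouraged without being discarded.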

6.2. Speed Reducer Design Problem

The goal of the speed reducer design is to find the minimum mass of the reducer subject to four design constraints: the bending stress of the gear teeth, the surface stress, the lateral deflection of the shafts, and the stress in the shafts. This problem has seven variables, namely the width of the tooth surface x1, the gear module x2, the number of teeth on the pinion x3, the length of the first shaft between bearings x4, the length of the second shaft between bearings x5, the diameter of the first shaft x6, and the diameter of the second shaft x7. The schematic diagram of the variables is shown in Figure 14.
The mathematical formulation of this problem is shown below:
Consider:
x = [x1 x2 x3 x4 x5 x6 x7]
Objective function:
f(x) = 0.7854 x1 x2^2 (3.3333 x3^2 + 14.9334 x3 − 43.0934) − 1.508 x1 (x6^2 + x7^2) + 7.4777 (x6^3 + x7^3) + 0.7854 (x4 x6^2 + x5 x7^2)
Subject to:
g1(x) = 27/(x1 x2^2 x3) − 1 ≤ 0
g2(x) = 397.5/(x1 x2^2 x3^2) − 1 ≤ 0
g3(x) = 1.93 x4^3/(x2 x3 x6^4) − 1 ≤ 0
g4(x) = 1.93 x5^3/(x2 x3 x7^4) − 1 ≤ 0
g5(x) = (1/(110 x6^3)) sqrt((745 x4/(x2 x3))^2 + 16.9 × 10^6) − 1 ≤ 0
g6(x) = (1/(85 x7^3)) sqrt((745 x5/(x2 x3))^2 + 157.5 × 10^6) − 1 ≤ 0
g7(x) = x2 x3/40 − 1 ≤ 0
g8(x) = 5 x2/x1 − 1 ≤ 0
g9(x) = x1/(12 x2) − 1 ≤ 0
g10(x) = (1.5 x6 + 1.9)/x4 − 1 ≤ 0
g11(x) = (1.1 x7 + 1.9)/x5 − 1 ≤ 0
Boundaries:
2.6 ≤ x1 ≤ 3.6, 0.7 ≤ x2 ≤ 0.8, 17 ≤ x3 ≤ 28, 7.3 ≤ x4 ≤ 8.3, 7.3 ≤ x5 ≤ 8.3, 2.9 ≤ x6 ≤ 3.9, 5 ≤ x7 ≤ 5.5
In Table 9, the MSCSO algorithm obtains a final weight of 2995.438, ranking first among the compared algorithms and showing a clear improvement over the others.

6.3. Welded Beam Design Problem

The welded beam design problem is to minimize the cost of the welded beam subject to seven constraints on four decision variables: the weld width h, connecting beam length l, beam height t, and connecting beam thickness b. See Figure 15 for details.
The mathematical formulation of this problem is shown below:
Consider:
x = [x1 x2 x3 x4] = [h l t b]
Objective function:
f(x) = 1.10471 x1^2 x2 + 0.04811 x3 x4 (14.0 + x2)
Subject to:
g1(x) = τ(x) − τmax ≤ 0
g2(x) = σ(x) − σmax ≤ 0
g3(x) = δ(x) − δmax ≤ 0
g4(x) = x1 − x4 ≤ 0
g5(x) = P − Pc(x) ≤ 0
g6(x) = 0.125 − x1 ≤ 0
g7(x) = 1.10471 x1^2 + 0.04811 x3 x4 (14.0 + x2) − 0.5 ≤ 0
where:
τ(x) = sqrt((τ′)^2 + 2 τ′ τ″ (x2/(2R)) + (τ″)^2), τ′ = P/(sqrt(2) x1 x2), τ″ = M R/J,
M = P (L + x2/2), R = sqrt(x2^2/4 + ((x1 + x3)/2)^2), σ(x) = 6 P L/(x4 x3^2),
J = 2 {sqrt(2) x1 x2 [x2^2/4 + ((x1 + x3)/2)^2]}, δ(x) = 6 P L^3/(E x4 x3^2),
Pc(x) = (4.013 E sqrt(x3^2 x4^6/36)/L^2) (1 − (x3/(2L)) sqrt(E/(4G))),
P = 6000 lb, L = 14 in, δmax = 0.25 in, E = 30 × 10^6 psi, G = 12 × 10^6 psi,
τmax = 13,600 psi, and σmax = 30,000 psi
Boundaries:
0.1 ≤ xi ≤ 2, i = 1, 4; 0.1 ≤ xi ≤ 10, i = 2, 3
The results of the welded beam design problem are shown in Table 10. The MSCSO algorithm obtains a weld width h = 0.205723, connecting beam length l = 3.253494, beam height t = 9.036686, and connecting beam thickness b = 0.205731. Compared with the other algorithms, the MSCSO algorithm obtains the minimum cost, 1.695309.

6.4. Tension/Compression Spring Design Problem

The purpose of the tension/compression spring design is to reduce the mass of the spring subject to three variables and four constraints. The constraints include the minimum deflection (g1), shear stress (g2), surge frequency (g3), and outer diameter limit (g4). The corresponding variables are the wire diameter d, mean coil diameter D, and number of active coils N. f(x) is the minimum spring mass. See Figure 16 for details.
The mathematical formulation of this problem is shown below:
Consider:
x = [x1 x2 x3] = [d D N]
Objective function:
f(x) = (x3 + 2) x2 x1^2
Subject to:
g1(x) = 1 − (x2^3 x3)/(71,785 x1^4) ≤ 0
g2(x) = (4 x2^2 − x1 x2)/(12,566 (x2 x1^3 − x1^4)) + 1/(5108 x1^2) − 1 ≤ 0
g3(x) = 1 − (140.45 x1)/(x2^2 x3) ≤ 0
g4(x) = (x1 + x2)/1.5 − 1 ≤ 0
Boundaries:
0.05 ≤ x1 ≤ 2.0, 0.25 ≤ x2 ≤ 1.3, 2.0 ≤ x3 ≤ 15.0
As can be seen in Table 11, the weight obtained by each algorithm is relatively small, which stringently tests the accuracy of the algorithms on this engineering problem. The MSCSO algorithm achieves the minimum weight among these algorithms, 0.009872, showing that it solves this engineering problem more accurately.

6.5. Cantilever Beam Design Problem

The optimization purpose of the cantilever beam design is to minimize the weight of the cantilever, where the decision variables are the heights (or widths) of five hollow square blocks with constant thickness. The model of the cantilever beam is shown in Figure 17.
The mathematical formulation of this problem is shown below:
Consider:
x = [x1 x2 x3 x4 x5]
Objective function:
f(x) = 0.0624 (x1 + x2 + x3 + x4 + x5)
Subject to:
g(x) = 61/x1^3 + 37/x2^3 + 19/x3^3 + 7/x4^3 + 1/x5^3 − 1 ≤ 0
Boundaries:
0.01 ≤ xi ≤ 100 (i = 1, 2, ⋯, 5)
The statistical results of the cantilever beam design are shown in Table 12. The xi (i = 1, 2, ⋯, 5) obtained by the MSCSO algorithm decrease gradually, which conforms to the design of the cantilever beam, and a minimum weight of 1.33995853466334 is finally obtained. Compared with the data of the other algorithms, the data obtained by the MSCSO algorithm are more consistent with the characteristics of this engineering problem.
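The cantilever formulation is compact enough to check directly. The sketch below evaluates the objective and the single constraint at a near-optimal design quoted (rounded) from the literature on this problem; at the optimum the constraint is active, i.e., g(x) ≈ 0:

```python
def cantilever_weight(x):
    # Objective f(x) = 0.0624 * (x1 + x2 + x3 + x4 + x5).
    return 0.0624 * sum(x)

def cantilever_constraint(x):
    # g(x) = 61/x1^3 + 37/x2^3 + 19/x3^3 + 7/x4^3 + 1/x5^3 - 1 <= 0.
    coeffs = (61.0, 37.0, 19.0, 7.0, 1.0)
    return sum(c / v**3 for c, v in zip(coeffs, x)) - 1.0

# A near-optimal design widely quoted for this problem (rounded here);
# note the monotonically decreasing section sizes, as discussed above.
x = (6.016, 5.309, 4.494, 3.502, 2.153)
weight = cantilever_weight(x)
assert abs(cantilever_constraint(x)) < 1e-2  # the constraint is active
```

The resulting weight lands close to the 1.3400 region, consistent with the value reported for MSCSO in Table 12.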

6.6. Multiple Disc Clutch Brake Problem

The purpose of the multiple disc clutch brake problem is to find the values of five variables that minimize the mass of a multiple disc clutch brake under eight constraints. The five variables are the inner radius ri, outer radius ro, brake disc thickness t, actuating force F, and number of friction surfaces Z. The specific model is shown in Figure 18.
The mathematical formulation of this problem is shown below:
Consider:
x = [x1 x2 x3 x4 x5] = [ri ro t F Z]
Objective function:
f(x) = π (ro^2 − ri^2) t (Z + 1) ρ  (ρ = 0.0000078)
Subject to:
g1(x) = ro − ri − Δr ≥ 0
g2(x) = lmax − (Z + 1)(t + δ) ≥ 0
g3(x) = Pmax − Prz ≥ 0
g4(x) = Pmax vsr,max − Prz vsr ≥ 0
g5(x) = vsr,max − vsr ≥ 0
g6(x) = Tmax − T ≥ 0
g7(x) = Mh − s Ms ≥ 0
g8(x) = T ≥ 0
Variable range:
60 ≤ x1 ≤ 80, 90 ≤ x2 ≤ 110, 1 ≤ x3 ≤ 3, 600 ≤ x4 ≤ 1000, 2 ≤ x5 ≤ 9
Other parameters:
Mh = (2/3) μ F Z (ro^3 − ri^3)/(ro^2 − ri^2), Prz = F/(π (ro^2 − ri^2)),
vsr = (2 π n (ro^3 − ri^3))/(90 (ro^2 − ri^2)), T = (Iz π n)/(30 (Mh + Mf))
Δr = 20 mm, Iz = 55 kg·mm^2, Pmax = 1 MPa, Fmax = 1000 N, Tmax = 15 s, μ = 0.5, s = 1.5, Ms = 40 Nm, Mf = 3 Nm, n = 250 rpm, vsr,max = 10 m/s, lmax = 30 mm
In Table 13, the weight obtained by the MSCSO algorithm is 0.235242, ranking first among the compared algorithms. The other algorithms are also effective, but the weights they obtain are larger. This proves that the MSCSO algorithm works well on this problem.

6.7. Car Crashworthiness Design Problem

This problem is also a minimization problem, with eleven variables subject to ten constraints. Figure 19 shows the finite element model of the problem. The decision variables are, respectively, the internal thickness of the B-pillar, the thickness of the B-pillar reinforcement, the internal thickness of the floor, the thickness of the cross beam, the thickness of the door beam, the thickness of the door belt line reinforcement, the thickness of the roof longitudinal beam, the internal material of the B-pillar, the internal material of the floor, the height of the obstacle, and the impact position of the obstacle. The constraints are, respectively, the abdominal load, the upper viscosity criterion, the middle viscosity criterion, the lower viscosity criterion, the upper rib deflection, the middle rib deflection, the lower rib deflection, the pubic symphysis force, the B-pillar midpoint velocity, and the front door velocity at the B-pillar.
The mathematical formulation of this problem is shown below:
Minimize:
f ( x ) = Weight ,
Subject to:
g1(x) = Fa (load in abdomen) ≤ 1 kN,
g2(x) = V × Cu (dummy upper chest) ≤ 0.32 m/s,
g3(x) = V × Cm (dummy middle chest) ≤ 0.32 m/s,
g4(x) = V × Cl (dummy lower chest) ≤ 0.32 m/s,
g5(x) = Δur (upper rib deflection) ≤ 32 mm,
g6(x) = Δmr (middle rib deflection) ≤ 32 mm,
g7(x) = Δlr (lower rib deflection) ≤ 32 mm,
g8(x) = Fp (pubic force) ≤ 4 kN,
g9(x) = VMBP (velocity of B-pillar at middle point) ≤ 9.9 mm/ms,
g10(x) = VFD (velocity of front door at B-pillar) ≤ 15.7 mm/ms,
Variable Range:
0.5 ≤ x1–x7 ≤ 1.5, x8, x9 ∈ (0.192, 0.345), −30 ≤ x10, x11 ≤ 30
Table 14 shows the statistical results of the car crashworthiness design problem. From the table, it can be concluded that the MSCSO algorithm obtains a better solution to this problem and determines the variable values more precisely.

7. Conclusions

The sand cat swarm optimization algorithm (SCSO) is a recently proposed swarm intelligence optimization algorithm that simulates the hunting process of sand cats. Each sand cat gradually moves close to its prey, but the SCSO algorithm has insufficient exploration ability in the later stage and easily falls into local optima, making convergence difficult. To solve this problem, this paper proposes a modified sand cat swarm optimization algorithm (MSCSO). The core of the MSCSO algorithm is a wandering strategy used while the sand cats hunt. When searching for prey, the triangle walking (TW) strategy enlarges the search range of the sand cat group: it first calculates the distance from the prey, then selects the walking direction through roulette wheel selection, and finally obtains the walking step length. This increases the exploration ability of the SCSO algorithm and makes the MSCSO algorithm more global. When attacking its prey, the sand cat walks through the Lévy flight walking (LFW) strategy to find a better position. After the wandering strategy is added, the global exploration ability of the SCSO algorithm is enhanced. Then, the lens opposition-based learning (LOBL) strategy is added to further enhance the optimization effect of the algorithm. The following conclusions can be drawn from the experimental performance evaluation and statistical analysis:
-
According to the experimental image analysis, the proposed TW, LFW, and LOBL strategies enhance the global exploration ability of the MSCSO algorithm.
-
According to the experimental statistics, the proposed TW, LFW, and LOBL strategies enhance the optimization performance of the MSCSO algorithm, which can find better solutions on most functions.
-
On the engineering problems, the MSCSO algorithm obtained better solutions than many other algorithms, which proves that MSCSO works well in solving engineering problems.
The MSCSO algorithm has strong exploration ability and can jump out of local optima to prevent premature convergence. However, its exploitation ability is relatively reduced, so the algorithm converges more slowly once a better location is found; nevertheless, it is greatly improved compared with SCSO. In future work, we will strengthen the exploitation capability of the MSCSO algorithm and apply it to UAV 3D path planning, text clustering, feature selection, scheduling in cloud computing, parameter estimation, image segmentation, intrusion detection, etc.

Author Contributions

Conceptualization, D.W. and H.R.; methodology, D.W.; software, H.J., H.R. and C.W.; validation, H.J. and D.W.; formal analysis, D.W., H.R. and C.W.; investigation, D.W. and H.J.; resources, Q.L. and D.W.; data curation, H.R. and C.W.; writing—original draft preparation, H.R. and D.W.; writing—review and editing, H.J. and L.A.; visualization, D.W. and H.J.; supervision, H.J. and D.W.; funding acquisition, D.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by National Education Science Planning Key Topics of the Ministry of Education—“Research on the core quality of applied undergraduate teachers in the intelligent age” (DIA220374).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the anonymous reviewers for helping us improve this paper’s quality.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Fearn, T. Particle swarm optimization. NIR News 2014, 25, 27.
2. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39.
3. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
4. Jia, H.; Peng, X.; Lang, C. Remora optimization algorithm. Expert Syst. Appl. 2021, 185, 115665.
5. Assiri, A.S.; Hussien, A.G.; Amin, M. Ant lion optimization: Variants, hybrids, and applications. IEEE Access 2020, 8, 77746–77764.
6. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
7. Hussien, A.G.; Amin, M.; Abd El Aziz, M. A comprehensive review of moth-flame optimisation: Variants, hybrids, and applications. J. Exp. Theor. Artif. Intell. 2020, 32, 705–725.
8. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73.
9. Banzhaf, W.; Koza, J.R. Genetic programming. IEEE Intell. Syst. 2000, 15, 74–84.
10. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713.
11. Jaderyan, M.; Khotanlou, H. Virulence Optimization Algorithm. Appl. Soft Comput. 2016, 43, 596–618.
12. Sinha, N.; Chakrabarti, R.; Chattopadhyay, P. Evolutionary programming techniques for economic load dispatch. IEEE Trans. Evol. Comput. 2003, 7, 83–94.
13. Storn, R.; Price, K. Differential Evolution–A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
14. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680.
15. Mirjalili, S. SCA: A Sine Cosine Algorithm for Solving Optimization Problems. Knowl.-Based Syst. 2016, 96, 120–133.
16. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2015, 27, 495–513.
17. Rashedi, E.; Nezamabadi-Pour, H.S. GSA: A Gravitational Search Algorithm. Inform. Sci. 2009, 179, 2232–2248.
18. Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inform. Sci. 2013, 222, 175–184.
19. Kaveh, A.; Dadras, A. A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Adv. Eng. Softw. 2017, 110, 69–84.
20. Kaveh, A.; Khayatazad, M. A new meta-heuristic method: Ray optimization. Comput. Struct. 2012, 112, 283–294.
21. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching-Learning-Based Optimization: An optimization method for continuous non-linear large scale problems. Inform. Sci. 2012, 183, 1–15.
22. Zhang, Y.; Jin, Z. Group teaching optimization algorithm: A novel metaheuristic method for solving global optimization problems. Expert Syst. Appl. 2020, 148, 113246.
23. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A New Heuristic Optimization Algorithm: Harmony Search. Simulation 2001, 2, 60–68.
24. Satapathy, S.; Naik, A. Social group optimization (SGO): A new population evolutionary optimization technique. Complex Intell. Syst. 2016, 2, 173–203.
25. Naser, G.; Ebrahim, B. Exchange market algorithm. Appl. Soft Comput. 2014, 19, 177–187.
26. Seyyedabbasi, A.; Kiani, F. Sand Cat swarm optimization: A nature-inspired algorithm to solve global optimization problems. Eng. Comput. 2022, 1–25.
27. Li, Y.; Wang, G. Sand Cat Swarm Optimization Based on Stochastic Variation With Elite Collaboration. IEEE Access 2022, 10, 89989–90003.
28. Jovanovic, D.; Marjanovic, M.; Antonijevic, M.; Zivkovic, M.; Budimirovic, N.; Bacanin, N. Feature Selection by Improved Sand Cat Swarm Optimizer for Intrusion Detection. In Proceedings of the 2022 International Conference on Artificial Intelligence in Everything (AIE), Lefkosa, Cyprus, 2–4 August 2022; pp. 685–690.
29. Roman, R.C.; Precup, R.E.; Petriu, E.M. Hybrid data-driven fuzzy active disturbance rejection control for tower crane systems. Eur. J. Control 2021, 58, 373–387.
30. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
31. Chi, R.; Li, H.; Shen, D.; Hou, Z.; Huang, B. Enhanced P-type Control: Indirect Adaptive Learning from Set-point Updates. IEEE Trans. Autom. Control 2022.
32. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Faris, H. MTDE: An effective multi-trial vector-based differential evolution algorithm and its applications for engineering design problems. Appl. Soft Comput. 2020, 97, 106761.
33. Hussien, A.G. An enhanced opposition-based salp swarm algorithm for global optimization and engineering problems. J. Ambient. Intell. Humaniz. Comput. 2022, 13, 129–150.
34. Wang, Y.; Cai, Z.; Zhang, Q. Differential evolution with composite trial vector generation strategies and control parameters. IEEE Trans. Evol. Comput. 2011, 15, 55–66.
35. Zhang, H.; Wang, Z.; Chen, W.; Heidari, A.A.; Wang, M.; Zhao, X.; Liang, G.; Chen, H.; Zhang, X. Ensemble mutation-driven salp swarm algorithm with restart mechanism: Framework and fundamental analysis. Expert Syst. Appl. 2021, 165, 113897.
36. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An improved grey wolf optimizer for solving engineering problems. Expert Syst. Appl. 2021, 166, 113917.
37. Zheng, R.; Jia, H.; Abualigah, L.; Wang, S.; Wu, D. An improved remora optimization algorithm with autonomous foraging mechanism for global optimization problems. Math. Biosci. Eng. 2022, 19, 3994–4037.
38. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Ewees, A.A.; Abualigah, L.; Abd Elaziz, M. MTV-MFO: Multi-Trial Vector-Based Moth-Flame Optimization Algorithm. Symmetry 2021, 13, 2388.
39. Liu, Q.; Li, N.; Jia, H.; Qi, Q.; Abualigah, L. Modified remora optimization algorithm for global optimization and multilevel thresholding image segmentation. Mathematics 2022, 10, 1014.
40. Abualigah, L.; Diabat, A.; Mirjalili, S.; Elaziz, M.A.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609.
41. Alsattar, H.A.; Zaidan, A.A.; Zaidan, B.B. Novel meta-heuristic bald eagle search optimisation algorithm. Artif. Intell. Rev. 2020, 53, 2237–2264.
42. Dhiman, G.; Kaur, A. STOA: A bio-inspired based optimization algorithm for industrial engineering problems. Eng. Appl. Artif. Intell. 2019, 82, 148–174.
43. Rao, H.; Jia, H.; Wu, D.; Wen, C.; Liu, Q.; Abualigah, L. A Modified Group Teaching Optimization Algorithm for Solving Constrained Engineering Optimization Problems. Mathematics 2022, 10, 3765.
44. He, Q.; Wang, L. An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng. Appl. Artif. Intell. 2007, 20, 89–99.
45. He, Q.; Wang, L. A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization. Appl. Math. Comput. 2007, 186, 1407–1422.
46. Gandomi, A.H.; Yang, X.S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 17–35.
47. Laith, A.; Dalia, Y.; Mohamed, A.E.; Ahmed, A.E.; Mohammed, A.A.A.; Amir, H.G. Aquila Optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250.
48. Wang, S.; Hussien, A.G.; Jia, H.; Abualigah, L.; Zheng, R. Enhanced Remora Optimization Algorithm for Solving Constrained Engineering Optimization Problems. Mathematics 2022, 10, 1696.
49. Abualigah, L.; Elaziz, M.A.; Sumari, P.; Zong, W.G.; Gandomi, A.H. Reptile search algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2021, 191, 116158.
50. Babalik, A.; Cinar, A.C.; Kiran, M.S. A modification of tree-seed algorithm using Deb’s rules for constrained optimization. Appl. Soft Comput. 2018, 63, 289–305.
51. Wen, C.; Jia, H.; Wu, D.; Rao, H.; Li, S.; Liu, Q.; Abualigah, L. Modified Remora Optimization Algorithm with Multistrategies for Global Optimization Problem. Mathematics 2022, 10, 3604.
52. Beyer, H.G.; Schwefel, H.P. Evolution strategies–A comprehensive introduction. Nat. Comput. 2002, 1, 3–52.
53. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
54. Hayyolalam, V.; Kazem, A.A.P. Black widow optimization algorithm: A novel meta-heuristic approach for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2020, 87, 103249.
55. Song, M.; Jia, H.; Abualigah, L.; Liu, Q.; Lin, Z.; Wu, D.; Altalhi, M. Modified Harris Hawks Optimization Algorithm with Exploration Factor and Random Walk Strategy. Comput. Intell. Neurosci. 2022, 2022, 23.
56. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm–A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110, 151–166.
57. Sayed, G.I.; Darwish, A.; Hassanien, A.E. A new chaotic multi-verse optimization algorithm for solving engineering optimization problems. J. Exp. Theor. Artif. Intell. 2018, 30, 293–317.
58. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine predators algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377.
59. Long, W.; Jiao, J.; Liang, X.; Cai, S.; Xu, M. A random opposition-based learning grey wolf optimizer. IEEE Access 2019, 7, 113810–113825.
60. Houssein, E.H.; Neggaz, N.; Hosney, M.E.; Mohamed, W.M.; Hassaballah, M. Enhanced Harris hawks optimization with genetic operators for selection chemical descriptors and compounds activities. Neural Comput. Appl. 2021, 33, 13601–13618.
61. Wang, S.; Sun, K.; Zhang, W.; Jia, H. Multilevel thresholding using a modified ant lion optimizer with opposition-based learning for color image segmentation. Math. Biosci. Eng. 2021, 18, 3092–3143.
Figure 1. Classification of meta-heuristic optimization algorithms.
Figure 2. Population initialization diagram.
Figure 3. Location update mechanism of the SCSO algorithm. (a) The position of the sand cat group in t iteration; (b) The position of the sand cat group in t + 1 iteration.
Figure 4. Schematic diagram of the wandering strategy.
Figure 5. Lens opposition-based learning diagram.
Figure 6. Flowchart for the proposed MSCSO algorithm.
Figure 7. Schematic diagram of the 23 standard reference functions.
Figure 8. Convergence curves for the optimization algorithms for standard benchmark functions (F1–F13) with dim = 30.
Figure 9. Convergence curves for the optimization algorithms for standard benchmark functions (F1–F13) with dim = 500.
Figure 10. Convergence curves for the optimization algorithms for standard benchmark functions (F14–F23).
Figure 11. Convergence curve of the benchmark function optimization algorithm on CEC2014.
Figure 12. Box plot obtained from CEC2014.
Figure 13. Model of the pressure vessel design.
Figure 14. Model of the speed reducer design.
Figure 15. Model of the welded beam design.
Figure 16. Model of the tension/compression spring design.
Figure 17. Model of the cantilever beam design.
Figure 18. Model of the multiple disc clutch brake.
Figure 19. The car crashworthiness design.
Table 1. Parameter settings for the comparative algorithms.

| Algorithm | Parameters | Value |
| --- | --- | --- |
| GA | Type | Real coded |
| | Selection | Roulette wheel (proportionate) |
| | Crossover | Whole arithmetic (probability = 0.7) |
| | Mutation | Gaussian (probability = 0.01) |
| STOA | Sa | [0, 2] |
| | b | 1 |
| SCA | α | 2 |
| ROA | C | 0.1 |
| WOA | Coefficient vector A | 1 |
| | Coefficient vector C | [−1, 1] |
| | Helical parameter b | 0.75 |
| | Helical parameter l | [−1, 1] |
| BES | α | [1.5, 2.0] |
| | r | [0, 1] |
| AOA | MOP_Max | 1 |
| | MOP_Min | 0.2 |
| | A | 5 |
| | Mu | 0.499 |
| SCSO | SM | 2 |
| | Roulette wheel selection | [0, 360] |
| MSCSO | C | 0.35 |
| | SM | 2 |
| | β | [0, 2π] |
| | Roulette wheel selection | [0, 360] |
Table 2. Details of the 23 benchmark functions.

| Type | Function | dim | Range | Fmin |
| --- | --- | --- | --- | --- |
| Unimodal benchmark functions | $F_1(x)=\sum_{i=1}^{n} x_i^2$ | 30/500 | [−100, 100] | 0 |
| | $F_2(x)=\sum_{i=1}^{n}\lvert x_i\rvert+\prod_{i=1}^{n}\lvert x_i\rvert$ | 30/500 | [−10, 10] | 0 |
| | $F_3(x)=\sum_{i=1}^{n}\bigl(\sum_{j=1}^{i} x_j\bigr)^2$ | 30/500 | [−100, 100] | 0 |
| | $F_4(x)=\max_i\{\lvert x_i\rvert,\ 1\le i\le n\}$ | 30/500 | [−100, 100] | 0 |
| | $F_5(x)=\sum_{i=1}^{n-1}\bigl[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\bigr]$ | 30/500 | [−30, 30] | 0 |
| | $F_6(x)=\sum_{i=1}^{n}(\lfloor x_i+0.5\rfloor)^2$ | 30/500 | [−100, 100] | 0 |
| | $F_7(x)=\sum_{i=1}^{n} i\,x_i^4+\mathrm{random}[0,1)$ | 30/500 | [−1.28, 1.28] | 0 |
| Multimodal benchmark functions | $F_8(x)=\sum_{i=1}^{n}-x_i\sin\bigl(\sqrt{\lvert x_i\rvert}\bigr)$ | 30/500 | [−500, 500] | −418.9829 × dim |
| | $F_9(x)=\sum_{i=1}^{n}\bigl[x_i^2-10\cos(2\pi x_i)+10\bigr]$ | 30/500 | [−5.12, 5.12] | 0 |
| | $F_{10}(x)=-20\exp\bigl(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}x_i^2}\bigr)-\exp\bigl(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\bigr)+20+e$ | 30/500 | [−32, 32] | 0 |
| | $F_{11}(x)=\tfrac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\bigl(\tfrac{x_i}{\sqrt{i}}\bigr)+1$ | 30/500 | [−600, 600] | 0 |
| | $F_{12}(x)=\tfrac{\pi}{n}\bigl\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\bigl[1+10\sin^2(\pi y_{i+1})\bigr]+(y_n-1)^2\bigr\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, where $y_i=1+\tfrac{x_i+1}{4}$ and $u(x_i,a,k,m)=\begin{cases}k(x_i-a)^m & x_i>a\\ 0 & -a\le x_i\le a\\ k(-x_i-a)^m & x_i<-a\end{cases}$ | 30/500 | [−50, 50] | 0 |
| | $F_{13}(x)=0.1\bigl\{\sin^2(3\pi x_1)+\sum_{i=1}^{n}(x_i-1)^2\bigl[1+\sin^2(3\pi x_i+1)\bigr]+(x_n-1)^2\bigl[1+\sin^2(2\pi x_n)\bigr]\bigr\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | 30/500 | [−50, 50] | 0 |
| Fixed-dimension multimodal benchmark functions | $F_{14}(x)=\bigl(\tfrac{1}{500}+\sum_{j=1}^{25}\tfrac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\bigr)^{-1}$ | 2 | [−65, 65] | 1 |
| | $F_{15}(x)=\sum_{i=1}^{11}\bigl[a_i-\tfrac{x_1(b_i^2+b_i x_2)}{b_i^2+b_i x_3+x_4}\bigr]^2$ | 4 | [−5, 5] | 0.00030 |
| | $F_{16}(x)=4x_1^2-2.1x_1^4+\tfrac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316 |
| | $F_{17}(x)=\bigl(x_2-\tfrac{5.1}{4\pi^2}x_1^2+\tfrac{5}{\pi}x_1-6\bigr)^2+10\bigl(1-\tfrac{1}{8\pi}\bigr)\cos x_1+10$ | 2 | [−5, 5] | 0.398 |
| | $F_{18}(x)=\bigl[1+(x_1+x_2+1)^2(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2)\bigr]\times\bigl[30+(2x_1-3x_2)^2(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2)\bigr]$ | 2 | [−2, 2] | 3 |
| | $F_{19}(x)=-\sum_{i=1}^{4}c_i\exp\bigl(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\bigr)$ | 3 | [−1, 2] | −3.86 |
| | $F_{20}(x)=-\sum_{i=1}^{4}c_i\exp\bigl(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\bigr)$ | 6 | [0, 1] | −3.32 |
| | $F_{21}(x)=-\sum_{i=1}^{5}\bigl[(X-a_i)(X-a_i)^T+c_i\bigr]^{-1}$ | 4 | [0, 10] | −10.1532 |
| | $F_{22}(x)=-\sum_{i=1}^{7}\bigl[(X-a_i)(X-a_i)^T+c_i\bigr]^{-1}$ | 4 | [0, 10] | −10.4028 |
| | $F_{23}(x)=-\sum_{i=1}^{10}\bigl[(X-a_i)(X-a_i)^T+c_i\bigr]^{-1}$ | 4 | [0, 10] | −10.5363 |
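These benchmark definitions are simple to evaluate directly. A minimal Python sketch of three of them, F1 (Sphere), F9 (Rastrigin), and F10 (Ackley), confirming that each attains its listed minimum of 0 at the origin (the function names are ours, not from the paper's code):

```python
import math

def f1_sphere(x):
    # F1: sum of squares; unimodal, global minimum 0 at the origin
    return sum(v * v for v in x)

def f9_rastrigin(x):
    # F9: highly multimodal; global minimum 0 at the origin
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def f10_ackley(x):
    # F10: multimodal with a nearly flat outer region; minimum 0 at the origin
    n = len(x)
    return (-20 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / n))
            - math.exp(sum(math.cos(2 * math.pi * v) for v in x) / n)
            + 20 + math.e)

origin = [0.0] * 30
print(f1_sphere(origin), f9_rastrigin(origin), f10_ackley(origin))
```

Any candidate solution produced by MSCSO or a comparison algorithm can be scored the same way, which is all the statistical tables below require.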
Table 3. Statistical results on the 23 standard benchmark functions (the best results are shown in bold).

F | dim | Metric | MSCSO | SCSO | AOA | BES | WOA | ROA | SCA | STOA | GA
F130min08.42 × 10−1253.07 × 10−16908.8 × 10−9101.46 × 10−24.12 × 10−96.97 × 10−3
mean03.70 × 10−1119.15 × 10−2002.43 × 10−731.43 × 10−3229.486.54 × 10−72.29 × 10−2
std02.01 × 10−1105.01 × 10−1901.01 × 10−72014.41.03 × 10−67.21 × 10−3
500min01.64 × 10−1105.6 × 10−106.15 × 10−8309.67 × 1041.3168.7
mean04.64 × 10−966.49 × 10−109.16 × 10−691.16 × 10−3152 × 1059.4672
std02.54 × 10−954.23 × 10−203.76 × 10−6806.07 × 10410.32.62
F230min03.02 × 10−6504.66 × 10−2299.23 × 10−571.46 × 10−1802.06 × 10−45.75 × 10−73.60 × 10−1
mean01.28 × 10−5804.45 × 10−1591.25 × 10−491.81 × 10−1563.41 × 10−27.79 × 10−64.97 × 10−1
std06.77 × 10−5802.44 × 10−1586.69 × 10−499.86 × 10−1569.25 × 10−28.89 × 10−66.27 × 10−2
500min04.89 × 10−565.74 × 10−103.89 × 10−2201 × 10−545.87 × 10−18031.91.07 × 10−21.33 × 102
mean07.07 × 10−501.44 × 10−32.40 × 10−1601.24 × 10−481.82 × 10−1611.21 × 1021.26 × 10−11.4 × 102
std03.81 × 10−492.22 × 10−31.32 × 10−1596.21 × 10−489.02 × 10−16169.61.04 × 10−12.59
F330min06.38 × 10−1128.44 × 10−11608.72 × 1031.44 × 10−3201.76 × 1032.22 × 10−36.06 × 103
mean02.52 × 10−984.35 × 10−33.67 × 10−234.15 × 1046.51 × 10−2809.64 × 1035.12 × 10−12.28 × 104
std01.31 × 10−979.38 × 10−32.01 × 10−221.32 × 10406.10 × 1032.268.15 × 103
500min01.53 × 10−9815.501.23 × 1074.22 × 10−2974.34 × 1062.31 × 1054.66 × 105
mean06.14 × 10−8234.415.12.93 × 1071.51 × 10−2606.85 × 1065.39 × 1057.25 × 105
std03.36 × 10−8117.882.61.26 × 10701.53 × 1061.93 × 1051.28 × 105
F430min01.99 × 10−541.31 × 10−488.59 × 10−2324.542.98 × 10−17620.91.39 × 10−22.23 × 10−1
mean05.92 × 10−492.53 × 10−21.96 × 10−15851.68.49 × 10−15732.15.94 × 10−22.91 × 10−1
std03.14 × 10−482.01 × 10−21.07 × 10−15728.74.31 × 10−15612.26.97 × 10−24.6 × 10−2
500min05.55 × 10−511.64 × 10−16.92 × 10−2275.578.10 × 10−17798.597.49.44 × 10−1
mean03.65 × 10−441.78 × 10−14.41 × 10−15373.74.30 × 10−1569998.69.69 × 10−1
std01.84 × 10−431.52 × 10−22.41 × 10−15228.92.33 × 10−1552.8 × 10−16 × 10−11.17 × 10−2
F530min24.526.227.75.99 × 10−127.226.188.527.317
mean27.127.928.425.227.9272.62 × 10428.167.5
std1.379.07 × 10−13.21 × 10−18.945.15 × 10−15.78 × 10−14.94 × 1044.74 × 10−130.6
500min4.96 × 1024.98 × 1024.99 × 1021.014.96 × 1024.94 × 1021.02 × 1092.63 × 1034.87 × 103
mean4.97 × 1024.98 × 1024.99 × 1024.66 × 1024.96 × 1024.95 × 1022.01 × 1091.56 × 1045.14 × 103
std4.8 × 10−11.95 × 10−16.6 × 10−21.14 × 1024.84 × 10−12.98 × 10−14.52 × 1081.63 × 1041.46 × 102
F630min9.9 × 10−61.132.711.61 × 10−31.19 × 10−12.86 × 10−25.142.027.75
mean7.21 × 10−12.093.192.334.95 × 10−11.34 × 10−123.52.698.07
std3.38 × 10−16.99 × 10−13.4 × 10−13.443.07 × 10−11.17 × 10−164.55.22 × 10−11.18 × 10−1
500min60.6991.14 × 1025.72 × 10−516.76.079.69 × 1041.14 × 1023.31 × 102
mean85.71.06 × 1021.16 × 1022232.116.22.25 × 1051.23 × 1023.42 × 102
std7.373.211.1146.910.36.155.95 × 1048.44.69
F730min1.11 × 10−63.21 × 10−66.35 × 10−69.28 × 10−44.29 × 10−55.79 × 10−61.16 × 10−22.03 × 10−37.19 × 10−2
mean6.98 × 10−51.95 × 10−48.25 × 10−57.35 × 10−33.88 × 10−31.44 × 10−41.11 × 10−16.07 × 10−31.73 × 10−1
std7.4 × 10−52.66 × 10−47.76 × 10−54.37 × 10−33.49 × 10−31.42 × 10−41.17 × 10−12.82 × 10−35.63 × 10−2
500min9.26 × 10−73.18 × 10−65.76 × 10−61.59 × 10−32.45 × 10−41.1 × 10−58.79 × 1032.15 × 10−13.89 × 103
mean6.66 × 10−53.09 × 10−41.04 × 10−45.15 × 10−34.1 × 10−31.73 × 10−41.52 × 1044.62 × 10−14.46 × 103
std5.99 × 10−54.23 × 10−41.09 × 10−43.19 × 10−34.75 × 10−31.7 × 10−43.81 × 1032.53 × 10−12.77 × 102
F830min−9.12 × 103−7.99 × 103−6.03 × 103−1.25 × 104−1.26 × 104−1.26 × 104−4.49 × 103−6.35 × 103−5.71 × 103
mean−7.99 × 103−6.56 × 103−5.25 × 103−9.66 × 103−1 × 104−1.23 × 104−3.79 × 103−5.37 × 103−4.66 × 103
std5.05 × 1029.09 × 1024.7 × 1022.05 × 1031.76 × 1034.17 × 1022.65 × 1025.59 × 1026.4 × 102
500min−8.43 × 104−6.91 × 104−2.6 × 104−2.05 × 105−2.09 × 105−2.09 × 105−1.84 × 104−3.09 × 104−3.67 × 104
mean−7.36 × 104−5.93 × 104−2.3 × 104−1.6 × 105−1.74 × 105−2.07 × 105−1.58 × 104−2.53 × 104−3.31 × 104
std4.2 × 1036.73 × 1031.5 × 1032.65 × 1042.64 × 1046.23 × 1031.71 × 1033.86 × 1031.36 × 103
F930min0000001.05 × 10−23 × 10−89.71 × 10−1
mean00001.89 × 10−15040.610.22.6
std00001.04 × 10−14034.114.57.95 × 10−1
500min0000004.53 × 1021.35 × 10−22.25 × 103
mean005.45 × 10−606.06 × 10−1401.29 × 10325.92.39 × 103
std007.17 × 10−602.31 × 10−1305.38 × 10230.158.1
F1030min8.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−162.65 × 10−2208.76 × 10−2
mean8.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−165.15 × 10−158.88 × 10−1614.1201.36 × 10−1
std00002.36 × 10−1508.691.6 × 10−33.13 × 10−2
500min8.88 × 10−168.88 × 10−167.33 × 10−38.88 × 10−168.88 × 10−168.88 × 10−1610.2202.86
mean8.88 × 10−168.88 × 10−168.08 × 10−38.88 × 10−164.91 × 10−158.88 × 10−1618.9202.9
std003.38 × 10−402.59 × 10−1503.674.72 × 10−52.74 × 10−2
F1130min003.81 × 10−20003.69 × 10−13.47 × 10−84.14 × 10−4
mean002.6 × 10−101.49 × 10−209.24 × 10−13.32 × 10−22.11 × 10−2
std001.67 × 10−104.6 × 10−203.27 × 10−14.76 × 10−21.07 × 10−1
500min006.39 × 1030008.3 × 1021.61 × 10−12.18 × 10−1
mean001 × 1040001.65 × 1037.19 × 10−12.6 × 10−1
std002.67 × 1030007.49 × 1023.62 × 10−19.74 × 10−2
F1230min3.26 × 10−64.27 × 10−24.35 × 10−16.17 × 10−56.98 × 10−31.92 × 10−31.928.13 × 10−21.61
mean2.46 × 10−21.19 × 10−15.21 × 10−11.52 × 10−12.52 × 10−21.02 × 10−24.06 × 1052.8 × 10−11.73
std1.98 × 10−26.52 × 10−25.16 × 10−23.83 × 10−12.24 × 10−29.82 × 10−31.64 × 1061.5 × 10−13.77 × 10−2
500min3.4 × 10−16.38 × 10−11.063.69 × 10−64.62 × 10−28.46 × 10−34.91 × 1092.012.74
mean5.03 × 10−17.74 × 10−11.081.63 × 10−11.01 × 10−14.4 × 10−25.79 × 1094.892.81
std6.85 × 10−26.54 × 10−28.88 × 10−34.16 × 10−14.49 × 10−22.8 × 10−21.26 × 1093.073.7 × 10−2
F1330min2.02 × 10−12.032.589 × 10−52.2 × 10−12.39 × 10−23.11.631.73 × 10−3
mean1.522.382.831.425.16 × 10−12.12 × 10−11.01 × 1051.944.96 × 10−3
std7.43 × 10−14.55 × 10−11.19 × 10−11.492.23 × 10−11.36 × 10−13.66 × 1052.23 × 10−13.54 × 10−3
500min48.949.7150.11.06 × 10−39.851.846.27 × 1091.05 × 10210.2
mean49.449.850.214.118.67.459.85 × 1091.74 × 10211
std1.7 × 10−17.01 × 10−23.82 × 10−221.85.863.891.83 × 10976.33.72 × 10−1
F142min9.98 × 10−19.98 × 10−11.999.98 × 10−19.98 × 10−19.98 × 10−19.98 × 10−19.98 × 10−11
mean4.135.769.133.262.94.451.461.989.68
std3.994.364.041.453.084.78.53 × 10−11.913.61
F154min3.07 × 10−43.07 × 10−43.8 × 10−45.61 × 10−43.09 × 10−43.09 × 10−46.11 × 10−43.2 × 10−44.07 × 10−4
mean3.42 × 10−44.39 × 10−41.62 × 10−26.14 × 10−37.34 × 10−44.81 × 10−41.11 × 10−32.31 × 10−31.78 × 10−2
std1.68 × 10−43.2 × 10−42.63 × 10−27.26 × 10−35 × 10−42.47 × 10−43.61 × 10−44.92 × 10−32.5 × 10−2
F162min−1.03−1.03−1.03−1.03−1.03−1.03−1.03−1.03−1.03
mean−1.03−1.03−1.03−9.97 × 10−1−1.03−1.03−1.03−1.03−1
std7.84 × 10−129.12 × 10−101.57 × 10−71.65 × 10−11.72 × 10−97.67 × 10−84.86 × 10−52.26 × 10−61.43 × 10−2
F172min3.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.99 × 10−1
mean3.98 × 10−13.98 × 10−13.98 × 10−15.31 × 10−13.98 × 10−13.98 × 10−13.99 × 10−13.98 × 10−11.15
std2.83 × 10−102.78 × 10−88.39 × 10−82.11 × 10−11.08 × 10−57.56 × 10−61.62 × 10−39.87 × 10−56.38 × 10−1
F185min3333.0133333.1
mean338.46.41333324.2
std2.59 × 10−71.1 × 10−51110.41.07 × 10−41.25 × 10−42.05 × 10−42.27 × 10−421.3
F193min−3.86−3.86−3.86−3.85−3.86−3.86−3.86−3.86−3.86
mean−3.86−3.86−3.85−3.7−3.86−3.86−3.85−3.86−3.71
std1.82 × 10−84.3 × 10−34.49 × 10−32.17 × 10−14.36 × 10−32.5 × 10−36.17 × 10−37.68 × 10−33.15 × 10−1
F206min−3.32−3.32−3.16−3.25−3.32−3.32−3.12−3.13−3.32
mean−3.29−3.2−3.05−2.87−3.25−3.24−2.77−2.93−3.28
std5.54 × 10−21.47 × 10−19.24 × 10−22.62 × 10−19.14 × 10−29.84 × 10−25.08 × 10−14.13 × 10−15.8 × 10−2
F214min−10.2−10.2−6.91−10.2−10.2−10.2−4.8−10.1−5.05
mean−10.2−4.9−3.78−6.15−7.52−10.1−1.92−3.5−1.1
std2.08 × 10−61.941.42.662.922.75 × 10−21.563.91.11
F224min−10.4−10.4−6.87−10.3−10.4−10.4−5.72−10.3−5.08
mean−10.4−6.56−3.43−6.16−7.11−10.4−3.43−5.88−1.23
std4.87 × 10−62.61.412.2233.04 × 10−21.774.438.7 × 10−1
F234min−10.5−10.5−7.27−10.5−10.5−10.5−5.15−10.5−5.13
mean−10.5−7.11−4−6.4−6.69−10.5−3.65−8.08−1.66
std1.97 × 10−62.951.773.163.32.27 × 10−22.023.961.09
Table 4. Experimental results of the Wilcoxon rank-sum test on the 23 standard benchmark functions (the best results are shown in bold).

F | dim | SCSO vs. MSCSO | AOA vs. MSCSO | BES vs. MSCSO | WOA vs. MSCSO | ROA vs. MSCSO | SCA vs. MSCSO | STOA vs. MSCSO | GA vs. MSCSO
F1301.73 × 10−61.73 × 10−611.73 × 10−62.5 × 10−11.73 × 10−61.73 × 10−61.73 × 10−6
5001.73 × 10−61.73 × 10−611.73 × 10−63.13 × 10−21.73 × 10−61.73 × 10−61.73 × 10−6
F2301.73 × 10−611.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
5001.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F3301.73 × 10−61.73 × 10−611.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
5001.73 × 10−61.73 × 10−61.25 × 10−11.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F4301.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
5001.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F5304.49 × 10−24.07 × 10−52.96 × 10−31.29 × 10−39.1 × 10−11.73 × 10−62.6 × 10−58.47 × 10−6
5002.35 × 10−61.73 × 10−68.73 × 10−31.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F6304.29 × 10−61.73 × 10−61.66 × 10−27.51 × 10−51.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
5001.73 × 10−61.73 × 10−67.69 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F7303.85 × 10−39.26 × 10−11.73 × 10−62.35 × 10−61.06 × 10−11.73 × 10−61.73 × 10−61.73 × 10−6
5003.5 × 10−27.81 × 10−11.73 × 10−61.73 × 10−64.11 × 10−31.73 × 10−61.73 × 10−61.73 × 10−6
F8301.02 × 10−51.73 × 10−64.73 × 10−61.92 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
5004.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F930111111.73 × 10−61.73 × 10−61.73 × 10−6
50014.38 × 10−41111.73 × 10−61.73 × 10−61.73 × 10−6
F10301118.19 × 10−611.73 × 10−61.73 × 10−61.73 × 10−6
50011.73 × 10−65 × 10−11.87 × 10−611.73 × 10−61.73 × 10−61.73 × 10−6
F113011.73 × 10−615 × 10−111.73 × 10−61.73 × 10−61.73 × 10−6
50011.73 × 10−61111.73 × 10−61.73 × 10−61.73 × 10−6
F12301.73 × 10−61.73 × 10−62.99 × 10−18.97 × 10−25.29 × 10−41.73 × 10−61.73 × 10−61.73 × 10−6
5002.13 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F13304.2 × 10−41.73 × 10−62.43 × 10−25.75 × 10−61.73 × 10−61.73 × 10−67.66 × 10−11.73 × 10−6
5001.73 × 10−61.73 × 10−61.24 × 10−51.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F1428.61 × 10−11.36 × 10−59.92 × 10−13.82 × 10−19.1 × 10−15.98 × 10−21.53 × 10−11.64 × 10−5
F1544.86 × 10−51.73 × 10−61.73 × 10−61.73 × 10−61.36 × 10−51.73 × 10−61.73 × 10−61.73 × 10−6
F1621.92 × 10−61.73 × 10−61.73 × 10−63.72 × 10−51.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F1723.88 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F1851.92 × 10−65.17 × 10−11.73 × 10−61.24 × 10−69.32 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F1961.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F2036.04 × 10−31.73 × 10−61.73 × 10−69.84 × 10−38.22 × 10−31.73 × 10−64.29 × 10−64.07 × 10−2
F2141.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F2241.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F2341.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
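The p-values in Tables 4 and 7 come from pairwise Wilcoxon rank-sum tests at the usual 5% level: values below 0.05 indicate a statistically significant difference between MSCSO and the compared algorithm over the 30 independent runs. A minimal, tie-free sketch of the test using the normal approximation (in practice a library routine such as SciPy's `ranksums` would be used; this stripped-down version is ours):

```python
import math

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum test (normal approximation, no ties)."""
    n1, n2 = len(x), len(y)
    pooled = sorted((v, i < n1) for i, v in enumerate(list(x) + list(y)))
    # Sum of 1-based ranks belonging to the first sample
    w = sum(rank for rank, (_, from_x) in enumerate(pooled, start=1) if from_x)
    mean = n1 * (n1 + n2 + 1) / 2
    var = n1 * n2 * (n1 + n2 + 1) / 12
    z = (w - mean) / math.sqrt(var)
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Clearly separated samples give a small p; interleaved samples a large one
print(rank_sum_p(range(1, 11), range(11, 21)))
print(rank_sum_p([1, 3, 5], [2, 4, 6]))
```

Applied to two algorithms' 30 best fitness values per function, this is exactly the significance check the tables summarize.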
Table 5. Details of the 30 CEC2014 benchmark functions.

| Name | No. | Function | Fmin |
| --- | --- | --- | --- |
| Unimodal Functions | CEC 1 | Rotated High Conditioned Elliptic Function | 100 |
| | CEC 2 | Rotated Bent Cigar Function | 200 |
| | CEC 3 | Rotated Discus Function | 300 |
| Simple Multimodal Functions | CEC 4 | Shifted and Rotated Rosenbrock’s Function | 400 |
| | CEC 5 | Shifted and Rotated Ackley’s Function | 500 |
| | CEC 6 | Shifted and Rotated Weierstrass Function | 600 |
| | CEC 7 | Shifted and Rotated Griewank’s Function | 700 |
| | CEC 8 | Shifted Rastrigin’s Function | 800 |
| | CEC 9 | Shifted and Rotated Rastrigin’s Function | 900 |
| | CEC 10 | Shifted Schwefel’s Function | 1000 |
| | CEC 11 | Shifted and Rotated Schwefel’s Function | 1100 |
| | CEC 12 | Shifted and Rotated Katsuura Function | 1200 |
| | CEC 13 | Shifted and Rotated HappyCat Function | 1300 |
| | CEC 14 | Shifted and Rotated HGBat Function | 1400 |
| | CEC 15 | Shifted and Rotated Expanded Griewank’s plus Rosenbrock’s Function | 1500 |
| | CEC 16 | Shifted and Rotated Expanded Scaffer’s F6 Function | 1600 |
| Hybrid Functions | CEC 17 | Hybrid Function 1 (N = 3) | 1700 |
| | CEC 18 | Hybrid Function 2 (N = 3) | 1800 |
| | CEC 19 | Hybrid Function 3 (N = 4) | 1900 |
| | CEC 20 | Hybrid Function 4 (N = 4) | 2000 |
| | CEC 21 | Hybrid Function 5 (N = 5) | 2100 |
| | CEC 22 | Hybrid Function 6 (N = 5) | 2200 |
| Composition Functions | CEC 23 | Composition Function 1 (N = 5) | 2300 |
| | CEC 24 | Composition Function 2 (N = 3) | 2400 |
| | CEC 25 | Composition Function 3 (N = 3) | 2500 |
| | CEC 26 | Composition Function 4 (N = 5) | 2600 |
| | CEC 27 | Composition Function 5 (N = 5) | 2700 |
| | CEC 28 | Composition Function 6 (N = 5) | 2800 |
| | CEC 29 | Composition Function 7 (N = 3) | 2900 |
| | CEC 30 | Composition Function 8 (N = 3) | 3000 |

Search range: [−100, 100]^dim
Table 6. Statistical results of the optimization algorithms on the CEC2014 benchmark functions.

CEC | Metric | MSCSO | SCSO | AOA | BES | WOA | ROA | SCA | STOA | GA
CEC 1min1.1 × 1053.11 × 1055.86 × 1061.49 × 1071.57 × 1068.09 × 1053.25 × 1066.71 × 1054.74 × 106
mean4.85 × 1068.49 × 1067.06 × 1079.19 × 1071.33 × 1071.84 × 1071.18 × 1075.27 × 1067.96 × 107
std4.17 × 1065.21 × 1067.92 × 1073.95 × 1079.26 × 1061.3 × 1074.46 × 1064.54 × 1067.05 × 107
CEC 2min3.42 × 1024.8 × 1032.91 × 1093.55 × 1081.09 × 1061.03 × 1075.93 × 1081.89 × 1061.94 × 109
mean1.98 × 1079.73 × 1076.65 × 1092.5 × 1093.72 × 1078.66 × 1081.04 × 1094.73 × 1084.25 × 109
std6.8 × 1073.3 × 1082.1 × 1091.69 × 1095.70 × 1078.71 × 1083.64 × 1084.5 × 1081.37 × 109
CEC 3min6.28 × 1021.58 × 1031.08 × 1041.32 × 1041.32 × 1042.14 × 1032.09 × 1032.66 × 1038.27 × 103
mean3.91 × 1036.51 × 1031.82 × 1046.91 × 1045.52 × 1047.73 × 1031.15 × 1041.41 × 1043.91 × 105
std3.26 × 1033.43 × 1034.6 × 1037.61 × 1042.82 × 1043.64 × 1038.21 × 1038.94 × 1036.95 × 105
CEC 4min4 × 1024.02 × 1025.4 × 1024.96 × 1024.05 × 1024.2 × 1024.47 × 1024.19 × 1025.19 × 102
mean4.3 × 1024.42 × 1021.7 × 1039.18 × 1024.65 × 1024.79 × 1024.9 × 1024.54 × 1021.02 × 103
std32.725.57.75 × 1023.03 × 10244.455.532.832.44.24 × 102
CEC 5min5.2 × 1025.2 × 1025.2 × 1025.2 × 1025.20 × 1025.2 × 1025.2 × 1025.2 × 1025.2 × 102
mean5.2 × 1025.2 × 1025.2 × 1025.2 × 1025.20 × 1025.2 × 1025.2 × 1025.2 × 1025.2 × 102
std6.61 × 1021.11 × 10−15.24 × 1021.32 × 10−11.44 × 10−11.25 × 10−17.51 × 10−29.08 × 10−22.36 × 10−1
CEC 6min6.01 × 1026.04 × 1026.08 × 1026.05 × 1026.05 × 1026.04 × 1026.05 × 1026.04 × 1026.07 × 102
mean6.05 × 1026.06 × 1026.1 × 1026.09 × 1026.09 × 1026.07 × 1026.08 × 1026.08 × 1026.09 × 102
std1.91.549.85 × 1011.891.831.61.251.471.3
CEC 7min7 × 1027 × 1027.43 × 1027.19 × 1027.01 × 1027.01 × 1027.08 × 1027.01 × 1027.35 × 102
mean7.01 × 1027.02 × 1028.44 × 1027.57 × 1027.02 × 1027.05 × 1027.14 × 1027.05 × 1027.79 × 102
std6.16 × 10−12.451.832.95.19 × 1015.963.354.231.6
CEC 8min8.03 × 1028.09 × 1028.24 × 1028.38 × 1028.13 × 1028.16 × 1028.29 × 1028.12 × 1028.57 × 102
mean8.18 × 1028.34 × 1028.52 × 1028.69 × 1028.48 × 1028.39 × 1028.48 × 1028.26 × 1028.79 × 102
std7.8712.414.316.619.311.77.89.6714.9
CEC 9min9.16 × 1029.14 × 1029.24 × 1029.48 × 1029.2 × 1029.15 × 1029.37 × 1029.12 × 1029.47 × 102
mean9.34 × 1029.37 × 1029.45 × 1029.66 × 1029.52 × 1029.44 × 1029.49 × 1029.32 × 1029.74 × 102
std11.89.099.2612.421118.2910.113.5
CEC 10min1.04 × 1031.47 × 1031.14 × 1031.75 × 1031.07 × 1031.11 × 1031.77 × 1031.46 × 1031.44 × 103
mean1.22 × 1031.79 × 1031.74 × 1032.3 × 1031.72 × 1031.7 × 1032.16 × 1031.83 × 1031.97 × 103
std1.89 × 1021.85 × 1022.34 × 1022.75 × 1022.49 × 1022.64 × 1021.87 × 1022.21 × 1022.59 × 102
CEC 11min1.15 × 1031.64 × 1031.65 × 1032.32 × 1031.94 × 1031.75 × 1032.22 × 1031.78 × 1032.15 × 103
mean1.84 × 1032.04 × 1032.08 × 1032.77 × 1032.25 × 1032.18 × 1032.58 × 1032.26 × 1032.89 × 103
std2.95 × 1023.18 × 1023.49 × 1022.47 × 1023.45 × 1023.57 × 1022.25 × 1023.52 × 1023.49 × 102
CEC 12min1.2 × 1031.2 × 1031.2 × 1031.2 × 1031.2 × 1031.2 × 1031.2 × 1031.2 × 1031.2 × 103
mean1.2 × 1031.2 × 1031.2 × 1031.2 × 1031.2 × 1031.2 × 1031.2 × 1031.2 × 1031.2 × 103
std1.5 × 1013 × 10−12.71 × 10−13.7 × 10−14.86 × 10−13.37 × 10−13.12 × 10−13.86 × 10−16.82 × 10−1
CEC 13min1.3 × 1031.3 × 1031.3 × 1031.3 × 1031.3 × 1031.3 × 1031.3 × 1031.3 × 1031.3 × 103
mean1.3 × 1031.3 × 1031.3 × 1031.3 × 1031.3 × 1031.3 × 1031.3 × 1031.3 × 1031.3 × 103
std1.29 × 10−13.81 × 10−11.161.232.23 × 10−17.01 × 10−11.23 × 1012.19 × 10−19.89 × 10−1
CEC 14min1.4 × 1031.4 × 1031.41 × 1031.41 × 1031.4 × 1031.4 × 1031.4 × 1031.4 × 1031.4 × 103
mean1.4 × 1031.4 × 1031.43 × 1031.42 × 1031.4 × 1031.4 × 1031.4 × 1031.4 × 1031.41 × 103
std2.31 × 1011.1111.19.563.18 × 10−155.64 × 10−11.056.66
CEC 15min1.5 × 1031.5 × 1031.62 × 1031.54 × 1031.5 × 1031.5 × 1031.51 × 1031.5 × 1031.52 × 103
mean1.5 × 1031.52 × 1035.06 × 1033.09 × 1031.51 × 1031.68 × 1031.52 × 1031.52 × 1034.88 × 103
std1.289.15.61 × 1033.39 × 1039.826.46 × 10254.4945.41 × 103
CEC 16min1.6 × 1031.6 × 1031.6 × 1031.6 × 1031.6 × 1031.6 × 1031.6 × 1031.6 × 1031.6 × 103
mean1.6 × 1031.6 × 1031.6 × 1031.6 × 1031.6 × 1031.6 × 1031.6 × 1031.6 × 1031.6 × 103
std3.67 × 10−14.48 × 10−13.03 × 10−13.16 × 10−14.77 × 10−13.7 × 10−12.66 × 1013.87 × 10−12.91 × 10−1
CEC 17min1.95 × 1033.16 × 1036.99 × 1042.27 × 1041.05 × 1043.88 × 1031.87 × 1047.62 × 1035.18 × 105
mean6.66 × 1034.9 × 1045.22 × 1059.41 × 1053.7 × 1051.24 × 1058.44 × 1041.57 × 1058.93 × 106
std3.29 × 1031.34 × 1053.95 × 1051.63 × 1066.73 × 1051.89 × 1051.41 × 1052.04 × 1051.64 × 107
CEC 18min1.88 × 1033 × 1032.6 × 1038.21 × 1032.58 × 1032.98 × 1031.1 × 1043.04 × 1031.45 × 104
mean1 × 1041.52 × 1041.4 × 1041.6 × 1061.67 × 1041.22 × 1046.31 × 1041.92 × 1043.29 × 107
std5.84 × 1039.62 × 1039.68 × 1035.46 × 1061.42 × 1049.04 × 1039.88 × 1041.62 × 1044.42 × 107
CEC 19min1.9 × 1031.9 × 1031.91 × 1031.91 × 1031.9 × 1031.9 × 1031.91 × 1031.9 × 1031.91 × 103
mean1.9 × 1031.9 × 1031.94 × 1031.91 × 1031.91 × 1031.91 × 10+1.91 × 1031.9 × 1031.93 × 103
std6.93 × 1011.4128.610.42.2312.71.351.2422.8
CEC 20min2.04 × 1032.67 × 1035.66 × 1034.3 × 1032.57 × 1032.27 × 1032.9 × 1032.55 × 1037.6 × 103
mean5.83 × 1038.12 × 1031.38 × 1041.12 × 1051.5 × 1041.04 × 1049.66 × 1031.4 × 1041.53 × 107
std3.17 × 1034.05 × 1031.03 × 1044.47 × 1051.25 × 1045.03 × 1037.32 × 1039.73 × 1032.34 × 107
CEC 21min2.29 × 1033.19 × 1037.03 × 1034.92 × 1031.25 × 1043.16 × 1037. × 1033.64 × 1037.7 × 104
mean7.85 × 1031.08 × 1041.57 × 1063.77 × 1051.05 × 1065.54 × 1052.05 × 1041.42 × 1043.13 × 106
std4.57 × 1036.58 × 1032.3 × 1068.68 × 1053.05 × 1062.97 × 1061.12 × 1041.05 × 1043.71 × 106
CEC 22min2.22 × 1032.24 × 1032.28 × 1032.26 × 1032.23 × 1032.23 × 1032.26 × 1032.24 × 1032.32 × 103
mean2.25 × 1032.32 × 1032.42 × 1032.42 × 1032.33 × 1032.31 × 1032.3 × 1032.29 × 1032.67 × 103
std47.966.51.12 × 1021.11 × 10297.179.74362.41.74 × 102
CEC 23min2.5 × 1032.5 × 1032.5 × 1032.5 × 1032.5 × 1032.5 × 1032.64 × 1032.63 × 1032.5 × 103
mean2.5 × 1032.5 × 1032.5 × 1032.6 × 1032.64 × 1032.5 × 1032.65 × 1032.65 × 1032.7 × 103
std0001.03 × 10228.709.0810.41.12 × 102
CEC 24min2.52 × 1032.56 × 1032.55 × 1032.57 × 1032.54 × 1032.56 × 1032.55 × 1032.53 × 1032.56 × 103
mean2.59 × 1032.6 × 1032.59 × 1032.59 × 1032.59 × 1032.6 × 1032.56 × 1032.55 × 1032.6 × 103
std23.97.0920.314.3287.110.120.617.8
CEC 25min2.64 × 1032.7 × 1032.7 × 1032.69 × 1032.69 × 1032.7 × 1032.69 × 1032.7 × 1032.69 × 103
mean2.7 × 1032.7 × 1032.7 × 1032.7 × 1032.7 × 1032.7 × 1032.7 × 1032.7 × 1032.71 × 103
std001.897.135.958.378.81.344.6
CEC 26min2.7 × 1032.7 × 1032.7 × 1032.7 × 1032.7 × 1032.7 × 1032.7 × 1032.7 × 1032.7 × 103
mean2.7 × 1032.7 × 1032.72 × 1032.7 × 1032.7 × 1032.7 × 1032.7 × 1032.7 × 1032.71 × 103
std1.01 × 1013.82 × 10−129.51.3918.218.12.12 × 10−11.34 × 10−125
CEC 27min2.7 × 1032.71 × 1032.9 × 1032.86 × 1033.1 × 1032.9 × 1032.73 × 1033.1 × 1032.75 × 103
mean2.89 × 1032.89 × 1032.92 × 1033.14 × 1033.14 × 1032.9 × 1033.02 × 1033.17 × 1033.2 × 103
std35.135.489.21.76 × 1021.37 × 10201.63 × 10265.11.47 × 102
CEC 28min3 × 1033 × 1033 × 1033 × 1033.23 × 1033 × 1033.24 × 1033.17 × 1033.54 × 103
mean3 × 1033 × 1033.06 × 1033.34 × 1033.45 × 1033 × 1033.3 × 1033.19 × 1033.88 × 103
std002.35 × 1022.35 × 1021.87 × 102072.7122.2 × 102
CEC 29min3.1 × 1033.1 × 1033.1 × 1035.19 × 1033.46 × 1033.36 × 1034.47 × 1033.65 × 1035.62 × 103
mean3.62 × 1032.03 × 1052.29 × 1061.06 × 1064.86 × 1052.42 × 1052.46 × 1046.52 × 1038.91 × 106
std3.94 × 1026.07 × 1057.64 × 1061.65 × 1061.16 × 1066.18 × 1052.69 × 1044.52 × 1031.42 × 107
CEC 30min3.2 × 1033.94 × 1033.2 × 1035.33 × 1034.19 × 1033.94 × 1034.41 × 1033.72 × 1031.01 × 104
mean4.21 × 1035.11 × 1035.72 × 1043.49 × 1047.96 × 1035.21 × 1035.59 × 1034.32 × 1036.06 × 104
std4.93 × 1029.05 × 1029.2 × 1048.4 × 1047.98 × 1031.28 × 1031.25 × 1036.61 × 1028.71 × 104
Table 7. Experimental results of the Wilcoxon rank-sum test on the CEC2014 benchmark functions.

CEC | SCSO vs. MSCSO | AOA vs. MSCSO | BES vs. MSCSO | WOA vs. MSCSO | ROA vs. MSCSO | SCA vs. MSCSO | STOA vs. MSCSO | GA vs. MSCSO
CEC 12.84 × 10−51.73 × 10−61.73 × 10−67.51 × 10−51.8 × 10−56.89 × 10−54.28 × 10−21.73 × 10−6
CEC 29.63 × 10−41.73 × 10−61.92 × 10−61.25 × 10−41.73 × 10−62.6 × 10−62.6 × 10−51.92 × 10−6
CEC 34.45 × 10−51.73 × 10−61.73 × 10−61.73 × 10−62.16 × 10−52.35 × 10−62.88 × 10−61.73 × 10−6
CEC 48.61 × 10−11.73 × 10−61.73 × 10−64.2 × 10−46.34 × 10−62.6 × 10−61.49 × 10−51.73 × 10−6
CEC 51.71 × 10−35.22 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.92 × 10−6
CEC 61.66 × 10−21.92 × 10−61.73 × 10−64.29 × 10−69.71 × 10−56.34 × 10−61.13 × 10−51.73 × 10−6
CEC 78.19 × 10−51.73 × 10−61.73 × 10−61.13 × 10−52.6 × 10−61.73 × 10−61.92 × 10−61.73 × 10−6
CEC 89.32 × 10−61.73 × 10−61.73 × 10−61.73 × 10−62.13 × 10−61.73 × 10−62.41 × 10−41.73 × 10−6
CEC 92.18 × 10−22.58 × 10−31.73 × 10−69.63 × 10−47.71 × 10−42.35 × 10−69.27 × 10−31.92 × 10−6
CEC 102.35 × 10−61.92 × 10−61.73 × 10−63.18 × 10−63.88 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
CEC 114.49 × 10−21.6 × 10−43.18 × 10−62.37 × 10−59.71 × 10−51.73 × 10−62.84 × 10−51.73 × 10−6
CEC 123.88 × 10−41.73 × 10−61.92 × 10−61.73 × 10−61.92 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
CEC 133.87 × 10−21.73 × 10−61.92 × 10−65.45 × 10−23.68 × 10−21.64 × 10−58.31 × 10−41.73 × 10−6
CEC 149.59 × 10−11.73 × 10−61.73 × 10−62.7 × 10−22.07 × 10−25.22 × 10−61.53 × 10−11.73 × 10−6
CEC 151.04 × 10−31.73 × 10−61.73 × 10−65.22 × 10−65.75 × 10−61.73 × 10−67.51 × 10−51.73 × 10−6
CEC 162.58 × 10−32.88 × 10−62.16 × 10−57.69 × 10−64.07 × 10−21.92 × 10−61.02 × 10−51.73 × 10−6
CEC 177.66 × 10−13.52 × 10−63.41 × 10−51.36 × 10−54.72 × 10−23.11 × 10−52.84 × 10−51.73 × 10−6
CEC 183.82 × 10−15.04 × 10−11.73 × 10−61.06 × 10−11.31 × 10−18.92 × 10−51.02 × 10−11.92 × 10−6
CEC 192.58 × 10−31.73 × 10−61.73 × 10−61.73 × 10−65.22 × 10−61.73 × 10−63.59 × 10−41.73 × 10−6
CEC 207.27 × 10−31.29 × 10−32.41 × 10−37.51 × 10−53.38 × 10−36.42 × 10−31.96 × 10−21.73 × 10−6
CEC 217.27 × 10−31.02 × 10−53.18 × 10−61.92 × 10−66.87 × 10−21.85 × 10−21.85 × 10−21.73 × 10−6
CEC 222.58 × 10−33.88 × 10−61.24 × 10−56.16 × 10−45.45 × 10−22.18 × 10−23.16 × 10−21.73 × 10−6
CEC 23114.38 × 10−48.3 × 10−611.73 × 10−61.73 × 10−61.73 × 10−6
CEC 243.34 × 10−41.81 × 10−25.2 × 10−13.09 × 10−14.69 × 10−24.45 × 10−53.52 × 10−68.29 × 10−1
CEC 255 × 10−16.25 × 10−11.34 × 10−18.2 × 10−11.56 × 10−22.35 × 10−61.73 × 10−63.72 × 10−5
CEC 269.37 × 10−21.73 × 10−62.13 × 10−61.96 × 10−24.53 × 10−41.92 × 10−64.49 × 10−21.73 × 10−6
CEC 278.75 × 10−11.56 × 10−29.15 × 10−51.36 × 10−58.75 × 10−15.31 × 10−51.73 × 10−61.73 × 10−6
CEC 28113.79 × 10−61.73 × 10−611.73 × 10−61.73 × 10−61.73 × 10−6
CEC 297.73 × 10−38.94 × 10−12.13 × 10−64.72 × 10−24.68 × 10−33.11 × 10−51.36 × 10−41.73 × 10−6
CEC 301.83 × 10−32.56 × 10−62.88 × 10−65.22 × 10−62.22 × 10−43.06 × 10−49.43 × 10−11.73 × 10−6
Table 8. Experimental results of the pressure vessel design.

| Algorithm | Ts | Th | R | L | Best Cost |
| --- | --- | --- | --- | --- | --- |
| MSCSO | 0.742406 | 0.370292 | 40.31962 | 200 | 5734.915 |
| MGTOA [43] | 0.754364 | 0.366375 | 40.42809 | 198.5652 | 5752.402458 |
| CPSO [44] | 0.8125 | 0.4375 | 42.0913 | 176.7465 | 6061.0777 |
| HPSO [45] | 0.8125 | 0.4375 | 42.0984 | 176.6366 | 6059.7143 |
| GWO [3] | 0.8125 | 0.4345 | 42.08918 | 176.7587 | 6059.5639 |
| CS [46] | 0.8125 | 0.4375 | 42.09845 | 176.6366 | 6059.714335 |
| AO [47] | 1.054 | 0.182806 | 59.6219 | 39.805 | 5949.2258 |
| EROA [48] | 0.84343 | 0.400762 | 44.7861 | 45.9578 | 5935.7301 |
| WOA [6] | 0.8125 | 0.4375 | 42.09827 | 176.639 | 6059.741 |
| GA [8] | 0.8125 | 0.4375 | 42.0974 | 176.6541 | 6059.94634 |
| MVO [16] | 0.8125 | 0.4375 | 42.09074 | 176.7387 | 6060.8066 |
| ACO [2] | 0.8125 | 0.4375 | 42.10362 | 176.5727 | 6059.0888 |
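The costs in Table 8 follow the standard pressure vessel objective from the engineering-design literature, a weighted sum of material, forming, and welding costs in the four design variables (shell thickness Ts, head thickness Th, inner radius R, and length L). A minimal sketch:

```python
def pressure_vessel_cost(ts, th, r, l):
    # Standard objective: material + forming + welding cost terms
    return (0.6224 * ts * r * l
            + 1.7781 * th * r ** 2
            + 3.1661 * ts ** 2 * l
            + 19.84 * ts ** 2 * r)

# Reproduces the HPSO row of Table 8 (cost ~= 6059.71)
print(pressure_vessel_cost(0.8125, 0.4375, 42.0984, 176.6366))
```

The MSCSO row scores lower because its thinner shell and shorter-radius, longer-length geometry still satisfies the constraints while using less material.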
Table 9. Experimental results of the speed reducer design.

| Algorithm | x1 | x2 | x3 | x4 | x5 | x6 | x7 | Optimal Weight |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MSCSO | 3.497592 | 0.7 | 17 | 7.3 | 7.8 | 3.350043 | 5.285504 | 2995.438 |
| AOA [40] | 3.50384 | 0.7 | 17 | 7.3 | 7.72933 | 3.35649 | 5.2867 | 2997.9157 |
| MFO [7] | 3.497455 | 0.7 | 17 | 7.82775 | 7.712457 | 3.351787 | 5.286352 | 2998.94083 |
| CS [46] | 3.5015 | 0.7 | 17 | 7.605 | 7.8181 | 3.352 | 5.2875 | 3000.981 |
| RSA [49] | 3.50279 | 0.7 | 17 | 7.30812 | 7.74715 | 3.35067 | 5.28675 | 2996.5157 |
| HS [23] | 3.520124 | 0.7 | 17 | 8.37 | 7.8 | 3.36697 | 5.288719 | 3029.002 |
Table 10. Experimental results of the welded beam design.
Table 10. Experimental results of the welded beam design.
| Algorithm | h | l | t | b | Best Weight |
| --- | --- | --- | --- | --- | --- |
| MSCSO | 0.205723 | 3.253494 | 9.036686 | 0.205731 | 1.695309 |
| TSA [50] | 0.244157 | 6.223066 | 8.29555 | 0.244405 | 2.38241101 |
| WOA [6] | 0.20536 | 3.48293 | 9.03746 | 0.206276 | 1.730499 |
| ROA [4] | 0.200077 | 3.365754 | 9.011182 | 0.206893 | 1.706447 |
| GWO [3] | 0.205676 | 3.478377 | 9.03681 | 0.205778 | 1.72624 |
| GA [8] | 0.1829 | 4.0483 | 9.3666 | 0.2059 | 1.8242 |
| MFO [7] | 0.2057 | 3.4703 | 9.0364 | 0.2057 | 1.72452 |
| MVO [16] | 0.205463 | 3.473193 | 9.044502 | 0.205695 | 1.72645 |
| GSA [17] | 0.182129 | 3.856979 | 10 | 0.202376 | 1.879952 |
| RO [20] | 0.203687 | 3.528467 | 9.004233 | 0.207241 | 1.735344 |
| MROA [51] | 0.2062185 | 3.254893 | 9.020003 | 0.206489 | 1.699058 |
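The welded beam objective in Table 10 is the usual fabrication-cost function of the weld height h, weld length l, and bar dimensions t and b. A quick sketch (function name is illustrative), checked against the MSCSO row:

```python
def welded_beam_cost(h, l, t, b):
    """Fabrication cost of the welded beam: weld material plus bar stock."""
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

# MSCSO solution from Table 10
cost = welded_beam_cost(0.205723, 3.253494, 9.036686, 0.205731)
# cost ≈ 1.695309, matching the reported best weight
```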
Table 11. Experimental results of the tension/compression spring design.
| Algorithm | d | D | N | Best Weight |
| --- | --- | --- | --- | --- |
| MSCSO | 0.05 | 0.374433 | 8.546579 | 0.009872 |
| MFO [7] | 0.051994 | 0.364109 | 10.86842 | 0.012667 |
| SSA [33] | 0.051207 | 0.345215 | 12.00403 | 0.012676 |
| ES [52] | 0.051989 | 0.363965 | 10.89052 | 0.012681 |
| PSO [1] | 0.051728 | 0.357644 | 11.24454 | 0.012675 |
| EROA [48] | 0.053799 | 0.46951 | 5.811 | 0.010614 |
| HHO [53] | 0.051796 | 0.359305 | 11.13886 | 0.012665 |
| HS [23] | 0.051154 | 0.349871 | 12.07643 | 0.012671 |
| MVO [16] | 0.05251 | 0.37602 | 10.33513 | 0.01279 |
| GA [8] | 0.05148 | 0.351661 | 11.6322 | 0.012705 |
| GWO [3] | 0.05169 | 0.356737 | 11.28885 | 0.012666 |
| DE [13] | 0.051609 | 0.354714 | 11.41083 | 0.01267 |
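The tension/compression spring objective is the standard spring weight (N + 2)Dd², where d is the wire diameter, D the mean coil diameter, and N the number of active coils. A minimal sketch (function name is illustrative), checked against the MSCSO row:

```python
def spring_weight(d, D, N):
    """Weight of the tension/compression spring: (N + 2) * D * d**2."""
    return (N + 2) * D * d ** 2

# MSCSO solution from Table 11
w = spring_weight(0.05, 0.374433, 8.546579)
# w ≈ 0.009872, matching the reported best weight
```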
Table 12. Experimental results of the cantilever beam design.
| Algorithm | x1 | x2 | x3 | x4 | x5 | Optimum Weight |
| --- | --- | --- | --- | --- | --- | --- |
| MSCSO | 6.01265 | 5.315452 | 4.492016 | 3.501096 | 2.152481 | 1.33995853466334 |
| WOA [6] | 5.1261 | 5.6188 | 5.0952 | 3.9329 | 2.3219 | 1.37873150673956 |
| BWO [54] | 6.2094 | 6.2094 | 6.2094 | 6.2094 | 6.2094 | 1.93736251728534 |
| PSO [1] | 6.0040 | 5.2950 | 4.4915 | 3.5125 | 2.1710 | 1.33998298081255 |
| GSA [17] | 5.6052 | 4.9553 | 5.6619 | 3.1959 | 3.2026 | 1.41155753917296 |
| ERHHO [55] | 6.0509 | 5.2639 | 4.514 | 3.4605 | 2.1878 | 1.3402 |
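The cantilever beam objective is proportional to the total material used, 0.0624 times the sum of the five hollow-block heights x1 through x5. A minimal sketch (function name is illustrative), checked against the MSCSO row:

```python
def cantilever_weight(heights):
    """Cantilever beam weight: 0.0624 times the sum of the five section heights."""
    return 0.0624 * sum(heights)

# MSCSO solution from Table 12
w = cantilever_weight([6.01265, 5.315452, 4.492016, 3.501096, 2.152481])
# w ≈ 1.339959, matching the reported optimum weight
```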
Table 13. Experimental results of the multiple disc clutch brake.
| Algorithm | x1 | x2 | x3 | x4 | x5 | Optimum Weight |
| --- | --- | --- | --- | --- | --- | --- |
| MSCSO | 70 | 90 | 1 | 637.791 | 2 | 0.235242 |
| TLBO [21] | 70 | 90 | 1 | 810 | 3 | 0.313656611 |
| WCA [56] | 70 | 90 | 1 | 910 | 3 | 0.313656 |
| MVO [16] | 70 | 90 | 1 | 910 | 3 | 0.313656 |
| CMVO [57] | 70 | 90 | 1 | 910 | 3 | 0.313656 |
| MFO [7] | 70 | 90 | 1 | 910 | 3 | 0.313656 |
| RSA [49] | 70.0347 | 90.0349 | 1 | 801.7285 | 2.974 | 0.31176 |
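The multiple disc clutch brake objective is the brake weight, π(ro² − ri²)t(Z + 1)ρ, where x1 = ri (inner radius), x2 = ro (outer radius), x3 = t (disc thickness), x4 = F (actuating force), and x5 = Z (number of friction surfaces). Note that F appears only in the constraints, not the objective, which is why algorithms with different x4 values report identical weights in Table 13. A minimal sketch, assuming the commonly used material density ρ = 7.8 × 10⁻⁶ kg/mm³ (the function name and default are assumptions here):

```python
import math

def clutch_brake_weight(ri, ro, t, z, rho=7.8e-6):
    """Clutch brake weight: pi * (ro^2 - ri^2) * t * (Z + 1) * rho.

    The actuating force F enters the constraints only, so it is omitted here.
    rho = 7.8e-6 kg/mm^3 is the density assumed by the standard formulation.
    """
    return math.pi * (ro ** 2 - ri ** 2) * t * (z + 1) * rho

# MSCSO solution from Table 13 (x4 = F does not affect the weight)
w = clutch_brake_weight(70, 90, 1, 2)
# w ≈ 0.235242, matching the reported optimum weight
```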
Table 14. Experimental results of the car crashworthiness design.
| Algorithm | MSCSO | ROA [4] | MPA [58] | ROLGWO [59] | HHOCM [60] | MALO [61] |
| --- | --- | --- | --- | --- | --- | --- |
| x1 | 0.500111598 | 0.5 | 0.5 | 0.501255 | 0.500164 | 0.5 |
| x2 | 1.228268972 | 1.22942 | 1.22823 | 1.245551 | 1.248612 | 1.2281 |
| x3 | 0.500012764 | 0.5 | 0.5 | 0.500046 | 0.659558 | 0.5 |
| x4 | 1.202547678 | 1.21197 | 1.2049 | 1.180254 | 1.098515 | 1.2126 |
| x5 | 0.500193341 | 0.5 | 0.5 | 0.500035 | 0.757989 | 0.5 |
| x6 | 1.052807602 | 1.37798 | 1.2393 | 1.16588 | 0.767268 | 1.308 |
| x7 | 0.500029525 | 0.50005 | 0.5 | 0.500088 | 0.500055 | 0.5 |
| x8 | 0.34499308 | 0.34489 | 0.34498 | 0.344895 | 0.343105 | 0.3449 |
| x9 | 0.335951909 | 0.19263 | 0.192 | 0.299583 | 0.192032 | 0.2804 |
| x10 | 0.461176886 | 0.62239 | 0.44035 | 3.59508 | 2.898805 | 0.4242 |
| x11 | 1.050120991 | - | 1.78504 | 2.29018 | - | 4.6565 |
| Best Weight | 23.19085116 | 23.23544 | 23.19982 | 23.22243 | 24.48358 | 23.2294 |
Wu, D.; Rao, H.; Wen, C.; Jia, H.; Liu, Q.; Abualigah, L. Modified Sand Cat Swarm Optimization Algorithm for Solving Constrained Engineering Optimization Problems. Mathematics 2022, 10, 4350. https://doi.org/10.3390/math10224350
