Article

Enhanced Parallel Sine Cosine Algorithm for Constrained and Unconstrained Optimization

by Akram Belazi 1,†, Héctor Migallón 2,*,†, Daniel Gónzalez-Sánchez 2,†, Jorge Gónzalez-García 2,†, Antonio Jimeno-Morenilla 3,† and José-Luis Sánchez-Romero 3,†
1 Laboratory RISC-ENIT (LR-16-ES07), Tunis El Manar University, Tunis 1002, Tunisia
2 Department of Computer Engineering, Miguel Hernández University, 03202 Elche, Spain
3 Department of Computer Technology, University of Alicante, 03071 Alicante, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2022, 10(7), 1166; https://doi.org/10.3390/math10071166
Submission received: 8 March 2022 / Revised: 26 March 2022 / Accepted: 30 March 2022 / Published: 3 April 2022

Abstract: The main idea of the sine cosine algorithm (SCA) is the sine- and cosine-based fluctuation outwards or towards the best solution. The first main contribution of this paper is an enhanced version of the SCA, called the ESCA algorithm. The superiority of the proposed algorithm over a set of state-of-the-art algorithms, in terms of solution accuracy and convergence speed, is demonstrated by experimental tests. When these algorithms are transferred to the business sector, they must meet time requirements that depend on the industrial process. If these temporal requirements are not met, an efficient solution is to speed them up by designing parallel algorithms. The second major contribution of this work is the design of several parallel algorithms that efficiently exploit current multicore processor architectures. First, one-level synchronous and asynchronous parallel ESCA algorithms are designed. They have two advantages: they retain the behavior of the proposed algorithm and provide excellent parallel performance by combining coarse-grained and fine-grained parallelism. Moreover, the parallel scalability of the proposed algorithms is further improved by employing a two-level parallel strategy. The experimental results show that the one-level parallel ESCA algorithms reduce the computing time, on average, by 87.4% and 90.8%, respectively, using 12 physical processing cores. The two-level parallel algorithms provide further reductions of the computing time, of 91.4%, 93.1%, and 94.5%, with 16, 20, and 24 processing cores, including physical and logical cores. The comparison analysis is carried out on 30 unconstrained benchmark functions and three challenging engineering design problems. The experimental outcomes show that the proposed ESCA algorithm behaves outstandingly well in terms of exploration and exploitation, local optima avoidance, and convergence speed towards the optimum. The overall performance of the proposed algorithm is statistically validated using three non-parametric statistical tests, namely the Friedman, Friedman aligned, and Quade tests.

1. Introduction

Metaheuristic optimization methods are widely used. Many of these algorithms are based on populations that evolve towards the optimum through an iterative process. In many cases, this iterative process is governed by rules based on natural phenomena, physical processes, or mathematical functions. Depending on both the evolutionary process of the populations (i.e., the algorithm used) and the characteristics of the function to be optimized (single-objective or multi-objective), the use of these methods may not be feasible, either because of the high computing cost or because of the poor quality of the result.
Some of the well-known metaheuristic optimization algorithms are based on natural phenomena. The most common are the ant colony optimization (ACO) algorithm [1], which imitates the foraging behavior of ant colonies; the evolutionary strategy (ES) algorithm [2], which is based on the processes of mutation and selection seen in evolution; evolutionary programming [3] and genetic programming [4], which evolve programs by selecting individuals for reproduction (crossover) and mutation; the particle swarm optimization (PSO) algorithm [5], which is based on the social behavior of fish schooling or bird flocking; the shuffled frog leaping algorithm [6], which imitates the collaborative behavior of frogs; and the artificial bee colony (ABC) algorithm [7], which was inspired by the foraging behavior of honey bees. Some algorithms are based on physical phenomena, for instance, the simulated annealing (SA) algorithm [8], which is based on the annealing process in metallurgy. Other algorithms are based on human or non-human physiological processes, such as genetic algorithms (GA) [9], which reflect the process of natural selection; differential evolution (DE) [10,11,12], which optimizes a problem by iteratively improving candidate agents with respect to a given measure of quality; and the artificial immune algorithm (AIA) [13], which is based on the behavior of the human immune system. Some algorithms based on human social processes have also been proposed, such as the harmony search algorithm (HSA) [14], inspired by the process of musical performance. Finally, there are algorithms based on mathematical constructs, such as the SCA algorithm [15], which is based on the sine and cosine trigonometric functions.
Almost all of the algorithms mentioned require configuration parameters for an optimal optimization process. An incorrect setting of these parameters can lead either to a poor-quality solution or to a drastic increase in the computational cost, as more generations need to be processed. For example, ABC needs the number of bees and the limit parameter to be defined, and HSA needs the harmony memory consideration rate, the number of improvisations, etc., to be adjusted. However, some of these algorithms do not require parameter tuning, such as the teaching-learning-based optimization (TLBO), Jaya, and SCA algorithms. The latter is employed in this paper.
The SCA algorithm has proven to be efficient in various applications. In [16], SCA is used to train a feedforward neural network for breast cancer classification. The authors in [17] employ the SCA algorithm to improve an adaptive fuzzy logic PID (proportional integral derivative) controller for the load frequency control of an autonomous power generation system. In addition, it has been used to optimize the parameters of a fractional-order proportional integral derivative controller for the coordinated control of power consumption in heat pumps [18]. In [19], the unified power quality conditioner is formulated as a single-objective problem optimized using SCA. The application spectrum of the SCA algorithm is very broad; see, for example, [20,21,22,23,24,25,26,27]. However, its convergence speed is rather slow, especially for multimodal objective functions. Indeed, it maintains a high global search ability even at the end of the iterations. This paper aims to improve the optimization behavior of the SCA algorithm by intensifying the refinement of the current solution while keeping a promising diversification level during the course of the algorithm, speeding it up both in terms of optimization and computational cost.
The major contributions of this work are:
  • A new optimization algorithm, dubbed the enhanced sine cosine algorithm (ESCA), is proposed. It improves the SCA algorithm and offers better performance than a set of state-of-the-art algorithms. The outstanding optimization performance of the ESCA algorithm is based on embedding a best-guided approach alongside the local search capability already present in the SCA algorithm, leading to a decrease in the diversification behavior at the end of the iterations.
  • To improve the computational performance of the proposed algorithm, synchronous and asynchronous parallel algorithms have been designed based on parallelization at an outer, i.e., coarse-grained, level. Since this level of parallelization is related to subpopulations, the number of subpopulations cannot increase indefinitely. These synchronous and asynchronous one-level parallel ESCA algorithms decrease the computing time by 87.4% and 90.8%, respectively, using 12 processing cores.
  • To improve parallel scalability, and to increase the number of processes without harming the optimization performance, two-level parallel algorithms have been designed. The parallel strategy includes two levels, namely an outer level and an internal level. The outer level corresponds to coarse-grained parallelization, while the internal level corresponds to fine-grained parallelization. Accordingly, the parallel scalability of the proposed algorithms is greatly improved. The experimental results show significant reductions in the computing time of 91.4%, 93.1%, and 94.5% with 16, 20, and 24 processes mapped onto 12 physical cores. These time reductions correspond to speed-ups of 12.5×, 15.9×, and 19.0× with 16, 20, and 24 processes correctly mapped onto the 12 physical cores, i.e., using hyperthreading.
The rest of the paper is organized as follows. The preliminaries, including the sine cosine algorithm (SCA) and the related works, are provided in Section 2. The proposed enhanced SCA algorithm (ESCA) along with the proposed parallel algorithms based on multi-population are described in Section 3. Section 4 lists the benchmark functions and the engineering problems employed for testing the performance of the proposed algorithm. The experimental results of these algorithms are discussed in Section 5. Finally, Section 6 concludes the paper.

2. Related Work

The SCA algorithm, on which our ESCA proposal is based, is described in Section 2.1. Other proposals based on the SCA algorithm are listed and briefly described in Section 2.2.

2.1. Sine Cosine Algorithm

The SCA algorithm is an optimization algorithm based on an initial population that evolves in search of the optimum of a function, called the cost function. This evolution, i.e., the generation of consecutive new populations (the typical procedure of population-based algorithms), is mainly based on (1) and (2).
Pop_m^k = Pop_m^k + r_1 · sin(r_2^k) · | r_3^k · BestPop^k − Pop_m^k |      (1)
Pop_m^k = Pop_m^k + r_1 · cos(r_2^k) · | r_3^k · BestPop^k − Pop_m^k |      (2)
As can be seen, (1) and (2) differ only in the use of the sine or cosine function. In these equations, each population is composed of m individuals, each individual consists of k variables (this parameter depends on the cost function), and the best current individual is denoted by BestPop. Each new individual is generated from both the current individual (Pop_m) and the current best individual (BestPop). The generation of each variable of each new individual is tuned by three random values that define the amplitude of the sine or cosine term (r_1), the sine or cosine domain (r_2^k), and the magnitude of the contribution of the target (BestPop) to the new position of the solution (r_3^k).
In practice, the random numbers r_1 divide the search space into two sub-spaces based on the current individual and the best individual in the current population. Thus, if r_1 is greater than 1, the candidate solutions fluctuate outwards from the destination; otherwise, they fluctuate towards it (see Figure 1).
Both the exploration and exploitation phases of the SCA optimization algorithm rely on the capabilities provided by (1) and (2); the choice between them is made at random with equal probability.
In iterative heuristic optimization algorithms, the exploitation phase is usually more decisive in the final stage of the iterative procedure. The SCA algorithm shifts the balance between exploration and exploitation as the iterations proceed through r_1 (see Equation (3)).
r_1 = iniValue_r1 − currentIT · (iniValue_r1 / max_ITs)      (3)
From an initial value (iniValue_r1), the value of r_1 decreases as the number of iterations performed increases, reaching its minimum value when the last iteration is performed (max_ITs). The initial value of r_1 is set to 2. The number of iterations to be performed (max_ITs) is necessary for all population-based heuristic optimization algorithms. In practice, the value of r_1 modifies the range of the terms associated with the sine and cosine, from the original range [−1, 1] to a decreasing range that starts at [−iniValue_r1, iniValue_r1]. The contribution of these variables can be seen in Algorithm 1, which shows the steps of the SCA algorithm. The newly computed individual is newPop_m, the number of individuals in the population is popSize, and the number of design variables of the cost function is numDesignVars.
Algorithm 1 The SCA optimization algorithm.
1: Set iniValue_r1 = 2
2: Set max_ITs variable
3: Set population size (m: iterator for individuals)
4: Define cost function (k: iterator for design variables)
5: Generate initial population Pop_0
6: for iterator = 1 to max_ITs do
7:    Search for the current BestPop
8:    r_1 = iniValue_r1 − iterator · (iniValue_r1 / max_ITs)
9:    for m = 0 to popSize do
10:      for k = 1 to numDesignVars do
11:         r_2 = 2π · rand[0..1]
12:         r_3 = 2 · rand[0..1]
13:         r_4 = rand[0..1]
14:         if r_4 < 0.5 then
15:            newPop_m^k = Pop_m^k + r_1 · sin(r_2) · | r_3 · BestPop^k − Pop_m^k |
16:         else
17:            newPop_m^k = Pop_m^k + r_1 · cos(r_2) · | r_3 · BestPop^k − Pop_m^k |
18:         end if
19:      end for
20:      Pop_m = newPop_m
21:   end for
22: end for
23: Search for the current BestPop
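For illustration, the per-variable update of Algorithm 1 can be sketched in C as follows. This is a minimal sketch and not the reference implementation of the paper; the row-wise array layout and the rand01() helper are assumptions made for readability, and the boundary handling of the design variables is omitted.

   #include <math.h>
   #include <stdlib.h>

   #define PI 3.14159265358979323846

   /* Uniform random number in [0, 1] (hypothetical helper). */
   static double rand01(void) { return (double)rand() / RAND_MAX; }

   /* One SCA generation (Algorithm 1, lines 9-21).
    * pop  : popSize x numVars matrix stored row by row,
    * best : current best individual (numVars entries),
    * r1   : amplitude computed from Equation (3) before the call. */
   void sca_generation(double *pop, const double *best,
                       int popSize, int numVars, double r1)
   {
       for (int m = 0; m < popSize; m++) {
           for (int k = 0; k < numVars; k++) {
               double r2 = 2.0 * PI * rand01();
               double r3 = 2.0 * rand01();
               double r4 = rand01();
               double dist = fabs(r3 * best[k] - pop[m * numVars + k]);
               if (r4 < 0.5)                          /* Equation (1) */
                   pop[m * numVars + k] += r1 * sin(r2) * dist;
               else                                   /* Equation (2) */
                   pop[m * numVars + k] += r1 * cos(r2) * dist;
           }
       }
   }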

2.2. SCA-Based Proposals

Thanks to its simplicity, the SCA algorithm has been widely adopted and refined in many research proposals. In [28], the authors proposed a modified SCA algorithm in which the linear transition rule was replaced by a non-linear transition to guarantee a better transition from exploration to exploitation. Second, best guidance based on the elite candidate solution was introduced into the SCA's search equations. Third, to escape from local optima, a mutation operator is used to produce a new position during the course of the algorithm. An improved variant of SCA, named HSCA, for training multilayer perceptrons was reported in [29]. The HSCA adjusted the search mechanism of SCA by combining leading guidance with the simulated quenching algorithm. In [30], a novel SCA based on orthogonal parallel information was presented. It relies on two approaches: multiple-orthogonal parallel information and an experience-based opposition direction strategy. The former enables the algorithm to preserve solution diversification and search around promising regions simultaneously. The latter serves to guard the exploration ability of the SCA algorithm. The authors in [31] proposed an improved sine cosine algorithm (ISCA) for feature selection in text categorization. In addition to the position of the leading solution, the ISCA works with random positions from the search space. This alteration of the solution's position mitigates premature convergence and yields adequate performance. Ref. [32] suggested an improved sine cosine algorithm in which a couple of new mechanisms are provided. One is the mixing of the exploitation abilities of crossover with the personal best position of individual solutions. The other is the combination of self-learning and global search tools. Zhiliu et al. proposed a modified SCA algorithm based on vicinity search and greedy Levy mutation [33]. It introduces three optimization tactics. First, it mixes the exponential decrease of the conversion parameter and the linear decrease of the inertia weight, which yields an equilibrium between the algorithm's global and local search abilities. Second, to escape from local optima, a random search strategy for agents around the best one is performed. Third, the greedy Levy mutation strategy is adopted for the best individuals to intensify the algorithm's local search ability. A hybrid modified SCA algorithm was studied in [34]. It benefited from the ability to generate random populations through the Latin hypercube sampling method. Next, it was hybridized with the cuckoo search algorithm. The algorithm showed sufficient local and global search skills. Mohamed et al. presented an improved SCA algorithm based on opposition-based learning (OBL) [35]. OBL is a machine learning approach usually utilized to boost the performance of metaheuristic optimization algorithms. It allows better accuracy of the obtained solutions by promoting the exploration skills of the algorithm. Since OBL selects the leading element falling between a given solution and its opposite, better solutions are obtained accordingly. An enhanced SCA algorithm for feature selection was described in [36]. It embedded an elitism strategy and a new strategy for updating the best solution, yielding better accuracy for pattern classification. In [37], the authors proposed an improved SCA algorithm for solving high-dimensional global optimization problems.
The equation for updating the position of the current solution and the linearly decreasing parameter were modified. In the former, an inertia weight was introduced to speed up the convergence rate and avoid local optima. The latter was replaced by a Gaussian function-based strategy that enables a non-linear decrease of the parameter. Therefore, a promising exploration-exploitation balance was achieved. Other good attempts at improving the SCA algorithm can be found in [38,39,40,41,42]. In this subsection, some SCA-based algorithms have been reviewed, and the motivation for the improvements in each of them has been briefly described.

3. Proposed Work

In Section 3.1, our proposed optimization algorithm based on the SCA algorithm, called ESCA, is presented. Then, in Section 3.2, the parallel algorithms developed to computationally accelerate the ESCA algorithm are presented.

3.1. Enhanced Sine Cosine Algorithm

The proposed enhanced sine cosine algorithm (ESCA) aims to improve the optimization behavior of the original SCA algorithm. For this purpose, we enhance the exploration and exploitation phases of the SCA optimization algorithm, which depend on the capabilities provided by (1) and (2). These capabilities are boosted by introducing a new alternative, defined by (4), to generate each new individual.
Pop_m^k = BestPop^k + r_5^2 · ( Pop_m^k − r_6 · BestPop^k )      (4)
When using (4), the new individual is generated based on the current individual and the distance between that individual and the best individual in the current population. Both the magnitude of the best individual and the magnitude of the distance are tuned using two random numbers, r_5 (which is squared) and r_6, respectively, as shown in (4).
The probability of using the sine-based equation, i.e., (1), remains at 50%, while the probability of using the cosine-based equation, i.e., (2), decreases to 20%. The new equation uses neither sine nor cosine and has a 30% chance of being selected. The proposed enhanced sine cosine algorithm (ESCA) is described in Algorithm 2.
Algorithm 2 Enhanced SCA (ESCA) optimization algorithm.
1: Set iniValue_r1 = 2
2: Set max_ITs variable
3: Set population size (m: iterator for individuals)
4: Define cost function (k: iterator for design variables)
5: Generate initial population Pop_0
6: for iterator = 1 to max_ITs do
7:    Search for the current BestPop
8:    r_1 = iniValue_r1 − iterator · (iniValue_r1 / max_ITs)
9:    for m = 0 to popSize do
10:      for k = 1 to numDesignVars do
11:         r_2 = 2π · rand[0..1]
12:         r_3 = 2 · rand[0..1]
13:         r_4 = rand[0..1]
14:         if r_4 < 0.5 then
15:            newPop_m^k = Pop_m^k + r_1 · sin(r_2) · | r_3 · BestPop^k − Pop_m^k |
16:         else if r_4 < 0.7 then
17:            newPop_m^k = Pop_m^k + r_1 · cos(r_2) · | r_3 · BestPop^k − Pop_m^k |
18:         else
19:            r_5 = rand[0..1]
20:            r_6 = round(1 + rand[0..1])
21:            newPop_m^k = BestPop^k + r_5^2 · ( Pop_m^k − r_6 · BestPop^k )
22:         end if
23:      end for
24:      Pop_m = newPop_m
25:   end for
26: end for
27: Search for the current BestPop
In more detail, in the SCA algorithm two equations can be used to obtain a new individual, as can be seen in Algorithm 1 (lines 14–18): the first based on the sine function and the second based on the cosine function. Both equations have the same probability of being used, as can be seen in line 14 of Algorithm 1. In contrast, in our proposal up to three equations can be used; the first two coincide with those of the SCA algorithm, and the third is shown in Equation (4). The probability of using the sine-based equation of the SCA algorithm remains unchanged. The probability of using the cosine-based equation of the SCA algorithm is reduced to 20%, while the new equation proposed in the ESCA algorithm has a 30% probability of being used, as can be seen in Algorithm 2 (lines 18–22).
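As a sketch of how the three-way choice of Algorithm 2 can be coded (an illustrative fragment, not the authors' implementation; PI and the rand01() helper are assumed to be defined as in the SCA sketch above), the update of a single design variable becomes:

   /* Update of one design variable (Algorithm 2, lines 11-22).
    * popmk: current value Pop_m^k, bestk: BestPop^k, r1: from Equation (3). */
   double esca_update(double popmk, double bestk, double r1)
   {
       double r2 = 2.0 * PI * rand01();
       double r3 = 2.0 * rand01();
       double r4 = rand01();

       if (r4 < 0.5)            /* 50%: sine-based rule, Equation (1) */
           return popmk + r1 * sin(r2) * fabs(r3 * bestk - popmk);
       else if (r4 < 0.7)       /* 20%: cosine-based rule, Equation (2) */
           return popmk + r1 * cos(r2) * fabs(r3 * bestk - popmk);
       else {                   /* 30%: best-guided rule, Equation (4) */
           double r5 = rand01();
           double r6 = round(1.0 + rand01());   /* r6 is rounded to 1 or 2 */
           return bestk + r5 * r5 * (popmk - r6 * bestk);
       }
   }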
To compare the behavior of the search agents of the SCA and ESCA algorithms, the two-dimensional versions of the benchmark functions are solved by 30 search agents. The search maps of the search agents for 300 function evaluations are shown in Figure 2, Figure 3 and Figure 4. Similarly, the distributions of all possible solutions over the entire search space are depicted in Figure 5, Figure 6 and Figure 7. These figures reveal that the ESCA algorithm searches thoroughly within narrow areas of the promising regions of the search space, which means it reaches the optimum faster. In contrast, the SCA algorithm searches in dispersed areas of the entire space, so more time is required to reach the promising regions. In addition, the solutions obtained by the ESCA algorithm are mostly distributed around the global optimum. This proves that it efficiently exploits the previous solutions to improve the current one and avoids large jumps in the search space. The SCA algorithm's weakness is that it favors exploration even at the end of the iterations. An efficient optimization algorithm should strike a balance between exploitation and exploration. Indeed, it should maintain a high level of diversification at the beginning and a lower one at the end to avoid falling into local optima, while progressively refining the current solution. Briefly, the algorithm should promote exploration at the beginning and exploitation at the end. In this context, the ESCA algorithm is guided by the current best solution (see Equation (4)) to converge towards the optimum and sustain a high level of intensification at the end of the algorithm. Accordingly, a better balance between local and global search is guaranteed over the course of the iterations.

3.2. Proposed Parallel Algorithms

Almost all newer computing platforms, regardless of their computing power, are parallel. The main trends to increase the platforms’ computing power are (i) increasing the number of processing units (physical cores and/or logical threads) and (ii) including hardware accelerators (GPUs, FPGAs, etc.). We propose parallel algorithms based on multicore platforms to efficiently use the computational resources available on shared memory parallel platforms.
First, two coarse-grained parallel algorithms based on multi-population are developed. Similar strategies applied to other well-known heuristic optimization algorithms are presented in [43,44]. In both the SCA and the proposed ESCA algorithms, only the population size and the stop criterion need to be established. Since the proposed parallel algorithms are based on multi-populations, the selected population size is that of the initial population, i.e., before it is partitioned. The stop criterion is the number of new generations to be computed. Note that the number of generations and the population size implicitly determine the number of cost function evaluations to be performed.
The initial population is divided into subpopulations of equal or similar size. The size of the subpopulations depends on the number of processing units used, as shown in Algorithm 3 (line 4). If the size of the initial population is not divisible by the number of processing units, the sizes of some subpopulations are increased by one, as shown in lines 5–9 of Algorithm 3.
Algorithm 3 Multi-population size computation.
1: Initial population size: popInitSize
2: Number of cores (or processes): NoCs
3: Process ID: idPr ∈ [0, NoCs − 1]
4: subpopSize = popInitSize / NoCs
5: if (popInitSize % NoCs) != 0 then
6:    if idPr < (popInitSize % NoCs) then
7:       subpopSize = subpopSize + 1
8:    end if
9: end if
Once the size of the subpopulations has been determined according to the size of the initial population and the number of processes, as can be seen in Algorithm 3, each subpopulation is processed by a single process. The communications required between these concurrent processes depend on the algorithm. The asynchronous approach reduces these communications with respect to the synchronous algorithm. Note that when hyperthreading is not used, each core runs only one process. In our case, hyperthreading is used when more than 12 processes are required.
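A minimal OpenMP sketch of Algorithm 3, in which each process derives the size of its own subpopulation from its identifier, is shown below; the variable names mirror Algorithm 3, but the program itself is only an illustrative assumption, not the authors' code.

   #include <omp.h>
   #include <stdio.h>

   int main(void)
   {
       const int popInitSize = 240;           /* size of the whole population */
       #pragma omp parallel
       {
           int NoCs = omp_get_num_threads();  /* number of concurrent processes */
           int idPr = omp_get_thread_num();   /* process identifier */
           int subpopSize = popInitSize / NoCs;
           if (popInitSize % NoCs != 0 && idPr < popInitSize % NoCs)
               subpopSize++;                  /* spread the remainder over the first processes */
           printf("process %d -> subpopulation of %d individuals\n", idPr, subpopSize);
           /* ... each process then evolves its own subpopulation ... */
       }
       return 0;
   }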
As stated, the proposed parallel algorithms are suitable for shared-memory platforms. In both algorithms, to efficiently exploit shared-memory platforms, private memory is used whenever possible. The first proposed parallel algorithm, shown in Algorithm 4, is asynchronous, i.e., communications between processes are not needed. Algorithm 4 shows the parallel processing implemented in the asynchronous parallel method, i.e., the processing performed once the sequential thread has spawned the parallel region. A new subpopulation individual (newSP_m) is computed based on the current subpopulation individual (SP_m) and the best subpopulation individual (subpopBest).
It is worth mentioning that the concurrent processing shown in Algorithm 4 has no synchronization points. This strategy makes it possible to use subpopulations of significantly different sizes, balancing the computing load through the number of generations processed by each thread, and thus does not degrade parallel efficiency.
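A possible OpenMP skeleton of this asynchronous strategy is sketched below. It is only an outline under the assumption that esca_generation() applies the update of Algorithm 2 to every individual of a private subpopulation (analogous to the earlier sketches) and that init_subpopulation() and find_best() are hypothetical helpers; note the absence of barriers inside the generation loop.

   /* Fragment assumed to sit inside main(), after the problem set-up. */
   #pragma omp parallel
   {
       double *SP         = malloc((size_t)subpopSize * numVars * sizeof(double));
       double *subpopBest = malloc((size_t)numVars * sizeof(double));
       init_subpopulation(SP, subpopSize, numVars);          /* hypothetical helper */

       for (int genIt = 1; genIt <= numGenerations; genIt++) {
           find_best(SP, subpopSize, numVars, subpopBest);   /* hypothetical helper */
           double r1 = iniValue_r1 - genIt * iniValue_r1 / numGenerations;
           esca_generation(SP, subpopBest, subpopSize, numVars, r1);
           /* No synchronization: each process advances through its generations independently. */
       }
       free(SP);
       free(subpopBest);
   }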
Algorithm 5 presents the second parallel strategy, in which the concurrent processes share data to obtain the best individual of the whole population, i.e., the best of all subpopulations. This is done both at the beginning (line 7) and after each parallel process computes a new generation (line 29). To ensure that all concurrent processes use the best individual of the whole population (wholepopBest) in each new generation, a synchronization point is needed after the critical section (line 35).
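The exchange of the global best in Algorithm 5 (lines 29–35) maps naturally onto an OpenMP critical section followed by a barrier. The following fragment is a hedged sketch, assuming wholepopBest resides in shared memory, Feval() is the cost function, and string.h is included for memcpy(); it is not necessarily the authors' exact code.

   #pragma omp critical
   {
       if (Feval(subpopBest, numVars) < Feval(wholepopBest, numVars))
           /* publish a better global best */
           memcpy(wholepopBest, subpopBest, numVars * sizeof(double));
       else
           /* adopt the current global best */
           memcpy(subpopBest, wholepopBest, numVars * sizeof(double));
   }
   #pragma omp barrier   /* all processes see the updated wholepopBest before the next generation */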
As shown in Algorithms 4 and 5, the population size assigned to each process depends on the size of the whole population (popInitSize) and the number of computing processes, NoCs (see Algorithm 3). That is, as the number of processes increases, the size of the subpopulations decreases. When very small populations are used in population-based heuristic optimization algorithms, the optimization behavior can be significantly degraded. To further increase the number of processes, and thus further reduce the computing time without drastically reducing the subpopulation sizes, we propose a two-level parallel algorithm. The second (fine-grained) parallel level is applied when obtaining a new generation of each subpopulation (see lines 10 and 26 of Algorithm 4).
Algorithm 4 Asynchronous parallel algorithm.
1: Allocate private memory for subpopulation: SP[0, subpopSize]
2: Allocate private memory for best individual: subpopBest
3: Set iniValue_r1 = 2
4: Generation counter: genIt = 0
5: Generate initial subpopulation SP_0
6: while genIt < numGenerations do
7:    Search for the current subpopulation best subpopBest
8:    genIt = genIt + 1
9:    r_1 = iniValue_r1 − genIt · (iniValue_r1 / numGenerations)
10:   for m = 1 to subpopSize do
11:      for k = 1 to numDesignVars do
12:         r_2 = 2π · rand[0..1]
13:         r_3 = 2 · rand[0..1]
14:         r_4 = rand[0..1]
15:         if r_4 < 0.5 then
16:            newSP_m^k = SP_m^k + r_1 · sin(r_2) · | r_3 · subpopBest^k − SP_m^k |
17:         else if r_4 < 0.7 then
18:            newSP_m^k = SP_m^k + r_1 · cos(r_2) · | r_3 · subpopBest^k − SP_m^k |
19:         else
20:            r_5 = rand[0..1]
21:            r_6 = round(1 + rand[0..1])
22:            newSP_m^k = subpopBest^k + r_5^2 · ( SP_m^k − r_6 · subpopBest^k )
23:         end if
24:      end for
25:      SP_m = newSP_m
26:   end for
27: end while
In the two-level algorithm, the subpopulation sizes are not calculated as a function of the total number of processes, since each subpopulation is no longer processed by a single process. The total number of processes in the two-level algorithm is equal to the number of subpopulations multiplied by the number of processes that process each subpopulation. The number of subpopulations is equal to the number of external processes (NoCs), while the number of processes that work on each subpopulation is denoted by inCs. Therefore, the total number of processes equals NoCs × inCs.
Algorithm 6 requires important modifications with respect to Algorithm 4 that could degrade the parallel performance of the two-level parallel algorithm. Since several threads process each subpopulation, it must be stored in shared memory (line 1 of Algorithm 6) instead of in private memory as in Algorithm 4. Moreover, before processing each subpopulation, the best individual must be available to all the processes involved in processing that subpopulation. This implies a synchronization point (line 9 of Algorithm 6) that determines the best individual. Thereafter, each process checks whether the current best individual stored in its private memory (subpopBest) should be updated.
Algorithm 5 Parallel algorithm with data sharing.
1: Shared memory: wholepopBest
2: Allocate private memory for: SP[0, subpopSize] and subpopBest
3: Set iniValue_r1 = 2
4: Generation counter: genIt = 1
5: Generate initial subpopulation SP_0
6: Search for the current subpopulation best subpopBest
7: wholepopBest = Best of (subpopBest over the NoCs processes)
8: while genIt < numGenerations do
9:    genIt = genIt + 1
10:   r_1 = iniValue_r1 − genIt · (iniValue_r1 / numGenerations)
11:   for m = 1 to subpopSize do
12:      for k = 1 to numDesignVars do
13:         r_2 = 2π · rand[0..1]
14:         r_3 = 2 · rand[0..1]
15:         r_4 = rand[0..1]
16:         if r_4 < 0.5 then
17:            newSP_m^k = SP_m^k + r_1 · sin(r_2) · | r_3 · subpopBest^k − SP_m^k |
18:         else if r_4 < 0.7 then
19:            newSP_m^k = SP_m^k + r_1 · cos(r_2) · | r_3 · subpopBest^k − SP_m^k |
20:         else
21:            r_5 = rand[0..1]
22:            r_6 = round(1 + rand[0..1])
23:            newSP_m^k = subpopBest^k + r_5^2 · ( SP_m^k − r_6 · subpopBest^k )
24:         end if
25:      end for
26:      SP_m = newSP_m
27:   end for
28:   Search for the current subpopulation best subpopBest
29:   CRITICAL parallel section:
30:      if Feval(subpopBest) < Feval(wholepopBest) then
31:         wholepopBest = subpopBest
32:      else
33:         subpopBest = wholepopBest
34:      end if
35:   end CRITICAL
36: end while
Note that in Algorithm 6 the total number of processes is increased from NoCs to NoCs × inCs, using the same subpopulation size. There are several options for implementing the second level of parallelism (lines 13–29 of Algorithm 6), which will be discussed in Section 5.
Algorithm 6 Two-level parallel algorithm.
1: Allocate shared memory for NoCs subpopulations: SP[0, subpopSize]
2: Total number of processes: NoCs × inCs processes
3: Allocate private memory for best individual: subpopBest
4: Set iniValue_r1 = 2
5: Generation counter: genIt = 0
6: Generate initial subpopulation SP_0
7: while genIt < numGenerations do
8:    Search for the current subpopulation best subpopBest
9:    {Synchronization point}
10:   genIt = genIt + 1
11:   r_1 = iniValue_r1 − genIt · (iniValue_r1 / numGenerations)
12:   {FOR processed in PARALLEL using inCs processes}
13:   for m = 1 to subpopSize do
14:      for k = 1 to numDesignVars do
15:         r_2 = 2π · rand[0..1]
16:         r_3 = 2 · rand[0..1]
17:         r_4 = rand[0..1]
18:         if r_4 < 0.5 then
19:            newSP_m^k = SP_m^k + r_1 · sin(r_2) · | r_3 · subpopBest^k − SP_m^k |
20:         else if r_4 < 0.7 then
21:            newSP_m^k = SP_m^k + r_1 · cos(r_2) · | r_3 · subpopBest^k − SP_m^k |
22:         else
23:            r_5 = rand[0..1]
24:            r_6 = round(1 + rand[0..1])
25:            newSP_m^k = subpopBest^k + r_5^2 · ( SP_m^k − r_6 · subpopBest^k )
26:         end if
27:      end for
28:      SP_m = newSP_m
29:   end for
30: end while

4. Benchmark Test

The benchmark test used in this work is composed of 30 well-known unconstrained functions, presented in Section 4.1, and three constrained engineering design problems, presented in Section 4.2.

4.1. Benchmark Functions

A total of 30 well-known unconstrained functions used for the performance analysis are listed and described in Table 1 and Table 2.

4.2. Engineering Optimization Problems

The proposed algorithms’ optimization performance will be further examined through three constrained engineering design problems.

4.2.1. Pressure Vessel Design Problem

The structural design problem of pressure vessels is shown in Figure 8. In this design problem, four variables have to be computed: the thickness of the shell (d_s), the thickness of the heads (d_h), the internal radius (R), and the length (L) of the cylindrical section. These variables should minimize the financial cost while meeting the non-linear stress constraints and yield criteria. Note that d_s and d_h are not continuous variables: starting from 0.0625 inches, their possible values increase in steps of 0.0625 inches. The pressure vessel design problem is formulated as in (5).
Pressure vessel design problem:
   minimize f = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3
   x_1 = d_s, x_2 = d_h, x_3 = R, x_4 = L
Constraints:
   g_1 = −x_1 + 0.0193 x_3 ≤ 0
   g_2 = −x_2 + 0.00954 x_3 ≤ 0
   g_3 = −π x_3^2 x_4 − (4/3) π x_3^3 + 1,296,000 ≤ 0
   g_4 = x_4 − 240 ≤ 0
   0.0625 ≤ x_1, x_2 ≤ 99 · 0.0625
   10 ≤ x_3, x_4 ≤ 240      (5)
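As a concrete illustration of how the problem in (5) can be evaluated inside the optimizer, a minimal C sketch of the cost function and constraints follows. The static penalty used for violated constraints is an illustrative choice and not necessarily the constraint-handling scheme adopted by the authors.

   #include <math.h>

   /* Pressure vessel design, Equation (5): x[0]=d_s, x[1]=d_h, x[2]=R, x[3]=L.
    * x[0] and x[1] are assumed to have been rounded to multiples of 0.0625.  */
   double pressure_vessel(const double x[4])
   {
       const double PI = 3.14159265358979323846;
       double f = 0.6224 * x[0] * x[2] * x[3]
                + 1.7781 * x[1] * x[2] * x[2]
                + 3.1661 * x[0] * x[0] * x[3]
                + 19.84  * x[0] * x[0] * x[2];

       double g[4] = {
           -x[0] + 0.0193  * x[2],                                  /* g1 <= 0 */
           -x[1] + 0.00954 * x[2],                                  /* g2 <= 0 */
           -PI * x[2] * x[2] * x[3]
               - (4.0 / 3.0) * PI * pow(x[2], 3) + 1296000.0,       /* g3 <= 0 */
           x[3] - 240.0                                             /* g4 <= 0 */
       };

       double penalty = 0.0;               /* simple static penalty (illustrative) */
       for (int i = 0; i < 4; i++)
           if (g[i] > 0.0) penalty += 1.0e6 * g[i];
       return f + penalty;
   }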

4.2.2. Welded Beam Design Problem

The welded beam design problem is depicted in Figure 9. The cost of manufacturing and assembling the welded beam must be minimized by considering the welding work, material, and labor costs. The variables to be computed are the thickness of the weld (h), the length of the welded joint (l), the width of the beam (t), and the thickness of the beam (b). The optimization problem is formulated as in (6), where τ(x) is the shear stress in the weld, τ_max is the allowable shear stress of the weld, σ(x) is the normal stress in the beam, σ_max is the allowable normal stress for the beam material, P_c(x) is the bar buckling load, P is the load, δ(x) is the beam end deflection, and δ_max is the allowable beam end deflection. Some auxiliary functions and constant values used to solve the welded beam design problem are given in (7).
Welded beam design problem:
   minimize F = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14.0 + x_2)
   x_1 = h, x_2 = l, x_3 = t, x_4 = b
Constraints:
   g_1 = τ(x) − τ_max ≤ 0
   g_2 = σ(x) − σ_max ≤ 0
   g_3 = x_1 − x_4 ≤ 0
   g_4 = 0.10471 x_1^2 + 0.04811 x_3 x_4 (14.0 + x_2) − 5.0 ≤ 0
   g_5 = 0.125 − x_1 ≤ 0
   g_6 = δ(x) − δ_max ≤ 0
   g_7 = P − P_c(x) ≤ 0
   0.1 ≤ x_1, x_4 ≤ 2.0
   0.1 ≤ x_2, x_3 ≤ 10.0      (6)
Functions and constants of the welded beam problem:
   τ(x) = sqrt( (τ′)^2 + 2 τ′ τ″ x_2 / (2R) + (τ″)^2 );  τ′ = P / (√2 x_1 x_2);  τ″ = M R / J
   M = P (L + x_2 / 2);  R = sqrt( x_2^2 / 4 + ((x_1 + x_3) / 2)^2 )
   J = 2 { √2 x_1 x_2 [ x_2^2 / 12 + ((x_1 + x_3) / 2)^2 ] }
   σ(x) = 6 P L / (x_4 x_3^2)
   δ(x) = 4 P L^3 / (E x_3^3 x_4)
   P_c(x) = ( 4.013 E sqrt( x_3^2 x_4^6 / 36 ) / L^2 ) · ( 1 − (x_3 / (2L)) sqrt( E / (4G) ) )
   P = 6000 lb;  L = 14 in;  δ_max = 0.25 in
   E = 30 × 10^6 psi;  G = 12 × 10^6 psi
   τ_max = 13,600 psi;  σ_max = 30,000 psi      (7)
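For reference, the evaluation of (6) and (7) can be sketched in C as below. Again, the penalty handling of the constraints is an illustrative assumption rather than the authors' exact scheme.

   #include <math.h>

   /* Welded beam design, Equations (6)-(7): x[0]=h, x[1]=l, x[2]=t, x[3]=b. */
   double welded_beam(const double x[4])
   {
       const double P = 6000.0, L = 14.0, E = 30.0e6, G = 12.0e6;
       const double tau_max = 13600.0, sigma_max = 30000.0, delta_max = 0.25;

       double tau1  = P / (sqrt(2.0) * x[0] * x[1]);                        /* tau'  */
       double M     = P * (L + x[1] / 2.0);
       double R     = sqrt(x[1] * x[1] / 4.0 + pow((x[0] + x[2]) / 2.0, 2));
       double J     = 2.0 * (sqrt(2.0) * x[0] * x[1]
                      * (x[1] * x[1] / 12.0 + pow((x[0] + x[2]) / 2.0, 2)));
       double tau2  = M * R / J;                                            /* tau'' */
       double tau   = sqrt(tau1 * tau1
                      + 2.0 * tau1 * tau2 * x[1] / (2.0 * R) + tau2 * tau2);
       double sigma = 6.0 * P * L / (x[3] * x[2] * x[2]);
       double delta = 4.0 * P * pow(L, 3) / (E * pow(x[2], 3) * x[3]);
       double Pc    = 4.013 * E * sqrt(x[2] * x[2] * pow(x[3], 6) / 36.0) / (L * L)
                      * (1.0 - x[2] / (2.0 * L) * sqrt(E / (4.0 * G)));

       double f = 1.10471 * x[0] * x[0] * x[1]
                + 0.04811 * x[2] * x[3] * (14.0 + x[1]);

       double g[7] = { tau - tau_max, sigma - sigma_max, x[0] - x[3],
                       0.10471 * x[0] * x[0]
                           + 0.04811 * x[2] * x[3] * (14.0 + x[1]) - 5.0,
                       0.125 - x[0], delta - delta_max, P - Pc };

       double penalty = 0.0;               /* illustrative static penalty */
       for (int i = 0; i < 7; i++)
           if (g[i] > 0.0) penalty += 1.0e6 * g[i];
       return f + penalty;
   }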

4.2.3. Rolling Element Bearing Design Problem

The rolling element bearing design problem is a maximization problem that aims to maximize the dynamic load capacity of a rolling element bearing. This problem, depicted in Figure 10, has five decision variables, namely the pitch diameter (D_m), the ball diameter (D_b), the number of balls (Z), the curvature radius coefficient of the inner raceway groove (f_i = r_i / D_b), and the curvature radius coefficient of the outer raceway groove (f_o = r_o / D_b), where r_i and r_o are the inner and outer ring groove curvature radii, respectively. In addition, it has five constraint constants, K_Dmin, K_Dmax, ε, e, and ψ. This problem can be formulated as in (8).
Rolling element bearing design problem:
   maximize f = f_c x_3^{2/3} x_2^{1.8}          if x_2 ≤ 25.4
            f = 3.647 f_c x_3^{2/3} x_2^{1.4}    if x_2 > 25.4
   x_1 = D_m, x_2 = D_b, x_3 = Z, x_4 = f_i, x_5 = f_o
Constraints:
   g_1 = φ_0 / (2 sin^{−1}(x_2 / x_1)) − x_3 + 1 ≥ 0
   g_2 = 2.0 x_2 − x_6 (D − d) ≥ 0
   g_3 = x_7 (D − d) − 2.0 x_2 ≥ 0
   g_4 = x_10 B_w − x_2 ≤ 0
   g_5 = x_1 − 0.5 (D + d) ≥ 0
   g_6 = (0.5 + x_9)(D + d) − x_1 ≥ 0
   g_7 = 0.5 (D − x_1 − x_2) − x_8 x_2 ≥ 0
   g_8 = x_4 − 0.515 ≥ 0
   g_9 = x_5 − 0.515 ≥ 0
   x_6 = K_Dmin, x_7 = K_Dmax, x_8 = ε, x_9 = e, x_10 = ψ      (8)
Auxiliary functions and constant values of the rolling element bearing problem:
   γ = D_b cos(α) / D_m
   f_c = 37.91 · [ 1 + ( 1.04 ((1 − γ)/(1 + γ))^{1.72} ( f_i (2 f_o − 1) / ( f_o (2 f_i − 1) ) )^{0.41} )^{10/3} ]^{−0.3} · [ γ^{0.3} (1 − γ)^{1.39} / (1 + γ)^{1/3} ] · [ 2 f_i / (2 f_i − 1) ]^{0.41}
   T = D − d − 2.0 x_2
   φ_0 = 2π − 2 cos^{−1}( [ ((D − d)/2 − 3(T/4))^2 + (D/2 − T/4 − x_2)^2 − (d/2 + T/4)^2 ] / [ 2 ((D − d)/2 − 3(T/4)) (D/2 − T/4 − x_2) ] )
   D = 160;  d = 90;  B_w = 30;  α = 0
   90.0 ≤ x_1 ≤ 150.0;  10.5 ≤ x_2 ≤ 31.5;  4 ≤ x_3 ≤ 50
   0.515 ≤ x_4, x_5 ≤ 0.6;  0.4 ≤ x_6 ≤ 0.5;  0.6 ≤ x_7 ≤ 0.7
   0.3 ≤ x_8 ≤ 0.4;  0.02 ≤ x_9 ≤ 1.0;  0.6 ≤ x_10 ≤ 0.85

5. Numerical Experiments

All the numerical experiments were carried out on a Fujitsu PRIMERGY TX300 S8 tower server. This multicore platform is equipped with a D2949-B1 motherboard with two CPU sockets, each holding an Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10 GHz with 15 MB Intel Smart Cache. Each processor has 6 physical cores, for a total of 12 physical cores in the system. Intel Hyper-Threading Technology is enabled, with two threads per physical core; therefore, the maximum number of processes (or threads) should not exceed 24 in order to obtain the best possible computational performance. The main memory size is 32 GB of DDR3. All the developments, both sequential and parallel, were implemented in the C programming language using GCC v.4.4.7 [45]. The OpenMP API v3.1 [46] was used to develop the parallel algorithms. All the data in the tables and figures included in this section were obtained by running simulations on this platform. In addition, for the computational results to be reliable, the Sun Grid Engine queuing system was used.

5.1. Comparative Analysis ESCA vs. SCA

First, the computational costs of the SCA algorithm and the proposed ESCA algorithm are examined in Table 3. This table shows the computing time when optimizing the benchmark test reported in Section 4 with population sizes of 240, 120, and 60. The number of generations was 50,000, and the number of independent runs was 30. The results in Table 3 indicate that the proposed ESCA algorithm does not increase the computing cost compared to the SCA algorithm. On the contrary, in more than 80% of the experiments conducted, the computational cost decreases.
Having shown that the proposed method decreases the computational cost of the SCA algorithm, the optimization behavior of both methods is compared in Table 4. This table shows the number of function evaluations required to reach an error of less than 1 × 10^−3 (less than 1 × 10^−2 for the functions marked with *), with population sizes of 240, 120, and 60. Fewer function evaluations are required when the ESCA method is used instead of the SCA method. The decrease, particularly for the functions that require more evaluations, is greater than 100×, demonstrating the significant improvement over the SCA's optimization behavior.
To perform a parallel efficiency analysis of both parallel proposals, experimental tests were conducted using the same parameters as before, i.e., population sizes of 240, 120, and 60, 50,000 generations, and 30 independent runs. The parallel speed-up values for the data-sharing parallel algorithm, depending on the total population size (popInitSize) and the number of processes (NoCs), are shown in Table 5. The obtained speed-up values are close to the ideal ones for the largest population size. These values slightly decrease, in most cases, as the population size decreases. However, they degrade significantly when 12 parallel processes are used for the smallest population size and the functions with the lowest computing cost.
The parallel asynchronous algorithm’s speed-up values, shown in Table 6, remain close to the ideal values when the number of concurrent processes is increased or when the population size is decreased. Note that this behavior implies outstanding parallel scalability.
Considering the outstanding parallel performance obtained for the parallel asynchronous algorithm using the 12 available physical cores (see Table 6), it can be concluded that the parallel scalability of the asynchronous algorithm allows the number of processes to be increased efficiently. However, the results shown in Table 4 confirm that the subpopulations require a minimum size, which depends on the optimization algorithm and the problem under consideration. Algorithm 6 has been proposed to increase the number of processes without reducing the size of the subpopulations. To implement the inner level of parallelism of Algorithm 6, nested parallelism could be applied using OpenMP features. This strategy was discarded due to poor experimental results that excessively degrade parallel scalability. When nested parallelism is used, the creation of each nested parallel region involves computational overhead [47]. The poor experimental results are due to the large number of nested regions (numGenerations × NoCs) and the insufficient computational cost of each nested parallel region. Note that this computational cost depends on the considered algorithm (quasi-constant cost) and the objective function.
The two-level parallel algorithm generates a parallel region of N o C s × i n C s processes, organized into N o C s groups of i n C s processes each. In each group, only one process works outside the inner parallel region, while all the processes in the group cooperate in the processing associated with the inner level of parallelism (lines 13–29 of Algorithm 6).
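One way to organize such groups without nested parallel regions is sketched below: a single parallel region of NoCs × inCs threads is created, each thread derives its group and its position inside the group from its identifier, and the individual loop of Algorithm 6 (lines 13–29) is distributed cyclically among the inCs threads of each group. This fragment is only an illustrative assumption, not the authors' exact implementation; esca_update() refers to the earlier sketch, sharedSP and the per-group subpopBest arrays are hypothetical shared structures, and the group-level synchronization is merely indicated by comments.

   #pragma omp parallel num_threads(NoCs * inCs)
   {
       int tid   = omp_get_thread_num();
       int group = tid / inCs;          /* which subpopulation this thread works on */
       int local = tid % inCs;          /* position of the thread inside its group  */
       double *SP = sharedSP[group];    /* subpopulation stored in shared memory    */

       for (int genIt = 1; genIt <= numGenerations; genIt++) {
           /* ... group-wide search for subpopBest and synchronization point ... */
           double r1 = iniValue_r1 - genIt * iniValue_r1 / numGenerations;

           /* Inner level: the m-loop of Algorithm 6 is shared by the inCs
            * threads of the group using a cyclic distribution of individuals. */
           for (int m = local; m < subpopSize; m += inCs)
               for (int k = 0; k < numVars; k++)
                   SP[m * numVars + k] = esca_update(SP[m * numVars + k],
                                                     subpopBest[group][k], r1);
           /* ... group-level barrier before the next generation ... */
       }
   }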
As mentioned above, the parallel platform used has two processors with six physical cores each. Hyperthreading can be enabled, allowing two processes (or threads) per core to run efficiently. Thus, up to 24 concurrent processes can be run without excessively degrading the efficiency of the computing platform. When hyperthreading and fine-grained parallelism are used, as in the proposed two-level algorithm, the strategy for placing threads on the cores may be relevant. OpenMP affinity features are used to control the placement of processes on the cores. Figure 11a shows the platform's architecture, equipped with two processors with six physical cores and twelve logical cores each. An example of the placement of 5 processes when no affinity is used is shown in Figure 11b, in which the operating system decides the process placement. This thread placement is not a problem if neither hyperthreading nor fine-grained parallelism is used.
For instance, using 20 processes organized into 5 groups of 4 processes, a thread placement obtained without using affinity features is displayed in Figure 12a. To optimize parallel performance, the optimal thread placement can be forced using OpenMP affinity features, as shown in Figure 12b.
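In code, such a placement can be requested through standard OpenMP affinity controls, for instance by combining the proc_bind clause (OpenMP 4.0 and later) with the OMP_PLACES environment variable; with OpenMP 3.1, the OMP_PROC_BIND environment variable plays a similar, coarser role. The fragment below is only an illustrative configuration for the platform described in this section, not a prescription.

   /* Request 20 threads and keep consecutive threads on neighbouring places
    * (e.g., run with OMP_PLACES=cores on the 2 x 6-core platform), which
    * yields a grouping similar to the one shown in Figure 12b.             */
   #pragma omp parallel num_threads(20) proc_bind(close)
   {
       /* ... two-level ESCA processing ... */
   }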
Table 7 shows the parallel speed-up when more than 12 processes are used, i.e., using hyperthreading, for the functions with the highest computational cost. The results in Table 7 were obtained using 16 and 20 processes by varying the number of groups (NoCs) and, consequently, the number of processes per group (inCs). Important conclusions can be drawn from this table: remarkable scalability is obtained with the two-level parallel algorithm, even when logical cores (hyperthreading) are used, and the parallel performance allows the NoCs value (i.e., the number of groups) to be set according to the desired size of the subpopulations, i.e., according to the optimization performance rather than the parallel behavior. All efficiency values are above 72%, except for the Foxholes function (f_13), characterized by having only two design variables (see Table 1), which penalizes fine-grained parallelism. Although both fine-grained parallelism and hyperthreading slightly penalize parallel efficiency, a remarkable average parallel efficiency greater than 75% is obtained. The average efficiency barely decreases as the number of processes increases from 16 to 20, falling slightly from 75.6% to 74.9%, i.e., the outstanding parallel scalability is maintained.
This outstanding behavior is confirmed by the results shown in Table 8, which were obtained using all the available threads (24) with hyperthreading activated. The two-level parallel algorithm shows remarkable parallel scalability, with an average parallel efficiency of 74.4%.
Table 9 and Table 10 show the number of function evaluations required by the data-sharing parallel algorithm to obtain an error of less than 1 × 10^−3 (1 × 10^−2 for the functions marked with an asterisk), when the total population size is 240 (popInitSize = 240) and 60 (popInitSize = 60), respectively. These results show that the number of concurrent processes does not modify the optimization behavior. The heuristic nature of the proposed optimization algorithm results in different numbers of evaluations for the same function depending on the number of concurrent processes.
Table 11 and Table 12 list the number of function evaluations required by the asynchronous parallel algorithm for population sizes of 240 (popInitSize = 240) and 60 (popInitSize = 60), respectively. It is clear that, unlike for the data-sharing parallel algorithm, the convergence ratio of the asynchronous parallel algorithm depends on the number of concurrent processes used. In addition, the convergence ratio slightly worsens as the number of concurrent processes increases, but the outstanding parallel scalability offsets this behavior. Note that this behavior depends on the subpopulation sizes, which in turn depend on the population size.
As noted earlier, the parallel asynchronous algorithm allows each thread to have its own population size without sacrificing parallel performance, and thus to explore populations with different characteristics, which could improve the optimization performance. Table 13 compares the number of function evaluations (# FEs) for functions f_6, f_22, and f_27 when using homogeneous and heterogeneous subpopulation sizes, the latter improving the optimization performance. Moreover, failure to reach a good solution due to small subpopulations can be avoided by increasing the number of processes; for instance, 12 processes are used for f_6 and f_27 (see Table 12).
In summary, the proposed parallel algorithms achieve remarkable parallel performance without disturbing the optimization behavior. Figure 13 and Figure 14 show the significant improvement in the convergence speed of the proposed ESCA algorithm compared to the SCA algorithm.
The last analysis discusses the optimization behavior when solving the engineering design problems described in Section 4.2. Table 14 compares the convergence ratio of the SCA and ESCA methods when only 10,000 and 20,000 generations are processed. As can be observed from this table, ESCA outperforms the SCA algorithm in terms of convergence ratio. Similar results are obtained when optimizing the 30 benchmark functions. This behavior confirms that our proposal significantly boosts the SCA algorithm.
As for solution accuracy, the results on the benchmark functions and the challenging engineering problems are listed in Table 15. These results were acquired from 30 independent runs on each function, 10,000 iterations, and three population sizes, i.e., 60, 120, and 240. As can be observed from this table, the ESCA algorithm performs better than SCA on almost all functions. These outcomes are statistically compared in Table 16. Indeed, to measure the overall performance of the ESCA algorithm with respect to its original counterpart SCA, the non-parametric statistical tests of Friedman, Friedman aligned, and Quade are employed. The Friedman test, or Friedman rank test, is a non-parametric test developed by Milton Friedman [48] that consists of arranging the data in blocks and replacing them by their ranks, taking into account the existence of ties. Therefore, in the Friedman test, the performance of the analyzed algorithms is ranked separately for each data set. This ranking scheme only allows comparisons within each data set, since comparisons between data sets are meaningless. When the number of algorithms to be compared is small, this can be a disadvantage; in this case, inter-data-set comparisons may be desirable, and the Friedman aligned rank method [49] can be employed. The Quade test, or Quade rank test [50], is also a non-parametric test, which is robust for small data sets. Regardless of the population size, ESCA is ranked first under all tests.
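For reference, the Friedman statistic behind such rankings can be computed from the per-data-set ranks with the textbook formula χ²_F = 12 / (n·k·(k+1)) · Σ_j R_j² − 3·n·(k+1), where R_j is the sum of the ranks of algorithm j over the n data sets. The small C sketch below only illustrates this formula and assumes the ranks (with ties already averaged) are given; it is not part of the authors' tooling.

   /* Friedman chi-square statistic for k algorithms over n data sets.
    * rank[i][j] is the rank of algorithm j on data set i (ties averaged). */
   double friedman_statistic(int n, int k, double rank[n][k])
   {
       double sumsq = 0.0;
       for (int j = 0; j < k; j++) {
           double Rj = 0.0;                  /* sum of ranks of algorithm j */
           for (int i = 0; i < n; i++)
               Rj += rank[i][j];
           sumsq += Rj * Rj;
       }
       return 12.0 * sumsq / (n * k * (k + 1)) - 3.0 * n * (k + 1);
   }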

5.2. Further Comparison with Numerous State-of-the-Art Algorithms

In this section, we compare the sequential version of the ESCA algorithm with several well-known algorithms. First, the comparison algorithms are benchmarked on a set of 30 unconstrained problems. Then, we test these algorithms on three challenging engineering problems with constrained and unknown search spaces.

5.2.1. Benchmarking of the Comparison Algorithms

The ESCA algorithm is benchmarked on the 30 unconstrained functions listed in Table 1 and Table 2. The ESCA algorithm is run 30 times on each benchmark function. A comparison with the grey wolf optimizer (GWO) [51], the whale optimization algorithm (WOA) [52], and the Harris hawks optimization algorithm (HHO) [53] is provided as well. To ensure a fair comparison, individuals are replaced only if the objective function improves over the course of the iterations of each algorithm, i.e., the selection operator used in ESCA was the rank selection also used by GWO, WOA, and HHO. Table 17 compares the convergence speed in terms of the number of function evaluations (# FEs) required to obtain an error of less than 1 × 10^−3 (1 × 10^−2 for the functions marked with an asterisk), for a population size of 120. As can be observed from this table, the ESCA algorithm exhibits the lowest # FEs values for almost all functions. Accordingly, the ESCA algorithm converges early to a feasible solution for almost all benchmark functions.
The statistical data (best cost function value and the corresponding average, worst, and standard deviation) are summarized in Table 18. These results are derived from 30 independent runs on each function, a population size of 120 individuals, and 10,000 iterations. It can be seen from this table that the ESCA algorithm maintains a competitive performance in terms of solution accuracy with respect to the comparison algorithms.
Inferential statistics assess how well a sample of data supports a particular hypothesis and whether the outcomes can be generalized to other data samples. To evaluate the overall performance of the ESCA algorithm and determine the significance of the data in Table 17 (average) and Table 18, the non-parametric statistical tests known as the Friedman, Friedman aligned, and Quade tests are employed [54]. Table 19 and Table 20 statistically compare the assessed algorithms in terms of convergence speed and solution accuracy, respectively. Table 21 and Table 22 estimate the contrast between the medians of the data in Table 17 (average) and Table 18, respectively, while considering all pairwise comparisons [54]. As can be observed from Table 19, the ESCA algorithm is ranked first under all statistical tests in terms of convergence speed. Similar results are obtained in Table 21, in which the ESCA algorithm always obtains a positive difference value with respect to the comparison algorithms. That is, the ESCA algorithm performs better than the others. As for solution accuracy, the ESCA and HHO algorithms are ranked first with competitive performance, as shown in Table 20. However, according to the outcomes in Table 22, the proposed algorithm is slightly better than the HHO algorithm. Unlike the latter, the ESCA algorithm always has a positive contrast compared to the other tested algorithms.
The effectiveness of the proposed ESCA algorithm in solving high-dimensional problems is validated in Table 23. The outcomes show that the proposed algorithm exhibits promising and competitive performance compared to the state-of-the-art algorithms.

5.2.2. Optimization Outcomes for Classical Engineering Problems

The results for the pressure vessel design problem are compared in Table 24 and Table 25. The multi-strategy enhanced SCA (MSCA), which also provides numerical results, was presented in [55]. The numerical results for the improved harmony search algorithm (IHS) [56], the gravitational search algorithm (GSA) [57], DE [10], and HSA [14] were provided in [55]. Moreover, the results for PSO [5] were taken from [58]. Results for GA [9] are provided in [59,60,61] for GA_1, GA_2, and GA_3, respectively. In [62], results for the evolutionary strategy (ES) were provided, while those of the ACO algorithm were reported in [63]. The GWO [51], WOA [52], and HHO [53] algorithms are also included in the comparative study of the classical engineering problems, i.e., the pressure vessel problem, the welded beam design problem, and the rolling element bearing design problem.
The comparison for the pressure vessel problem is exhibited in Table 24 and Table 25. The former shows the variables and the optimal value of the cost function, while the latter provides the values of the constraints. The proposed ESCA algorithm and the DE algorithm achieve the best feasible results. It should be noted that the solutions provided by the MSCA and HHO methods are not feasible, since both variables d_s and d_h have been treated as continuous variables, which is not correct as they are actually discrete variables; in particular, they must be multiples of 0.0625 inches. The IHS and ACO solutions are not feasible because they do not meet the g_3 and g_1 constraints, respectively, as shown in Table 25.
The results of the welded beam design problem are reported in Table 26 and Table 27. Table 26 exhibits the optimal cost of the function and its variables for several state-of-the-art algorithms, including the GSA algorithm [57], the ray optimization (RO) algorithm [64], the IHS algorithm [56], the genetic algorithm (GA_3) [61], the GWO algorithm, the WOA algorithm, and the HHO algorithm. The outcomes reveal that the ESCA algorithm outperforms the state-of-the-art algorithms in solving the welded beam design problem. The constraints of the leading solutions are listed in Table 27. It is worth mentioning that the solution provided by the HHO algorithm is not feasible, as it does not meet the g 2 constraint.
The results for the rolling element bearing design problem are compared in Table 28. In addition to the SCA algorithm, the proposed ESCA algorithm is compared to the genetic algorithm (GA_4) [65], the TLBO algorithm [66], the mine blast algorithm (MBA) [67], the supply-demand-based optimization algorithm (SDO) [68], and the HHO algorithm. Note that, as shown in Table 29, neither TLBO, MBA, SDO, nor HHO obtains a feasible solution: TLBO violates the g 7 constraint, while MBA, SDO, and HHO violate the g 4 constraint. As shown in these tables, ESCA also achieves the best feasible result on this constrained maximization problem.
In summary, the outcomes on the assessed engineering problems show that ESCA outperforms the comparison algorithms in solving these challenging constrained problems.

6. Conclusions

This paper proposed an enhanced SCA algorithm, dubbed the ESCA algorithm, in which the diversification behavior of the SCA algorithm is reduced at the end of the optimization process. Indeed, the SCA algorithm's exploitation abilities are strengthened with a best-guided strategy that refines the current solution and leads the algorithm to converge swiftly toward the optimum. Experimental tests on benchmark functions and challenging engineering problems prove the superiority of the proposed algorithm in overall performance, i.e., solution accuracy and convergence speed, compared to a set of state-of-the-art algorithms. This superiority is confirmed through statistical tests: the proposed ESCA algorithm is ranked first according to the Friedman, Friedman aligned, and Quade tests in terms of both convergence speed and solution accuracy. Furthermore, one-level parallel ESCA algorithms that work synchronously and asynchronously were designed as well. They efficiently exploit multicore architectures by combining coarse-grained and fine-grained parallel techniques. The parallel scalability of these algorithms yields an efficient use of the physical and logical cores when hyperthreading is enabled, which increases the total number of threads that can be used efficiently when the two-level parallel algorithm is executed. The one-level parallel ESCA algorithms reduce the computing time, on average, by 87.4% and 90.8%, respectively, using 12 processing cores. Moreover, it has been shown that parallel performance can be further improved by affinity techniques that control the mapping of processes onto the cores of multicore processors. In fact, the two-level parallel algorithms provide extra reductions of the computing time of 91.4%, 93.1%, and 94.5% with 16, 20, and 24 processing cores, respectively. Considering its outstanding optimization performance and its ability to extract the maximum performance from the available computational resources, the proposed algorithm is particularly well suited to problems of high computational complexity.
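As an illustration of the two-level scheme summarized above, the following OpenMP sketch nests a fine-grained evaluation loop inside a coarse-grained team of subpopulations. It is a simplified skeleton, not the implementation evaluated in this paper: the objective function, the population and team sizes, and the omitted position-update step are placeholders.

```c
#include <omp.h>
#include <stdio.h>

#define DIM    30     /* problem dimension                                 */
#define POP    40     /* individuals per subpopulation (illustrative)      */
#define GROUPS 4      /* coarse-grained level: independent subpopulations  */
#define ITERS  1000

/* Placeholder objective: the sphere function (f1 in Table 1). */
static double sphere(const double *x)
{
    double s = 0.0;
    for (int i = 0; i < DIM; i++)
        s += x[i] * x[i];
    return s;
}

/* Tiny LCG returning a pseudo-random number in [0,1). */
static double next01(unsigned *s)
{
    *s = *s * 1664525u + 1013904223u;
    return (*s >> 8) / 16777216.0;
}

int main(void)
{
    double best = 1e300;

    omp_set_nested(1);                 /* enable the second parallel level */

    /* Coarse grain: one thread (team leader) per subpopulation. */
    #pragma omp parallel num_threads(GROUPS) reduction(min:best)
    {
        unsigned seed = 1234u + 17u * (unsigned)omp_get_thread_num();
        double pop[POP][DIM], fit[POP];

        for (int p = 0; p < POP; p++)               /* random initialization */
            for (int d = 0; d < DIM; d++)
                pop[p][d] = 200.0 * next01(&seed) - 100.0;

        for (int it = 0; it < ITERS; it++) {
            /* Fine grain: evaluate the individuals of this subpopulation. */
            #pragma omp parallel for num_threads(2)
            for (int p = 0; p < POP; p++)
                fit[p] = sphere(pop[p]);

            /* ... the SCA/ESCA position-update step would go here ...     */

            for (int p = 0; p < POP; p++)
                if (fit[p] < best)
                    best = fit[p];
        }
    }
    printf("best value found: %g\n", best);
    return 0;
}
```

Thread placement can then be constrained at launch time, for example with environment variables such as OMP_PROC_BIND/OMP_PLACES (or GOMP_CPU_AFFINITY in GCC), so that each outer team and its inner threads are mapped onto cores of the same processor.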

Author Contributions

H.M. and A.B. conceived the optimization algorithms; H.M., J.-L.S.-R., A.J.-M., D.G.-S. and J.G.-G. conceived the parallel algorithms; H.M., J.G.-G. and D.G.-S. codified the parallel algorithms; A.B., H.M., J.G.-G., J.-L.S.-R. and A.J.-M. performed numerical experiments; H.M., A.B. and J.G.-G. analyzed the data; H.M. wrote the original draft. A.B., J.-L.S.-R. and A.J.-M. reviewed and edited the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Spanish Ministry of Science, Innovation and Universities and the Research State Agency under Grant RTI2018-098156-B-C54, cofinanced by FEDER funds, and by the Ministry of Science and Innovation and the Research State Agency under Grant PID2020-120213RB-I00, cofinanced by FEDER funds.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Dorigo, M.; Di Caro, G. The Ant Colony Optimization Meta-heuristic. In New Ideas in Optimization; McGraw-Hill Ltd.: Maidenhead, UK, 1999; pp. 11–32.
2. Schwefel, H.P. Evolutionsstrategie Und Numerische Optimierung. Ph.D. Thesis, Department of Process Engineering, Technical University of Berlin, Berlin, Germany, 1975.
3. Bäck, T.; Rudolph, G.; Schwefel, H.P. Evolutionary Programming and Evolution Strategies: Similarities and Differences. In Proceedings of the Second Annual Conference on Evolutionary Programming, La Jolla, CA, USA, 25–26 February 1993; pp. 11–22.
4. Koza, J.R. Genetic Programming: A Paradigm for Genetically Breeding Populations of Computer Programs to Solve Problems; Technical Report; Stanford University: Stanford, CA, USA, 1990.
5. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization. Swarm Intell. 2007, 1, 33–57.
6. Eusuff, M.; Lansey, K.; Pasha, F. Shuffled frog-leaping algorithm: A memetic meta-heuristic for discrete optimization. Eng. Optim. 2006, 38, 129–154.
7. Karaboga, D.; Basturk, B. On the Performance of Artificial Bee Colony (ABC) Algorithm. Appl. Soft Comput. 2008, 8, 687–697.
8. Ingber, L. Simulated annealing: Practice versus theory. Math. Comput. Model. 1993, 18, 29–57.
9. Holland, J.H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; MIT Press: Cambridge, MA, USA, 1992.
10. Price, K.V. An Introduction to Differential Evolution. In New Ideas in Optimization; McGraw-Hill Ltd.: Maidenhead, UK, 1999; pp. 79–108.
11. Storn, R. On the usage of differential evolution for function optimization. In Proceedings of the North American Fuzzy Information Processing, Berkeley, CA, USA, 19–22 June 1996; pp. 519–523.
12. Pant, M.; Zaheer, H.; Garcia-Hernandez, L.; Abraham, A. Differential evolution: A review of more than two decades of research. Eng. Appl. Artif. Intell. 2020, 90, 103479.
13. Farmer, J.D.; Packard, N.H.; Perelson, A.S. The Immune System, Adaptation, and Machine Learning. Phys. D 1986, 2, 187–204.
14. Kim, J.H. Harmony Search Algorithm: A Unique Music-inspired Algorithm. Procedia Eng. 2016, 154, 1401–1405.
15. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133.
16. Kumar-Majhi, S. An Efficient Feed Foreword Network Model with Sine Cosine Algorithm for Breast Cancer Classification. Int. J. Syst. Dyn. Appl. (IJSDA) 2018, 7, 202397.
17. Rajesh, K.; Dash, S. Load frequency control of autonomous power system using adaptive fuzzy based PID controller optimized on improved sine cosine algorithm. J. Ambient. Intell. Humaniz. Comput. 2019, 10, 2361–2373.
18. Khezri, R.; Oshnoei, A.; Tarafdar Hagh, M.; Muyeen, S. Coordination of Heat Pumps, Electric Vehicles and AGC for Efficient LFC in a Smart Hybrid Power System via SCA-Based Optimized FOPID Controllers. Energies 2018, 11, 420.
19. Ramanaiah, M.L.; Reddy, M.D. Sine cosine algorithm for loss reduction in distribution system with unified power quality conditioner. i-Manag. J. Power Syst. Eng. 2017, 5, 10.
20. Dhundhara, S.; Verma, Y.P. Capacitive energy storage with optimized controller for frequency regulation in realistic multisource deregulated power system. Energy 2018, 147, 1108–1128.
21. Singh, V.P. Sine cosine algorithm based reduction of higher order continuous systems. In Proceedings of the 2017 International Conference on Intelligent Sustainable Systems (ICISS), Palladam, India, 7–8 December 2017; pp. 649–653.
22. Das, S.; Bhattacharya, A.; Chakraborty, A.K. Solution of short-term hydrothermal scheduling using sine cosine algorithm. Soft Comput. 2018, 22, 6409–6427.
23. Kumar, V.; Kumar, D. Handbook of Research on Machine Learning Innovations and Trends; IGI Global: Hershey, PA, USA, 2017; pp. 715–726.
24. Yıldız, B.S.; Yıldız, A.R. Comparison of grey wolf, whale, water cycle, ant lion and sine-cosine algorithms for the optimization of a vehicle engine connecting rod. Mater. Test. 2018, 60, 311–315.
25. Elfattah, M.A.; Abuelenin, S.; Hassanien, A.E.; Pan, J.S. Handwritten Arabic Manuscript Image Binarization Using Sine Cosine Optimization Algorithm. In Proceedings of the International Conference on Genetic and Evolutionary Computing, Fuzhou, Fujian, China, 7–9 November 2016; Volume 536, pp. 273–280.
26. Mirjalili, S.M.; Mirjalili, S.Z.; Saremi, S.; Mirjalili, S. Studies in Computational Intelligence; Springer: Berlin, Germany, 2020; Volume 811, pp. 201–217.
27. Ewees, A.A.; Abd Elaziz, M.; Al-Qaness, M.A.A.; Khalil, H.A.; Kim, S. Improved Artificial Bee Colony Using Sine-Cosine Algorithm for Multi-Level Thresholding Image Segmentation. IEEE Access 2020, 8, 26304–26315.
28. Gupta, S.; Deep, K.; Mirjalili, S.; Kim, J.H. A modified sine cosine algorithm with novel transition parameter and mutation operator for global optimization. Expert Syst. Appl. 2020, 154, 113395.
29. Gupta, S.; Deep, K. A novel hybrid sine cosine algorithm for global optimization and its application to train multilayer perceptrons. Appl. Intell. 2020, 50, 993–1026.
30. Rizk-Allah, R.M. An improved sine–cosine algorithm based on orthogonal parallel information for global optimization. Soft Comput. 2019, 23, 7135–7161.
31. Belazzoug, M.; Touahria, M.; Nouioua, F.; Brahimi, M. An improved sine cosine algorithm to select features for text categorization. J. King Saud-Univ.-Comput. Inf. Sci. 2020, 32, 454–464.
32. Gupta, S.; Deep, K. Improved sine cosine algorithm with crossover scheme for global optimization. Knowl.-Based Syst. 2019, 165, 374–406.
33. Qu, C.; Zeng, Z.; Dai, J.; Yi, Z.; He, W. A modified sine-cosine algorithm based on neighborhood search and greedy levy mutation. Comput. Intell. Neurosci. 2018, 2018, 4231647.
34. Rosli, S.J.; Rahim, H.A.; Abdul Rani, K.N.; Ngadiran, R.; Ahmad, R.B.; Yahaya, N.Z.; Abdulmalek, M.; Jusoh, M.; Yasin, M.N.M.; Sabapathy, T.; et al. A Hybrid Modified Method of the Sine Cosine Algorithm Using Latin Hypercube Sampling with the Cuckoo Search Algorithm for Optimization Problems. Electronics 2020, 9, 1786.
35. Abd Elaziz, M.; Oliva, D.; Xiong, S. An improved opposition-based sine cosine algorithm for global optimization. Expert Syst. Appl. 2017, 90, 484–500.
36. Sindhu, R.; Ngadiran, R.; Yacob, Y.M.; Zahri, N.A.H.; Hariharan, M. Sine–cosine algorithm for feature selection with elitism strategy and new updating mechanism. Neural Comput. Appl. 2017, 28, 2947–2958.
37. Long, W.; Wu, T.; Liang, X.; Xu, S. Solving high-dimensional global optimization problems using an improved sine cosine algorithm. Expert Syst. Appl. 2019, 123, 108–126.
38. Issa, M.; Hassanien, A.E.; Oliva, D.; Helmi, A.; Ziedan, I.; Alzohairy, A. ASCA-PSO: Adaptive sine cosine optimization algorithm integrated with particle swarm for pairwise local sequence alignment. Expert Syst. Appl. 2018, 99, 56–70.
39. Chegini, S.N.; Bagheri, A.; Najafi, F. PSOSCALF: A new hybrid PSO based on Sine Cosine Algorithm and Levy flight for solving optimization problems. Appl. Soft Comput. 2018, 73, 697–726.
40. Nenavath, H.; Jatoth, R.K.; Das, S. A synergy of the sine-cosine algorithm and particle swarm optimizer for improved global optimization and object tracking. Swarm Evol. Comput. 2018, 43, 1–30.
41. Singh, N.; Singh, S. A novel hybrid GWO-SCA approach for optimization problems. Eng. Sci. Technol. Int. J. 2017, 20, 1586–1601.
42. Nenavath, H.; Jatoth, R.K. Hybridizing sine cosine algorithm with differential evolution for global optimization and object tracking. Appl. Soft Comput. 2018, 62, 1019–1043.
43. Migallón, H.; Jimeno-Morenilla, A.; Sánchez-Romero, J.L.; Rico, H.; Rao, R.V. Multipopulation-based multi-level parallel enhanced Jaya algorithms. J. Supercomput. 2019, 75, 1697–1716.
44. García-Monzó, A.; Migallón, H.; Jimeno-Morenilla, A.; Sánchez-Romero, J.L.; Rico, H.; Rao, R.V. Efficient Subpopulation Based Parallel TLBO Optimization Algorithms. Electronics 2018, 8, 19.
45. Free Software Foundation, Inc. GCC, the GNU Compiler Collection. Available online: https://www.gnu.org/software/gcc/index.html (accessed on 15 October 2021).
46. OpenMP Architecture Review Board. OpenMP Application Program Interface, Version 3.1. 2011. Available online: http://www.openmp.org (accessed on 15 October 2021).
47. Dimakopoulos, V.V.; Hadjidoukas, P.E.; Philos, G.C. A Microbenchmark Study of OpenMP Overheads under Nested Parallelism. In OpenMP in a New Era of Parallelism; Eigenmann, R., de Supinski, B.R., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1–12.
48. Friedman, M. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J. Am. Stat. Assoc. 1937, 32, 675–701.
49. Hodges, J.; Lehmann, E.L. Rank methods for combination of independent experiments in analysis of variance. In Selected Works of EL Lehmann; Springer: Berlin/Heidelberg, Germany, 2012; pp. 403–418.
50. Quade, D. On analysis of variance for the k-sample problem. Ann. Math. Stat. 1966, 37, 1747–1758.
51. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
52. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
53. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
54. García, S.; Fernández, A.; Luengo, J.; Herrera, F. Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Inf. Sci. 2010, 180, 2044–2064.
55. Chen, H.; Wang, M.; Zhao, X. A multi-strategy enhanced sine cosine algorithm for global optimization and constrained practical engineering problems. Appl. Math. Comput. 2020, 369, 124872.
56. Mahdavi, M.; Fesanghary, M.; Damangir, E. An improved harmony search algorithm for solving optimization problems. Appl. Math. Comput. 2007, 188, 1567–1579.
57. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248.
58. He, Q.; Wang, L. An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng. Appl. Artif. Intell. 2007, 20, 89–99.
59. Coello, C.A.C. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art. Comput. Methods Appl. Mech. Eng. 2002, 191, 1245–1287.
60. Coello, C.A.C.; Montes, E.M. Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Adv. Eng. Inform. 2002, 16, 193–203.
61. Deb, K. GeneAS: A robust optimal design technique for mechanical component design. In Evolutionary Algorithms in Engineering Applications; Springer: Berlin/Heidelberg, Germany, 1997; pp. 497–514.
62. Mezura-Montes, E.; Coello, C.A.C. An empirical study about the usefulness of evolution strategies to solve constrained optimization problems. Int. J. Gen. Syst. 2008, 37, 443–473.
63. Kaveh, A.; Talatahari, S. An improved ant colony optimization for constrained engineering design problems. Eng. Comput. 2010, 27, 155–182.
64. Kaveh, A.; Khayatazad, M. A new meta-heuristic method: Ray Optimization. Comput. Struct. 2012, 112–113, 283–294.
65. Rajeswara Rao, B.; Tiwari, R. Optimum design of rolling element bearings using genetic algorithms. Mech. Mach. Theory 2007, 42, 233–250.
66. Rao, R.V.; Savsani, V.; Vakharia, D. Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315.
67. Sadollah, A.; Bahreininejad, A.; Eskandar, H.; Hamdi, M. Mine blast algorithm: A new population based algorithm for solving constrained engineering optimization problems. Appl. Soft Comput. 2013, 13, 2592–2612.
68. Zhao, W.; Wang, L.; Zhang, Z. Supply-Demand-Based Optimization: A Novel Economics-Inspired Algorithm for Global Optimization. IEEE Access 2019, 7, 73182–73206.
Figure 1. Searching spaces of SCA depending on r 1 .
Figure 2. Search maps of search agents when solving functions f 1 , f 3 , and f 4 ; by the ESCA algorithm (first row); and the SCA algorithm (second row).
Figure 3. Search maps of search agents when solving functions f 9 , f 10 , and f 12 ; by the ESCA algorithm (first row); and the SCA algorithm (second row).
Figure 4. Search maps of search agents when solving functions f 21 , f 24 , and f 25 ; by the ESCA algorithm (first row); and the SCA algorithm (second row).
Figure 5. Obtained solutions in the search space of functions f 1 , f 3 , and f 4 ; by the ESCA algorithm (first row); and the SCA algorithm (second row).
Figure 6. Obtained solutions in the search space of functions f 9 , f 10 , and f 12 ; by the ESCA algorithm (first row); and the SCA algorithm (second row).
Figure 7. Obtained solutions in the search space of functions f 21 , f 24 , and f 25 ; by the ESCA algorithm (first row); and the SCA algorithm (second row).
Figure 8. Pressure vessel design problem.
Figure 9. Welded beam design problem.
Figure 10. Rolling element bearing design problem.
Figure 11. Thread placement when no affinity is used. (a) Platform’s architecture. (b) Example of thread placement without control.
Figure 12. Optimal thread placement. (a) Example of thread group placement without control. (b) Example of thread group placement with affinity control.
Figure 13. Convergence curves for the benchmark functions f 1 f 15 in row-major order. Optimization algorithms are SCA [∘], ESCA [∗].
Figure 14. Convergence curves for the benchmark functions f 16 f 30 in row-major order. Optimization algorithms are SCA [∘], ESCA [∗].
Table 1. Benchmark functions: dimensions and domain.
Id.NameDim. (V)Domain (Min, Max)
f 1 Sphere30 100 , 100
f 2 SumSquares30 10 , 10
f 3 Beale2 4.5 , 4.5
f 4 Easom2 100 , 100
f 5 Matyas2 10 , 10
f 6 Colville4 10 , 10
f 7 Trid 66 V 2 , V 2
f 8 Trid 1010 V 2 , V 2
f 9 Zakharov10 5 , 10
f 10 Schwefel_1.230 100 , 100
f 11 Rosenbrock30 30 , 30
f 12 Dixon-Price5 10 , 10
f 13 Foxholes2 2 16 , 2 16
f 14 Branin2 x 1 : 5 , 10
x 2 : 0 , 15
f 15 Bohachevsky_12 100 , 100
f 16 Booth2 10 , 10
f 17 Michalewicz_22 0 , π
f 18 Michalewicz_55 0 , π
f 19 Bohachevsky_22 100 , 100
f 20 Bohachevsky_32 100 , 100
f 21 GoldStein-Price2 2 , 2
f 22 Perm4 V , V
f 23 Hartman_33 0 , 1
f 24 Ackley30 32 , 32
f 25 Penalized_230 50 , 50
f 26 Langermann_22 0 , 10
f 27 Langermann_55 0 , 10
f 28 Langermann_1010 0 , 10
f 29 Fletcher-Powell_55 x i , α i : π , π
a i j , b i j : 100 , 100
f 30 Fletcher-Powell_1010 x i , α i : π , π
a i j , b i j : 100 , 100
Table 2. Benchmark functions: Definitions.
Id.Function
f 1 f = i = 1 V x i 2  
f 2 f = i = 1 V i x i 2  
f 3 f = ( 1.5 x 1 + x 1 x 2 ) 2 + ( 2.25 x 1 + x 1 x 2 2 ) 2
+ ( 2.625 x 1 + x 1 x 2 3 ) 2
f 4 f = cos ( x 1 ) cos ( x 2 ) exp ( x 1 π ) 2 ( x 2 π ) 2  
f 5 f = 0.26 ( x 1 2 + x 2 2 ) 0.48 x 1 x 2  
f 6 f = 100 ( x 1 2 x 2 ) 2 + ( x 1 1 ) 2 + ( x 3 1 ) 2 + 90 ( x 3 2 x 4 ) 2
+ 10.1 ( x 2 1 ) 2 + ( x 4 1 ) 2 + 19.8 ( x 2 1 ) ( x 4 1 )
f 7 f = i = 1 V ( x i 1 ) 2 i = 2 V x i x i 1
f 8
f 9 f = i = 1 V x i 2 + i = 1 V 0.5 i x i 2 + i = 1 V 0.5 i x i 4  
f 10 f = i = 1 V j = 1 i x j 2  
f 11 f = i = 1 V 1 100 ( x i + 1 x i 2 ) 2 + ( x i 1 ) 2  
f 12 f = ( x 1 1 ) 2 + i = 2 V i 2 x i 2 x i 1 2  
f 13 f = 1 500 + j = 1 25 1 j + i = 1 2 ( x i a i j ) 6 1  
f 14 f = x 2 5.1 4 π 2 x 1 2 + 5 π x 1 6 2 + 10 1 1 8 π cos x 1 + 10  
f 15 f = x 1 2 + 2 x 2 2 0.3 cos ( 3 π x 1 ) 0.4 cos ( 4 π x 2 ) + 0.7  
f 16 f = ( x 1 2 x 2 7 ) 2 + ( 2 x 1 + x 2 5 ) 2  
f 17 f = i = 1 V sin x i sin i x i 2 π 20
f 18
f 19 f = x 1 2 + 2 x 2 2 0.3 cos ( 3 π x 1 ) cos ( 4 π x 2 ) + 0.3  
f 20 f = x 1 2 + 2 x 2 2 0.3 cos ( 3 π x 1 + 4 π x 2 ) + 0.3  
f 21 f = 1 + ( x 1 + x 2 + 1 ) 2 ( 19 14 x 1 + 3 x 1 2 14 x 2 + 6 x 1 x 2 + 3 x 2 2 )
30 + ( 2 x 1 3 x 2 ) 2 ( 18 32 x 1 + 12 x 1 2 + 48 x 2 36 x 1 x 2 + 27 x 2 2 )
f 22 f = j = 1 V i = 1 i ( i j + β ) x i i j 1 2  
f 23 f = i = 1 4 c i exp j = 1 3 a i j ( x j p i j ) 2  
f 24 f = 20 exp 0.2 1 V i = 1 V x i 2 exp 1 V i = 1 V cos ( 2 π x i ) + 20 + e  
f 25 f = 0.1 { sin 2 ( 3 π x 1 ) + i = 1 V 1 ( x i 1 ) 2 1 + sin 2 ( 3 π x i + 1 ) + ( x V 1 ) 2 1 + sin 2 ( 2 π x V ) }
+ i = 1 V u ( x i , 5 , 100 , 4 ) ,
u ( x i , a , k , m ) = k ( x i a ) m , x i > a ; 0 , a x i a ; k ( x i a ) m , x i < a .
f 26 f = i = 1 5 c i exp 1 π j = 1 V ( x j a i j ) 2 cos π j = 1 V ( x j a i j ) 2
f 27
f 28
f 29 f = i = 1 V A i B i 2 ; A i = j = 1 V a i j sin α j + b i j cos α j , B i = j = 1 V a i j sin x j + b i j cos x j
f 30
Table 3. Computational times (s.) for sequential SCA and ESCA algorithms.
Population Size
60120240
SCAESCASCAESCASCAESCA
f 1 349.3308.8683.5711.51432.51325.7
f 2 388.5312.9739.0668.41474.91405.0
f 3 25.223.450.446.8100.693.6
f 4 27.826.455.652.8111.2105.6
f 5 33.931.468.870.2131.7138.5
f 6 31.328.662.457.1124.7114.1
f 7 48.644.196.987.9193.7179.3
f 8 80.569.8160.0140.8321.4279.2
f 9 144.7132.7280.6268.0558.3530.3
f 10 374.5413.7773.6815.81562.51657.5
f 11 223.1208.3442.6416.4884.5833.1
f 12 38.436.477.972.5154.9145.5
f 13 461.7466.7923.8933.91845.81867.0
f 14 19.718.939.537.879.175.6
f 15 17.817.133.933.970.768.0
f 16 15.514.631.229.162.058.1
f 17 72.355.6144.7111.0291.4221.9
f 18 174.7125.1309.0280.3620.4493.1
f 19 18.716.536.132.272.270.2
f 20 17.616.835.231.269.959.7
f 21 16.315.332.630.665.161.0
f 22 105.5101.8212.0205.5419.7409.7
f 23 36.336.572.073.2146.1146.3
f 24 125.6123.2251.6246.3501.5493.8
f 25 406.4321.7812.3674.41707.81321.8
f 26 56.457.1113.3113.5225.6227.2
f 27 82.082.0164.3164.1331.8328.8
f 28 130.6118.5262.2236.1523.4473.2
f 29 174.0168.7346.9339.4700.0675.1
f 30 583.4568.91165.51134.62334.12290.9
Table 4. Number of function evaluations for an error < 1 × 10 3 (* < 1 × 10 2 ).
Population Size
24012060
SCAESCASCAESCASCAESCA
f 1 3,639,14475,3841,842,86448,504971,80228,074
f 2 3,596,88073,4641,808,00443,500988,38024,888
f 3 24,000213624,888307213,8782082
f 4 306,9124152218,2203432239,1662088
f 5 1584840756564540312
f 6 9,627,2274,450,5772,654,280
f 7 *388896057246123222354
f 8 *5,031,792317,3762,684,760190,0531,565,184196,337
f 9 1,528,65616,848848,5449708490,8546420
f 10 5,048,616739,2962,623,800462,4561,400,712311,640
f 11 *3,677,16078,7201,906,38045,82832,424
f 12 6,186,2404,982,2402,624,640
f 13 571,00814,088547,3206288236,14836,126
f 14 70,3921920118,296225652,7821998
f 15 59282352296413802262762
f 16 187,5603120236,9522508131,7122400
f 17 401,6883888419,2201812236,4002910
f 18 *480480240240120120
f 19 61202448362413921896882
f 20 51602112456012962340834
f 21 26,856204028,596108015,924912
f 22 3,966,2643,528,9121,907,900
f 23 792057,0903,739,20081,345123,72027,760
f 24 2,290,46430,4081,207,95617,940668,7908304
f 25 *3,591,96046,0321,952,61633,756951,70817,022
f 26 20,400967221,744484810,1932208
f 27 6,840,5285,366,0402,930,112
f 28 480480252252120120
f 29 *1,127,83224,1681,148,60427,672840,28826,940
f 30 *9,787,4675,338,9602,939,910
Table 5. Parallel speed-up for parallel data sharing algorithm.
Population Size
24012060
NoCs
261226122612
f 1 2.05.710.42.05.711.41.85.09.7
f 2 2.05.810.91.85.510.52.04.99.3
f 3 1.94.96.91.94.64.51.93.72.5
f 4 2.05.510.91.95.510.31.65.33.8
f 5 1.95.09.01.84.97.41.64.23.7
f 6 2.05.510.91.95.410.41.95.43.6
f 7 1.33.34.51.94.75.11.94.13.4
f 8 2.05.49.92.05.39.01.95.16.8
f 9 1.95.210.21.85.310.11.95.19.2
f 10 2.05.511.01.95.410.62.05.510.4
f 11 2.05.511.02.05.510.92.05.510.9
f 12 2.05.39.11.95.17.21.94.64.0
f 13 2.05.511.12.05.511.02.05.510.8
f 14 2.05.510.72.05.48.81.95.22.4
f 15 1.95.410.22.05.76.32.05.22.2
f 16 1.95.510.41.95.35.21.95.11.8
f 17 2.05.49.52.05.28.21.94.85.8
f 18 1.95.49.31.96.08.71.74.86.0
f 19 2.05.17.11.74.33.81.53.71.9
f 20 1.74.86.41.84.53.81.64.11.9
f 21 1.95.16.41.94.63.51.93.61.7
f 22 1.95.210.21.95.39.91.95.18.8
f 23 2.05.510.62.05.410.21.95.48.9
f 24 2.05.510.41.95.49.62.05.28.5
f 25 2.05.610.42.05.510.02.05.29.1
f 26 2.05.28.41.95.07.21.94.85.6
f 27 2.05.49.82.05.18.72.05.06.9
f 28 2.05.510.91.95.510.91.95.410.8
f 29 2.05.511.02.05.510.62.05.410.4
f 30 2.05.611.12.05.511.02.05.510.9
Table 6. Parallel speed-up for asynchronous parallel algorithm.
Population Size
24012060
NoCs
261226122612
f 1 2.05.711.62.05.811.61.95.711.2
f 2 1.95.511.41.95.611.21.95.210.5
f 3 1.95.511.01.95.511.01.95.58.2
f 4 1.95.511.02.05.511.02.05.511.0
f 5 1.95.511.02.05.511.11.85.511.0
f 6 1.95.511.02.05.511.12.05.511.0
f 7 1.33.67.32.05.511.01.95.511.0
f 8 1.95.511.02.05.511.12.05.511.1
f 9 1.95.611.12.15.310.61.95.210.7
f 10 1.95.410.92.05.611.12.05.510.9
f 11 1.95.510.92.05.511.11.95.511.1
f 12 1.95.510.72.05.511.02.05.510.9
f 13 1.95.511.02.05.511.02.05.511.1
f 14 1.95.510.92.05.511.01.95.510.9
f 15 1.95.410.91.85.210.31.85.410.9
f 16 1.95.511.01.95.510.91.95.510.8
f 17 1.95.511.12.05.510.72.05.510.9
f 18 1.95.611.32.05.611.12.05.611.3
f 19 1.85.19.91.95.410.51.95.09.9
f 20 1.95.310.42.05.511.01.95.510.9
f 21 1.95.511.02.05.510.21.95.510.9
f 22 1.95.210.41.95.210.51.95.310.3
f 23 1.95.410.72.05.510.92.05.510.8
f 24 1.95.511.02.05.611.22.05.511.0
f 25 1.95.410.92.05.511.11.95.611.0
f 26 1.95.511.02.05.511.01.95.511.0
f 27 1.95.510.92.05.411.02.05.511.0
f 28 1.95.511.02.05.511.11.95.611.1
f 29 1.95.511.02.05.511.02.05.511.0
f 30 1.95.510.92.05.611.02.05.611.1
Table 7. Parallel speed-up for the two-level parallel algorithm using groups of processes. Population size = 240.
16 Processes20 Processes
NoCs ; inCs 8;24;42;810;25;44;52;10
f 1 12.512.512.115.915.015.215.2
f 2 12.512.111.614.414.815.014.7
f 10 11.911.811.814.714.714.714.7
f 13 10.110.09.712.712.412.311.7
f 30 12.212.112.215.315.215.115.1
Table 8. Parallel speed-up for the two-level parallel algorithm using groups of processes. Population size = 240. Number of processes = 24.
24 Processes
NoCs ; inCs 12;26;44;62;12
f 1 19.018.318.717.4
f 2 18.017.416.918.0
f 10 17.617.617.417.4
f 30 18.218.018.117.8
Table 9. Sharing data parallel algorithm: number of function evaluations for error <1 × 10 3 (* < 1 × 10 2 ). p o p I n i t S i z e = 240 .
NoCs
12612
f 1 75,38480,65783,77676,385
f 2 73,46470,13573,03460,717
f 3 2136212021282200
f 4 4152488940053507
f 5 840842687312
f 6 9,627,2279,966,4019,430,1039,876,351
f 7 *960762722583
f 8 *317,376374,307255,284324,357
f 9 16,84816,51617,85317,829
f 10 739,296854,471780,928743,569
f 11 *78,72065,64375,12976,902
f 12 6,186,2407,359,4978,535,7935,457,901
f 13 14,0889603729411,392
f 14 1920204238312259
f 15 2352214424531722
f 16 3120334233284471
f 17 3888351732752470
f 18 *480456453373
f 19 2448225921921990
f 20 2112226220311892
f 21 204017321601974
f 22 3,966,2643,298,2335,086,2087,588,192
f 23 57,090313440693396
f 24 30,40828,98230,28130,248
f 25 *46,03256,97535,15741,468
f 26 967215,60513,57310,713
f 27 6,840,52810,618,5003,333,9408,731,516
f 28 480440462164
f 29 *24,16830,60425,40826,676
f 30 *9,787,4678,810,6618,232,26310,546,564
Table 10. Sharing data parallel algorithm: number of function evaluations for error <1 × 10 3 (* < 1 × 10 2 ). popInitSize = 60.
NoCs
12612
f 1 28,07432,62428,41928,128
f 2 24,88824,32325,03022,209
f 3 2082236118671670
f 4 2088231917622220
f 5 312314250173
f 6 2,654,2801,896,2522,914,2101,951,487
f 7 *354325430262
f 8 *196,337279,480347,191238,579
f 9 6420714368447113
f 10 311,640262,261308,376310,209
f 11 *32,42428,08128,73832,903
f 12 2,624,6402,353,7032,680,1742,202,838
f 13 36,12618,554634534,818
f 14 1998194419172689
f 15 762820753520
f 16 2400250333932781
f 17 2910127127452865
f 18 *12011410563
f 19 882762879604
f 20 834807812629
f 21 912691743543
f 22 1,907,9001,110,4901,874,5202,209,849
f 23 27,760395626333782
f 24 8304984010,1209041
f 25 *17,02221,35328,55017,609
f 26 2208262696851920
f 27 2,930,1122,842,2492,925,1932,806,709
f 28 12011311337
f 29 *26,94012,41517,10317,228
f 30 *2,939,9102,650,1492,863,3172,815,149
Table 11. Asynchronous parallel algorithm: number of function evaluations for error <1 × 10 3 (* < 1 × 10 2 ). popInitSize = 240.
NoCs
12612
f 1 80,13683,27790,62684,218
f 2 73,82472,79273,92774,209
f 3 2568257829893313
f 4 3264579554365963
f 5 816633750410
f 6 8,974,65010,097,59510,025,66211,314,284
f 7 *1032759897481
f 8 *253,92034,746561,222186,7391
f 9 17,18416,64423,19325,164
f 10 71,433685,87361,078,1271,252,559
f 11 *64,65677,58288,17510,3109
f 12 8,937,6807,699,04510,565,35711,289,390
f 13 46,87210,34717,45220,750
f 14 1992355447025277
f 15 2256227726561922
f 16 4896486968816861
f 17 3264364044085952
f 18 *480411413306
f 19 2256220424522353
f 20 2304244126611826
f 21 1896159023472041
f 22 4,256,6107,094,9327,742,4226,726,769
f 23 10,164049,88414,75030,406
f 24 31,15232,21432,03933,949
f 25 *37,75247,88346,42741,350
f 26 8592326623863399
f 27 8,689,6809,215,37910,203,56710,787,817
f 28 528456444199
f 29 *21,62429,57258,60046,029
f 30 *10,729,47010,103,63210,519,5949,802,382
Table 12. Asynchronous parallel algorithm: number of function evaluations for error <1 × 10 3 (* < 1 × 10 2 ). popInitSize = 60.
NoCs
12612
f 1 28,30825,75437,60533,407
f 2 24,68425,34626,98526,891
f 3 1644116318763188
f 4 2778223942944816
f 5 378307228308
f 6 2,742,8302,918,1382,936,681
f 7 *402415376156
f 8 *216,387314,711495,643602,778
f 9 68226827987311,370
f 10 314,562316,765415,446413,730
f 11 *28,33826,14329,28035,931
f 12 2,444,8352,056,4472,802,8782,993,886
f 13 52,84835,56530,40463,877
f 14 1680262579725786
f 15 732926851583
f 16 2736642948148359
f 17 2790302355889495
f 18 *12010110546
f 19 630846877563
f 20 792736878732
f 21 8589149301437
f 22 1,089,0001,850,2382,757,559
f 23 198011,29120,03021,210
f 24 89409557867611,540
f 25 *17,65221,63117,44717,945
f 26 3342115123773869
f 27 2,918,5802,865,6792,970,8692,779,579
f 28 12011610530
f 29 *23,58622,92142,96159,068
f 30 *2,782,1302,885,9352,631,5352,801,275
Table 13. Asynchronous parallel algorithm: number of function evaluations for error <1 × 10 3 , 6 processes and homogeneous and heterogeneous subpopulation sizes. popInitSize = 240.
Thread Id.
012345
Subpopulation Sizes# FEs
f 6 40404040404010,025,662
8060403020108,365,248
f 22 4040404040407,742,422
8060403020106,341,866
f 27 40404040404010,203,567
8060403020109,941,450
Table 14. Convergence ratio for ESCA and SCA algorithms with different population sizes.
Population Size
60120240
Pressure Vessel Problem
ESCA-100006060.20706060.74206059.9340
SCA-100006079.06106091.43406068.5540
ESCA-200006060.09506059.82906059.8000
SCA-200006065.74606066.95306069.2260
Welded beam problem
ESCA-100001.7288441.7266251.726300
SCA-100001.7481431.7493941.747236
ESCA-200001.7265851.7267041.725514
SCA-200001.7514801.7472071.738482
Rolling element bearing problem
ESCA-1000081,706.1781,798.3881,832.05
SCA-1000080,673.5881,333.6580,318.50
ESCA-2000081,803.8781,774.6081,836.77
SCA-2000080,224.4980,335.6081,086.44
Table 15. Average values for unconstrained and constrained problems obtained by ESCA and SCA.
Population Size
60120240
SCAESCASCAESCASCAESCA
f 1 2.757179 × 10 64 0.0000001.712496 × 10 79 0.0000004.457065 × 10 94 0.000000
f 2 8.616185 × 10 65 0.0000001.046044 × 10 80 0.0000001.112510 × 10 92 0.000000
f 3 6.811942 × 10 6 5.491076 × 10 9 4.783583 × 10 6 3.114413 × 10 9 1.401307 × 10 6 7.104723 × 10 10
f 4 −9.999516 × 10 1 −1.000000−9.999736 × 10 1 −1.000000−9.999892 × 10 1 −1.000000
f 5 0.0000000.0000000.0000000.0000000.0000000.000000
f 6 9.900274 × 10 2 8.185273 × 10 3 9.765063 × 10 2 2.929811 × 10 3 6.404594 × 10 2 2.334942 × 10 3
f 7 −4.845251 × 10 1 −4.990339 × 10 1 −4.877389 × 10 1 −4.995156 × 10 1 −4.896195 × 10 1 −4.996917 × 10 1
f 8 −1.262160 × 10 2 −1.539732 × 10 2 −1.339290 × 10 2 −1.787089 × 10 2 −1.501428 × 10 2 −1.862011 × 10 2
f 9 8.721680 × 10 202 0.0000003.156447 × 10 257 0.0000002.251127 × 10 315 0.000000
f 10 8.175285 × 10 1 0.0000001.361425 × 10 3 0.0000002.200964 × 10 8 0.000000
f 11 2.701419 × 10 1 2.643757 × 10 1 2.699064 × 10 1 2.614943 × 10 1 2.663097 × 10 1 2.585579 × 10 1
f 12 3.584155 × 10 1 5.114134 × 10 1 3.100522 × 10 1 4.890716 × 10 1 2.815470 × 10 1 4.889729 × 10 1
f 13 1.0641411.1964149.980039 × 10 1 1.0641419.980038 × 10 1 9.980038 × 10 1
f 14 3.979373 × 10 1 3.978874 × 10 1 3.979186 × 10 1 3.978874 × 10 1 3.979079 × 10 1 3.978874 × 10 1
f 15 0.0000000.0000000.0000000.0000000.0000000.000000
f 16 2.880073 × 10 5 3.813791 × 10 9 1.142770 × 10 5 7.944414 × 10 10 6.238591 × 10 6 1.858303 × 10 10
f 17 −1.774460−1.801303−1.801248−1.801303−1.801272−1.801303
f 18 −3.187932−3.700737−3.375650−4.044260−3.610325−4.071782
f 19 0.0000000.0000000.0000000.0000000.0000000.000000
f 20 0.0000000.0000000.0000000.0000000.0000000.000000
f 21 3.0000003.0000003.0000003.0000003.0000003.000000
f 22 3.214731 × 10 2 6.673788 × 10 3 1.718025 × 10 2 3.599778 × 10 3 1.316666 × 10 2 2.510527 × 10 3
f 23 −3.855633−3.858840−3.855658−3.859628−3.857749−3.860941
f 24 4.588922 × 10 15 3.996803 × 10 15 4.233650 × 10 15 3.878379 × 10 15 4.115227 × 10 15 3.996803 × 10 15
f 25 1.8889451.5858671.7877611.5153881.6983891.389630
f 26 −1.069455−1.080938−1.080930−1.080938−1.080936−1.080938
f 27 −5.685987 × 10 1 −6.957021 × 10 1 −6.135416 × 10 1 −8.465688 × 10 1 −7.018316 × 10 1 −8.571367 × 10 1
f 28 −3.945496 × 10 2 −1.893884 × 10 1 −8.203238 × 10 2 −2.315995 × 10 1 −1.025653 × 10 1 −2.631384 × 10 1
f 29 3.258800 × 10 1 3.2242371.912760 × 10 1 1.3642371.720010 × 10 1 7.855821 × 10 1
f 30 3.745441 × 10 1 2.5817682.071528 × 10 1 1.3596731.765572 × 10 1 9.444320 × 10 1
Vessel6.213857 × 10 3 6.097895 × 10 3 6.176765 × 10 3 6.067191 × 10 3 6.150466 × 10 3 6.062122 × 10 3
Beam1.7925321.7338331.7831721.7316251.7702351.729274
Bearing7.303758 × 10 4 8.116530 × 10 4 7.449770 × 10 4 8.147987 × 10 4 7.689757 × 10 4 8.162418 × 10 4
Table 16. Comparison of solution accuracy for ESCA and SCA algorithms. The average ranking results by Friedman, Friedman aligned, and Quade tests.
             Population size 60                Population size 120               Population size 240
Algorithm    Friedman   F. aligned   Quade     Friedman   F. aligned   Quade     Friedman   F. aligned   Quade
ESCA         1.1970     22.7121      1.1738    1.2273     23.3485      1.1934    1.2273     22.8333      1.1783
SCA          1.8030     44.2879      1.8262    1.7727     43.6515      1.8066    1.7727     44.1667      1.8217
Table 17. Number of function evaluations for error <1 × 10 3 (* <1 × 10 2 ).
ESCAGWOHHOWOA
f 1 14,502692026357567
f 2 12,399626120245877
f 3 10511461535829
f 4 1582635230202123
f 5 282400307346
f 6 1,121,0281,019,7461,404,2981,173,481
f 7 *307281214268
f 8 *17,545394613291165
f 9 754335012219149,835
f 10 77,36222,15866251,058,019
f 11 *10,727473210544077
f 12 853,3991,088,98311,3117280
f 13 204,227563,26246,08422,772
f 14 2043771840792882
f 15 856101212861701
f 16 1330275851916688
f 17 114228,24030731279
f 18 799,4831,198,7081,385,5611,015,650
f 19 880104612273460
f 20 866102916068337
f 21 1214196115971634
f 22 1,058,0231,142,8251,481,119228
f 23 141596,59420,37583,281
f 24 15,3228510492811,575
f 25 *753722756741708
f 26 10,03111,503421,650321,426
f 27 822,1311,128,604583,872927,182
f 28 962,612968,6371,171,682942,952
f 29 *695511,138131,84633,285
f 30 *765455,073158,83159,033
Table 18. Statistical data for 30 runs with a population of 120 and 10,000 iterations for f 1 to f 30 .
ESCAGWOHHOWOA
f 1 Best0.0000000.0000000.0000000.000000
Avg.0.0000000.0000000.0000000.000000
Worst0.0000000.0000000.0000000.000000
SD0.0000000.0000000.0000000.000000
f 2 Best0.0000000.0000000.0000000.000000
Avg.0.0000000.0000000.0000000.000000
Worst0.0000000.0000000.0000000.000000
SD0.0000000.0000000.0000000.000000
f 3 Best3.262152 × 10 18 5.547644 × 10 13 0.0000001.203238 × 10 19
Avg.8.895248 × 10 11 1.170037 × 10 10 0.0000006.586553 × 10 16
Worst6.298503 × 10 10 3.953307 × 10 10 0.0000001.233480 × 10 14
SD1.412547 × 10 10 9.315382 × 10 11 0.0000002.205548 × 10 15
f 4 Best−1.000000−1.000000−1.000000−1.000000
Avg.−1.000000−1.000000−1.000000−1.000000
Worst−1.000000−1.000000−1.000000−1.000000
SD6.943355 × 10 13 4.337546 × 10 10 8.599751 × 10 17 9.634141 × 10 13
f 5 Best0.0000000.0000000.0000000.000000
Avg.0.0000000.0000000.0000000.000000
Worst0.0000000.0000000.0000000.000000
SD0.0000000.0000000.0000000.000000
f 6 Best1.807347 × 10 6 4.686073 × 10 8 4.023087 × 10 5 8.462644 × 10 4
Avg.6.913877 × 10 4 4.435400 × 10 2 3.457026 × 10 3 1.179321 × 10 2
Worst2.241546 × 10 3 1.3306056.863861 × 10 3 2.110537 × 10 2
SD6.032768 × 10 4 2.388509 × 10 1 1.911670 × 10 3 5.070219 × 10 3
f 7 Best−5.000000 × 10 1 −5.000000 × 10 1 −5.000000 × 10 1 −5.000000 × 10 1
Avg.−4.999999 × 10 1 −5.000000 × 10 1 −5.000000 × 10 1 −5.000000 × 10 1
Worst−4.999997 × 10 1 −5.000000 × 10 1 −5.000000 × 10 1 −5.000000 × 10 1
SD7.486059 × 10 6 9.662948 × 10 8 5.492594 × 10 11 2.403252 × 10 10
f 8 Best−2.099980 × 10 2 −2.100000 × 10 2 −2.100000 × 10 2 −2.100000 × 10 2
Avg.−2.099872 × 10 2 −2.063305 × 10 2 −2.100000 × 10 2 −2.100000 × 10 2
Worst−2.099745 × 10 2 −1.549028 × 10 2 −2.100000 × 10 2 −2.100000 × 10 2
SD7.246266 × 10 3 1.372988 × 10 1 5.265073 × 10 8 2.197536 × 10 7
f 9 Best0.0000000.0000000.0000005.909506 × 10 178
Avg.0.0000000.0000000.0000004.294324 × 10 82
Worst0.0000000.0000000.0000006.977677 × 10 81
SD0.0000000.0000000.0000001.580556 × 10 81
f 10 Best0.0000002.470328 × 10 323 0.0000003.725891 × 10 8
Avg.0.0000007.905050 × 10 323 0.0000001.000874 × 10 2
Worst0.0000001.729230 × 10 322 0.0000002.032146 × 10 1
SD0.0000000.0000000.0000003.734209 × 10 2
f 11 Best2.481895 × 10 1 2.522460 × 10 1 2.489752 × 10 1 2.486321 × 10 1
Avg.4.935104 × 10 3 2.685818 × 10 1 4.932600 × 10 3 2.612374 × 10 4
Worst1.003584 × 10 4 2.889938 × 10 1 1.002894 × 10 4 9.002408 × 10 4
SD4.931226 × 10 3 7.683004 × 10 1 4.928556 × 10 3 3.000263 × 10 4
f 12 Best1.019230 × 10 8 4.395919 × 10 9 4.827285 × 10 17 6.442491 × 10 13
Avg.3.333334 × 10 1 4.000000 × 10 1 2.551869 × 10 12 3.430491 × 10 10
Worst6.666667 × 10 1 6.666667 × 10 1 2.321049 × 10 11 2.552220 × 10 9
SD3.333332 × 10 1 3.265986 × 10 1 5.095444 × 10 12 7.470906 × 10 10
f 13 Best9.980038 × 10 1 9.980038 × 10 1 9.980038 × 10 1 9.980038 × 10 1
Avg.1.5880571.9239189.980038 × 10 1 9.980038 × 10 1
Worst1.076318 × 10 1 2.9821059.980038 × 10 1 9.980038 × 10 1
SD1.8317619.898436 × 10 1 4.309420 × 10 16 6.214605 × 10 16
f 14 Best3.978874 × 10 1 3.978874 × 10 1 3.978874 × 10 1 3.978874 × 10 1
Avg.3.978874 × 10 1 3.978878 × 10 1 3.978874 × 10 1 3.978874 × 10 1
Worst3.978874 × 10 1 3.978987 × 10 1 3.978874 × 10 1 3.978874 × 10 1
SD2.664066 × 10 10 2.044411 × 10 6 3.707297 × 10 15 7.625589 × 10 12
f 15 Best0.0000000.0000000.0000000.000000
Avg.0.0000000.0000000.0000000.000000
Worst0.0000000.0000000.0000000.000000
SD0.0000000.0000000.0000000.000000
f 16 Best7.032691 × 10 16 4.176314 × 10 12 1.053336 × 10 17 1.518097 × 10 10
Avg.8.501715 × 10 11 2.991300 × 10 10 9.229996 × 10 16 1.061443 × 10 9
Worst6.188203 × 10 10 1.038909 × 10 9 1.010500 × 10 14 4.095840 × 10 9
SD1.224357 × 10 10 2.805821 × 10 10 2.016184 × 10 15 7.755945 × 10 10
f 17 Best−1.801303−1.801303−1.801303−1.801303
Avg.−1.801303−1.801303−1.801303−1.801303
Worst−1.801303−1.801303−1.801303−1.801303
SD3.984603 × 10 12 3.073081 × 10 9 1.314259 × 10 15 1.115984 × 10 12
f 18 Best−4.687657−4.687658−4.687658−4.687658
Avg.−4.687651−4.567539−4.599323−4.359473
Worst−4.687640−3.749195−4.332021−3.573593
SD3.945663 × 10 6 1.662246 × 10 1 7.870435 × 10 2 3.986633 × 10 1
f 19 Best0.0000000.0000000.0000000.000000
Avg.0.0000000.0000000.0000000.000000
Worst0.0000000.0000000.0000000.000000
SD0.0000000.0000000.0000000.000000
f 20 Best0.0000000.0000000.0000000.000000
Avg.0.0000000.0000000.0000000.000000
Worst0.0000000.0000000.0000000.000000
SD0.0000000.0000000.0000000.000000
f 21 Best3.0000003.0000003.0000003.000000
Avg.3.0000003.0000003.0000003.000000
Worst3.0000003.0000003.0000003.000000
SD3.827852 × 10 13 5.764236 × 10 9 1.924979 × 10 14 9.407358 × 10 11
f 22 Best2.100529 × 10 5 6.233180 × 10 7 2.762363 × 10 4 2.471215 × 10 3
Avg.1.123810 × 10 3 1.300996 × 10 1 6.401639 × 10 3 6.126825 × 10 2
Worst2.974472 × 10 3 1.0359303.782117 × 10 2 3.768746 × 10 1
SD1.050413 × 10 3 3.277276 × 10 1 9.226831 × 10 3 7.362338 × 10 2
f 23 Best−3.862780−3.862780−3.862780−3.862780
Avg.−3.862780−3.862255−3.862780−3.862254
Worst−3.862780−3.854902−3.862780−3.854902
SD1.061189 × 10 10 1.965115 × 10 3 5.382464 × 10 15 1.965074 × 10 3
f 24 Best3.996803 × 10 15 3.996803 × 10 15 4.440892 × 10 16 4.440892 × 10 16
Avg.3.996803 × 10 15 7.312669 × 10 15 4.440892 × 10 16 2.575717 × 10 15
Worst3.996803 × 10 15 7.549517 × 10 15 4.440892 × 10 16 7.549517 × 10 15
SD0.0000008.862025 × 10 16 0.0000001.967404 × 10 15
f 25 Best1.099003 × 10 3 4.167573 × 10 8 2.110681 × 10 7 1.097794 × 10 7
Avg.9.811139 × 10 2 9.347600 × 10 2 4.397173 × 10 3 3.273148 × 10 7
Worst3.014981 × 10 1 3.999622 × 10 1 1.098999 × 10 2 1.083257 × 10 6
SD1.046370 × 10 1 9.982078 × 10 2 5.381619 × 10 3 2.209943 × 10 7
f 26 Best−1.080938−1.080938−1.080938−1.080938
Avg.−1.080938−1.080938−1.075192−1.075192
Worst−1.080938−1.080938−1.056311−1.056311
SD1.216749 × 10 10 4.717320 × 10 10 1.041639 × 10 2 1.041639 × 10 2
f 27 Best−9.649998 × 10 1 −9.649999 × 10 1 −9.649999 × 10 1 −9.649999 × 10 1
Avg.−9.426906 × 10 1 −9.350842 × 10 1 −9.355537 × 10 1 −7.696397 × 10 1
Worst−9.079998 × 10 1 −7.367849 × 10 1 −7.035660 × 10 1 −4.828707 × 10 1
SD2.065763 × 10 2 4.201816 × 10 2 6.553091 × 10 2 1.953920 × 10 1
f 28 Best−9.649623 × 10 1 −9.649673 × 10 1 −5.170000 × 10 1 −9.079987 × 10 1
Avg.−5.700238 × 10 1 −4.854299 × 10 1 −3.504035 × 10 1 −3.186518 × 10 1
Worst−5.317959 × 10 2 −5.317959 × 10 2 −5.317959 × 10 2 −2.813614 × 10 2
SD2.867891 × 10 1 2.743351 × 10 1 1.736198 × 10 1 2.090066 × 10 1
f 29 Best2.498726 × 10 4 1.093726 × 10 5 9.178611 × 10 13 4.883815 × 10 8
Avg.5.554048 × 10 3 1.419868 × 10 1 2.459967 × 10 1 1.769584 × 10 1
Worst5.564318 × 10 2 3.4345013.684844 × 10 2 3.925457
SD9.949693 × 10 3 6.288349 × 10 1 9.190723 × 10 1 7.128277 × 10 1
f 30 Best1.696582 × 10 4 9.049588 × 10 6 5.440372 × 10 11 4.601300 × 10 8
Avg.4.315053 × 10 3 1.263696 × 10 1 3.685149 × 10 1 2.656585 × 10 1
Worst2.657023 × 10 2 3.684844 × 10 2 3.684844 × 10 2 7.966935 × 10 2
SD5.023689 × 10 3 6.609445 × 10 1 1.105443 × 10 2 1.430091 × 10 2
Table 19. Comparison of convergence speed for the assessed algorithms. The average ranking outcomes through Friedman, Friedman aligned, and Quade tests.
Algorithm    Friedman    Friedman Aligned    Quade
ESCA         2.1667      54.4667             2.2000
GWO          2.9000      65.8000             2.8387
HHO          2.3667      60.7667             2.5376
WOA          2.5667      60.9667             2.4237
Table 20. Comparison of solution accuracy for the assessed algorithms. The average ranking outcomes through Friedman, Friedman aligned, and Quade tests.
Algorithm    Friedman    Friedman Aligned    Quade
ESCA         2.2000      53.7000             2.0613
GWO          2.8333      66.2000             2.8828
HHO          2.2000      53.0000             2.2065
WOA          2.7667      69.1000             2.8495
Table 21. Comparison of convergence speed for the assessed algorithms. Contrast Estimation based on medians.
         ESCA       GWO       HHO       WOA
ESCA     0          865.5     159.6     398.9
GWO      −865.5     0         −705.9    −466.6
HHO      −159.6     705.9     0         239.3
WOA      −398.9     466.6     −239.3    0
Table 22. Comparison of solution accuracy for the assessed algorithms. Contrast Estimation based on medians.
         ESCA              GWO               HHO               WOA
ESCA     0                 8.290 × 10^−16    4.145 × 10^−16    4.145 × 10^−16
GWO      −8.290 × 10^−16   0                 −4.145 × 10^−16   −4.145 × 10^−16
HHO      −4.145 × 10^−16   4.145 × 10^−16    0                 0
WOA      −4.145 × 10^−16   4.145 × 10^−16    0                 0
Table 23. Statistical data for 30 runs with a population of 120 and 10,000 iterations for high-dimensional functions.
# N. var. ESCAGWOHHOWOA
f 1 100Best0.0000000.0000000.0000000.000000
Avg.0.0000000.0000000.0000000.000000
Worst0.0000000.0000000.0000000.000000
SD0.0000000.0000000.0000000.000000
300Best0.0000001.472678 × 10 269 0.0000000.000000
Avg.0.0000001.195492 × 10 267 0.0000000.000000
Worst0.0000009.890031 × 10 267 0.0000000.000000
SD0.0000000.0000000.0000000.000000
500Best0.0000006.492796 × 10 216 0.0000000.000000
Avg.0.0000002.937464 × 10 214 0.0000000.000000
Worst0.0000005.611286 × 10 213 0.0000000.000000
SD0.0000000.0000000.0000000.000000
f 2 100Best0.0000000.0000000.0000000.000000
Avg.0.0000000.0000000.0000000.000000
Worst0.0000000.0000000.0000000.000000
SD0.0000000.0000000.0000000.000000
300Best0.0000002.664037 × 10 269 0.0000000.000000
Avg.0.0000001.078682 × 10 267 0.0000000.000000
Worst0.0000001.117063 × 10 266 0.0000000.000000
SD0.0000000.0000000.0000000.000000
500Best0.0000004.427418 × 10 216 0.0000000.000000
Avg.0.0000003.940111 × 10 214 0.0000000.000000
Worst0.0000002.032876 × 10 213 0.0000000.000000
SD0.0000000.0000000.0000000.000000
f 10 100Best8.324341 × 10 149 1.977371 × 10 107 0.0000002.360440 × 10 2
Avg.7.974122 × 10 116 5.051906 × 10 92 0.0000007.941213 × 10 3
Worst2.391764 × 10 114 1.513262 × 10 90 0.0000003.263596 × 10 4
SD4.293318 × 10 115 2.716247 × 10 91 0.0000007.423773 × 10 3
300Best1.527654 × 10 82 9.975385 × 10 35 0.0000005.113928 × 10 5
Avg.1.566300 × 10 55 4.698595 × 10 7 0.0000002.324814 × 10 6
Worst4.212831 × 10 54 1.409572 × 10 5 0.0000003.178554 × 10 6
SD7.582484 × 10 55 2.530258 × 10 6 0.0000005.939182 × 10 5
500Best1.617417 × 10 70 4.140184 × 10 14 0.0000008.236708 × 10 6
Avg.2.137194 × 10 27 1.223006 × 10 2 0.0000001.223004 × 10 7
Worst6.411583 × 10 26 3.383713 × 10 1 0.0000001.470168 × 10 7
SD1.150914 × 10 26 6.061667 × 10 2 0.0000001.446876 × 10 6
f 11 100Best9.417182 × 10 1 9.409247 × 10 1 9.460401 × 10 1 9.267136 × 10 1
Avg.9.690864 × 10 1 9.618143 × 10 1 9.501840 × 10 1 9.309575 × 10 1
Worst9.839476 × 10 1 9.827330 × 10 1 9.538590 × 10 1 9.337289 × 10 1
SD1.2146208.735442 × 10 1 1.739057 × 10 1 1.907979 × 10 1
300Best2.958073 × 10 2 2.957236 × 10 2 2.951796 × 10 2 5.714967
Avg.2.976425 × 10 2 2.970865 × 10 2 2.957244 × 10 2 2.828332 × 10 2
Worst2.981833 × 10 2 2.978485 × 10 2 2.959295 × 10 2 2.928548 × 10 2
SD6.213829 × 10 1 7.024913 × 10 1 1.326191 × 10 1 5.145973 × 10 1
500Best4.973285 × 10 2 4.950355 × 10 2 4.935614 × 10 2 4.904825 × 10 2
Avg.4.978877 × 10 2 4.969489 × 10 2 4.939061 × 10 2 4.910578 × 10 2
Worst4.981244 × 10 2 4.976162 × 10 2 4.939489 × 10 2 4.913822 × 10 2
SD2.206418 × 10 1 6.608382 × 10 1 9.846602 × 10 2 2.614029 × 10 1
f 24 100Best3.996803 × 10 15 1.110223 × 10 14 4.440892 × 10 16 4.440892 × 10 16
Avg.3.996803 × 10 15 1.453652 × 10 14 4.440892 × 10 16 2.338870 × 10 15
Worst3.996803 × 10 15 1.820766 × 10 14 4.440892 × 10 16 7.549517 × 10 15
SD0.0000001.117208 × 10 15 0.0000001.995713 × 10 15
300Best3.996803 × 10 15 2.176037 × 10 14 4.440892 × 10 16 4.440892 × 10 16
Avg.3.996803 × 10 15 2.614205 × 10 14 4.440892 × 10 16 2.457294 × 10 15
Worst3.996803 × 10 15 2.886580 × 10 14 4.440892 × 10 16 7.549517 × 10 15
SD0.0000003.135436 × 10 15 0.0000002.186836 × 10 15
500Best3.996803 × 10 15 2.886580 × 10 14 4.440892 × 10 16 4.440892 × 10 16
Avg.4.825769 × 10 15 3.158955 × 10 14 4.440892 × 10 16 2.457294 × 10 15
Worst7.549517 × 10 15 3.597123 × 10 14 4.440892 × 10 16 7.549517 × 10 15
SD1.502629 × 10 15 1.985144 × 10 15 0.0000002.371435 × 10 15
f 25 100Best4.9550852.6246744.766665 × 10 5 3.631265 × 10 5
Avg.6.1696434.1231674.122016 × 10 3 1.910465 × 10 3
Worst7.3159615.1845712.124806 × 10 2 1.105270 × 10 2
SD5.111456 × 10 1 5.124575 × 10 1 5.953081 × 10 3 4.083615 × 10 3
300Best2.643887 × 10 1 2.282209 × 10 1 2.212745 × 10 3 3.726640 × 10 3
Avg.2.697167 × 10 1 2.376806 × 10 1 8.795785 × 10 3 8.291253 × 10 3
Worst2.757999 × 10 1 2.474629 × 10 1 1.782640 × 10 2 2.452580 × 10 2
SD3.359007 × 10 1 4.596044 × 10 1 4.937780 × 10 3 5.300295 × 10 3
500Best4.658280 × 10 1 4.243303 × 10 1 8.669046 × 10 3 2.174499 × 10 2
Avg.4.724426 × 10 1 4.391827 × 10 1 2.089506 × 10 2 3.314949 × 10 2
Worst4.807043 × 10 1 4.471744 × 10 1 2.799315 × 10 2 5.016311 × 10 2
SD3.648753 × 10 1 5.330605 × 10 1 4.269011 × 10 3 7.169631 × 10 3
Table 24. Design variables and comparison of the best solutions obtained for pressure vessel problem.
Variables
AlgorithmdsdhRLFunction Cost
ESCA0.81250.437542.0983176.63856059.7344
SCA0.81250.437542.0799177.04656066.1710
MSCA0.77930.399640.3255199.92135935.7161
IHS1.12500.625058.290243.69277197.7300
GSA1.12500.625055.988784.45428538.8359
PSO0.81250.437542.0913176.74656061.0777
GA_10.81250.434540.3239200.00006288.7445
GA_20.81250.437542.0974176.65416059.9463
GA_30.93750.500048.3290112.67906410.3811
ES0.81250.437542.0981176.64056059.7456
DE0.81250.437542.0984176.63776059.7340
ACO0.81250.437542.1036176.57276059.0888
GWO0.81250.437542.0892176.75876061.0135
HHO0.81760.407342.0917176.71966000.4626
WOA0.81250.437542.0983176.63906059.7410
Table 25. Constraints of the best solutions obtained for the pressure vessel problem.
Constraints
Algorithm g 1 g 2 g 3 g 4
ESCA−2.81 × 10 6 −3.59 × 10 2 −5.57 × 10 1 −6.34 × 10 1
SCA−3.59 × 10 4 −3.61 × 10 2 −9.97 × 10 2 −6.30 × 10 1
MSCA−9.75 × 10 4 −1.49 × 10 2 −1.26 × 10 1 −4.01 × 10 1
IHS−1.05 × 10 7 −6.89 × 10 2 6.57 × 10 2 −1.96 × 10 2
GSA−4.44 × 10 2 −9.09 × 10 2 −2.71 × 10 5 −1.56 × 10 2
PSO−1.39 × 10 4 −3.59 × 10 2 −1.16 × 10 2 −6.33 × 10 1
GA_1−3.42 × 10 2 −4.98 × 10 2 −3.04 × 10 2 −4.00 × 10 1
GA_2−2.02 × 10 5 −3.59 × 10 2 −2.49 × 10 1 −6.33 × 10 1
GA_3−4.75 × 10 3 −3.89 × 10 2 −3.65 × 10 3 −1.27 × 10 2
ES−6.92 × 10 6 −3.59 × 10 2 2.90−6.34 × 10 1
DE−6.68 × 10 7 −3.59 × 10 2 −3.71−6.34 × 10 1
ACO9.99 × 10 5 −3.58 × 10 2 −1.22−6.34 × 10 1
GWO−1.79 × 10 4 −3.60 × 10 2 −4.06 × 10 1 −6.32 × 10 1
HHO−5.21 × 10 3 −5.74 × 10 3 −6.57 × 10 6 −6.33 × 10 1
WOA−3.39 × 10 6 −3.59 × 10 2 −1.25−6.34 × 10 1
Table 26. Welded beam problem. Function cost and variables.
Variables
AlgorithmhltbFunction Cost
ESCA0.2057273.4705709.0366250.2057301.724862
SCA0.2056613.4717319.0378170.2057421.725213
GSA0.1821293.85697910.0000000.2023761.879952
RO0.2036873.5284679.0042330.2072411.735344
IHS0.2036873.5284679.0042330.2072411.735344
GA_30.2489006.1730008.1789000.2533002.433100
GWO0.2056763.4783779.036810.2057781.726240
HHO0.2040393.5310619.0274630.2061471.731990
WOA0.2053963.4842939.0374260.2062761.730499
Table 27. Welded beam problem. Constraints.
Algorithm | g1 | g2 | g3 | g4 | g5 | g6 | g7
ESCA | −7.80 × 10^−2 | −5.98 × 10^−2 | −3.00 × 10^−6 | −3.43 | −8.07 × 10^−2 | −2.36 × 10^−1 | −3.20 × 10^−2
SCA | −0.699753 | −9.721939 | −0.000081 | −3.432575 | −0.080661 | −0.235547 | −1.602377
GSA | −5.35 × 10^2 | −5.10 × 10^3 | −2.02 × 10^−2 | −3.26 | −5.71 × 10^−2 | −2.39 × 10^−1 | −1.33 × 10^4
RO | −2.24 | −4.13 | −3.55 × 10^−3 | −3.42 | −7.87 × 10^−2 | −2.35 × 10^−1 | −1.24 × 10^4
IHS | −2.24 | −4.13 | −3.55 × 10^−3 | −3.42 | −7.87 × 10^−2 | −2.35 × 10^−1 | −1.24 × 10^4
GA_3 | −5.76 × 10^3 | −2.56 × 10^2 | −4.40 × 10^−3 | −2.98 | −1.24 × 10^−1 | −2.34 × 10^−1 | −2.39 × 10^4
GWO | −2.12 × 10^1 | −8.29 | −1.02 × 10^−4 | −3.43 | −8.07 × 10^−2 | −2.36 × 10^−1 | −4.31
HHO | −6.21 × 10^1 | 5.72 × 10^−2 | −2.11 × 10^−3 | −3.43 | −7.90 × 10^−2 | −2.36 × 10^−1 | −3.26 × 10^1
WOA | −2.15 × 10^1 | −8.48 × 10^1 | −8.80 × 10^−4 | −3.43 | −8.04 × 10^−2 | −2.36 × 10^−1 | −4.83 × 10^1
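The welded beam entries can likewise be cross-checked with the commonly used formulation of the problem and its usual constants (P = 6000 lb, L = 14 in, E = 30 × 10^6 psi, G = 12 × 10^6 psi, τ_max = 13,600 psi, σ_max = 30,000 psi, δ_max = 0.25 in). The sketch below assumes that standard formulation and is not the authors' implementation; nearly active constraints such as g1 and g2 are sensitive to the six-decimal rounding of the variables in Table 26.

```python
import math

# Assumed standard constants of the welded beam design problem
P, Lb, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13_600.0, 30_000.0, 0.25

def welded_beam(h, l, t, b):
    """Fabrication cost and constraints g1..g7 (g_i <= 0 means feasible)."""
    cost = 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

    tau_p = P / (math.sqrt(2.0) * h * l)                        # primary shear stress
    M = P * (Lb + l / 2.0)                                      # bending moment at the weld
    R = math.sqrt(l**2 / 4.0 + ((h + t) / 2.0) ** 2)
    J = 2.0 * (math.sqrt(2.0) * h * l * (l**2 / 12.0 + ((h + t) / 2.0) ** 2))
    tau_pp = M * R / J                                          # secondary (torsional) shear stress
    tau = math.sqrt(tau_p**2 + 2.0 * tau_p * tau_pp * l / (2.0 * R) + tau_pp**2)

    sigma = 6.0 * P * Lb / (b * t**2)                           # bending stress
    delta = 4.0 * P * Lb**3 / (E * t**3 * b)                    # end deflection
    Pc = (4.013 * E * math.sqrt(t**2 * b**6 / 36.0) / Lb**2     # buckling load
          * (1.0 - t / (2.0 * Lb) * math.sqrt(E / (4.0 * G))))

    g = (tau - TAU_MAX,                                         # g1: shear stress limit
         sigma - SIGMA_MAX,                                     # g2: bending stress limit
         h - b,                                                 # g3: weld thickness <= bar thickness
         0.10471 * h**2 + 0.04811 * t * b * (14.0 + l) - 5.0,   # g4: side constraint on cost terms
         0.125 - h,                                             # g5: minimum weld thickness
         delta - DELTA_MAX,                                     # g6: deflection limit
         P - Pc)                                                # g7: buckling load limit
    return cost, g

# ESCA solution taken from Table 26
cost, g = welded_beam(0.205727, 3.470570, 9.036625, 0.205730)
print(f"cost = {cost:.6f}")   # about 1.72486
print(["{:.2e}".format(gi) for gi in g])
```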
Table 28. Design variables and comparison of the best solutions obtained for the rolling element bearing design problem.
Design Variables | SCA | GA_4 | TLBO | MBA | SDO | HHO | ESCA
D_m | 125.719015 | 125.717100 | 125.719100 | 125.715300 | 125.700000 | 125.000000 | 125.718960
D_b | 21.425557 | 21.423000 | 21.425590 | 21.423300 | 21.424905 | 21.000000 | 21.425563
Z | 11.000000 | 11.000000 | 11.000000 | 11.000000 | 11.000000 | 11.090000 | 11.000000
f_i | 0.515000 | 0.515000 | 0.515000 | 0.515000 | 0.515002 | 0.515000 | 0.515000
f_o | 0.515000 | 0.515000 | 0.515000 | 0.515000 | 0.515930 | 0.515000 | 0.515000
K_Dmin | 0.490213 | 0.415900 | 0.424266 | 0.488805 | 0.487755 | 0.400000 | 0.465124
K_Dmax | 0.672451 | 0.651000 | 0.633948 | 0.627829 | 0.629992 | 0.600000 | 0.653542
ϵ | 0.300000 | 0.300043 | 0.300000 | 0.300149 | 0.300039 | 0.300000 | 0.300000
e | 0.070763 | 0.022300 | 0.068858 | 0.097305 | 0.053510 | 0.050474 | 0.020149
ψ | 0.760058 | 0.751000 | 0.799498 | 0.646095 | 0.665982 | 0.600000 | 0.736634
Function cost | 81,859.508 | 81,841.511 | 81,859.738 | 81,843.686 | 81,575.185 | 83,011.883 | 81,859.552
Table 29. Constraints of the best solutions obtained for the rolling element bearing design problem.
Constraints | SCA | GA_4 | TLBO | MBA | SDO | HHO | ESCA
g1 | 0.000009 | 0.000822 | 0.000004 | 0.000564 | −0.001272 | 0.013477 | 0.000003
g2 | 8.536204 | 13.733000 | 13.152560 | 8.630250 | 8.706960 | 14.000000 | 10.292446
g3 | 4.220456 | 2.724000 | 1.525180 | 1.101430 | 1.249630 | 0.000000 | 2.896814
g4 | 1.376183 | 1.107000 | 2.559350 | −2.040450 | −1.445445 | −3.000000 | 0.673457
g5 | 0.719015 | 0.717100 | 0.719100 | 0.715300 | 0.700000 | 0.000000 | 0.718960
g6 | 16.971735 | 4.857900 | 16.495400 | 23.610950 | 12.677500 | 12.618500 | 4.318290
g7 | 0.000047 | 0.002129 | −0.000022 | 0.000518 | 0.009240 | 0.700000 | 0.000070
g8 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000002 | 0.000000 | 0.000000
g9 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000930 | 0.000000 | 0.000000