Article

EJS: Multi-Strategy Enhanced Jellyfish Search Algorithm for Engineering Applications

1 Department of Applied Mathematics, Xi’an University of Technology, Xi’an 710054, China
2 School of Mechanical and Precision Instrument Engineering, Xi’an University of Technology, Xi’an 710048, China
3 Department of Computer and Information Science, Linköping University, 581 83 Linköping, Sweden
4 Faculty of Science, Fayoum University, Faiyum 63514, Egypt
5 Department of Mathematics, University of Sargodha, Sargodha 40100, Pakistan
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(4), 851; https://doi.org/10.3390/math11040851
Submission received: 12 December 2022 / Revised: 16 January 2023 / Accepted: 29 January 2023 / Published: 7 February 2023
(This article belongs to the Special Issue Nature Inspired Computing and Optimisation)

Abstract:
The jellyfish search (JS) algorithm mimics the foraging behavior of jellyfish in the ocean. It is a recently developed metaheuristic algorithm for solving complex, real-world optimization problems. The JS algorithm has strong global exploration capability and robustness, but there is still considerable room for improvement when it is applied to complex optimization problems with high dimensions and many local optima. Therefore, in this study, an enhanced jellyfish search (EJS) algorithm is developed, and three improvements are made: (i) A sine and cosine learning factor strategy lets each jellyfish learn from both a random individual and the best individual during Type B motion within the swarm, which enhances the optimization capability and accelerates convergence. (ii) A local escape operator allows the algorithm to escape local optima, thereby enhancing the exploitation ability of the JS algorithm. (iii) An opposition-based learning and quasi-opposition learning strategy diversifies the population distribution, and the better individuals among the present and opposition solutions are selected for the next iteration, which enhances solution quality, accelerates convergence, and increases the algorithm’s precision. In addition, the performance of the developed EJS algorithm was compared with that of its partially improved variants and of several well-known, state-of-the-art methods on the CEC2019 test set as well as on six real engineering cases. The results demonstrate that the EJS algorithm can escape local optima, enhance solution quality, and increase the calculation speed.
In addition, the practical engineering applications of the EJS algorithm also verify its superiority and effectiveness in solving both constrained and unconstrained optimization problems, which suggests possible future applications for solving such optimization problems.

1. Introduction

Challenging optimization problems with highly nonlinear objectives, intricate constraints, and large numbers of decision variables are becoming increasingly common in today’s rapidly developing world. In particular, when solving an optimization problem with multiple peaks, traditional global optimization methods become less effective and easily converge to a local optimum. Moreover, traditional exact optimization methods work on a single feasible solution and need gradient information, whereas the metaheuristic algorithms considered in this paper work on a set (population) of feasible solutions, using only the fitness value of the objective function (or fitness function), without gradients or other auxiliary information; therefore, they can provide high-quality solutions (optimal or near-optimal) with reasonable computing resources. Traditional exact methods may fail to obtain accurate solutions for such problems, whereas metaheuristic algorithms pursue satisfactory approximate solutions and can provide high-quality solutions to challenging practical problems [1].
Metaheuristic algorithms [2] have the following advantages: (i) Their structure and implementation are simple; satisfactory solutions can be found by modifying the structure and parameters of the method. (ii) Gradient information is not required, since the output is obtained directly from the input data of a given optimization problem. (iii) Owing to their randomness, they explore the whole search region, which effectively prevents the algorithm from plunging into a local optimum. (iv) They can be applied to various types of optimization problems that are non-differentiable, nonlinear, and have complex multiple local solutions.
There are two main components of metaheuristic algorithms: exploration and exploitation [3]. The goal of the exploration phase is to search for promising areas; without this ability, the algorithm may converge prematurely and become trapped at a local peak. Searching within the promising areas that have been found is called exploitation; without this ability, the algorithm may not converge at all. Balancing exploration and exploitation is a constant goal of researchers, and is also the major criterion in performance testing of a metaheuristic algorithm. Owing to the randomness of metaheuristic algorithms, this balance remains a problem worth exploring.
As the name implies, evolutionary algorithms simulate natural evolution; the genetic algorithm (GA) [4] is the most popular and widely used example. It evolves individuals by simulating the principle of survival of the fittest in nature, and can obtain high-quality solutions while avoiding local optima. Since then, many other evolutionary algorithms have emerged, including differential evolution (DE) [5].
Similarly, other methods are inspired by the laws of physics in the universe; the most classic example is the simulated annealing (SA) algorithm [6]. Since then, the gravitational search algorithm (GSA) [7], the Big Bang–Big Crunch (BB-BC) algorithm [8], the multi-verse optimization (MVO) algorithm [9], an improved version of the sooty tern optimization algorithm (ST-AL) [10], and the Archimedes optimization algorithm (AOA) [11], among others, have been proposed one after another.
Swarm-based methods imitate the collective behavior of species. Two prominent examples are the particle swarm optimization (PSO) algorithm [12] and the ant colony optimization (ACO) algorithm [13]. Additional examples include the whale optimization algorithm (WOA) [14,15], grey wolf optimization (GWO) algorithm [16], ant lion optimization (ALO) algorithm [17], grasshopper optimization algorithm (GOA) [18], Harris hawks optimization (HHO) algorithm [19], barnacles mating optimization (BMO) algorithm [20], seagull optimization algorithm (SOA) [21], and jellyfish search (JS) algorithm [22,23]. At the same time, improved algorithms have also been studied, one after another, such as the enhanced chimp optimization algorithm (SOCSChOA) [24], the enhanced manta ray foraging optimization (WMQIMRFO) algorithm [25], the integrated variant of MRFO with the triangular mutation operator and orthogonal learning strategy (MRTM) algorithm [26], the enhanced hybrid arithmetic optimization algorithm (CSOAOA) [27], the improved whale optimization algorithm (IWOA) [28], the improved salp swarm optimization algorithm [29], the boosting chameleon swarm algorithm [30], and the hybrid firefly algorithm–particle swarm optimization (FFAPSO) algorithm [31].
The last category comprises algorithms developed from specific social behaviors of human groups. Teaching and learning-based optimization (TLBO) [32], harmony search (HS) [33], social learning optimization (SLO) [34], social group optimization (SGO) [35], and social evolution and learning optimization (SELO) [36] are some famous examples.
The jellyfish search (JS) algorithm is a novel swarm-based method, put forward by Chou et al. in 2021, which imitates the food-searching behavior of jellyfish in the ocean. Good global search capability, strong robustness, and few parameters are among the merits of the JS algorithm, and therefore, we have conducted further and more in-depth research on it. In the JS algorithm, jellyfish have two ways of moving: (1) following the ocean current; and (2) moving within the group. Jellyfish switch between these modes according to a time control mechanism, which enhances the optimization performance of the JS algorithm. In view of this superiority, many practical engineering problems have been solved and studied using this method. Gouda et al. [37] applied the JS method to identify unknown parameters of a PEM fuel cell. Youssef et al. [38] used the JS method for parameter estimation of a single-phase transformer. The JS method was also able to deal with the optimal voltage coordination problem in automated distribution systems [39]. Subsequently, multi-objective JS methods [40] combined with quasi-reflected learning have been studied. At present, some scholars have further improved the jellyfish search algorithm [41,42,43,44].
Although the global exploration capability and robustness of the JS algorithm are strong, the JS algorithm still has significant room for improvement when solving complex optimization problems with high dimensions and multiple local optima. In our research, we found that the JS algorithm deviated from the theoretical optimal value on some benchmark test functions, owing to defects such as low calculation precision and easily getting stuck at a local optimum. Therefore, we propose an enhanced jellyfish search (EJS) algorithm by adding sine and cosine learning factors, a local escape operator, and a learning strategy. Based on the original JS algorithm, the main contributions include the following four points:
(a) The introduction of sine and cosine learning factors enables jellyfish to learn from the position of the optimal individual during Type B motion within the jellyfish group, which improves the optimization capability and further accelerates the convergence rate.
(b) The addition of the local escape operator enables the algorithm to escape local optima, which increases the exploitation capability of the JS algorithm.
(c) The opposition-based learning and quasi-opposition learning strategy yields a more diverse distribution of candidate individuals, thereby enhancing solution quality, accelerating convergence, and improving the algorithm’s precision.
(d) The comparison tests of the EJS algorithm with the incomplete improved algorithms and with other famous optimization algorithms, the exploration and development balance test of the EJS algorithm on the CEC2019 benchmark test set, and six engineering practical applications verify that the EJS algorithm has strong competitiveness.
The framework of this article is arranged as follows: In Section 2, the basic law of the jellyfish search algorithm is briefly described, and its steps and pseudo-code are given. In Section 3, the enhanced jellyfish search (EJS) algorithm is developed by adding sine and cosine learning factors, a local escape operator, and a learning strategy to the original JS algorithm, and the steps, pseudo-code, flow chart, and time complexity of the proposed EJS are given. In Section 4, the EJS algorithm and several well-known previous algorithms are tested on CEC2019. Meanwhile, in order to verify the performance of the EJS algorithm, an exploration and exploitation balance test is also carried out, compared, and analyzed. Six practical engineering cases are solved using the EJS algorithm in Section 5. Finally, a brief summary and outlook are outlined in the Conclusions.

2. Overview of the Basic Jellyfish Search Algorithm

The jellyfish search (JS) algorithm imitates the pattern of jellyfish looking for food. The mathematical model can be described as follows: A large amount of nutritious food exists in the ocean current, which attracts the jellyfish into a group. Therefore, the jellyfish first follow the ocean current and then move within the group. When moving within the group, jellyfish have two types of motion, A and B, which represent passive and active behaviors, respectively. For convenience of description, the following statements refer to them as Types A and B. In order to determine the time-varying motion type, a time control mechanism acts as the parameter governing the transformation between Types A and B.

2.1. Population Initialization

Generally speaking, the solution quality of an intelligent optimization method is influenced by the quality of the initial candidate individuals, and increased diversity of the initial candidates helps to enhance optimization performance. However, the populations of general optimization algorithms are usually initialized randomly. This may leave parts of the exploration space unsearched, so the algorithm has low precision and risks running into a local optimum. Therefore, to increase the diversity of the initial candidate individuals, the JS algorithm adopts logistic maps to initialize the population, exploiting the ergodicity and randomness of chaotic mapping, which ensures that the search region is covered to a certain degree. The logistic map is described by Equation (1):
$P_{i+1} = \eta P_i (1 - P_i)$
where $\eta$ is a parameter set to 4. The logistic chaos value corresponding to the position of the i-th candidate individual is recorded as $P_i$; the initial value $P_0$ satisfies $P_0 \in (0, 1)$ and $P_0 \notin \{0, 0.25, 0.5, 0.75, 1\}$.
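As a minimal illustration (not the authors' reference implementation), the chaotic initialization of Equation (1) can be sketched in NumPy. The function name `logistic_init`, the uniform seeding of $P_0$, and the rejection of the excluded fixed points are our own choices:

```python
import numpy as np

def logistic_init(n_pop, dim, lb, ub, eta=4.0, seed=0):
    """Chaotic population initialization via the logistic map P_{i+1} = eta*P_i*(1 - P_i)."""
    rng = np.random.default_rng(seed)
    # Seed value P0 in (0, 1); re-draw if it hits an excluded point {0.25, 0.5, 0.75}
    p = rng.uniform(0.01, 0.99, size=dim)
    while np.any(np.isin(p, [0.25, 0.5, 0.75])):
        p = rng.uniform(0.01, 0.99, size=dim)
    chaos = np.empty((n_pop, dim))
    for i in range(n_pop):
        chaos[i] = p
        p = eta * p * (1.0 - p)          # logistic map iteration, stays in [0, 1]
    return lb + chaos * (ub - lb)        # scale chaos values into [lb, ub]
```

Each row of the returned array is one jellyfish position; successive rows follow the chaotic sequence, which spreads the initial population over the search region.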

2.2. Jellyfish Follow the Ocean Current

The average of the direction vectors from each candidate's position to the optimal position is called the direction of the current ($\mathrm{Direction}$). In other words, the ocean current can be expressed by Equation (2):
$\mathrm{Direction} = \frac{1}{N}\sum_{i=1}^{N} \mathrm{Direction}_i = \frac{1}{N}\sum_{i=1}^{N}\left(P^{*} - e_c P_i\right) = P^{*} - e_c \mu$
Let $df = e_c \mu$; then $\mathrm{Direction}$ can be shortened as follows:
$\mathrm{Direction} = P^{*} - df,$
where N, $e_c$, and $\mu$ are the number of candidate individuals (population size), the attraction factor, and the average position of all jellyfish, respectively. $P^{*}$ is the best position among the candidate individuals in the present population. Here, $df$ is defined as the difference between the optimal and the average location.
Assuming that the candidate individuals follow a normal distribution, the range within $\pm\beta\sigma$ around the average location may include all candidate individuals; thus, $df$ can be reduced to the following form:
$df = \beta \times r_1 \times \mu.$
Here, $e_c = \beta \times r_1$, with $r_1 = \mathrm{rand}(0, 1)$. Thus, the ocean current in Equation (3) can be described by Equation (5):
$\mathrm{Direction} = P^{*} - \beta \times r_1 \times \mu$
Now, the updated equation for each candidate individual that goes after the ocean current is represented by Equation (6):
$P_i(t+1) = P_i(t) + r_2 \times \mathrm{Direction}$
Combining Equation (5), the above Equation (6) can be transformed into:
$P_i(t+1) = P_i(t) + r_2 \times \left( P^{*} - \beta \times r_1 \times \mu \right)$
Here, $\beta > 0$, $\beta = 3$, and $r_2 = \mathrm{rand}(0, 1)$.
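The ocean-current update of Equations (5)–(7) can be sketched as follows; this is a simplified illustration, and the function name `follow_current` and the per-component draw of $r_2$ are our own assumptions:

```python
import numpy as np

def follow_current(pop, best, beta=3.0, rng=np.random.default_rng()):
    """One ocean-current step: P_i(t+1) = P_i(t) + r2*(best - beta*r1*mu)."""
    mu = pop.mean(axis=0)                 # average position of all jellyfish
    r1 = rng.random()                     # scalar r1 ~ U(0, 1)
    direction = best - beta * r1 * mu     # trend of the ocean current, Eq. (5)
    r2 = rng.random(pop.shape)            # per-component r2 ~ U(0, 1)
    return pop + r2 * direction           # Eq. (7)
```

The whole population drifts toward the best individual, with the mean-position term $\beta r_1 \mu$ keeping the step from collapsing onto $P^{*}$ too early.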

2.3. Jellyfish Move within a Swarm

In a jellyfish swarm, jellyfish exhibit two movements, Types A and B, and candidates switch between them. When the jellyfish group first forms, it has little active ability, and most candidate individuals show Type A movement. As time passes, Type B movement begins to dominate.
(1)
Type A movement:
In passive movement, the candidate individual moves around its own position, and its position can be updated using Equation (8):
$P_i(t+1) = P_i(t) + \gamma \times r_3 \times (U_b - L_b),$
where $U_b$ and $L_b$ are the upper and lower limits of the search region, respectively, and $\gamma > 0$ is the movement factor, with $\gamma = 0.1$ and $r_3 = \mathrm{rand}(0, 1)$.
(2)
Type B movement:
In active movement, a candidate individual (j) is randomly selected. When the amount of food at the selected candidate location $P_j$ exceeds that at its own location $P_i$, $P_i$ moves toward $P_j$; otherwise, $P_i$ moves in the opposite direction, away from $P_j$. Therefore, each candidate migrates in a favorable direction to search for a food source within the colony. At this time, the location update formula of each candidate is:
P i ( t + 1 ) = P i ( t ) + s t e p
where
$step = \mathrm{rand}(0, 1) \times \overrightarrow{\mathrm{Direction}}$

$\overrightarrow{\mathrm{Direction}} = \begin{cases} P_j(t) - P_i(t), & \text{if } f(P_i) \ge f(P_j) \\ P_i(t) - P_j(t), & \text{if } f(P_i) < f(P_j) \end{cases}$
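The two swarm movements of Equations (8)–(11) can be sketched together in one routine. This is an illustrative sketch under a minimization convention (lower fitness means more food); the name `move_in_swarm` and the partner re-draw loop are our own choices:

```python
import numpy as np

def move_in_swarm(pop, fitness, i, lb, ub, c_t, gamma=0.1,
                  rng=np.random.default_rng()):
    """Type A (passive) or Type B (active) move for jellyfish i, Eqs. (8)-(11)."""
    if rng.random() > 1.0 - c_t:
        # Type A: small random drift around the current position, Eq. (8)
        return pop[i] + gamma * rng.random(pop.shape[1]) * (ub - lb)
    # Type B: pick a random partner j != i
    j = rng.integers(len(pop))
    while j == i:
        j = rng.integers(len(pop))
    if fitness[i] >= fitness[j]:          # minimization: j holds more food
        direction = pop[j] - pop[i]       # move toward j
    else:
        direction = pop[i] - pop[j]       # move away from j
    return pop[i] + rng.random(pop.shape[1]) * direction   # Eqs. (9)-(10)
```

The branch on `rng.random() > 1 - c_t` mirrors the time-control rule of Section 2.4: early in the run Type A dominates, later Type B takes over.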

2.4. Time Control Mechanism

In order to capture the type of movement that changes with time, the time control theory needs to be introduced. It controls the passive and active movements of candidate individuals in the colony, and also the movements of candidates going after ocean currents.
In order to adjust the different movements of candidate individuals, a time control function C(t) and a constant C0 are considered. Figure 1 displays the change trend of the time control function. As Figure 1 shows, C(t) is a random value fluctuating between 0 and 1; therefore, C0 is set to 0.5. The candidate individuals follow the ocean current when C(t) > 0.5; otherwise, candidates move within the swarm.
$C(t) = \left| \left(1 - t/T\right) \times \left(2 \times \mathrm{rand}(0, 1) - 1\right) \right|$
where t and T are the current and maximum iterations, respectively.
Similarly, the function 1 − C(t) is applied to regulate the Type A and B movements of candidates within a swarm. Candidates show passive movement if rand(0, 1) > 1 − C(t); otherwise, candidates exhibit active movement. Since 1 − C(t) is close to 0 at the beginning of the iterations and increases toward 1 over time, rand(0, 1) > 1 − C(t) holds with high probability early on, at which time passive movement of candidates takes precedence over active movement.
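The time control rule can be sketched as a small helper; the function name `time_control` is our own, and the absolute value follows the description of C(t) fluctuating between 0 and 1:

```python
import numpy as np

def time_control(t, T, rng=np.random.default_rng()):
    """Eq. (12): C(t) = |(1 - t/T) * (2*rand - 1)|, a shrinking random envelope."""
    return abs((1.0 - t / T) * (2.0 * rng.random() - 1.0))

# Switching logic per Section 2.4:
#   C(t) > 0.5          -> follow the ocean current (exploration)
#   otherwise           -> move within the swarm:
#     rand > 1 - C(t)   -> Type A (passive), else Type B (active)
```

Because the envelope (1 − t/T) shrinks linearly, C(t) > 0.5 becomes rarer late in the run, so the algorithm gradually shifts from current-following to within-swarm motion.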

2.5. Boundary Conditions

Analogous to the earth being spherical, a jellyfish that moves beyond the bounded search region re-enters from the opposite bound. The process is shown in Equation (13):
$P_i^{d\,\prime} = (P_i^d - U_b^d) + L_b^d, \quad \text{if } P_i^d > U_b^d$

$P_i^{d\,\prime} = (P_i^d - L_b^d) + U_b^d, \quad \text{if } P_i^d < L_b^d$
where $P_i^d$ is the location of the i-th candidate in the d-th dimension, $P_i^{d\,\prime}$ is the renewed position after checking the boundary constraints, and $U_b^d$ and $L_b^d$ are the upper and lower limits of the d-th dimension of the search space, respectively.
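Equation (13) amounts to a wrap-around correction, sketched below; the name `wrap_bounds` is illustrative:

```python
import numpy as np

def wrap_bounds(p, lb, ub):
    """Eq. (13): re-enter from the opposite bound when a coordinate leaves [lb, ub]."""
    p = p.copy()
    over = p > ub
    under = p < lb
    p[over] = (p[over] - ub[over]) + lb[over]      # overflow re-enters from the lower bound
    p[under] = (p[under] - lb[under]) + ub[under]  # underflow re-enters from the upper bound
    return p
```

For example, with bounds [0, 10] a coordinate of 12 becomes 2, and −3 becomes 7; note that a single pass assumes the step did not overshoot by more than one full range.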

2.6. Steps of the Jellyfish Search Algorithm

For the JS algorithm, the movement of candidate individuals following the ocean current is called exploration, the movement of candidate individuals within the swarm is defined as exploitation, and the time control parameter governs the switch between these two phases. The JS algorithm focuses on exploration to find potential areas at the beginning of the iterations, and prefers exploitation to determine the best position within the identified area at the end. To summarize the above phases, the detailed steps of the JS algorithm are given in the following description; meanwhile, the pseudo-code of the JS algorithm is displayed in Algorithm 1:
Step 1. Define the fitness function, set N and T, generate initial positions of N jellyfish individuals in solution search space through logistic maps defined by Equation (1), and let t = 1.
Step 2. Evaluate and compare the objective value of each candidate, and save the optimal location found so far and the corresponding optimal objective value.
Step 3. Compute the time control function C(t) using Equation (12). If C(t) > 0.5, the candidate individual follows the ocean current, and the new location is updated using Equation (7); otherwise, perform Step 4.
Step 4. Jellyfish move within the swarm. If rand(0, 1) > 1 − C(t), the candidate individual carries out Type A movement, and the new position is calculated using Equation (8). Otherwise, the candidate carries out Type B movement, and the new position is updated using Equation (9).
Step 5. Check whether the updated individual position goes beyond the boundary conditions. If it is out of the search area, Equation (13) is used to return it from the opposite boundary.
Step 6. Compare the objective value of the current location before and after updating. If the fitness value of the updated position is better, replace the current location and the corresponding fitness value, and then compare the objective value of the current position with the optimal fitness value. If the objective value of the present location is better, renew the best location found so far and the corresponding optimal objective value.
Step 7. If t < T, go back to Step 3; otherwise, perform Step 8.
Step 8. Output the best location and the corresponding objective value.
Algorithm 1: JS algorithm
Begin
   Step 1: Initialization. Define the objective function, set N and T, initialize population of jellyfish using Logistic map according to Equation (1), and set t = 1 .
   Step 2: Objective calculation. Calculate quantity of food at each candidate location, and pick up the optimal location of candidate.
   Step 3: While t < T do
       for i = 1 to N do
          Compute C(t) with Equation (12)
         if  C ( t ) > 0.5 then
          Update location with Equation (7)
          else
          if rand(0, 1) > 1 − C(t) then
           Update location with Equation (8)
          else
           Update location with Equation (9)
          end if
        end if
         Check whether the position is out of bounds, correct it if necessary, and update the optimal position.
        end for
       end while
   Step 4: Return. Return the global best position and the corresponding optimal objective value.
End

3. Enhanced Jellyfish Search Algorithm

The JS algorithm has defects on some benchmark test functions, such as low calculation precision and easily getting stuck at a local optimum. In this section, we introduce the following improvements to the JS algorithm, which enhance the quality of the solution, accelerate convergence, and improve the algorithm’s precision: (i) By adding sine and cosine learning factors, jellyfish also learn from the best individual when moving in Type B motion, which enhances solution quality and accelerates convergence. (ii) The addition of the local escape operator prevents the JS algorithm from getting stuck at a local optimum, which improves its exploitation capability. (iii) An opposition-based learning and quasi-opposition learning strategy is applied to increase the diversity of the candidate population, and the better individuals among the present and new solutions are carried into the next iteration, which enhances solution quality and improves the convergence speed and accuracy of the JS algorithm.

3.1. Sine and Cosine Learning Factors

In the exploration phase of the JS algorithm, when jellyfish move in Type B motion within the jellyfish swarm, the updated position of the jellyfish is only related to another jellyfish randomly selected. In other words, the jellyfish randomly learn from the jellyfish individuals in the current population, which has certain blindness and lacks effective information exchange within the population. This process may lead the algorithm to move away from the orientation of the optimal candidate solution, and at the same time, the convergence speed could be slowed down. In order to ameliorate these deficiencies, the sine and cosine learning factors, i.e., ω1 and ω2, are introduced to make jellyfish learn from both random individuals and the best individual in the search range. This strategy improves the quality of the candidate solution during the exploration phase by seeking the best location more quickly and accelerating the convergence speed.
$\omega_1 = 2\sin\left[\left(1 - t/T\right)\pi/2\right]$
$\omega_2 = 2\cos\left[\left(1 - t/T\right)\pi/2\right]$
In Type B movement, Equation (16) describes the location update mode of jellyfish:
$P_i(t+1) = \omega_1 \left( P_i(t) + step \right) + \omega_2 \left( P^{*} - P_i(t) \right)$
where $step$ is defined in Equations (10) and (11).
The original JS algorithm adopts a random strategy to learn, which makes jellyfish randomly learn from the current individual. Poor fitness values of the learned jellyfish individuals will lead to limited convergence speed. Therefore, the sine and cosine learning factors that are introduced into the JS algorithm make the jellyfish learn from random solutions and follow the optimal solution within the search range, and therefore, quickly improves the quality of the solution and accelerates convergence speed.
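The modified Type B update of Equations (14)–(16) can be sketched as follows; the function name `type_b_with_learning` is our own, and `step` is assumed to be computed as in Equations (10) and (11):

```python
import numpy as np

def type_b_with_learning(p_i, step, best, t, T):
    """Eq. (16): blend the random Type B step with attraction to the best jellyfish."""
    w1 = 2.0 * np.sin((1.0 - t / T) * np.pi / 2.0)   # decays from 2 to 0 over the run
    w2 = 2.0 * np.cos((1.0 - t / T) * np.pi / 2.0)   # grows from 0 to 2 over the run
    return w1 * (p_i + step) + w2 * (best - p_i)
```

Early in the run w1 dominates, so the random within-swarm step drives the search; late in the run w2 dominates, pulling each jellyfish toward the best-so-far position and speeding up convergence.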

3.2. Local Escape Operator

The core of swarm intelligence algorithms is to effectively judge and weigh the exploration and exploitation capability of an algorithm. The added sine and cosine learning factors can increase the JS algorithm’s local exploration capability, but the global exploitation capability is weakened. The local escape operator (LEO) is a local search operator based on a gradient-based optimizer (GBO) [45], which aims to find new areas and to enhance the exploitation capability. Therefore, it is implemented in the phase of jellyfish following the ocean current. Notably, the local search operator can update the position of the candidate P i ( t + 1 ) . This helps the algorithm skip any local optimal solutions. Because of this, it can expand the diversity of candidate individuals to search for the global optimal solution. In other words, it makes the algorithm skip the trap of local optimization.
Using multiple solutions, namely the optimal individual $P^{*}$, two randomly generated candidate solutions $P1_i(t)$ and $P2_i(t)$, two randomly selected candidate solutions $P_{r1}(t)$ and $P_{r2}(t)$, and a new candidate position $P_k(t)$, the LEO produces the alternative solution $P_{LEO}(t)$ for the current solution $P_i(t+1)$, and the generated solution can explore the search space around the optimal solution. See Equations (17) and (18) for the specific mathematical description:
If $\mathrm{rand} < 0.5$:

$P_{LEO}(t) = P_i(t+1) + f_1 \left( u_1 P^{*} - u_2 P_k(t) \right) + f_2 \rho_1 \left( u_3 \left( P2_i(t) - P1_i(t) \right) + u_2 \left( P_{r1}(t) - P_{r2}(t) \right) \right) / 2, \quad P_i(t+1) = P_{LEO}(t)$

Else:

$P_{LEO}(t) = P^{*} + f_1 \left( u_1 P^{*} - u_2 P_k(t) \right) + f_2 \rho_1 \left( u_3 \left( P2_i(t) - P1_i(t) \right) + u_2 \left( P_{r1}(t) - P_{r2}(t) \right) \right) / 2, \quad P_i(t+1) = P_{LEO}(t)$
where $f_1$ is a number uniformly distributed in [−1, 1], and $f_2 \sim N(0, 1)$. $u_1$, $u_2$, and $u_3$ are three random numbers given by the following formulae:
$u_1 = L_1 \times 2 \times R_1 + (1 - L_1)$

$u_2 = L_1 \times R_2 + (1 - L_1)$

$u_3 = L_1 \times R_3 + (1 - L_1)$
where $L_1$ is a binary parameter ($L_1 = 0$ or 1): if $\mu_1 < 0.5$, then $L_1 = 1$; otherwise, $L_1 = 0$, where $\mu_1$ is a random number between 0 and 1. In addition, $\rho_1$ is an adaptive coefficient, and $R_1 = \mathrm{rand}(0, 1)$, $R_2 = \mathrm{rand}(0, 1)$, and $R_3 = \mathrm{rand}(0, 1)$ are three random numbers between 0 and 1. The specific definitions are as follows:
$\rho_1 = 2 \times \mathrm{rand}(0, 1) \times \alpha - \alpha$

$\alpha = \left| \chi \times \sin\left( 3\pi/2 + \sin\left( \beta \times 3\pi/2 \right) \right) \right|$

$\chi = \chi_{\min} + \left( \chi_{\max} - \chi_{\min} \right) \times \left( 1 - (t/T)^3 \right)^2$
where $\chi_{\min} = 0.2$ and $\chi_{\max} = 1.2$.
In addition, the following mathematical formulae give two randomly generated solutions P 1 i ( t ) and P 2 i ( t ) :
$P1_i(t) = L_b + R_4 \times (U_b - L_b)$

$P2_i(t) = L_b + R_5 \times (U_b - L_b)$
The meanings of the parameters are described above; $R_4 = \mathrm{rand}(1, D)$ and $R_5 = \mathrm{rand}(1, D)$. The solution $P_k(t)$ is defined in Equation (27):
$P_k(t) = L_2 \times P_p(t) + (1 - L_2) \times P_{rand}$

$P_{rand} = L_b + R_6 \times (U_b - L_b)$
where $P_p$, $p \in \{1, 2, \ldots, N\}$, is an arbitrarily selected solution, $R_6 = \mathrm{rand}(0, 1)$, and $L_2$ is a binary parameter: if $\mu_2 < 0.5$, then $L_2 = 1$; otherwise, $L_2 = 0$, with $\mu_2 = \mathrm{rand}(0, 1)$.
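Putting Equations (17)–(28) together, the LEO can be sketched as one function. This is a simplified sketch: the name `local_escape`, the parameter defaults, and the use of the document's β = 3 inside the sine term are our own reading of the formulae above:

```python
import numpy as np

def local_escape(p_new, best, pop, t, T, lb, ub,
                 chi_min=0.2, chi_max=1.2, beta=3.0,
                 rng=np.random.default_rng()):
    """Local escape operator (Eqs. 17-28): propose a jump out of a local optimum."""
    dim = len(best)
    f1 = rng.uniform(-1.0, 1.0)                    # f1 ~ U(-1, 1)
    f2 = rng.normal()                              # f2 ~ N(0, 1)
    L1 = 1.0 if rng.random() < 0.5 else 0.0        # binary switch, Eqs. (19)-(21)
    u1 = L1 * 2.0 * rng.random() + (1.0 - L1)
    u2 = L1 * rng.random() + (1.0 - L1)
    u3 = L1 * rng.random() + (1.0 - L1)
    chi = chi_min + (chi_max - chi_min) * (1.0 - (t / T) ** 3) ** 2   # Eq. (24)
    alpha = abs(chi * np.sin(3 * np.pi / 2 + np.sin(beta * 3 * np.pi / 2)))
    rho1 = 2.0 * rng.random() * alpha - alpha                          # Eq. (22)
    p1 = lb + rng.random(dim) * (ub - lb)          # P1_i, Eq. (25)
    p2 = lb + rng.random(dim) * (ub - lb)          # P2_i, Eq. (26)
    r1, r2 = rng.choice(len(pop), 2, replace=False)
    L2 = 1.0 if rng.random() < 0.5 else 0.0
    p_rand = lb + rng.random(dim) * (ub - lb)      # Eq. (28)
    p_k = L2 * pop[rng.integers(len(pop))] + (1.0 - L2) * p_rand       # Eq. (27)
    base = p_new if rng.random() < 0.5 else best   # Eq. (17) vs. Eq. (18)
    return (base + f1 * (u1 * best - u2 * p_k)
            + f2 * rho1 * (u3 * (p2 - p1) + u2 * (pop[r1] - pop[r2])) / 2.0)
```

Because the randomly generated points p1, p2, and p_rand can lie anywhere in [lb, ub], the proposed position is not confined to the neighborhood of the current solution, which is what lets the algorithm escape a local basin.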

3.3. Learning Strategy

Opposition-based learning (OBL) [46] and quasi-opposition learning (QOL) [47] are both effective methods to increase the diversity of candidate individuals, the coverage of the solution space, and the performance of the algorithm. After the exploration and exploitation phases of the algorithm, in order to further increase the solution precision of the JS algorithm, the OBL and QOL strategy is used to update jellyfish individuals according to a probability p, which enhances the solution quality in the population and thereby magnifies the optimization competence.
By applying the OBL and QOL strategy to the i-th candidate individual $P_i$ in the present population, the opposition-based solution and the quasi-opposition solution can be obtained, recorded as $\tilde{P}_i = (\tilde{P}_i^1, \tilde{P}_i^2, \ldots, \tilde{P}_i^D)$ and $\breve{P}_i = (\breve{P}_i^1, \breve{P}_i^2, \ldots, \breve{P}_i^D)$, respectively. The specific expressions of the components are shown in Equations (29) and (30).
$\tilde{P}_i^d = L_b^d + U_b^d - P_i^d$

$\breve{P}_i^d = \mathrm{rand}\left( \frac{L_b^d + U_b^d}{2},\; L_b^d + U_b^d - P_i^d \right)$
where $P_i^d$ is the location of the i-th candidate in the d-th dimension, and $U_b^d$ and $L_b^d$ are the upper and lower limits of the d-th dimension of the solution space, respectively.
To sum up, the renewed equation of the i-th candidate jellyfish Pi is defined as the following equation:
$P_i^{new} = \begin{cases} \tilde{P}_i, & \text{if } \mathrm{rand} < p \\ \breve{P}_i, & \text{if } \mathrm{rand} \ge p \end{cases}, \quad i = 1, 2, \ldots, N$
where p is selection probability, and p = 0.5.
The algorithm generates N new jellyfish individuals through Equation (31), and then calculates the objective values of the present and new candidates. According to the results, the 2N candidate individuals are sorted, and the better N jellyfish individuals are chosen to participate in the next iteration.
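The OBL/QOL update of Equations (29)–(31) followed by greedy selection can be sketched as below, assuming minimization; the function name `obl_qol_step` and the vectorized layout are our own choices:

```python
import numpy as np

def obl_qol_step(pop, fitness_fn, lb, ub, p=0.5, rng=np.random.default_rng()):
    """Eqs. (29)-(31): build opposite / quasi-opposite candidates, keep the best N."""
    opposite = lb + ub - pop                           # opposition-based solution, Eq. (29)
    centre = (lb + ub) / 2.0
    lo = np.minimum(centre, opposite)
    hi = np.maximum(centre, opposite)
    quasi = lo + rng.random(pop.shape) * (hi - lo)     # quasi-opposite solution, Eq. (30)
    # Eq. (31): choose opposite with probability p, otherwise quasi-opposite
    new = np.where(rng.random((len(pop), 1)) < p, opposite, quasi)
    merged = np.vstack([pop, new])                     # 2N candidates
    fit = np.apply_along_axis(fitness_fn, 1, merged)
    keep = np.argsort(fit)[: len(pop)]                 # greedy selection of the best N
    return merged[keep]
```

Since the best N of the merged 2N candidates are kept, the population's best fitness can never get worse after this step.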

3.4. Steps of Enhanced Jellyfish Search Algorithm

The sine and cosine learning factor strategy improves the local exploration capability and the convergence performance of the algorithm. The local escape operator strategy enhances the global exploitation capability and the ability to escape local optima. The OBL and QOL strategy increases the diversity of the candidate individuals and enhances the solution quality in the population, thus magnifying the optimization competence. By combining these three strategies with the JS algorithm, an enhanced jellyfish search algorithm (the EJS algorithm) is developed. The detailed steps of the EJS algorithm are similar to those of the JS algorithm in Section 2.6; the main difference is that the opposition-based learning and quasi-opposition learning strategy is implemented between Step 4 and Step 5. Figure 2 shows the flow chart of the EJS algorithm to facilitate understanding of the entire process. Meanwhile, the pseudo-code of the EJS algorithm is displayed in Algorithm 2.

3.5. Time Complexity of the EJS Algorithm

An algorithm is a set of operations used to process data and solve a computational problem. For the same problem, different algorithms may produce the same result, but the resources and time consumed can vary greatly. How, then, should we measure the advantages and disadvantages of different algorithms? Here, we use time complexity, i.e., an estimate of the number of program instructions executed.
The time complexity of the EJS algorithm depends on N, D, and T. In each iteration, the EJS algorithm performs the following procedure: candidates follow the ocean current, then the local escape operator is applied; candidates move within the swarm, in active motion with sine and cosine learning factors or in passive motion; new individuals are generated through the opposition-based learning and quasi-opposition learning strategy; and the better candidate individuals are selected to participate in the next generation. Combining the above analysis, the time complexity can be calculated as follows (T, N, and D are defined above):
$$\mathrm{O}(\text{EJS}) = \mathrm{O}\big(T\,(\mathrm{O}(\text{ocean current} + \text{local escaping operator}) + \mathrm{O}(\text{passive motion} + \text{active motion}) + \mathrm{O}(\text{learning strategy}))\big)$$
$$\mathrm{O}(\text{EJS}) = \mathrm{O}\big(T\,(ND + ND + ND)\big) = \mathrm{O}(TND).$$
Algorithm 2: EJS algorithm
Begin
  Step 1: Initialization. Define the fitness function, set N and T, initialize the population with the logistic map $P_{i+1} = \eta P_i (1 - P_i)$, $0 \le P_0 \le 1$, for $i = 1, \ldots, N$, and set $t = 1$.
  Step 2: Fitness calculation. Calculate the quantity of food at each jellyfish position $f_i = f(P_i)$, and record the best position $P_{best}$.
  Step 3: while $t < T$ do
      for $i = 1$ to $N$ do
          $c(t) = |(1 - t/T) \times (2 \times rand(0,1) - 1)|$
          if $c(t) \ge 0.5$ then   // follow the ocean current
              $P_i(t+1) = P_i(t) + r_2 \times (P^{*} - \beta \times r_2 \times \mu)$
              // Local escaping operator (LEO)
              if $rand < 0.5$ then
                  $P_{LEO}(t) = P_i(t+1) + f_1 (u_1 P^{*} - u_2 P_k(t)) + f_2 \rho_1 \big(u_3 (P2_i(t) - P1_i(t)) + u_2 (P_{r1}(t) - P_{r2}(t))\big)/2$
                  $P_i(t+1) = P_{LEO}(t)$
              else
                  $P_{LEO}(t) = P^{*} + f_1 (u_1 P^{*} - u_2 P_k(t)) + f_2 \rho_1 \big(u_3 (P2_i(t) - P1_i(t)) + u_2 (P_{r1}(t) - P_{r2}(t))\big)/2$
                  $P_i(t+1) = P_{LEO}(t)$
              end if
          else
              if $rand(0,1) > (1 - c(t))$ then   // Type A: passive motion
                  $P_i(t+1) = P_i(t) + \gamma \times r_3 \times (U_b - L_b)$
              else   // Type B: active motion
                  // Sine and cosine learning factors
                  $\omega_1 = 2 \sin[(1 - t/T)\,\pi/2]$
                  $\omega_2 = 2 \cos[(1 - t/T)\,\pi/2]$
                  $P_i(t+1) = \omega_1 (P_i(t) + step) + \omega_2 (P^{*} - P_i(t))$
              end if
          end if
          // Learning strategy (OBL and QOL)
          $\tilde{P}_i^d = L_b^d + U_b^d - P_i^d$
          $\hat{P}_i^d = rand\!\left(\dfrac{L_b^d + U_b^d}{2},\; L_b^d + U_b^d - P_i^d\right)$
          $P_i^{new} = \tilde{P}_i$ if $rand < p$; $\hat{P}_i$ if $rand \ge p$, $\quad i = 1, 2, \ldots, N$
          Check the boundaries: if a candidate lies outside the search region, replace it with a position inside the bounds;
      end for
      $t = t + 1$
    end while
  Step 4: Return. Return the global optimal solution $P_{best}$.
End
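To make the control flow of Algorithm 2 concrete, the following is a simplified, illustrative Python sketch of the EJS main loop. It is not the authors' implementation: the logistic-map coefficient (4.0), the step scales, the opposition probability (0.3), and the condensed one-line LEO term are all placeholder assumptions, and the full set of LEO coefficients ($f_1$, $f_2$, $u_1$–$u_3$, $\rho_1$) is abbreviated.

```python
import numpy as np

def ejs_minimize(f, lb, ub, n=30, t_max=200, seed=0):
    """Simplified sketch of the EJS main loop (Algorithm 2).

    The logistic-map coefficient (4.0), step scales, the condensed LEO
    term, and the opposition probability (0.3) are illustrative
    assumptions, not the paper's exact settings.
    """
    rng = np.random.default_rng(seed)
    d = lb.size
    # Step 1: chaotic (logistic-map) initialisation of the population.
    chaos = rng.random(d)
    pop = np.empty((n, d))
    for i in range(n):
        chaos = 4.0 * chaos * (1.0 - chaos)
        pop[i] = lb + chaos * (ub - lb)
    fit = np.array([f(x) for x in pop])
    best = pop[fit.argmin()].copy()
    best_f = fit.min()

    for t in range(1, t_max + 1):
        # Time-control function c(t), shrinking toward 0 as t grows.
        c = abs((1 - t / t_max) * (2 * rng.random() - 1))
        for i in range(n):
            if c >= 0.5:
                # Follow the ocean current, then a condensed LEO perturbation.
                mu = pop.mean(axis=0)
                cand = pop[i] + rng.random(d) * (best - 3 * rng.random() * mu)
                if rng.random() < 0.5:
                    r1, r2 = rng.integers(0, n, 2)
                    cand = cand + rng.normal(size=d) * (pop[r1] - pop[r2]) / 2
            elif rng.random() > 1 - c:
                # Type A: passive motion near the current position.
                cand = pop[i] + 0.1 * rng.random(d) * (ub - lb)
            else:
                # Type B: active motion with sine/cosine learning factors.
                w1 = 2 * np.sin((1 - t / t_max) * np.pi / 2)
                w2 = 2 * np.cos((1 - t / t_max) * np.pi / 2)
                j = rng.integers(0, n)
                step = rng.random(d) * (pop[i] - pop[j]) * np.sign(fit[j] - fit[i])
                cand = w1 * (pop[i] + step) + w2 * (best - pop[i])
            # Opposition-based learning with an assumed probability of 0.3.
            if rng.random() < 0.3:
                opp = np.clip(lb + ub - cand, lb, ub)
                if f(opp) < f(np.clip(cand, lb, ub)):
                    cand = opp
            cand = np.clip(cand, lb, ub)
            fc = f(cand)
            if fc < fit[i]:           # greedy replacement
                pop[i], fit[i] = cand, fc
                if fc < best_f:
                    best, best_f = cand.copy(), fc
    return best, best_f

# Illustrative run on a 5-D sphere function.
best_x, best_f = ejs_minimize(lambda x: float(np.sum(x ** 2)),
                              np.full(5, -10.0), np.full(5, 10.0),
                              n=20, t_max=100)
```

The greedy replacement at the end of each inner loop guarantees monotone improvement of the incumbent best, mirroring the "better individuals are selected for the next iteration" step of the EJS description.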

4. Numerical Experiment and Result Analysis Based on a Benchmark Test Set

To benchmark the performance of the proposed EJS algorithm, 29 benchmark functions from the standard CEC2017 test set and ten benchmark functions from the CEC2019 test set are used in the experiments, and the EJS algorithm is compared with other well-known optimization methods. To keep the experimental results unbiased, all tests are conducted in the same Windows 10 environment and implemented in Matlab-2018a on a machine with an Intel(R) Core(TM) i5-8625u CPU @ 1.60 GHz (1.80 GHz boost) and 8.00 GB of RAM. For all optimization algorithms, the population size is set to N = 50. In addition, all algorithms are run 20 times independently, with T = 1000 as the termination condition.
Both test sets are discussed, but due to space limitations only the CEC2019 test functions are shown in detail in this article. Ten CEC2019 benchmark functions [48] are employed to evaluate the algorithms. Test functions F4–F10 are shifted and rotated within the boundary range [−100, 100], while test functions F1–F3, which have different boundary ranges and dimensions, are neither shifted nor rotated. Table 1 gives the details of the CEC2019 test functions.

4.1. Performance Indicators

Here, we give six evaluation indicators to accurately analyze the performance of the EJS algorithm [49].
(i) Best value
$$\mathrm{Best} = \min\{best_1, best_2, \ldots, best_m\}$$
where $best_i$ represents the best value of the i-th independent run.
(ii) Worst value
$$\mathrm{Worst} = \max\{best_1, best_2, \ldots, best_m\}$$
(iii) Mean value
$$\mathrm{Mean} = \frac{1}{m}\sum_{i=1}^{m} best_i$$
(iv) Standard deviation
$$\mathrm{Std} = \sqrt{\frac{1}{m-1}\sum_{i=1}^{m}\left(best_i - \mathrm{Mean}\right)^2}$$
(v) Rank
The mean values of all compared methods are sorted, and the position of each algorithm in this ordering defines its rank; if the mean values are equal, the standard deviations are compared. The algorithm with the lowest rank has the best performance, and a higher rank indicates a worse result relative to the other compared methods. The average ranking in this paper refers to the sum of an algorithm's ranks over all test functions divided by the total number of test functions.
The median is the middle number of a group of data arranged in order; it divides the set of values into equal upper and lower parts, and can therefore reflect an algorithm's ranking more fairly. The final ranking is determined by combining the median and the average rank: the median is used first, and when medians are equal, the average rank decides the final ordering of the compared algorithms.
(vi) Wilcoxon rank sum test result
Taking the EJS algorithm as the benchmark, p-values are computed from m runs of the other methods, and the statistical results are given at the 95% significance level ($\alpha = 0.05$). The symbols +/=/− denote the numbers of test functions on which the EJS algorithm is significantly inferior to, statistically equivalent to, or significantly superior to a compared method, respectively.
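Indicators (i)–(v) can be computed directly from the per-run best values. A minimal sketch follows; the run data are hypothetical and serve only to illustrate the computation (the standard deviation uses the m − 1 divisor from indicator (iv), and ties in the mean are broken by the standard deviation as in indicator (v)).

```python
import numpy as np

def summarize(runs):
    """Indicators (i)-(iv): Best/Worst/Mean/Std of per-run best values."""
    runs = np.asarray(runs, dtype=float)
    return {"Best": runs.min(), "Worst": runs.max(),
            "Mean": runs.mean(), "Std": runs.std(ddof=1)}  # ddof=1: m-1 divisor

def rank_algorithms(means, stds):
    """Indicator (v): sort by mean, break ties by standard deviation;
    rank 1 is best."""
    order = sorted(range(len(means)), key=lambda k: (means[k], stds[k]))
    ranks = [0] * len(means)
    for r, k in enumerate(order, start=1):
        ranks[k] = r
    return ranks

# Hypothetical per-run best values for one test function (m = 5 runs)
ejs_runs = [1.0, 1.2, 0.9, 1.1, 1.0]
stats = summarize(ejs_runs)
# Hypothetical means/stds of three algorithms on one function
ranks = rank_algorithms([1.04, 3.1, 1.04], [0.11, 0.2, 0.25])
```

For indicator (vi), the Wilcoxon rank sum p-values would typically be obtained with a statistics package rather than computed by hand.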

4.2. Comparison between the EJS Algorithm and Other Optimization Algorithms

To verify the contribution of each main improvement strategy to the performance of the EJS algorithm, the EJS algorithm is compared with six incomplete variants and the original JS algorithm. The single-strategy variants use the sine and cosine learning factors (JSI), the local escape operator (JSII), or the learning strategy (JSIII); the pairwise combinations are the sine and cosine learning factors with the local escape operator (JSIV), the local escape operator with the learning strategy (JSV), and the learning strategy with the sine and cosine learning factors (JSVI). The CEC2019 test set is selected as the benchmark; different function types probe different aspects of algorithm performance (see Table 1 for details).
The parameter settings are the same as above. The best value, average value, standard deviation, and rank over 20 runs on each test function are summarized in Table 2; the best result for each evaluation index is marked in bold.
As shown in Table 2, on all evaluation indicators the EJS algorithm and most of the incomplete variants outperform the JS algorithm, except on test function F10; this is sufficient to show that the three strategies added in this paper improve the original algorithm to a certain extent. The EJS algorithm obtains the optimal solution except on test functions F2, F4, and F8. On individual functions, combining all three strategies is not always better than applying a single strategy. For example, on test function F2 the JSVI algorithm outperforms the EJS algorithm, i.e., the local escape operator may reduce accuracy on F2. The EJS algorithm is inferior to the JSI and JSIII algorithms on test function F7, i.e., the sine and cosine learning factors strategy and the learning strategy may reduce accuracy on F7. The JSII and JSIII algorithms are superior to the EJS algorithm on test function F10, i.e., the local escape operator and the learning strategy may reduce accuracy on F10, but the differences are small: the best values are obtained by JSIV and JSIII, and all results fluctuate slightly near the optimal solution. From the overall ranking, the order of the eight algorithms is EJS > JSVI > JSI > JSIV > JSIII > JSV > JSII > JS. These results show that the EJS algorithm is able to avoid local optima and find better solutions by using the local escape operator, which is consistent with the analysis in Section 3.2. In solution accuracy and stability, the EJS algorithm matches or exceeds the JS algorithm on all functions.
Due to space constraints, only the convergence curves of some test functions are presented in Figure 3. The figure shows that the convergence speed and accuracy of the JSI algorithm are improved, as are those of the JSII, JSIII, JSV, and JSVI algorithms, with the improvement of JSIII being the most evident; that is, the learning strategy can significantly improve calculation accuracy, accelerate convergence, and help avoid falling into local optima.
The EJS algorithm is compared with some other recognized optimization methods (such methods include JS [23], HHO [19], GBO [45], WOA [14], AOA [11], SCA [50], BMO [20], SSA [51], SOA [21], PSO [12], and MTDE [52]) to further prove the performance of the EJS algorithm. Table 3 provides related parameter settings of these recognized methods.
Each benchmark function is run 20 times on the CEC2019 test set. Table 4 summarizes the evaluation index results for the EJS algorithm and the other methods, including the best, worst, mean, standard deviation, and rank; the optimal value among all compared algorithms is highlighted in bold. Based on the data in Table 4, the optimization capability of the EJS algorithm is significantly better than that of the original JS algorithm on all test functions, which may be attributed to the sine and cosine learning factors, the local escape operator, and the learning strategy, which significantly accelerate calculation and enhance precision.
Among the eight optimization algorithms, the EJS algorithm shows the most significant advantages on the CEC2019 test set. In terms of rank, the EJS algorithm ranks first on all test functions except F4, F5, and F7. On test function F1 in particular, the EJS algorithm is markedly superior to the other algorithms: it attains the theoretical optimal value with a very small standard deviation, whereas the other optimization algorithms, including the original JS algorithm, remain far from it. The MTDE algorithm also reaches the theoretical optimal value, but its stability is not as good as that of the EJS algorithm. In conclusion, the EJS algorithm significantly accelerates convergence and improves calculation precision.
The final ranking in the last row of Table 4 shows that the performance order of the compared algorithms is WOA < SCA < SOA < SSA < PSO < JS < MTDE < EJS, which fully demonstrates that the three strategies introduced in the EJS algorithm significantly accelerate convergence and improve the calculation precision of the JS algorithm. This verifies the effectiveness and applicability of the EJS algorithm, further confirmed on the CEC2019 test set.
As seen in Table 4, the numbers of fitness evaluations of the original algorithms (JS, SSA, PSO, WOA, SCA, and MTDE) are the same, whereas the mean FEs values of the EJS algorithm differ across test functions; the average FEs over 20 runs is given in Table 4. The EJS algorithm improves accuracy and convergence speed at the cost of increased storage space and computation time, which further corroborates the complexity analysis in Section 3.5.
Under a 95% significance level ($\alpha = 0.05$) with the EJS algorithm as the benchmark, the Wilcoxon rank sum test values and statistics of the other compared methods, each run 20 times on the CEC2019 test set, are listed in Table 5. Rank sum p-values exceeding 0.05 are highlighted in bold, meaning that the EJS algorithm and the corresponding compared algorithm are competitive and perform roughly the same. Combined with the ranking in Table 4, the statistical results in the last line of Table 5 are 0/0/10, 0/1/9, 0/1/9, 0/0/10, 0/1/9, 3/1/6, and 0/3/7; the numbers of functions on which the EJS algorithm is significantly better than the SSA, WOA, SOA, PSO, SCA, MTDE, and JS algorithms are therefore 10, 9, 9, 10, 9, 6, and 7, respectively. Thus, for the CEC2019 test set, the computational accuracy of the EJS algorithm is significantly improved on seven test functions compared with the original JS algorithm, and the EJS algorithm is also strongly competitive with the other compared algorithms.
The convergence curves of the EJS algorithm on the test functions are shown in Figure 4 to better evaluate the EJS algorithm. As indicated in the figure, for the CEC2019 test set, the EJS algorithm has clearly improved convergence characteristics compared with the JS algorithm. On test functions F1, F2, F6, and F9, the EJS algorithm accelerates convergence and also increases calculation precision. On test functions F3, F7, and F8, although the convergence speed of the EJS algorithm is not better than that of the PSO algorithm, its convergence does not stall in the late iterations; it skips the trap of local optimization, and its calculation precision is clearly better than that of the PSO algorithm. The solution precision of the EJS algorithm is clearly better than that of the MTDE algorithm on test functions F4 and F7, while the convergence of the MTDE algorithm is slightly slower on each test function; the EJS algorithm also performs well on the remaining test functions. On balance, the convergence curves show that the proposed EJS algorithm has remarkably improved convergence characteristics compared with the JS algorithm and the other compared methods: it accelerates convergence and correspondingly improves calculation precision.
Box plots help researchers understand and explain the distribution characteristics of the solutions obtained by all the algorithms. The box plots of the EJS algorithm and the other eight optimization methods on the CEC2019 test set are shown in Figure 5. They show that the median of the EJS algorithm over 20 runs is small, except on test functions F4 and F7, which verifies the superiority and effectiveness of the EJS algorithm. At the same time, the rectangular area of the EJS algorithm is clearly narrower than those of the other methods on test functions F1~F3, F5, and F6, which illustrates that the EJS algorithm has strong stability: an approximate solution can be obtained in almost all runs. In addition, the distance between the upper and lower quartiles is small, indicating that the solutions of the EJS algorithm are highly consistent. Regarding outliers, the EJS algorithm has few outliers on test functions F3, F8, and F9, which shows that it avoids chance results, and the solution obtained in each run is only slightly affected by the random strategy. In general, the EJS algorithm is more stable and more accurate than the other compared algorithms.
The radar graphs drawn according to the rankings of all the compared algorithms on the CEC2019 test set are displayed in Figure 6. They show that the EJS algorithm has the smallest shadow area, i.e., the smallest comprehensive ranking across the test functions, confirming its stability. In general, the EJS algorithm outperforms the other compared algorithms on the CEC2019 benchmark.
The differences between individual dimensions indicate whether the population is dispersed across the search space or clustered in a concentrated region. On the one hand, when the algorithm diverges, the dimension-wise differences among individuals increase, i.e., the individuals disperse throughout the search environment; in metaheuristic research this is called exploration or diversification. On the other hand, when the population converges, the differences shrink and the individuals gather in a concentrated region; this is called exploitation or intensification. Different metaheuristic algorithms adopt different strategies to explore and exploit during the iterative process, and these two concepts are ubiquitous in any metaheuristic algorithm. Through exploration, an algorithm can visit unseen regions of the search environment to maximize the chance of finding the global optimum; exploitation, in contrast, uses neighboring solutions so that individuals converge to a potential global optimum. Balancing these two capabilities is a trade-off: an excess of either alone cannot produce effective results, so maintaining the right cooperation between exploration and exploitation is a necessary condition for good optimization ability. In this paper, we use the dimensional diversity measure proposed by Hussain et al. [53] to calculate the corresponding exploration and exploitation ratios. Figure 7 shows the exploration and exploitation analysis diagrams for some CEC2019 test functions.
As shown in the figure, the EJS algorithm starts with exploration on all test functions and then gradually transitions to the exploitation stage, with average exploration above 80% and exploitation below 20% on all functions. On test functions F1, F2, F5, and F9, the EJS algorithm maintains an effective exploration rate through the early and middle stages of the iteration. On test function F7, the EJS algorithm quickly shifts from an effective exploration rate in the middle stage and ends the iteration in an effective exploitation state. This behavior shows that the higher exploration rate of the EJS algorithm in the early stage ensures sufficient global search capability to avoid being trapped in the current local solution, while the higher exploitation rate in the later stage ensures that the search can proceed with higher accuracy after thorough exploration.
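A diversity measure in the spirit of Hussain et al. [53] can be sketched as follows: the per-dimension mean absolute deviation from the population median is averaged over dimensions, and each iteration's diversity is compared with the maximum diversity observed over the run to obtain exploration (XPL%) and exploitation (XPT%) percentages. The exact normalization in [53] may differ slightly, so this is an illustrative version with toy data.

```python
import numpy as np

def diversity(pop):
    """Dimension-wise diversity of a population: mean absolute deviation
    from the per-dimension median, averaged over dimensions."""
    med = np.median(pop, axis=0)
    return np.mean(np.abs(pop - med))

def xpl_xpt(div_history):
    """Exploration / exploitation percentages per iteration, relative to
    the maximum diversity observed over the run."""
    div = np.asarray(div_history, dtype=float)
    dmax = div.max()
    xpl = 100.0 * div / dmax               # exploration ratio
    xpt = 100.0 * np.abs(div - dmax) / dmax  # exploitation ratio
    return xpl, xpt

# Toy diversity history: shrinking as the population converges
hist = [4.0, 2.0, 1.0, 0.5]
xpl, xpt = xpl_xpt(hist)
```

A shrinking diversity history like the toy one above produces exactly the qualitative pattern described in the text: exploration dominates early, exploitation late.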

5. Engineering Application

As more and better intelligent algorithms are proposed, their universality and applicability in solving practical engineering problems need to be verified. Many engineering problems can ultimately be formulated as optimization problems. We therefore select six engineering examples that represent different types of optimization problems, including constrained, unconstrained, discrete-variable, continuous-variable, mixed-variable, implicitly constrained, strongly constrained, and weakly constrained problems, to verify the effectiveness and wide applicability of the proposed algorithm.
The EJS algorithm and some previous methods with the ability to solve practical problems are verified in this section. Six engineering cases, namely tension/compression spring design, pressure vessel design, gear train design, cantilever beam design, 3-bar truss design, and 25-bar truss tower design, illustrate the applicability and effectiveness of the EJS algorithm in solving practical engineering problems, and the calculation indexes reflect its practical application effect. Apart from the gear train design, the other five engineering optimization problems are nonlinear constrained optimization problems with strongly nonlinear objective functions and constraints. In this paper, the penalty function method is selected to handle the constraints; it is an effective technique for dealing with nonlinear constraints. Its basic principle is to add a penalty term to the original objective function, transforming the constrained problem into an unconstrained one that is easy to solve with intelligent algorithms, including the EJS algorithm. In all experiments, the running environment is the same as in Section 4.1, with D = 50 and T = 1000.
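The penalty transformation described above can be sketched in a few lines: each violated inequality $h_i(z) \le 0$ contributes a squared violation term scaled by a penalty weight. The weight `rho` is an assumed, problem-dependent constant, not a value taken from the paper.

```python
def penalized(objective, constraints, rho=1e6):
    """Turn a constrained problem  min f(z) s.t. h_i(z) <= 0  into an
    unconstrained one via a static penalty term.

    rho is an assumed, problem-dependent penalty weight.
    """
    def fp(z):
        # Only positive (violated) constraint values are penalized.
        violation = sum(max(0.0, h(z)) ** 2 for h in constraints)
        return objective(z) + rho * violation
    return fp

# Toy example: minimise z^2 subject to 1 - z <= 0 (i.e. z >= 1)
f = penalized(lambda z: z * z, [lambda z: 1.0 - z])
```

At any feasible point the penalized function coincides with the original objective, so an unconstrained optimizer such as EJS is steered toward the feasible region without changing the location of the constrained optimum.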

5.1. Tension/Compression Spring Design Problem

The tension/compression spring design problem is a nonlinear constrained optimization issue. The objective is to determine the minimum weight, and the variables that can participate in the optimization are mean coil diameter (D), wire diameter (d), and number of effective coils (N). Figure 8 gives the sketch map of this case. Consider Z = [ z 1 , z 2 , z 3 ] = [ d , D , N ] , the mathematical expressions of the spring design problem are shown in Equation (38). z1∈[0.05, 2], z2∈[0.25, 1.3], and z3∈[2, 15] are the search areas of this issue.
$$\text{Minimize} \quad W(Z) = (z_3 + 2)\, z_2 z_1^2$$
$$\text{Subject to} \quad h_1(Z) = \frac{4z_2^2 - z_1 z_2}{12566\,(z_2 z_1^3 - z_1^4)} + \frac{1}{5108\, z_1^2} - 1 \le 0, \quad h_2(Z) = 1 - \frac{140.45\, z_1}{z_2^2 z_3} \le 0,$$
$$h_3(Z) = 1 - \frac{z_2^3 z_3}{71785\, z_1^4} \le 0, \quad h_4(Z) = \frac{z_1 + z_2}{1.5} - 1 \le 0.$$
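Combining the spring formulation of Eq. (38) with the static penalty approach described in Section 5 gives a scalar function an unconstrained optimizer can minimize directly. The penalty weight `rho` and the sample design point used below are illustrative assumptions, not values from the paper.

```python
def spring_weight(z):
    """Objective of Eq. (38): spring weight for z = (d, D, N)."""
    z1, z2, z3 = z
    return (z3 + 2.0) * z2 * z1 ** 2

def spring_constraints(z):
    """The four inequality constraints h_i(z) <= 0 of Eq. (38)."""
    z1, z2, z3 = z
    return [
        (4 * z2 ** 2 - z1 * z2) / (12566 * (z2 * z1 ** 3 - z1 ** 4))
            + 1 / (5108 * z1 ** 2) - 1,
        1 - 140.45 * z1 / (z2 ** 2 * z3),
        1 - z2 ** 3 * z3 / (71785 * z1 ** 4),
        (z1 + z2) / 1.5 - 1,
    ]

def spring_penalized(z, rho=1e6):
    """Penalized objective: weight plus rho times squared violations."""
    viol = sum(max(0.0, h) ** 2 for h in spring_constraints(z))
    return spring_weight(z) + rho * viol

# An (assumed) feasible sample point: d = 0.07, D = 0.7, N = 10
z = [0.07, 0.7, 10.0]
w = spring_weight(z)
```

At this feasible point all four constraints are satisfied, so the penalized value equals the raw weight; an infeasible candidate would instead incur a large penalty and be discarded by the search.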
All the statistical results of the JS [23], ALO [17], GOA [18], GWO [23], MFO [54], MVO [9], WOA [14], SCA [50], HHO [19] and EJS algorithms for the design problem shown in Figure 8 are displayed in Table 6. Table 6 summarizes the variable values and the evaluation indicators consisting of the minimum, mean, worst, and standard deviation of spring weight after all algorithms have been run 20 times. The optimal values of the evaluation indicators are highlighted in bold. As described in Table 6, the EJS algorithm obviously outperforms the above methods on each statistical indicator. The applicability and superiority of the EJS algorithm are further verified. The EJS algorithm can provide the best design variables at the lowest cost as compared with competitors.

5.2. Pressure Vessels Design Problem

Minimizing the total cost of pressure vessels is the first priority of pressure vessel design. The variables that can participate in the optimization are shell thickness (Ts), head thickness (Th), inner radius (R), and length of cylindrical part without head (L). Figure 9 shows a sketch map of this case. Consider R = [ r 1 , r 2 , r 3 , r 4 ] = [ T s , T h , R , L ] . The corresponding mathematical model is simplified in Equation (39). Here, we can set r1, r2∈[0, 99] and r3, r4∈[10, 200] in this problem.
$$\text{Minimize} \quad W(R) = 0.6224\, r_1 r_3 r_4 + 1.7781\, r_2 r_3^2 + 3.1661\, r_1^2 r_4 + 19.84\, r_1^2 r_3$$
$$\text{Subject to} \quad h_1(R) = -r_1 + 0.0193\, r_3 \le 0, \quad h_2(R) = -r_2 + 0.00954\, r_3 \le 0,$$
$$h_3(R) = -\pi r_3^2 r_4 - \frac{4}{3}\pi r_3^3 + 1296000 \le 0, \quad h_4(R) = r_4 - 240 \le 0.$$
All statistical results of the JS [23], ALO [17], GOA [18], GWO [23], MFO [54], MVO [9], WOA [14], SCA [50], HHO [19], and EJS algorithms for the design problem shown in Figure 9 are displayed in Table 7. Table 7 summarizes the variable values and the evaluation indicators, consisting of the optimal, mean, worst, and standard deviation of the total cost after all algorithms have been run 20 times; the optimal values are highlighted in bold. As described in Table 7, the EJS algorithm is prominently ahead of the other algorithms on each statistical indicator and provides a higher quality solution, with the GWO algorithm second and the JS algorithm third. The applicability and superiority of the EJS algorithm for solving the pressure vessel design problem are thereby further verified: the EJS algorithm provides the optimal solution in this case.

5.3. Gear Train Design Problem

The gear train design problem is a nonlinear unconstrained case whose purpose is to minimize the cost of the gear ratio; the four integer variables (the numbers of teeth on the gears) that participate in the optimization are denoted by TA, TB, TC, and TD. Let $Z = [z_1, z_2, z_3, z_4] = [T_A, T_B, T_C, T_D]$ with $z_1, z_2, z_3, z_4 \in [12, 60]$. The mathematical expression of the minimum objective function is shown in Equation (40):
$$W(Z) = \left(\frac{1}{6.931} - \frac{z_1 z_2}{z_3 z_4}\right)^2.$$
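Eq. (40) is cheap to evaluate, which makes the problem a convenient integer test case. The sketch below assumes the tooth-count grouping shown in Eq. (40); the sample point (19, 16, 43, 49) is a commonly reported near-optimal integer combination for this kind of gear train formulation, used here only as an illustration.

```python
def gear_ratio_cost(z):
    """Gear train objective of Eq. (40); all four tooth counts are
    integers in [12, 60]."""
    z1, z2, z3, z4 = z
    # Squared deviation of the achieved ratio from the target 1/6.931.
    return (1.0 / 6.931 - (z1 * z2) / (z3 * z4)) ** 2

# A commonly reported near-optimal tooth combination (illustrative)
good = gear_ratio_cost([19, 16, 43, 49])
poor = gear_ratio_cost([12, 12, 12, 12])
```

Because the variables are integers, a continuous optimizer such as EJS would round its candidates to the nearest integer before evaluating this cost.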
All statistical results of the JS [23], ALO [17], GOA [18], GWO [23], MFO [54], MVO [9], WOA [14], SCA [50], HHO [19] and EJS algorithms for the design problem shown in Figure 10 are displayed in Table 8. Table 8 summarizes the variable values and the evaluation indicators consisting of the optimal, mean, worst, and standard devation of the gear ratio cost after all algorithms have been run 20 times. The optimal values of the evaluation indicators are highlighted in bold. As observed in Table 8, the data of the EJS algorithm is optimal among the ten optimization algorithms, which fully demonstrates that the proposed EJS algorithm performs well in solving a gear train design problem. It can outperform the design effect as compared with the other algorithms.

5.4. Cantilever Beam Design Problem

Similarly, the cantilever beam design problem is a classic example of nonlinear constrained optimization; the final requirement is to minimize its weight. The five design variables are marked in Figure 11: the cross-section parameters of the five hollow square elements are ($z_1, z_2, z_3, z_4, z_5$), all within the range [0.01, 100]. The mathematical formulation is given in Equation (41) as follows:
$$\text{Minimize} \quad W(Z) = 0.6224\, (z_1 + z_2 + z_3 + z_4 + z_5)$$
$$\text{Subject to} \quad h(Z) = \frac{61}{z_1^3} + \frac{37}{z_2^3} + \frac{19}{z_3^3} + \frac{7}{z_4^3} + \frac{1}{z_5^3} - 1 \le 0$$
All statistical results of the JS [23], ALO [17], GOA [18], GWO [23], MFO [54], MVO [9], WOA [14], SCA [50], HHO [19] and EJS algorithms for the design problem shown in Figure 11 are listed in Table 9. Table 9 summarizes the variable values and the evaluation indicators consisting of the best, mean, worst, and standard deviation of cantilever beam weight after all algorithms have been run 20 times. The optimal values of the evaluation indicators are highlighted in bold. As can be observed, Table 9 shows that the average values of the EJS algorithm, the JS algorithm, and the ALO algorithm are the same, and are the smallest after running 20 times, indicating that they all have good superiority in dealing with this case. However, the EJS algorithm has the smallest standard deviation, which means the EJS algorithm is more stable. The statistical table demonstrates that the EJS algorithm possesses significant competitiveness as compared with the other optimization methods, and therefore, the optimal variables can be obtained by using the EJS algorithm.

5.5. Planar Three-Bar Truss Design Problem

The lightest mass of a three-bar truss is a typical problem, and it can be simplified into an optimization problem with two variables (recorded as z A 1 and z A 2 ). This model is indicated in Figure 12. z A 1 and z A 2 represent the cross-sectional areas of the bar trusses. Consider Z = [ z 1 , z 2 ] = [ z A 1 , z A 2 ] and z 1 , z 2 [ 0 ,   1 ] , the mathematical equation of Figure 12 is set out in Equation (42).
$$\text{Minimize} \quad W(Z) = \left(2\sqrt{2}\, z_1 + z_2\right) l$$
$$\text{Subject to} \quad h_1(Z) = \frac{\sqrt{2}\, z_1 + z_2}{\sqrt{2}\, z_1^2 + 2 z_1 z_2}\, P - \sigma \le 0, \quad h_2(Z) = \frac{z_2}{\sqrt{2}\, z_1^2 + 2 z_1 z_2}\, P - \sigma \le 0,$$
$$h_3(Z) = \frac{1}{z_1 + \sqrt{2}\, z_2}\, P - \sigma \le 0$$
where $l = 100$ cm, $P = 2$ kN/cm², and $\sigma = 2$ kN/cm². All statistical results of the JS [23], ALO [17], GOA [18], GWO [23], MFO [54], MVO [9], WOA [14], SCA [50], HHO [19], and EJS algorithms for the design problem shown in Figure 12 are displayed in Table 10. Table 10 summarizes the variable values and the evaluation indicators, consisting of the minimum, mean, worst, and standard deviation of truss weight after all algorithms have been run 20 times; the best results are highlighted in bold. Table 10 shows that the mean value of the EJS algorithm equals that of the JS algorithm, but the standard deviation of the EJS algorithm is smaller, which indicates a certain advantage. The statistical table demonstrates that the EJS algorithm possesses significant superiority compared with the other optimization methods; it can effectively solve this case and achieves a better design effect than the other algorithms.

5.6. Spatial 25-Bar Truss Design Problem

Under stress and node displacement constraints, the lightweight design of a 25-bar truss is a long-studied topic in structural engineering; the goal is to minimize the total mass of the 25 members. The structure has 25 elements and 10 nodes. The 25 member elements are grouped into 8 units, with equal cross-sectional area within each group: U1 = {S1}, U2 = {S2~S5}, U3 = {S6~S9}, U4 = {S10, S11}, U5 = {S12, S13}, U6 = {S14~S17}, U7 = {S18~S21}, and U8 = {S22~S25}, as displayed in Figure 13. The material density of all elements is 0.1 lb/in³, the elastic modulus is 10,000 ksi, and the stress is limited to [−40,000, 40,000] psi. The displacements of all nodes in the three coordinates X, Y, and Z are restricted to [−0.35, 0.35] in, and the node loads are P1x = 1 kips, P3x = 0.5 kips, P6x = 0.6 kips, and P1y = P1z = P2y = P2z = −10 kips. Each member sectional area is selected from the discrete set D = {0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.6, 2.8, 3.0, 3.2, 3.4} (in²).
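The discrete member areas make this a discrete-variable problem. One common way to let a continuous metaheuristic such as EJS handle such variables (an assumption here, not necessarily the authors' exact implementation) is to snap each continuous candidate value to the nearest admissible area before evaluating the structure:

```python
import numpy as np

# Admissible cross-sectional areas (in^2) for the 25-bar truss problem
AREAS = np.array([0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9,
                  2.0, 2.1, 2.2, 2.3, 2.4, 2.6, 2.8, 3.0, 3.2, 3.4])

def snap_to_discrete(x):
    """Map a continuous candidate (one area per element group) to the
    nearest admissible discrete area."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    # Pairwise distances between candidate values and admissible areas
    idx = np.abs(x[:, None] - AREAS[None, :]).argmin(axis=1)
    return AREAS[idx]

snapped = snap_to_discrete([0.5, 1.24, 2.55, 9.9])
```

Out-of-range candidates are naturally clamped to the smallest or largest admissible area, so no separate boundary handling is needed for the discrete variables.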
All statistical results of the JS [23], ALO [17], GOA [18], GWO [23], MFO [54], MVO [9], WOA [14], SCA [50], HHO [19] and EJS algorithms for the design problem shown in Figure 13 are displayed in Table 11 and Table 12. Since one table displaying all the results would be somewhat crowded, the results are divided into two tables. Table 11 summarizes the variable values and the minimum weight of spatial 25-bar truss. Table 12 summarizes the evaluation indicators consisting of the minimum, mean, worst, and standard deviation of truss mass after all algorithms have been run 20 times. The best results of the evaluation indicators are highlighted in bold. The results demonstrate that the solution obtained by the EJS algorithm is optimal in all evaluating indicator values such as minimum, worst, mean, and standard deviation, which further demonstrates that the EJS algorithm possesses significant superiority, validity, and applicability in dealing with truss size design problem.

6. Conclusions

This paper proposes an enhanced jellyfish search (EJS) algorithm with better calculation precision and faster convergence speed. Three improvements have been applied to the JS algorithm: (i) the addition of a sine and cosine learning factors strategy enhances the solution quality and accelerates convergence; (ii) the introduction of a local escape operator prevents the algorithm from getting stuck at a local optimal solution and improves the exploitation capability; (iii) applying an opposition-based learning and quasi-opposition learning strategy with a given probability increases the diversity of the candidate population. The comparison tests between the incomplete variants with individual strategies and the original algorithm visualize the impact of each strategy on the algorithm. Comparisons with other popular optimization algorithms on the CEC2019 test set verify that the EJS algorithm is strongly competitive, and an exploration-exploitation balance test further confirms its performance. The EJS algorithm exhibits fast convergence, high calculation precision, and strong robustness; compared with the JS algorithm, it escapes the trap of local optimization, enhances solution quality, and accelerates computation. In addition, the practical engineering applications of the EJS algorithm show its superiority in solving both constrained and unconstrained real-world optimization problems, and therefore, provide a way to solve such problems.

Author Contributions

Conceptualization, G.H., A.G.H. and M.A.; Methodology, G.H., J.W., M.L., A.G.H. and M.A.; Software, J.W. and M.L.; Validation, J.W. and M.L.; Formal analysis, G.H.; Investigation, G.H., J.W., M.L., A.G.H. and M.A.; Resources, G.H. and A.G.H.; Data curation, J.W. and M.L.; Writing—original draft, G.H., J.W., M.L., A.G.H. and M.A.; Writing—review & editing, G.H., J.W., M.L., A.G.H. and M.A.; Visualization, M.L., A.G.H. and M.A.; Supervision, G.H. and M.A.; Project administration, G.H. and M.A.; Funding acquisition, G.H. and A.G.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2021JM320).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data generated or analyzed during this study are included in this published article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hu, G.; Du, B.; Wang, X.; Wei, G. An enhanced black widow optimization algorithm for feature selection. Knowl.-Based Syst. 2022, 235, 107638. [Google Scholar] [CrossRef]
  2. Glover, F. Future paths for integer programming and links to artificial intelligence. Comput. Oper. Res. 1986, 13, 533–549. [Google Scholar] [CrossRef]
  3. Fausto, F.; Reyna-Orta, A.; Cuevas, E.; Andrade, Á.G.; Perez-Cisneros, M. From ants to whales: Metaheuristics for all tastes. Artif. Intell. Rev. 2020, 53, 753–810. [Google Scholar] [CrossRef]
  4. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  5. Storn, R.; Price, K. Differential evolution-A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  6. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  7. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  8. Erol, O.K.; Eksin, I. A new optimization method: Big Bang–Big Crunch. Adv. Eng. Softw. 2006, 37, 106–111. [Google Scholar] [CrossRef]
  9. Abualigah, L. Multi-verse optimizer algorithm: A comprehensive survey of its results, variants, and applications. Neural Comput. Appl. 2020, 32, 12381–12401. [Google Scholar] [CrossRef]
  10. Mostafa, R.R.; El-Attar, N.E.; Sabbeh, S.F.; Ankit, V.; Fatma, A.H. ST-AL: A hybridized search based metaheuristic computational algorithm towards optimization of high dimensional industrial datasets. Soft Comput. 2022, 1–29. [Google Scholar] [CrossRef]
  11. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551. [Google Scholar] [CrossRef]
  12. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the 1995 IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  13. Dorigo, M.; Di Caro, G. Ant colony optimization: A new meta-heuristic. In Proceedings of the 1999 Congress on Evolutionary Computation, Washington, DC, USA, 6–9 July 1999; pp. 1470–1477. [Google Scholar]
  14. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  15. Ashraf, N.N.; Mostafa, R.R.; Sakr, R.H.; Rashad, M.Z. Optimizing hyperparameters of deep reinforcement learning for autonomous driving based on whale optimization algorithm. PLoS ONE 2021, 16, e0252754. [Google Scholar] [CrossRef]
  16. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  17. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
  18. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper optimization algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef]
  19. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H.L. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  20. Sulaiman, M.H.; Mustaffa, Z.; Saari, M.M.; Daniyal, H. Barnacles mating optimizer: A new bio-inspired algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2020, 87, 103330. [Google Scholar] [CrossRef]
  21. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 2019, 165, 169–196. [Google Scholar] [CrossRef]
  22. Chou, J.-S.; Truong, D.N. A novel metaheuristic optimizer inspired by behavior of jellyfish in ocean. Appl. Math. Comput. 2021, 389, 125535. [Google Scholar] [CrossRef]
  23. Elkabbash, E.T.; Mostafa, R.R.; Barakat, S.I. Android malware classification based on random vector functional link and artificial Jellyfish Search optimizer. PLoS ONE 2021, 16, e0260232. [Google Scholar] [CrossRef]
  24. Hu, G.; Dou, W.; Wang, X.; Abbas, M. An enhanced chimp optimization algorithm for optimal degree reduction of Said-ball curves. Math. Compu. Simulat. 2022, 197, 207–252. [Google Scholar] [CrossRef]
  25. Hu, G.; Li, M.; Wang, X.F.; Guo, W.; Ching-Ter, C. An enhanced manta ray foraging optimization algorithm for shape optimization of complex CCG-Ball curves. Knowl.-Based Syst. 2022, 240, 108071. [Google Scholar] [CrossRef]
  26. Elaziz, M.A.; Abualigah, L.; Ewees, A.A.; Al-qaness, M.A.; Mostafa, R.R.; Yousri, D.; Ibrahim, R.A. Triangular mutation-based manta-ray foraging optimization and orthogonal learning for global optimization and engineering problems. Appl. Intell. 2022, 1–30. [Google Scholar] [CrossRef]
  27. Hu, G.; Zhong, J.; Du, B.; Wei, G. An enhanced hybrid arithmetic optimization algorithm for engineering applications. Comput. Methods Appl. Mech. Eng. 2022, 394, 114901. [Google Scholar] [CrossRef]
  28. Chaabane, S.B.; Kharbech, S.; Belazi, A.; Bouallegue, A. Improved Whale optimization Algorithm for SVM Model Selection: Application in Medical Diagnosis. In Proceedings of the 2020 International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Split, Croatia, 17–19 September 2020; IEEE: Piscataway, NJ, USA, 2020. [Google Scholar]
  29. Ben Chaabane, S.; Belazi, A.; Kharbech, S.; Bouallegue, A.; Clavier, L. Improved Salp Swarm Optimization Algorithm: Application in Feature Weighting for Blind Modulation Identification. Electronics 2021, 10, 2002. [Google Scholar] [CrossRef]
  30. Mostafa, R.R.; Ewees, A.A.; Ghoniem, R.M.; Abualigah, L.; Hashim, F.A. Boosting chameleon swarm algorithm with consumption AEO operator for global optimization and feature selection. Knowl.-Based Syst. 2022, 246, 108743. [Google Scholar] [CrossRef]
  31. Adnan, R.M.; Dai, H.L.; Mostafa, R.R.; Parmar, K.S.; Heddam, S.; Kisi, O. Modeling Multistep Ahead Dissolved Oxygen Concentration Using Improved Support Vector Machines by a Hybrid Metaheuristic Algorithm. Sustainability 2022, 14, 3470. [Google Scholar] [CrossRef]
  32. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  33. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  34. Liu, Z.Z.; Chu, D.H.; Song, C.; Xue, X.; Lu, B.Y. Social learning optimization (SLO) algorithm paradigm and its application in QoS-aware cloud service composition. Inf. Sci. 2016, 326, 315–333. [Google Scholar] [CrossRef]
  35. Satapathy, S.; Naik, A. Social group optimization (SGO): A new population evolutionary optimization technique. Complex Intell. Syst. 2016, 2, 173–203. [Google Scholar] [CrossRef]
  36. Kumar, M.; Kulkarni, A.J.; Satapathy, S.C. Socio evolution & learning optimization algorithm: A socio-inspired optimization methodology. Future Gener. Comput. Syst. 2018, 81, 252–272. [Google Scholar]
  37. Gouda, E.A.; Kotb, M.F.; El-Fergany, A.A. Jellyfish search algorithm for extracting unknown parameters of PEM fuel cell models: Steady-state performance and analysis. Energy 2021, 221, 119836. [Google Scholar] [CrossRef]
  38. Youssef, H.; Hassan, M.H.; Kamel, S.; Elsayed, S.K. Parameter estimation of single phase transformer using jellyfish search optimizer algorithm. In Proceedings of the 2021 IEEE International Conference on Automation/XXIV Congress of the Chilean Association of Automatic Control (ICA-ACCA), Online, 22–26 March 2021; pp. 1–4. [Google Scholar]
  39. Shaheen, A.M.; Elsayed, A.M.; Ginidi, A.R.; Elattar, E.E.; El-Sehiemy, R.A. Effective automation of distribution systems with joint integration of DGs/ SVCs considering reconfiguration capability by jellyfish search algorithm. IEEE Access 2021, 9, 92053–92069. [Google Scholar] [CrossRef]
  40. Shaheen, A.M.; El-Sehiemy, R.A.; Alharthi, M.M.; Ghoneim, S.S.; Ginidi, A.R. Multi-objective jellyfish search optimizer for efficient power system operation based on multi-dimensional OPF framework. Energy 2021, 237, 121478. [Google Scholar] [CrossRef]
  41. Barshandeh, S.; Dana, R.; Eskandarian, P. A learning automata-based hybrid MPA and JS algorithm for numerical optimization problems and its application on data clustering. Knowl.-Based Syst. 2021, 236, 107682. [Google Scholar] [CrossRef]
  42. Manita, G.; Zermani, A. A modified jellyfish search optimizer with orthogonal learning strategy. Procedia Comput. Sci. 2021, 192, 697–708. [Google Scholar] [CrossRef]
  43. Abdel-Basset, M.; Mohamed, R.; Chakrabortty, R.; Ryan, M.; El-Fergany, A. An improved artificial jellyfish search optimizer for parameter identification of photovoltaic models. Energies 2021, 14, 1867. [Google Scholar] [CrossRef]
  44. Abdel-Basset, M.; Mohamed, R.; Abouhawwash, M.; Chakrabortty, R.K.; Ryan, M.J.; Nam, Y. An improved jellyfish algorithm for multilevel thresholding of magnetic resonance brain image segmentations. Comput. Mater. Con. 2021, 68, 2961–2977. [Google Scholar] [CrossRef]
  45. Ahmadianfar, I.; Bozorg-Haddad, O.; Chu, X. Gradient-based optimizer: A new metaheuristic optimization algorithm. Inform. Sci. 2020, 540, 131–159. [Google Scholar] [CrossRef]
  46. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC’06), Vienna, Austria, 28–30 November 2005; pp. 695–701. [Google Scholar]
  47. Hu, G.; Zhu, X.N.; Wei, G.; Chang, C.T. An improved marine predators algorithm for shape optimization of developable Ball surfaces. Eng. Appl. Artif. Intell. 2021, 105, 104417. [Google Scholar] [CrossRef]
  48. Brest, J.; Maučec, M.S.; Bošković, B. The 100-digit challenge: Algorithm jde100. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation, CEC, Wellington, New Zealand, 10–13 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 19–26. [Google Scholar]
  49. Hu, G.; Yang, R.; Qin, X.Q.; Wei, G. MCSA: Multi-strategy boosted chameleon-inspired optimization algorithm for engineering applications. Comput. Methods Appl. Mech. Eng. 2023, 403, 115676. [Google Scholar] [CrossRef]
  50. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  51. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  52. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Faris, H. MTDE: An effective multi-trial vector-based differential evolution algorithm and its applications for engineering design problems. Appl. Soft Comput. 2020, 97, 106761. [Google Scholar] [CrossRef]
  53. Hussain, K.; Salleh, M.N.M.; Cheng, S.; Shi, Y. On the exploration and exploitation in popular swarm-based metaheuristic algorithms. Neural Comput. Appl. 2019, 31, 7665–7683. [Google Scholar] [CrossRef]
  54. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  55. Gupta, S.; Deep, K.; Mirjalili, S.; Kim, J.H. A modified sine cosine algorithm with novel transition parameter and mutation operator for global optimization. Expert Syst. Appl. 2020, 154, 113395. [Google Scholar] [CrossRef]
  56. Nematollahi, A.F.; Rahiminejad, A.; Vahidi, B. A novel meta-heuristic optimization method based on golden ratio in nature. Soft Comput. 2020, 24, 1117–1151. [Google Scholar] [CrossRef]
Figure 1. Time control function [23].
Figure 2. Flow chart of the EJS algorithm.
Figure 3. Convergence curves of the incomplete improved algorithms (some test sets).
Figure 4. Convergence curves of all algorithms on the CEC2019 test set.
Figure 5. Box plots of all algorithms on the CEC2019 test set.
Figure 6. Radar graphs of all optimization algorithms on the CEC2019 test set. (a) SSA. (b) SOA. (c) PSO. (d) WOA. (e) SCA. (f) MTDE. (g) JS. (h) EJS.
Figure 7. The exploration and exploitation diagrams of the EJS algorithm.
Figure 8. Sketch map of the tension/compression spring.
Figure 9. Sketch map of the pressure vessel design.
Figure 10. Sketch map of the gear train design problem [55].
Figure 11. Sketch map of the cantilever beam design problem [56].
Figure 12. Sketch map of the 3-bar truss design problem.
Figure 13. Sketch map of the spatial 25-bar truss design problem [21].
Table 1. The details of the CEC2019 test functions.
No. | Function Name | Optimal Value | Dim | Search Range
F1 | Storn's Chebyshev Polynomial Fitting Problem | 1 | 9 | [−8192, 8192]
F2 | Inverse Hilbert Matrix Problem | 1 | 16 | [−16,384, 16,384]
F3 | Lennard-Jones Minimum Energy Cluster | 1 | 18 | [−4, 4]
F4 | Rastrigin's Function | 1 | 10 | [−100, 100]
F5 | Griewangk's Function | 1 | 10 | [−100, 100]
F6 | Weierstrass Function | 1 | 10 | [−100, 100]
F7 | Modified Schwefel's Function | 1 | 10 | [−100, 100]
F8 | Expanded Schaffer's F6 Function | 1 | 10 | [−100, 100]
F9 | Happy Cat Function | 1 | 10 | [−100, 100]
F10 | Ackley Function | 1 | 10 | [−100, 100]
Table 2. Comparison results of different improvement strategies on the CEC2019 benchmark test functions.
No.ResultAlgorithm
JSJSIJSIIJSIIIJSIVJSVJSVIEJS
F1Best1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000
Worst107.874719 4.518274 2005.953718 1.000000 34.033461 1.000000 1.000000 1.000000
Mean25.353117 1.714772 730.661957 1.000000 7.867176 1.000000 1.000000 1.000000
Std4.6579 × 1011.5674 × 1008.7582 × 1028.1288 × 10−81.4638 × 1004.0521 × 10−92.2238 × 10−114.0951 × 10−13
Rank75846321
F2Best4.246899 4.198636 4.266541 4.186653 3.908319 4.222719 4.0965434.225043
Worst26.384846 5.010327 8.670419 4.548559 11.681408 4.358863 4.2690764.274394
Mean9.976218 4.455787 5.476164 4.317989 6.827503 4.274081 4.2463124.265880
Std9.3415 × 1003.2849 × 10−11.8428 × 1001.2939 × 10−13.1694 × 1004.8525 × 10−21.4885 × 10−25.7127 × 10−3
Rank85647312
F3Best1.409205 1.409135 1.423200 1.419679 1.000000 2.133738 1.409135 1.000001
Worst5.9568 1.4140 5.1663 5.1481 4.6081 5.6611 2.2787 1.4497
Mean3.829589 1.409379 3.541565 3.371766 1.567780 3.861664 1.462579 1.390706
Std1.4241 × 1001.0867 × 10−31.0588 × 1001.1612 × 1007.2708 × 10−11.0251 × 1001.9616 × 10−19.2406 × 10−2
Rank72654831
F4Best5.974795 4.979836 7.964708 8.959667 5.974795 7.965020 3.984877 1.994959
Worst19.904187 20.899141 22.579489 24.878957 21.894100 27.720452 19.90418722.889059
Mean13.571367 10.651094 14.364349 16.351025 10.253112 16.154598 10.601347 10.203363
Std4.2744 × 1004.5320 × 1004.4084 × 1004.4952 × 1004.1478 × 1004.4344 × 1004.5683 × 1005.2338 × 100
Rank54682731
F5Best1.000391 1.009865 1.019678 1.003905 1.007396 1.009858 1.007396 1.000001
Worst1.164923 1.256066 1.127889 1.201756 1.130397 1.129320 1.132895 1.120643
Mean1.062980 1.067922 1.065941 1.058357 1.064226 1.0593251.059564 1.002496
Std4.3754 × 10−25.8209 × 10−22.8223 × 10−25.5578 × 10−23.7415 × 10−23.0438 × 10−23.2596 × 10−23.4416 × 10−2
Rank58726341
F6Best1.010457 1.000000 1.008890 1.033805 1.000000 1.030205 1.000000 1.000000
Worst3.125804 2.576493 3.071817 4.085525 2.576352 4.234450 1.008229 1.002320
Mean1.799196 1.140360 1.629071 1.900689 1.154138 2.045131 1.000851 1.000247
Std6.3932 × 10−13.9503 × 10−16.3335 × 10−11.0009 × 1004.7352 × 10−18.5399 × 10−12.3623 × 10−37.0205 × 10−4
Rank63574821
F7Best263.387643 119.875516 475.665511 24.567441432.363813 165.724634 134.682820 123.243229
Worst1.1952 × 1031.2673 × 1031.3974 × 1031.1286 × 1031.1644 × 1031.3881 × 1031.2171 × 1031.2086 × 103
Mean745.119061 615.341713 889.860067 711.903040 757.697432 874.636013 577.351162702.483287
Std2.3229 × 1023.0147 × 1022.4962 × 1022.8263 × 1022.0552 × 1023.2378 × 1022.5476 × 1022.8796 × 102
Rank52846713
F8Best3.110874 2.197454 2.839690 2.043254 1.758220 3.274288 2.566743 1.717564
Worst4.101536 3.813682 4.261395 4.032497 3.6470343.938084 4.097482 3.809025
Mean3.677661 2.928565 3.614662 3.518221 2.927741 3.586926 3.179098 2.871277
Std2.4798 × 10−14.3209 × 10−14.0743 × 10−14.2721 × 10−15.0958 × 10−11.7740 × 10−14.2090 × 10−14.8730 × 10−1
Rank83752641
F9Best1.108133 1.047001 1.170710 1.081691 1.040930 1.133197 1.035531 1.040001
Worst1.385159 1.157980 1.294128 1.379456 1.144856 1.376869 1.149305 1.128768
Mean1.209967 1.096928 1.235022 1.202045 1.0809901.223293 1.090168 1.080195
Std6.9719 × 10−22.7865 × 10−23.9300 × 10−27.4747 × 10−22.8149 × 10−26.2415 × 10−23.2839 × 10−22.8916 × 10−2
Rank64852731
F10Best11.61857.491409 1.000001 1.000000 1.000000 3.013315 3.013315 1.000000
Worst21.507121.511923 21.452565 21.496805 21.501699 21.534074 21.539023 21.500175
Mean20.039520.701859 16.406949 15.824920 18.436521 18.611377 19.590365 17.416985
Std1.05 × 1013.1102 × 1008.0406 × 1008.3796 × 1007.1870 × 1006.0449 × 1005.5736 × 1008.1379 × 100
Rank78214563
Mean Rank | 6.5 | 4.2 | 6.3 | 4.5 | 4.3 | 5.7 | 2.9 | 1.5
Median Rank | 6.5 | 4 | 6.5 | 4.5 | 4 | 6.5 | 3 | 1
Result | 8 | 3 | 7 | 5 | 4 | 6 | 2 | 1
Table 3. Related parameters of other recognized algorithms.
Algorithm | Parameter | Value
JS | C0 | 0.5
EJS | C0 | 0.5
EJS | Selection probability p | 0.5
HHO | Initial energy E0 | [−1, 1]
GBO | Constant parameters | βmin = 0.2, βmax = 1/2
GBO | Probability parameter pr | 0.5
WOA | a, b | a decreases linearly from 2 to 0; b = 1
AOA | Constant parameters | C1 = 2, C2 = 6, C3 = 1, C4 = 2
SCA | a | 2
BMO | pl | 7
SSA | Initial speed v0 | 0
SOA | Control parameter A | decreases linearly from 2 to 0
SOA | fc | 0
PSO | Cognitive coefficient | 2
PSO | Social coefficient | 2
PSO | Inertia constant | decreases linearly from 0.8 to 0.2
MTDE | Constant parameters | WinIter = 20, H = 5, initial = 0.001, final = 2, Mu = log(D), μf = 0.5, σ = 0.2
Table 4. Results of the EJS algorithm and other optimization algorithms on the CEC2019 test set.
No.ResultAlgorithm
SSASOAPSOWOASCAMTDEJSEJS
F1Best2.03 × 10317.46 × 1031.57 × 1021111
Worst3.39 × 1062.38 × 1022.30 × 1052.06 × 1073.60 × 1061.00019.62 × 1031
Mean7.55 × 1052.28 × 1017.18 × 1044.22 × 1063.87 × 10515.61 × 1021
Std6.94 × 10113.33 × 1033.61 × 1093.72 × 10139.30 × 10114.33 × 10−14.57 × 1061.56 × 10−24
MeanFEs50,05050,00050,00050,00050,00050,05050,0504,269,450
Rank73586241
F2Best1.37 × 1024.25781.51 × 1022.36 × 1032.81 × 1013.65984.09524.1721
Worst2.33 × 1032.02 × 1024.40 × 1021.94 × 1044.13 × 1031.63 × 1014.05 × 1014.2865
Mean5.86 × 1023.39 × 1012.61 × 1027.21 × 1032.42 × 1036.78718.31734.2474
Std2.95 × 1052.99 × 1037.68 × 1031.52 × 1071.09 × 1068.52416.44 × 1018.34 × 10−4
MeanFEs50,05050,00050,00050,00050,00050,050500504,278,650
Rank64587231
F3Best15.52271.40911.01144.96621.40921.41901
Worst7.387111.71286.71208.633511.18732.92065.06631.4101
Mean3.56249.69192.09934.39728.71191.61123.07391.3683
Std3.48872.34482.99665.03633.0211.80 × 10−11.35271.59 × 10−2
MeanFEs50,05050,00050,00050,00050,00050,05050,0504,288,670
Rank58367241
F4Best10.949612.84338.959711.026724.21441.33118.95971.9950
Worst55.722243.238025.873997.572255.30168.960332.838616.9193
Mean25.277824.580416.642750.006241.78375.755114.19749.1587
Std153.389288.288022.2697508.042184.65014.681629.570016.9436
MeanFEs50,05050,00050,00050,00050,00050,05050,0504,287,350
Rank65487132
F5Best1.05661.488511.29664.505511.01721.0099
Worst1.683515.67871.24373.306510.57261.03191.18461.1454
Mean1.26533.47431.11692.04096.84611.00591.07281.0625
Std2.98 × 10−29.43154.71 × 10−32.52 × 10−12.36729.52 × 10−51.81 × 10−31.55 × 10−3
MeanFEs50,05050,00050,00050,00050,00050,05050,0504,282,650
Rank57468132
F6Best1.50315.571715.97434.952211.0151
Worst7.60489.92225.608711.81409.12512.5003.59321.0596
Mean4.40527.49452.41198.54416.98211.12391.6741.0034
Std3.90271.82431.92152.03661.14571.28 × 10−14.30 × 10−11.78 × 10−4
MeanFEs50,05050,00050,00050,00050,00050,05050,0504,280,150
Rank57486231
F7Best5.16 × 1024.86 × 1022.38 × 1025.33 × 1021.17 × 1031.25753.57 × 1021.3747
Worst1.67 × 1031.39 × 1031.17 × 1031.74 × 1031.74 × 1031.57 × 1021.35 × 1031.10 × 103
Mean8.93 × 1029.36 × 1027.26 × 1021.23 × 1031.45 × 1036.77 × 1017.93 × 1025.81 × 102
Std1.02 × 1051.01 × 1057.03 × 1049.80 × 1042.14 × 1043.04 × 1037.60 × 1041.20 × 105
MeanFEs50,05050,00050,00050,00050,00050,05050,0504,271,650
Rank56378142
F8Best2.84063.38271.45774.08853.81072.30482.26071.8870
Worst4.57615.01744.48255.00424.69903.69794.12023.6695
Mean3.86344.32803.45104.54524.26843.06183.66812.8739
Std2.10 × 10−11.29 × 10−13.96 × 10−18.09 × 10−27.10 × 10−21.46 × 10−11.69 × 10−11.96 × 10−1
MeanFEs50,05050,00050,00050,00050,00050,05050,0504,282,450
Rank57386241
F9Best1.11791.13421.03531.12151.36901.10011.10841.0222
Worst1.92141.52621.28291.69791.79381.21561.30491.1698
Mean1.38121.32161.11081.35521.51821.14401.19811.0788
Std4.82 × 10−21.26 × 10−23.11 × 10−32.22 × 10−21.44 × 10−28.23 × 10−43.68 × 10−31.57 × 10−3
MeanFEs50,05050,00050,00050,00050,00050,05050,0504,289,150
Rank75268341
F10Best20.996521.177121.043121.007315.035021.089911.61852.1551
Worst21.102921.510821.466221.363021.515521.246921.507121.5214
Mean21.013021.365121.215921.125221.037621.172220.039518.6298
Std1.10 × 10−38.41 × 10−31.04 × 10−21.04 × 10−22.00422.42 × 10−31.05 × 1014.58 × 101
MeanFEs50,05050,00050,00050,00050,00050,05050,0504,287,350
Rank38754621
Mean Rank | 5.4 | 6.0 | 4.0 | 7.0 | 6.7 | 2.2 | 3.4 | 1.3
Median Rank | 5 | 6.5 | 4 | 7.5 | 7 | 2 | 3.5 | 1
Result | 5 | 6 | 4 | 8 | 7 | 2 | 3 | 1
Table 5. p-value results on the CEC2019 test set with the EJS algorithm as the benchmark.
FunctionAlgorithm
SSASOAPSOWOASCAMTDEJS
F16.791 × 10−86.791 × 10−86.791 × 10−86.791 × 10−86.791 × 10−86.791 × 10−86.791 × 10−8
F26.791 × 10−82.56 × 10−76.791 × 10−86.791 × 10−86.791 × 10−81.60 × 10−51.20 × 10−6
F31.35 × 10−36.791 × 10−84.20 × 10−39.13 × 10−76.791 × 10−81.66 × 10−76.791 × 10−8
F41.37 × 10−67.93 × 10−74.15 × 10−51.65 × 10−76.78 × 10−82.56 × 10−22.04 × 10−3
F52.06 × 10−66.791 × 10−86.04 × 10−36.791 × 10−86.791 × 10−81.92 × 10−74.90 × 10−1
F64.001 × 10−84.001 × 10−81.14 × 10−64.001 × 10−84.001 × 10−82.15 × 10−25.45 × 10−8
F71.33 × 10−23.64 × 10−31.99 × 10−15.17 × 10−66.791 × 10−85.90 × 10−58.10 × 10−2
F82.06 × 10−61.06 × 10−75.631 × 10−46.791 × 10−86.791 × 10−81.48 × 10−11.25 × 10−5
F91.92 × 10−71.06 × 10−74.68 × 10−29.17 × 10−86.791 × 10−82.04 × 10−59.13 × 10−7
F101.61 × 10−49.68 × 10−18.35 × 10−43.05 × 10−43.512 × 10−11.614 × 10−43.94 × 10−1
+/=/− | 0/0/10 | 0/1/9 | 0/1/9 | 0/0/10 | 0/1/9 | 3/1/6 | 0/3/7
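The p-values in Table 5 come from a pairwise nonparametric rank test over the independent runs. A minimal pure-Python sketch of a two-sided Wilcoxon rank-sum test under the normal approximation (no tie-variance correction; a simplified stand-in for a library routine such as SciPy's `ranksums`, not the paper's exact procedure):

```python
import math

def ranksum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation."""
    combined = sorted((v, i) for i, v in enumerate(a + b))
    ranks = {}
    k = 0
    while k < len(combined):
        j = k
        while j + 1 < len(combined) and combined[j + 1][0] == combined[k][0]:
            j += 1
        avg = (k + j) / 2.0 + 1.0              # average (1-based) rank for ties
        for m in range(k, j + 1):
            ranks[combined[m][1]] = avg
        k = j + 1
    n1, n2 = len(a), len(b)
    r1 = sum(ranks[i] for i in range(n1))      # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0              # mean of r1 under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (r1 - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))  # two-sided tail probability
```

Identically distributed samples give a p-value near 1, while clearly separated samples fall below the usual 0.05 significance threshold, matching how the +/=/− summary row is tallied.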
Table 6. Evaluation indicators and variable values for Figure 8.
AlgorithmDesign VariablesEvaluation Indicators (Weight)
dDNMinimumMeanStdWorst
JS0.05166560.35589711.35460.0126660.0127106.0819 × 10−100.012761
EJS0.05207380.36604510.76240.0126650.0126683.4221 × 10−120.012671
ALO0.0500000.31742514.02780.0126700.0130011.7155 × 10−70.014091
GOA0.0673400.8631002.29600.0127190.0159664.2678 × 10−60.019652
GWO0.0536580.4058908.90140.0126780.0127202.4396 × 10−90.012919
MFO0.0589790.5587904.97830.0126660.0129692.2056 × 10−70.014735
MVO0.0690940.9375402.01810.0128780.0171672.4197 × 10−60.018036
WOA0.0606490.6130404.21570.0126870.0138131.4231 × 10−60.017329
SCA0.0500000.31731614.31550.0127230.0129009.9693 × 10−90.013100
HHO0.0575400.5145105.77760.0126790.0138721.1585 × 10−60.017644
Table 7. Evaluation indicators and variable values for Figure 9.
AlgorithmDesign VariablesEvaluation Indicators (Cost)
TsThRLOptimalMeanStdWorst
JS0.77703960.384814040.42532198.57065870.12505871.10563.32665877.8328
EJS0.77454910.383203940.31962200.00005870.12405870.12406.6383 × 10−225870.1240
ALO1.10271000.543302057.2543049.50715870.12996334.3010254,190.12887301.0969
GOA0.86650651.179295045.19656141.68816664.31498115.76272,663,313.778713,589.6419
GWO0.77417320.383318740.31964200.00005870.39035961.971881,459.16467019.5910
MFO0.78276610.387213640.74312194.18745870.12406241.3384294,817.89497301.1955
MVO1.22638000.603160063.7598017.41116024.76686680.0326207,592.65897550.9419
WOA0.85191450.560377243.42803160.82936314.92677300.9278478,781.64228662.6477
SCA0.80469460.399335441.28378196.37656103.27956618.5766199,596.98227746.5638
HHO1.08608000.521551054.9925063.08755972.45476715.7933175,488.77147306.5959
Table 8. Evaluation indicators and variable values for Figure 10.
AlgorithmDesign VariablesEvaluation Indicators (Cost)
TATBTCTDOptimal MeanStdWorst
JS532615512.3078 × 10−115.8263 × 10−115.9403 × 10−201.0936 × 10−9
EJS431619492.7009 × 10−122.9871 × 10−114.7338 × 10−213.0676 × 10−10
ALO271212371.8274 × 10−83.8599 × 10−93.1347 × 10−171.8274 × 10−8
GOA592115373.0676 × 10−101.8504 × 10−93.5997 × 10−172.7265 × 10−8
GWO491619432.7009 × 10−121.2263 × 10−108.8927 × 10−209.9216 × 10−10
MFO543712578.8876 × 10−104.8239 × 10−96.9029 × 10−172.7265 × 10−8
MVO573712548.8876 × 10−104.8240 × 10−103.6788 × 10−192.3576 × 10−9
WOA531320342.3078 × 10−111.0561 × 10−98.0578 × 10−192.3576 × 10−9
SCA592115373.0676 × 10−101.4669 × 10−91.2268 × 10−171.6200 × 10−8
HHO601515262.3576 × 10−91.6465 × 10−91.6339 × 10−171.8274 × 10−8
Table 9. Evaluation indicators and variable values for Figure 11.
AlgorithmDesign VariablesEvaluation Indicators (Weight)
z 1 z 2 z 3 z 4 z 5 BestMeanStdWorst
JS6.01125.31554.49043.50122.15541.33651.33654.7910 × 10−121.3365
EJS6.01605.30924.49433.50152.15271.33651.33653.0445 × 10−151.3365
ALO6.02105.31214.48443.50272.15351.33651.33651.0989 × 10−101.3366
GOA5.94515.36734.53453.51242.11911.33661.33702.2100 × 10−71.3381
GWO6.02515.31714.47903.49242.16061.33651.33664.0520 × 10−101.3366
MFO5.98505.36104.47943.51372.13641.33661.33695.6538 × 10−81.3375
MVO6.09005.24984.50823.49082.13841.33671.33701.9942 × 10−71.3382
WOA6.57885.36484.72804.04431.56571.34891.44677.4364 × 10−31.6955
SCA5.76915.42454.71143.27312.80911.34941.37802.0906 × 10−41.4005
HHO6.31775.26924.34443.43162.15281.33681.33871.5729 × 10−61.3413
Table 10. Evaluation indicators and variable values for Figure 12.
AlgorithmDesign VariablesEvaluation Indicators (Weight)
z A 1 z A 2 MinimumMeanStdWorst
JS0.788620.40841263.8958263.89582.7666 × 10−11263.8958
EJS0.788670.40825263.8958263.89582.3809 × 10−26263.8958
ALO0.787960.41027263.8962263.89593.9186 × 10−8263.8967
GOA0.789720.40529263.8966263.99625.2969 × 10−2264.7909
GWO0.789990.40457263.8992263.89772.5911 × 10−6263.9010
MFO0.785600.41702263.9028263.93052.6756 × 10−3264.0610
MVO0.787620.41125263.8966263.89698.2328 × 10−7263.8990
WOA0.791800.39949263.9029264.06234.9253 × 10−2264.7084
SCA0.795820.38879263.9704264.92531.7790 × 101282.8427
HHO0.772580.45580264.0975264.00891.6864 × 10−2264.3323
Table 11. Evaluation indicators and the variable values for Figure 13.
AlgorithmDesign Variables Minimum Mass
U 1 U 2 U 3 U 4 U 5 U 6 U 7 U 8
JS0.00663750.0453193.63030.00125691.97730.785420.163273.9084464.5255
EJS0.00882420.0405093.61380.00102991.99410.774520.157173.9438464.5177
ALO3.59400000.0285653.49830.00100074.56480.770500.133633.7717464.6441
GOA0.00100000.0520983.43720.01176204.97530.709380.119533.8916464.5766
GWO0.03528400.1014003.64330.01865401.98270.772680.135973.9080464.8678
MFO0.00100000.0542393.49710.00100001.96240.786020.155054.0293464.6413
MVO0.06313100.0314783.69630.00188942.11640.786970.147663.8506464.5775
WOA0.01606900.6593604.38020.14965003.68781.517601.258702.2564481.5535
SCA0.08907500.1418503.58270.00100002.54810.668400.309843.8077468.2995
HHO0.00100000.1626903.42980.03483801.83630.745990.181964.0619468.0012
Table 12. Evaluation indicators of all the algorithms for Figure 13.
AlgorithmMinimumWorstMeanStd
JS464.5255464.6061464.55380.00043794
EJS464.5177464.5437464.52554.7167 × 10−5
ALO464.6441566.3295483.1816.5387
GOA464.5766553.7468483.3067817.8789
GWO464.8678466.1551465.33560.13529
MFO464.6413521.802467.8903161.3347
MVO464.5775467.4785464.96830.38278
WOA481.5535629.2815534.50161999.6866
SCA468.2995533.837507.8849685.4388
HHO468.0012508.5609475.741683.3678

Share and Cite

Hu, G.; Wang, J.; Li, M.; Hussien, A.G.; Abbas, M. EJS: Multi-Strategy Enhanced Jellyfish Search Algorithm for Engineering Applications. Mathematics 2023, 11, 851. https://doi.org/10.3390/math11040851

