Article

An Enhanced Neural Network Algorithm with Quasi-Oppositional-Based and Chaotic Sine-Cosine Learning Strategies

1 State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
2 State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China
* Authors to whom correspondence should be addressed.
Entropy 2023, 25(9), 1255; https://doi.org/10.3390/e25091255
Submission received: 30 July 2023 / Revised: 17 August 2023 / Accepted: 22 August 2023 / Published: 24 August 2023
(This article belongs to the Special Issue Swarm Intelligence Optimization: Algorithms and Applications)

Abstract

Global optimization problems have been a research topic of great interest in various engineering applications, and the neural network algorithm (NNA) is one of the most widely used methods for solving them. However, when tackling complex optimization problems, the NNA inevitably suffers from entrapment in poor local optima and from slow convergence. To overcome these problems, an improved neural network algorithm with quasi-oppositional-based and chaotic sine-cosine learning strategies is proposed, which speeds up convergence and avoids trapping in local optima. Firstly, quasi-oppositional-based learning helps the improved algorithm balance exploration and exploitation of the search space. Meanwhile, a new logistic chaotic sine-cosine learning strategy, formed by integrating logistic chaotic mapping with the sine-cosine strategy, enhances the algorithm's ability to jump out of local optima. Moreover, a dynamic tuning factor based on piecewise linear chaotic mapping is utilized to adjust the exploration space and improve convergence performance. Finally, the validity and applicability of the proposed algorithm are evaluated on the challenging CEC 2017 functions and three engineering optimization problems. The experimental comparisons of averages, standard deviations, and Wilcoxon rank-sum tests reveal that the presented algorithm achieves excellent global optimality and convergence speed on most functions and engineering problems.

1. Introduction

In contemporary practical applications, it is imperative to tackle a wide variety of optimization problems. These encompass the optimization of route planning [1,2], production scheduling [3,4], energy systems [5], nonlinear programming [6], supply chains [7], facility layout [8], medical registration [9], and unmanned systems [10], among others. Such problems typically involve an enormous amount of information and many constraints, so conventional algorithms struggle to find an optimal solution within a reasonable timeframe. Consequently, investigating efficient approaches to these intricate optimization problems has become an extremely challenging research domain. Through sustained effort, researchers have developed numerous optimization methods, commonly classified into deterministic and meta-heuristic approaches to intricate optimization problems.
Deterministic methods can be described as problem-solving approaches that rely on rigorous logic and mathematical models, effectively utilizing gradient information to search for optimal or near-optimal solutions [11]. However, their strong dependence on the initial starting point makes them prone to producing identical results. In the real world, optimization problems are often highly intricate and exhibit nonlinear characteristics [12], frequently involving multiple local optima in the objective function. Consequently, deterministic methods often have difficulty escaping local minima when dealing with complex optimization problems [13,14]. In contrast, metaheuristics are inspired by phenomena observed in nature and simulate these phenomena to optimize and solve problems efficiently, without relying on complex gradient information or mathematical principles, thereby exploring optimal solutions more effectively [15,16,17]. For instance, the grey wolf optimizer (GWO) [18] replicates the social behavior of grey wolves during the search for prey; the artificial immune algorithm (AIA) [19] mimics the evolutionary process of the human immune system to adaptively adjust solution quality; and ant colony optimization (ACO) [20] emulates the pheromone-based foraging behavior of ants. It is noteworthy that the parameters of metaheuristic algorithms can be classified into two categories [21]: common parameters and specific parameters. Common parameters govern the basic behavior of any algorithm, such as the population size and termination criteria. Specific parameters, on the other hand, are tailored to the unique characteristics of individual algorithms. For instance, in simulated annealing (SA) [22], configuring the initial temperature and cooling rate is crucial for achieving optimal outcomes. Given the sensitivity of such algorithms to their input data, improper tuning of specific parameters may increase the computational effort or cause entrapment in local optima when treating different kinds of problems.
Heuristic algorithms that require no algorithm-specific parameters have therefore gained immense relevance. The neural network algorithm (NNA) [23], which draws inspiration from artificial neural networks and biological nervous systems, emerged in 2018 as a promising method for achieving globally optimal solutions. A distinguishing trait of the NNA compared with many well-known heuristic algorithms is that it relies only on common parameters; no extra parameters are required. This universality dramatically enhances its adaptability across a range of engineering applications. Nevertheless, the NNA is confronted with two notable limitations: susceptibility to local optima and sluggish convergence speed. Therefore, many improved algorithms have been proposed to ameliorate these defects of the NNA. For example, the competitive learning chaos neural network algorithm (CCLNNA) [24] integrates the NNA with competitive mechanisms and chaotic mapping; TLNNA [25] is an effective hybrid of the TLBO algorithm and the NNA; the grey wolf optimization neural network algorithm (GNNA) [26] combines GWO with the NNA; and a neural network algorithm with dropout using elite selection [27] introduces the dropout strategy of neural networks together with a proposed elite selection strategy. Moreover, by the no free lunch theorem [28], no single algorithm can be applied to all optimization problems. It is therefore essential to keep refining existing algorithms, developing novel ones, and integrating multiple algorithms to obtain better results in practical applications. In this paper, a quasi-oppositional and chaotic sine-cosine neural network algorithm (QOCSCNNA) is proposed to boost the global search capability and refine the convergence performance of the NNA. The main contributions of this work are listed below:
  • To maintain the population diversity of QOCSCNNA, quasi-oppositional-based learning (QOBL) [29] is introduced, in which quasi-opposite solutions are randomly generated between the center of the solution space and the opposite points. This helps balance exploration and exploitation and makes the population more likely to lie close to the optimal solution.
  • By integrating logistic chaotic mapping [30] with the sine-cosine strategy [31], a new logistic chaotic sine-cosine learning strategy (LCSC) is proposed that helps the algorithm escape local optima in the bias-strategy phase.
  • To improve the convergence performance of QOCSCNNA, a dynamic tuning factor based on piecewise linear chaotic mapping [32] is employed to adjust the probability of applying the bias and transfer operators.
  • The optimization performance of QOCSCNNA was verified on 29 numerical optimization problems from the CEC 2017 test suite [33], as well as three real-world engineering constraint problems.
The remainder of this paper is organized as follows: a brief introduction to the original NNA is given in Section 2. Section 3 describes the proposed QOCSCNNA in detail. Section 4 validates the performance of QOCSCNNA on the CEC 2017 test suite and explores its application to real-world engineering design problems. Finally, the main conclusions of this paper are summarized in Section 5 and further research directions are proposed.

2. NNA

Artificial neural networks (ANNs) are mathematical models based on the principles of biological neural networks, aiming to simulate the information-processing mechanisms of the human brain. ANNs are used for prediction primarily by receiving input and output data and inferring the relationship between them. The input data are typically obtained through experiments, computations, and other means, and the weights are iteratively adjusted to minimize the error between the predicted solution and the target solution, as shown in Figure 1. However, the target solution is sometimes unknown. To address this, the authors of the NNA treat the current best solution as the target solution and keep adjusting the weights of each neuron to approach it. The NNA is a population-based evolutionary algorithm that involves initializing the population, updating the weight matrices, and applying the bias and transfer operators.

2.1. Initial Population

In the NNA, the population is updated using a neural-network-like model. In the search space, the initial population $X^r = [x_1^r, x_2^r, \ldots, x_{N_p}^r]$ is updated through the weight matrix $W^r = [w_1^r, w_2^r, \ldots, w_{N_p}^r]$ at any generation $r$. Here, $x_i^r$ represents the $i$th individual vector, with $D$ dimensions, so that $x_i^r = [x_{i,1}^r, x_{i,2}^r, \ldots, x_{i,D}^r]$, and $w_i^r$ represents the $i$th weight vector, with $N_p$ components, so that $w_i^r = [w_{i,1}^r, w_{i,2}^r, \ldots, w_{i,N_p}^r]$, where $i = 1, 2, \ldots, N_p$.
It is desirable to impose constraints on the weights associated with new model solutions so that significant biases are prevented in the generation and transmission of these solutions. In this way, the NNA regulates its behavior through subtle deviations. After the weights are initialized, the one corresponding to the desired solution ($X_{target}$), i.e., the target weight ($W_{target}$), is chosen from the weight matrix $W$. The weight matrix must adhere to the following conditions:
$$\sum_{j=1}^{N_p} w_{i,j}^r = 1, \quad i = 1, 2, \ldots, N_p \qquad (1)$$
where
$$w_{i,j} \sim U(0, 1), \quad i, j = 1, 2, \ldots, N_p \qquad (2)$$
In addition, the new population at the $(r+1)$th iteration is generated by:

$$x_{j,new}^{r+1} = \sum_{i=1}^{N_p} w_{i,j}^r \times x_i^r, \quad j = 1, 2, \ldots, N_p \qquad (3)$$

$$x_i^{r+1} = x_i^r + x_{i,new}^{r+1}, \quad i = 1, 2, \ldots, N_p \qquad (4)$$

where $N_p$ is the population size, $r$ is the current iteration number, and $x_{i,new}^r$ is the weighted solution of the $i$th individual at iteration $r$.
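As an illustration of Equations (3) and (4), the following NumPy sketch generates the new pattern solutions for a whole population at once. It is a minimal reading of the update rule, not the authors' reference implementation, assuming a population matrix `X` of shape `(Np, D)` and a weight matrix `W` of shape `(Np, Np)` whose rows sum to 1:

```python
import numpy as np

def nna_new_population(X, W):
    """New pattern solutions per Equations (3) and (4).

    X: (Np, D) population matrix; W: (Np, Np) weight matrix, rows summing to 1.
    """
    X_new = W.T @ X   # Equation (3): x_{j,new} = sum_i w_{i,j} * x_i
    return X + X_new  # Equation (4): x_i <- x_i + x_{i,new}
```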

2.2. Update Weight Matrix

The weight matrix is then adjusted based on the desired target weight ( W t a r g e t ) using the following formula:
$$w_i^{r+1} = w_i^r + 2 \times rand(0,1) \times (w_{target}^r - w_i^r), \quad i = 1, 2, \ldots, N_p \qquad (5)$$
where $w_{target}^r$ is the optimal target weight vector obtained at each iteration.
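A corresponding sketch of Equation (5), again only illustrative: after moving each weight vector toward the target, the rows are renormalized so that the unity-sum constraint of Equation (1) continues to hold (the absolute-value-and-renormalize step is our assumption about how the constraint is maintained):

```python
import numpy as np

def nna_update_weights(W, target_idx, rng):
    """Move every weight vector toward the target weight (Equation (5))."""
    w_target = W[target_idx]
    W = np.abs(W + 2.0 * rng.random(W.shape) * (w_target - W))
    return W / W.sum(axis=1, keepdims=True)  # restore Equation (1) (assumed)
```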

2.3. Bias Operator

To enhance the global search capability of the NNA, a bias operator is incorporated to perturb a fraction of the pattern solutions generated from the new population and the updated weight matrices. A modification factor $\beta$ defines the probability of adjusting a pattern solution. Initially, $\beta$ is set to 1 and progressively decreased in each iteration. The update process is:
$$\beta^{r+1} = \beta^r \times 0.99, \quad r = 1, 2, \ldots, T_{max} \qquad (6)$$
The bias operator encompasses two components: the bias population and the bias weight matrix. To begin, a number $P_n = \lceil D \times \beta^r \rceil$ and a set $P$ are generated, where $P$ denotes a set of $P_n$ integers randomly selected from the range 1 to $D$. Let $L = (l_1, l_2, \ldots, l_D)$ and $U = (u_1, u_2, \ldots, u_D)$ be the lower and upper limits of the variables. Consequently, the bias population can be formulated as follows:
$$x_{i,P(s)}^r = l_{P(s)} + (u_{P(s)} - l_{P(s)}) \times \alpha_1, \quad s = 1, 2, \ldots, P_n \qquad (7)$$
where $\alpha_1$ is a random number between 0 and 1 that obeys a uniform distribution. The bias weight matrix likewise involves two variables: $P_w$, a number determined by the formula $\lceil N_p \times \beta^r \rceil$, and $Q$, a set of $P_w$ integers randomly chosen between 1 and $N_p$. The bias weight matrix can therefore be formulated as follows:
$$w_{i,Q(t)}^r = \alpha_2, \quad t = 1, 2, \ldots, P_w \qquad (8)$$
where α 2 is a random number between 0 and 1, following a uniform distribution.
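The following sketch puts Equations (7) and (8) together for a single individual. The ceiling-based counts follow the definitions above, while the final renormalization of the weight vector is our assumption, made to preserve Equation (1):

```python
import numpy as np

def nna_bias(x, w, beta, lower, upper, rng):
    """Bias operator (Section 2.3): redraw a beta-fraction of the variables
    of a pattern solution x (Equation (7)) and of its weight vector w (Equation (8))."""
    D, Np = x.size, w.size
    x, w = x.copy(), w.copy()
    idx = rng.choice(D, size=int(np.ceil(beta * D)), replace=False)
    x[idx] = lower[idx] + (upper[idx] - lower[idx]) * rng.random(idx.size)  # Eq. (7)
    jdx = rng.choice(Np, size=int(np.ceil(beta * Np)), replace=False)
    w[jdx] = rng.random(jdx.size)                                           # Eq. (8)
    return x, w / w.sum()  # renormalization assumed, to keep Equation (1)
```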

2.4. Transfer Operator

A transfer function operator (TF) is introduced to move the new pattern solution from its current position toward a new position in the search space closer to the target solution ($x_{target}^r$). This operator is defined as:
$$x_i^{r+1} = x_i^r + 2 \times \alpha_3 \times (x_{target}^r - x_i^r), \quad i = 1, 2, \ldots, N_p \qquad (9)$$
where $\alpha_3$ is a random number between 0 and 1 that follows a uniform distribution. Based on the above, the overall NNA framework is given as pseudocode in Algorithm 1.
Algorithm 1: The pseudocode of the NNA algorithm
  • Initialize the population $X^r$ and the weight matrix $W^r$
  • Calculate the fitness value of each solution, then set $X_{target}$ and $W_{target}$
  • for i = 1 : $N_p$
  •   Generate the new solution $x_i^r$ by Equation (3) and the new weight matrix $w_i^r$ by Equation (5)
  •   if rand ≤ $\beta^r$
  •    Perform the bias operator for $x_i^{r+1}$ by Equation (7) and for the weight matrix $w_i^{r+1}$ by Equation (8)
  •   else
  •    Perform the transfer function operator for $x_i^r$ via Equation (9)
  •   end if
  • end for
  • Generate the new modification factor $\beta^{r+1}$ by Equation (6)
  • Calculate the fitness value of each solution and find the optimal solution and the optimal weight
  • Repeat from the for-loop until the stop condition is met
  • Post-process the results and visualize
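Gathering the pieces, a compact end-to-end sketch of Algorithm 1 might look as follows. It is an illustrative reading of the pseudocode above, not the authors' reference implementation; the boundary clipping and the greedy retention of the best solution are our assumptions:

```python
import numpy as np

def nna(fobj, lower, upper, n_pop=50, max_iter=500, seed=0):
    """Minimal NNA loop following Algorithm 1 (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    D = lower.size
    X = lower + (upper - lower) * rng.random((n_pop, D))
    W = rng.random((n_pop, n_pop))
    W /= W.sum(axis=1, keepdims=True)              # Equations (1)-(2)
    fit = np.array([fobj(x) for x in X])
    best = fit.argmin()
    x_t, w_t, f_t, beta = X[best].copy(), W[best].copy(), fit[best], 1.0

    for _ in range(max_iter):
        X = X + W.T @ X                            # Equations (3)-(4)
        W = np.abs(W + 2 * rng.random(W.shape) * (w_t - W))
        W /= W.sum(axis=1, keepdims=True)          # Equation (5)
        for i in range(n_pop):
            if rng.random() <= beta:               # bias operator, Equations (7)-(8)
                idx = rng.choice(D, size=int(np.ceil(beta * D)), replace=False)
                X[i, idx] = lower[idx] + (upper[idx] - lower[idx]) * rng.random(idx.size)
            else:                                  # transfer operator, Equation (9)
                X[i] = X[i] + 2 * rng.random(D) * (x_t - X[i])
        X = np.clip(X, lower, upper)               # assumed boundary handling
        beta *= 0.99                               # Equation (6)
        fit = np.array([fobj(x) for x in X])
        best = fit.argmin()
        if fit[best] < f_t:                        # greedy retention (assumed)
            x_t, w_t, f_t = X[best].copy(), W[best].copy(), fit[best]
    return x_t, f_t
```

For example, `nna(lambda x: np.sum(x**2), np.full(10, -100.0), np.full(10, 100.0))` minimizes a sphere function over $[-100, 100]^{10}$.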

3. Quasi-Oppositional-Based Chaotic Sine-Cosine Neural Network Algorithm

3.1. Quasi-Oppositional-Based Learning Strategy

Opposition-based learning (OBL) theory [34] was proposed by Tizhoosh to consider existing solutions and their opposites jointly so as to improve the quality of candidate solutions. The OBL strategy can provide more accurate candidate solutions. Moreover, OBL theory evolved into the quasi-oppositional-based learning (QOBL) approach, whose candidate solutions have a higher probability of approaching the unknown global optimum than those generated by OBL [29]. To enhance solution quality and convergence speed, researchers have integrated QOBL into metaheuristic methods.
The opposite point is the point symmetric to a given point with respect to the center of the solution space. Figure 2 shows the positions of the current point $X$, the opposite point $\tilde{X}$, and the quasi-opposite point $\tilde{QX}$ within the one-dimensional interval $[A, B]$. If $X = (x_1, x_2, \ldots, x_n)$ represents a point in an $n$-dimensional space, where each coordinate $x_i \in [a_i, b_i]$ for $i = 1, 2, \ldots, n$, the opposite point $\tilde{X} = (\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_n)$ corresponding to $X$ is given by:
$$\tilde{x}_i = a_i + b_i - x_i \qquad (10)$$
Furthermore, the quasi-opposite point $\tilde{QX} = (\tilde{qx}_1, \tilde{qx}_2, \ldots, \tilde{qx}_n)$ is randomly generated between the opposite point and the center $m_i = (a_i + b_i)/2$ of the solution space. The quasi-opposite point $\tilde{QX}$ of $\tilde{X}$ can be generated as follows [29]:
$$\tilde{qx}_i = \begin{cases} m_i + (\tilde{x}_i - m_i) \times k, & m_i < \tilde{x}_i \\ \tilde{x}_i + (m_i - \tilde{x}_i) \times k, & m_i > \tilde{x}_i \end{cases} \qquad (11)$$
where k is a uniformly distributed random number between 0 and 1.
In this study, QOBL performs the initialization and generation jumping of QOCSCNNA. In the initialization phase, quasi-opposite populations are created from the randomly generated initial population, and the better individuals define the starting population. In the generation jumping phase, the algorithm jumps during the selection process to the solution with the better fitness value; a greedy strategy decides whether to keep the current solution or leap to its quasi-opposite counterpart. The pseudocode for the QOBL strategy is presented in Algorithm 2.
Algorithm 2: QOBL Strategy
for i = 1 : $N_p$
 for j = 1 : $D$
   $\tilde{x}_j = a_j + b_j - x_j$
   $m_j = (a_j + b_j)/2$
   if $m_j < \tilde{x}_j$
     $\tilde{qx}_j = m_j + (\tilde{x}_j - m_j) \times k$
   else
     $\tilde{qx}_j = \tilde{x}_j + (m_j - \tilde{x}_j) \times k$
   end if
 end for
end for
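In vectorized form, the quasi-opposition of Equations (10) and (11) for a whole population can be sketched as below (a minimal reading of Algorithm 2; `lower` and `upper` are the per-dimension bounds):

```python
import numpy as np

def quasi_opposite(X, lower, upper, rng):
    """Quasi-opposite population per Equations (10)-(11)."""
    X_opp = lower + upper - X                  # opposite points, Equation (10)
    m = (lower + upper) / 2.0                  # interval centers
    k = rng.random(X.shape)                    # uniform factor in (0, 1)
    return np.where(m < X_opp,
                    m + (X_opp - m) * k,       # Equation (11), case m < x~
                    X_opp + (m - X_opp) * k)   # Equation (11), case m > x~
```

A greedy selection over the union of `X` and `quasi_opposite(X, ...)` then yields the retained population, as in Algorithm 4.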

3.2. Chaotic Sine-Cosine Learning Strategy

3.2.1. Sine-Cosine Learning Strategy

To improve the performance of meta-heuristic algorithms, Mirjalili introduced the sine-cosine learning strategy (SCLS) in his research [31]. The core idea of this strategy is that the current solution is updated using sine and cosine functions, which effectively prevents the algorithm from falling into a local optimum. The update rule is defined as follows [31]:
$$x_{i,j}^{r+1} = \begin{cases} x_{i,j}^r + u_1 \times \sin(u_2) \times \left| u_3 \times x_{target}^r - x_{i,j}^r \right|, & u_4 < 0.5 \\ x_{i,j}^r + u_1 \times \cos(u_2) \times \left| u_3 \times x_{target}^r - x_{i,j}^r \right|, & u_4 \ge 0.5 \end{cases} \qquad (12)$$
where $r$ is the current iteration number; $x_{i,j}^r$ is the position of the $i$th individual in the $j$th dimension at iteration $r$; and $x_{target}^r$ is the optimal solution of the previous generation. $u_2$ is a random number between 0 and $2\pi$ that acts as the angular component of the move. $u_3$ is a random number between 0 and 2, controlling the distance to the optimal solution and maintaining the diversity of the population. $u_4$ is a random number between 0 and 1. $u_1$ is the sine-cosine amplitude adjustment factor, set as follows:
$$u_1 = \frac{r}{R} \qquad (13)$$
where R is the maximum number of iterations.
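A sketch of one SCLS move per Equations (12) and (13); the draw of $u_2$ from $[0, 2\pi)$ follows the SCA convention and is an assumption here:

```python
import numpy as np

def scls_step(x, x_target, r, R, rng):
    """One sine-cosine update of a solution vector x (Equations (12)-(13))."""
    u1 = r / R                            # amplitude factor, Equation (13)
    u2 = 2 * np.pi * rng.random(x.size)   # angle in [0, 2*pi)
    u3 = 2 * rng.random(x.size)           # distance control in [0, 2)
    u4 = rng.random(x.size)               # per-dimension sine/cosine switch
    trig = np.where(u4 < 0.5, np.sin(u2), np.cos(u2))
    return x + u1 * trig * np.abs(u3 * x_target - x)
```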

3.2.2. Logistic Chaos Mapping

The exploratory potential of chaos-based optimization can be further enhanced by leveraging the ergodicity and stochastic attributes of chaotic variables to increase the diversity of the population [35]. In this work, the well-known logistic chaos mapping (LCM) is chosen to generate chaotic candidate solutions. The logistic mapping is formulated as follows [30]:
$$c_{z+1} = p \times c_z \times (1 - c_z) \qquad (14)$$
where $c_z \in (0, 1)$ for all $z \in \{0, 1, \ldots, N_p\}$ and $p = 3.8$.
The chaotic candidate solution based on the logistic sequence is then mapped into the search range as follows:

$$\delta_z = l_z + (u_z - l_z) \times c_z \qquad (15)$$
In the bias operator, a novel strategy, the logistic chaotic sine-cosine learning strategy (LCSC), is formed by integrating the LCM with the SCLS, making the candidate solutions more chaotic so as to explore the design space. This mechanism serves as a preventive measure against premature convergence in subsequent iterations. The new solution is generated as follows:
$$x_{i,j}^{r+1} = \begin{cases} \delta_z + u_1 \times \sin(u_2) \times \left| u_3 \times c_z - x_{i,j}^r \right|, & u_4 < 0.5 \\ \delta_z + u_1 \times \cos(u_2) \times \left| u_3 \times c_z - x_{i,j}^r \right|, & u_4 \ge 0.5 \end{cases} \qquad (16)$$
where $r$ is the current iteration number and $x_{i,j}^r$ is the position of the $i$th individual in the $j$th dimension at iteration $r$. $u_1$ is the sine-cosine amplitude adjustment factor defined in Equation (13). $u_2$ is a random number between 0 and $2\pi$, $u_3$ is a random number between 0 and 2, and $u_4$ is a random number between 0 and 1. The pseudocode of the bias operator modified by the LCSC is given in Algorithm 3.
Algorithm 3: The Bias Operator
$P_n$ signifies the number of biased variables in the new pattern solution
$P_w$ signifies the number of biased weights in the updated weight matrix
for i = 1 : $N_p$
 if rand ≤ $\beta$
  % Bias for the new pattern solution %
   $P_n = \mathrm{round}(D \times \beta)$
  Update the chaotic sequence $\delta_z$ using Equations (14)-(15)
  for j = 1 : $P_n$
    Update the new pattern solution $x_{i,\,\mathrm{randint}(1,D)}^{r+1}$ by Equation (16)
  end for
  % Bias for the updated weights %
   $P_w = \mathrm{round}(N_p \times \beta)$
  for j = 1 : $P_w$
   Update the weight $w_{i,\,\mathrm{randint}(1,N_p)}^r$ by Equation (8)
  end for
 end if
end for
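A compact sketch of one LCSC perturbation per Equations (14)-(16); `c` is the current scalar state of the logistic sequence, and the function returns the new solution along with the updated state (an illustrative reading, with $u_2 \in [0, 2\pi)$ assumed as before):

```python
import numpy as np

def lcsc_step(x, c, lower, upper, r, R, rng, p=3.8):
    """One logistic chaotic sine-cosine update (Equations (14)-(16))."""
    c = p * c * (1.0 - c)                    # logistic map, Equation (14)
    delta = lower + (upper - lower) * c      # chaotic candidate, Equation (15)
    u1 = r / R                               # amplitude, Equation (13)
    u2 = 2 * np.pi * rng.random(x.size)
    u3 = 2 * rng.random(x.size)
    trig = np.where(rng.random(x.size) < 0.5, np.sin(u2), np.cos(u2))
    return delta + u1 * trig * np.abs(u3 * c - x), c   # Equation (16)
```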

3.3. The Dynamic Tuning Factor

Since the modification factor of the bias operator decreases as the number of iterations increases, a piecewise linear chaotic map (PWLCM) [32] is introduced to tune it dynamically, so that the chances of running the different learning strategies adapt over the iterations and help QOCSCNNA converge faster. The PWLCM is defined in Equation (17):
$$Z^{r+1} = \begin{cases} Z^r / k, & Z^r \in (0, k) \\ (1 - Z^r)/(1 - k), & Z^r \in [k, 1) \end{cases} \qquad (17)$$
where $Z^r$ represents the mapping value at the $r$th iteration and $k$ is a control parameter with $0 < k < 1$.
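As a one-line illustration of Equation (17) (the value $k = 0.7$ is an assumed setting, not one reported by the paper):

```python
def pwlcm(z, k=0.7):
    """Piecewise linear chaotic map, Equation (17)."""
    return z / k if z < k else (1.0 - z) / (1.0 - k)
```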
In this study, the improved algorithm obtained by fusing the original NNA with the QOBL strategy, the LCSC-based bias operator, and the PWLCM tuning factor is called QOCSCNNA. The detailed flowchart is shown in Figure 3, and the pseudocode for QOCSCNNA is given in Algorithm 4.
Algorithm 4: QOCSCNNA algorithm
  • Initialize the iteration counter $r$ ($r = 1$) and the dynamic tuning factor $\beta^r$ ($\beta^1 = 1$)
  • Randomly generate an initial population $X$
  • Generate quasi-opposite solutions $\tilde{QX}_i$ using Algorithm 2
  • Calculate the fitness values of the combined set $\{X, \tilde{QX}\}$, then apply greedy selection to obtain $\tilde{QH}$
  • Randomly generate the weight matrix subject to the constraints in Equations (1) and (2)
  • Set the optimal solution $x_{target}^r$ and the optimal weight $w_{target}^r$
  • While $r < r_{max}$
  •  Generate the new pattern solution $x_i^r$ by Equations (3)-(4) and the new weight matrix $w_i^r$ by Equation (5)
  •  Calculate the fitness value of $X$
  •  if $fit(\tilde{QH}_i) < fit(X_i)$
  •    $X_i = \tilde{QH}_i$
  •    $fit(X_i) = fit(\tilde{QH}_i)$
  •  end if
  •  if rand ≤ $\beta^r$
  •   Perform the bias operator using Algorithm 3
  •  else
  •   Perform the transfer function operator for $x_i^r$ via Equation (9)
  •  end if
  • Calculate the fitness value of each solution and find the optimal solution $x_{target}^{r+1}$ and the optimal weight $w_{target}^{r+1}$
  • Update the current number of iterations by $r = r + 1$
  • Update the dynamic tuning factor $\beta^{r+1}$ by Equation (17)
  • End while

4. Numerical Experiments and Result Analysis

This section examines the performance of the proposed QOCSCNNA on numerical optimization problems and is divided into three subsections. Section 4.1 details the CEC 2017 test functions and the experimental environment that ensures the reliability of the results. Section 4.2 provides a comparative analysis between QOCSCNNA and eight other metaheuristics on the CEC 2017 functions, validating the effectiveness of the improvements. Finally, Section 4.3 compares the performance of the algorithm with other algorithms on three engineering problems of practical significance.

4.1. Experiment Setup

The broadly used CEC 2017 test suite [33] is specifically dedicated to evaluating the performance of complex optimization algorithms. The suite consists of 30 test functions covering a wide range of requirements, giving a comprehensive insight into the performance characteristics of optimization algorithms. The F2 test function is excluded because of its unstable behavior, so only 29 functions are tested. These functions fall into four categories with diverse levels of complexity. First, the unimodal functions (F1, F3) have a single clear optimum and are suitable for assessing behavior on simple problems. Second, the simple multimodal functions (F4–F9) have multiple local optima and test the robustness and convergence of an algorithm during local search. Third, the hybrid functions (F11–F20) combine unimodal and multimodal characteristics and are closer to complex real-world problems, enabling a comprehensive assessment of the global and local search capability of algorithms. Finally, the composition functions (F21–F30) are built by combining several basic functions. The specific functions are shown in Table 1.
Furthermore, all algorithms were placed under the same test conditions to ensure fairness; experiments were conducted in MATLAB R2022a under macOS 12.3 on an Apple M1 machine. On the CEC 2017 suite, the population size was set to 50 and the dimensionality to 10. To fully evaluate the performance of the algorithms, the maximum number of function evaluations was set to 20,000 times the population size. This setup ensures a thorough exploration of the search space, thus improving the optimization results. The other parameters of the compared algorithms were taken directly from their original references to keep the results consistent. Moreover, each algorithm was run 30 times independently to obtain reliable results, and the average (AVG) and standard deviation (STD) of the obtained results were logged.

4.2. QOCSCNNA for Unconstrained Benchmark Functions

To evaluate the performance of the improved algorithm, QOCSCNNA was compared with eight other well-known optimization algorithms: NNA, CSO [36], SA [22], HHO [37], WOA [38], SCA [31], WDE [39], and RSA [40]. Based on the experimental settings outlined in Section 4.1, the average (AVG) and standard deviation (STD) of the minimum fitness values obtained on the CEC 2017 benchmark functions are presented in Table A1. Compared with the other algorithms, QOCSCNNA demonstrated significant superiority in terms of both AVG and STD on the CEC 2017 functions. Moreover, given the limited evaluation budget, QOCSCNNA achieved relatively small means and standard deviations on a range of functions, including F1, F4, F5, F7, F8, F10–F17, F19–F21, F27, F29, and F30. These results highlight QOCSCNNA's superior ability to tackle optimization problems characterized by complexity and hybridity.
The results of the Wilcoxon rank-sum test ("+", "=", and "−" indicate that QOCSCNNA performs better than, the same as, or worse than the compared algorithm, respectively) are shown in Table A2 to better compare the performance of the different algorithms. As can be seen in the last row of Table A2, QOCSCNNA achieved significantly superior results to SA, SCA, RSA, and CSO on at least 28 of the test functions, beat HHO and WOA on at least 26 functions, and exceeded WDE and NNA on 23 functions. In other words, the average superiority rate of QOCSCNNA over the 29 functions is $\left(\sum_{i=1}^{8} (+)_i\right) / (29 \times 8) \times 100\% = 214/232 \times 100\% \approx 92.24\%$. These results indicate that the proposed learning strategies can effectively improve the optimization capability of the NNA.
Figure 4 gives nine convergence plots of QOCSCNNA and the comparison algorithms on the CEC 2017 test set, for F1, F8, F10, F12, F16, F21, F24, F29, and F30; the vertical axis is the logarithm of the minimum function value and the horizontal axis is the number of function evaluations. Although QOCSCNNA sometimes does not perform best in the initial phase, as the number of evaluations increases it keeps jumping out of local optima and finds smaller fitness values. This good performance arises because the exploration of QOBL enhances the global search capability.

4.3. Real-World Engineering Design Problems

Furthermore, to validate the feasibility of QOCSCNNA in actual engineering applications, multiple algorithms were applied to three critical engineering design problems: the cantilever beam structure (CB) [41], car side impact (CSI) [41], and tension spring (TS) [41]. For all three problems, the population size was set to 50 with a budget of 2000 iterations times the population size, and each algorithm was run 30 times independently to obtain reliable results. Such settings ensure a thorough exploration of the search space, leading to improved optimization results. Additionally, the solutions provided by QOCSCNNA were compared with those of well-known algorithms to better evaluate its performance.

4.3.1. CB Engineering Design Problem

The CB structural engineering design problem involves the weight optimization of a cantilever beam with a square cross-section. The beam is rigidly supported at one end, and a vertical force acts on the free node of the cantilever. A model of the CB design problem is illustrated in Figure 5. The beam consists of five hollow squares of constant thickness, the height (or width) of each square being a decision variable, while the thickness remains fixed at 2/3. The objective function of this design problem is given by Equation (18).
$$F(x)_{min} = 0.0624 \times (x_1 + x_2 + x_3 + x_4 + x_5) \qquad (18)$$
Subject to:
$$G(x) = \frac{61}{x_1^3} + \frac{37}{x_2^3} + \frac{19}{x_3^3} + \frac{7}{x_4^3} + \frac{1}{x_5^3} - 1 \le 0 \qquad (19)$$
Variable range:
$$0.01 \le x_i \le 100, \quad i = 1, \ldots, 5 \qquad (20)$$
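For constrained problems like this one, metaheuristics typically minimize a penalized objective. A minimal static-penalty sketch of Equations (18)-(20) is shown below; the penalty coefficient `rho` is an assumed value, and the penalty scheme itself is our illustration rather than the paper's stated constraint handling:

```python
import numpy as np

def cb_penalized(x, rho=1e6):
    """Penalized cantilever-beam objective, Equations (18)-(19)."""
    f = 0.0624 * x.sum()                                                   # Eq. (18)
    g = 61/x[0]**3 + 37/x[1]**3 + 19/x[2]**3 + 7/x[3]**3 + 1/x[4]**3 - 1   # Eq. (19)
    return f + rho * max(0.0, g) ** 2

# e.g., x_best, f_best = nna(cb_penalized, np.full(5, 0.01), np.full(5, 100.0))
```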
This problem has been solved by several researchers using different metaheuristic methods, such as the NNA, WOA, SCA, SA, and PSO used in [42]. Table 2 reveals that the optimal result of QOCSCNNA is 1.3564, and the optimal solution obtained by QOCSCNNA satisfies the constraint in Equation (19), which proves the validity of the optimal solution obtained by QOCSCNNA. In addition, the optimum solutions of WOA and SA are 1.3567 and 1.3569, respectively, very close to the best result of QOCSCNNA. In contrast, the NNA, PSO, and SCA algorithms return poorer optimum solutions, which indicates that these three algorithms are less suitable for this problem. Furthermore, the Wilcoxon rank-sum test results (+, =, and − indicating better, equal, or worse performance of QOCSCNNA compared with the other algorithms) show that QOCSCNNA outperforms NNA, PSO, and SCA. Hence, the proposed QOCSCNNA demonstrates superior feasibility compared with the other algorithms.

4.3.2. CSI Engineering Design Problem

As shown in Table 3, 11 parameters must be considered when minimizing the effect of a side impact on a vehicle. Figure 6 illustrates the model of the CSI crash design problem. The objective function of this design problem can be expressed as Equation (21):
$$F(x)_{min} = 1.98 + 4.90 x_1 + 6.67 x_2 + 6.98 x_3 + 4.01 x_4 + 1.78 x_5 + 2.73 x_7 \qquad (21)$$
Subject to:
$$G_1(x) = 1.16 - 0.3717 x_2 x_4 - 0.00931 x_2 x_{10} - 0.484 x_3 x_9 + 0.01343 x_6 x_{10} - 1 \le 0 \qquad (22)$$

$$G_2(x) = 46.36 - 9.9 x_2 - 12.9 x_1 x_2 + 0.1107 x_3 x_{10} - 32 \le 0 \qquad (23)$$

$$G_3(x) = 33.86 + 2.95 x_3 + 0.1792 x_{10} - 5.057 x_1 x_2 - 11.0 x_2 x_8 - 0.0215 x_5 x_{10} - 9.98 x_7 x_8 + 22.0 x_8 x_9 - 32 \le 0 \qquad (24)$$

$$G_4(x) = 28.98 + 3.818 x_3 - 4.2 x_1 x_2 + 0.0207 x_5 x_{10} + 6.63 x_6 x_9 - 7.7 x_7 x_8 + 0.32 x_9 x_{10} - 32 \le 0 \qquad (25)$$

$$G_5(x) = 0.261 - 0.0159 x_1 x_2 - 0.188 x_1 x_8 - 0.019 x_2 x_7 + 0.0144 x_3 x_5 + 0.0008757 x_5 x_{10} + 0.08045 x_6 x_9 + 0.00139 x_8 x_{11} + 0.00001575 x_{10} x_{11} - 0.32 \le 0 \qquad (26)$$

$$G_6(x) = 0.214 + 0.00817 x_5 - 0.131 x_1 x_8 - 0.0704 x_1 x_9 + 0.03099 x_2 x_6 - 0.018 x_2 x_7 + 0.0208 x_3 x_8 - 0.02 x_2^2 + 0.121 x_3 x_9 - 0.00364 x_5 x_6 + 0.0007715 x_5 x_{10} - 0.0005354 x_6 x_{10} + 0.00121 x_8 x_{11} + 0.00184 x_9 x_{10} - 0.32 \le 0 \qquad (27)$$

$$G_7(x) = 0.74 - 0.61 x_2 - 0.163 x_3 x_8 + 0.001232 x_3 x_{10} - 0.166 x_7 x_9 + 0.227 x_2^2 - 0.32 \le 0 \qquad (28)$$

$$G_8(x) = 4.72 - 0.5 x_4 - 0.19 x_2 x_3 - 0.0122 x_4 x_{10} + 0.009325 x_6 x_{10} + 0.000191 x_{11}^2 - 4 \le 0 \qquad (29)$$

$$G_9(x) = 10.58 - 0.674 x_1 x_2 - 1.95 x_2 x_8 + 0.02054 x_3 x_{10} - 0.0198 x_4 x_{10} + 0.028 x_6 x_{10} - 9.9 \le 0 \qquad (30)$$

$$G_{10}(x) = 16.45 - 0.489 x_3 x_7 - 0.843 x_5 x_6 + 0.0432 x_9 x_{10} - 0.0556 x_9 x_{11} - 0.000786 x_{11}^2 - 15.7 \le 0 \qquad (31)$$
Variable range:
$$0.5 \le x_1, \ldots, x_7 \le 1.5, \quad x_8, x_9 \in \{0.192, 0.345\}, \quad -30 \le x_{10}, x_{11} \le +30 \qquad (32)$$
The CSI problem is a widely studied classical engineering design problem, and many heuristics have been proposed to solve it over the years, including the NNA, SA, WOA, PSO, and SCA. According to the comparative results in Table 4, the presented QOCSCNNA achieves the optimal fitness value of 23.4538 while satisfying the constraints in Equations (22)–(32), which validates the efficacy of the optimal result obtained by QOCSCNNA. In addition, the optimal results of NNA, SA, WOA, PSO, and SCA are all higher than that of QOCSCNNA, indicating that QOCSCNNA holds a clear advantage among the compared algorithms on this problem. The Wilcoxon rank-sum test likewise shows that QOCSCNNA is superior to the other algorithms, further confirming its feasibility.

4.3.3. TS Engineering Design Problem

The goal of the TS problem, illustrated in Figure 7, is to minimize the weight of a tension/compression spring. Minimum deflection, shear stress, surge frequency, outer-diameter limits, and bounds on the design variables must be considered in the design process. The design variables are the wire diameter $d$ (denoted as $x_1$), the mean coil diameter $D$ (denoted as $x_2$), and the number of active coils $N$ (denoted as $x_3$). The problem is described as:
$$F(x)_{min} = (x_3 + 2) x_2 x_1^2 \qquad (33)$$
Subject to:
$$G_1(x) = 1 - \frac{x_2^3 x_3}{71785 x_1^4} \le 0 \qquad (34)$$

$$G_2(x) = \frac{4 x_2^2 - x_1 x_2}{12566 (x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \le 0 \qquad (35)$$

$$G_3(x) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0 \qquad (36)$$

$$G_4(x) = \frac{x_1 + x_2}{1.5} - 1 \le 0 \qquad (37)$$
Variable range:
$$0.05 \le x_1 \le 2, \quad 0.25 \le x_2 \le 1.3, \quad 2 \le x_3 \le 15 \qquad (38)$$
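As with the CB problem, a static-penalty evaluation of Equations (33)-(38) can be sketched as follows (again an illustration; `rho` is an assumed penalty coefficient):

```python
import numpy as np

def ts_penalized(x, rho=1e6):
    """Penalized tension-spring objective, Equations (33)-(37)."""
    d, D, N = x                                    # wire diameter, coil diameter, coils
    f = (N + 2) * D * d**2                         # Equation (33)
    g = np.array([
        1 - D**3 * N / (71785 * d**4),                                   # Eq. (34)
        (4*D**2 - d*D) / (12566 * (D*d**3 - d**4)) + 1/(5108*d**2) - 1,  # Eq. (35)
        1 - 140.45 * d / (D**2 * N),                                     # Eq. (36)
        (d + D) / 1.5 - 1,                                               # Eq. (37)
    ])
    return f + rho * (np.maximum(0.0, g) ** 2).sum()

# e.g., nna(ts_penalized, np.array([0.05, 0.25, 2.0]), np.array([2.0, 1.3, 15.0]))
```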
Several researchers have applied various meta-heuristics to this problem, including the NNA, SA, WOA, PSO, and HHO. Table 5 lists the optimal solutions obtained by QOCSCNNA and the comparison algorithms; the proposed QOCSCNNA obtains the best solution, i.e., 0.0127, and the constraints at this optimum satisfy Equations (34)–(38), which implies that the best solution provided by QOCSCNNA is valid. In addition, HHO reaches an optimal fitness value of 0.0129, nearly the same as the optimal result of QOCSCNNA. In contrast, the optimal solutions of NNA, SA, WOA, and PSO are inferior, so QOCSCNNA and HHO have significant advantages. Moreover, the Wilcoxon rank-sum test shows that QOCSCNNA outperforms NNA, SA, WOA, and PSO. Therefore, QOCSCNNA is a more efficient and feasible method than the compared algorithms.

5. Conclusions and Future Works

This paper proposes an enhanced NNA based on a quasi-oppositional-based strategy, a piecewise linear chaotic mapping operator, and a logistic chaotic sine-cosine learning strategy to improve global search capability and convergence. More specifically, QOBL generates quasi-opposite solutions between the opposite solutions and the center of the solution space during the initialization phase and helps balance exploration and exploitation in the generation jumping phase. The new LCSC strategy, which integrates LCM and SCLS, enables the algorithm to jump out of local optima in the bias-strategy stage. Moreover, a dynamic tuning factor that varies with the number of evaluations is presented, which tunes the search space and accelerates convergence. To demonstrate the validity of QOCSCNNA, its performance on numerical optimization problems is investigated by solving the challenging CEC 2017 functions. The averages and standard deviations of the comparative experiments over the 29 test functions show that QOCSCNNA outperforms the NNA on 23 functions and beats the other 7 algorithms on more than half of the test functions. Meanwhile, the Wilcoxon rank-sum test and convergence analysis indicate that QOCSCNNA significantly outperforms the other algorithms. Furthermore, QOCSCNNA and the comparison algorithms are applied to three real-world engineering design problems, and the results further demonstrate the applicability of the algorithm to practical projects.
For future research, we will concentrate on two areas. First, QOCSCNNA will be further improved to address more complex real-world engineering optimization problems, including intelligent traffic management, supply chain optimization, and large-scale unmanned aircraft systems. Second, even though QOCSCNNA greatly enhances the global search capability of the NNA, further exploration is needed to improve its performance, especially on high-dimensional problems. We therefore plan to introduce the attention mechanism of neural networks for efficient exploration, as well as back-propagation-based weight updates, to further improve the performance of the NNA.

Author Contributions

Conceptualization, X.X.; formal analysis, F.W.; funding acquisition, S.L.; investigation, X.X.; methodology, X.X. and F.W.; project administration, S.L.; resources, S.L.; software, X.X. and F.W.; supervision, S.L.; validation, X.X. and F.W.; writing—original draft, X.X.; writing—review and editing, X.X. and S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported in part by the National Natural Science Foundation of China (52275480) and the Reserve Project of Central Guiding Local Science and Technology Development Funds (QKHZYD [2023]002).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Comparison results of algorithms on CEC 2017.

| No. | Metric | QOCSCNNA | NNA | CSO | SA | HHO | WOA | WDE | SCA | RSA |
|---|---|---|---|---|---|---|---|---|---|---|
| F1 | AVG | 1.0000 × 10^3 | 5.2273 × 10^3 | 4.2623 × 10^8 | 5.0774 × 10^3 | 1.2246 × 10^5 | 2.7875 × 10^3 | 1.9213 × 10^2 | 1.1539 × 10^9 | 1.2285 × 10^10 |
| | STD | 1.5594 × 10^−4 | 4.2122 × 10^3 | 8.0384 × 10^8 | 3.5549 × 10^3 | 5.5971 × 10^4 | 2.7429 × 10^3 | 2.1334 × 10^2 | 1.6023 × 10^9 | 3.6113 × 10^9 |
| F3 | AVG | 3.0000 × 10^2 | 3.0000 × 10^2 | 4.1083 × 10^3 | 3.1815 × 10^4 | 3.0031 × 10^2 | 3.0016 × 10^2 | 3.0000 × 10^2 | 3.4470 × 10^3 | 9.3828 × 10^3 |
| | STD | 8.9486 × 10^−12 | 1.0010 × 10^−9 | 2.8705 × 10^3 | 1.5764 × 10^4 | 1.4204 × 10^−1 | 1.7345 × 10^−1 | 5.2034 × 10^−13 | 2.9124 × 10^3 | 4.6166 × 10^3 |
| F4 | AVG | 4.0006 × 10^2 | 4.0199 × 10^2 | 4.3755 × 10^2 | 4.2943 × 10^2 | 4.0792 × 10^2 | 4.0945 × 10^2 | 4.0533 × 10^2 | 4.8410 × 10^2 | 1.5297 × 10^3 |
| | STD | 3.4750 × 10^−1 | 1.0132 × 10^1 | 3.1247 × 10^1 | 3.3827 × 10^1 | 1.6671 × 10^1 | 1.6874 × 10^1 | 1.7456 × 10^1 | 6.1224 × 10^1 | 5.9168 × 10^2 |
| F5 | AVG | 5.1149 × 10^2 | 5.1761 × 10^2 | 5.2463 × 10^2 | 5.7441 × 10^2 | 5.3595 × 10^2 | 5.4435 × 10^2 | 5.4026 × 10^2 | 5.4776 × 10^2 | 5.8814 × 10^2 |
| | STD | 4.4021 × 10^0 | 6.4411 × 10^0 | 6.7348 × 10^0 | 3.3369 × 10^1 | 1.4968 × 10^1 | 1.2415 × 10^1 | 1.8390 × 10^1 | 1.5013 × 10^1 | 2.6776 × 10^1 |
| F6 | AVG | 6.0039 × 10^2 | 6.0000 × 10^2 | 6.1112 × 10^2 | 6.6755 × 10^2 | 6.1670 × 10^2 | 6.1923 × 10^2 | 6.2053 × 10^2 | 6.2147 × 10^2 | 6.3924 × 10^2 |
| | STD | 2.5611 × 10^−1 | 1.3743 × 10^−6 | 7.7939 × 10^0 | 2.3173 × 10^1 | 1.0198 × 10^1 | 8.3311 × 10^0 | 1.3073 × 10^1 | 9.2855 × 10^0 | 1.1581 × 10^1 |
| F7 | AVG | 7.2388 × 10^2 | 7.2719 × 10^2 | 7.3238 × 10^2 | 7.9634 × 10^2 | 7.6245 × 10^2 | 7.6965 × 10^2 | 7.7299 × 10^2 | 7.8204 × 10^2 | 8.1741 × 10^2 |
| | STD | 6.6604 × 10^0 | 6.3264 × 10^0 | 1.1115 × 10^1 | 5.3283 × 10^1 | 1.6185 × 10^1 | 1.7812 × 10^1 | 2.5893 × 10^1 | 2.1705 × 10^1 | 4.2648 × 10^1 |
| F8 | AVG | 8.1108 × 10^2 | 8.1716 × 10^2 | 8.1781 × 10^2 | 8.6931 × 10^2 | 8.2558 × 10^2 | 8.3072 × 10^2 | 8.3917 × 10^2 | 8.3569 × 10^2 | 8.3708 × 10^2 |
| | STD | 4.9211 × 10^0 | 6.7785 × 10^0 | 5.4251 × 10^0 | 2.6543 × 10^1 | 6.0887 × 10^0 | 1.3195 × 10^1 | 1.7063 × 10^1 | 9.8691 × 10^0 | 1.1248 × 10^1 |
| F9 | AVG | 9.0257 × 10^2 | 9.0002 × 10^2 | 9.4739 × 10^2 | 3.0775 × 10^3 | 1.1992 × 10^3 | 1.1020 × 10^3 | 1.7433 × 10^3 | 1.2542 × 10^3 | 1.5024 × 10^3 |
| | STD | 2.5656 × 10^0 | 8.4907 × 10^−2 | 3.8012 × 10^1 | 1.7799 × 10^3 | 2.5726 × 10^2 | 1.9554 × 10^2 | 5.9984 × 10^2 | 2.2817 × 10^2 | 3.9959 × 10^2 |
| F10 | AVG | 1.2767 × 10^3 | 1.4417 × 10^3 | 1.9552 × 10^3 | 2.1637 × 10^3 | 1.7521 × 10^3 | 1.8604 × 10^3 | 2.0783 × 10^3 | 2.2525 × 10^3 | 2.1479 × 10^3 |
| | STD | 1.7181 × 10^2 | 2.3379 × 10^2 | 2.9479 × 10^2 | 3.7853 × 10^2 | 2.2240 × 10^2 | 2.5633 × 10^2 | 3.8612 × 10^2 | 3.5089 × 10^2 | 3.0579 × 10^2 |
| F11 | AVG | 1.1080 × 10^3 | 1.1152 × 10^3 | 1.1726 × 10^3 | 1.2969 × 10^3 | 1.1380 × 10^3 | 1.1501 × 10^3 | 1.1149 × 10^3 | 1.5303 × 10^3 | 4.3697 × 10^3 |
| | STD | 4.8079 × 10^0 | 9.1784 × 10^0 | 4.9546 × 10^1 | 2.5319 × 10^2 | 3.7571 × 10^1 | 4.8840 × 10^1 | 8.7095 × 10^0 | 6.7499 × 10^2 | 1.7531 × 10^3 |
| F12 | AVG | 3.2398 × 10^3 | 1.5886 × 10^4 | 1.0761 × 10^5 | 3.7010 × 10^6 | 2.9019 × 10^5 | 1.4733 × 10^6 | 1.3037 × 10^4 | 1.9745 × 10^7 | 2.0259 × 10^8 |
| | STD | 3.0675 × 10^3 | 1.5987 × 10^4 | 2.2332 × 10^5 | 3.5550 × 10^6 | 3.2864 × 10^5 | 1.9770 × 10^6 | 1.2535 × 10^4 | 7.6865 × 10^7 | 4.5420 × 10^8 |
| F13 | AVG | 1.3145 × 10^3 | 7.5368 × 10^3 | 3.5358 × 10^3 | 1.8885 × 10^4 | 1.3338 × 10^4 | 1.5718 × 10^4 | 1.3429 × 10^3 | 1.3325 × 10^4 | 1.6636 × 10^7 |
| | STD | 6.6758 × 10^0 | 5.9487 × 10^3 | 2.2158 × 10^3 | 1.1688 × 10^4 | 9.5387 × 10^3 | 1.1828 × 10^4 | 5.0494 × 10^1 | 6.8891 × 10^3 | 2.9435 × 10^7 |
| F14 | AVG | 1.4061 × 10^3 | 1.4256 × 10^3 | 1.4532 × 10^3 | 1.3088 × 10^4 | 1.4963 × 10^3 | 1.4910 × 10^3 | 1.4494 × 10^3 | 1.5214 × 10^3 | 3.2583 × 10^3 |
| | STD | 3.2039 × 10^0 | 1.2052 × 10^1 | 1.9130 × 10^1 | 1.0440 × 10^4 | 1.9157 × 10^1 | 2.8546 × 10^1 | 8.1283 × 10^1 | 5.0084 × 10^1 | 1.1504 × 10^3 |
| F15 | AVG | 1.5043 × 10^3 | 1.5078 × 10^3 | 2.0383 × 10^3 | 1.2101 × 10^4 | 1.5800 × 10^3 | 1.7077 × 10^3 | 1.5124 × 10^3 | 6.3238 × 10^3 | 4.2151 × 10^3 |
| | STD | 2.6053 × 10^0 | 5.6853 × 10^0 | 1.5397 × 10^3 | 9.2005 × 10^3 | 5.0374 × 10^1 | 9.6909 × 10^1 | 7.7796 × 10^0 | 4.2253 × 10^3 | 2.2369 × 10^3 |
| F16 | AVG | 1.6101 × 10^3 | 1.6333 × 10^3 | 1.7866 × 10^3 | 2.0739 × 10^3 | 1.8147 × 10^3 | 1.7182 × 10^3 | 1.9915 × 10^3 | 1.8258 × 10^3 | 2.0139 × 10^3 |
| | STD | 2.4942 × 10^1 | 4.8432 × 10^1 | 1.2516 × 10^2 | 2.1413 × 10^2 | 1.4412 × 10^2 | 1.0270 × 10^2 | 1.9766 × 10^2 | 1.2447 × 10^2 | 2.0499 × 10^2 |
| F17 | AVG | 1.7081 × 10^3 | 1.7218 × 10^3 | 1.7547 × 10^3 | 1.8582 × 10^3 | 1.7640 × 10^3 | 1.7726 × 10^3 | 1.8639 × 10^3 | 1.7741 × 10^3 | 1.8074 × 10^3 |
| | STD | 8.2533 × 10^0 | 1.9107 × 10^1 | 1.6551 × 10^1 | 1.0271 × 10^2 | 3.1636 × 10^1 | 3.7196 × 10^1 | 1.1954 × 10^2 | 2.6842 × 10^1 | 4.9826 × 10^1 |
| F18 | AVG | 2.2765 × 10^3 | 1.3327 × 10^4 | 2.8110 × 10^3 | 2.2214 × 10^4 | 1.4777 × 10^4 | 1.6149 × 10^4 | 1.8208 × 10^3 | 1.6568 × 10^4 | 1.3899 × 10^7 |
| | STD | 9.9498 × 10^2 | 1.0439 × 10^4 | 2.0918 × 10^3 | 1.4389 × 10^4 | 1.0784 × 10^4 | 1.1130 × 10^4 | 9.2447 × 10^0 | 1.3148 × 10^4 | 4.7873 × 10^7 |
| F19 | AVG | 1.9014 × 10^3 | 1.9021 × 10^3 | 2.0291 × 10^3 | 9.9946 × 10^3 | 6.0167 × 10^3 | 9.8421 × 10^3 | 1.9094 × 10^3 | 1.9580 × 10^4 | 2.0567 × 10^5 |
| | STD | 5.6987 × 10^−1 | 2.8074 × 10^0 | 3.8966 × 10^2 | 1.1165 × 10^4 | 4.8818 × 10^3 | 7.7129 × 10^3 | 5.9387 × 10^0 | 4.6590 × 10^4 | 3.5967 × 10^5 |
| F20 | AVG | 2.0042 × 10^3 | 2.0178 × 10^3 | 2.0765 × 10^3 | 2.2958 × 10^3 | 2.1063 × 10^3 | 2.0754 × 10^3 | 2.1583 × 10^3 | 2.1302 × 10^3 | 2.1900 × 10^3 |
| | STD | 5.2200 × 10^0 | 1.1072 × 10^1 | 4.1266 × 10^1 | 1.2813 × 10^2 | 6.1903 × 10^1 | 3.4860 × 10^1 | 9.0067 × 10^1 | 6.2912 × 10^1 | 6.6794 × 10^1 |
| F21 | AVG | 2.2009 × 10^3 | 2.2090 × 10^3 | 2.3012 × 10^3 | 2.4203 × 10^3 | 2.3042 × 10^3 | 2.2687 × 10^3 | 2.3383 × 10^3 | 2.3068 × 10^3 | 2.2749 × 10^3 |
| | STD | 1.1942 × 10^0 | 2.7408 × 10^1 | 4.2005 × 10^1 | 5.2922 × 10^1 | 5.9704 × 10^1 | 6.7893 × 10^1 | 3.2279 × 10^1 | 5.5607 × 10^1 | 5.2630 × 10^1 |
| F22 | AVG | 2.3084 × 10^3 | 2.2965 × 10^3 | 2.3221 × 10^3 | 3.1894 × 10^3 | 2.3131 × 10^3 | 2.3356 × 10^3 | 2.6411 × 10^3 | 2.3760 × 10^3 | 3.0777 × 10^3 |
| | STD | 1.9126 × 10^1 | 1.9511 × 10^1 | 2.0456 × 10^1 | 7.4665 × 10^2 | 5.3597 × 10^0 | 1.8075 × 10^2 | 5.6473 × 10^2 | 1.2571 × 10^2 | 3.6595 × 10^2 |
| F23 | AVG | 2.6181 × 10^3 | 2.6248 × 10^3 | 2.6322 × 10^3 | 2.8207 × 10^3 | 2.6505 × 10^3 | 2.6385 × 10^3 | 2.6384 × 10^3 | 2.6448 × 10^3 | 2.6855 × 10^3 |
| | STD | 7.8347 × 10^0 | 7.3500 × 10^0 | 1.8049 × 10^1 | 1.9624 × 10^2 | 2.0815 × 10^1 | 1.4541 × 10^1 | 2.3021 × 10^1 | 2.5283 × 10^1 | 1.7166 × 10^1 |
| F24 | AVG | 2.6003 × 10^3 | 2.6901 × 10^3 | 2.7495 × 10^3 | 2.9065 × 10^3 | 2.7617 × 10^3 | 2.7542 × 10^3 | 2.7540 × 10^3 | 2.7913 × 10^3 | 2.9018 × 10^3 |
| | STD | 1.2995 × 10^2 | 1.1722 × 10^2 | 3.9062 × 10^1 | 8.9372 × 10^1 | 9.2439 × 10^1 | 7.1181 × 10^1 | 7.1111 × 10^1 | 6.0177 × 10^1 | 8.8390 × 10^1 |
| F25 | AVG | 2.9134 × 10^3 | 2.9334 × 10^3 | 2.9497 × 10^3 | 2.9915 × 10^3 | 2.9278 × 10^3 | 2.9324 × 10^3 | 2.9303 × 10^3 | 2.9939 × 10^3 | 3.4258 × 10^3 |
| | STD | 2.1652 × 10^1 | 2.2886 × 10^1 | 2.0807 × 10^1 | 9.7024 × 10^1 | 2.3714 × 10^1 | 2.4213 × 10^1 | 2.4698 × 10^1 | 8.0498 × 10^1 | 1.9369 × 10^2 |
| F26 | AVG | 2.9770 × 10^3 | 2.9559 × 10^3 | 3.1403 × 10^3 | 4.0366 × 10^3 | 3.2685 × 10^3 | 3.1671 × 10^3 | 3.5374 × 10^3 | 3.3500 × 10^3 | 4.1195 × 10^3 |
| | STD | 8.3544 × 10^1 | 4.3782 × 10^1 | 1.5730 × 10^2 | 6.8739 × 10^2 | 5.2972 × 10^2 | 3.0177 × 10^2 | 6.1491 × 10^2 | 3.3099 × 10^2 | 3.4946 × 10^2 |
| F27 | AVG | 3.0921 × 10^3 | 3.0932 × 10^3 | 3.1089 × 10^3 | 3.2479 × 10^3 | 3.1213 × 10^3 | 3.1193 × 10^3 | 3.1127 × 10^3 | 3.1621 × 10^3 | 3.2831 × 10^3 |
| | STD | 1.9594 × 10^0 | 2.8759 × 10^0 | 1.3637 × 10^1 | 6.4546 × 10^1 | 2.4699 × 10^1 | 3.2729 × 10^1 | 1.9635 × 10^1 | 3.8743 × 10^1 | 7.4340 × 10^1 |
| F28 | AVG | 3.3480 × 10^3 | 3.2893 × 10^3 | 3.3576 × 10^3 | 3.6733 × 10^3 | 3.3046 × 10^3 | 3.3119 × 10^3 | 3.3573 × 10^3 | 3.5270 × 10^3 | 3.5508 × 10^3 |
| | STD | 1.0843 × 10^2 | 8.3349 × 10^1 | 1.3117 × 10^2 | 1.9834 × 10^2 | 1.5539 × 10^2 | 1.5543 × 10^2 | 1.1627 × 10^2 | 1.9909 × 10^2 | 1.0554 × 10^2 |
| F29 | AVG | 3.1627 × 10^3 | 3.1851 × 10^3 | 3.2150 × 10^3 | 3.4427 × 10^3 | 3.2798 × 10^3 | 3.2644 × 10^3 | 3.3194 × 10^3 | 3.2993 × 10^3 | 3.4197 × 10^3 |
| | STD | 2.0458 × 10^1 | 3.4660 × 10^1 | 3.7003 × 10^1 | 1.5928 × 10^2 | 7.1812 × 10^1 | 6.0127 × 10^1 | 7.5910 × 10^1 | 7.9825 × 10^1 | 1.5867 × 10^2 |
| F30 | AVG | 5.0572 × 10^4 | 1.1594 × 10^5 | 7.3684 × 10^5 | 2.5244 × 10^6 | 1.8827 × 10^5 | 9.7804 × 10^4 | 1.8197 × 10^5 | 2.8045 × 10^6 | 2.8547 × 10^6 |
| | STD | 1.5162 × 10^5 | 2.8149 × 10^5 | 1.1491 × 10^6 | 2.6234 × 10^6 | 3.9800 × 10^5 | 1.6309 × 10^5 | 3.6619 × 10^5 | 6.4617 × 10^6 | 6.5786 × 10^6 |
Table A2. The Wilcoxon rank-sum test results.

| No. | NNA | CSO | SA | HHO | WOA | WDE | SCA | RSA |
|---|---|---|---|---|---|---|---|---|
| F1 | + | + | + | + | + | + | + | + |
| F3 | + | + | + | + | + | = | + | + |
| F4 | + | + | + | + | + | + | + | + |
| F5 | + | + | + | + | + | + | + | + |
| F6 | − | + | + | + | + | + | + | + |
| F7 | + | + | + | + | + | + | + | + |
| F8 | + | + | + | + | + | + | + | + |
| F9 | − | + | + | + | + | + | + | + |
| F10 | + | + | + | + | + | + | + | + |
| F11 | + | + | + | + | + | + | + | + |
| F12 | + | + | + | + | + | + | + | + |
| F13 | + | + | + | + | + | + | + | + |
| F14 | + | + | + | + | + | + | + | + |
| F15 | + | + | + | + | + | + | + | + |
| F16 | + | + | + | + | + | + | + | + |
| F17 | + | + | + | + | + | + | + | + |
| F18 | + | + | + | + | + | = | + | + |
| F19 | + | + | + | + | + | + | + | + |
| F20 | + | + | + | + | + | + | + | + |
| F21 | + | + | + | + | + | + | + | + |
| F22 | − | + | + | = | + | = | + | + |
| F23 | + | + | + | + | = | + | + | + |
| F24 | + | + | + | + | + | + | + | + |
| F25 | + | + | + | = | + | = | + | + |
| F26 | = | + | + | + | + | + | + | + |
| F27 | = | + | + | + | + | + | + | + |
| F28 | − | = | + | = | − | = | + | + |
| F29 | + | + | + | + | + | + | + | + |
| F30 | + | + | + | + | + | = | + | + |
| Statistics (+/−/=) | 23/4/2 | 28/0/1 | 29/0/0 | 26/0/3 | 27/1/1 | 23/0/6 | 29/0/0 | 29/0/0 |

References

  1. Huang, L. A Mathematical Modeling and an Optimization Algorithm for Marine Ship Route Planning. J. Math. 2023, 2023, 5671089.
  2. Ali, Z.A.; Zhangang, H.; Zhengru, D. Path planning of multiple UAVs using MMACO and DE algorithm in dynamic environment. Meas. Control 2023, 56, 459–469.
  3. Fontes, D.B.M.M.; Homayouni, S.M.; Gonçalves, J.F. A hybrid particle swarm optimization and simulated annealing algorithm for the job shop scheduling problem with transport resources. Eur. J. Oper. Res. 2023, 306, 1140–1157.
  4. Chen, D.; Zhang, Y. Diversity-Aware Marine Predators Algorithm for Task Scheduling in Cloud Computing. Entropy 2023, 25, 285.
  5. Wang, R.; Zhang, R. Techno-economic analysis and optimization of hybrid energy systems based on hydrogen storage for sustainable energy utilization by a biological-inspired optimization algorithm. J. Energy Storage 2023, 66, 107469.
  6. Ta, N.; Zheng, Z.; Xie, H. An interval particle swarm optimization method for interval nonlinear uncertain optimization problems. Adv. Mech. Eng. 2023, 15, 16878132231153266.
  7. Aljabhan, B.; Obaidat, M.A. Privacy-Preserving Blockchain Framework for Supply Chain Management: Perceptive Craving Game Search Optimization (PCGSO). Sustainability 2023, 15, 6905.
  8. Rizk-Allah, R.M.; Hassanien, A.E. A hybrid equilibrium algorithm and pattern search technique for wind farm layout optimization problem. ISA Trans. 2023, 132, 402–418.
  9. Luo, X.; Du, B.; Gui, P.; Zhang, D.; Hu, W. A Hunger Games Search algorithm with opposition-based learning for solving multimodal medical image registration. Neurocomputing 2023, 540, 126204.
  10. Chen, D.; Fang, Z.; Li, S. A Novel BSO Algorithm for Three-Layer Neural Network Optimization Applied to UAV Edge Control. Neural Process. Lett. 2023.
  11. Savsani, P.; Savsani, V. Passing vehicle search (PVS): A novel metaheuristic algorithm. Appl. Math. Model. 2016, 40, 3951–3978.
  12. ALRahhal, H.; Jamous, R. AFOX: A New Adaptive Nature-Inspired Optimization Algorithm; Springer: Dordrecht, The Netherlands, 2023.
  13. Chen, L.; Hao, C.; Ma, Y. A Multi-Disturbance Marine Predator Algorithm Based on Oppositional Learning and Compound Mutation. Electronics 2022, 11, 4087.
  14. Fan, Y.; Zhang, S.; Yang, H.; Xu, D.; Wang, Y. An Improved Future Search Algorithm Based on the Sine Cosine Algorithm for Function Optimization Problems. IEEE Access 2023, 11, 30171–30187.
  15. Hussain, K.; Salleh, M.N.M.; Cheng, S.; Shi, Y. Metaheuristic research: A comprehensive survey. Artif. Intell. Rev. 2019, 52, 2191–2233.
  16. Zhang, J.; Zhang, G.; Kong, M.; Zhang, T.; Wang, D.; Chen, R. CWOA: A novel complex-valued encoding whale optimization algorithm. Math. Comput. Simul. 2023, 207, 151–188.
  17. Xie, L.; Han, T.; Zhou, H.; Zhang, Z.R.; Han, B.; Tang, A. Tuna Swarm Optimization: A Novel Swarm-Based Metaheuristic Algorithm for Global Optimization. Comput. Intell. Neurosci. 2021, 2021, 9210050.
  18. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  19. Zhang, J. Artificial immune algorithm to function optimization problems. In Proceedings of the 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011, Xi'an, China, 27–29 May 2011; pp. 667–670.
  20. Dorigo, M.; Birattari, M.; Stutzle, T. Ant Colony Optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39.
  21. Rao, R.V.; Patel, V. An elitist teaching-learning-based optimization algorithm for solving complex constrained optimization problems. Int. J. Ind. Eng. Comput. 2012, 3, 535–560.
  22. Guilmeau, T.; Chouzenoux, E.; Elvira, V. Simulated Annealing: A Review and a New Scheme. IEEE Work. Stat. Signal Process. Proc. 2021, 2021, 101–105.
  23. Sadollah, A.; Sayyaadi, H.; Yadav, A. A dynamic metaheuristic optimization model inspired by biological nervous systems: Neural network algorithm. Appl. Soft Comput. 2018, 71, 747–782.
  24. Zhang, Y. Chaotic neural network algorithm with competitive learning for global optimization. Knowl.-Based Syst. 2021, 231, 107405.
  25. Zhang, Y.; Jin, Z.; Chen, Y. Hybrid teaching–learning-based optimization and neural network algorithm for engineering design optimization problems. Knowl.-Based Syst. 2020, 187, 104836.
  26. Zhang, Y.; Jin, Z.; Chen, Y. Hybridizing grey wolf optimization with neural network algorithm for global numerical optimization problems. Neural Comput. Appl. 2020, 32, 10451–10470.
  27. Wang, Y.; Wang, K.; Wang, G. Neural Network Algorithm with Dropout Using Elite Selection. Mathematics 2022, 10, 1827.
  28. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
  29. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M.A. Quasi-oppositional differential evolution. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 2229–2236.
  30. Jia, D.; Zheng, G.; Khan, M.K. An effective memetic differential evolution algorithm based on chaotic local search. Inf. Sci. 2011, 181, 3175–3187.
  31. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133.
  32. Xu, Z.; Yang, H.; Li, J.; Zhang, X.; Lu, B.; Gao, S. Comparative Study on Single and Multiple Chaotic Maps Incorporated Grey Wolf Optimization Algorithms. IEEE Access 2021, 9, 77416–77437.
  33. Awad, N.H.; Ali, M.Z.; Liang, J.; Qu, B.Y.; Suganthan, P.N. Problem definitions and evaluation criteria for the CEC 2017 special session and competition on real-parameter optimization. Nanyang Technol. Univ. Singap. Tech. Rep. 2016, 1–34.
  34. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC'06), Vienna, Austria, 28–30 November 2005; Volume 1, pp. 695–701.
  35. Moradi, P.; Imanian, N.; Qader, N.N.; Jalili, M. Improving exploration property of velocity-based artificial bee colony algorithm using chaotic systems. Inf. Sci. 2018, 465, 130–143.
  36. Cheng, R.; Jin, Y. A competitive swarm optimizer for large scale optimization. IEEE Trans. Cybern. 2015, 45, 191–204.
  37. Heidari, A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Futur. Gener. Comput. Syst. 2019, 97, 849–872.
  38. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  39. Civicioglu, P.; Besdok, E.; Gunen, M.A.; Atasever, U.H. Weighted differential evolution algorithm for numerical function optimization: A comparative study with cuckoo search, artificial bee colony, adaptive differential evolution, and backtracking search optimization algorithms. Neural Comput. Appl. 2020, 32, 3923–3937.
  40. Abualigah, L.; Elaziz, M.A.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2021, 191, 116158.
  41. Bayzidi, H.; Talatahari, S.; Saraee, M.; Lamarche, C.P. Social Network Search for Solving Engineering Optimization Problems. Comput. Intell. Neurosci. 2021, 2021, 8548639.
  42. Wu, F.; Zhang, J.; Li, S.; Lv, D.; Li, M. An Enhanced Differential Evolution Algorithm with Bernstein Operator and Refracted Oppositional-Mutual Learning Strategy. Entropy 2022, 24, 1205.
Figure 1. Structure of an artificial neural network.
Figure 2. One-dimensional space opposite points and quasi-opposite points.
Figure 3. Flowchart of QOCSCNNA.
Figure 4. Convergence graphs of QOCSCNNA and its competitors.
Figure 5. The model for the CB design problem [41].
Figure 6. The model for the CSI problem [41].
Figure 7. The model for the TS problem [41].
Table 1. The definition of the CEC 2017 test suite.

| Category | No. | Function | Optimum |
|---|---|---|---|
| Unimodal functions | F1 | Shifted and Rotated Bent Cigar Function | 100 |
| | F3 | Shifted and Rotated Zakharov Function | 300 |
| Simple multimodal functions | F4 | Shifted and Rotated Rosenbrock's Function | 400 |
| | F5 | Shifted and Rotated Rastrigin's Function | 500 |
| | F6 | Shifted and Rotated Expanded Scaffer's F6 Function | 600 |
| | F7 | Shifted and Rotated Lunacek Bi-Rastrigin Function | 700 |
| | F8 | Shifted and Rotated Non-Continuous Rastrigin's Function | 800 |
| | F9 | Shifted and Rotated Levy Function | 900 |
| | F10 | Shifted and Rotated Schwefel's Function | 1000 |
| Hybrid functions (HF) | F11 | Hybrid Function 1 (N = 3) | 1100 |
| | F12 | Hybrid Function 2 (N = 3) | 1200 |
| | F13 | Hybrid Function 3 (N = 3) | 1300 |
| | F14 | Hybrid Function 4 (N = 4) | 1400 |
| | F15 | Hybrid Function 5 (N = 4) | 1500 |
| | F16 | Hybrid Function 6 (N = 4) | 1600 |
| | F17 | Hybrid Function 6 (N = 5) | 1700 |
| | F18 | Hybrid Function 6 (N = 5) | 1800 |
| | F19 | Hybrid Function 6 (N = 5) | 1900 |
| | F20 | Hybrid Function 6 (N = 6) | 2000 |
| Composition functions (CF) | F21 | Composition Function 1 (N = 3) | 2100 |
| | F22 | Composition Function 2 (N = 3) | 2200 |
| | F23 | Composition Function 3 (N = 4) | 2300 |
| | F24 | Composition Function 4 (N = 4) | 2400 |
| | F25 | Composition Function 5 (N = 5) | 2500 |
| | F26 | Composition Function 6 (N = 5) | 2600 |
| | F27 | Composition Function 7 (N = 6) | 2700 |
| | F28 | Composition Function 8 (N = 6) | 2800 |
| | F29 | Composition Function 9 (N = 3) | 2900 |
| | F30 | Composition Function 10 (N = 3) | 3000 |

Search range: $[-100, 100]^D$ ($D$ is the problem dimension)
Table 2. Comparison results between QOCSCNNA and its competitors on the CB design problem.

| | QOCSCNNA | NNA | PSO | WOA | SCA | SA |
|---|---|---|---|---|---|---|
| x1 | 5.8487 | 16.0042 | 5.7184 | 5.3875 | 5.6072 | 5.7227 |
| x2 | 5.2207 | 4.6257 | 8.1930 | 5.7766 | 6.0038 | 6.1674 |
| x3 | 4.6131 | 7.0074 | 5.9331 | 4.6485 | 4.1554 | 4.3927 |
| x4 | 3.8433 | 2.3777 | 2.8983 | 3.7243 | 3.9320 | 3.4333 |
| x5 | 2.2120 | 6.6237 | 1.6369 | 2.2048 | 2.1331 | 2.0293 |
| G(x) | −0.0258 | −0.0319 | −1.6287 × 10^−5 | −1.2506 × 10^−11 | −1.6002 × 10^−5 | 0 |
| F(x)min | 1.3564 | 2.2863 | 1.5213 | 1.3567 | 1.3623 | 1.3569 |
| +/−/= | | + | + | = | + | = |
Table 3. Influence parameters of the weight of the door.

| No. | Variable | Description |
|---|---|---|
| 1 | x1 | Thickness of B-pillar inner |
| 2 | x2 | Thickness of B-pillar reinforcement |
| 3 | x3 | Thickness of floor side inner |
| 4 | x4 | Thickness of cross members |
| 5 | x5 | Thickness of door beam |
| 6 | x6 | Thickness of door beltline reinforcement |
| 7 | x7 | Thickness of roof rail |
| 8 | x8 | Material of B-pillar inner |
| 9 | x9 | Material of floor side inner |
| 10 | x10 | Barrier height |
| 11 | x11 | Hitting position |
Table 4. Comparison results between QOCSCNNA and its competitors on the CSI design problem.

| | QOCSCNNA | NNA | SA | WOA | PSO | SCA |
|---|---|---|---|---|---|---|
| x1 | 0.5000 | 0.5000 | 0.5846 | 0.5676 | 0.5000 | 0.5000 |
| x2 | 1.0702 | 0.9389 | 0.8214 | 0.8164 | 0.9749 | 1.0438 |
| x3 | 0.5000 | 0.5000 | 0.5000 | 0.5282 | 0.5000 | 0.5000 |
| x4 | 1.2346 | 1.4497 | 1.3295 | 1.3847 | 1.4484 | 1.3642 |
| x5 | 0.5000 | 0.5000 | 0.5386 | 0.5000 | 0.5000 | 0.5000 |
| x6 | 1.3072 | 0.7040 | 1.2600 | 0.6988 | 1.5000 | 0.9223 |
| x7 | 0.9359 | 1.1196 | 1.3245 | 1.2616 | 1.1604 | 0.9757 |
| x8 | 0.9200 | 0.9200 | 0.9200 | 0.9200 | 0.9200 | 0.9200 |
| x9 | 0.9200 | 0.9200 | 0.9200 | 0.9200 | 0.9200 | 0.9200 |
| x10 | 1.5200 | 0.5787 | −0.6113 | −5.1295 | −26.1752 | −7.1364 |
| x11 | 1.2847 | 15.7543 | 10.0525 | 2.1609 | 1.4852 | −1.9988 |
| G1(x) | −0.5422 | −0.5681 | −0.4742 | −0.5046 | −0.8772 | −0.6109 |
| G2(x) | −3.0534 | −0.9586 | 0 | −3.2758 × 10^−4 | −3.0287 | −3.1005 |
| G3(x) | −0.1003 | −0.1158 | −0.8491 | −3.9851 × 10^−9 | −0.6587 | −0.0384 |
| G4(x) | −1.5518 | −6.5431 | −5.0120 | −9.1875 | −10.2060 | −6.7642 |
| G5(x) | −0.0703 | −0.0967 | −0.0787 | −0.1282 | −0.0703 | −0.1067 |
| G6(x) | −0.1258 | −0.1282 | −0.1378 | −0.1600 | −0.1578 | −0.1548 |
| G7(x) | −0.1898 | −0.1982 | −0.2055 | −0.2019 | −0.2273 | −0.1978 |
| G8(x) | −0.0030 | −0.0531 | −7.3880 × 10^−4 | −1.6522 × 10^−4 | −2.5206 × 10^−9 | −0.0031 |
| G9(x) | −1.5665 | −1.3500 | −1.1290 | −1.1124 | −2.0150 | −1.6091 |
| G10(x) | −0.0364 | −0.7984 | −0.7851 | −0.1884 | −1.2840 | −0.0618 |
| F(x)min | 23.4538 | 23.9420 | 23.7545 | 23.7803 | 24.2888 | 23.9060 |
| +/−/= | | + | + | + | + | + |
Table 5. Comparison results between QOCSCNNA and its competitors on the TS design problem.

| | QOCSCNNA | NNA | SA | WOA | PSO | HHO |
|---|---|---|---|---|---|---|
| x1 | 0.5028 | 0.0659 | 0.0556 | 0.0612 | 0.0649 | 0.0552 |
| x2 | 0.3850 | 0.8040 | 0.4592 | 0.6309 | 0.7659 | 0.4478 |
| x3 | 9.8066 | 5.2301 | 10.0143 | 4.0040 | 6.8458 | 7.4345 |
| G1(x) | −1.1266 × 10^−10 | −1.0099 | −0.4100 | −2.7309 × 10^−6 | −1.4141 | −1.3636 × 10^−8 |
| G2(x) | −1.1102 × 10^−16 | −1.2845 × 10^−11 | −1.1266 × 10^−7 | −1.4795 × 10^−10 | −5.3960 × 10^−6 | −3.3307 × 10^−16 |
| G3(x) | −490.0553 | −73.8673 | −370.1311 | −85.4248 | −105.3872 | −286.5735 |
| G4(x) | −0.7081 | −0.4201 | −0.6568 | −0.5386 | −0.4461 | −0.6647 |
| F(x)min | 0.0127 | 0.0252 | 0.0171 | 0.0142 | 0.0285 | 0.0129 |
| +/−/= | | + | + | + | + | = |
