Article

The OX Optimizer: A Novel Optimization Algorithm and Its Application in Enhancing Support Vector Machine Performance for Attack Detection

by Ahmad K. Al Hwaitat 1,* and Hussam N. Fakhouri 2

1 King Abdullah the II IT School, Department of Computer Science, The University of Jordan, Amman 11942, Jordan
2 Data Science and Artificial Intelligence Department, Faculty of Information Technology, University of Petra, Amman 11196, Jordan
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(8), 966; https://doi.org/10.3390/sym16080966
Submission received: 28 May 2024 / Revised: 1 July 2024 / Accepted: 11 July 2024 / Published: 30 July 2024
(This article belongs to the Section Computer)

Abstract: In this paper, we introduce a novel optimization algorithm called the OX optimizer, inspired by oxen, animals characterized by their great strength. The OX optimizer is designed to address the challenges posed by complex, high-dimensional optimization problems. The design of the OX optimizer embodies a fundamental symmetry between global and local search processes. This symmetry ensures a balanced and effective exploration of the solution space, highlighting the algorithm’s innovative contribution to the field of optimization. The OX optimizer has been evaluated on the CEC2022 and CEC2017 IEEE competition benchmark functions. The results demonstrate the OX optimizer’s superior performance in terms of convergence speed and solution quality compared to existing state-of-the-art algorithms. The algorithm’s robustness and adaptability to various problem landscapes highlight its potential as a powerful tool for solving diverse optimization tasks. Detailed analysis of convergence curves, search history distributions, and sensitivity heatmaps further supports these findings. Furthermore, the OX optimizer has been applied to optimize support vector machines (SVMs), emphasizing parameter selection and feature optimization. We tested it on the NSL-KDD dataset to evaluate its efficacy in an intrusion detection system. The results demonstrate that the OX optimizer significantly enhances SVM performance, facilitating effective exploration of the parameter space.

1. Introduction

Metaheuristics are high-level problem-independent algorithmic frameworks that provide a set of guidelines or strategies to develop heuristic optimization algorithms [1]. These frameworks are designed to solve complex optimization problems that are often too challenging for traditional optimization techniques due to their large size, non-linearity, and the presence of multiple local optima [2]. The primary objective of metaheuristics is to efficiently explore the search space and find near-optimal solutions within a reasonable amount of computational time [3].
Metaheuristics can be broadly classified into two categories [4]: single-solution-based and population-based approaches. Single-solution-based metaheuristics, such as simulated annealing and tabu search, iteratively improve a single solution by exploring its neighborhood [5]. In contrast, population-based metaheuristics, such as genetic algorithms, particle swarm optimization, and ant colony optimization, maintain and evolve a population of solutions over successive iterations [6].
One of the key strengths of metaheuristics is their flexibility and adaptability [7]. They are not confined to a specific problem domain and can be tailored to tackle a wide range of optimization problems, including those in engineering, economics, logistics, and artificial intelligence [8]. Additionally, metaheuristics can be hybridized with other optimization techniques to enhance their performance further, combining the strengths of different methods to create more robust and efficient algorithms [9].
The success of metaheuristics in finding high-quality solutions to complex problems has made them a popular choice in both academic research and practical applications [10]. Their ability to balance exploration and exploitation of the search space, avoid getting trapped in local optima, and adapt to various problem landscapes makes them a powerful tool for solving optimization problems that are otherwise intractable by conventional methods [6].
In the field of optimization, the principle of symmetry is a cornerstone that underpins effective algorithm design [11]. Symmetry, characterized by balance and harmony, ensures that different components of an optimization process work together seamlessly. Specifically, the symmetry between global and local search processes is crucial. Global search provides a wide-ranging exploration of the solution space, identifying regions that hold potential for optimal solutions [12]. In contrast, a local search hones in on these regions, conducting detailed and thorough refinement to fine-tune the solutions [13]. This balanced approach allows optimization algorithms to effectively combine the broad perspective of global search with the precision of local search, leading to improved convergence rates and solution quality [14]. The harmonious integration of these dual processes exemplifies the power of symmetry in achieving robust and adaptable optimization strategies capable of addressing a wide array of complex, high-dimensional problems [15]. This fundamental principle of symmetry not only enhances the efficacy of optimization algorithms but also highlights their versatility and applicability across various domains [9,16].
In the landscape of optimization algorithms, the introduction of novel approaches is essential to address the ever-evolving complexity of real-world problems [17]. The OX optimizer is one such novel optimization algorithm that promises to offer a fresh perspective and enhanced performance in solving complex optimization tasks [18]. The OX optimizer is designed to leverage the strengths of existing metaheuristic frameworks while introducing unique mechanisms to improve search efficiency and solution quality [19,20].
The OX optimizer’s application in enhancing the performance of support vector machines (SVMs) for attack detection exemplifies its potential. SVMs are a powerful class of supervised learning models used for classification and regression tasks. However, their performance heavily depends on the careful tuning of hyperparameters, which is an optimization problem in itself. The OX optimizer can effectively search for optimal hyperparameters, thereby enhancing the SVM’s ability to detect attacks in cybersecurity contexts.
Attack detection in cybersecurity is a critical task that requires high accuracy and real-time performance [21]. Traditional methods often fall short in handling the dynamic and sophisticated nature of modern cyber threats [21]. By applying the OX optimizer to optimize SVM parameters, it is possible to improve detection rates and reduce false positives, thereby creating more reliable and efficient attack detection systems.
This research aims to explore the theoretical foundations, algorithmic structure, and practical applications of the OX optimizer. We will investigate its performance through extensive computational experiments and compare it with other state-of-the-art optimization algorithms. Furthermore, the application of the OX optimizer in tuning SVM parameters for attack detection will be thoroughly analyzed, demonstrating its practical relevance and potential impact in enhancing cybersecurity measures.
In this paper, we introduce a novel optimization algorithm called the OX optimizer, inspired by oxen and designed to address the challenges posed by complex, high-dimensional optimization problems. The OX optimizer leverages a unique approach that combines the strengths of evolutionary algorithms with advanced local search techniques, ensuring both global exploration and fine-tuned exploitation of the search space. The design of the OX optimizer embodies a fundamental symmetry between global and local search processes. This symmetry ensures a balanced and effective exploration of the solution space, highlighting the algorithm’s innovative contribution to the field of optimization. The global search component, driven by evolutionary operators, performs a broad exploration of the search space, identifying promising regions that may contain the global optimum. Concurrently, the local search component, employing advanced local search techniques, meticulously refines the solutions within these regions to ensure high precision and optimality. By maintaining a dynamic interplay between these two processes, the OX optimizer achieves a harmonious balance that maximizes both exploration and exploitation, thereby enhancing its overall performance and robustness. This balanced approach allows the OX optimizer to adaptively navigate complex optimization landscapes, making it particularly effective for a wide range of challenging optimization problems.
The main research contributions of this paper are as follows:
  • A new optimization algorithm named the OX optimizer is designed, inspired by the strength of oxen, to enhance the optimization process.
  • The OX optimizer’s mathematical models are based on a fundamental symmetry between global and local search processes. The algorithm combines the strengths of evolutionary algorithms with advanced local search techniques, ensuring both broad exploration of the search space and fine-tuned exploitation.
  • The OX optimizer is implemented and evaluated using unimodal, multimodal, hybrid, and composition functions from the CEC2017 and CEC2022 competition benchmarks. Experimental results verify that the OX optimizer outperforms existing state-of-the-art algorithms in terms of convergence speed and solution quality.
  • The effectiveness of the OX optimizer is further demonstrated by its application in optimizing support vector machines (SVMs), emphasizing parameter selection and feature optimization. Specifically, it has been tested on the NSL-KDD dataset to evaluate its efficacy in an intrusion detection system. The results show that the OX optimizer significantly enhances SVM performance, particularly with the RBF kernel, by integrating traits such as strength and collaboration, facilitating effective and thorough exploration of the parameter space.
The rest of this paper is organized as follows: in the Literature Review, we delve into existing research that forms the foundation of our work. We then introduce the OX optimizer, starting with its inspiration drawn from the social behavior of oxen and how this influences its design. The Mathematical Model and the OX Algorithm are discussed in detail. Next, we explore the concepts of exploration and exploitation within the OX optimizer, providing a theoretical conclusion. The Experimental Results present an overview of the CEC2022 and CEC2017 benchmarks, followed by detailed experiments and results, including convergence curve analysis, search history curve analysis, box-plot analysis, heat map, and OX histogram analysis. We also include a case study on optimizing support vector machines (SVM) using the OX optimizer to detect attacks, covering dataset description, data pre-processing, conversion to LibSVM format, optimization using SVM and the OX optimizer, implementation, and evaluation. Finally, we discuss the classification of attacks using the optimized SVM, experiment results, feature optimization, and parameter and feature optimization using the OX optimizer, culminating in our conclusion.

2. Literature Review

Metaheuristics are advanced optimization techniques designed to solve complex optimization problems that are otherwise difficult to tackle using traditional optimization methods [22]. These algorithms have become indispensable in various fields due to their ability to find near-optimal solutions within a reasonable timeframe, especially for problems characterized by large search spaces, non-linearity, and multimodality [22]. This literature review delves into the development, classification, and recent advances in metaheuristic algorithms.
The development of metaheuristics dates back to the 1950s and 1960s, with early methods such as genetic algorithms (GA) and simulated annealing (SA) laying the groundwork. John Holland’s work on genetic algorithms in the 1970s was pivotal, introducing concepts of natural selection and genetic evolution to optimization problems [23]. His seminal book, Adaptation in Natural and Artificial Systems, laid the foundation for GAs and demonstrated their potential to solve complex problems through mechanisms inspired by biological evolution. Simulated annealing, introduced by Kirkpatrick, Gelatt, and Vecchi in 1983, is another cornerstone in the history of metaheuristics [24]. Inspired by the annealing process in metallurgy, SA employs a probabilistic technique to escape local optima by allowing transitions to worse solutions with a probability that decreases over time, mimicking the cooling process of metals.
The 1990s saw a surge in the development of new metaheuristic algorithms, including particle swarm optimization (PSO) by Kennedy and Eberhart in 1995 and ant colony optimization (ACO) by Dorigo and colleagues in the early 1990s [25]. PSO was inspired by the social behavior of birds flocking or fish schooling, where individuals (particles) adjust their positions based on personal experience and the collective experience of the swarm [26]. ACO, on the other hand, was based on the foraging behavior of ants, where artificial ants build solutions to optimization problems by exploiting a pheromone-based communication system [27]. In addition, metaheuristics can be broadly classified into two categories: single-solution-based algorithms and population-based algorithms. Single-solution-based algorithms focus on iteratively improving a single solution. These algorithms are typically characterized by their ability to exploit local search areas effectively. Examples include simulated annealing (SA) and tabu search (TS). Simulated annealing utilizes a probabilistic technique to escape local optima by allowing worse solutions temporarily. The probability of accepting worse solutions decreases over time, allowing the algorithm to converge to a near-optimal solution. The cooling schedule, which controls the decrease in temperature, is a crucial factor in the performance of SA. Tabu search employs a memory structure called the tabu list to avoid revisiting recently explored solutions, thus enhancing its search capability. TS systematically explores the solution space by moving from one potential solution to another while keeping track of a list of previously visited solutions to avoid cycles and encourage exploration [28].
Population-based algorithms work with a population of solutions, facilitating exploration and exploitation of the search space. These algorithms are generally more robust and capable of avoiding local optima due to their collective search strategy. Examples include genetic algorithms (GA), particle swarm optimization (PSO), and ant colony optimization (ACO). Genetic algorithms utilize crossover, mutation, and selection operators to evolve a population of solutions [29]. Inspired by natural evolution, GA employs a fitness function to select the best individuals for reproduction, promoting the survival of the fittest concept. The crossover operator combines parts of two parent solutions to create offspring, while the mutation operator introduces random changes to maintain genetic diversity. Particle swarm optimization models social behavior, with particles adjusting their positions based on personal and group experience [25]. Each particle in the swarm is influenced by its own best-known position and the best-known positions of its neighbors, leading to convergence towards optimal solutions through cooperation and information sharing. Ant colony optimization mimics the pheromone-laying behavior of ants to find optimal paths. Artificial ants construct solutions by moving through a problem space and depositing pheromones on promising paths. The collective behavior of ants, guided by pheromone trails, helps in discovering high-quality solutions [30].
Recent years have witnessed the emergence of hybrid metaheuristics, combining the strengths of different algorithms to overcome their individual limitations. For example, memetic algorithms integrate local search techniques with global search capabilities of metaheuristics, enhancing solution quality [31]. These hybrid approaches leverage the complementary strengths of different algorithms, resulting in more efficient and effective optimization strategies. Another significant trend is the development of metaheuristics inspired by new natural phenomena. Algorithms such as the firefly algorithm (FA), cuckoo search (CS), and bat algorithm (BA) have been introduced, each inspired by unique natural behaviors [32]. The firefly algorithm, proposed by Yang in 2008, is inspired by the flashing behavior of fireflies, where the attractiveness of a firefly is proportional to its brightness, guiding other fireflies towards brighter and potentially better solutions. Cuckoo search, developed by Yang and Deb in 2009, is inspired by the brood parasitism of cuckoo species, using Lévy flights to explore the search space efficiently [33]. The bat algorithm, introduced by Yang in 2010, is based on the echolocation behavior of bats, where bats adjust their positions based on the distance to their prey [34].
Moreover, the advent of parallel and distributed computing has significantly enhanced the performance of metaheuristic algorithms. Parallel implementations of GA, PSO, and ACO, among others, have been explored to exploit modern computational resources, leading to faster convergence and the ability to handle larger problem instances. Techniques such as master–slave models, island models, and multi-threading have been employed to distribute the computational load and improve scalability [35]. Furthermore, metaheuristics represent a powerful class of optimization techniques that have evolved significantly over the past few decades [36]. Their ability to provide high-quality solutions to complex optimization problems has made them a valuable tool in various scientific and engineering domains. Ongoing research and development in hybrid metaheuristics and parallel computing promise to further enhance their performance and applicability, opening new avenues for tackling ever more challenging optimization problems [37,38]. A list of state-of-the-art metaheuristics is shown in Table 1.

3. OX Optimizer

3.1. Inspiration

In nature, oxen are recognized for their great strength, allowing them to carry heavy loads over long distances [47] (see Figure 1). This characteristic can be translated into an algorithmic feature where the optimizer robustly handles complex, high-dimensional optimization problems, demonstrating a strong ability to navigate through challenging search spaces. Unlike other animals that might show bursts of speed, oxen are known for their steady and gradual progress [48]. This can inspire an optimization algorithm that makes consistent, incremental improvements in the solution, avoiding drastic changes that might lead to suboptimal results. This steady approach could be particularly effective in avoiding premature convergence and ensuring thorough exploration of the search space.
Oxen often work in pairs or teams, showcasing a collaborative effort [48]. This aspect can be incorporated into the algorithm by allowing individual agents (representing oxen) to collaborate or share information, potentially improving the convergence speed and quality of the solutions. This collaboration could mimic the way oxen coordinate their efforts in tasks like plowing. Oxen are adaptable to various environmental conditions, whether it be plowing fields or transporting goods [49]. This adaptability can be mirrored in the algorithm’s ability to tackle a wide range of optimization problems, from continuous to discrete and from simple to complex, making it a versatile tool for different application domains.
The endurance of oxen, capable of working for long periods without tiring, could inspire an optimization algorithm that performs consistently over extended iterations. This would be particularly useful in problems where the search space is vast and complex, requiring sustained effort to locate the optimal solution.

3.2. Social Behavior of Oxen and Their Influence on the OX Optimizer

The social behavior of oxen is characterized by their strong sense of community, teamwork, and hierarchical structure. Oxen work collaboratively, especially when performing tasks like plowing or hauling, demonstrating collective intelligence and shared efforts [50]. This cooperative nature is mirrored in the OX optimizer, where agents work together, share information, and follow a leading entity akin to a lead ox guiding the herd. The hierarchical and collaborative dynamics of oxen ensure focused and effective problem-solving, which is essential for the optimization process. Additionally, oxen exhibit resilience and adaptability, traits that are crucial for navigating diverse and complex environments, further informing the design of the OX optimizer.

3.3. Mathematical Model Equations and Description

This section discusses the mathematical model and then lists the equations of the OX optimizer. The mathematical model for the OX optimizer begins with the initialization phase. Let P = {p_1, p_2, …, p_n} represent a population of n OX agents in the search space, where each p_i is a potential solution vector. Each p_i is initialized randomly within the bounds of the problem’s search space. The next phase involves defining a strength function S(p_i) that evaluates the load-carrying capacity of each OX agent, analogous to the fitness function in other optimizers. This function assesses how well an agent solves the optimization problem. Additionally, a persistence mechanism is implemented to ensure that agents do not fluctuate wildly between solutions, representing the steady nature of oxen. This is mathematically modeled as a damping factor that reduces the step size over iterations.
In the steady and gradual progress phase, a gradual progress function G(p_i, iter) is introduced to modify the position of each OX agent based on the current iteration iter. This ensures steady movement in the search space, avoiding abrupt changes. The function G(p_i, iter) gradually refines the position of p_i by considering both the current position and a weighted influence from the best solution found so far. Collaborative work among the agents is represented by a collaboration term C(p_i, P_−i), where each agent p_i adjusts its position by considering the positions of the other agents P_−i in the population. This mimics the collaborative nature of oxen and can be modeled as an averaging or weighted sum of the positions of neighboring agents, adjusted by a collaboration coefficient.
The adaptability phase incorporates a mechanism where the algorithm adjusts its parameters based on the type of problem or the characteristics of the search space. This involves dynamically adjusting the strength and collaboration coefficients based on feedback from the environment or the problem’s nature. Endurance and longevity in the OX algorithm are represented by a long-term memory mechanism, where the algorithm stores and occasionally revisits promising solutions found in earlier iterations. This mechanism prevents stagnation in local optima and encourages exploration of the search space over an extended period.
Finally, the positions of the OX agents are updated at each iteration using a combination of strength, gradual progress, collaboration, and endurance mechanisms. The algorithm terminates when a stopping criterion is met, such as reaching a maximum number of iterations or achieving a satisfactory solution quality.
The OX animal optimizer, through this mathematical model, aims to emulate the robust and consistent attributes of oxen, providing a novel approach to optimization problems. This model can be further refined and tested on various optimization problems to assess its effectiveness and efficiency. The OX animal optimizer (OX), inspired by the behaviors of oxen, is designed to effectively balance the crucial aspects of exploration and exploitation in the optimization process. This balance is achieved through various mechanisms that are reflective of the natural attributes of oxen, such as their strength, endurance, gradual progress, and collaborative tendencies.

3.4. Mathematical Model Equations

The positions of the two populations, referred to as OX1 and OX, are initialized randomly within the given bounds, as shown in Equations (1) and (2):
OX1_i = initialization(N, dim, ub, lb),  for i = 1, …, N,    (1)
OX_i = initialization(N, dim, ub, lb),  for i = 1, …, N,    (2)
where N is the population size, dim is the dimensionality of the search space, and ub and lb are the upper and lower bounds of the search space.
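For concreteness, a minimal NumPy sketch of this initialization step is shown below; the function name mirrors Equations (1) and (2), and the population size, dimensionality, and bounds are illustrative values rather than the authors' settings.

```python
import numpy as np

def initialization(N, dim, ub, lb, rng=None):
    """Uniformly sample N candidate positions inside [lb, ub]^dim (Eqs. (1)-(2))."""
    rng = np.random.default_rng() if rng is None else rng
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (dim,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (dim,))
    return lb + rng.random((N, dim)) * (ub - lb)

# Two populations, as in Equations (1) and (2)
OX1 = initialization(N=30, dim=10, ub=100.0, lb=-100.0)
OX = initialization(N=30, dim=10, ub=100.0, lb=-100.0)
```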

3.4.1. Fitness Evaluation

The fitness of each initial population member OX1_i is evaluated using the objective function f_obj, as shown in Equation (3):
f_OX1_i = f_obj(OX1_i),  for i = 1, …, N.    (3)
The population is then sorted based on fitness, and the best position is identified, as shown in Equations (4) and (5):
[OX1_sorted, idx_sorted] = sort({f_OX1_i}, i = 1, …, N),    (4)
OX1_best = OX1_sorted,1  and  f_OX1_best = f_OX1_sorted,1,    (5)
where OX1_sorted is the population sorted by fitness, idx_sorted contains the indices of the sorted population members, OX1_best is the position of the best member, and f_OX1_best is the fitness of the best member.
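A small sketch of this evaluation-and-sorting step, assuming a minimization objective and NumPy arrays for the population, might look as follows; the sphere objective is used purely as a stand-in.

```python
import numpy as np

def evaluate_and_sort(population, f_obj):
    """Evaluate f_obj on every agent (Eq. (3)) and sort ascending (Eqs. (4)-(5))."""
    fitness = np.array([f_obj(x) for x in population])
    order = np.argsort(fitness)                       # best (lowest fitness) first
    pop_sorted, fit_sorted = population[order], fitness[order]
    return pop_sorted, fit_sorted, pop_sorted[0].copy(), float(fit_sorted[0])

# Illustrative use on a random population with a sphere objective
rng = np.random.default_rng(0)
pop = rng.uniform(-100, 100, size=(30, 10))
OX1_sorted, f_sorted, OX1_best, f_OX1_best = evaluate_and_sort(pop, lambda x: np.sum(x**2))
```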

3.4.2. Main Optimization Loop

For each iteration t = 1, …, Max_iter, the algorithm performs the following steps:

Offspring Generation Using Order Crossover

Two parents are selected using tournament selection, and offspring are generated using order crossover, as shown in Equations (6) and (7):
(parent_1, parent_2) = TournamentSelection(OX1_sorted, tournament_size),    (6)
offspring = OrderCrossover(parent_1, parent_2),    (7)
where parent_1 and parent_2 are the selected parents, and offspring is the generated offspring.
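A minimal sketch of these two operators for real-valued decision vectors is shown below. Order crossover is classically defined for permutations; the segment-preserving variant used here, and the tournament size of three, are assumptions made for illustration, not the authors' exact operator definitions.

```python
import numpy as np

rng = np.random.default_rng(1)

def tournament_selection(pop_sorted, fit_sorted, tournament_size=3):
    """Pick the fittest of `tournament_size` randomly drawn agents (Eq. (6))."""
    idx = rng.choice(len(pop_sorted), size=tournament_size, replace=False)
    winner = idx[np.argmin(fit_sorted[idx])]
    return pop_sorted[winner]

def order_crossover(parent1, parent2):
    """Segment-preserving crossover (Eq. (7)): keep a random slice of parent1
    and fill the remaining positions from parent2 (a real-valued adaptation)."""
    dim = len(parent1)
    a, b = sorted(rng.choice(dim, size=2, replace=False))
    child = parent2.copy()
    child[a:b + 1] = parent1[a:b + 1]
    return child

# Illustrative use on a population sorted by fitness
pop = rng.uniform(-100, 100, size=(30, 10))
fit = np.array([np.sum(x**2) for x in pop])
order = np.argsort(fit)
pop, fit = pop[order], fit[order]          # stand-ins for OX1_sorted and its fitness
p1, p2 = tournament_selection(pop, fit), tournament_selection(pop, fit)
offspring = order_crossover(p1, p2)
```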

Levy Flight for Mutation

Levy flight is applied to the offspring for a mutation-like effect, as shown in Equation (8):
offspring_levy = offspring + 0.01 · step · (offspring − lb),    (8)
where step is calculated from the Levy distribution parameters, and offspring_levy is the mutated offspring.
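The sketch below shows one common way to realize this step, combining Equation (8) with the boundary clipping of Equation (9); drawing the Levy step with Mantegna's algorithm and the exponent beta = 1.5 are assumptions, since the paper does not fix these parameters.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(2)

def levy_step(dim, beta=1.5):
    """Levy-distributed step via Mantegna's algorithm (an assumed choice;
    the paper only states that `step` follows a Levy distribution)."""
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size=dim)
    v = rng.normal(0.0, 1.0, size=dim)
    return u / np.abs(v) ** (1 / beta)

def levy_mutation(offspring, lb, ub):
    """Apply Eq. (8) and clip the result to the bounds as in Eq. (9)."""
    step = levy_step(len(offspring))
    mutated = offspring + 0.01 * step * (offspring - lb)
    return np.minimum(np.maximum(mutated, lb), ub)

offspring = rng.uniform(-100, 100, size=10)
offspring_bounded = levy_mutation(offspring, lb=-100.0, ub=100.0)
```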

Boundary Checking

The offspring positions are checked to ensure they remain within the search space bounds, as shown in Equation (9):
offspring_bounded = min(max(offspring_levy, lb), ub),    (9)
where offspring_bounded is the offspring position after boundary checking.

Fitness Evaluation of Offspring

The fitness of each offspring is evaluated, as shown in Equation (10):
f_OX_i = f_obj(OX_i),  for i = 1, …, N,    (10)
where f_OX_i is the fitness of the offspring OX_i.

Updating Population and Best Position

The population is updated by combining and sorting the current population and the offspring based on fitness, as shown in Equations (11) and (12):
OX_combined = {OX1_sorted, OX}  and  fitness_combined = {f_OX1_sorted, f_OX},    (11)
[OX1_sorted, fitness_sorted] = sort(fitness_combined),    (12)
where OX_combined is the combined set of the sorted population and the current offspring, and fitness_combined is their combined fitness.
The best position is updated if a fitter individual is found, as shown in Equation (14):
if fitness_sorted,1 < f_OX1_best, then OX1_best = OX1_sorted,1 and f_OX1_best = fitness_sorted,1,    (14)
where OX1_best is updated to the position of the best individual in the sorted population whenever that individual’s fitness improves on f_OX1_best.

Best Solution

The convergence curve is updated to track the best fitness value found so far, as shown in Equation (15):
Best_solution[t] = f_OX1_best,    (15)
where Best_solution[t] is the fitness of the best member at iteration t.

3.5. OX Pseudo-Code and Description

The OX optimizer is an evolutionary optimization algorithm inspired by the behavior of oxen; it combines order crossover and Levy flights to enhance search efficiency, as shown in Algorithm 1. It begins by initializing two populations, OX1 and OX, within the specified bounds. The algorithm then evaluates the fitness of the initial OX1 population and sorts its members to identify the elite individual. During each iteration, offspring are generated through tournament selection and order crossover of parent solutions, followed by Levy flights for a mutation-like effect. These offspring undergo boundary checking to ensure they remain within feasible limits. The fitness of the offspring is evaluated and combined with that of the existing population, from which the best solutions are selected, and the elite position is updated if a fitter solution is found. The convergence curve is updated to reflect the best fitness value obtained in each iteration. This process repeats until the maximum number of iterations is reached, ultimately returning the best fitness value and the corresponding position.

3.6. Exploration and Exploitation in OX Optimizer

Exploration refers to the algorithm’s ability to investigate a wide range of the search space to avoid local optima and discover diverse solutions. In the OX optimizer, exploration is primarily driven by the Levy flights applied to offspring solutions. The Levy flight mechanism introduces large, random jumps in the search space, enabling the algorithm to explore new, unvisited regions effectively. This stochastic component is essential for maintaining diversity in the population and ensuring that the algorithm does not prematurely converge on suboptimal solutions.
Exploitation focuses on refining existing solutions to improve their quality and converge towards the global optimum. In the OX optimizer, exploitation is facilitated by the order crossover process and the selection mechanisms. Order crossover recombines parts of two parent solutions to produce offspring that inherit desirable traits, effectively exploiting the search space around the current best solutions. Additionally, tournament selection ensures that better-performing individuals are more likely to be chosen as parents, further enhancing the exploitation process. The fitness evaluation and sorting of the population, along with the updating of the elite individual, continuously drive the algorithm towards better solutions.
Algorithm 1 A pseudo-code summarizing the optimization process of the OX optimizer.
  • Require: Population size N, maximum iterations Max_iter, lower bounds lb, upper bounds ub, dimensionality dim, objective function f_obj
  • Ensure: Best fitness value Alpha_score, best position Alpha_pos
1: Initialize OX1 and OX positions using (1) and (2)
2: Initialize best position OX1_best and fitness f_OX1_best
3: Evaluate fitness of the initial OX1 population using (3)
4: Sort OX1 based on fitness and identify the best member using (4) and (5)
5: for t = 1 to Max_iter do
6:     for i = 1 to N do
7:         Select two parents using tournament selection (6)
8:         Generate offspring using order crossover (7)
9:         Apply Levy flight to the offspring (8)
10:        Ensure the offspring remain within bounds (9)
11:        Evaluate the fitness of the offspring using (10)
12:    end for
13:    Combine and sort the current population and offspring using (11) and (12)
14:    Update the best position if a fitter individual is found (14)
15:    Update the best solution found (15)
16: end for
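Tying these steps together, a compact Python sketch of Algorithm 1 is given below. It is a minimal reconstruction, not the authors' reference implementation: the tournament size, the real-valued adaptation of order crossover, and the Mantegna scheme for drawing Levy steps are assumptions made for illustration.

```python
import numpy as np
from math import gamma as gamma_fn

def ox_optimizer(f_obj, dim, lb, ub, N=30, max_iter=500, tournament_size=3,
                 beta=1.5, seed=0):
    """Compact sketch of Algorithm 1 for scalar bounds and a minimization objective."""
    rng = np.random.default_rng(seed)
    lb = np.full(dim, lb, dtype=float)
    ub = np.full(dim, ub, dtype=float)

    # Eqs. (1)-(5): initialize, evaluate, and sort the OX1 population
    OX1 = lb + rng.random((N, dim)) * (ub - lb)
    fit = np.array([f_obj(x) for x in OX1])
    order = np.argsort(fit)
    OX1, fit = OX1[order], fit[order]
    best_pos, best_fit = OX1[0].copy(), fit[0]
    curve = np.empty(max_iter)

    # Mantegna scaling for Levy steps (assumed choice)
    sigma = (gamma_fn(1 + beta) * np.sin(np.pi * beta / 2) /
             (gamma_fn((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)

    for t in range(max_iter):
        offspring = np.empty_like(OX1)
        off_fit = np.empty(N)
        for i in range(N):
            # Eq. (6): tournament selection of two parents
            parents = []
            for _ in range(2):
                idx = rng.choice(N, size=tournament_size, replace=False)
                parents.append(OX1[idx[np.argmin(fit[idx])]])
            # Eq. (7): segment-preserving crossover (real-valued adaptation)
            a, b = sorted(rng.choice(dim, size=2, replace=False))
            child = parents[1].copy()
            child[a:b + 1] = parents[0][a:b + 1]
            # Eq. (8): Levy-flight mutation, Eq. (9): boundary clipping
            u = rng.normal(0, sigma, dim)
            v = rng.normal(0, 1, dim)
            step = u / np.abs(v) ** (1 / beta)
            child = np.minimum(np.maximum(child + 0.01 * step * (child - lb), lb), ub)
            offspring[i] = child
            off_fit[i] = f_obj(child)                  # Eq. (10)
        # Eqs. (11)-(12): combine, sort, keep the N fittest
        combined = np.vstack([OX1, offspring])
        combined_fit = np.concatenate([fit, off_fit])
        order = np.argsort(combined_fit)[:N]
        OX1, fit = combined[order], combined_fit[order]
        # Eq. (14): update the elite, Eq. (15): record the convergence curve
        if fit[0] < best_fit:
            best_pos, best_fit = OX1[0].copy(), fit[0]
        curve[t] = best_fit
    return best_fit, best_pos, curve

# Illustrative run on a sphere function
best_fit, best_pos, curve = ox_optimizer(lambda x: float(np.sum(x**2)),
                                         dim=10, lb=-100, ub=100, N=30, max_iter=200)
```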

3.7. Theoretical Discussion about the OX Optimizer

Robustness through strength and endurance: The OX optimizer’s approach to optimization is grounded in robustness, much like the enduring nature of oxen. The strength of each agent in the optimizer, analogous to the fitness measure in evolutionary algorithms, is a critical factor in evaluating and advancing towards optimal solutions. The endurance aspect, mirroring the persistence displayed by oxen, ensures that the optimizer can sustain its search over long iterations. This endurance is particularly beneficial in complex optimization landscapes where a thorough and persistent search is key to locating the optimal solution.
Adaptability to varied problem spaces: The OX optimizer exhibits a high degree of adaptability, akin to how oxen adapt to changing environmental conditions. This adaptability allows the optimizer to adjust its search strategy dynamically based on the specific requirements and characteristics of the problem at hand. Whether the problem is continuous or discrete, simple or complex, the OX optimizer can tailor its approach to suit the specific nature of the problem, thereby enhancing its effectiveness.
Balanced exploration and exploitation: Central to the OX optimizer’s effectiveness is its ability to balance exploration and exploitation, a key challenge in any optimization algorithm. Exploration is driven by Levy flights, introducing large, random jumps that enable the search of new regions in the search space, thus avoiding premature convergence on local optima. Simultaneously, exploitation is facilitated through order crossover and tournament selection, ensuring that promising areas are thoroughly refined. These mechanisms ensure that once a potentially optimal region is identified, the optimizer focuses its efforts on converging towards the best solution.
Collaborative information sharing: The OX optimizer incorporates collaborative mechanisms into its algorithmic structure, akin to the collective behavior observed in natural systems. Agents within the optimizer share information about their positions and experiences, leading to collective intelligence. This collaborative aspect allows the optimizer to benefit from a diverse range of insights and strategies, contributing to a more effective and comprehensive exploration of the search space.

4. Implementation and Testing

4.1. Overview of IEEE Congress on Evolutionary Computation (CEC2022)

The performance of our optimizer was also assessed using a diverse array of benchmark functions from the CEC2022 competition, as illustrated in Table 2. These functions are meticulously crafted to evaluate the effectiveness and versatility of evolutionary computation algorithms across different problem landscapes. The suite includes unimodal functions, like the shifted and fully rotated Zakharov function (F1), which test fundamental search capabilities and convergence behaviors. Multimodal functions, such as the shifted and fully rotated Levy function (F5), present numerous local optima to challenge the global search abilities of the algorithms. Hybrid functions, exemplified by hybrid function 3 (F8), combine elements from various problem types to simulate more complex and realistic optimization scenarios. Composition functions, like composition function 4 (F12), integrate multiple landscapes into a single test, thereby evaluating the algorithms’ adaptability and robustness. An illustration of the CEC2022 benchmark functions (F1–F6) is shown in Figure 2.

4.2. CEC2017 Benchmark Overview

To thoroughly assess the performance of our proposed optimizer, we employed the CEC2017 benchmark suite. These functions, outlined in Table 3, are methodically crafted to evaluate the robustness and efficacy of optimization algorithms across various complex scenarios. The suite encompasses a variety of function types, ranging from unimodal functions such as the rotated high-conditioned elliptic function (F1) to complex multimodal functions like the shifted and rotated Rastrigin’s function (F9). Each function includes specific rotations and shifts to enhance complexity and eliminate any inherent bias toward particular coordinate axes. Additionally, the suite incorporates hybrid and composition functions, including hybrid function 1 (F17) and composition function 1 (F23), which combine multiple characteristics of simpler functions to simulate more intricate and realistic conditions. An illustration of selected CEC2017 functions is shown in Figure 3.

4.3. Performance Evaluation with State-of-the-Art Optimization Algorithms

To evaluate the performance of the OX optimizer, we compared its performance with various state-of-the-art optimization algorithms, as shown in Table 4.

5. Statistical Results and Discussion

5.1. Results over CEC2022

The performance of the OX optimizer compared to other state-of-the-art algorithms over the CEC2022 benchmark suite (F1–F12) demonstrates its superiority; as shown in Table 5, the OX optimizer consistently achieves competitive results across the different functions. For instance, in F1, the OX optimizer achieves a mean score of 3.03 × 10², significantly outperforming CMAES and WOA, which have mean scores of 2.13 × 10⁴ and 1.58 × 10⁴, respectively. This trend is observed across several functions, where the OX optimizer ranks among the top performers, often achieving first or second place. In F2, the OX optimizer achieves the best mean score of 4.05 × 10², while in F3, it ranks second with a mean score of 6.08 × 10², closely behind the leading algorithm. The robustness of the OX optimizer is further evident in functions like F6 and F7, where it maintains a leading position with minimal standard deviation, indicating consistent performance. Compared to traditional algorithms such as PSO, AOA, and SCA, the OX optimizer demonstrates lower mean scores and standard errors, highlighting its efficiency and reliability.

5.2. Results over CEC2017

The results of the OX optimizer in comparison to other state-of-the-art optimizers over the CEC2017 benchmark functions (F1–F14) highlight its competitive performance and robustness. As shown in Table 6, the OX optimizer consistently achieves lower mean values for most functions, indicating superior optimization capability. For example, in F1, the OX optimizer achieves a mean value of 1.50 × 10⁶, outperforming CMAES (6.02 × 10⁹), WOA (2.45 × 10⁶), and other algorithms. Similarly, for F2, the OX optimizer’s mean value of 5.56 × 10³ is significantly better than the means of CMAES (1.93 × 10¹¹) and BOA (1.14 × 10¹¹). The standard deviations and standard errors for the OX optimizer are also lower or comparable, suggesting stability and reliability in its optimization process. Furthermore, the OX optimizer shows strong performance in multimodal functions such as F3 and F4, maintaining lower means and ranks compared to competitors like PSO, AOA, and SCA.
Furthermore, OX demonstrates notable performance compared to other state-of-the-art optimizers over the CEC2017 benchmark functions (F15–F30), as evidenced by the results in Table 7. The OX optimizer frequently achieves lower mean values, indicating superior optimization capabilities. For example, in F15, the OX optimizer has a mean of 2.53 × 10³, outperforming other algorithms such as CMAES (4.17 × 10³) and WOA (5.86 × 10³). Similarly, in F16, the OX optimizer’s mean of 1.91 × 10³ is competitive with other top-performing algorithms. In multimodal and hybrid functions like F18 and F19, the OX optimizer shows its robustness with significantly lower means, such as 1.31 × 10⁴ for F18, compared to CMAES (2.90 × 10⁶) and BOA (4.13 × 10⁵). These results highlight the OX optimizer’s efficiency in handling complex and diverse optimization problems, making it a strong contender against other well-known algorithms such as PSO, AOA, and SCA. The standard deviations and standard errors further affirm the stability and reliability of the OX optimizer’s performance across different test functions.

5.3. OX Convergence Diagram

As can be seen in Figure 4, the convergence curves for the OX optimizer over CEC2022 (F1 to F6) show an initial improvement followed by stabilization. For F1, the optimizer starts with a high initial value and quickly decreases, then flattens, indicating efficient convergence to a good solution early on. Similar behavior is observed for F2, F3, and F4, where there is a steep drop in the best value obtained early in the iterations, followed by a plateau. For F5, the optimizer demonstrates a rapid decline, followed by a period of stability, indicating that most of the optimization work is done early. F6 exhibits a more gradual decline throughout the iterations, suggesting that this function requires continuous improvements over a longer period before stabilization. Moreover, the OX optimizer shows efficiency in finding good solutions quickly for F1 to F6, with minor refinements occurring later in the process.
Furthermore, as can be seen in Figure 5 for functions F7 to F12, the OX convergence also displays a pattern of rapid initial improvement but with slight variations in the rate and timing of stabilization. For F7, the optimizer achieves fast convergence to a near-optimal solution. F8 follows a similar pattern, with good convergence performance and the best value obtained, then a long period of stability. The convergence curve for F9 shows a steep decline in the best value within the initial iterations, followed by leveling off, demonstrating efficient early convergence and minimal improvements thereafter. For F10, the optimizer again shows a good convergence performance in the best value obtained, followed by a plateau, indicating that it quickly finds a good solution and maintains stability. Moreover, the OX optimizer demonstrates consistent and efficient performance across F7 to F12, converging to good solutions and maintaining stability throughout the iterations.
The convergence curves for the OX optimizer over the CEC2017 benchmarks (F1–F15) provide a detailed view of its performance. For F1, the curve shows a significant decrease in the best value obtained, with gradual improvements thereafter, indicating fast initial convergence followed by slower refinements. Similarly, F2 exhibits a steep drop early on, followed by a steady state, suggesting the optimizer quickly approaches a near-optimal solution. F3’s convergence curve is quite steep at the beginning and then levels off, showing efficient optimization in the early stages. F4 and F5 both show rapid initial convergence, with minor improvements thereafter, demonstrating the optimizer’s quick adaptability to these functions. F6 follows a similar pattern but with a more pronounced plateau phase, indicating some difficulty in further refining the solution.

5.4. OX Search History Diagram

As it can be seen in Figure 6, the search history plots for the OX optimizer over selected functions of CEC2017 benchmark functions reveal distinct patterns in its exploration and exploitation behavior. For functions F1–F6, the search history is concentrated around the origin with a high density of points in a circular pattern, indicating thorough exploration within a broad region and gradual convergence towards the optimal solutions. This demonstrates the optimizer’s ability to extensively search the solution space before focusing on more promising areas.
Furthermore, in Figure 7, for functions F7 and F12, the optimizer’s search appears more linear but also concentrated around the optimal points, suggesting that it quickly narrows down on promising regions but potentially at the cost of broader exploration, indicating balanced exploration across the dimensions with a focus on a central area.

5.5. Box-Plot Analysis

Analyzing the box-plot for the OX optimizer across various functions provides valuable insights into the optimizer’s performance and variability. As can be seen in Figure 8 and Figure 9, for F1, the fitness scores show a wide range with significant spread, indicating considerable variability in the optimizer’s performance. F2 follows a similar pattern, though the spread is slightly narrower, indicating a slight improvement in consistency. However, there is still a high median score suggesting room for better optimization. For F3, the spread narrows further, indicating improved consistency, although the median score remains relatively high, showing that there is still variability in achieving lower fitness scores. F4 shows a more pronounced narrowing of the spread, indicating a further improvement in consistency, with the median score slightly lower, reflecting better optimization performance.
F5 displays a wider spread once again, with some outliers, suggesting that the optimizer encounters more difficulty with this function. The median score is high, emphasizing the challenge posed by this function. F6 shows a substantial spread with a higher range of scores, indicating significant variability and difficulty in optimization. In F7, the spread narrows with fewer outliers, reflecting improved consistency, but the median score remains relatively high. F8 shows a similar pattern, with a wide spread and a high median score. F9 exhibits a substantial spread and a high median score, with some outliers indicating variability and difficulty in optimization. Finally, F12 shows a significant spread with a high median score, reflecting variability in performance and the challenge of achieving lower fitness scores. Moreover, the OX optimizer shows variability in performance across different functions, with certain functions posing more significant challenges and resulting in higher median fitness scores and broader spreads. This analysis highlights areas where the optimizer could potentially be improved to achieve more consistent and lower fitness scores across different benchmark functions.

5.6. OX Heat Map Analysis

The sensitivity analysis heatmaps for the functions illustrate how the performance of the optimizer is influenced by varying the number of search agents and the maximum iterations. As can be seen in Figure 10 and Figure 11, the heatmap of OX over F10 indicates that the optimizer performs best with around 30 to 40 search agents and 300 iterations, achieving the lowest fitness scores. As the number of search agents increases beyond 40, the performance slightly decreases. F9’s heatmap reveals a consistent performance improvement with increasing iterations, particularly noticeable with 30 and 40 search agents, where the fitness scores are more favorable in the mid-range iterations.
For F8, the optimizer exhibits optimal performance with 30 search agents and 300 iterations, where the fitness scores are minimized. This trend is slightly altered for F7, where the best performance is seen with 20 search agents and around 300 iterations, highlighting a different optimal configuration. F6 shows significant variability, indicating that higher iterations and search agents generally lead to better performance, but there are notable fluctuations. For F5, the optimal configuration is around 40 search agents and 300 iterations, achieving the lowest fitness scores. F4’s heatmap suggests a broader optimal range, with good performance across various combinations, but slightly favoring 30 search agents and 300 iterations. F3 displays a consistent improvement with increasing iterations, particularly with 30 and 40 search agents. F2 and F1 also show similar trends, where mid-range iterations with 30 to 40 search agents generally yield better fitness scores. The heatmaps collectively demonstrate the optimizer’s sensitivity to these parameters, highlighting the importance of tuning for optimal performance.

5.7. OX Histogram Analysis

As can be seen in Figure 12 and Figure 13, the OX histograms of the final fitness values for the OX optimizer across CEC2022 benchmark functions provide a comprehensive view of the distribution of final fitness values achieved during optimization runs. For F1, the histogram shows a broad distribution with a significant concentration of values around 2000 to 3000, indicating a tendency to converge around this range. However, there are also a notable number of runs that resulted in much higher fitness values, showing some variability in performance.
For F2, the histogram exhibits a bimodal distribution with peaks around 400 and 450, suggesting that the optimizer often converges to these two different ranges. This bimodal pattern could indicate the presence of multiple local minima or varying algorithm behavior. Similarly, the histogram for F3 shows a predominant concentration around 600 to 605, highlighting a more consistent performance with lesser variability. The histograms for F4, F5, F6, F7, F8, F9, and F12 also show distinct patterns, with F6 showing a wide range of final fitness values, indicating high variability and potential challenges in optimization, while others like F8 and F9 exhibit more concentrated distributions, suggesting more stable convergence behavior. Overall, these histograms reveal the optimizer’s performance consistency and the variability in achieving optimal solutions across different benchmark functions.

6. Case Study: Optimization of SVM Using an OX Optimizer to Detect Attacks

The process of optimizing SVM with the OX optimizer involves four key steps, mirroring the original approach but with modifications to leverage the OX optimizer’s strengths.

6.1. Dataset Description

The NSL-KDD dataset is an improved version of the original KDD Cup 1999 dataset, which is widely used for evaluating intrusion detection systems. The original KDD Cup 1999 dataset has been criticized for various issues, including redundant records that can lead to biased performance evaluations. To address these issues, the NSL-KDD dataset was introduced.
The NSL-KDD dataset contains 41 features for each connection record, along with a label that indicates whether the record is normal or represents a specific type of attack. These features fall into three main categories. Basic features are derived from packet headers without inspecting the payload and include attributes such as duration, protocol type, service, and flag. Content features are derived from the payload of the packet and include attributes such as the number of failed login attempts. Traffic features are computed using a two-second time window and include attributes such as the number of connections to the same host in the past two seconds.
The dataset includes a variety of attack types grouped into four main categories. Denial of service (DoS) attacks flood a network to disrupt service, with examples including SYN flood attacks. Remote to local (R2L) attacks occur when an attacker sends packets to a machine over the network without having an account on that machine, such as in password guessing attacks. User to root (U2R) attacks involve an attacker starting with access to a normal user account and then gaining root access to the system, as in buffer overflow attacks. Probe attacks involve scanning the network to gather information or find known vulnerabilities, such as port scanning.
The NSL-KDD dataset offers several improvements over the KDD Cup 1999 dataset. It has fewer redundant records, which helps provide a more balanced training and testing dataset. The dataset offers a more balanced distribution of attack and normal records, preventing classifiers from being biased towards more frequent records. Additionally, it contains records of varying difficulty levels, making it more challenging for classifiers and ensuring they generalize better.
The NSL-KDD dataset is divided into two main parts: KDDTrain+ and KDDTest+. KDDTrain+ is used for training models and contains 125,973 records, while KDDTest+ is used for testing models and contains 22,544 records. Each record in the NSL-KDD dataset includes a combination of numeric and categorical features, which can be preprocessed and transformed into a suitable format for machine learning models. The goal of using this dataset is to develop and evaluate intrusion detection systems that can effectively distinguish between normal and malicious network activities.

6.2. Data Pre-Processing

The NSL-KDD training dataset, comprising approximately 4,900,000 single connection vectors, each with 42 features indicating either an attack or normal status, is the starting point. In this step, labels are mapped to numeric values suitable for the SVM algorithm. Normal connections are assigned a target class of ‘zero’, and deviations (i.e., attacks) are assigned a target class of ‘one’. This step also involves filtering and modifying data, such as converting textual items to numeric values and ensuring each connection vector has 41 attributes.
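A hedged pandas sketch of this label mapping and categorical-to-numeric conversion is given below; the column names and file layout are assumptions for illustration, since the NSL-KDD files ship without a header row.

```python
import pandas as pd

# Assumed categorical columns of the NSL-KDD schema (41 feature columns plus a label)
categorical_cols = ["protocol_type", "service", "flag"]

def preprocess_nslkdd(df: pd.DataFrame) -> pd.DataFrame:
    """Map labels to {0, 1} and encode textual attributes as integer codes."""
    df = df.copy()
    # 'normal' -> 0, any attack label (e.g. 'neptune', 'smurf', ...) -> 1
    df["label"] = (df["label"].str.lower() != "normal").astype(int)
    # Convert textual items to numeric values
    for col in categorical_cols:
        df[col] = df[col].astype("category").cat.codes
    return df

# Example (assumed file and column names):
# df = preprocess_nslkdd(pd.read_csv("KDDTrain+.txt", header=None, names=feature_names))
```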

6.3. Conversion to LibSVM Format

The pre-processed datasets are then converted to LibSVM format, with categorical features translated into numeric values. This process involves defining two target classes: ‘zero’ for normal instances and ‘one’ for attacks or intrusions. The converted data is then saved in LibSVM format, consisting of a label indicating the target class, an index for each feature, and the corresponding value of that feature. This step is followed by linear scaling of the datasets to enhance SVM classification performance.
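A possible realization of this conversion and scaling step, using scikit-learn's MinMaxScaler and dump_svmlight_file as assumed tooling, is sketched below.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.datasets import dump_svmlight_file

def to_libsvm(features: np.ndarray, labels: np.ndarray, path: str) -> None:
    """Scale features linearly into [0, 1] and write '<label> <index>:<value> ...' lines."""
    X = MinMaxScaler(feature_range=(0, 1)).fit_transform(features)
    dump_svmlight_file(X, labels, path, zero_based=False)

# Illustrative call on already-numeric data (see the preprocessing sketch above):
# to_libsvm(df.drop(columns=["label"]).to_numpy(float), df["label"].to_numpy(), "train.libsvm")
```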

6.4. Optimization Using SVM and an OX Optimizer

Here, the primary modification is the replacement of PSO with the OX optimizer. The NSL-KDD dataset, scaled in the range [0, 1] and formatted in LibSVM, is used for this purpose. The SVM algorithm, with its various kernel functions like linear, RBF, and polynomial, requires optimization of parameters like cost (C) and gamma (g), particularly for the RBF kernel. The OX optimizer, with its attributes of strength, persistence, gradual progress, and collaborative work, is employed to optimize these parameters and features. The endurance and adaptability mechanisms of the OX optimizer make it particularly suitable for navigating through the complex parameter space of the SVM, ensuring that the algorithm consistently moves towards the most promising solutions without premature convergence. The adaptability of the OX optimizer allows it to fine-tune its approach based on the feedback from the ongoing optimization process, dynamically adjusting its parameters to enhance the SVM’s performance.
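To illustrate how such an optimization could be wired up, the sketch below scores a candidate (C, g) pair by the cross-validated accuracy of an RBF SVM; the log2 search scale, the number of cross-validation folds, and the use of scikit-learn's SVC are assumptions rather than the authors' exact setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def svm_fitness(agent, X, y, cv=3):
    """Decode an OX agent into (C, gamma) and return a value to be minimized.

    The agent is assumed to hold two numbers searched on a log2 scale;
    the concrete ranges are an assumption for this sketch."""
    C, gamma = 2.0 ** agent[0], 2.0 ** agent[1]
    clf = SVC(C=C, gamma=gamma, kernel="rbf")
    accuracy = cross_val_score(clf, X, y, cv=cv, scoring="accuracy").mean()
    return 1.0 - accuracy   # the OX optimizer minimizes, so minimize the error rate

# Illustrative use with the main-loop sketch from Section 3:
# best_fit, best_agent, _ = ox_optimizer(lambda a: svm_fitness(a, X_train, y_train),
#                                        dim=2, lb=-15, ub=15, N=20, max_iter=30)
```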

6.5. Implementation and Evaluation

In this final step, the optimized SVM model, with parameters and features refined by the OX optimizer, is implemented for classification tasks. The effectiveness of this optimized model is evaluated based on its accuracy and ability to classify new data correctly. The endurance aspect of the OX optimizer ensures that the optimization process is thorough, potentially leading to a more robust and reliable SVM model.

6.6. Classification Using SVM Optimized by an OX Optimizer

The classification process using SVM involves training the system with a portion of the data to identify several support vectors that represent these training data. These vectors form the foundation of the SVM model. In this revised approach, the OX optimizer is employed to optimize the parameters C and g (as previously denoted in Equations (1) and (2)) and to select the feature subset. This optimization aims to refine and improve the SVM model’s performance. The SVM then classifies unknown datasets using the following input and output data formats:
(X_i, Y_i), …, (X_n, Y_n),
where
X ∈ R^m
and
Y ∈ {0, 1}.
Here, (X_i, Y_i), …, (X_n, Y_n) represent the training data records, n is the number of samples, m is the dimensionality of the input vector X, and Y belongs to the category class ‘0’ or ‘1’.
In problems involving linear classification, a hyperplane divides the two categories. The formula for the hyperplane is given as follows:
(w · x) + b = 0.
The categorization can be described by the following conditions:
(w · x) + b ≥ 0 if Y_i = 1,
(w · x) + b < 0 if Y_i = 0.
The classification task involves training and testing data, each consisting of data instances. Each instance in the training set contains one “target value” (class labels: normal or attack) and several “attributes” (features). The goal of the SVM is to produce a model which predicts the target value of data instances in the testing set, given only the attributes. To achieve this, different kernel functions are used. In this experiment, the RBF kernel function is employed.
The formula for the RBF kernel function is given as follows:
K(X_i, X_j) = exp(−g · ‖X_i − X_j‖²).
The process of finding the support vectors from the training data is formulated as the following minimization problem:
minimize (1/2)‖w‖² + C Σ_i ξ_i,
where the slack variables ξ_i penalize training samples that violate the margin, and C controls the trade-off between margin width and training error.
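As a quick numerical check of the kernel formula above, the snippet below evaluates the RBF kernel for two illustrative feature vectors and compares it with scikit-learn's implementation; the value of g and the vectors are arbitrary examples.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

g = 0.5                                   # the gamma parameter tuned by the OX optimizer
x_i = np.array([0.1, 0.7, 0.3])
x_j = np.array([0.2, 0.5, 0.9])

k_manual = np.exp(-g * np.sum((x_i - x_j) ** 2))          # exp(-g * ||x_i - x_j||^2)
k_sklearn = rbf_kernel(x_i.reshape(1, -1), x_j.reshape(1, -1), gamma=g)[0, 0]
assert np.isclose(k_manual, k_sklearn)
```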
Robustness through strength and endurance: The OX optimizer’s approach to optimization is grounded in the robust nature of oxen. The strength of each agent in the optimizer, analogous to the fitness measure in evolutionary algorithms, is a critical factor in evaluating and advancing towards optimal solutions. The endurance aspect, mirroring the ox’s ability to work for extended periods, ensures that the optimizer can sustain its search over long iterations. This endurance is particularly beneficial in complex optimization landscapes where a thorough and persistent search is key to locating the optimal solution.
Adaptability to varied problem spaces: The OX optimizer exhibits a high degree of adaptability, akin to how oxen adapt to different environmental conditions. This adaptability allows the optimizer to adjust its search strategy dynamically based on the specific requirements and characteristics of the problem at hand. Whether the problem is continuous or discrete, simple or complex, the OX optimizer can tailor its approach to suit the specific nature of the problem, thereby enhancing its effectiveness.
Balanced exploration and exploitation: Central to the OX optimizer’s effectiveness is its ability to balance exploration and exploitation, a key challenge in any optimization algorithm. The optimizer’s steady and gradual progress, inspired by the ox’s methodical approach, ensures that new regions of the search space are explored to avoid premature convergence on local optima. Simultaneously, its mechanisms for exploiting promising areas ensure that once a potentially optimal region is identified, the optimizer focuses its efforts on refining and converging towards the best solution.
Collaborative information sharing: The OX optimizer incorporates the collaborative nature of oxen into its algorithmic structure. Agents within the optimizer share information about their positions and experiences, leading to collective intelligence. This collaborative aspect allows the optimizer to benefit from a diverse range of insights and strategies, contributing to a more effective and comprehensive exploration of the search space.
Systematic and incremental improvement: The algorithm’s approach to making incremental improvements in the solution mimics the ox’s steady nature. This systematic progression prevents drastic, suboptimal changes in the solution trajectory and supports a more stable convergence towards the optimal solution.

6.7. Experimental Results

In this experiment, the OX optimizer optimizes the parameters of the RBF SVM and also reduces the features of the training set, effectively removing noisy features. The training set contains 25,149 records, and the testing set includes 11,850 records. The experiment compares different kernel functions of SVM with feature selection in terms of accuracy, as measured by a confusion matrix.
The kernel functions examined include the linear, Gaussian, RBF, and polynomial kernels. The results, shown in Table 8, indicate that the RBF kernel, with parameters and features optimized by the OX optimizer, provides the highest accuracy. This improvement is attributed to the OX optimizer's strength in handling complex optimization landscapes, its gradual and steady progress in refining solutions, and its collaborative approach to feature selection, all of which contribute to an enhanced SVM model.
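Because the accuracies in Table 8 are derived from a confusion matrix, the following minimal sketch, assuming scikit-learn and placeholder labels (0 = normal, 1 = attack), shows how that accuracy is obtained from the matrix.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Placeholder labels: 0 = normal traffic, 1 = attack.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])

cm = confusion_matrix(y_true, y_pred)   # rows = actual class, columns = predicted class
accuracy = np.trace(cm) / cm.sum()      # correctly classified records over all records
print(cm)
print(f"accuracy = {accuracy:.3f}")
```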

6.8. Parameter and Feature Optimization Using an OX Optimizer

The OX optimizer, drawing inspiration from the characteristics of oxen, adopts a unique approach to optimization tasks. In parameter optimization, the goal is to find the optimal set of parameters for an SVM model, typically the parameters C and g. The process begins with the initialization of a population of 'OX agents', each representing a potential solution with a position corresponding to a set of parameter values. The fitness of each OX agent is evaluated based on the SVM's performance, with the aim of minimizing error rates or maximizing accuracy. The strength of each OX agent, analogous to the ox's load-carrying capacity, is associated with its fitness value. This strength, along with the OX agent's persistence, emulates the steady and consistent nature of oxen, ensuring gradual changes in parameter values. The steps of the OX optimizer for feature optimization are shown in Algorithm 2.
Collaboration is a key aspect of the OX optimizer, where OX agents share information and adjust their parameters based on collective knowledge. This collaborative behavior leads to a more effective search strategy. The optimizer’s endurance allows for sustained search efforts over time, while its adaptability helps adjust the strategy based on feedback from the environment. The position of each OX agent, representing the SVM parameters, is updated iteratively, and the algorithm continues until a satisfactory solution is found or a predefined number of iterations is reached.
In feature optimization, the focus shifts to selecting the optimal subset of features from a dataset to enhance the SVM’s performance. Each OX agent’s position in this scenario represents a binary-encoded feature set. The initial random selection of features introduces diversity, which is crucial for an effective optimization search. The fitness evaluation here is based on the SVM’s performance with the selected feature set. The OX optimizer employs its strength and collaboration mechanisms to refine the feature selection process. This iterative approach, coupled with the OX optimizer’s endurance and adaptability to feedback, ensures a thorough exploration of the feature space. The process converges upon identifying an optimal or near-optimal feature set or upon meeting a stopping criterion.
Algorithm 2 Feature optimization using an OX optimizer.
Step 1: Initialization
    Define l as a binary string of size 40.
Step 2: OX Agent Definition
    Define an OX agent as a set {position, fitness, bestPosition, bestFitness}.
Step 3: Initialize Population
    Create an array of OX agents of size max_size, and initialize the variables bestGlobalPosition and bestGlobalFitness.
    for each OX agent i from 1 to max_size do
        position = random_string(l)
        writeRandomFeatures(position, Str)  ▹ Generate feature.txt from the binary string, where Str is the scaled training dataset
        Fitness = SVMF(feature.txt, Stest, C, g)  ▹ Stest is the scaled test dataset; C and g are the parameters obtained from parameter optimization
        Update OX agent i with {position, Fitness, position, Fitness}
        if OX agent[i].Fitness < bestGlobalFitness then
            Update bestGlobalFitness and bestGlobalPosition
        end if
    end for
Step 4: Optimization Loop
    Set the iteration counter t = 0
    while t < max_iterations do
        for each OX agent j in the population do
            Select two parents using tournament selection
            Generate NewPosition using Order Crossover
            Apply a Levy-flight perturbation to NewPosition
            Ensure NewPosition is within bounds
            writeRandomFeatures(NewPosition, Str)
            NewFitness = SVMF(feature.txt, Stest, C, g)
            if NewFitness < OX agent[j].bestFitness then
                Update OX agent[j].bestPosition = NewPosition
                Update OX agent[j].bestFitness = NewFitness
            end if
            if NewFitness < bestGlobalFitness then
                Update bestGlobalPosition = NewPosition
                Update bestGlobalFitness = NewFitness
            end if
        end for
        t = t + 1
    end while
    return bestGlobalFitness and bestGlobalPosition
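For illustration, the following is a simplified Python sketch of the loop in Algorithm 2, assuming scikit-learn in place of LibSVM/SVMF; uniform crossover and a small random bit-flip are used here as stand-ins for the Order Crossover and Levy-flight steps, and all names, sizes, and rates are illustrative rather than the authors' implementation.

```python
# Simplified sketch of Algorithm 2: binary feature selection for an RBF SVM.
# Assumptions (not from the paper): scikit-learn replaces LibSVM, uniform crossover
# and a random bit-flip stand in for Order Crossover and the Levy-flight step,
# and the NSL-KDD matrices X_train, y_train, X_test, y_test are already loaded.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

def fitness(mask, X_train, y_train, X_test, y_test, C, g):
    """Error rate of an RBF SVM trained on the selected feature subset (lower is better)."""
    if mask.sum() == 0:                      # guard against an empty feature subset
        return 1.0
    cols = mask.astype(bool)
    clf = SVC(kernel="rbf", C=C, gamma=g)
    clf.fit(X_train[:, cols], y_train)
    return 1.0 - clf.score(X_test[:, cols], y_test)

def tournament(population, fitnesses, k=2):
    """Return the best of k randomly chosen agents (lowest error wins)."""
    idx = rng.choice(len(population), size=k, replace=False)
    return population[min(idx, key=lambda i: fitnesses[i])]

def optimize_features(X_train, y_train, X_test, y_test, C, g,
                      n_features=40, max_size=20, max_iterations=30):
    # Step 3: random binary positions for every OX agent (string length follows Algorithm 2).
    population = [rng.integers(0, 2, n_features) for _ in range(max_size)]
    fitnesses = [fitness(p, X_train, y_train, X_test, y_test, C, g) for p in population]
    best_idx = int(np.argmin(fitnesses))
    best_position, best_fitness = population[best_idx].copy(), fitnesses[best_idx]

    # Step 4: optimization loop.
    for _ in range(max_iterations):
        for j in range(max_size):
            p1 = tournament(population, fitnesses)
            p2 = tournament(population, fitnesses)
            cross = rng.integers(0, 2, n_features).astype(bool)   # uniform crossover mask
            new_position = np.where(cross, p1, p2)
            flip = rng.random(n_features) < 0.05                  # small random perturbation
            new_position = np.where(flip, 1 - new_position, new_position)
            new_fitness = fitness(new_position, X_train, y_train, X_test, y_test, C, g)
            if new_fitness < fitnesses[j]:
                population[j], fitnesses[j] = new_position, new_fitness
            if new_fitness < best_fitness:
                best_position, best_fitness = new_position.copy(), new_fitness
    return best_position, best_fitness
```

In practice, the returned binary mask plays the role of feature.txt in Algorithm 2, selecting the columns on which the final SVM is trained.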

7. Conclusions

The OX optimizer represents a significant advancement in the field of optimization algorithms, demonstrating robust performance across a variety of complex, high-dimensional problems. Its novel approach, which combines evolutionary algorithms with advanced local search techniques, ensures efficient global exploration and precise local exploitation. This has been validated on the CEC2022 and CEC2017 benchmark functions, where the algorithm achieved faster convergence and better solution quality than existing state-of-the-art algorithms.

The practical utility of the OX optimizer is further confirmed through its application to support vector machines (SVMs) for intrusion detection, as tested on the NSL-KDD dataset. The results highlight its ability to significantly enhance SVM performance, particularly with the RBF kernel, by facilitating effective exploration of the parameter space. The OX optimizer consistently outperformed traditional methods, demonstrating its capability to navigate complex optimization landscapes effectively. This success is attributed to the integration of oxen-like characteristics, such as strength, steady progress, collaboration, and endurance, into the optimization process; these traits enable a more nuanced and adaptive exploration of the parameter space, leading to more robust and reliable SVM models.

Author Contributions

Conceptualization, H.N.F.; Methodology, A.K.A.H. and H.N.F.; Formal analysis, A.K.A.H. and H.N.F.; Resources, A.K.A.H.; Writing—original draft, A.K.A.H. and H.N.F.; Writing—review and editing, A.K.A.H. and H.N.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Security Management Technology Group (SMT).

Data Availability Statement

Data are contained within the article.

Acknowledgments

We thank Samir M. Abu Tahoun, Security Management Technology Group (SMT) (http://www.smtgroup.org/ accessed on 1 July 2024), for the financial support of our research project.

Conflicts of Interest

The authors declare no conflict of interest.


References

  1. Ma, X.; Li, X.; Zhang, Q.; Tang, K.; Liang, Z.; Xie, W.; Zhu, Z. A survey on cooperative co-evolutionary algorithms. IEEE Trans. Evol. Comput. 2018, 23, 421–441. [Google Scholar] [CrossRef]
  2. Fakhouri, H.N.; Hudaib, A.; Sleit, A. Multivector particle swarm optimization algorithm. Soft Comput. 2020, 24, 11695–11713. [Google Scholar] [CrossRef]
  3. Guo, W.; Chen, M.; Wang, L.; Mao, Y.; Wu, Q. A survey of biogeography-based optimization. Neural Comput. Appl. 2017, 28, 1909–1926. [Google Scholar] [CrossRef]
  4. Fakhouri, S.N.; Hudaib, A.; Fakhouri, H.N. Enhanced optimizer algorithm and its application to software testing. J. Exp. Theor. Artif. Intell. 2020, 32, 885–907. [Google Scholar] [CrossRef]
  5. Abualigah, L.; Elaziz, M.A.; Khasawneh, A.M.; Alshinwan, M.; Ibrahim, R.A.; Al-Qaness, M.A.; Mirjalili, S.; Sumari, P.; Gandomi, A.H. Meta-heuristic optimization algorithms for solving real-world mechanical engineering design problems: A comprehensive survey, applications, comparative analysis, and results. Neural Comput. Appl. 2022, 1–30. [Google Scholar] [CrossRef]
  6. Timmis, J.; Knight, T.; Castro, L.N.; Hart, E. An overview of artificial immune systems. In Computation in Cells and Tissues: Perspectives and Tools of Thought; Springer: Berlin/Heidelberg, Germany, 2004; pp. 51–91. [Google Scholar]
  7. Fakhouri, H.N.; Hudaib, A.; Sleit, A. Hybrid particle swarm optimization with sine cosine algorithm and nelder–mead simplex for solving engineering design problems. Arab. J. Sci. Eng. 2020, 45, 3091–3109. [Google Scholar] [CrossRef]
  8. Kumar, A.; Pant, S.; Ram, M.; Yadav, O. Meta-Heuristic Optimization Techniques: Applications in Engineering; Walter de Gruyter GmbH & Co. KG: Berlin, Germany, 2022; Volume 10. [Google Scholar]
  9. Zhan, Z.-H.; Shi, L.; Tan, K.C.; Zhang, J. A survey on evolutionary computation for complex continuous optimization. Artif. Intell. Rev. 2022, 55, 59–110. [Google Scholar] [CrossRef]
  10. Alba, E.; Nakib, A.; Siarry, P. Metaheuristics for Dynamic Optimization; Springer: Berlin/Heidelberg, Germany, 2013; Volume 433. [Google Scholar]
  11. Guo, C.; Tang, H.; Niu, B.; Lee, C.B.P. A survey of bacterial foraging optimization. Neurocomputing 2021, 452, 728–746. [Google Scholar] [CrossRef]
  12. Ihsan, R.R.; Almufti, S.M.; Ormani, B.; Asaad, R.R.; Marqas, R.B. A survey on cat swarm optimization algorithm. Asian J. Res. Comput. Sci. 2021, 10, 22–32. [Google Scholar] [CrossRef]
  13. Fakhouri, H.N.; Hamad, F.; Alawamrah, A. Success history intelligent optimizer. J. Supercomput. 2022, 78, 6461–6502. [Google Scholar] [CrossRef]
  14. Siarry, P. Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2016; Volume 71. [Google Scholar]
  15. Sunnåker, M.; Busetto, A.G.; Numminen, E.; Corander, J.; Foll, M.; Dessimoz, C. Approximate Bayesian computation. PLoS Comput. Biol. 2013, 9, e1002803. [Google Scholar] [CrossRef]
  16. Deng, Z.; Huang, M.; Wan, N.; Zhang, J. The current development of structural health monitoring for bridges: A review. Buildings 2023, 13, 1360. [Google Scholar] [CrossRef]
  17. Soler-Dominguez, A.; Juan, A.A.; Kizys, R. A survey on financial applications of metaheuristics. ACM Comput. Surv. (CSUR) 2017, 50, 1–23. [Google Scholar] [CrossRef]
  18. Fakhouri, H.N.; Hwaitat, A.K.A.; Ryalat, M.; Hamad, F.; Zraqou, J.; Maaita, A.; Alkalaileh, M.; Sirhan, N.N. Improved path testing using multi-verse optimization algorithm and the integration of test path distance. Int. J. Interact. Mob. Technol. 2023, 17. [Google Scholar] [CrossRef]
  19. Swan, J.; Adriaensen, S.; Brownlee, A.E.; Hammond, K.; Johnson, C.G.; Kheiri, A.; Krawiec, F.; Merelo, J.J.; Minku, L.L.; Özcan, E.; et al. Metaheuristics “in the large”. Eur. J. Oper. Res. 2022, 297, 393–406. [Google Scholar] [CrossRef]
  20. Huang, M.; Ling, Z.; Sun, C.; Lei, Y.; Xiang, C.; Wan, Z.; Gu, J. Two-stage damage identification for bridge bearings based on sailfish optimization and element relative modal strain energy. Struct. Eng. Mech. Int’l J. 2023, 86, 715–730. [Google Scholar]
  21. Du, D.; Zhu, M.; Li, X.; Fei, M.; Bu, S.; Wu, L.; Li, K. A review on cybersecurity analysis, attack detection, and attack defense methods in cyber-physical power systems. J. Mod. Power Syst. Clean Energy 2022, 11, 727–743. [Google Scholar] [CrossRef]
  22. Fakhouri, H.N.; Alawadi, S.; Awaysheh, F.M.; Hamad, F. Novel hybrid success history intelligent optimizer with Gaussian transformation: Application in CNN hyperparameter tuning. Clust. Comput. 2023, 27, 3717–3739. [Google Scholar] [CrossRef]
  23. Lambora, A.; Gupta, K.; Chopra, K. Genetic algorithm-a literature review. In Proceedings of the 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon), Faridabad, India, 14–16 February 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 380–384. [Google Scholar]
  24. Delahaye, D.; Chaimatanan, S.; Mongeau, M. Simulated annealing: From basics to applications. Handb. Metaheuristics 2019, 272, 1–35. [Google Scholar]
  25. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA, 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  26. Fakhouri, H.N.; Awaysheh, F.M.; Alawadi, S.; Alkhalaileh, M.; Hamad, F. Four vector intelligent metaheuristic for data optimization. Computing 2024, 106, 2321–2359. [Google Scholar] [CrossRef]
  27. Omidvar, M.N.; Li, X.; Yao, X. A review of population-based metaheuristics for large-scale black-box global optimization—Part i. IEEE Trans. Evol. Comput. 2021, 26, 802–822. [Google Scholar] [CrossRef]
  28. Glover, F.; Laguna, M.; Marti, R. Principles of tabu search. Approx. Algorithms Metaheuristics 2007, 23, 1–12. [Google Scholar]
  29. Eltaeib, T.; Mahmood, A. Differential evolution: A survey and analysis. Appl. Sci. 2018, 8, 1945. [Google Scholar] [CrossRef]
  30. Nazari-Heris, M.; Mohammadi-Ivatloo, B.; Asadi, S.; Kim, J.-H.; Geem, Z.W. Harmony search algorithm for energy system applications: An updated review and analysis. J. Exp. Theor. Artif. Intell. 2019, 31, 723–749. [Google Scholar] [CrossRef]
  31. Rashedi, E.; Rashedi, E.; Nezamabadi-Pour, H. A comprehensive survey on gravitational search algorithm. Swarm Evol. Comput. 2018, 41, 141–158. [Google Scholar] [CrossRef]
  32. Wang, H.; Zhou, X.; Sun, H.; Yu, X.; Zhao, J.; Zhang, H.; Cui, L. Firefly algorithm with adaptive control parameters. Soft Comput. 2017, 21, 5091–5102. [Google Scholar] [CrossRef]
  33. Mareli, M.; Twala, B. An adaptive cuckoo search algorithm for optimisation. Appl. Comput. Inform. 2018, 14, 107–115. [Google Scholar] [CrossRef]
  34. Cui, Z.; Li, F.; Zhang, W. Bat algorithm with principal component analysis. Int. J. Mach. Learn. Cybern. 2019, 10, 603–622. [Google Scholar] [CrossRef]
  35. Garnett, R. Bayesian Optimization; Cambridge University Press: Cambridge, UK, 2023. [Google Scholar]
  36. Orús, R.; Mugel, S.; Lizaso, E. Quantum computing for finance: Overview and prospects. Rev. Phys. 2019, 4, 100028. [Google Scholar] [CrossRef]
  37. Kim, Y.; Bang, H. Introduction to Kalman filter and its applications. Introd. Implement. Kalman Filter 2018, 1, 1–16. [Google Scholar]
  38. Bardenet, R.; Doucet, A.; Holmes, C. On Markov chain Monte Carlo methods for tall data. J. Mach. Learn. Res. 2017, 18, 1–43. [Google Scholar]
  39. Xiang, W.-L.; Li, Y.-Z.; Meng, X.-L.; Zhang, C.-M.; An, M.-Q. A grey artificial bee colony algorithm. Appl. Soft Comput. 2017, 60, 1–17. [Google Scholar] [CrossRef]
  40. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  41. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl. Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  42. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  43. Liu, Y.; Li, Y.; Schiele, B.; Sun, Q. Online hyperparameter optimization for class-incremental learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2023; Volume 37, pp. 8906–8913. [Google Scholar]
  44. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  45. Chopra, N.; Ansari, M.M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
  46. Zhao, S.; Zhang, T.; Ma, S.; Wang, M. Sea-horse optimizer: A novel nature-inspired meta-heuristic for global optimization problems. Appl. Intell. 2023, 53, 11833–11860. [Google Scholar] [CrossRef]
  47. Conroy, A.D.B. Ox yokes: Culture, comfort and animal welfare. In Proceedings of the TAWS Workshop 2004, Bedford, UK, 15 April 2004. [Google Scholar]
  48. Lydekker, R. Wild Oxen, Sheep & Goats of All Lands, Living and Extinct; R. Ward: Mumbai, India, 1898; Volume 2. [Google Scholar]
  49. Davenport, J. Environmental Stress and Behavioural Adaptation; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  50. Pozdíšek, J.; Svozilová, M.; Mičová, P.; Rzonca, J.; Štýbnarová, M. The Bulls and Oxen Living-Activities in Winter and Summer Period with Utilization of Pasture. 2002. Available online: http://www.slpk.sk/eldo/2006/003_06/34.pdf (accessed on 10 June 2024).
  51. Hansen, N.; Müller, S.D.; Koumoutsakos, P. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evol. Comput. 2003, 11, 1–18. [Google Scholar] [CrossRef]
  52. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  53. Abualigah, L.; Elaziz, M.A.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile search algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
Figure 1. Oxen: the inspirational animal for the OX optimizer.
Figure 2. Illustration of the CEC2022 benchmark functions (F1–F6).
Figure 3. Illustration of selected CEC2017 benchmark functions (F6–F9, F11–F15).
Figure 4. Convergence curve analysis over CEC2022 benchmark functions (F1–F6).
Figure 5. Convergence curve analysis over CEC2022 benchmark functions (F7–F12).
Figure 6. Search history analysis for CEC2017 (F1–F6).
Figure 7. Search history analysis for CEC2017 (F7–F12).
Figure 8. Box-plot analysis over CEC2022 (F1–F6).
Figure 9. Box-plot analysis over CEC2022 (F7–F12).
Figure 10. Sensitivity analysis over CEC2022 (F1–F6).
Figure 11. Sensitivity analysis over CEC2022 (F7–F12).
Figure 12. Histogram analysis over CEC2022 (F1–F6).
Figure 13. Histogram analysis over CEC2022 (F7–F12).
Table 1. State-of-the-art metaheuristic algorithms according to their inspiration.
Algorithm | Inspiration | Characteristics
Genetic Algorithm (GA) | Natural selection [23] | Population-based, uses crossover and mutation
Particle Swarm Optimization (PSO) | Social behavior of birds [25] | Population-based, uses velocity and position updates
Simulated Annealing (SA) | Metallurgical annealing [24] | Single-solution-based, uses cooling schedules
Tabu Search (TS) | Memory-based search [28] | Single-solution-based, uses adaptive memory
Differential Evolution (DE) | Evolutionary strategies [29] | Population-based, uses mutation and recombination
Harmony Search (HS) | Musical improvisation [30] | Population-based, uses harmony memory
Firefly Algorithm (FA) | Flashing behavior of fireflies [32] | Population-based, uses light intensity and attraction
Artificial Bee Colony (ABC) | Foraging behavior of honey bees [39] | Population-based, uses employed, onlooker, and scout bees
Cuckoo Search (CS) | Brood parasitism of cuckoo birds [33] | Population-based, uses Levy flights and parasitic reproduction
Bat Algorithm (BA) | Echolocation behavior of bats [34] | Population-based, uses frequency tuning and pulse rate
Quantum Evolutionary Algorithm (QEA) | Quantum computing principles [36] | Population-based, uses quantum bits and quantum gates
Bayesian Optimization (BO) | Bayesian inference [35] | Probabilistic, uses surrogate models and acquisition functions
Whale Optimization Algorithm (WOA) | Hunting strategy of whales [40] | Population-based, unique mechanism for global optimization
Moth–Flame Optimization (MFO) | Navigation around flames [41] | Population-based, logarithmic spiral model for exploration–exploitation balance
Butterfly Optimization Algorithm (BOA) | Foraging behavior of butterflies [42] | Population-based, uses sensory modalities
Online Hyperparameter Optimization (OHO) | Online optimization [43] | Iterative improvement of hyperparameters
Arithmetic Optimization Algorithm (AOA) | Basic arithmetic operations [44] | Iterative improvement of solutions
Golden Jackal Optimization (GJO) | Social hunting behavior of golden jackals [45] | Population-based, inspired by social hunting
Sea-Horse Optimizer (SHO) | Natural behaviors of sea horses [46] | Population-based, inspired by sea-horse movement and predation
Table 2. Benchmark functions utilized in the 2022 IEEE Congress on Evolutionary Computation (CEC2022).
No. | Category | Function Description | f_min
F1 | Unimodal | Shifted and Fully Rotated Zakharov Function | 300
F2 | Multimodal | Shifted and Fully Rotated Rosenbrock's Function | 400
F3 | Multimodal | Shifted and Fully Rotated Expanded Schaffer's f6 Function | 600
F4 | Multimodal | Shifted and Fully Rotated Non-Continuous Rastrigin's Function | 800
F5 | Multimodal | Shifted and Fully Rotated Levy Function | 900
F6 | Hybrid | Hybrid Function 1 (N = 3) | 1800
F7 | Hybrid | Hybrid Function 2 (N = 6) | 2000
F8 | Hybrid | Hybrid Function 3 (N = 5) | 2200
F9 | Composition | Composition Function 1 (N = 5) | 2300
F10 | Composition | Composition Function 2 (N = 4) | 2400
F11 | Composition | Composition Function 3 (N = 5) | 2600
F12 | Composition | Composition Function 4 (N = 6) | 2700
Table 3. CEC2017 benchmark functions with their optimum values.
Name | D | Range | Function | f_opt
F1 | 10 | [−100,100] | Rotated High Conditioned Elliptic Function | 100
F2 | 10 | [−100,100] | Rotated Bent Cigar Function | 200
F3 | 10 | [−100,100] | Rotated Discus Function | 300
F4 | 10 | [−100,100] | Shifted and Rotated Rosenbrock's Function | 400
F5 | 10 | [−100,100] | Shifted and Rotated Ackley's Function | 500
F6 | 10 | [−100,100] | Shifted and Rotated Weierstrass Function | 600
F7 | 10 | [−100,100] | Shifted and Rotated Griewank's Function | 700
F8 | 10 | [−100,100] | Shifted Rastrigin's Function | 800
F9 | 10 | [−100,100] | Shifted and Rotated Rastrigin's Function | 900
F10 | 10 | [−100,100] | Shifted Schwefel's Function | 1000
F11 | 10 | [−100,100] | Shifted and Rotated Schwefel's Function | 1100
F12 | 10 | [−100,100] | Shifted and Rotated Katsuura Function | 1200
F13 | 10 | [−100,100] | Shifted and Rotated HappyCat Function | 1300
F14 | 10 | [−100,100] | Shifted and Rotated HGBat Function | 1400
F15 | 10 | [−100,100] | Shifted and Rotated Expanded Griewank's plus Rosenbrock's Function | 1500
F16 | 10 | [−100,100] | Shifted and Rotated Expanded Scaffer's F6 Function | 1600
F17 | 10 | [−100,100] | Hybrid Function 1 (N = 3) | 1700
F18 | 10 | [−100,100] | Hybrid Function 2 (N = 3) | 1800
F19 | 10 | [−100,100] | Hybrid Function 3 (N = 3) | 1900
F20 | 10 | [−100,100] | Hybrid Function 4 (N = 4) | 2000
F21 | 10 | [−100,100] | Hybrid Function 5 (N = 5) | 2100
F22 | 10 | [−100,100] | Hybrid Function 6 (N = 5) | 2200
F23 | 10 | [−100,100] | Composition Function 1 (N = 5) | 2300
F24 | 10 | [−100,100] | Composition Function 2 (N = 3) | 2400
F25 | 10 | [−100,100] | Composition Function 3 (N = 3) | 2500
F26 | 10 | [−100,100] | Composition Function 4 (N = 5) | 2600
F27 | 10 | [−100,100] | Composition Function 5 (N = 5) | 2700
F28 | 10 | [−100,100] | Composition Function 6 (N = 5) | 2800
F29 | 10 | [−100,100] | Composition Function 7 (N = 3) | 2900
F30 | 10 | [−100,100] | Composition Function 8 (N = 3) | 3000
Table 4. Summary of state-of-the-art optimization algorithms used in comparison.
Algorithm Name | Reference | Description
Covariance Matrix Adaptation Evolution Strategy (CMAES) | [51] | A powerful evolutionary algorithm known for its robust adaptation of the covariance matrix to guide the search process.
Whale Optimization Algorithm (WOA) | [40] | Mimics the hunting strategy of whales, providing a unique mechanism for global optimization.
Moth-Flame Optimization (MFO) | [41] | Uses a logarithmic spiral model to simulate the moths' navigation around flames, enhancing the exploration-exploitation balance.
Butterfly Optimization Algorithm (BOA) | [42] | Inspired by the foraging behavior of butterflies, using sensory modalities to locate optimal solutions.
Particle Swarm Optimization (PSO) | [25] | Well regarded for its simplicity and efficiency, simulating the social behavior of bird flocking to converge on high-quality solutions.
Online Hyperparameter Optimization (OHO) | [43] | Focuses on dynamically adjusting hyperparameters during the optimization process to improve performance.
Arithmetic Optimization Algorithm (AOA) | [44] | Employs basic arithmetic operations to iteratively improve solutions.
Sine Cosine Algorithm (SCA) | [52] | Utilizes mathematical sine and cosine functions for optimization.
Golden Jackal Optimization (GJO) | [45] | Inspired by the social hunting behavior of golden jackals, optimizing by mimicking these strategies.
Sea-Horse Optimizer (SHO) | [46] | Draws from the natural movement and predation behaviors of sea horses for its optimization process.
Reptile Search Algorithm (RSA) | [53] | Uses the adaptive foraging behavior of reptiles to enhance search strategies and solution convergence.
Table 5. Statistical results on CEC2022 (F1–F12) with FES = 1000 and 30 independent runs.
Fun | Statistics | OX | CMAES | WOA | MFO | BOA | PSO | OHO | AOA | SCA | GJO | SHO | RSA
F1Mean3.03 × 10 2 2.13 × 10 4 1.58 × 10 4 2.27 × 10 3 7.07 × 10 3 3.11 × 10 2 1.47 × 10 4 9.08 × 10 3 1.17 × 10 3 2.62 × 10 3 2.05 × 10 3 7.79 × 10 3
Std1.66 × 10 0 9.67 × 10 3 1.35 × 10 4 2.47 × 10 3 1.46 × 10 3 3.14 × 10 1 5.89 × 10 3 3.40 × 10 3 5.26 × 10 2 1.91 × 10 3 2.05 × 10 3 3.27 × 10 3
SEM5.24 × 10 1 3.06 × 10 3 4.26 × 10 3 7.80 × 10 2 4.63 × 10 2 9.92 × 10 0 1.86 × 10 3 1.07 × 10 3 1.66 × 10 2 6.03 × 10 2 6.49 × 10 2 1.04 × 10 3
Rank112115721093648
F2Mean4.05 × 10 2 6.42 × 10 2 4.37 × 10 2 4.24 × 10 2 2.12 × 10 3 4.19 × 10 2 3.31 × 10 3 9.23 × 10 2 4.67 × 10 2 4.42 × 10 2 4.35 × 10 2 7.64 × 10 2
Std3.82 × 10 0 1.27 × 10 2 3.34 × 10 1 3.06 × 10 1 7.00 × 10 2 3.03 × 10 1 1.13 × 10 3 2.47 × 10 2 2.70 × 10 1 2.85 × 10 1 3.69 × 10 1 2.45 × 10 2
SEM1.21 × 10 0 4.03 × 10 1 1.06 × 10 1 9.67 × 10 0 2.21 × 10 2 9.58 × 10 0 3.56 × 10 2 7.81 × 10 1 8.53 × 10 0 9.02 × 10 0 1.17 × 10 1 7.75 × 10 1
Rank185311212107649
F3Mean6.08 × 10 2 6.33 × 10 2 6.28 × 10 2 6.10 × 10 2 6.38 × 10 2 6.10 × 10 2 6.60 × 10 2 6.35 × 10 2 6.19 × 10 2 6.06 × 10 2 6.10 × 10 2 6.44 × 10 2
Std1.08 × 10 1 2.36 × 10 1 6.10 × 10 0 3.43 × 10 0 6.34 × 10 0 1.46 × 10 0 2.15 × 10 0 5.72 × 10 0 5.33 × 10 0 2.90 × 10 0 5.59 × 10 0 4.55 × 10 0
SEM3.41 × 10 0 7.45 × 10 0 1.93 × 10 0 1.09 × 10 0 2.00 × 10 0 4.62 × 10 1 6.80 × 10 1 1.81 × 10 0 1.69 × 10 0 9.16 × 10 1 1.77 × 10 0 1.44 × 10 0
Rank287510412961311
F4Mean8.32 × 10 2 8.32 × 10 2 8.41 × 10 2 8.35 × 10 2 8.47 × 10 2 8.39 × 10 2 8.46 × 10 2 8.32 × 10 2 8.42 × 10 2 8.34 × 10 2 8.34 × 10 2 8.51 × 10 2
Std2.07 × 10 1 9.07 × 10 0 1.62 × 10 1 8.51 × 10 0 5.63 × 10 0 3.31 × 10 0 5.02 × 10 0 8.56 × 10 0 1.06 × 10 1 8.59 × 10 0 6.87 × 10 0 5.18 × 10 0
SEM6.55 × 10 0 2.87 × 10 0 5.11 × 10 0 2.69 × 10 0 1.78 × 10 0 1.05 × 10 0 1.59 × 10 0 2.71 × 10 0 3.37 × 10 0 2.72 × 10 0 2.17 × 10 0 1.64 × 10 0
Rank128611710394512
F5Mean9.23 × 10 2 9.00 × 10 2 1.44 × 10 3 1.05 × 10 3 1.28 × 10 3 9.31 × 10 2 1.60 × 10 3 1.33 × 10 3 9.97 × 10 2 9.75 × 10 2 1.05 × 10 3 1.50 × 10 3
Std6.99 × 10 1 0.00 × 10 0 2.35 × 10 2 2.35 × 10 2 1.66 × 10 2 7.97 × 10 1 1.41 × 10 2 8.26 × 10 1 3.96 × 10 1 8.40 × 10 1 1.53 × 10 2 9.77 × 10 1
SEM2.21 × 10 1 0.00 × 10 0 7.44 × 10 1 7.44 × 10 1 5.24 × 10 1 2.52 × 10 1 4.47 × 10 1 2.61 × 10 1 1.25 × 10 1 2.66 × 10 1 4.82 × 10 1 3.09 × 10 1
Rank211068312954711
F6Mean5.03 × 10 3 1.94 × 10 7 3.09 × 10 3 5.34 × 10 3 5.83 × 10 7 5.15 × 10 3 1.17E+093.99 × 10 3 2.35 × 10 6 8.94 × 10 3 5.14 × 10 3 4.50 × 10 7
Std1.46 × 10 3 2.27 × 10 7 1.48 × 10 3 2.63 × 10 3 1.65 × 10 8 1.91 × 10 3 8.53 × 10 8 1.73 × 10 3 1.42 × 10 6 4.80 × 10 3 1.37 × 10 3 2.49 × 10 7
SEM4.61 × 10 2 7.17 × 10 6 4.68 × 10 2 8.31 × 10 2 5.21 × 10 7 6.04 × 10 2 2.70 × 10 8 5.46 × 10 2 4.49 × 10 5 1.52 × 10 3 4.33 × 10 2 7.89 × 10 6
Rank391611512287410
F7Mean2.02 × 10 3 2.05 × 10 3 2.05 × 10 3 2.02 × 10 3 2.08 × 10 3 2.02 × 10 3 2.12 × 10 3 2.11 × 10 3 2.06 × 10 3 2.04 × 10 3 2.03 × 10 3 2.14 × 10 3
Std9.00 × 10 0 3.16 × 10 1 2.38 × 10 1 8.10 × 10 0 1.26 × 10 1 6.03 × 10 0 1.27 × 10 1 2.13 × 10 1 5.47 × 10 0 9.95 × 10 0 1.05 × 10 1 3.38 × 10 1
SEM2.85 × 10 0 9.99 × 10 0 7.53 × 10 0 2.56 × 10 0 3.99 × 10 0 1.91 × 10 0 4.01 × 10 0 6.73 × 10 0 1.73 × 10 0 3.15 × 10 0 3.32 × 10 0 1.07 × 10 1
Rank176392111085412
F8Mean2.23 × 10 3 2.25 × 10 3 2.23 × 10 3 2.23 × 10 3 2.30 × 10 3 2.23 × 10 3 2.41 × 10 3 2.29 × 10 3 2.23 × 10 3 2.23 × 10 3 2.23 × 10 3 2.25 × 10 3
Std3.85 × 10 1 1.41 × 10 1 5.43 × 10 0 3.89 × 10 0 7.65 × 10 1 7.21 × 10 0 1.25 × 10 2 7.34 × 10 1 2.26 × 10 0 3.00 × 10 0 2.26 × 10 0 1.12 × 10 1
SEM1.22 × 10 1 4.47 × 10 0 1.72 × 10 0 1.23 × 10 0 2.42 × 10 1 2.28 × 10 0 3.94 × 10 1 2.32 × 10 1 7.13 × 10 1 9.48 × 10 1 7.14 × 10 1 3.53 × 10 0
Rank184711612103259
F9Mean2.53 × 10 3 2.55 × 10 3 2.55 × 10 3 2.53 × 10 3 2.80 × 10 3 2.53 × 10 3 2.83 × 10 3 2.67 × 10 3 2.55 × 10 3 2.59 × 10 3 2.59 × 10 3 2.68 × 10 3
Std1.05 × 10 0 5.49 × 10 1 2.14 × 10 1 8.41 × 10 0 5.74 × 10 1 1.38E-056.75 × 10 1 4.03 × 10 1 1.58 × 10 1 3.90 × 10 1 3.36 × 10 1 6.23 × 10 1
SEM3.33 × 10 1 1.73 × 10 1 6.77 × 10 0 2.66 × 10 0 1.81 × 10 1 4.35E-062.14 × 10 1 1.27 × 10 1 5.00 × 10 0 1.23 × 10 1 1.06 × 10 1 1.97 × 10 1
Rank154311212967810
F10Mean2.59 × 10 3 2.97 × 10 3 2.63 × 10 3 2.59 × 10 3 2.50 × 10 3 2.59 × 10 3 2.88 × 10 3 2.64 × 10 3 2.50 × 10 3 2.59 × 10 3 2.59 × 10 3 2.63 × 10 3
Std1.10 × 10 2 5.75 × 10 2 1.86 × 10 2 5.02 × 10 0 1.56 × 10 0 6.00 × 10 1 2.35 × 10 2 1.53 × 10 2 5.19 × 10 1 6.19 × 10 1 6.24 × 10 1 9.06 × 10 1
SEM3.47 × 10 1 1.82 × 10 2 5.89 × 10 1 1.59 × 10 0 4.94 × 10 1 1.90 × 10 1 7.42 × 10 1 4.83 × 10 1 1.64 × 10 1 1.96 × 10 1 1.97 × 10 1 2.87 × 10 1
Rank312962711101548
F11Mean2.80 × 10 3 3.03 × 10 3 2.89 × 10 3 2.80 × 10 3 3.06 × 10 3 2.81 × 10 3 4.11 × 10 3 3.44 × 10 3 2.77 × 10 3 2.88 × 10 3 2.83 × 10 3 3.23 × 10 3
Std1.74 × 10 2 2.03 × 10 2 1.65 × 10 2 1.56 × 10 2 2.80 × 10 2 1.39 × 10 2 5.47 × 10 2 4.33 × 10 2 5.77 × 10 0 1.90 × 10 2 1.83 × 10 2 4.42 × 10 2
SEM5.50 × 10 1 6.41 × 10 1 5.21 × 10 1 4.93 × 10 1 8.86 × 10 1 4.39 × 10 1 1.73 × 10 2 1.37 × 10 2 1.82 × 10 0 6.02 × 10 1 5.80 × 10 1 1.40 × 10 2
Rank287394121116510
F12Mean2.87 × 10 3 2.88 × 10 3 2.90 × 10 3 2.86 × 10 3 2.93 × 10 3 2.86 × 10 3 3.20 × 10 3 2.97 × 10 3 2.87 × 10 3 2.86 × 10 3 2.88 × 10 3 2.95 × 10 3
Std1.47 × 10 1 4.87 × 10 0 4.29 × 10 1 1.88 × 10 0 1.77 × 10 1 2.92 × 10 0 1.32 × 10 2 4.68 × 10 1 1.07 × 10 0 2.28 × 10 0 1.23 × 10 1 5.34 × 10 1
SEM4.64 × 10 0 1.54 × 10 0 1.36 × 10 1 5.94 × 10 1 5.58 × 10 0 9.24 × 10 1 4.18 × 10 1 1.48 × 10 1 3.40 × 10 1 7.21 × 10 1 3.89 × 10 0 1.69 × 10 1
Table 6. Statistical results over CEC2017 (F1–F14) with FES = 1000 and 30 independent runs.
Fun | Measurement | OX | CMAES | WOA | MFO | BOA | PSO | OHO | AOA | SCA | GJO | SHO | RSA
F1Mean1.50 × 10 6 6.02 × 10 9 2.45 × 10 6 1.89 × 10 8 9.81 × 10 9 1.45 × 10 8 1.48 × 10 10 7.62 × 10 9 7.81 × 10 8 4.77 × 10 8 5.11 × 10 8 1.10 × 10 10
Std4.91 × 10 5 2.17 × 10 9 3.36 × 10 5 4.63 × 10 8 2.59 × 10 9 2.30 × 10 8 1.60 × 10 9 2.28 × 10 9 3.33 × 10 8 3.01 × 10 8 7.89 × 10 8 3.46 × 10 9
SEM2.01 × 10 5 8.86 × 10 8 1.37 × 10 5 1.89 × 10 8 1.06 × 10 9 9.41 × 10 7 6.55 × 10 8 9.30 × 10 8 1.36 × 10 8 1.23 × 10 8 3.22 × 10 8 1.41 × 10 9
Rank182410312975611
F2Mean5.56 × 10 3 1.93 × 10 11 2.47 × 10 4 1.56 × 10 6 1.14 × 10 11 6.77 × 10 6 4.58 × 10 11 3.33 × 10 9 1.73 × 10 7 1.37 × 10 7 6.66 × 10 6 5.18 × 10 12
Std1.30 × 10 4 2.58 × 10 11 3.20 × 10 4 2.74 × 10 6 2.34 × 10 11 1.62 × 10 7 7.45 × 10 11 6.03 × 10 9 3.48 × 10 7 2.04 × 10 7 1.62 × 10 7 7.02 × 10 12
SEM5.29 × 10 3 1.05 × 10 11 1.31 × 10 4 1.12 × 10 6 9.55 × 10 10 6.61 × 10 6 3.04 × 10 11 2.46 × 10 9 1.42 × 10 7 8.35 × 10 6 6.63 × 10 6 2.87 × 10 12
Rank110239511876412
F3Mean3.04 × 10 2 3.81 × 10 4 1.25 × 10 3 9.06 × 10 3 1.09 × 10 4 3.55 × 10 3 1.09 × 10 4 8.67 × 10 3 1.30 × 10 3 2.56 × 10 3 2.35 × 10 3 8.24 × 10 3
Std1.51 × 10 0 1.68 × 10 4 9.83 × 10 2 7.97 × 10 3 2.05 × 10 3 2.48 × 10 3 8.43 × 10 2 2.91 × 10 3 6.22 × 10 2 1.94 × 10 3 1.91 × 10 3 3.67 × 10 3
SEM6.18 × 10 1 6.88 × 10 3 4.01 × 10 2 3.25 × 10 3 8.38 × 10 2 1.01 × 10 3 3.44 × 10 2 1.19 × 10 3 2.54 × 10 2 7.91 × 10 2 7.78 × 10 2 1.50 × 10 3
Rank112291061183547
F4Mean4.07 × 10 2 6.92 × 10 2 4.31 × 10 2 4.07 × 10 2 1.36 × 10 3 4.59 × 10 2 1.87 × 10 3 8.82 × 10 2 4.35 × 10 2 4.27 × 10 2 4.37 × 10 2 9.77 × 10 2
Std2.01 × 10 0 2.08 × 10 2 4.26 × 10 1 1.88 × 10 0 3.74 × 10 2 4.99 × 10 1 5.61 × 10 2 3.61 × 10 2 8.66 × 10 0 1.84 × 10 1 2.52 × 10 1 5.02 × 10 2
SEM8.22 × 10 1 8.49 × 10 1 1.74 × 10 1 7.66 × 10 1 1.53 × 10 2 2.04 × 10 1 2.29 × 10 2 1.47 × 10 2 3.54 × 10 0 7.51 × 10 0 1.03 × 10 1 2.05 × 10 2
Rank284111712953610
F5Mean5.39 × 10 2 5.91 × 10 2 5.66 × 10 2 5.25 × 10 2 5.88 × 10 2 5.21 × 10 2 6.11 × 10 2 5.53 × 10 2 5.46 × 10 2 5.39 × 10 2 5.25 × 10 2 5.78 × 10 2
Std2.28 × 10 1 9.46 × 10 0 1.33 × 10 1 9.07 × 10 0 8.82 × 10 0 7.92 × 10 0 1.89 × 10 1 2.15 × 10 1 4.22 × 10 0 1.11 × 10 1 5.42 × 10 0 9.14 × 10 0
SEM9.32 × 10 0 3.86 × 10 0 5.44 × 10 0 3.70 × 10 0 3.60 × 10 0 3.23 × 10 0 7.73 × 10 0 8.77 × 10 0 1.72 × 10 0 4.55 × 10 0 2.21 × 10 0 3.73 × 10 0
Rank411831011276529
F6Mean6.06 × 10 2 6.30 × 10 2 6.41 × 10 2 6.00 × 10 2 6.40 × 10 2 6.09 × 10 2 6.57 × 10 2 6.37 × 10 2 6.16 × 10 2 6.07 × 10 2 6.09 × 10 2 6.44 × 10 2
Std2.64 × 10 0 2.54 × 10 1 1.12 × 10 1 3.68E-031.85 × 10 0 8.88 × 10 0 6.57 × 10 0 1.02 × 10 1 3.67 × 10 0 2.78 × 10 0 5.32 × 10 0 8.78 × 10 0
SEM1.08 × 10 0 1.04 × 10 1 4.56 × 10 0 1.50E-037.55 × 10 1 3.62 × 10 0 2.68 × 10 0 4.17 × 10 0 1.50 × 10 0 1.14 × 10 0 2.17 × 10 0 3.58 × 10 0
Rank271019512863411
F7Mean7.28 × 10 2 7.30 × 10 2 7.82 × 10 2 7.42 × 10 2 7.80 × 10 2 7.37 × 10 2 8.08 × 10 2 7.97 × 10 2 7.73 × 10 2 7.47 × 10 2 7.49 × 10 2 8.08 × 10 2
Std6.01 × 10 0 3.17 × 10 0 3.39 × 10 1 1.80 × 10 1 1.32 × 10 1 7.45 × 10 0 8.35 × 10 0 2.16 × 10 1 9.48 × 10 0 4.94 × 10 0 8.51 × 10 0 7.69 × 10 0
SEM2.46 × 10 0 1.29 × 10 0 1.38 × 10 1 7.35 × 10 0 5.37 × 10 0 3.04 × 10 0 3.41 × 10 0 8.82 × 10 0 3.87 × 10 0 2.02 × 10 0 3.48 × 10 0 3.14 × 10 0
Rank129483111075612
F8Mean8.24 × 10 2 8.25 × 10 2 8.40 × 10 2 8.30 × 10 2 8.53 × 10 2 8.20 × 10 2 8.48 × 10 2 8.28 × 10 2 8.36 × 10 2 8.21 × 10 2 8.25 × 10 2 8.54 × 10 2
Std1.17 × 10 1 4.07 × 10 0 1.49 × 10 1 4.53 × 10 0 5.90 × 10 0 5.84 × 10 0 1.37 × 10 0 7.24 × 10 0 4.16 × 10 0 6.73 × 10 0 5.32 × 10 0 8.29 × 10 0
SEM4.77 × 10 0 1.66 × 10 0 6.09 × 10 0 1.85 × 10 0 2.41 × 10 0 2.38 × 10 0 5.59 × 10 1 2.95 × 10 0 1.70 × 10 0 2.75 × 10 0 2.17 × 10 0 3.39 × 10 0
Rank359711110682412
F9Mean9.01 × 10 2 9.00 × 10 2 1.41 × 10 3 9.48 × 10 2 1.42 × 10 3 9.50 × 10 2 1.76 × 10 3 1.28 × 10 3 9.73 × 10 2 1.03 × 10 3 9.60 × 10 2 1.44 × 10 3
Std4.12 × 10 1 0.00 × 10 0 2.25 × 10 2 1.11 × 10 2 1.19 × 10 2 3.98 × 10 1 9.98 × 10 1 1.42 × 10 2 2.38 × 10 1 8.15 × 10 1 4.34 × 10 1 9.90 × 10 1
SEM1.68 × 10 1 0.00 × 10 0 9.18 × 10 1 4.55 × 10 1 4.86 × 10 1 1.62 × 10 1 4.07 × 10 1 5.79 × 10 1 9.71 × 10 0 3.33 × 10 1 1.77 × 10 1 4.04 × 10 1
Rank219310412867511
F10Mean1.69 × 10 3 2.68 × 10 3 2.04 × 10 3 1.79 × 10 3 2.55 × 10 3 1.99 × 10 3 3.08 × 10 3 2.14 × 10 3 2.43 × 10 3 1.86 × 10 3 1.75 × 10 3 2.58 × 10 3
Std3.55 × 10 2 2.37 × 10 2 2.78 × 10 2 1.99 × 10 2 2.21 × 10 2 4.57 × 10 2 1.08 × 10 2 1.80 × 10 2 9.02 × 10 1 2.31 × 10 2 2.05 × 10 2 1.39 × 10 2
SEM1.45 × 10 2 9.68 × 10 1 1.14 × 10 2 8.13 × 10 1 9.04 × 10 1 1.87 × 10 2 4.42 × 10 1 7.35 × 10 1 3.68 × 10 1 9.41 × 10 1 8.36 × 10 1 5.66 × 10 1
Rank111639512784210
F11Mean1.14 × 10 3 1.46 × 10 3 1.19 × 10 3 1.27 × 10 3 1.46 × 10 3 1.16 × 10 3 3.08 × 10 3 1.50 × 10 3 1.20 × 10 3 1.18 × 10 3 1.12 × 10 3 3.64 × 10 3
Std6.50 × 10 1 1.94 × 10 2 6.54 × 10 1 2.48 × 10 2 2.15 × 10 2 6.01 × 10 1 1.74 × 10 3 4.97 × 10 2 5.15 × 10 1 1.02 × 10 2 8.59 × 10 0 1.44 × 10 3
SEM2.65 × 10 1 7.92 × 10 1 2.67 × 10 1 1.01 × 10 2 8.77 × 10 1 2.45 × 10 1 7.09 × 10 2 2.03 × 10 2 2.10 × 10 1 4.14 × 10 1 3.51 × 10 0 5.87 × 10 2
Rank295783111064112
F12Mean1.16 × 10 5 1.55 × 10 8 3.66 × 10 6 1.39 × 10 6 1.89 × 10 8 1.09 × 10 6 9.47 × 10 8 6.49 × 10 6 1.04 × 10 7 4.58 × 10 5 6.94 × 10 5 2.44 × 10 8
Std4.62 × 10 4 1.31 × 10 8 4.24 × 10 6 3.34 × 10 6 3.03 × 10 8 1.17 × 10 6 6.20 × 10 8 1.04 × 10 7 8.62 × 10 6 6.06 × 10 5 5.44 × 10 5 1.71 × 10 8
SEM1.89 × 10 4 5.33 × 10 7 1.73 × 10 6 1.36 × 10 6 1.24 × 10 8 4.78 × 10 5 2.53 × 10 8 4.25 × 10 6 3.52 × 10 6 2.47 × 10 5 2.22 × 10 5 6.96 × 10 7
Rank196510412782311
F13Mean9.10 × 10 3 8.47 × 10 4 1.47 × 10 4 1.01 × 10 4 2.31 × 10 5 1.72 × 10 4 1.88 × 10 7 1.55 × 10 4 3.18 × 10 4 9.50 × 10 3 1.06 × 10 4 8.59 × 10 6
Std4.41 × 10 3 9.71 × 10 4 6.81 × 10 3 1.18 × 10 4 3.54 × 10 5 9.24 × 10 3 1.77 × 10 7 1.18 × 10 4 1.94 × 10 4 6.72 × 10 3 5.98 × 10 3 8.68 × 10 6
SEM1.80 × 10 3 3.96 × 10 4 2.78 × 10 3 4.80 × 10 3 1.44 × 10 5 3.77 × 10 3 7.21 × 10 6 4.81 × 10 3 7.92 × 10 3 2.74 × 10 3 2.44 × 10 3 3.55 × 10 6
Rank195310712682411
F14Mean1.48 × 10 3 3.11 × 10 3 2.26 × 10 3 2.07 × 10 3 3.63 × 10 3 2.93 × 10 3 7.94 × 10 4 4.81 × 10 3 1.53 × 10 3 2.10 × 10 3 3.41 × 10 3 3.99 × 10 3
Std3.02 × 10 1 1.14 × 10 3 1.43 × 10 3 6.11 × 10 2 2.00 × 10 3 1.82 × 10 3 7.23 × 10 4 5.06 × 10 3 3.53 × 10 1 1.52 × 10 3 1.42 × 10 3 3.23 × 10 3
SEM1.23 × 10 1 4.63 × 10 2 5.84 × 10 2 2.49 × 10 2 8.17 × 10 2 7.42 × 10 2 2.95 × 10 4 2.07 × 10 3 1.44 × 10 1 6.19 × 10 2 5.78 × 10 2 1.32 × 10 3
Rank175396121124810
Table 7. Statistical results over CEC2017 (F15–F30) with FES = 1000 and 30 independent runs.
Fun | Measurement | OX | CMAES | WOA | MFO | BOA | PSO | OHO | AOA | SCA | GJO | SHO | RSA
F15Mean2.53 × 10 3 4.17 × 10 3 5.86 × 10 3 1.09 × 10 4 1.13 × 10 4 5.92 × 10 3 1.61 × 10 4 1.75 × 10 4 2.93 × 10 3 2.85 × 10 3 3.01 × 10 3 1.13 × 10 4
Std1.61 × 10 3 1.49 × 10 3 2.23 × 10 3 8.91 × 10 3 4.76 × 10 3 3.24 × 10 3 3.03 × 10 3 3.55 × 10 3 1.23 × 10 3 1.21 × 10 3 1.48 × 10 3 5.38 × 10 3
SEM6.57 × 10 2 6.07 × 10 2 9.11 × 10 2 3.64 × 10 3 1.94 × 10 3 1.32 × 10 3 1.24 × 10 3 1.45 × 10 3 5.03 × 10 2 4.93 × 10 2 6.02 × 10 2 2.20 × 10 3
Rank156897111232410
F16Mean1.91 × 10 3 2.06 × 10 3 1.92 × 10 3 1.72 × 10 3 1.91 × 10 3 1.91 × 10 3 2.26 × 10 3 1.97 × 10 3 1.78 × 10 3 1.84 × 10 3 1.78 × 10 3 2.02 × 10 3
Std1.13 × 10 2 1.38 × 10 2 9.80 × 10 1 1.17 × 10 2 1.49 × 10 2 1.63 × 10 2 7.27 × 10 1 1.91 × 10 2 1.02 × 10 2 1.84 × 10 2 1.21 × 10 2 1.03 × 10 2
SEM4.60 × 10 1 5.63 × 10 1 4.00 × 10 1 4.79 × 10 1 6.09 × 10 1 6.66 × 10 1 2.97 × 10 1 7.79 × 10 1 4.15 × 10 1 7.53 × 10 1 4.94 × 10 1 4.20 × 10 1
Rank511816712924310
F17Mean1.81 × 10 3 1.81 × 10 3 1.77 × 10 3 1.76 × 10 3 1.82 × 10 3 1.81 × 10 3 1.81 × 10 3 1.90 × 10 3 1.81 × 10 3 1.76 × 10 3 1.75 × 10 3 1.82 × 10 3
Std7.93 × 10 1 2.06 × 10 1 2.94 × 10 1 2.66 × 10 1 2.00 × 10 1 2.80 × 10 1 1.16 × 10 1 1.09 × 10 2 1.28 × 10 1 1.11 × 10 1 2.12 × 10 1 2.43 × 10 1
SEM3.24 × 10 1 8.41 × 10 0 1.20 × 10 1 1.09 × 10 1 8.16 × 10 0 1.14 × 10 1 4.72 × 10 0 4.45 × 10 1 5.22 × 10 0 4.54 × 10 0 8.67 × 10 0 9.92 × 10 0
Rank574210691283111
F18Mean1.31 × 10 4 2.90 × 10 6 1.62 × 10 4 2.59 × 10 4 4.13 × 10 5 2.34 × 10 4 8.87 × 10 8 1.38 × 10 4 1.26 × 10 5 3.47 × 10 4 1.70 × 10 4 9.04 × 10 6
Std6.16 × 10 3 2.36 × 10 6 1.80 × 10 4 1.42 × 10 4 2.87 × 10 5 1.64 × 10 4 1.02 × 10 9 9.18 × 10 3 1.81 × 10 5 1.25 × 10 4 1.01 × 10 4 1.45 × 10 7
SEM2.52 × 10 3 9.62 × 10 5 7.33 × 10 3 5.79 × 10 3 1.17 × 10 5 6.69 × 10 3 4.15 × 10 8 3.75 × 10 3 7.40 × 10 4 5.11 × 10 3 4.14 × 10 3 5.91 × 10 6
Rank110369512287411
F19Mean2.70 × 10 3 2.10 × 10 5 1.08 × 10 4 2.02 × 10 4 2.14 × 10 4 1.16 × 10 4 2.24 × 10 6 3.44 × 10 4 5.58 × 10 3 6.37 × 10 3 5.10 × 10 3 4.70 × 10 5
Std1.79 × 10 3 4.29 × 10 5 9.10 × 10 3 1.27 × 10 4 2.01 × 10 4 4.80 × 10 3 3.04 × 10 6 2.33 × 10 4 7.78 × 10 3 5.85 × 10 3 4.85 × 10 3 5.08 × 10 5
SEM7.32 × 10 2 1.75 × 10 5 3.72 × 10 3 5.17 × 10 3 8.19 × 10 3 1.96 × 10 3 1.24 × 10 6 9.53 × 10 3 3.18 × 10 3 2.39 × 10 3 1.98 × 10 3 2.07 × 10 5
Rank110578612934211
F20Mean2.10 × 10 3 2.21 × 10 3 2.16 × 10 3 2.12 × 10 3 2.16 × 10 3 2.17 × 10 3 2.26 × 10 3 2.14 × 10 3 2.10 × 10 3 2.16 × 10 3 2.11 × 10 3 2.25 × 10 3
Std6.46 × 10 1 6.77 × 10 1 5.53 × 10 1 5.00 × 10 1 3.16 × 10 1 8.60 × 10 1 1.27 × 10 1 5.39 × 10 1 1.81 × 10 1 5.90 × 10 1 6.14 × 10 1 6.81 × 10 1
SEM2.64 × 10 1 2.76 × 10 1 2.26 × 10 1 2.04 × 10 1 1.29 × 10 1 3.51 × 10 1 5.19 × 10 0 2.20 × 10 1 7.38 × 10 0 2.41 × 10 1 2.51 × 10 1 2.78 × 10 1
Rank110847912526311
F21Mean2.27 × 10 3 2.29 × 10 3 2.32 × 10 3 2.33 × 10 3 2.28 × 10 3 2.32 × 10 3 2.38 × 10 3 2.33 × 10 3 2.29 × 10 3 2.29 × 10 3 2.33 × 10 3 2.31 × 10 3
Std7.36 × 10 1 2.30 × 10 1 5.71 × 10 1 1.01 × 10 1 3.35 × 10 0 9.69 × 10 0 5.73 × 10 1 2.52 × 10 1 5.13 × 10 1 6.34 × 10 1 1.24 × 10 1 6.68 × 10 1
SEM3.01 × 10 1 9.40 × 10 0 2.33 × 10 1 4.12 × 10 0 1.37 × 10 0 3.95 × 10 0 2.34 × 10 1 1.03 × 10 1 2.09 × 10 1 2.59 × 10 1 5.07 × 10 0 2.73 × 10 1
Rank147102812115396
F22Mean2.31 × 10 3 2.68 × 10 3 2.31 × 10 3 2.31 × 10 3 2.35 × 10 3 2.31 × 10 3 3.43 × 10 3 2.77 × 10 3 2.35 × 10 3 2.36 × 10 3 2.35 × 10 3 3.28 × 10 3
Std1.19 × 10 0 8.72 × 10 2 7.09 × 10 0 2.06 × 10 1 2.76 × 10 1 2.71 × 10 0 4.05 × 10 2 2.53 × 10 2 3.46 × 10 1 6.55 × 10 1 6.68 × 10 1 2.90 × 10 2
SEM4.84 × 10 1 3.56 × 10 2 2.89 × 10 0 8.43 × 10 0 1.13 × 10 1 1.11 × 10 0 1.66 × 10 2 1.03 × 10 2 1.41 × 10 1 2.67 × 10 1 2.73 × 10 1 1.18 × 10 2
Rank194372121068511
F23Mean2.64 × 10 3 2.68 × 10 3 2.66 × 10 3 2.64 × 10 3 2.67 × 10 3 2.64 × 10 3 2.87 × 10 3 2.70 × 10 3 2.65 × 10 3 2.64 × 10 3 2.65 × 10 3 2.69 × 10 3
Std1.02 × 10 1 4.26 × 10 0 2.24 × 10 1 6.80 × 10 0 1.12 × 10 1 1.28 × 10 1 8.67 × 10 1 3.35 × 10 1 7.99 × 10 0 7.05 × 10 0 6.13 × 10 0 2.01 × 10 1
SEM4.17 × 10 0 1.74 × 10 0 9.14 × 10 0 2.78 × 10 0 4.58 × 10 0 5.22 × 10 0 3.54 × 10 1 1.37 × 10 1 3.26 × 10 0 2.88 × 10 0 2.50 × 10 0 8.21 × 10 0
Rank197384121162510
F24Mean2.75 × 10 3 2.81 × 10 3 2.79 × 10 3 2.76 × 10 3 2.65 × 10 3 2.75 × 10 3 3.01 × 10 3 2.79 × 10 3 2.76 × 10 3 2.76 × 10 3 2.78 × 10 3 2.85 × 10 3
Std1.24 × 10 2 1.58 × 10 1 2.89 × 10 1 7.41 × 10 0 5.73 × 10 1 6.39 × 10 1 8.09 × 10 1 1.13 × 10 2 9.81 × 10 1 1.02 × 10 1 2.02 × 10 1 2.40 × 10 1
SEM5.08 × 10 1 6.44 × 10 0 1.18 × 10 1 3.02 × 10 0 2.34 × 10 1 2.61 × 10 1 3.30 × 10 1 4.61 × 10 1 4.01 × 10 1 4.17 × 10 0 8.25 × 10 0 9.79 × 10 0
Rank210951312846711
F25Mean2.93 × 10 3 3.14 × 10 3 2.95 × 10 3 2.94 × 10 3 3.45 × 10 3 2.94 × 10 3 3.74 × 10 3 3.08 × 10 3 2.95 × 10 3 2.94 × 10 3 2.93 × 10 3 3.36 × 10 3
Std2.76 × 10 1 1.19 × 10 2 2.04 × 10 1 2.34 × 10 1 1.82 × 10 2 2.67 × 10 1 1.47 × 10 2 1.31 × 10 2 1.16 × 10 1 1.36 × 10 1 1.97 × 10 1 1.29 × 10 2
SEM1.13 × 10 1 4.85 × 10 1 8.31 × 10 0 9.55 × 10 0 7.44 × 10 1 1.09 × 10 1 6.00 × 10 1 5.34 × 10 1 4.75 × 10 0 5.55 × 10 0 8.06 × 10 0 5.28 × 10 1
Rank196411512873210
F26Mean3.19 × 10 3 4.10 × 10 3 3.41 × 10 3 3.02 × 10 3 3.19 × 10 3 3.30 × 10 3 4.35 × 10 3 3.76 × 10 3 3.07 × 10 3 3.07 × 10 3 3.34 × 10 3 4.17 × 10 3
Std5.12 × 10 2 5.46 × 10 2 4.65 × 10 2 5.21 × 10 1 1.39 × 10 2 5.59 × 10 2 3.30 × 10 2 2.06 × 10 2 2.08 × 10 1 1.61 × 10 2 3.96 × 10 2 1.30 × 10 2
SEM2.09 × 10 2 2.23 × 10 2 1.90 × 10 2 2.13 × 10 1 5.67 × 10 1 2.28 × 10 2 1.35 × 10 2 8.40 × 10 1 8.50 × 10 0 6.59 × 10 1 1.62 × 10 2 5.31 × 10 1
Rank410815612932711
F27Mean3.11 × 10 3 3.12 × 10 3 3.15 × 10 3 3.09 × 10 3 3.16 × 10 3 3.15 × 10 3 3.43 × 10 3 3.20 × 10 3 3.11 × 10 3 3.11 × 10 3 3.12 × 10 3 3.18 × 10 3
Std2.14 × 10 1 1.38 × 10 1 6.39 × 10 1 2.11 × 10 0 3.16 × 10 1 4.56 × 10 1 5.39 × 10 1 5.24 × 10 1 3.09 × 10 0 1.11 × 10 1 1.64 × 10 1 8.79 × 10 1
SEM8.74 × 10 0 5.62 × 10 0 2.61 × 10 1 8.60 × 10 1 1.29 × 10 1 1.86 × 10 1 2.20 × 10 1 2.14 × 10 1 1.26 × 10 0 4.53 × 10 0 6.70 × 10 0 3.59 × 10 1
Rank257198121134610
F28Mean3.26 × 10 3 3.46 × 10 3 3.31 × 10 3 3.27 × 10 3 3.76 × 10 3 3.41 × 10 3 3.92 × 10 3 3.60 × 10 3 3.31 × 10 3 3.40 × 10 3 3.39 × 10 3 3.78 × 10 3
Std1.30 × 10 2 6.52 × 10 1 9.94 × 10 1 1.02 × 10 2 2.52 × 10 2 1.16 × 10 2 7.60 × 10 1 2.20 × 10 2 9.43 × 10 1 1.20 × 10 2 1.72 × 10 2 5.63 × 10 1
SEM5.30 × 10 1 2.66 × 10 1 4.06 × 10 1 4.17 × 10 1 1.03 × 10 2 4.75 × 10 1 3.10 × 10 1 8.97 × 10 1 3.85 × 10 1 4.89 × 10 1 7.03 × 10 1 2.30 × 10 1
Rank183210712946511
F29Mean3.25 × 10 3 3.41 × 10 3 3.32 × 10 3 3.21 × 10 3 3.27 × 10 3 3.25 × 10 3 3.70 × 10 3 3.46 × 10 3 3.27 × 10 3 3.18 × 10 3 3.25 × 10 3 3.34 × 10 3
Std5.98 × 10 1 1.42 × 10 2 1.26 × 10 2 3.41 × 10 1 4.61 × 10 1 4.39 × 10 1 1.06 × 10 2 1.63 × 10 2 5.00 × 10 1 2.29 × 10 1 5.35 × 10 1 7.97 × 10 1
SEM2.44 × 10 1 5.78 × 10 1 5.16 × 10 1 1.39 × 10 1 1.88 × 10 1 1.79 × 10 1 4.31 × 10 1 6.66 × 10 1 2.04 × 10 1 9.35 × 10 0 2.18 × 10 1 3.25 × 10 1
Rank310827412116159
F30Mean8.24 × 10 5 8.86 × 10 5 6.26 × 10 5 8.52 × 10 5 4.64 × 10 6 4.35 × 10 5 7.47 × 10 7 8.17 × 10 6 8.66 × 10 5 2.96 × 10 5 5.02 × 10 5 6.49 × 10 6
Std8.70 × 10 5 1.28E-106.02 × 10 5 5.76 × 10 5 2.92 × 10 6 4.90 × 10 5 6.04 × 10 7 1.01 × 10 7 7.96 × 10 5 4.98 × 10 5 2.91 × 10 5 6.90 × 10 6
SEM3.55 × 10 5 5.21E-112.46 × 10 5 2.35 × 10 5 1.19 × 10 6 2.00 × 10 5 2.47 × 10 7 4.13 × 10 6 3.25 × 10 5 2.03 × 10 5 1.19 × 10 5 2.82 × 10 6
Rank584692121171310
Table 8. Comparison of accuracy and time for different kernel functions of SVM.
Algorithm with SVM | Accuracy (%) | Time (in Seconds)
OX with Parameter and Feature Selection | 83.5 | 11.42
RBF with Parameter and Feature Selection | 81.8 | 13.719
Polynomial Kernel with Feature Selection | 78.5 | 18.25
Sigmoid Kernel with Feature Selection | 75.2 | 16.87
Chi-Square Kernel with Feature Selection | 69.8 | 22.45
Hellinger Kernel with Feature Selection | 71.3 | 21.70
RBF without Feature Selection | 48.258 | 14.415
Linear with Feature Selection | 51.951 | 20.478
Gaussian with Feature Selection | 51.467 | 40.147
