Article

MRSO: Balancing Exploration and Exploitation through Modified Rat Swarm Optimization for Global Optimization

by Hemin Sardar Abdulla 1, Azad A. Ameen 1,*, Sarwar Ibrahim Saeed 1, Ismail Asaad Mohammed 2 and Tarik A. Rashid 3,*
1 Department of Computer Science, College of Science, Charmo University, Chamchamal 46023, Iraq
2 Department of Information Technology, Technical College of Informatics, Sulaimani Polytechnic University, Sulaymaniyah P.O. Box 70-236, Iraq
3 Computer Science and Engineering Department, University of Kurdistan Hewler, Erbil 44001, Iraq
* Authors to whom correspondence should be addressed.
Algorithms 2024, 17(9), 423; https://doi.org/10.3390/a17090423
Submission received: 15 August 2024 / Revised: 11 September 2024 / Accepted: 17 September 2024 / Published: 23 September 2024

Abstract:
The rapid advancement of intelligent technology has led to the development of optimization algorithms that leverage natural behaviors to address complex issues. Among these, the Rat Swarm Optimizer (RSO), inspired by rats’ social and behavioral characteristics, has demonstrated potential in various domains, although its convergence precision and exploration capabilities are limited. To address these shortcomings, this study introduces the Modified Rat Swarm Optimizer (MRSO), designed to enhance the balance between exploration and exploitation. The MRSO incorporates unique modifications to improve search efficiency and robustness, making it suitable for challenging engineering problems such as Welded Beam, Pressure Vessel, and Gear Train Design. Extensive testing with classical benchmark functions shows that the MRSO significantly improves performance, avoiding local optima and achieving higher accuracy in six out of nine multimodal functions and in all seven fixed-dimension multimodal functions. In the CEC 2019 benchmarks, the MRSO outperforms the standard RSO in six out of ten functions, demonstrating superior global search capabilities. When applied to engineering design problems, the MRSO consistently delivers better average results than the RSO, proving its effectiveness. Additionally, we compared our approach with eight recent and well-known algorithms using both classical and CEC-2019 benchmarks. The MRSO outperformed each of these algorithms, achieving superior results in six out of 23 classical benchmark functions and in four out of ten CEC-2019 benchmark functions. These results further demonstrate the MRSO’s significant contributions as a reliable and efficient tool for optimization tasks in engineering applications.

1. Introduction

The rapid advancement of intelligent technology has enabled computers to undertake increasingly complex tasks such as computation, decision-making, and analysis, roles traditionally performed by the human brain. This shift has freed up human reasoning resources for more creative activities and facilitated the development of optimization algorithms. These algorithms, a novel branch of artificial intelligence, have progressed beyond well-established subfields like natural computing and heuristic methods [1,2].
Optimization involves finding the best solution or strategy using a combination of modern mathematics, computer science, artificial intelligence, and other multidisciplinary approaches. It provides an accurate mathematical framework to address complex real-world choices, maximize resource utilization, enhance decision-making, and improve system performance. Scientific advancements in a variety of fields drive industry progress and development. However, traditional optimization algorithms have a limited application scope as optimization challenges continue to grow. To tackle this, researchers have started exploring new methods, with metaheuristic algorithms emerging as an effective solution. Metaheuristics are advanced algorithms based on heuristic principles, designed to solve a wide range of complex optimization problems [3,4]. As a result, metaheuristic algorithms have become valuable tools, combining evolutionary principles such as natural selection and inheritance with ideas drawn from natural phenomena and from human and animal behavior. Their adaptability and robustness make them highly effective in addressing diverse real-world optimization challenges. Consequently, the adoption of nature-inspired algorithms has become widespread in engineering and scientific research, underscoring their vital role in contemporary optimization tasks [5,6].
A crucial attribute of metaheuristic algorithms is their ability to balance exploration and exploitation within the search space. Exploitation involves local searches to refine solutions, while exploration encompasses global searches to discover new optima across the entire search landscape. This dual capability ensures a comprehensive exploration of potential solutions, offering a robust population of feasible options. Metaheuristics are versatile and simple to implement, often inspired by natural phenomena, and have successfully addressed practical problems across various fields. Their general-purpose nature means they require minimal structural changes when applied to different themes, though their reliance on random initial solutions often results in approximate rather than exact outcomes. Metaheuristic algorithms are particularly valuable for solving complex real-world problems characterized by numerous local optima and a global optimum that would be computationally infeasible to pinpoint precisely. This is exemplified by the set-covering problem, a significant optimization challenge in computer theory and operations research. Here, the objective is to minimize the cost of covering a set of elements with subsets, and this has applications in route planning, resource allocation, and decision-making. The inherent flexibility and ease of implementation of metaheuristics, combined with their gradient-free problem-solving approach, make them indispensable in efficiently obtaining optimal solutions for such intricate problems [2,7].
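To make the set-covering example concrete, the classic greedy heuristic below is a minimal Python sketch (the function name, data, and costs are illustrative assumptions, not part of this paper's method); it repeatedly picks the subset with the lowest cost per newly covered element:

```python
def greedy_set_cover(universe, subsets, costs):
    """Greedy heuristic for the weighted set-covering problem.

    Assumes the subsets jointly cover the universe; illustrative only --
    metaheuristics tackle the same problem without gradient information.
    """
    uncovered = set(universe)
    chosen, total = [], 0.0
    while uncovered:
        # Cost-effectiveness: cost divided by the number of still-uncovered elements.
        i = min(
            (k for k in range(len(subsets)) if subsets[k] & uncovered),
            key=lambda k: costs[k] / len(subsets[k] & uncovered),
        )
        uncovered -= subsets[i]
        chosen.append(i)
        total += costs[i]
    return chosen, total

# Hypothetical instance: cover {1..5} at minimum total cost.
universe = {1, 2, 3, 4, 5}
subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
costs = [5.0, 10.0, 3.0, 1.0]
chosen, total = greedy_set_cover(universe, subsets, costs)
```

The greedy heuristic is fast but only approximate, which is precisely the kind of gap metaheuristic search is used to close on larger instances.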
The Rat Swarm Optimizer (RSO) is one of the recent metaheuristic algorithms inspired by the natural behaviors of rats, particularly their chasing and attacking strategies [8]. This algorithm has demonstrated success in solving various real-world constrained engineering design problems and performs effectively across diverse search spaces [9,10].
The RSO has demonstrated effectiveness in solving optimization problems; however, it faces several limitations that hinder its performance in complex optimization tasks. One major issue is its tendency to converge too quickly, often before adequately exploring the search space, which leads to suboptimal solutions; this premature convergence is particularly problematic on complex problems. Additionally, the RSO struggles to maintain a proper balance between exploitation (local search) and exploration (global search), which reduces its effectiveness in navigating multimodal landscapes. It is also prone to getting trapped in local optima, especially in fixed-dimension multimodal problems, limiting its ability to reach the global optimum. Furthermore, the RSO's performance declines on high-dimensional tasks, raising concerns about its scalability. Lastly, the absence of comprehensive benchmarking across diverse problems limits the validation of the RSO's reliability and versatility in different scenarios.
To overcome these limitations, this research introduces the Modified Rat Swarm Optimizer (MRSO), designed to enhance the performance and robustness of the original RSO in complex optimization tasks. The MRSO aims to achieve a better balance between exploration (global search) and exploitation (local search), reducing the risk of premature convergence and preventing the algorithm from getting trapped in local optima. By improving the exploration of the search space, the MRSO provides more accurate and reliable solutions, particularly in high-dimensional and multimodal problems. Additionally, the MRSO incorporates mechanisms that enhance scalability and robustness in various environments, making it more effective for a wide range of optimization challenges, including real-world engineering problems. The main objective of the MRSO is to deliver faster, more consistent convergence while improving versatility and efficiency across various optimization scenarios. The following are the main contributions of this modification:
  • Development of MRSO: The proposed algorithm enhances the RSO by introducing mechanisms to better balance exploration and exploitation, thereby improving its overall search capabilities.
  • Application to Engineering Problems: The MRSO is applied to seven real-world constrained engineering problems, demonstrating its effectiveness in addressing these complex challenges.
  • Comprehensive Benchmarking: The MRSO’s performance is rigorously tested against classical benchmark functions, CEC 2019 test functions, and engineering problems, outperforming the RSO and showing competitive results compared to other optimization algorithms.
  • Exploration and Exploitation Analysis: A detailed analysis of the MRSO’s ability to balance exploration and exploitation is presented, highlighting its effectiveness in avoiding local optima and achieving global solutions.
  • Limitations and Future Work: The paper acknowledges the limitations of the MRSO and suggests future research directions, including testing on large-scale real-world problems and incorporating surrogate models to handle computationally expensive tasks.
The structure of this paper is organized as follows: Section 2 provides the theoretical background of the Rat Swarm Optimizer (RSO), offering a detailed explanation of its principles. Section 3 explores the applications of the RSO algorithm and reviews related works, highlighting its effectiveness across various fields. Section 4 presents an in-depth discussion of the proposed Modified Rat Swarm Optimizer (MRSO) algorithm. Section 5 focuses on benchmark testing and performance analysis of the MRSO, comparing it with the standard RSO. Section 6 examines the application of the MRSO to engineering design problems, showcasing its practical utility. Finally, Section 7 draws conclusions from the study, highlights the challenges encountered, and suggests areas for future research efforts.

2. Rat Swarm Optimizer (RSO)

The Rat Swarm Optimizer (RSO) is one of the recent bioinspired, population-based metaheuristic algorithms designed to solve complex optimization problems. Introduced in late 2020, the RSO mimics the following and attacking behaviors of rats in nature [11,12]. The RSO operates by randomly initializing candidate solutions without any prior information about the optimal solution, similar to other population-based techniques [11]. Its simple structure, fast convergence rate, and ease of understanding and implementation set it apart from other metaheuristics [11]. However, like other metaheuristic algorithms, the RSO often faces the issue of getting trapped in local minima, especially when dealing with complex objective functions involving many variables. Researchers have proposed modifications to overcome these weaknesses [11].

2.1. Inspiration

The RSO algorithm is inspired by the social and aggressive behaviors of black and brown rats, which are the two main species of rats [12,13]. Rats are known for their group dynamics as they live and operate in groups, showing a high level of social intelligence. These behaviors include chasing and fighting prey, which are fundamental motivations for the RSO algorithm [12,13]. In these groups, a strong rat often leads, with other rats following and supporting the leader, mimicking the search process for optimal solutions [9].
Rats are medium-sized rodents with long tails, and they show significant differences in size and weight. They live in groups, known as bucks and does, which are territorial and sociable by nature. These groups engage in activities such as grooming, jumping, chasing, and fighting, often displaying highly aggressive behavior under certain conditions [10]. This social and territorial nature of rats, especially their aggressiveness in chasing and fighting prey, provides the basis for the RSO algorithm [10].

2.2. Essential Steps in the RSO Algorithm

The RSO procedure is composed of several essential steps, each of which plays a critical role in the algorithm’s functionality. A detailed explanation of each step will be provided in the following sections to offer a comprehensive understanding of the process:

2.2.1. Chasing the Prey

Rats’ chasing behavior is typically a social activity. The most effective search agent is identified as the rat that knows the prey’s location. The rest of the group adjusts their positions based on the location of this best rat, as described below [8,13]:
P = A · P_i(t) + C · (P_r(t) − P_i(t))    (1)
Here, P_i(t) denotes the position of the i-th rat (solution), with t representing the current iteration number. P_r(t) indicates the position of the best candidate solution found so far. The calculation of A proceeds as follows:
A = R − t · (R / Max_iteration),   where t = 0, 1, 2, …, Max_iteration    (2)
R and C are random values, with R ranging between [1, 5] and C ranging between [0, 2]. These values serve as parameters for the exploration and exploitation mechanisms in the algorithm [8]:
R = rand[1, 5]    (3)
C = rand[0, 2]    (4)
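The chasing update above can be sketched in a few lines of Python (the paper's experiments used MATLAB, so this translation and its names are illustrative assumptions):

```python
import random

def chasing_step(P_i, P_r, t, max_iter, rng=random):
    """Compute the intermediate position P of Eq. (1) for one dimension.

    P_i: position of rat i; P_r: best position found so far;
    t: current iteration in 0..max_iter.
    """
    R = rng.uniform(1, 5)             # Eq. (3): R drawn from [1, 5]
    C = rng.uniform(0, 2)             # Eq. (4): C drawn from [0, 2]
    A = R - t * (R / max_iter)        # Eq. (2): A decays linearly from R to 0
    return A * P_i + C * (P_r - P_i)  # Eq. (1)
```

As t approaches max_iter, A shrinks toward zero, so the A · P_i term fades and the swarm increasingly exploits the neighborhood of the best rat P_r.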

2.2.2. Fighting the Prey

The fighting behavior is represented mathematically in the following way:
P_i(t + 1) = |P_r(t) − P|    (5)
The next position of rat i is denoted P_i(t + 1). The parameters A and C are crucial for balancing the exploration and exploitation mechanisms: a small value of A (e.g., 1) combined with a moderate value of C emphasizes exploitation, while other values shift the focus towards exploration.
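Putting the chasing and fighting steps together, one complete per-dimension RSO update can be sketched as follows (again an illustrative Python sketch, not the authors' code):

```python
import random

def rat_update(P_i, P_r, t, max_iter, rng=random):
    """One full RSO position update for a single dimension:
    chasing the prey (Eq. (1)) followed by fighting the prey (Eq. (5))."""
    R = rng.uniform(1, 5)          # Eq. (3)
    C = rng.uniform(0, 2)          # Eq. (4)
    A = R - t * (R / max_iter)     # Eq. (2): large early (exploration), small late (exploitation)
    P = A * P_i + C * (P_r - P_i)  # chasing the prey
    return abs(P_r - P)            # fighting the prey: next position of rat i
```

The absolute value in the fighting step keeps the new position anchored around the best-known solution P_r, which is what pulls the swarm toward promising regions.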

3. Related Work and Applications of RSO

The RSO has been significantly effective in the coordinated design of power system stabilizers (PSSs) and static VAR compensators (SVCs). The adaptive rat swarm optimization (ARSO) variant incorporates the concept of the opposite number to enhance the initial population and employs opposite or random solutions to avoid local optima. This approach considerably improves global search capabilities. The performance of the ARSO variant has been validated against standard benchmarks and in case studies, proving its superiority in achieving optimal damping and outperforming traditional methods [11].
In the realm of photovoltaic (PV) systems, ARSO combined with pattern search (PS) has shown outstanding performance. This hybrid method leverages the global search ability of ARSO and the local search strength of PS, resulting in high accuracy, reliability, and convergence speeds in extracting parameters from single-diode, double-diode, and PV modules [14]. Similarly, the RSO has been adapted for feature selection (FS) in various domains, including Arabic sentiment analysis and cybersecurity. Enhancements like binary encoding, opposite-based learning strategies, and the integration of local search paradigms from particle swarm optimization (PSO) have been introduced. These modifications improve local exploitation, diversity, and convergence rates, leading to superior performance compared to other FS methods [9,15,16].
The RSO’s versatility extends to healthcare, particularly in diagnosing COVID-19. The hybrid RSO-AlexNet-COVID-19 approach combines the RSO with convolutional neural networks (CNNs) to optimize hyperparameters and enhance diagnostic accuracy. This method achieved 100% classification accuracy for CT images and 95.58% for X-ray images, surpassing other CNN architectures [12]. Additionally, the modified RSO algorithm has been applied to data clustering, enhancing the accuracy and stability of clustering results. By integrating nonlinear convergence factors and reverse initial population strategies, this method addresses the limitations of traditional clustering algorithms like K-means. Moreover, the modified particle swarm optimization rat search algorithm (PSORSA) has improved PV system modeling, showcasing superior accuracy and efficiency in parameter extraction [17,18].
The RSO’s adaptability is also evident in solving the NP-hard Traveling Salesman Problem (TSP). By incorporating decision-making mechanisms and local search heuristics like 2-opt and 3-opt, the hybrid HDRSO algorithm demonstrated robust performance on symmetric TSP instances, achieving competitive results [19,20]. In manufacturing, the RSO has been employed to address flow shop scheduling problems. By mapping rat locations to task-processing sequences, the RSO optimizes execution time and resource use, enhancing production system flexibility, lead times, and quality [21]. Furthermore, the Modified Rat Swarm Optimization with Deep Learning (MRSODL) model combines the RSO with deep belief networks (DBNs) for robust object detection and classification in waste recycling. The ERSO-DSEL model similarly leverages feature selection and ensemble learning to enhance cybersecurity, achieving high accuracy in intrusion detection [15,22].
Lastly, the RSO has been applied to load frequency control (LFC) in power systems, improving controller performance and stability. The RSO-based Proportional–Integral–Derivative (PID) controller has demonstrated superior results compared to traditional methods, reducing frequency errors and the settling time [23].

4. The Proposed Modified Rat Swarm Optimizer (MRSO)

The proposed Modified Rat Swarm Optimizer (MRSO) aims to enhance the original RSO's performance by balancing exploration and exploitation. This balance is achieved by reformulating Equation (2) from the "Chasing the Prey" section, whose linear decay is static and relies on the single random parameter R. In the MRSO, this coefficient is reformulated as follows:
F1 = R − (t − 1) × (R / Max_iteration)    (6)
F2 = 1 − t × (1 / Max_iteration)    (7)
F3 = 2 × rand(0, 1) − 1 − rand(0, 1)    (8)
A_Modified = F1 × F2 × F3,   where t = 0, 1, 2, …, Max_iteration    (9)
Consequently, Equation (1) is updated as follows:
P = A_Modified · P_i(t) + C · (P_r(t) − P_i(t))    (10)
After initializing the parameters A_Modified, C, and R, the candidate solutions are evaluated with the objective function, and the best solution is saved as P_r. The rats' positions are then updated using Equation (5), and the parameters R, A, and A_Modified are updated according to Equations (3), (4), and (6)–(9). If a rat's position exceeds the search space, it is adjusted by reassigning it within the search bounds. Each rat's new position is then evaluated by the objective function; if a solution better than P_r is found, P_r is updated to this new best position. This process repeats until the maximum number of iterations (Max_iteration) is reached, after which the best position identified, P_r, is returned. These modifications improve performance by steering the search towards the optimum of the fitness function. Algorithm 1 and Figure 1 present the MRSO pseudocode and flowchart, respectively.
Algorithm 1: The pseudocode representation of the MRSO algorithm
1:  Initialize the parameters of MRSO: N, d, T_max, A, C, and R
2:  Initialize the MRSO population:
3:      X_{i,j} = X_j^min + (X_j^max − X_j^min) × rand(0, 1),   i = 1, 2, …, N and j = 1, 2, …, d
4:  Calculate f(X_i) for i = 1, 2, …, N            ▷ Fitness evaluation
5:  Select the rat with the best position, X_gbest
6:  t = 1
7:  while t ≤ T_max do
8:      R ← rand[1, 5]
9:      A = R − t × (R / T_max)
10:     C ← rand[0, 2]
11:     for i = 1 : N do
12:         X = A · X_i(t) + C · (X_gbest − X_i(t))            ▷ Chasing the prey
13:         X_i(t + 1) = |X_gbest − X|                         ▷ Fighting the prey
14:         if f(X_i(t + 1)) < f(X_gbest) then
15:             X_gbest = X_i(t + 1)
16:         end if
17:     end for
18:     t = t + 1
19: end while
20: Return the best solution X_gbest
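Algorithm 1, with the modified coefficient of Eqs. (6)–(9) substituted for the static A, can be sketched as a runnable Python translation (the paper's implementation was in MATLAB; the function name, parameter defaults, and the sphere test function below are assumptions for illustration):

```python
import random

def mrso_minimize(f, dim, lo, hi, n_rats=30, max_iter=500, seed=42):
    """Illustrative sketch of Algorithm 1 using A_Modified (Eqs. (6)-(9))."""
    rng = random.Random(seed)
    # Random initialization within [lo, hi] in every dimension (Algorithm 1, line 3).
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_rats)]
    gbest = min(X, key=f)[:]
    for t in range(1, max_iter + 1):
        R = rng.uniform(1, 5)                     # Eq. (3)
        F1 = R - (t - 1) * (R / max_iter)         # Eq. (6)
        F2 = 1 - t * (1 / max_iter)               # Eq. (7)
        F3 = 2 * rng.random() - 1 - rng.random()  # Eq. (8)
        A_mod = F1 * F2 * F3                      # Eq. (9)
        for i in range(n_rats):
            C = rng.uniform(0, 2)                 # Eq. (4)
            new = []
            for j in range(dim):
                P = A_mod * X[i][j] + C * (gbest[j] - X[i][j])  # Eq. (10): chasing
                x = abs(gbest[j] - P)             # Eq. (5): fighting
                new.append(min(max(x, lo), hi))   # keep positions inside the search space
            X[i] = new
            if f(new) < f(gbest):                 # keep the best solution found so far
                gbest = new[:]
    return gbest, f(gbest)

sphere = lambda x: sum(v * v for v in x)  # hypothetical unimodal test function
best, val = mrso_minimize(sphere, dim=5, lo=-10.0, hi=10.0)
```

With a fixed seed the run is reproducible, which mirrors the paper's protocol of averaging 30 independent runs per benchmark.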

5. Benchmark Testing and Performance Analysis of MRSO

In this section, we evaluated the efficiency and reliability of the proposed Modified Rat Swarm Optimizer (MRSO) by testing it against 23 classical benchmark functions and 10 benchmark functions from the CEC 2019 competition. These benchmark functions are commonly used by researchers to validate the performance of new algorithms, providing a standard basis for comparison [24,25].

5.1. Experimental Configuration and Evaluation Methods

The implementation was carried out using MATLAB R2020a on a Windows 11 system. To achieve accurate and reliable results, the initial population was selected randomly. The experiment used a population size of 30, with 500 iterations, and each algorithm was run 30 times.
To evaluate the performance of the MRSO, we utilized the MATLAB code of eight recent and well-known algorithms, including the Sine Cosine Algorithm (SCA) [26], the Mud Ring Algorithm (MRA) [27], the Liver Cancer Algorithm (LCA) [28], the Circle Search Algorithm (CSA) [29], the Tunicate Swarm Algorithm (TSA) [30], the Dingo Optimizer Algorithm (DOA) [31], the Elk Herd Optimizer (EHO) [32], and the White Shark Optimizer (WSO) [33], for comparison. For our evaluations, we utilized two sets of benchmark functions: 23 classical benchmark functions and 10 CEC-2019 benchmark functions. The evaluation of the MRSO involved several key criteria:
  • Average and Standard Deviation: The average and standard deviations were calculated to compare the performance of the standard RSO and several recent metaheuristic algorithms against the proposed MRSO approach.
  • Box and Whisker Plot: A box and whisker plot was used to visually compare the performance of the RSO, recent metaheuristic algorithms, and the proposed MRSO approach.

5.1.1. Evaluating with Averages (Mean)

The average, or mean, is a fundamental statistical measure used to summarize the central tendency of a dataset. In optimization, the average objective function value across multiple independent runs provides a clear indicator of an algorithm’s typical performance, facilitating straightforward comparisons with other algorithms. In minimization problems, lower average values indicate better performance, while in maximization problems, higher average values are desirable [34,35]. This metric confirms the MRSO’s ability to achieve optimal results consistently across different runs.

5.1.2. Assessing Stability with Standard Deviation

Standard deviation (std.) measures the extent of variation or dispersion within a set of values and is crucial for assessing the stability of an optimization algorithm. A lower standard deviation across multiple runs suggests that the algorithm consistently produces similar results, demonstrating both effectiveness in finding good solutions and reliability in maintaining performance. Thus, standard deviation is a vital metric for evaluating the stability and consistency of an algorithm’s performance [34,35].
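The two summary statistics of Sections 5.1.1 and 5.1.2 can be computed directly with Python's standard library (the run values below are invented purely for illustration; the paper's experiments used 30 MATLAB runs per algorithm):

```python
import statistics

# Hypothetical best-fitness values from five independent runs of two algorithms.
rso_runs = [3.2, 3.6, 3.1, 3.9, 3.4]
mrso_runs = [0.8, 0.9, 0.7, 0.85, 0.8]

def summarize(runs):
    """Return (mean, sample standard deviation) for one algorithm's runs.

    For minimization, a lower mean indicates better typical performance,
    and a lower std. dev. indicates more stable behavior across runs."""
    return statistics.mean(runs), statistics.stdev(runs)

rso_avg, rso_std = summarize(rso_runs)
mrso_avg, mrso_std = summarize(mrso_runs)
```

In this toy comparison the second algorithm has both the lower mean and the lower standard deviation, which is exactly the pattern the tables in this section report.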

5.1.3. Comparative Analysis with Wilcoxon and Friedman Methods

Statistical tests are essential for determining the significance and robustness of results in optimization algorithm evaluation. The Wilcoxon Rank-Sum Test, a non-parametric method, compares two independent samples without assuming a normal distribution, making it suitable for performance metrics that do not follow typical distribution patterns. It calculates a p-value to indicate whether the differences in performance are statistically significant, with a p-value below 0.05 confirming the superiority of one algorithm over another [35,36].
The Friedman Mean Rank Method extends this comparative analysis to more than two algorithms across multiple datasets. By ranking the algorithms for each dataset and calculating average ranks, this method identifies significant differences in effectiveness, with lower ranks indicating better performance. Together, these tests provide a comprehensive framework for validating the performance of optimization algorithms, ensuring that observed differences are statistically meaningful and reliable across various datasets [36,37].
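A minimal sketch of the Wilcoxon Rank-Sum Test using only the standard library is shown below; it applies the usual normal approximation (as SciPy's `scipy.stats.ranksums` does), and the function name and sample data are illustrative assumptions:

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation."""
    combined = sorted((v, g) for g, sample in ((0, a), (1, b)) for v in sample)
    vals = [v for v, _ in combined]
    ranks = [0.0] * len(vals)
    i = 0
    while i < len(vals):
        j = i
        while j + 1 < len(vals) and vals[j + 1] == vals[i]:
            j += 1
        for k in range(i, j + 1):   # tied values share their average rank
            ranks[k] = (i + j) / 2 + 1
        i = j + 1
    W = sum(r for r, (_, g) in zip(ranks, combined) if g == 0)
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2                      # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # std. dev. of W under H0
    z = (W - mu) / sigma
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

A p-value below 0.05 signals a statistically significant difference between the two samples. The Friedman mean rank follows by ranking the algorithms on each benchmark function and averaging the ranks; SciPy offers `scipy.stats.friedmanchisquare` for the associated significance test.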

5.2. Comparative Study of MRSO and RSO Using Standard Benchmarks

The standard benchmark functions, a set of 23 widely studied functions, are commonly used to evaluate the performance of optimization algorithms. These functions are divided into three categories based on their characteristics: unimodal, multimodal, and fixed-dimension multimodal functions [24].

5.2.1. Unimodal Functions (F1–F7)

Unimodal functions, which possess only one global optimum and no local optimum, provide an excellent means to assess the exploitation capability of optimization algorithms. By analyzing the average and standard deviation (std.) metrics, we can evaluate the precision and stability of both the MRSO and the RSO in finding optimal solutions within smooth search landscapes. As shown in Table 1, the MRSO demonstrates improvements in two out of seven unimodal functions (F6 and F7) over the standard RSO, indicating better exploitation capabilities.
In terms of average performance, the MRSO outperforms the RSO in two out of the seven unimodal functions. For example, in function F6, the MRSO achieves an average of 8.160 × 10−1, significantly better than the RSO's 3.415, indicating the MRSO's improved exploitation capability. In F7, the MRSO's average is 1.981 × 10−4, which is also lower and thus better than the RSO's 4.861 × 10−4, highlighting the MRSO's precision in finding optimal solutions. For other functions like F1, although the MRSO has an average of 1.12 × 10−6, the RSO's average of 1.537 × 10−256 is much closer to zero, indicating better performance in this case. Similarly, for functions F2 and F4, the RSO outperforms the MRSO with much smaller averages. Overall, the MRSO shows superior average performance in two of the seven unimodal functions.
When considering the standard deviation, which indicates the consistency of the algorithm, the MRSO outperforms the RSO in three out of the seven functions. In F6, the MRSO demonstrates greater stability with a standard deviation of 4.033 × 10−1 compared to the RSO's 4.748 × 10−1. Similarly, in F7, the MRSO has a lower standard deviation (1.438 × 10−4) than the RSO (5.636 × 10−4), showing that the MRSO produces more reliable results. In F5, the MRSO also has better stability with a standard deviation of 1.323 × 10−1, while the RSO's is higher at 2.635 × 10−1. In other functions, such as F1, the RSO shows perfect consistency with a standard deviation of 0, while the MRSO shows some variance. Thus, the MRSO provides more consistent results in three of the seven unimodal functions.

5.2.2. Multimodal Functions (F8–F16)

Multimodal functions, with their multiple local optima, are essential for testing an algorithm's exploration capability. Based on the results in Table 1 (and further detailed in Table A2 in Appendix A), the MRSO outperforms the RSO in six out of nine multimodal functions, indicating its superior ability to explore complex search spaces. For example, in F8, the MRSO achieves a better average (−8.613 × 10+3) compared to the RSO's −5.709 × 10+3, demonstrating its effectiveness in avoiding local optima. Similarly, the MRSO outperforms the RSO in F12 with an average of 5.646 × 10−2 versus the RSO's 3.366 × 10−1, and in F14 with an average of 1.395 compared to the RSO's 2.646. The MRSO also shows improved performance in F10, F13, and F15, further reinforcing its strength in exploration. However, in F9 and F11, both algorithms achieve perfect averages of 0, showing equal performance. Overall, the MRSO demonstrates superior exploration across most of the multimodal functions.
Regarding the standard deviation, the MRSO also exhibits better consistency in five out of the nine functions. For instance, in F12, the MRSO’s lower standard deviation (4.465 × 10−2) compared to the RSO’s 1.055 × 10−1 indicates more stable outcomes across multiple runs. In F14, the MRSO shows improved consistency with a standard deviation of 8.072 × 10−1 versus the RSO’s 1.787. Similarly, the MRSO performs better in F15 with a lower standard deviation (4.068 × 10−4) than the RSO (6.153 × 10−4), confirming its reliability. Although the RSO shows better consistency in F8 with a lower standard deviation (1.073 × 10+3) than the MRSO (1.835 × 10+3), the MRSO’s superior consistency in other functions solidifies its advantage. These comparisons highlight the MRSO’s overall improved stability and accuracy in solving multimodal functions.

5.2.3. Fixed-Dimension Multimodal Functions (F17–F23)

Fixed-dimension multimodal functions, characterized by multiple local optima, are crucial for testing an algorithm’s ability to avoid getting trapped in local solutions while searching for the global optimum. Based on the results presented in Table 1 (and further detailed in Table A3 in Appendix A), the MRSO outperforms the RSO in five out of the seven fixed-dimension multimodal functions, as evidenced by lower average values in these cases. For example, in F19, the MRSO achieves an average of −3.856, significantly better than RSO’s −3.426, demonstrating superior performance in navigating complex landscapes. Similarly, the MRSO outperforms the RSO in F20 with an average of −2.775 compared to the RSO’s −1.723, and in F22, the MRSO performs better with an average of −3.719 against the RSO’s −1.035, indicating its improved exploratory capacity. The MRSO also shows better results in F21 and F23, further confirming its strength in solving fixed-dimension multimodal functions. In contrast, the RSO slightly outperforms the MRSO in F17, with an average of 3.990 × 10−1 compared to the MRSO’s 4.076 × 10−1, showcasing the RSO’s advantage in this case. Overall, the MRSO demonstrates improved performance across most fixed-dimension multimodal functions.
In terms of standard deviation, the MRSO exhibits more consistent results in four out of the seven functions, further demonstrating its reliability. For instance, in F19, the MRSO has a much lower standard deviation (1.807 × 10−3) compared to the RSO's 2.942 × 10−1, indicating better stability across multiple runs. In F20, the two algorithms are closely matched, with standard deviations of 3.885 × 10−1 for the MRSO and 3.772 × 10−1 for the RSO. The MRSO is less consistent in some cases, such as F21, where its standard deviation (2.536) exceeds the RSO's 3.728 × 10−1, and F22, where it has a higher standard deviation (2.774) than the RSO (6.255 × 10−1); nevertheless, the MRSO overall provides more stable outcomes across most of the fixed-dimension multimodal functions.

5.3. Performance Comparison of MRSO and RSO with CEC 2019 Benchmark Functions

The CEC-C06 2019 benchmark test functions comprise ten mathematical functions specifically created for the IEEE Congress on Evolutionary Computation (CEC) 2019 competition. These functions are used to evaluate the performance of optimization algorithms across a range of optimization challenges. They incorporate features such as rotation, scaling, and shifting. Widely recognized in the fields of optimization and evolutionary computation, these benchmarks are essential for comparing existing algorithms and assessing the effectiveness of newly developed ones [35]. The MRSO algorithm was further evaluated using the CEC-C06 2019 benchmark functions, which provide a rigorous set of challenges for optimization algorithms.
In terms of average performance, as presented in Table 2 and Figure 2, the MRSO outperforms the RSO in six out of the ten CEC 2019 benchmark functions, specifically in functions F2, F3, F6, F7, F9, and F10. For example, in F2, the MRSO achieves an average of 1.835 × 10+1, which is lower and better than the RSO's 1.848 × 10+1, indicating the MRSO's superior precision in solving this optimization problem. Similarly, in F6, the MRSO has an average of 1.095 × 10+1 compared to the RSO's 1.165 × 10+1, demonstrating better convergence to the optimal solution. The MRSO also outperforms the RSO in F9, where its average (4.967 × 10+2) is significantly lower than the RSO's 5.866 × 10+2, confirming its ability to handle complex landscapes more effectively. Functions like F7 and F10 further show the MRSO's dominance with better averages than the RSO, while F1, F4, F5, and F8 exhibit similar results between the two algorithms. Overall, the MRSO's lower averages in six functions highlight its improved search capability and precision in handling diverse optimization challenges compared to the standard RSO.
Regarding standard deviation, the MRSO demonstrates more consistent performance in five out of the ten functions, as seen in Table 2 and Figure 2. For example, in F3, the MRSO shows a significantly lower standard deviation (1.337 × 10−6) compared to the RSO's 1.828 × 10−4, indicating more stable results across multiple runs. In F9 and F7, by contrast, the MRSO's standard deviations (1.493 × 10+2 and 2.291 × 10+2) are slightly higher than the RSO's (1.362 × 10+2 and 2.154 × 10+2), even though the MRSO achieves the better averages in both functions. Similarly, in F6, the RSO shows slightly more consistent results with a lower standard deviation (8.597 × 10−1) compared to the MRSO's (1.011). While the RSO is marginally more stable in these cases, the MRSO's overall consistency in five functions reinforces its robustness and reliability in delivering stable optimization results across a range of problems.
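The per-function averages and standard deviations reported here follow the usual protocol of repeating each algorithm over many independent runs and summarizing the best objective value found in each run. A minimal sketch of such a harness, with a toy random-search optimizer standing in for the MRSO (whose update rules are not reproduced in this sketch), might look like:

```python
import random
import statistics

def sphere(x):
    """Unimodal benchmark (classical F1): f(x) = sum(x_i^2), minimum 0."""
    return sum(v * v for v in x)

def random_search(f, dim, bounds, iters=2000, rng=None):
    """Toy stand-in for an optimizer: sample uniformly, keep the best value."""
    rng = rng or random.Random()
    lo, hi = bounds
    best = float("inf")
    for _ in range(iters):
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        best = min(best, f(x))
    return best

def benchmark(optimizer, f, dim, bounds, runs=30, seed=0):
    """Repeat independent seeded runs and report mean and standard deviation."""
    results = [optimizer(f, dim, bounds, rng=random.Random(seed + r))
               for r in range(runs)]
    return statistics.mean(results), statistics.stdev(results)

mean, std = benchmark(random_search, sphere, dim=5, bounds=(-100, 100))
print(f"avg = {mean:.3e}, std = {std:.3e}")
```

Seeding each run separately keeps the experiment reproducible while still giving 30 independent samples for the mean/std columns of the tables.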

5.4. Statistical Analysis

A comparison between the standard RSO algorithm and our modified version, the MRSO, using the classical benchmark test functions reveals notable differences in performance, as depicted in Table 1. For instance, functions F3 and F6 exhibit p-values that are effectively zero (reported as 0 and 1.099 × 10−30, respectively), indicating highly significant improvements with the MRSO over the RSO. Similarly, functions F7, F8, and F12 show p-values of 8.792 × 10−3, 4.498 × 10−10, and 2.127 × 10−19, respectively, suggesting that the MRSO outperforms the RSO with high statistical significance. These results underscore the MRSO's enhanced capability to solve these functions more effectively than the standard RSO.
On the other hand, several functions show p-values above 0.05, indicating no significant difference between the MRSO and the RSO. Functions F1, F2, F4, F5, and F10 have p-values of 2.030 × 10−1, 2.716 × 10−1, 3.215 × 10−1, 5.945 × 10−1, and 1.651 × 10−1, respectively. These results suggest that for these functions, the MRSO’s modifications did not yield statistically significant improvements over the RSO. This indicates that while the MRSO shows promising results in many cases, its enhancements might not always provide a significant advantage, particularly for certain benchmark functions.
When examining the CEC 2019 benchmark test functions, several statistically significant improvements were also observed with the MRSO. Functions F2, F3, F6, F7, F9, and F10 exhibit p-values of 3.797 × 10−4, 2.256 × 10−2, 4.905 × 10−3, 2.915 × 10−3, 1.791 × 10−2, and 5.316 × 10−6, respectively, indicating significant improvements by the MRSO over the RSO. These values demonstrate that the performance differences for these functions are highly unlikely to be due to random variation, as illustrated in Table 2. Conversely, the p-values for functions F1 (1.053 × 10−1), F4 (6.275 × 10−1), F5 (6.076 × 10−1), and F8 (9.215 × 10−1) suggest that the differences in performance between the MRSO and the RSO for these functions are not statistically significant. This indicates that for these specific functions, the modifications in the MRSO did not result in substantial performance improvements compared to the standard RSO.
Overall, the significant p-values for the majority of the functions tested confirm that the MRSO generally performs better than the RSO, particularly in functions where statistical significance was achieved. These results highlight the effectiveness of the modifications introduced in the MRSO, leading to improved optimization capabilities and better performance across a variety of benchmark test functions.
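The p-values above come from pairwise significance tests on the per-run results. Assuming a Wilcoxon rank-sum test, which is a common choice in metaheuristics studies (the text does not name the specific test), a stdlib-only sketch using the large-sample normal approximation could be:

```python
import math

def rank_sum_p_value(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation
    (reasonable for ~20+ runs per algorithm; ties get average ranks)."""
    combined = sorted((v, 0 if i < len(a) else 1)
                      for i, v in enumerate(list(a) + list(b)))
    # assign average ranks to tied values
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[k] = avg_rank
        i = j + 1
    n1, n2 = len(a), len(b)
    r1 = sum(r for r, (_, grp) in zip(ranks, combined) if grp == 0)
    mu = n1 * (n1 + n2 + 1) / 2                      # expected rank sum
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # its standard deviation
    z = (r1 - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))          # two-sided p-value

# clearly separated samples give a small p; identical samples give p near 1
print(rank_sum_p_value([1, 2, 3, 4, 5], [100, 101, 102, 103, 104]))
```

With 30 runs per algorithm, as used throughout the paper, the normal approximation is generally adequate; an exact test would be preferable only for very small samples.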

5.5. Evaluating the MRSO Algorithm alongside Metaheuristic Methods with Classical Benchmark Functions

By using 23 classical benchmark functions, we evaluated the performance of the MRSO algorithm in comparison with eight new metaheuristic algorithms: the Sine Cosine Algorithm (SCA) [26,38], the Mud Ring Algorithm (MRA) [27], the Liver Cancer Algorithm (LCA) [28], the Circle Search Algorithm (CSA) [29], the Tunicate Swarm Algorithm (TSA) [30], the Dingo Optimizer Algorithm (DOA) [31], the Elk Herd Optimizer (EHO) [32], and the White Shark Optimizer (WSO) [33].
Based on the analysis of unimodal functions (F1–F7) from Table 3, the MRSO outperforms the other eight metaheuristic algorithms in two out of the seven functions. For instance, in F7, the MRSO achieves an average of 1.98 × 10−4, which is significantly lower and better than the SCA's 2.66 × 10−3, although the TSA attains a still lower average of 7.43 × 10−5. This highlights the MRSO's strong exploitation capability in certain cases. Additionally, the MRSO performs competitively in F6 with an average of 8.16 × 10−1, well below the DOA's 5.58, though the SCA records a lower average of 4.34 × 10−1. However, the MRSO does not perform as well in functions such as F1 and F5, where algorithms like the MRA achieve perfect averages (0) across multiple functions, indicating that the MRSO needs further refinement for simpler unimodal tasks.
When considering the standard deviation metrics, which reflect the consistency of an algorithm across iterations, the MRSO outperforms the eight algorithms in three out of seven functions. For example, in F7, the MRSO has a lower standard deviation (1.44 × 10−4) compared to the DOA’s 2.80 × 10−4 and the SCA’s 2.81 × 10−3, demonstrating the MRSO’s more reliable performance. In F6, the MRSO’s standard deviation of 4.03 × 10−1 indicates stable results, which is comparable with the TSA but better than the DOA, whose performance fluctuates with a higher standard deviation (1.09). However, the MRA consistently outperforms the MRSO in terms of stability, achieving zero deviations across multiple functions such as F1, F2, and F3, indicating near-perfect consistency in these cases.
In terms of statistical significance, represented by p-values, the MRSO shows a significant advantage in two functions. For instance, in F7, the MRSO's performance is statistically significant with a p-value of 1.18 × 10−5 when compared to algorithms like the SCA and the DOA, highlighting a meaningful difference in performance. In F6, the MRSO's p-value of 9.06 × 10−6 suggests that it performs significantly better than other algorithms like the TSA. However, in functions like F5, the p-value of 9.93 × 10−1 indicates that the MRSO and the competing algorithms show similar performances with no statistically significant differences. Thus, while the MRSO shows strong performance in some areas, the p-value analysis suggests that there is room for improvement in other functions to achieve consistently significant results. Table 3 summarizes these comparisons, and the results highlight the specific areas where the MRSO surpasses other metaheuristic methods, as well as the cases where it needs further optimization.
In terms of average performance, as shown in Table 4, the MRSO outperforms the eight metaheuristic algorithms in five out of the nine multimodal functions (F8–F16). For example, in F9, F11, and F16, the MRSO achieves the best possible average values of 0, which are consistent with the global minima for these functions. In F8, the MRSO demonstrates competitive performance with an average of −8.61 × 10+3, which surpasses most algorithms except the CSA. Similarly, in F15, the MRSO delivers superior results with an average of 8.07 × 10−4, showing its capability to avoid local optima and converge effectively. However, the MRSO does not perform as well in functions like F14, where it scores an average of 1.39, slightly underperforming compared to the CSA and other algorithms. Overall, the MRSO proves to be highly competitive in most multimodal functions.
Regarding standard deviation, the MRSO exhibits stable and consistent performance across several functions, outperforming other algorithms in four of the nine multimodal functions (F8–F16). For instance, in F9, F11, and F16, the MRSO achieves a standard deviation of 0, indicating perfect consistency across runs, outperforming algorithms such as the DOA, the SCA, and the TSA. In F12, the MRSO shows better stability than the TSA, with a standard deviation of 4.46 × 10−2 against the TSA's 3.81 × 10−1, although the SCA achieves a slightly lower value of 3.83 × 10−2. However, in F14, the MRSO shows slightly higher variation with a standard deviation of 8.07 × 10−1, indicating that it may struggle with consistency in this function compared to some other algorithms like the CSA and the DOA. Despite these minor fluctuations, the MRSO remains one of the more stable performers in this evaluation.
In terms of statistical significance, based on the p-values in Table 4, the MRSO demonstrates statistically significant performance in four out of nine functions. For example, in F11 and F16, the MRSO's p-values (4.55 × 10−3 and 9.85 × 10−5, respectively) demonstrate its significant superiority over many of the compared algorithms. In F9, the p-value of 7.76 × 10−2 lies just above the 0.05 threshold, indicating that the MRSO's results are statistically indistinguishable from those of the best-performing competitors. However, in F14, the higher p-value of 3.77 × 10−1 means the comparison is less conclusive, and in F13, the extremely low p-value of 2.80 × 10−68 marks a significant difference that does not favor the MRSO. Nevertheless, the MRSO consistently shows significant results in the majority of the tested functions, establishing its competitiveness. Table 4 illustrates that the MRSO is a strong performer across multimodal functions in terms of average values, stability (standard deviation), and statistical significance (p-values), outperforming the eight metaheuristic algorithms in several key areas.
In terms of average performance, as shown in Table 5, the MRSO outperforms the eight metaheuristic algorithms in three out of the seven fixed-dimension multimodal functions (F17–F23). For example, in F17, the MRSO achieves the exact global minimum with an average value of 3.99 × 10−1, which ties with the TSA in terms of best performance. Similarly, the MRSO reaches the global minimum in F18 with an average of 3.00, outperforming most other algorithms, except for the SCA and the EHO, which show comparable results. However, the MRSO lags behind in functions like F21, where it achieves an average of −2.63, significantly worse than the CSA and the MRA, which achieve much better results. Overall, the MRSO demonstrates solid performance in three functions but leaves room for improvement in the remaining four.
When analyzing the standard deviation, which indicates the consistency of the algorithm, the MRSO performs best in four out of seven functions. For instance, in F18, the MRSO achieves a standard deviation of 4.03 × 10−6, highlighting its excellent stability and consistency in this function. In F17 and F19, the MRSO also shows relatively low standard deviations (3.30 × 10−3 and 1.81 × 10−3, respectively), demonstrating that the MRSO consistently performs well across different runs. However, the MRSO exhibits higher variability in functions such as F21 and F22, with standard deviations of 2.54 and 2.77, respectively, showing that its performance fluctuates more significantly in these cases compared to algorithms like the CSA and the MRA, which demonstrate greater stability.
Regarding statistical significance, the MRSO shows significant results in two out of the seven functions. For example, in F18, the MRSO has a p-value of 3.36 × 10−4, indicating that its performance is statistically significant compared to most of the other algorithms, including the MRA and the WSO. In F19, the MRSO’s p-value of 8.21 × 10−3 confirms that it performs significantly better than several algorithms, including the SCA and the DOA. However, in functions like F20 and F21, the MRSO does not achieve statistical significance, with p-values of 1.05 × 10−1 and 6.44 × 10−1, respectively, suggesting that its performance is less conclusive in these cases. Overall, the MRSO demonstrates statistically significant improvements in two functions, while maintaining competitive performance across the rest. Table 5 highlights the MRSO’s strengths in terms of average performance, stability, and statistical significance in the fixed-dimension multimodal functions. While it excels in several key areas, there are opportunities for further refinement in functions where its performance falls behind other metaheuristic algorithms.
To rank the algorithms, we used the Friedman mean ranking score [39], which measures how well each algorithm performed across all 23 classical benchmark functions. A lower score means better performance. As shown in Table 6, the MRSO received a ranking score of 10.2, placing it third among the nine algorithms. The MRA (7.5) and the CSA (8.1) performed better than the MRSO, taking the top spots. The DOA and the TSA followed closely with scores of 10.6 and 11.3, respectively. The EHO, the SCA, and the WSO ranked lower with scores of 12.4, 12.8, and 13.2, while the LCA had the lowest performance with a score of 15.1. These results show that while the MRSO did well in some benchmarks, especially in multimodal and fixed-dimension functions, the MRA and the CSA were stronger overall across all the tested functions.
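The Friedman mean ranking used above can be computed by ranking the algorithms on each benchmark function (lower result = better rank, with ties sharing their average rank) and then averaging each algorithm's ranks across all functions. A small illustrative sketch, using hypothetical scores for three algorithms on four functions:

```python
def friedman_mean_ranks(scores):
    """scores[alg] = list of per-function results (lower is better).
    Returns each algorithm's mean rank across all functions."""
    algs = list(scores)
    n_funcs = len(next(iter(scores.values())))
    totals = {alg: 0.0 for alg in algs}
    for f in range(n_funcs):
        # order algorithms by their result on function f
        column = sorted(algs, key=lambda alg: scores[alg][f])
        i = 0
        while i < len(column):
            j = i
            # group tied results and give them their average rank
            while (j + 1 < len(column)
                   and scores[column[j + 1]][f] == scores[column[i]][f]):
                j += 1
            avg_rank = (i + j) / 2 + 1
            for k in range(i, j + 1):
                totals[column[k]] += avg_rank
            i = j + 1
    return {alg: totals[alg] / n_funcs for alg in algs}

# hypothetical results of three algorithms on four functions
ranks = friedman_mean_ranks({
    "A": [0.1, 0.2, 0.3, 0.1],
    "B": [0.2, 0.1, 0.4, 0.3],
    "C": [0.3, 0.3, 0.5, 0.2],
})
print(ranks)
```

Here algorithm "A" wins three of the four functions and therefore receives the lowest (best) mean rank.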

5.6. The MRSO Algorithm and Metaheuristic Approaches: Insights from the CEC-C06 2019 Benchmarking

As seen in Table 7, the MRSO outperformed eight metaheuristic algorithms in five out of the ten CEC-C06 2019 benchmark functions, particularly in functions F2, F3, F6, F7, and F10. For instance, the MRSO achieved an average of 1.83 × 10+1 in F2, which was better than most other algorithms. In F6, the MRSO’s average of 1.09 × 10+1 also surpassed all other algorithms, demonstrating its ability to handle complex optimization problems effectively. However, the MRSO did not perform as well in F1 and F5, where its average was higher than some of the competing algorithms, indicating that there is room for improvement in certain benchmarks.
In terms of standard deviation, the MRSO demonstrated consistency in its performance across multiple functions. For example, in F2 and F3, the MRSO had very low standard deviations of 7.23 × 10−3 and 1.34 × 10−6, respectively, indicating reliable and stable performance. The MRSO also performed consistently in F6 with a standard deviation of 1.01, which was lower than most other algorithms. However, in functions like F1, the MRSO had a higher standard deviation of 3.20 × 10+5, showing greater variability in its results compared to other algorithms, which suggests that further refinement may be necessary to improve its stability across all benchmarks.
The p-values, as shown in Table 7, further highlight the MRSO’s statistical significance in several functions. In F2, the MRSO’s p-value of 1.95 × 10−10 indicates a statistically significant improvement over competing algorithms, confirming the robustness of its performance. Similarly, in F6, the p-value of 3.06 × 10−6 demonstrates that the MRSO’s results are not due to random chance and are significantly better than other algorithms. On the other hand, in functions like F5, the p-value of 5.34 × 10−25 shows that other algorithms, like the WSO, performed better than the MRSO. Despite this, the MRSO consistently showed significant results in key functions, reinforcing its competitiveness. Table 7 highlights the MRSO’s strengths in handling diverse optimization problems, showing strong performance in several key benchmarks against eight other metaheuristic algorithms, with statistically significant improvements in many areas.
To systematically rank the algorithms, we again used the Friedman mean ranking score to evaluate the overall performance of each algorithm across all ten CEC-C06 2019 benchmark functions. As shown in Table 8, the MRSO achieved the best ranking score of 3, placing it at the top. The WSO followed closely with a score of 3.1, the SCA came next with 3.5, and the TSA and the DOA both scored 4.4. On the other hand, the MRA, the LCA, and the CSA ranked lower, with scores of 7.2, 7.5, and 7.9, respectively, showing that these algorithms were less effective overall compared to the MRSO and the other top-ranked algorithms. These rankings highlight the MRSO's strong capability in handling the CEC-C06 2019 benchmark functions, outperforming all of the competing algorithms in overall rank.
Comparing the outcomes of Section 5.5 and Section 5.6, our proposed MRSO algorithm fares better against its competitors on the ten CEC-C06 2019 benchmark functions than on the classical benchmark functions. This suggests that the MRSO is particularly effective at handling complex, multidimensional optimization problems, demonstrating its strength in addressing more challenging tasks compared to other metaheuristic algorithms. These results confirm the algorithm's potential for broader applications in solving difficult optimization challenges.

6. Utilizing MRSO for Solving Engineering Design Issues

This section explores the application of the Modified Rat Swarm Optimizer (MRSO) for tackling various engineering design problems. It provides a theoretical overview of seven real-world constrained engineering design challenges and compares the performance of the MRSO with the original Rat Swarm Optimizer (RSO). Through this comparison, the effectiveness and improvements offered by the MRSO in solving these complex design issues are highlighted.

6.1. Overview of Constrained Engineering Design Problems

In this subsection, we present a theoretical description of seven significant engineering design problems that are commonly encountered in practice. These problems are Pressure Vessel Design, Tension/Compression Spring Design, Three Bar Truss Design, Gear Train Design, Cantilever Beam Design, Welded Beam Design, and Tire Design. Each problem is characterized by its unique set of constraints and objectives, demonstrating the complexity and diversity of engineering optimization tasks.

6.1.1. Pressure Vessel Design

Designing a pressure vessel, as proposed by Kannan and Kramer (1994) [40], aims to minimize the overall cost, which includes material, shaping, and welding. The vessel, shown in Figure 3, consists of a cylindrical body with hemispherical heads at both ends. The problem involves four design variables:
  • $S_t$ = shell thickness
  • $H_t$ = head thickness
  • $I_r$ = inner radius
  • $L_c$ = length of the cylindrical section
Here, $I_r$ and $L_c$ are continuous variables, while $S_t$ and $H_t$ are discrete values that are multiples of 0.0625 inches.
Consider the following:
$$P = (P_1, P_2, P_3, P_4) = (S_t, H_t, I_r, L_c)$$
The objective is to minimize the cost function:
$$f(P) = 0.6224\,P_1 P_3 P_4 + 1.7781\,P_2 P_3^2 + 3.1661\,P_1^2 P_4 + 19.84\,P_1^2 P_3$$
subject to the following constraints:
$$\begin{aligned} S_1(P) &= -P_1 + 0.0193\,P_3 \le 0 \\ S_2(P) &= -P_2 + 0.00954\,P_3 \le 0 \\ S_3(P) &= -\pi P_3^2 P_4 - \tfrac{4}{3}\pi P_3^3 + 1{,}296{,}000 \le 0 \\ S_4(P) &= P_4 - 240 \le 0 \end{aligned}$$
with variable ranges:
$$0 \le P_1 \le 99, \quad 0 \le P_2 \le 99, \quad 10 \le P_3 \le 200, \quad 10 \le P_4 \le 200$$
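For illustration, the pressure vessel model as commonly formulated in the literature can be transcribed directly, with the constraints folded into a static penalty term (one common way to hand constrained problems to a metaheuristic; the paper does not specify its constraint-handling scheme):

```python
import math

def pressure_vessel_cost(P):
    """Material, shaping, and welding cost of the vessel."""
    P1, P2, P3, P4 = P  # shell thickness, head thickness, inner radius, length
    return (0.6224 * P1 * P3 * P4 + 1.7781 * P2 * P3**2
            + 3.1661 * P1**2 * P4 + 19.84 * P1**2 * P3)

def pressure_vessel_constraints(P):
    """Each entry must be <= 0 for a feasible design."""
    P1, P2, P3, P4 = P
    return [
        -P1 + 0.0193 * P3,                                            # S1
        -P2 + 0.00954 * P3,                                           # S2
        -math.pi * P3**2 * P4 - (4 / 3) * math.pi * P3**3 + 1_296_000,  # S3
        P4 - 240,                                                     # S4
    ]

def penalized_cost(P, penalty=1e6):
    """Objective plus a static penalty for each violated constraint."""
    violation = sum(max(0.0, g) for g in pressure_vessel_constraints(P))
    return pressure_vessel_cost(P) + penalty * violation

# a feasible design near the well-known optimum (values for illustration only)
x = (0.8125, 0.4375, 42.09, 177.0)
print(penalized_cost(x))
```

For a feasible design the penalty term vanishes and the penalized value equals the raw cost, so the metaheuristic is steered toward the feasible region without any special-case logic.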

6.1.2. Tension/Compression Spring Design (TCSD)

The TCSD problem, shown in Figure 4, seeks to minimize the volume of a coil spring under constant load. It involves three design variables [41,42]:
Consider the following:
$$P = (P_1, P_2, P_3) = (\text{wire diameter},\ \text{winding diameter},\ \text{number of active coils})$$
The objective is to minimize the cost function:
$$f(P) = (P_3 + 2)\,P_2 P_1^2$$
Subject to the following:
$$\begin{aligned} S_1(P) &= 1 - \frac{P_2^3 P_3}{71785\,P_1^4} \le 0 \\ S_2(P) &= \frac{4P_2^2 - P_1 P_2}{12566\,(P_2 P_1^3 - P_1^4)} + \frac{1}{5108\,P_1^2} - 1 \le 0 \\ S_3(P) &= 1 - \frac{140.45\,P_1}{P_2^2 P_3} \le 0 \\ S_4(P) &= \frac{P_1 + P_2}{1.5} - 1 \le 0 \end{aligned}$$
with variable bounds:
$$0.05 \le P_1 \le 2.00, \quad 0.25 \le P_2 \le 1.30, \quad 2.00 \le P_3 \le 15.00$$
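A direct transcription of this model, using the constraint forms commonly stated in the literature for the TCSD problem, might be:

```python
def spring_volume(P):
    """Objective: (N + 2) * D * d^2, with P = (d, D, N)."""
    P1, P2, P3 = P
    return (P3 + 2) * P2 * P1**2

def spring_constraints(P):
    """Shear stress, surge frequency, and geometry limits; each <= 0."""
    P1, P2, P3 = P
    return [
        1 - (P2**3 * P3) / (71785 * P1**4),
        (4 * P2**2 - P1 * P2) / (12566 * (P2 * P1**3 - P1**4))
            + 1 / (5108 * P1**2) - 1,
        1 - 140.45 * P1 / (P2**2 * P3),
        (P1 + P2) / 1.5 - 1,
    ]

# a near-optimal design often quoted in the literature (approximate;
# rounding may leave some constraints marginally violated)
x = (0.0517, 0.357, 11.27)
print(spring_volume(x), [round(g, 4) for g in spring_constraints(x)])
```

The volume at this design is close to the best-known value of roughly 0.0127, which is the scale of the averages reported for this problem in the comparison tables.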

6.1.3. Three Bar Truss Design

Introduced by Ray and Saini [43,44], this problem aims to minimize the weight of a Three Bar Truss while considering stress, deflection, and buckling constraints. The design variables are the cross-sectional areas of the bars. The desired placement of the bars is shown in Figure 5. The objective is to minimize the cost function [44]:
$$f(B_1, B_2) = \left(2\sqrt{2}\,B_1 + B_2\right) \times l$$
subject to the following:
$$\begin{aligned} S_1 &= \frac{\sqrt{2}\,B_1 + B_2}{\sqrt{2}\,B_1^2 + 2 B_1 B_2}\,P - \sigma \le 0 \\ S_2 &= \frac{B_2}{\sqrt{2}\,B_1^2 + 2 B_1 B_2}\,P - \sigma \le 0 \\ S_3 &= \frac{1}{B_1 + \sqrt{2}\,B_2}\,P - \sigma \le 0 \end{aligned}$$
where
$$0 \le B_1, B_2 \le 1, \quad l = 100\ \text{cm}, \quad P = \sigma = 2\ \text{kN/cm}^2\ \text{(kilonewton per square centimetre)}$$
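A sketch of the truss model follows, taking the allowable stress σ equal to the stated load P = 2 kN/cm², as in the standard formulation of this problem:

```python
import math

def truss_weight(B, l=100.0):
    """Weight of the three-bar truss with cross-sections B = (B1, B2)."""
    B1, B2 = B
    return (2 * math.sqrt(2) * B1 + B2) * l

def truss_constraints(B, P=2.0, sigma=2.0):
    """Stress constraints on the three bars; each entry must be <= 0."""
    B1, B2 = B
    denom = math.sqrt(2) * B1**2 + 2 * B1 * B2
    return [
        (math.sqrt(2) * B1 + B2) / denom * P - sigma,
        B2 / denom * P - sigma,
        1 / (B1 + math.sqrt(2) * B2) * P - sigma,
    ]

# near the best-known design (approximate values from the literature)
x = (0.789, 0.408)
print(truss_weight(x), truss_constraints(x))
```

At this design the weight is close to the well-known optimum of about 263.9, and all three stress constraints are satisfied with the first one nearly active.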

6.1.4. Gear Train Design

The Gear Train Design problem is a type of unconstrained optimization involving four integer variables. Initially introduced by Sandgren [45], this problem focuses on minimizing the cost associated with the gear ratio of a specific gear train, illustrated in Figure 6. The gear ratio is calculated using the following formula [46]:
$$\text{Gear ratio} = \frac{G_{ta} \times G_{tb}}{G_{tc} \times G_{td}}$$
Here, $G_{ta}$, $G_{tb}$, $G_{tc}$, and $G_{td}$ represent the number of teeth on gears $G_1$, $G_2$, $G_3$, and $G_4$, respectively. Each $G_{ti}$ indicates the number of teeth on the gearwheel $G_i$. The mathematical expression for this optimization problem is as follows:
$$f(G_{ta}, G_{tb}, G_{tc}, G_{td}) = \left( \frac{1}{6.931} - \frac{G_{ta} \times G_{tb}}{G_{tc} \times G_{td}} \right)^2$$
This formula is aimed at optimizing the gear ratio to achieve cost efficiency in the design [46].
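Because the objective is the squared deviation of the realized ratio from 1/6.931, it evaluates in one line; the tooth counts below are a frequently cited near-optimal combination from the literature, not a result reported in this paper:

```python
def gear_train_error(Gta, Gtb, Gtc, Gtd):
    """Squared deviation of the realized gear ratio from the target 1/6.931."""
    return (1 / 6.931 - (Gta * Gtb) / (Gtc * Gtd)) ** 2

# a frequently cited near-optimal set of integer tooth counts
print(gear_train_error(16, 19, 43, 49))
```

The tiny squared error at this combination (on the order of 1e−12) matches the magnitude of the best averages reported for this problem in the comparison tables, which is also why the squared form of the objective is the consistent reading of the formula above.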

6.1.5. Cantilever Beam Design

This problem involves optimizing the weight of a Cantilever Beam with a square cross-section, as shown in Figure 7. The beam is fixed at one end and subjected to a vertical force at the other. The design variables are the heights (or widths) of the beam elements, while the thickness is fixed (t = 2/3). These constraints ensure that the beam can carry the applied load without failing. The objective is to minimize the following [46]:
$$f(x) = 0.0624\,(x_1 + x_2 + x_3 + x_4 + x_5)$$
subject to the following:
$$S(x) = \frac{61}{x_1^3} + \frac{37}{x_2^3} + \frac{19}{x_3^3} + \frac{7}{x_4^3} + \frac{1}{x_5^3} - 1 \le 0$$
with constraints on the following variable bounds: $0.01 \le x_j \le 100$.
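The model transcribes directly; the design vector used below is an approximate near-optimal solution often quoted in the literature, at which the single constraint is essentially active:

```python
def cantilever_weight(x):
    """Weight of the five-element cantilever beam."""
    return 0.0624 * sum(x)

def cantilever_constraint(x):
    """S(x) <= 0 is required for the beam to carry the applied load."""
    x1, x2, x3, x4, x5 = x
    return 61 / x1**3 + 37 / x2**3 + 19 / x3**3 + 7 / x4**3 + 1 / x5**3 - 1

# approximate near-optimal element heights from the literature
x = (6.016, 5.309, 4.494, 3.502, 2.153)
print(cantilever_weight(x), cantilever_constraint(x))
```

The weight at this design is about 1.34, the same scale as the best averages reported for this problem, and the constraint value sits very close to zero, illustrating that the optimum lies on the constraint boundary.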

6.1.6. Welded Beam Design

The goal here is to minimize the cost of producing a welded beam, as illustrated in Figure 8, subject to several constraints that ensure structural integrity and performance. These constraints include limits on the stress, deflection, and stability of the welded beam under the applied loads.
The design variables are as follows [8,42]:
Consider the following:
$$P = (W_t, W_l, B_h, B_t) = (\text{weld thickness},\ \text{length of the welded section},\ \text{bar height},\ \text{bar thickness})$$
The objective is to minimize the following:
$$f(P) = 1.10471\,W_t^2 W_l + 0.04811\,B_h B_t\,(14.0 + W_l)$$
subject to the following:
$$\begin{aligned} S_1(P) &= \tau(P) - \tau_{\max} \le 0 \\ S_2(P) &= \sigma(P) - \sigma_{\max} \le 0 \\ S_3(P) &= \delta(P) - \delta_{\max} \le 0 \\ S_4(P) &= W_t - B_t \le 0 \\ S_5(P) &= P - P_c(P) \le 0 \\ S_6(P) &= 0.125 - W_t \le 0 \\ S_7(P) &= 1.10471\,W_t^2 + 0.04811\,B_h B_t\,(14.0 + W_l) - 5.0 \le 0 \end{aligned}$$
with constraints on the following variable bounds: $0.05 \le W_t \le 2.00$, $0.25 \le W_l \le 1.30$, and $2.00 \le B_h, B_t \le 15.00$.
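The cost function transcribes directly. The shear stress τ, bending stress σ, deflection δ, and buckling load P_c involve lengthy standard expressions that are not reproduced in this sketch, so only the simple geometric constraints are shown; the design vector is a frequently cited near-optimal solution from the literature, not a result from this paper:

```python
def welded_beam_cost(P):
    """Fabrication cost with P = (Wt, Wl, Bh, Bt)."""
    Wt, Wl, Bh, Bt = P
    return 1.10471 * Wt**2 * Wl + 0.04811 * Bh * Bt * (14.0 + Wl)

def simple_constraints(P):
    """Geometric constraints S4 and S6 only; the stress, deflection, and
    buckling constraints require the standard tau/sigma/delta/Pc formulas."""
    Wt, Wl, Bh, Bt = P
    return [Wt - Bt, 0.125 - Wt]

# a frequently cited near-optimal design for this problem
x = (0.2057, 3.4705, 9.0366, 0.2057)
print(welded_beam_cost(x), simple_constraints(x))
```

The cost at this design is approximately 1.725, the benchmark value widely reported for the welded beam problem, which makes it a convenient sanity check for any transcription of the objective.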

6.1.7. Tire Design Problem

The Tire Design problem focuses on minimizing tire deflection, which is critical to improving vehicle performance, safety, and longevity. Tire deflection occurs when a tire deforms under load, impacting factors such as handling, fuel efficiency, and wear. The optimization involves key parameters, including the aspect ratio, section width, load capacity, pressure, and rim diameter, which influence the tire’s stiffness and ability to maintain its shape under varying loads and road conditions. By adjusting these parameters, engineers aim to strike a balance between minimizing deflection and maintaining comfort, traction, and durability [47,48]. This problem, as illustrated in Figure 9, is common in vehicle dynamics optimization and is often addressed using techniques such as constrained nonlinear optimization.
The design variables are as follows:
Consider the following:
$$P = (S_1, S_2, S_3, S_4, S_5) = (\text{aspect ratio},\ \text{section width},\ \text{load capacity},\ \text{pressure},\ \text{rim diameter})$$
The objective is to minimize the following:
$$f(P) = F_z \times \frac{W_1 + W_2 + W_3}{K_1}$$
subject to the following:
$$R = \frac{S_5 \times 25.4}{2}, \quad H = \frac{S_2 \times S_1}{100}, \quad T = \frac{H}{2} \times \left(1 - \frac{S_1}{100}\right) \times \frac{1}{1000}$$
$$I = \frac{H^3}{12} \times T \times 1{,}000{,}000, \quad F_z = \frac{S_3}{4}, \quad P = S_4 \times 1000$$
$$A = \sqrt{R^2 - \left(\frac{H}{2}\right)^2}, \quad B = \sqrt{R^2 - \left(T + \frac{H}{2}\right)^2}$$
$$W_1 = K_1 \times \left(\frac{F_z}{P}\right)^{0.9}$$
$$W_2 = K_2 \times \left(\frac{F_z}{P}\right)^{0.8} \times \left(\frac{A}{B}\right)^{1.1}$$
$$W_3 = K_3 \times \left(\frac{F_z}{P}\right)^{0.7} \times \left(\frac{A}{B}\right)^{1.3} \times \left(\frac{I}{H^3}\right)^{0.7}$$
where $K_1 = 5000$, $K_2 = 10{,}000$, and $K_3 = 20{,}000$.

6.2. Comparative Analysis of MRSO and RSO in Engineering Applications

This subsection presents a comparative analysis of the MRSO versus the original RSO across seven engineering design problems. The objective is to emphasize the enhancements and benefits of the MRSO in solving constrained engineering design problems more efficiently. The comparison highlights the practical improvements brought by the MRSO in terms of performance and reliability.
The results from Table 9 show that the MRSO consistently outperforms the original RSO in terms of average performance across all seven engineering design problems [49]. The average performance metrics in Table 9 clearly demonstrate the MRSO's superiority over the RSO. In the Pressure Vessel Design problem, the MRSO achieves an average cost of 6.956 × 10+3, which is significantly better than the RSO's 1.720 × 10+4. Similarly, in the Tension/Compression Spring Design problem, the MRSO vastly outperforms the RSO, achieving an average cost of 1.417 × 10−2 compared to the RSO's 3.954 × 10+8. These dramatic improvements indicate the MRSO's ability to find more optimal solutions across various complex design challenges. The Gear Train Design and Cantilever Beam problems also reflect this trend, with the MRSO obtaining much lower average values (1.106 × 10−12 and 1.343, respectively) compared to the RSO's higher costs (4.792 × 10−3 and 2.382). Across all seven problems, the MRSO provides more efficient solutions, outclassing the RSO in terms of lower average values in every case.
In terms of consistency, reflected by standard deviation, the MRSO also demonstrates superior performance in most cases. For instance, in the Pressure Vessel Design, the MRSO shows a significantly lower standard deviation (4.479 × 10+2) compared to the RSO (1.031 × 10+4), indicating that the MRSO is not only more efficient but also more stable across iterations. In the Gear Train Design, the MRSO exhibits much smaller variability with a standard deviation of 2.739 × 10−12, compared to the RSO’s higher variation of 6.392 × 10−3. However, there are instances where the RSO shows slightly better consistency, such as in the Three Bar Truss problem, where the RSO achieves a lower standard deviation (4.829) compared to the MRSO (6.522). This suggests that while the MRSO offers substantial improvements in most cases, there are still some applications where further refinement could help improve its consistency.
While the MRSO demonstrates significant improvements over the RSO, there are still cases where both algorithms exhibit weaknesses, particularly in handling complex constraints or highly nonlinear functions. For example, in the Three Bar Truss problem, despite the MRSO having a slightly better average value, both algorithms show relatively close performance with only marginal differences. This suggests that both the MRSO and the RSO may face challenges in solving certain structural design problems with high dimensionality or multiple constraints. Additionally, in the Welded Beam problem, while the MRSO outperforms the RSO, the higher standard deviations for both algorithms indicate instability, suggesting that further refinement may be necessary to improve convergence and reliability in these types of problems. These failure points highlight the need for the continued optimization of the MRSO's parameters and potential hybridization with other algorithms to ensure robust performance across all types of engineering design applications.

6.3. Performance Comparison of MRSO with SCA, LCA, TSA, and DOA on Seven Engineering Design Problems

In this section, we compare the performance of the MRSO with four other metaheuristic algorithms—the SCA, the LCA, the TSA, and the DOA—across seven engineering design problems. These include the Pressure Vessel Design, Spring Design, Three Bar Truss, Gear Train Design, Cantilever Beam, Welded Beam, and Tire Design. Table 10 provides a detailed breakdown of the average, standard deviation, and ranking for each algorithm on these problems. Our evaluation was conducted by running each algorithm 30 times to ensure reliable and accurate results.
In terms of average values, the MRSO showed superior performance in four out of the seven engineering design problems. In the Pressure Vessel Design, the MRSO secured second place with an average of 6.83 × 10+3, coming close to the DOA, which achieved the best performance with 5.98 × 10+3. However, in the Three Bar Truss, the MRSO outperformed all other algorithms, achieving the lowest average value of 2.51 × 10+2. Similarly, in the Cantilever Beam, the MRSO achieved the best average value of 1.34, proving its robustness in solving this particular problem. On the other hand, the MRSO ranked second in Spring Design, where the SCA led with an average of 1.31 × 10−2, compared to the MRSO's 1.38 × 10−2. In Gear Train Design, the MRSO was narrowly beaten by the DOA, placing second with an average of 1.75 × 10−13. In Tire Design, the MRSO took first place, highlighting its strength across multiple problems.
When considering standard deviation, the MRSO displayed strong consistency across most of the engineering problems. In Pressure Vessel Design, the MRSO's standard deviation of 5.21 × 10+2 was lower than those of the SCA and the LCA, indicating more reliable performance across iterations. The MRSO also demonstrated stable results in the Cantilever Beam, with a standard deviation of 2.93 × 10−3, considerably lower than the TSA's 1.13 × 10−2 and the LCA's 4.63 × 10−2, and in Gear Train Design, where it maintained a lower standard deviation (2.08 × 10−13) than the SCA and the LCA. However, in the Three Bar Truss, the MRSO exhibited a higher standard deviation of 6.12 × 10+1 than the DOA and the TSA, indicating some variability in its results despite having the best average value.
When analyzing the ranking based on performance across the seven engineering problems, the MRSO secured top positions in four out of seven problems. In the Three Bar Truss and the Cantilever Beam, the MRSO ranked first, outperforming all other algorithms in these tasks. In the Welded Beam and Tire Design, the MRSO also achieved the best ranking, further highlighting its robustness. However, in Pressure Vessel Design and Gear Train Design, the MRSO ranked second, being narrowly outperformed by the DOA in both cases. In Spring Design, the MRSO came in second again, slightly behind the SCA. Overall, the MRSO demonstrated strong competitive performance, securing first or second place in every problem, as shown in Table 10.
The convergence rates, shown in Figure 10, indicate that the MRSO converges more quickly than the other algorithms in most problems, particularly in Pressure Vessel Design, Three Bar Truss, and Cantilever Beam. This rapid convergence showcases the MRSO's efficiency in finding optimal solutions early in the iteration process. However, the graph also reveals that for certain problems, such as Spring Design and Gear Train Design, the MRSO struggles to outperform other algorithms like the DOA and the SCA, where the DOA's performance fluctuates but often outpaces the MRSO in terms of final optimal values. These findings suggest that while the MRSO is highly effective in most cases, there are specific areas, such as high-dimensional or more complex designs, where further tuning may be required to enhance its optimization capability.
A detailed analysis of the failures reveals that the MRSO and some of the competing algorithms struggle with high-dimensional problems like String Design and Gear Train Design. In the case of String Design, both the MRSO and the DOA produce suboptimal solutions compared to the SCA, showing that the MRSO's exploration capabilities might need improvement for problems with numerous variables. Similarly, in Gear Train Design, while the MRSO performs better than most algorithms, it fails to achieve the best performance, falling behind the DOA. These failures provide valuable insights into the MRSO's limitations, particularly in handling more complex optimization landscapes. Such shortcomings could be addressed by further refining the algorithm and hybridizing it with other metaheuristic methods, improving its performance on high-dimensional or non-convex optimization tasks.

7. Conclusions, Challenges, and Future Research

This section draws conclusions from the study, highlights the challenges encountered, and suggests areas for future research efforts.

7.1. Conclusions

In this research, we introduced the Modified Rat Swarm Optimizer (MRSO) as an improvement over the original Rat Swarm Optimizer (RSO), specifically designed to address its limitations in handling convergence and exploration. The MRSO incorporates key modifications aimed at achieving a better balance between exploration and exploitation, which are critical for solving complex optimization problems effectively.
We thoroughly tested the MRSO against eight recent and commonly used metaheuristic algorithms, including the SCA, the MRA, the LCA, the CSA, the TSA, the DOA, the EHO, and the WSO, on both classical benchmark functions and CEC 2019 benchmark functions. The results consistently demonstrated the MRSO’s superiority in avoiding local optima and yielding better results than the standard RSO across a wide range of functions. The MRSO’s competitive performance, particularly in handling multimodal and high-dimensional problems, highlights its robustness and effectiveness as an optimization tool.
Additionally, we applied the MRSO to seven real-world engineering design problems—Pressure Vessel Design, String Design, Three Bar Truss, Gear Train Design, Cantilever Beam, Welded Beam, and Tire Design. For this, we compared the MRSO’s performance with four of the algorithms (the SCA, the LCA, the TSA, and the DOA). The results indicated that the MRSO not only outperformed the RSO but also consistently outperformed the other four metaheuristic algorithms in most cases. These findings emphasize the MRSO’s reliability and efficiency in solving real-world engineering applications, where striking the right balance between exploration and exploitation is crucial to finding optimal solutions.

7.2. Limitations

Despite these promising results, there are notable limitations that must be addressed. First, our evaluation was confined to a specific set of benchmark functions and engineering problems. Although these problems are widely accepted in the optimization field, they may not represent the full spectrum of real-world challenges. Therefore, the results may not fully generalize to all problem types or industries, particularly for those that deviate from the tested scenarios.
Second, the parameter settings employed in our tests were fixed after initial tuning, which worked well under the tested conditions. However, it is conceivable that further fine-tuning or employing adaptive parameter control mechanisms could yield even better results. This underscores the need for a more detailed investigation into dynamic parameter settings that could adapt to the unique demands of different optimization problems.
Additionally, the scope of this study did not include testing the MRSO on large-scale, highly complex engineering problems, which may limit its immediate application to such contexts. Addressing large-scale optimization scenarios will be crucial for future research to determine how scalable the MRSO is when dealing with computationally intensive problems. Lastly, the comparative algorithms used in our evaluation, while competitive, were not exhaustive. A more comprehensive comparison involving newer or more established algorithms could provide deeper insights into the MRSO’s performance advantages and its areas of improvement.

7.3. Future Work

Future research should focus on extending the MRSO's application to a wider range of optimization challenges, particularly those involving large-scale, real-world engineering problems. Doing so would help confirm how adaptable and scalable the MRSO is when faced with more complex, computationally demanding tasks, providing deeper insights into how well it can generalize beyond the current tests.
Another exciting area for future work is developing adaptive mechanisms that allow the MRSO to adjust its parameters automatically, depending on the specific problem it is tackling. This would not only make the MRSO more flexible but would also reduce the need for manual tuning, making it an even more powerful tool across a broader variety of optimization tasks.
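One classical adaptive mechanism of the kind envisaged here — shown purely as an illustration, not as part of the MRSO — is the one-fifth success rule, which enlarges a step-size parameter when improving moves are frequent (favoring exploration) and shrinks it when they are rare (favoring exploitation):

```python
def adapt_step(sigma, success_rate, target=0.2, factor=1.22):
    """One-fifth success rule: enlarge the step size when the fraction of
    improving moves exceeds `target`, shrink it otherwise."""
    return sigma * factor if success_rate > target else sigma / factor

sigma = 1.0
sigma = adapt_step(sigma, success_rate=0.5)   # frequent success -> explore more
assert sigma > 1.0
sigma = adapt_step(sigma, success_rate=0.05)  # rare success -> exploit
```

Embedding such a feedback loop would remove one source of manual tuning, at the cost of one extra hyperparameter (the target success rate).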
Additionally, incorporating surrogate models or other techniques to lower computational costs could make the MRSO more efficient in solving high-dimensional problems. Since real-world scenarios often require a balance between computational resources and solution quality, integrating methods that improve the MRSO’s efficiency could make it more practical for large-scale industry applications. Lastly, future studies should include a wider variety of metaheuristic algorithms and configurations, which will provide a clearer understanding of the MRSO’s strengths and highlight areas where it could be further improved.

Author Contributions

Conceptualization, A.A.A.; methodology, A.A.A. and H.S.A.; software, A.A.A., H.S.A., S.I.S., I.A.M., and T.A.R.; validation, A.A.A. and T.A.R.; formal analysis, S.I.S. and I.A.M.; investigation, A.A.A.; resources, A.A.A., H.S.A., S.I.S., I.A.M., and T.A.R.; data curation, A.A.A.; writing—original draft preparation, A.A.A.; writing—review and editing, H.S.A., S.I.S., I.A.M., and T.A.R.; visualization, A.A.A.; supervision, A.A.A.; project administration, A.A.A.; funding acquisition, H.S.A., S.I.S., and I.A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The MATLAB code for the Modified Rat Swarm Optimization (MRSO) algorithm used in this study is available at (https://www.mathworks.com/matlabcentral/fileexchange/172930-modified-rat-swarm-optimization-mrso-algorithm).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1, Table A2 and Table A3 present the mathematical formulations of the 23 standard benchmark functions used in this study. These tables provide a detailed overview of the equations, problem dimensions, upper and lower boundary ranges, and the minimum goal values for each function. This comprehensive information serves as the foundation for evaluating the performance of the algorithms tested [24,35].
Table A1. Unimodal test functions [24].

F1(x) = Σ_{i=1}^{n} x_i²; Dimension: 10; Range: [−100, 100]; f_min = 0
F2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|; Dimension: 10; Range: [−10, 10]; f_min = 0
F3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)²; Dimension: 10; Range: [−30, 30]; f_min = 0
F4(x) = max_i {|x_i|, 1 ≤ i ≤ n}; Dimension: 10; Range: [−100, 100]; f_min = 0
F5(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²]; Dimension: 10; Range: [−30, 30]; f_min = 0
F6(x) = Σ_{i=1}^{n} (⌊x_i + 0.5⌋)²; Dimension: 10; Range: [−100, 100]; f_min = 0
F7(x) = Σ_{i=1}^{n} i·x_i⁴ + random[0, 1); Dimension: 10; Range: [−1.28, 1.28]; f_min = 0
Table A2. Multimodal test functions [35].

F8(x) = Σ_{i=1}^{n} −x_i·sin(√|x_i|); Dimension: 10; Range: [−500, 500]; f_min = −418.9829 × n (where n is the problem dimension)
F9(x) = Σ_{i=1}^{n} [x_i² − 10·cos(2πx_i) + 10]; Dimension: 10; Range: [−10, 10]; f_min = 0
F10(x) = −20·exp(−0.2·√((1/n)·Σ_{i=1}^{n} x_i²)) − exp((1/n)·Σ_{i=1}^{n} cos(2πx_i)) + 20 + e; Dimension: 10; Range: [−32, 32]; f_min = 0
F11(x) = (1/4000)·Σ_{i=1}^{n} x_i² − Π_{i=1}^{n} cos(x_i/√i) + 1; Dimension: 10; Range: [−600, 600]; f_min = 0
F12(x) = (π/n)·{10·sin²(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)²·[1 + 10·sin²(πy_{i+1})] + (y_n − 1)²} + Σ_{i=1}^{n} u(x_i, 10, 100, 4), where y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a ≤ x_i ≤ a; k(−x_i − a)^m if x_i < −a; Dimension: 10; Range: [−50, 50]; f_min = 0
F13(x) = 0.1·{sin²(3πx_1) + Σ_{i=1}^{n} (x_i − 1)²·[1 + sin²(3πx_i + 1)] + (x_n − 1)²·[1 + sin²(2πx_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4); Dimension: 30; Range: [−50, 50]; f_min = 0
F14(x) = [1/500 + Σ_{j=1}^{25} 1/(j + Σ_{i=1}^{2} (x_i − a_ij)⁶)]^{−1}; Dimension: 2; Range: [−65, 65]; f_min = 1
F15(x) = Σ_{i=1}^{11} [a_i − x_1(b_i² + b_i·x_2)/(b_i² + b_i·x_3 + x_4)]²; Dimension: 4; Range: [−5, 5]; f_min = 0.0003
F16(x) = 4x_1² − 2.1x_1⁴ + (1/3)x_1⁶ + x_1x_2 − 4x_2² + 4x_2⁴; Dimension: 2; Range: [−5, 5]; f_min = −1.0316
Table A3. Fixed-dimension multimodal benchmark functions [8].

F17(x) = (x_2 − (5.1/(4π²))x_1² + (5/π)x_1 − 6)² + 10(1 − 1/(8π))cos(x_1) + 10; Dimension: 2; Range: [−5, 5]; f_min = 0.398
F18(x) = [1 + (x_1 + x_2 + 1)²(19 − 14x_1 + 3x_1² − 14x_2 + 6x_1x_2 + 3x_2²)] × [30 + (2x_1 − 3x_2)²(18 − 32x_1 + 12x_1² + 48x_2 − 36x_1x_2 + 27x_2²)]; Dimension: 2; Range: [−2, 2]; f_min = 3
F19(x) = −Σ_{i=1}^{4} c_i·exp(−Σ_{j=1}^{3} a_ij(x_j − p_ij)²); Dimension: 3; Range: [1, 3]; f_min = −3.86
F20(x) = −Σ_{i=1}^{4} c_i·exp(−Σ_{j=1}^{6} a_ij(x_j − p_ij)²); Dimension: 6; Range: [0, 1]; f_min = −3.32
F21(x) = −Σ_{i=1}^{5} [(X − α_i)(X − α_i)^T + c_i]^{−1}; Dimension: 4; Range: [0, 10]; f_min = −10.1532
F22(x) = −Σ_{i=1}^{7} [(X − α_i)(X − α_i)^T + c_i]^{−1}; Dimension: 4; Range: [0, 10]; f_min = −10.4028
F23(x) = −Σ_{i=1}^{10} [(X − α_i)(X − α_i)^T + c_i]^{−1}; Dimension: 4; Range: [0, 10]; f_min = −10.536
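For illustration, a few of the benchmark functions above translate directly into code. The sketch below implements F1 (sphere), F9 (Rastrigin), and F10 (Ackley) for an arbitrary dimension; each attains its minimum of 0 at the origin. These are plain transcriptions for reference, not the authors' MATLAB implementation:

```python
import math

def f1_sphere(x):
    # F1: sum of squares, global minimum 0 at x = 0
    return sum(v * v for v in x)

def f9_rastrigin(x):
    # F9: sum(x_i^2 - 10*cos(2*pi*x_i) + 10), global minimum 0 at x = 0
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def f10_ackley(x):
    # F10: -20*exp(-0.2*sqrt(mean(x^2))) - exp(mean(cos(2*pi*x))) + 20 + e
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

origin = [0.0] * 10
print(f1_sphere(origin), f9_rastrigin(origin), f10_ackley(origin))
```

The remaining functions follow the same pattern, with the fixed-dimension ones additionally requiring their published coefficient matrices (a_ij, b_i, c_i, p_ij).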

References

  1. Wang, G.-G.; Zhao, X.; Li, K. Metaheuristic Algorithms: Theory and Practice; CRC Press: Boca Raton, FL, USA, 2024. [Google Scholar]
  2. Munciño, D.M.; Damian-Ramírez, E.A.; Cruz-Fernández, M.; Montoya-Santiyanes, L.A.; Rodríguez-Reséndiz, J. Metaheuristic and Heuristic Algorithms-Based Identification Parameters of a Direct Current Motor. Algorithms 2024, 17, 209. [Google Scholar] [CrossRef]
  3. Chen, L.; Zhao, Y.; Ma, Y.; Zhao, B.; Feng, C. Improving Wild Horse Optimizer: Integrating Multistrategy for Robust Performance across Multiple Engineering Problems and Evaluation Benchmarks. Mathematics 2023, 11, 3861. [Google Scholar] [CrossRef]
  4. Ameen, A.A.; Rashid, T.A. A Tutorial on Child Drawing Development Optimization; Springer International Publishing AG: Muscat, Oman, 2022; pp. 1–15. Available online: http://iciitb.mcbs.edu.om/en/iciitb-home (accessed on 30 January 2023).
  5. Zhou, S.; Shi, Y.; Wang, D.; Xu, X.; Xu, M.; Deng, Y. Election Optimizer Algorithm: A New Meta-Heuristic Optimization Algorithm for Solving Industrial Engineering Design Problems. Mathematics 2024, 12, 1513. [Google Scholar] [CrossRef]
  6. Bibri, S.E.; Krogstie, J.; Kaboli, A.; Alahi, A. Smarter eco-cities and their leading-edge artificial intelligence of things solutions for environmental sustainability: A comprehensive systematic review. Environ. Sci. Ecotechnol. 2024, 19, 100330. [Google Scholar] [CrossRef]
  7. Leiva, D.; Ramos-Tapia, B.; Crawford, B.; Soto, R.; Cisternas-Caneo, F. A Novel Approach to Combinatorial Problems: Binary Growth Optimizer Algorithm. Biomimetics 2024, 9, 283. [Google Scholar] [CrossRef]
  8. Dhiman, G.; Garg, M.; Nagar, A.; Kumar, V.; Dehghani, M. A novel algorithm for global optimization: Rat swarm optimizer. J. Ambient Intell. Humaniz. Comput. 2021, 12, 8457–8482. [Google Scholar] [CrossRef]
  9. Awadallah, M.A.; Al-Betar, M.A.; Braik, M.S.; Hammouri, A.I.; Doush, I.A.; Zitar, R.A. An enhanced binary Rat Swarm Optimizer based on local-best concepts of PSO and collaborative crossover operators for feature selection. Comput. Biol. Med. 2022, 147, 105675. [Google Scholar] [CrossRef]
  10. Houssein, E.H.; Hosney, M.E.; Oliva, D.; Younis, E.M.G.; Ali, A.A.; Mohamed, W.M. An efficient discrete rat swarm optimizer for global optimization and feature selection in chemoinformatics. Knowl.-Based Syst. 2023, 275, 110697. [Google Scholar] [CrossRef]
  11. Toolabi Moghadam, A.; Aghahadi, M.; Eslami, M.; Rashidi, S.; Arandian, B.; Nikolovski, S. Adaptive rat swarm optimization for optimum tuning of SVC and PSS in a power system. Int. Trans. Electr. Energy Syst. 2022, 2022, 4798029. [Google Scholar] [CrossRef]
  12. Sayed, G.I. A novel multi-objective rat swarm optimizer-based convolutional neural networks for the diagnosis of COVID-19 disease. Autom. Control Comput. Sci. 2022, 56, 198–208. [Google Scholar] [CrossRef]
  13. Zebiri, I.; Zeghida, D.; Redjimi, M. Rat swarm optimizer for data clustering. Jordanian J. Comput. Inf. Technol. 2022, 8, 1. [Google Scholar] [CrossRef]
  14. Eslami, M.; Akbari, E.; Seyed Sadr, S.T.; Ibrahim, B.F. A novel hybrid algorithm based on rat swarm optimization and pattern search for parameter extraction of solar photovoltaic models. Energy Sci. Eng. 2022, 10, 2689–2713. [Google Scholar] [CrossRef]
  15. Manickam, P.; Girija, M.; Dutta, A.K.; Babu, P.R.; Arora, K.; Jeong, M.K.; Acharya, S. Empowering Cybersecurity Using Enhanced Rat Swarm Optimization with Deep Stack-Based Ensemble Learning Approach. IEEE Access 2024, 12, 62492–62501. [Google Scholar] [CrossRef]
  16. Rahab, H.; Haouassi, H.; Souidi, M.E.H.; Bakhouche, A.; Mahdaoui, R.; Bekhouche, M. A modified binary rat swarm optimization algorithm for feature selection in Arabic sentiment analysis. Arab. J. Sci. Eng. 2023, 48, 10125–10152. [Google Scholar] [CrossRef]
  17. Singla, M.K.; Gupta, J.; Alsharif, M.H.; Kim, M.-K. A modified particle swarm optimization rat search algorithm and its engineering application. PLoS ONE 2024, 19, e0296800. [Google Scholar] [CrossRef]
  18. Lou, T.; Guan, G.; Yue, Z.; Wang, Y.; Tong, S. A Hybrid K-means Method based on Modified Rat Swarm Optimization Algorithm for Data Clustering. Preprint 2024. [Google Scholar] [CrossRef]
  19. Mzili, T.; Riffi, M.E.; Mzili, I.; Dhiman, G. A novel discrete Rat swarm optimization (DRSO) algorithm for solving the traveling salesman problem. Decis. Mak. Appl. Manag. Eng. 2022, 5, 287–299. [Google Scholar] [CrossRef]
  20. Mzili, T.; Mzili, I.; Riffi, M.E. Artificial rat optimization with decision-making: A bio-inspired metaheuristic algorithm for solving the traveling salesman problem. Decis. Mak. Appl. Manag. Eng. 2023, 6, 150–176. [Google Scholar] [CrossRef]
  21. Mzili, T.; Mzili, I.; Riffi, M.E. Optimizing production scheduling with the Rat Swarm search algorithm: A novel approach to the flow shop problem for enhanced decision making. Decis. Mak. Appl. Manag. Eng. 2023, 6, 16–42. [Google Scholar] [CrossRef]
  22. Alruwais, N.; Alabdulkreem, E.; Khalid, M.; Negm, N.; Marzouk, R.; Al Duhayyim, M.; Balaji, P.; Ilayaraja, M.; Gupta, D. Modified rat swarm optimization with deep learning model for robust recycling object detection and classification. Sustain. Energy Technol. Assess. 2023, 59, 103397. [Google Scholar] [CrossRef]
  23. Gopi, P.; Alluraiah, N.C.; Kumar, P.H.; Bajaj, M.; Blazek, V.; Prokop, L. Improving load frequency controller tuning with rat swarm optimization and porpoising feature detection for enhanced power system stability. Sci. Rep. 2024, 14, 15209. [Google Scholar] [CrossRef]
  24. Ameen, A.A.; Rashid, T.A.; Askar, S. CDDO-HS: Child Drawing Development Optimization-Harmony Search Algorithm. Appl. Sci. 2023, 13, 5795. [Google Scholar] [CrossRef]
  25. Ameen, A.A.; Rashid, T.A.; Askar, S. MCDDO: Overcoming Challenges and Enhancing Performance in Search Optimization. 2023. Available online: https://ouci.dntb.gov.ua/works/7qjY8BB4/ (accessed on 16 September 2024).
  26. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  27. Desuky, A.S.; Cifci, M.A.; Kausar, S.; Hussain, S.; El Bakrawy, L.M. Mud Ring Algorithm: A new meta-heuristic optimization algorithm for solving mathematical and engineering challenges. IEEE Access 2022, 10, 50448–50466. [Google Scholar] [CrossRef]
  28. Houssein, E.H.; Oliva, D.; Samee, N.A.; Mahmoud, N.F.; Emam, M.M. Liver Cancer Algorithm: A novel bio-inspired optimizer. Comput. Biol. Med. 2023, 165, 107389. [Google Scholar] [CrossRef]
  29. Qais, M.H.; Hasanien, H.M.; Turky, R.A.; Alghuwainem, S.; Tostado-Véliz, M.; Jurado, F. Circle search algorithm: A geometry-based metaheuristic optimization algorithm. Mathematics 2022, 10, 1626. [Google Scholar] [CrossRef]
  30. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
  31. Bairwa, A.K.; Joshi, S.; Singh, D. Dingo optimizer: A nature-inspired metaheuristic approach for engineering problems. Math. Probl. Eng. 2021, 2021, 2571863. [Google Scholar] [CrossRef]
  32. Al-Betar, M.A.; Awadallah, M.A.; Braik, M.S.; Makhadmeh, S.; Doush, I.A. Elk herd optimizer: A novel nature-inspired metaheuristic algorithm. Artif. Intell. Rev. 2024, 57, 48. [Google Scholar] [CrossRef]
  33. Braik, M.; Hammouri, A.; Atwan, J.; Al-Betar, M.A.; Awadallah, M.A. White Shark Optimizer: A novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl.-Based Syst. 2022, 243, 108457. [Google Scholar] [CrossRef]
  34. Montgomery, D.C. Design and Analysis of Experiments; John Wiley & Sons: Hoboken, NJ, USA, 2017. [Google Scholar]
  35. Ameen, A.A. Metaheuristic Optimazation Algorithms in Applied Science and Engineering Applications.pdf, Erbil Polytechnic University, 2024. Available online: https://epu.edu.iq/2024/03/17/metaheuristic-optimization-algorithms-in-applied-science-and-engineering-applications-2/ (accessed on 16 September 2024).
  36. Hollander, M.; Wolfe, D.A.; Chicken, E. Nonparametric Statistical Methods; John Wiley & Sons: Hoboken, NJ, USA, 2013. [Google Scholar]
  37. Iruthayarajan, M.W.; Baskar, S. Covariance matrix adaptation evolution strategy-based design of centralized PID controller. Expert Syst. Appl. 2010, 37, 5775–5781. [Google Scholar] [CrossRef]
  38. Abualigah, L.; Diabat, A. Advances in sine cosine algorithm: A comprehensive survey. Artif. Intell. Rev. 2021, 54, 2567–2608. [Google Scholar] [CrossRef]
  39. Eisinga, R.; Heskes, T.; Pelzer, B.; Te Grotenhuis, M. Exact p-values for pairwise comparison of Friedman rank sums, with application to comparing classifiers. BMC Bioinform. 2017, 18, 68. [Google Scholar] [CrossRef]
  40. Kannan, B.K.; Kramer, S.N. An Augmented Lagrange Multiplier-Based Method for Mixed Integer Discrete Continuous Optimization and Its Applications to Mechanical Design. 1994. Available online: https://asmedigitalcollection.asme.org/mechanicaldesign/article-abstract/116/2/405/454458/An-Augmented-Lagrange-Multiplier-Based-Method-for?redirectedFrom=fulltext (accessed on 16 September 2024).
  41. Çelik, Y.; Kutucu, H. Solving the Tension/Compression Spring Design Problem by an Improved Firefly Algorithm. IDDM 2018, 1, 1–7. [Google Scholar]
  42. Dhiman, G. SSC: A hybrid nature-inspired meta-heuristic optimization algorithm for engineering applications. Knowl.-Based Syst. 2021, 222, 106926. [Google Scholar] [CrossRef]
  43. Gandomi, A.H.; Yang, X.S.; Talatahari, S.; Alavi, A.H. Metaheuristic Algorithms in Modeling and Optimization. Metaheuristic Appl. Struct. Infrastruct. 2013, 1, 1–24. [Google Scholar] [CrossRef]
  44. Fauzi, H.; Batool, U. A three-bar truss design using single-solution simulated Kalman filter optimizer. Mekatronika J. Intell. Manuf. Mechatron. 2019, 1, 98–102. [Google Scholar] [CrossRef]
  45. Sandgren, E. Nonlinear integer and discrete programming in mechanical design optimization. J. Mech. Des. 1990, 112, 223–229. [Google Scholar] [CrossRef]
  46. Gandomi, A.H.; Yang, X.-S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 17–35. [Google Scholar] [CrossRef]
  47. Nakajima, Y. Application of computational mechanics to tire design—Yesterday, today, and tomorrow. Tire Sci. Technol. 2011, 39, 223–244. [Google Scholar] [CrossRef]
  48. Nakajima, Y.; Kadowaki, H.; Kamegawa, T.; Ueno, K. Application of a neural network for the optimization of tire design. Tire Sci. Technol. 1999, 27, 62–83. [Google Scholar] [CrossRef]
  49. Ghasri, M. Benchmark Problems. MathWorks: R2022b. 2023. Available online: https://www.mathworks.com/matlabcentral/fileexchange/124810-benchmark-problems#version_history_tab (accessed on 16 September 2024).
Figure 1. A flowchart representation of the MRSO algorithm.
Figure 2. Convergence curves comparing MRSO variants and the RSO on the CEC 2019 test using the fitness function.
Figure 3. Pressure Vessel Design diagram.
Figure 4. Diagram of the Tension/Compression Spring Design.
Figure 5. Diagram of the Three Bar Truss problem.
Figure 6. Diagram of the gear train system.
Figure 7. Diagram of the Cantilever Beam problem.
Figure 8. Welded Beam Design schematic diagram.
Figure 9. Tire Design problem schematic diagram.
Figure 10. Convergence rate comparison of MRSO and metaheuristic algorithms across engineering design problems.
Table 1. The MRSO vs. standard RSO performance on classical benchmark functions: average, standard deviation, and statistical significance.

| Fun | MRSO Avg. | MRSO Std. | RSO Avg. | RSO Std. | p-Value |
|---|---|---|---|---|---|
| F1 | 1.12 × 10−6 | 4.765 × 10−6 | 1.537 × 10−256 | 0 | 2.03 × 10−1 |
| F2 | 4.515 × 10−34 | 2.228 × 10−33 | 2.942 × 10−134 | 1.612 × 10−133 | 2.716 × 10−1 |
| F3 | 5.092 × 10+3 | 2.754 × 10+4 | 2.205 × 10−264 | 2.205 × 10−264 | 0 |
| F4 | 3.486 × 10−16 | 1.91 × 10−15 | 2.868 × 10−86 | 1.571 × 10−85 | 3.215 × 10−1 |
| F5 | 2.885 × 10+1 | 1.323 × 10−1 | 2.882 × 10+1 | 2.635 × 10−1 | 5.945 × 10−1 |
| F6 | 8.16 × 10−1 | 4.033 × 10−1 | 3.415 | 4.748 × 10−1 | 1.099 × 10−30 |
| F7 | 1.981 × 10−4 | 1.438 × 10−4 | 4.861 × 10−4 | 5.636 × 10−4 | 8.792 × 10−3 |
| F8 | −8.613 × 10+3 | 1.835 × 10+3 | −5.709 × 10+3 | 1.073 × 10+3 | 4.498 × 10−10 |
| F9 | 0 | 0 | 0 | 0 | 0 |
| F10 | 2.189 × 10−5 | 8.528 × 10−5 | 1.007 × 10−15 | 6.486 × 10−16 | 1.651 × 10−1 |
| F11 | 0 | 0 | 0 | 0 | 0 |
| F12 | 5.646 × 10−2 | 4.465 × 10−2 | 3.366 × 10−1 | 1.055 × 10−1 | 2.127 × 10−19 |
| F13 | 2.824 | 9.216 × 10−2 | 2.862 | 4.32 × 10−2 | 4.514 × 10−2 |
| F14 | 1.395 | 8.072 × 10−1 | 2.646 | 1.787 | 9.131 × 10−4 |
| F15 | 8.071 × 10−4 | 4.068 × 10−4 | 1.178 × 10−3 | 6.153 × 10−4 | 7.839 × 10−3 |
| F16 | −1.032 | 1.463 × 10−5 | −1.031 | 2.045 × 10−4 | 2.719 × 10−4 |
| F17 | 3.99 × 10−1 | 3.303 × 10−3 | 4.076 × 10−1 | 9.812 × 10−3 | 2.831 × 10−5 |
| F18 | 3.000 | 4.028 × 10−6 | 3.000 | 5.206 × 10−5 | 1.119 × 10−2 |
| F19 | −3.856 | 1.807 × 10−3 | −3.426 | 2.942 × 10−1 | 6.275 × 10−11 |
| F20 | −2.775 | 3.885 × 10−1 | −1.723 | 3.772 × 10−1 | 2.919 × 10−15 |
| F21 | −2.634 | 2.536 | −7.08 × 10−1 | 3.728 × 10−1 | 1.236 × 10−4 |
| F22 | −3.719 | 2.774 | −1.035 | 6.255 × 10−1 | 3.033 × 10−6 |
| F23 | −2.360 | 2.440 | −1.353 | 7.529 × 10−1 | 3.504 × 10−2 |
Table 2. The MRSO vs. standard RSO performance on the CEC 2019 benchmark functions: average, standard deviation, and statistical significance.

| Fun | MRSO Avg. | MRSO Std. | RSO Avg. | RSO Std. | p-Value |
|---|---|---|---|---|---|
| F1 | 1.588 × 10+5 | 3.199 × 10+5 | 6.263 × 10+4 | 1.392 × 10+4 | 1.053 × 10−1 |
| F2 | 1.835 × 10+1 | 7.231 × 10−3 | 1.848 × 10+1 | 1.981 × 10−1 | 3.797 × 10−4 |
| F3 | 1.370 × 10+1 | 1.337 × 10−6 | 1.370 × 10+1 | 1.828 × 10−4 | 2.256 × 10−2 |
| F4 | 9.205 × 10+3 | 3.203 × 10+3 | 8.861 × 10+3 | 2.152 × 10+3 | 6.275 × 10−1 |
| F5 | 4.574 | 4.133 × 10−1 | 4.631 | 4.290 × 10−1 | 6.076 × 10−1 |
| F6 | 1.095 × 10+1 | 1.011 | 1.165 × 10+1 | 8.597 × 10−1 | 4.905 × 10−3 |
| F7 | 6.113 × 10+2 | 2.291 × 10+2 | 7.898 × 10+2 | 2.154 × 10+2 | 2.915 × 10−3 |
| F8 | 6.311 | 4.138 × 10−1 | 6.321 | 4.334 × 10−1 | 9.215 × 10−1 |
| F9 | 4.967 × 10+2 | 1.493 × 10+2 | 5.866 × 10+2 | 1.362 × 10+2 | 1.791 × 10−2 |
| F10 | 2.130 × 10+1 | 1.490 × 10−1 | 2.147 × 10+1 | 1.116 × 10−1 | 5.316 × 10−6 |
Table 3. Metrics-based comparison of MRSO and eight algorithms on unimodal functions (F1–F7) from classical benchmarks.
AlgorithmMetric/FunctionF1F2F3F4F5F6F7
MRSOAverage1.12 × 10−64.52 × 10−345.09 × 10+33.49 × 10−162.89 × 10+18.16 × 10−11.98 × 10−4
Std.4.76 × 10−62.23 × 10−332.75 × 10+41.91 × 10−151.32 × 10−14.03 × 10−11.44 × 10−4
Ranking6595563
SCAAverage3.31 × 10−125.82 × 10−106.55 × 10−33.21 × 10−32.90 × 10+14.34 × 10−12.66 × 10−3
Std.8.39 × 10−127.97 × 10−106.55 × 10−31.42 × 10−21.19 × 10+21.49 × 10−12.81 × 10−3
p_value2.03 × 10−11.83 × 10−41.77 × 10−22.2 × 10−19.93 × 10−19.06 × 10−61.18 × 10−5
Ranking5656757
MRAAverage000006.41 × 10−71.01 × 10−4
Std.000001.46 × 10−61.07 × 10−4
p_value2.03 × 10−12.72 × 10−103.21 × 10−14.69 × 10−1295.97 × 10-164.57 × 10−3
Ranking1111122
LCAAverage2.42 × 10−11.87 × 10−14.1 × 10+17.3 × 10−21.252.35 × 10−16.39 × 10−4
Std.2.99 × 10−11.1 × 10−14.1 × 10+14.68 × 10−22.075.69 × 10−16.41 × 10−4
p_value4.24 × 10−54.34 × 10−135.86 × 10+17.54 × 10−121 × 10−582.65 × 10−55.18 × 10−4
Ranking8867346
CSAAverage9.27 × 10−1851.59 × 10−971.82 × 10−2141.92 × 10−113004.34 × 10−4
Std.06.15 × 10−971.82 × 10−2141.05 × 10−112006.18 × 10−4
p_value2.03 × 10−12.72 × 10−103.21 × 10−14.69 × 10−1295.97 × 10−164.64 × 10−2
Ranking3322115
TSAAverage2.9 × 10−1966.51 × 10−1011.03 × 10−1813.77 × 10−922.87 × 10+16.217.43 × 10−5
Std.02.27 × 10−1001.03 × 10−1819.37 × 10−923.07 × 10−18.22 × 10−15.81 × 10−5
p_value2.03 × 10−12.72 × 10−103.21 × 10−15.44 × 10−39.52 × 10−395.16 × 10−5
Ranking2233481
DOAAverage3.09 × 10−541.85 × 10−381.88 × 10−584.06 × 10−432.89 × 10+15.582.71 × 10−4
Std.1.63 × 10−531.01 × 10−371.88 × 10−582.22 × 10−424.17 × 10−21.092.8 × 10−4
p_value2.19 × 10−12.72 × 10−11.03 × 10−573.21 × 10−17.87 × 10−32.61 × 10−302.08 × 10−1
Ranking4444674
EHOAverage7.47 × 10−32.04 × 10−21.39 × 10+32.41 × 10+11.19 × 10+25.13 × 10−38.74 × 10−2
Std.1.76 × 10−24.59 × 10−21.39 × 10+35.899.99 × 10+12.6 × 10−25.74 × 10−2
p_value2.34 × 10−21.8 × 10−28.86 × 10+23.15 × 10−307 × 10−68.32 × 10−161.8 × 10−11
Ranking7789838
WSOAverage2.85 × 10+24.051.34 × 10+31.3 × 10+12.13 × 10+42.17 × 10+21.45 × 10−1
Std.1.75 × 10+21.321.34 × 10+32.412.58 × 10+41.32 × 10+26.61 × 10−2
p_value1.84 × 10−125.58 × 10−246.35 × 10+21.06 × 10−363.22 × 10−51.33 × 10−122.59 × 10−17
Ranking9978999
Table 4. Metrics-based comparison of MRSO and eight algorithms on multimodal functions (F8–F16) from classical benchmarks.
AlgorithmMetric/FunctionF8F9F10F11F12F13F14F15F16
MRSOAverage−8.61 × 10+302.19 × 10−505.65 × 10−22.821.398.07 × 10−4−1.03
Std.1.83 × 10+308.53 × 10−504.46 × 10−29.22 × 10−28.07 × 10−14.07 × 10−41.46 × 10−5
Ranking315146311
SCAAverage−2.16 × 10+32.192.8 × 10−55.19 × 10−29.33 × 10−23.24 × 10−11.599.19 × 10−4−1.03
Std.1.47 × 10+26.691.51 × 10−49.63 × 10−23.83 × 10−28.92 × 10−29.24 × 10−13.46 × 10−46.76 × 10−5
p_value8.15 × 10−277.76 × 10−28.49 × 10−14.55 × 10−31.12 × 10−32.80 × 10−683.77 × 10−12.57 × 10−19.85 × 10−5
Ranking966654434
MRAAverage−7.34 × 10+308.88 × 10−1605.92 × 10−91.15 × 10−75.289.24 × 10−4−1.03
Std.2.57 × 10+301 × 10−3101.08 × 10−81.74 × 10−75.723.33 × 10−45.8 × 10−3
p_value3.17 × 10−2 1.65 × 10−1 3.88 × 10−91.23 × 10−795.11 × 10−42.3 × 10−15.9 × 10−6
Ranking511122757
LCAAverage−8.4 × 10+31.14 × 10−19.56 × 10−22.96 × 10−11.22 × 10−32 × 10−21.69 × 10+11.56 × 10−2−6.9 × 10−1
Std.4.7 × 10+32.02 × 10−17.48 × 10−22.45 × 10−12.19 × 10−33.23 × 10−24.25 × 10+13.33 × 10−22.16 × 10−1
p_value8.16 × 10−12.93 × 10−32.95 × 10−91.27 × 10−87.15 × 10−95.35 × 10−784.99 × 10−21.81 × 10−24.58 × 10−12
Ranking457833998
CSAAverage−1.26 × 10+408.88 × 10−1601.57 × 10−321.35 × 10−329.98 × 10−11.67 × 10−36.47 × 10−233
Std.1.85 × 10−1201 × 10−3101.11 × 10−475.57 × 10−483.39 × 10−161.1 × 10−180
p_value4.54 × 10−170 1.65 × 10−10 3.88 × 10−91.23 × 10−799.25 × 10−37.28 × 10−171.34 × 10−274
Ranking211111169
TSAAverage−3.64 × 10+31.94 × 10+14.56 × 10−151.77 × 10−31.132.631.14 × 10+13.27 × 10−3−1.03
Std.5.64 × 10+23.5 × 10+16.49 × 10−164.11 × 10−33.81 × 10−12.68 × 10−15.47 × 10−31.19 × 10−2
p_value1.58 × 10−203.63 × 10−31.65 × 10−12.17 × 10−23.94 × 10−224.52 × 10−42.27 × 10−145.94 × 10−21.92 × 10−2
Ranking874575876
DOAAverage−4.67 × 10+301.13 × 10−1506.52 × 10−12.913.536.09 × 10−3-1.03
Std.8.28 × 10+209.01 × 10−1602.32 × 10−12.04 × 10−12.888.9 × 10−33.61 × 10−5
p_value2.19 × 10−1501.65 × 10−105.25 × 10−203.48 × 10−022.5 × 10−41.94 × 10−31.19 × 10−3
Ranking713167582
EHOAverage−7.2 × 10+1274.71 × 10+14.328.11 × 10−21.283.384.058.49 × 10−4−1.03
Std.2.31 × 10+1281.91 × 10+11.962.26 × 10−11.624.9904.912.98 × 10−47.77 × 10−5
p_value8.99 × 10−21.45 × 10−191.93 × 10−175.39 × 10−21.24 × 10−45.41 × 10−14.88 × 10−36.52 × 10−18.35 × 10−5
Ranking188788624
WSOAverage−4.77 × 10+35.46 × 10+15.883.444.481.43 × 10+39.98 × 10−19.19 × 10−4-1.03
Std.1.42 × 10+32.96 × 10+19.11 × 10−11.402.054.9 × 10+33.06 × 10−103.45 × 10−52.13 × 10−5
p_value1.01 × 10−122.28 × 10−146.03 × 10−411.72 × 10−194.37 × 10−171.15 × 10−19.25 × 10−32.68 × 10−11.18 × 10−3
Ranking699999232
Table 5. Metrics-based comparison of MRSO and eight algorithms on fixed-dimension multimodal Functions (F17–F23) from classical benchmarks.
AlgorithmMetric/FunctionF17F18F19F20F21F22F23
MRSOAverage3.99 × 10−13−3.86−2.78−2.63−3.72−2.36
Std.3.3 × 10−34.03 × 10−61.81 × 10−33.89 × 10−12.542.772.44
Ranking1146989
SCAAverage4 × 10−13−3.85−2.92−2.9−3.1−4.03
Std.2.56 × 10−39.16 × 10−53 × 10−32.72 × 10−11.891.771.40
p_value2.9 × 10−13.36 × 10−48.21 × 10−31.05 × 10−16.44 × 10−13.11 × 10−11.89 × 10−3
Ranking3255898
MRAAverage4.26 × 10−17.62−3.7−2.73−1.02 × 10+1−1.04 × 10+1−1.05 × 10+1
Std.2.8 × 10−24.467.3 × 10−21.86 × 10−12.85 × 10−31.27 × 10−34.9 × 10−3
p_value2.77 × 10−64.68 × 10−73.53 × 10−175.38 × 10−13.08 × 10−234.1 × 10−197.97 × 10−26
Ranking7677233
LCAAverage7.7 × 10−12.35 × 10+1−3.21−1.78-5.02−4.97−4.45
Std.6.82 × 10−11.03 × 10+14.2 × 10−13.84 × 10−19.63 × 10−16.89 × 10−11.59
p_value4.24 × 10−31.23 × 10−151.01 × 10−113.73 × 10−141.11 × 10−51.95 × 10−22.28 × 10−4
Ranking8888777
CSAAverage8.45 × 10−13.27 × 10+1−1.9−1.17−1.02 × 10+1−1.04 × 10+1−1.05 × 10+1
Std.8.82 × 10−161.45 × 10−146.78 × 10−162.26 × 10−163.61 × 10−1503.61 × 10−15
p_value5.97 × 10−11702.12 × 10−1691.81 × 10−303.05 × 10−234.08 × 10−197.88 × 10−26
Ranking9999122
TSAAverage3.99 × 10−11.23 × 10+1−3.86−3.16−7.15−5.84−4.72
Std.1.81 × 10−32.26 × 10+13.25 × 10−31.31 × 10−11.272.172.77
p_value8.34 × 10−12.86 × 10−28.41 × 10−93.15 × 10−63.99 × 10−121.69 × 10−38.99 × 10−4
Ranking2734566
DOAAverage4 × 10−13.9−3.83−3.25−8.5−7.97−7.22
Std.1.27 × 10−24.931.41 × 10−17.07 × 10+22.632.783.58
p_value6.26 × 10−13.21 × 10−13.61 × 10−11.31 × 10−82.91 × 10−121.76 × 10−77.82 × 10−8
Ranking6463455
EHOAverage4 × 10−13−3.86−3.27−6.65−8.15−7.46
Std.2.65 × 10−48.22 × 10−52.71 × 10−155.92 × 10−23.453.293.86
p_value1.47 × 10−24.11 × 10−49.88 × 10−303.43 × 10−093.45 × 10−095.17 × 10−078.65 × 10−08
Ranking3212644
WSOAverage4 × 10−13.9−3.86−3.3−9.49−1.04 × 10+1−1.05 × 10+1
Std.2.71 × 10−43.12 × 10+12.71 × 10−154.84 × 10−022.070.009.03 × 10−15
p_value1.12 × 10−13.25 × 10−19.88 × 10−308.54 × 10−101.49 × 10−164.08 × 10−192.88 × 10−26
Ranking3411311
Table 6. Scores comparison of MRSO, SCA, MRA, LCA, CSA, TSA, DOA, EHO, and WSO on 23 classical benchmark functions.

| Algorithm | MRSO | SCA | MRA | LCA | CSA | TSA | DOA | EHO | WSO |
|---|---|---|---|---|---|---|---|---|---|
| Ranking | 10.2 | 12.8 | 7.5 | 15.1 | 8.1 | 11.3 | 10.6 | 12.4 | 13.2 |
Table 7. Metrics-based comparison of MRSO and eight algorithms on CEC-C06 2019 benchmark functions.

| Algorithm | Metric | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MRSO | Average | 1.59 × 10+5 | 1.83 × 10+1 | 1.37 × 10+1 | 9.20 × 10+3 | 4.57 | 1.09 × 10+1 | 6.11 × 10+2 | 6.31 | 4.97 × 10+2 | 2.13 × 10+1 |
| | Std. | 3.20 × 10+5 | 7.23 × 10−3 | 1.34 × 10−6 | 3.20 × 10+3 | 4.13 × 10−1 | 1.01 | 2.29 × 10+2 | 4.14 × 10−1 | 1.49 × 10+2 | 1.49 × 10−1 |
| | Ranking | 4 | 1 | 3 | 6 | 6 | 1 | 1 | 3 | 4 | 1 |
| SCA | Average | 1.22 × 10+10 | 1.85 × 10+1 | 1.37 × 10+1 | 1.68 × 10+3 | 3.22 | 1.21 × 10+1 | 7.9 × 10+2 | 6.06 | 1.13 × 10+2 | 2.15 × 10+1 |
| | Std. | 1.79 × 10+10 | 9.51 × 10−2 | 1.05 × 10−4 | 7.12 × 10+2 | 7.84 × 10−2 | 6.76 × 10−1 | 1.47 × 10+2 | 4.17 × 10−1 | 8.31 × 10+1 | 7.81 × 10−2 |
| | p_value | 4.31 × 10−4 | 1.95 × 10−10 | 2.74 × 10−9 | 3.52 × 10−18 | 5.34 × 10−25 | 3.06 × 10−6 | 6.55 × 10−4 | 2.48 × 10−2 | 8.56 × 10−18 | 8.12 × 10−8 |
| | Ranking | 9 | 3 | 4 | 2 | 2 | 4 | 2 | 2 | 3 | 4 |
| MRA | Average | 1.00 | 1.58 × 10+4 | 1.37 × 10+1 | 2.53 × 10+4 | 8.13 | 1.35 × 10+1 | 2 × 10+3 | 7.83 | 4.39 × 10+3 | 2.17 × 10+1 |
| | Std. | 0 | 4.3 × 10+3 | 9.52 × 10−4 | 8.93 × 10+3 | 1.21 | 6.77 × 10−1 | 2.26 × 10+2 | 3.83 × 10−1 | 7.03 × 10+2 | 1.17 × 10−1 |
| | p_value | 8.62 × 10−3 | 8.77 × 10−28 | 1.19 × 10−16 | 4.84 × 10−13 | 6.38 × 10−22 | 8.34 × 10−17 | 1.73 × 10−31 | 2.29 × 10−21 | 9.66 × 10−37 | 1.05 × 10−16 |
| | Ranking | 1 | 9 | 7 | 8 | 8 | 8 | 8 | 8 | 8 | 7 |
| LCA | Average | 2.46 × 10+8 | 5.08 × 10+1 | 1.37 × 10+1 | 2.19 × 10+4 | 6.74 | 1.39 × 10+1 | 1.68 × 10+3 | 7.25 | 3.04 × 10+3 | 2.18 × 10+1 |
| | Std. | 3.16 × 10+8 | 3.26 × 10+1 | 1.41 × 10−3 | 7.83 × 10+3 | 1.04 | 7.35 × 10−1 | 2.94 × 10+2 | 4.07 × 10−1 | 8.43 × 10+2 | 1.37 × 10−1 |
| | p_value | 7.56 × 10−5 | 1.08 × 10−6 | 2.75 × 10−21 | 2.91 × 10−11 | 3.61 × 10−15 | 5.34 × 10−19 | 1.48 × 10−22 | 2.63 × 10−12 | 3 × 10−23 | 8.42 × 10−19 |
| | Ranking | 7 | 8 | 8 | 7 | 7 | 9 | 7 | 7 | 7 | 8 |
| CSA | Average | 6.48 × 10+5 | 1.95 × 10+1 | 1.37 × 10+1 | 4.41 × 10+4 | 9.13 | 1.25 × 10+1 | 2.1 × 10+3 | 7.89 | 4.71 × 10+3 | 2.19 × 10+1 |
| | Std. | 5.44 × 10−10 | 3.61 × 10−15 | 9.03 × 10−15 | 1.16 × 10−1 | 3.85 × 10−7 | 4.11 × 10−1 | 5.88 × 10−2 | 2.9 × 10−2 | 4.15 × 10−5 | 2.34 × 10−2 |
| | p_value | 1.49 × 10−11 | 4.3 × 10−120 | 6 × 10−215 | 8.9 × 10−54 | 4.59 × 10−54 | 2.94 × 10−10 | 4.48 × 10−41 | 1.18 × 10−28 | 1.43 × 10−77 | 4.5 × 10−30 |
| | Ranking | 5 | 6 | 9 | 9 | 9 | 5 | 9 | 9 | 9 | 9 |
| TSA | Average | 6.05 × 10+4 | 1.95 × 10+1 | 1.37 × 10+1 | 6.55 × 10+3 | 4.31 | 1.19 × 10+1 | 8.49 × 10+2 | 6.55 | 6.67 × 10+2 | 2.15 × 10+1 |
| | Std. | 1.66 × 10+4 | 6.46 × 10−1 | 1.5 × 10−3 | 4.21 × 10+3 | 8.13 × 10−1 | 8.1 × 10−1 | 2.43 × 10+2 | 3.97 × 10−1 | 5.79 × 10+2 | 1.01 × 10−1 |
| | p_value | 9.8 × 10−2 | 2.06 × 10−13 | 2.04 × 10−3 | 7.9 × 10−3 | 1.15 × 10−1 | 2.17 × 10−4 | 2.55 × 10−4 | 2.77 × 10−2 | 1.24 × 10−1 | 1.67 × 10−6 |
| | Ranking | 2 | 7 | 6 | 5 | 5 | 2 | 4 | 4 | 6 | 3 |
| DOA | Average | 1.16 × 10+5 | 1.84 × 10+1 | 1.37 × 10+1 | 4.37 × 10+3 | 3.65 | 1.32 × 10+1 | 1.02 × 10+3 | 6.6 | 5.62 × 10+2 | 2.16 × 10+1 |
| | Std. | 1.88 × 10+5 | 1.89 × 10−1 | 8.11 × 10−4 | 3.72 × 10+3 | 6.37 × 10−1 | 8.09 × 10−1 | 3.08 × 10+2 | 3.66 × 10−1 | 3.17 × 10+2 | 1.56 × 10−1 |
| | p_value | 5.06 × 10−1 | 7.13 × 10−2 | 1.18 × 10−2 | 1.33 × 10−6 | 1.16 × 10−8 | 1.22 × 10−13 | 2.74 × 10−7 | 5.48 × 10−3 | 3.14 × 10−1 | 7.1 × 10−12 |
| | Ranking | 3 | 2 | 5 | 3 | 4 | 6 | 6 | 5 | 5 | 5 |
| EHO | Average | 2.42 × 10+9 | 1.85 × 10+1 | 1.37 × 10+1 | 3.22 × 10+1 | 3.22 | 1.21 × 10+1 | 9.35 × 10+2 | 6.74 | 3.62 | 2.14 × 10+1 |
| | Std. | 4.12 × 10+9 | 9.67 × 10−2 | 9.03 × 10−15 | 1.66 × 10+1 | 1.52 × 10−2 | 2.71 | 4.71 × 10+2 | 6.06 × 10−1 | 3.52 × 10−1 | 2.08 × 10−1 |
| | p_value | 2.13 × 10−3 | 1.95 × 10−11 | 1.33 × 10−3 | 1.57 × 10−22 | 3.11 × 10−18 | 3.98 × 10−2 | 1.25 × 10−3 | 2.28 × 10−3 | 1.62 × 10−25 | 4.78 × 10−2 |
| | Ranking | 8 | 3 | 1 | 1 | 2 | 3 | 5 | 6 | 1 | 2 |
| WSO | Average | 2.92 × 10+7 | 1.85 × 10+1 | 1.37 × 10+1 | 4.37 × 10+3 | 2.27 | 1.32 × 10+1 | 8.16 × 10+2 | 5.5 | 3.62 × 10+1 | 2.16 × 10+1 |
| | Std. | 6.88 × 10+7 | 7.66 × 10−2 | 8.23 × 10−11 | 4.11 × 10+3 | 2.29 × 10−1 | 7.41 × 10−1 | 1.58 × 10+2 | 6.61 × 10−1 | 1.04 × 10+2 | 1.56 × 10−1 |
| | p_value | 2.43 × 10−2 | 2.71 × 10−13 | 5.9 × 10−4 | 2.57 × 10−8 | 2.92 × 10−34 | 2.11 × 10−15 | 1.69 × 10−4 | 3.93 × 10−7 | 4.85 × 10−20 | 7.1 × 10−12 |
| | Ranking | 6 | 3 | 1 | 3 | 1 | 6 | 3 | 1 | 2 | 5 |
Table 8. Scores comparison of MRSO, SCA, MRA, LCA, CSA, TSA, DOA, EHO, and WSO on CEC-C06 2019 benchmark functions.

| Algorithm | MRSO | SCA | MRA | LCA | CSA | TSA | DOA | EHO | WSO |
|---|---|---|---|---|---|---|---|---|---|
| Ranking | 3 | 3.5 | 7.2 | 7.5 | 7.9 | 4.4 | 4.4 | 3.2 | 3.1 |
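The score rows in Tables 6 and 8 condense the per-function rankings into a single figure per algorithm, with lower being better. One plausible aggregation, shown here as a sketch since the paper's exact scoring and tie handling are not given in this excerpt, is the mean of each algorithm's per-function rank:

```python
def mean_ranks(results):
    """results: {algorithm: [average objective value per function]}.

    Lower objective value is better; ranks 1..k are assigned per function
    (ties broken arbitrarily here) and then averaged across functions.
    """
    algos = list(results)
    n_funcs = len(next(iter(results.values())))
    totals = {a: 0.0 for a in algos}
    for f in range(n_funcs):
        ordered = sorted(algos, key=lambda a: results[a][f])
        for rank, a in enumerate(ordered, start=1):
            totals[a] += rank
    return {a: totals[a] / n_funcs for a in algos}
```

For example, an algorithm that finishes first on every function gets a mean rank of 1.0, which is how the aggregate rows above are read.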
Table 9. The MRSO vs. standard RSO performance when applied to the seven engineering design problems: average, standard deviation, and statistical significance.

| Engineering Application | MRSO Avg. | MRSO Std. | RSO Avg. | RSO Std. |
|---|---|---|---|---|
| Pressure Vessel Design | 6.956 × 10+3 | 4.479 × 10+2 | 1.72 × 10+4 | 1.031 × 10+4 |
| String Design | 1.417 × 10−2 | 9.721 × 10−4 | 3.954 × 10+8 | 4.217 × 10+8 |
| Three Bar Truss | 2.665 × 10+2 | 6.522 | 2.705 × 10+2 | 4.829 |
| Gear Train Design | 1.106 × 10−12 | 2.739 × 10−12 | 4.792 × 10−3 | 6.392 × 10−3 |
| Cantilever Beam | 1.343 | 2.932 × 10−3 | 2.382 | 8.673 × 10−1 |
| Welded Beam | 1.552 | 2.052 × 10−1 | 7.253 × 10+7 | 1.629 × 10+8 |
| Tire Design | 4.683 × 10−3 | 1.764 × 10−18 | 1.74 | 1.162 |
Table 10. Performance comparison of MRSO, SCA, LCA, TSA, and DOA on engineering design problems.

| Engineering Problem | Metric | MRSO | SCA | LCA | TSA | DOA |
|---|---|---|---|---|---|---|
| Pressure Vessel Design | Avg. | 6.83 × 10+3 | 7.28 × 10+3 | 1.64 × 10+4 | 7.24 × 10+3 | 5.98 × 10+3 |
| | Std. | 5.21 × 10+2 | 7.80 × 10+2 | 2.02 × 10+3 | 7.18 × 10+2 | 3.32 × 10+3 |
| | Rank | 2 | 4 | 5 | 3 | 1 |
| String Design | Avg. | 1.38 × 10−2 | 1.31 × 10−2 | 2.86 × 10+8 | 1.45 × 10−2 | 6.37 × 10+7 |
| | Std. | 8.98 × 10−4 | 2.42 × 10−4 | 3.19 × 10+8 | 1.72 × 10−3 | 1.65 × 10+8 |
| | Rank | 2 | 1 | 5 | 3 | 4 |
| Three Bar Truss | Avg. | 2.51 × 10+2 | 2.67 × 10+2 | 2.79 × 10+2 | 2.64 × 10+2 | 2.64 × 10+2 |
| | Std. | 6.12 × 10+1 | 6.44 | 1.13 × 10+1 | 3.53 × 10−1 | 3.59 × 10−1 |
| | Rank | 1 | 4 | 5 | 3 | 2 |
| Gear Train Design | Avg. | 1.75 × 10−13 | 2.01 × 10−9 | 5.28 × 10−3 | 5.44 × 10−10 | 1.27 × 10−13 |
| | Std. | 2.08 × 10−13 | 4.63 × 10−9 | 7.29 × 10−3 | 9.02 × 10−10 | 5.87 × 10−13 |
| | Rank | 2 | 4 | 5 | 3 | 1 |
| Cantilever Beam | Avg. | 1.34 | 1.41 | 1.57 | 1.36 | 1.4 |
| | Std. | 2.93 × 10−3 | 2.75 × 10−2 | 4.63 × 10−2 | 1.13 × 10−2 | 9.62 × 10−2 |
| | Rank | 1 | 4 | 5 | 2 | 3 |
| Welded Beam | Avg. | 1.55 | 1.59 | 4.94 × 10+7 | 1.56 | 1.95 |
| | Std. | 2.05 × 10−1 | 4 × 10−2 | 1.36 × 10+8 | 3.64 × 10−2 | 6.13 × 10−1 |
| | Rank | 1 | 3 | 5 | 2 | 4 |
| Tire Design | Avg. | 4.68 × 10−3 | 4.68 × 10−3 | 2.15 | 4.68 × 10−3 | 8.24 × 10−3 |
| | Std. | 1.76 × 10−18 | 1.11 × 10−17 | 1.31 | 8.82 × 10−19 | 1.36 × 10−2 |
| | Rank | 1 | 2 | 5 | 3 | 4 |
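Among the design problems in Tables 9 and 10, Gear Train Design is commonly stated (following Sandgren's classic formulation, which is assumed here since the paper's exact statement is not in this excerpt) as minimizing the squared deviation of a compound gear ratio from 1/6.931 over four integer tooth counts bounded to [12, 60]; the near-zero averages (around 10−13 to 10−12) reported above are consistent with that scale. A sketch of the assumed objective:

```python
def gear_train_cost(x1, x2, x3, x4):
    """Squared error between the achieved gear ratio and the target 1/6.931.

    x1..x4 are integer tooth counts, typically constrained to [12, 60];
    the ratio is (x1 * x2) / (x3 * x4).
    """
    return (1.0 / 6.931 - (x1 * x2) / (x3 * x4)) ** 2

# A well-known near-optimal solution uses teeth (19, 16, 43, 49),
# giving a cost on the order of 1e-12.
```

Because the objective is a squared error, its global optimum is bounded below by zero, which explains why the averages for this problem sit many orders of magnitude below those of the other design problems.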
Abdulla, H.S.; Ameen, A.A.; Saeed, S.I.; Mohammed, I.A.; Rashid, T.A. MRSO: Balancing Exploration and Exploitation through Modified Rat Swarm Optimization for Global Optimization. Algorithms 2024, 17, 423. https://doi.org/10.3390/a17090423
