Article

An Efficient Parallel Reptile Search Algorithm and Snake Optimizer Approach for Feature Selection

by Ibrahim Al-Shourbaji 1,2, Pramod H. Kachare 3, Samah Alshathri 4,*, Salahaldeen Duraibi 1, Bushra Elnaim 5 and Mohamed Abd Elaziz 6,7,8,*
1 Department of Computer and Network Engineering, Jazan University, Jazan 45142, Saudi Arabia
2 Department of Computer Science, University of Hertfordshire, Hatfield AL10 9AB, UK
3 Department of Electronics & Telecommunication Engineering, Ramrao Adik Institute of Technology, Nerul, Navi Mumbai 400706, Maharashtra, India
4 Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
5 Department of Computer Science, College of Science and Humanities in Al-Sulail, Prince Sattam bin Abdulaziz University, Kharj 16278, Saudi Arabia
6 Faculty of Science & Engineering, Galala University, Suez 435611, Egypt
7 Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman 346, United Arab Emirates
8 Department of Mathematics, Faculty of Science, Zagazig University, Zagazig 44519, Egypt
* Authors to whom correspondence should be addressed.
Mathematics 2022, 10(13), 2351; https://doi.org/10.3390/math10132351
Submission received: 11 June 2022 / Revised: 24 June 2022 / Accepted: 27 June 2022 / Published: 5 July 2022

Abstract

Feature Selection (FS) is a major preprocessing stage which aims to improve Machine Learning (ML) models' performance by choosing salient features, while reducing the computational cost. Several approaches have been presented to select the Optimal Features Subset (OFS) of a given dataset. In this paper, we introduce an FS-based approach named Reptile Search Algorithm–Snake Optimizer (RSA-SO) that employs the RSA and SO methods in a parallel mechanism to determine the OFS. This mechanism decreases the chance of either method becoming stuck in local optima and boosts the capability of both to balance exploration and exploitation. Numerous experiments are performed on ten datasets taken from the UCI repository and two real-world engineering problems to evaluate RSA-SO. The results obtained by RSA-SO are also compared with those of seven popular Meta-Heuristic (MH) methods for FS. The results show that the developed RSA-SO approach performs competitively with the tested MH methods and that it can provide practical and accurate solutions for engineering optimization problems.

1. Introduction

With the rapid development of contemporary enterprises and the digital world, the transformation process of data into useful information has become more and more difficult due to the large amount of data produced by different sources. Machine Learning (ML) can play an essential role in Knowledge Discovery, which is categorized into a number of tasks, including data preprocessing (data preparation, reduction, and transformation), pattern evaluation, and knowledge presentation [1].
FS is a major preprocessing step, which can improve an ML model's performance by reducing the number of features and simplifying the classification problem [2,3]. The main concern of the FS process is to discard the irrelevant, redundant, and noisy features from the whole set of features to derive a subset of representative features. This process is used in many areas of science such as data classification [4], image processing [5], text categorization [6], data clustering [7], and signal processing [8]. The primary objective of the FS process is to find the OFS among highly discriminative features that results in high classification accuracy.
Recently, several MH methods have been introduced in the literature that simulate the behaviors of natural phenomena or living organisms. These methods show potential in selecting the OFS from a given dataset and in solving diverse and complex optimization problems, such as scheduling, engineering design, production problems, and ML [9,10,11]. MH methods rely on the principles of exploration and exploitation [12,13]. Exploration refers to the ability to search the entire search space; this ability is linked to avoiding and escaping local optima. Exploitation, on the other hand, is the ability to investigate promising nearby solutions to improve their local quality. A proper balance between these two properties gives an algorithm excellent performance [14].
MH methods such as the Multi-Verse Optimizer (MVO) [15], Particle Swarm Optimization (PSO) [16], the Whale Optimization Algorithm (WOA) [17], the Gray Wolf Optimizer (GWO) [18], and the Salp Swarm Algorithm (SSA) [19] are commonly applied to FS. However, the computational cost and classification accuracy of these methods, and their ability to find a global optimum, still need improvement.
One can combine two or more MH approaches to develop a new one with higher performance that achieves a convincing balance between the two MH principles, rather than using each of them alone for the FS problem [20,21,22]. In the present work, a novel combined MH-based approach named Reptile Search Algorithm–Snake Optimizer (RSA-SO) is introduced to solve the FS problem. The RSA-SO approach utilizes the best characteristics and capabilities of both the RSA [23] and SO [24] algorithms to obtain an optimal subset of informative features, with the two algorithms collaborating in a parallel mechanism. RSA and SO are among the most recent MH algorithms, and they show promising capabilities for solving FS problems with an efficient balance between the exploration and exploitation aspects. The parallel collaboration helps to decrease the chance of the two methods becoming stuck at local optima and boosts the capability of both of them to balance exploration and exploitation. The contributions of this paper can be summarized as follows:
  • An efficient RSA-SO approach is introduced, which merges RSA and SO in a parallel mechanism to enhance the selection process of the OFS.
  • The developed RSA-SO is tested on twelve datasets from different fields and it is applied to solve two well-known engineering optimization problems with constraints.
  • The results show that the RSA-SO performs well compared with other popular MH methods, and it can also provide practical and accurate solutions for engineering optimization problems.
The structure of the paper is as follows. The next section gives an overview of RSA and SO. The details of the proposed RSA-SO approach are described in Section 3, while Section 4 analyzes and discusses the experimental results. Finally, the conclusion and future research directions are given in the last section.

2. Materials and Methods

2.1. Reptile Search Algorithm (RSA)

RSA is a nature-inspired MH approach based on crocodiles' encircling and hunting behavior, introduced by [23] in 2022. It is a gradient-free method that begins by generating random candidate solutions. The jth feature of the ith candidate solution, x_{i,j}, is initialized as follows:
x_{i,j} = rand_U(0,1) \times (UB_j - LB_j) + LB_j, \quad i \in \{1, \dots, N\}, \; j \in \{1, \dots, M\} \qquad (1)
where UB_j and LB_j are the upper and lower boundaries of the jth feature, rand_U(0,1) is a uniformly distributed random number in the range [0, 1], N is the total number of candidate solutions, and M is the feature dimension.
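For illustration, Equation (1) maps directly onto a few lines of NumPy. The following is a minimal sketch; the function and variable names are illustrative and not taken from the authors' implementation.

```python
import numpy as np

def init_population(n_solutions, n_features, lb, ub, seed=None):
    """Equation (1): x[i, j] = rand_U(0, 1) * (UB_j - LB_j) + LB_j."""
    rng = np.random.default_rng(seed)
    lb = np.asarray(lb, dtype=float)  # lower bounds, shape (n_features,)
    ub = np.asarray(ub, dtype=float)  # upper bounds, shape (n_features,)
    return rng.uniform(0.0, 1.0, (n_solutions, n_features)) * (ub - lb) + lb

# Example: N = 20 candidate solutions for an M = 13-feature dataset in [0, 1]
X = init_population(20, 13, lb=np.zeros(13), ub=np.ones(13), seed=42)
```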
Like other MH algorithms, RSA operates on two principles: exploration and exploitation. These principles are facilitated by the crocodiles' movement while searching for food. In the RSA, the total iterations are split into four stages to take advantage of the natural behavior of crocodiles. In the first two stages, RSA accomplishes the exploration based on the encircling behavior, comprising the high walking and belly walking movements. Crocodiles begin their encircling to search the region, facilitating a more exhaustive search of the solution space, and this can be mathematically modeled as:
x_{i,j}(g+1) = \begin{cases} Best_j(g) \cdot (-n_{i,j}(g)) \cdot \gamma - R_{i,j}(g) \cdot rand, & g \le \frac{G}{4} \\ Best_j(g) \cdot x_{(rand[1,N],\,j)}(g) \cdot ES(g) \cdot rand, & \frac{G}{4} < g \le \frac{2G}{4} \end{cases} \qquad (2)
where Best_j(g) is the jth feature of the best solution obtained so far, n_{i,j} denotes the hunting operator for the jth feature in the ith solution (calculated as in Equation (3)), and the parameter γ controls the exploration accuracy and is set to 0.1. The reduce function R_{i,j}, used to shrink the search region, is computed as in Equation (6); rand[1, N] is a random index between 1 and N used to select one of the candidate solutions; and the Evolutionary Sense ES(g) is a probability ratio that decreases from 2 to −2 over the iterations, calculated as in Equation (7).
n_{i,j} = Best_j(g) \times P_{i,j} \qquad (3)
where P_{i,j} indicates the percentage difference between the jth value of the best solution and its corresponding value in the current solution, and is calculated as:
P_{i,j} = \theta + \frac{x_{i,j} - \mu(x_i)}{Best_j(g) \times (UB_j - LB_j) + \epsilon} \qquad (4)
where θ denotes a sensitive parameter that controls the exploration performance, ε is a small positive value that prevents division by zero, and μ(x_i) is the average of the ith solution, defined as:
\mu(x_i) = \frac{1}{n} \sum_{j=1}^{n} x_{i,j} \qquad (5)
R_{i,j} = \frac{Best_j(g) - x_{(rand[1,N],\,j)}}{Best_j(g) + \epsilon} \qquad (6)
ES(g) = 2 \times rand_{\{-1,1\}} \times \left(1 - \frac{g}{G}\right) \qquad (7)
where the value 2 acts as a multiplier to provide correlation values in the range [0, 2], and rand_{\{-1,1\}} is a random integer drawn from {−1, 1}.
In the last two stages, RSA exploits the search space (hunting) and approaches the optimal solution using hunting coordination and cooperation. A candidate solution updates its value during the exploitation stage as follows:
x_{i,j}(g+1) = \begin{cases} Best_j(g) \cdot P_{i,j}(g) \cdot rand, & \frac{2G}{4} < g \le \frac{3G}{4} \\ Best_j(g) - n_{i,j}(g) \cdot \epsilon - R_{i,j}(g) \cdot rand, & \frac{3G}{4} < g \le G \end{cases} \qquad (8)
The quality of the candidate solutions at each iteration is measured using a predefined Fitness Function (FF). The algorithm stops after G iterations, and the candidate solution with the smallest FF value is returned as the OFS.
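The four-stage schedule of Equations (2)–(8) can be sketched in Python as below. This is a schematic reading of the RSA update rules under the reconstruction given above; sign conventions vary slightly across published implementations, so the helper should be treated as illustrative rather than as the authors' code. Here lb and ub are per-feature bound arrays and best is the best solution found so far.

```python
import numpy as np

def rsa_step(X, best, g, G, lb, ub, gamma=0.1, theta=0.5, eps=1e-10, rng=None):
    """One RSA iteration: high/belly walking (exploration) in the first half
    of the run, hunting coordination/cooperation (exploitation) in the second."""
    rng = rng or np.random.default_rng()
    N, M = X.shape
    ES = 2.0 * rng.choice([-1, 1]) * (1.0 - g / G)  # Equation (7)
    X_new = X.copy()
    for i in range(N):
        r = rng.integers(0, N)  # random candidate index, rand[1, N]
        for j in range(M):
            P = theta + (X[i, j] - X[i].mean()) / (best[j] * (ub[j] - lb[j]) + eps)  # Eq. (4)
            eta = best[j] * P                          # hunting operator, Eq. (3)
            R = (best[j] - X[r, j]) / (best[j] + eps)  # reduce function, Eq. (6)
            if g <= G / 4:                             # high walking
                X_new[i, j] = best[j] * (-eta) * gamma - R * rng.random()
            elif g <= G / 2:                           # belly walking
                X_new[i, j] = best[j] * X[r, j] * ES * rng.random()
            elif g <= 3 * G / 4:                       # hunting coordination
                X_new[i, j] = best[j] * P * rng.random()
            else:                                      # hunting cooperation
                X_new[i, j] = best[j] - eta * eps - R * rng.random()
    return np.clip(X_new, lb, ub)
```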

2.2. Snake Optimizer (SO)

SO is an MH algorithm proposed by [24] in 2022 that mimics the mating behavior of snakes. Mating happens when the temperature is low and food is available. Like other MH methods, SO initializes random candidate solutions using Equation (1). It divides the swarm equally into male and female groups as follows:
N_{male} \approx \frac{N}{2}, \qquad N_{female} = N - N_{male} \qquad (9)
where N is the number of individuals, N_{male} is the number of male individuals, and N_{female} is the number of female individuals.
In each iteration, the best overall individual (the food position f_{food}) is found, along with the best male f_{best,male} and the best female f_{best,female} in their respective groups.
The Temperature (T) and the Food Quantity (FQ) can be defined as:
T = \exp\left(\frac{-g}{G}\right), \qquad FQ = c_1 \times \exp\left(\frac{g - G}{G}\right) \qquad (10)
where g is the current iteration, G is the total number of iterations, and c_1 is a constant equal to 0.5.
When FQ < Threshold (Threshold = 0.5), the snakes search for food by selecting random positions and updating their positions accordingly. The exploration behavior of the male and female snakes can be mathematically modeled as follows:
  • For male snakes:
x_{i,j}(g+1) = x_{(rand[1,N/2],\,j)}(g) \pm c_2 \times A_{i,male} \times \big((UB - LB) \times rand_U(0,1) + LB\big), \quad \text{where } A_{i,male} = \exp\left(\frac{-f_{rand,male}}{f_{i,male}}\right) \qquad (11)
where x_{i,j} is the ith male snake position, x_{(rand[1,N/2],\,j)} is the position of a random male snake, rand is a random number in [0, 1], A_{i,male} is the male's ability to find food, f_{rand,male} is the fitness of the randomly selected male snake, and f_{i,male} is the fitness of the ith male in the group. The flag direction operator ± (i.e., a diversity factor) is used to randomly scan all possible directions in the given search space.
  • For female snakes:
x_{i,j}(g+1) = x_{(rand[1,N/2],\,j)}(g) \pm c_2 \times A_{i,female} \times \big((UB - LB) \times rand_U(0,1) + LB\big), \quad \text{where } A_{i,female} = \exp\left(\frac{-f_{rand,female}}{f_{i,female}}\right) \qquad (12)
where x_{i,j} is the ith female snake position, x_{(rand[1,N/2],\,j)} is the position of a random female snake, A_{i,female} is her ability to find food, f_{rand,female} is the fitness of the randomly selected female snake, and f_{i,female} is the fitness of the ith individual in the female group.
In the exploitation phase, SO uses two conditions to find the best solutions:
  • If FQ > Threshold and T > 0.6, the snakes move towards the food only:
    x_{i,j}(g+1) = x_{food} \pm c_3 \times T \times rand \times (x_{food} - x_{i,j}(g)) \qquad (13)
    where x_{i,j} is the position of an individual, either male or female; x_{food} is the position of the best individual; and c_3 is a constant equal to 2.
  • If FQ > Threshold and T < 0.6, the snakes will be in one of two modes, either fighting or mating. The fighting and mating modes can be represented as follows:
  • Fighting mode
In fighting mode, the ith male agent updates its position using its fighting ability F_{i,male}:
x_{i,j}(g+1) = x_{i,j}(g) \pm c_3 \times F_{i,male} \times rand \times (x_{best,female} - x_{i,male}(g)), \quad \text{where } F_{i,male} = \exp\left(\frac{-f_{best,female}}{f_i}\right) \qquad (14)
where x_{i,j} refers to the ith male position and x_{best,female} refers to the position of the best individual in the female group. Similarly, the position update of the female agent, with fighting ability F_{i,female}, can be written as:
x_{i,j}(g+1) = x_{i,j}(g) \pm c_3 \times F_{i,female} \times rand \times (x_{best,male} - x_{i,female}(g)), \quad \text{where } F_{i,female} = \exp\left(\frac{-f_{best,male}}{f_i}\right) \qquad (15)
where x_{i,j} refers to the ith female position, x_{best,male} refers to the position of the best individual in the male group, and F_{i,female} is the fighting ability of the female agent.
  • Mating mode
In this mode, the male and female agents can update their positions as:
x_{i,male}(g+1) = x_{i,male}(g) \pm c_3 \times MM_{i,male} \times rand \times (Q \times x_{i,female}(g) - x_{i,male}(g)), \quad \text{where } MM_{i,male} = \exp\left(\frac{-f_{i,female}}{f_{i,male}}\right)
x_{i,female}(g+1) = x_{i,female}(g) \pm c_3 \times MM_{i,female} \times rand \times (Q \times x_{i,male}(g) - x_{i,female}(g)), \quad \text{where } MM_{i,female} = \exp\left(\frac{-f_{i,male}}{f_{i,female}}\right) \qquad (16)
where x_{i,male} and x_{i,female} are the positions of the ith male and female agents, and MM_{i,male} and MM_{i,female} refer to the mating abilities of males and females, respectively.
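The branching among the SO phases follows from the temperature and food quantity of Equation (10). The sketch below summarizes this control flow; the threshold values are those quoted in this section (the original SO paper uses 0.25 for the food quantity threshold), and the final fight-versus-mate choice, which the method conditions on a further random draw, is left abstract.

```python
import numpy as np

def so_phase(g, G, c1=0.5, fq_threshold=0.5, temp_threshold=0.6):
    """Select the SO behaviour for iteration g from T and FQ (Equation (10))."""
    temp = np.exp(-g / G)          # temperature decays over the iterations
    fq = c1 * np.exp((g - G) / G)  # food quantity grows towards c1
    if fq < fq_threshold:
        return "explore"           # Equations (11) and (12)
    if temp > temp_threshold:
        return "move_to_food"      # Equation (13)
    return "fight_or_mate"         # Equations (14)-(16)
```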

3. Proposed RSA-SO Method

FS is a multi-objective problem in which a minimal number of selected features and a high classification accuracy are pursued simultaneously [25]. The literature on MH algorithms explores various nature-inspired phenomena to effectively search for the best solutions in a given search space. Combinations of these MH algorithms are reported to enhance overall performance by complementing each other's exploration and exploitation processes, which in turn can decrease the probability of getting trapped in local optima.
RSA and SO methods are among the most recent MH algorithms, showing promising capabilities to solve several problems with an efficient balance between exploration and exploitation aspects. In this work, RSA and SO methods collaborate in a parallel strategy to solve an FS problem. The primary objective of the parallel mechanism is that if one of the algorithms cannot improve the candidate solutions or becomes stuck in local optima, the other algorithm moves the current candidate solutions into another search region where some better solutions might be found.
Figure 1 provides the procedural steps of the RSA-SO. At first, the hyper-parameters of RSA, of SO, and the shared ones are initialized. A uniformly distributed random number generator in the range [0, 1] is employed to initialize N candidate solutions for M features, as described earlier in the RSA section (Equation (1)).
At the start of each iteration, the population (i.e., the candidate solutions) is divided equally between the RSA and SO algorithms. For the gth iteration, the candidate solutions { x_{i,j}(g), 1 ≤ i ≤ N and 1 ≤ j ≤ M } are split into two parts: the first half is passed to RSA and the second half to SO. This split can be expressed as:
x_i^{RSA\text{-}SO} = \begin{cases} x_i^{RSA}, & 1 \le i \le N/2 \\ x_i^{SO}, & N/2 < i \le N \end{cases} \qquad (17)
In the first iteration, both RSA and SO are executed in a parallel manner on their respective parts, and the candidate solutions are updated according to Equations (2)–(8) in RSA and Equations (9)–(16) in SO. At the end of the first iteration, the updated candidate solutions from both algorithms are evaluated using the Fitness Function (FF). The solutions are sorted in ascending order of their fitness values using the Quick Sort algorithm, and the candidate solutions with the smaller fitness values are selected from each part of the population. The top N/2 solutions from the entire population are found and passed to both algorithms for the next iteration. The complete set of candidate solutions after each iteration is generated by merging the solutions from both algorithms, as in Equation (17).
The sorting finds the best N/2 solutions from the entire population, whose fitness values are smaller than those of all non-selected solutions. These found solutions may be distributed unevenly between the RSA and SO algorithms. A set of improved low-fitness candidate solutions \hat{x}_i(g) is obtained by swapping high-fitness candidate solutions with the low-fitness candidate solutions found by the complementary algorithm. The candidate solutions are updated as follows:
x_i^{RSA}(g+1) = x_i^{SO}(g+1) = \hat{x}_i(g), \quad 1 \le i \le N/2 \qquad (18)
where \hat{x}_i(g) = x_{\arg\min(f_i(g))}(g).
If the found candidate solutions comprise more solutions from RSA than SO, then the high-fitness candidate solutions from SO are replaced by solutions found by RSA and vice versa. Hence, the RSA will dominate the next iteration. On the other hand, if the found candidate solutions comprise more solutions from SO than RSA, then the SO will dominate in the next iteration. Lastly, if an equal number of low-fitness candidate solutions are found by both algorithms, then the next iteration displays the codominance of both algorithms. All three cases can be summarized as,
\text{if } \dim\big(\arg\min_{RSA} f_i^{N/2}\big) > \dim\big(\arg\min_{SO} f_i^{N/2}\big) \text{ then RSA dominates iteration } (g+1)
\text{if } \dim\big(\arg\min_{RSA} f_i^{N/2}\big) < \dim\big(\arg\min_{SO} f_i^{N/2}\big) \text{ then SO dominates iteration } (g+1)
\text{if } \dim\big(\arg\min_{RSA} f_i^{N/2}\big) = \dim\big(\arg\min_{SO} f_i^{N/2}\big) \text{ then RSA and SO codominate iteration } (g+1) \qquad (19)
where \dim(\arg\min f_i^{N/2}) counts how many of the N/2 lowest-fitness solutions originate from the given algorithm.
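A compact NumPy sketch of this merge-sort-redistribute step is given below; it is an illustration of Equations (17)–(19) under the description above, not the authors' code, and the fitness callable is assumed to return smaller values for better solutions.

```python
import numpy as np

def redistribute(X_rsa, X_so, fitness):
    """Merge both sub-populations, keep the N/2 lowest-fitness candidates,
    and hand the same elite set to both algorithms (Equation (18))."""
    X = np.vstack([X_rsa, X_so])          # Equation (17): full population
    f = np.array([fitness(x) for x in X])
    order = np.argsort(f)                 # quick-sort by fitness, ascending
    elite = X[order[: len(X) // 2]]
    # Equation (19): count elites coming from the RSA half to identify dominance
    n_from_rsa = int((order[: len(X) // 2] < len(X_rsa)).sum())
    return elite.copy(), elite.copy(), n_from_rsa
```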
An example of candidate solution optimization using N = 8 is shown in Figure 2. Candidate solutions from RSA (red) and SO (green) are identified using different colors of the bounding boxes. The corresponding fitness value marks each candidate solution, with a maximum of 1 (darker shade fill) and a minimum of 0 (lighter shade fill). The top N/2 = 4 found low-fitness solutions (lighter shade fill) are marked by an additional bounding box (dotted black). In the case of Figure 2a, the gth iteration marks three solutions from RSA and only one from SO as low-fitness. In the (g+1)th iteration, a high-fitness solution from RSA is replaced with a low-fitness solution from SO, while three high-fitness solutions in SO are replaced with three low-fitness solutions from RSA. Hence, the (g+1)th iteration is dominated by RSA, as observed in Figure 2a's selected solutions. The opposite situation is presented in Figure 2b, where three solutions from SO and only one from RSA are marked low-fitness. Hence, solutions for the (g+1)th iteration are obtained by replacing three high-fitness solutions from RSA with low-fitness solutions from SO, and vice versa; the (g+1)th iteration is therefore dominated by SO, as observed in Figure 2b's selected solutions. Finally, Figure 2c shows an equal number of solutions found by both algorithms. Hence, even after the replacement, both algorithms have equal shares, indicating their codominance in the (g+1)th iteration. It should be noted that after this optimization, both algorithms continue the next iteration using the exact same set of low-fitness candidate solutions, differing only in the ordering of the solutions, as seen in Figure 2a,c. This can effectively coordinate and improve global exploration and local exploitation in the search space.
In the next iteration, x_i(g+1), the generated population is first split into two parts using Equation (17), and each part is passed as input to the RSA and SO methods to simultaneously search other regions of the feature space. After finishing the second iteration, the obtained candidate solutions are sorted by their FF values, and a new population composed of the best candidate solutions from each population part is obtained. This process continues until the termination condition is satisfied (i.e., the maximum number of iterations is reached). The pseudo-code of the RSA-SO is provided in Algorithm 1.
Algorithm 1: Pseudo-code of the introduced RSA-SO approach.
1. Split the dataset into training and testing
Training Phase
2.   Load training dataset
3.   Initialize SO parameters c_1, c_2, c_3
4.   Initialize RSA parameters γ, θ, n
5.   Initialize shared parameters N, M, G, UB, LB
6.   Initialize candidate solutions using Equation (1)
7.   for g = 1 to G do
8.    Split candidate solutions for RSA and SO using Equation (17)
9.    Revise candidate solutions \hat{x}_i using RSA Equations (2)–(8) and SO Equations (9)–(16)
10.   Evaluate FF (f) using Equation (20) for revised candidate solutions
11.   Update RSA and SO solutions for next iteration using Equation (18)
12.   Calculate complete solution for next iteration
13. end for
14. Extract OFS for candidate solution with minimum FF using threshold of 0.5
Testing Phase
15. Load testing dataset
16. Select only the optimal features, as described by the OFS in Equation (21)
17. Evaluate performance using KNN classifier
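As a complement to Algorithm 1, the training phase can be sketched as a short Python loop. Here rsa_update, so_update, and fitness are hypothetical helpers standing in for Equations (2)–(8), (9)–(16), and (20), respectively; the sketch shows the control flow only, with lb and ub as NumPy bound arrays.

```python
import numpy as np

def rsa_so(fitness, rsa_update, so_update, lb, ub, n=20, g_max=100, seed=0):
    """Minimal RSA-SO training loop following Algorithm 1 (sketch)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, 1, (n, len(lb))) * (ub - lb) + lb  # Equation (1)
    for g in range(g_max):
        X_rsa, X_so = X[: n // 2], X[n // 2:]             # Equation (17)
        X_rsa = rsa_update(X_rsa, g, g_max, lb, ub, rng)  # Equations (2)-(8)
        X_so = so_update(X_so, g, g_max, lb, ub, rng)     # Equations (9)-(16)
        X_all = np.vstack([X_rsa, X_so])
        f = np.array([fitness(x) for x in X_all])         # Equation (20)
        elite = X_all[np.argsort(f)[: n // 2]]            # best N/2 solutions
        X = np.vstack([elite, elite])                     # Equation (18)
    f = np.array([fitness(x) for x in X])
    return X[np.argmin(f)] > 0.5                          # Equation (21): OFS mask
```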
The K-Nearest Neighbor (KNN) classifier with k = 5 is used to compute the classification error within the FF. The threshold value is set to 0.5 to produce a small number of features, as recommended in [26,27]. The best solution is the one with the smallest number of features and the highest accuracy (i.e., the smallest fitness f), defined as:
f_i = \alpha \times \gamma + \beta \times \frac{SF_i}{M} \qquad (20)
where γ is the error rate of the KNN classifier, SF_i is the number of selected features in the OFS, and M is the number of features in the original dataset. α and β are two weights that control the importance of classification quality and feature reduction, with α in the range [0, 1] and β = 1 − α. Following [28,29], α and β are set to 0.99 and 0.01, respectively, in this work, and each feature in the OFS follows:
SF_i = \begin{cases} 1, & \text{if } x_i > 0.5 \\ 0, & \text{otherwise} \end{cases} \qquad (21)
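A self-contained sketch of this fitness function with scikit-learn is shown below. The paper evaluates accuracy on a held-out split; cross-validation is used here only to keep the example compact, so the exact scores will differ.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fitness(solution, X, y, alpha=0.99, beta=0.01, threshold=0.5):
    """Equation (20): f = alpha * error_rate + beta * |SF| / M,
    with features binarized at 0.5 as in Equation (21).
    X is the samples-by-features data matrix, y the labels."""
    mask = solution > threshold
    if not mask.any():
        return 1.0  # guard: an empty subset gets the worst possible fitness
    knn = KNeighborsClassifier(n_neighbors=5)
    error = 1.0 - cross_val_score(knn, X[:, mask], y, cv=3).mean()
    return alpha * error + beta * (mask.sum() / X.shape[1])
```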

4. Experiments and Results

To assess the capability of the RSA-SO, its performance is compared with that of other MH methods, including PSO [16], GWO [18], MVO [15], WOA [17], SSA [19], RSA [23], and SO [24], on twelve datasets; the results are provided in this section. All the experiments are implemented in Python using scikit-learn and conducted on a 3.13 GHz PC with 16 GB RAM running the Windows 10 operating system.

4.1. Dataset

The RSA-SO is tested on twelve datasets, ten of which are taken from the UCI data repository; each dataset is split into 80% of the samples for training and the remaining 20% for testing. Table 1 summarizes the details of the used datasets.

4.2. Parameter Settings

To compare RSA-SO with other methods, seven popular methods in the field of FS are selected. The population size and the maximum number of iterations are empirically set to 20 and 100, respectively, and all the methods are run 20 times independently. The parameter settings of these methods are defined according to their original implementations and are listed in Table 2.

4.3. Results and Discussion

A set of widely used performance measures is employed to assess the results obtained by the RSA-SO and the other FS methods. These metrics include classification accuracy, the number of selected features, fitness values (best, worst, average (Avg), and standard deviation (STD)), and the computational time consumed by each method. The Friedman ranking test is applied to rank each method for a fair comparison. Moreover, the convergence behavior of the introduced RSA-SO and the other methods is provided in this section.
Figure 3 shows the distribution of the best-selected candidate solutions obtained by RSA (in red) and SO (in green) for the twelve datasets, with the number of iterations on the x-axis and the selected solutions on the y-axis. It can be noticed in Figure 3 that RSA and SO begin by exploring the search space, followed by exploiting the best candidate solution in the feature space. For example, in the initial 25 iterations on the KrvskpEW dataset, more candidate solutions are selected from the first half of the revised candidate solutions, indicating that high walking in RSA is more effective than SO. Similarly, the last 25 iterations indicate that the hunting cooperation process in RSA exploits candidate solutions more effectively than SO. The dominance of SO during iterations 25 to 50 and 50 to 75 shows that exploration using belly walking and exploitation using hunting coordination in RSA are not very effective. Similar observations can be made for the SpectEW, Tic-tac-toe, and Chemical Water datasets. On the IonosphereEW and Vote datasets, most of the iterations are dominated by SO. On the other hand, most iterations for the Breastcancer dataset show approximately equal numbers of candidate solutions selected from both methods, indicating their codominance. Similar codominance can be observed in the first 25 and the last 25 iterations for the BreastEW, Churn, HeartEW, Sonar, and Zoo datasets.
Table 3 and Table 4 compare all the FS approaches in terms of average testing accuracy and the number of selected features. In MH methods, the best solution in the population is the one with the highest classification accuracy and the minimum number of features. In Table 3, the RSA-SO scored the best accuracy compared with the other techniques on eight out of twelve datasets. This can be attributed to the improved capability of the RSA-SO to broadly search the high-performance regions of the search space. For IonosphereEW, SO is placed first, while for the SonarEW and Zoo datasets, GWO performed best. Both WOA and RSA-SO achieved the same accuracy on the Chemical Water dataset.
As per the results in Table 4, the introduced RSA-SO had the smallest number of selected features on nine out of twelve datasets. This confirms the efficiency of the proposed RSA-SO in eliminating irrelevant features and reducing the search space. RSA-SO tied with other methods on the Breastcancer, IonosphereEW, and Tic-tac-toe datasets, while the RSA method gained the best results only on the Churn dataset and SO attained the best results on the SonarEW and Vote datasets. The same number of features is selected by all the methods on the Breastcancer and Tic-tac-toe datasets, and GWO selected the smallest number of features on the SonarEW dataset.
Table 5 summarizes the results obtained by the RSA-SO against the other MH algorithms on the different datasets. It also presents the rank of each MH algorithm for each dataset based on the average, STD, best, and worst fitness values. From Table 5, it can be observed that the RSA-SO earned the first rank on nine out of twelve datasets. For the Breastcancer, IonosphereEW, and Zoo datasets, PSO, MVO, and RSA achieved the first ranks, respectively, while the proposed RSA-SO achieved ranks of 4, 4, and 2. The RSA-SO provides the best fitness values on eight datasets, while all the methods have a similar average best fitness on the Tic-tac-toe dataset. RSA-SO has the smallest worst fitness value on seven datasets and a similar one on the Breastcancer dataset. Moreover, the RSA-SO has the better Avg and STD of fitness values on eight and six datasets, respectively. RSA-SO and SSA had the same Avg and STD of fitness values on the HeartEW dataset, while WOA, SSA, RSA, and RSA-SO had the same Avg and STD on the Chemical Water dataset. These results prove the capability of the introduced RSA-SO to sustain a stable balance between the two main principles of MH methods.
The average computational time in seconds for the RSA-SO and the other MH methods, computed over 20 independent runs on all the datasets, is provided in Table 6. According to these results, the average computational time consumed by the RSA-SO is lower than that of PSO, GWO, MVO, WOA, SSA, RSA, and SO on five datasets. This is because RSA and SO run at the same time in a parallel manner in each iteration, which decreases the running time. Taking into account both the accuracy rate and the running time, the introduced RSA-SO proves superior, since it attained a high accuracy rate and a competitive execution time on most of the datasets. WOA ranked first for BreastEW, while SSA placed first for the HeartEW and Tic-tac-toe datasets. GWO required the least time on SpectEW and Chemical Water, and PSO needed the least time on the Zoo dataset.
Figure 4 shows the convergence behavior of the introduced RSA-SO and the other methods, with the number of iterations (up to 100) on the x-axis and the average fitness values on the y-axis; the curves correspond to the best solution obtained after executing each method for 20 runs. In Figure 4, one can observe that RSA-SO converges faster and better than the other methods on all of the twelve datasets except three, namely the IonosphereEW, SpectEW, and Zoo datasets. RSA-SO thus has the fastest convergence speed on nine out of twelve datasets, which proves its suitability for the FS problem.

4.4. Performance of RSA-SO in Engineering

In this section, the performance of the RSA-SO is tested on well-known engineering problems, which are Pressure Vessel Design (PVD) and Cantilever Beam Design (CBD).

4.4.1. Pressure Vessel Design (PVD)

The optimal design of a PVD aims to minimize the total cost of a pressure vessel, comprising material, forming, and welding costs [30]. The PVD problem consists of four variables, as shown in Figure 5: T_s denotes the thickness of the shell, T_h the thickness of the head, R the inner radius, and L the length of the cylindrical section of the vessel. The objective function of this problem can be written as:
Minimize:
f(x) = 0.6224\, x_1 x_3 x_4 + 1.7781\, x_2 x_3^2 + 3.1661\, x_1^2 x_4 + 19.84\, x_1^2 x_3 \qquad (22)
Subject to:
g_1(x) = -x_1 + 0.0193\, x_3 \le 0
g_2(x) = -x_2 + 0.00954\, x_3 \le 0
g_3(x) = -\pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 + 1{,}296{,}000 \le 0
g_4(x) = x_4 - 240 \le 0 \qquad (23)
Variable ranges: ( 0 \le x_i \le 100, \; i = 1, 2 ) and ( 10 \le x_i \le 200, \; i = 3, 4 ).
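The objective and constraints map directly to a penalized cost that an MH method can minimize. The sketch below assumes the standard PVD formulation reconstructed above and a simple static penalty; the paper does not state its constraint-handling scheme, so the penalty weight is illustrative.

```python
import numpy as np

def pvd_cost(x):
    """PVD objective, Equation (22): x = (Ts, Th, R, L)."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def pvd_penalized(x, penalty=1e6):
    """Static penalty for each violated constraint g_k(x) <= 0 of Equation (23)."""
    x1, x2, x3, x4 = x
    g = np.array([
        -x1 + 0.0193 * x3,
        -x2 + 0.00954 * x3,
        -np.pi * x3**2 * x4 - (4.0 / 3.0) * np.pi * x3**3 + 1_296_000,
        x4 - 240.0,
    ])
    return pvd_cost(x) + penalty * np.clip(g, 0.0, None).sum()

# The reported RSA-SO solution evaluates to roughly the cost in Table 7
print(pvd_cost([1.2588, 0.0, 65.2252, 10.0]))  # ~2612
```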
Table 7 lists the results obtained by the RSA-SO for the PVD problem and compares them with those of the other methods. As listed in Table 7, the suggested RSA-SO provides a lower cost than the PSO, GWO, MVO, WOA, SSA, RSA, and SO methods; therefore, RSA-SO is suggested as a helpful method for the PVD problem. GWO placed second, MVO and SO placed third and fourth, respectively, and RSA placed last.

4.4.2. Cantilever Beam Design (CBD)

Figure 6 illustrates the design of the CBD problem. The problem aims to minimize the total weight of the beam and has five parameters: x1, x2, x3, x4, and x5 [31]. The objective function of the CBD problem can be mathematically presented as follows:
Minimize:
f(x) = 0.6224\,(x_1 + x_2 + x_3 + x_4 + x_5) \qquad (24)
Subject to:
g(x) = \frac{60}{x_1^3} + \frac{27}{x_2^3} + \frac{19}{x_3^3} + \frac{7}{x_4^3} + \frac{1}{x_5^3} - 1 \le 0 \qquad (25)
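For completeness, the CBD objective and constraint can be coded the same way. The coefficients follow Equations (24) and (25) as printed here; at the RSA-SO solution in Table 8 the constraint is active (g(x) ≈ 0), which is consistent with this reading.

```python
def cbd_weight(x):
    """CBD objective, Equation (24)."""
    return 0.6224 * sum(x)

def cbd_constraint(x):
    """Equation (25): the design is feasible when the returned value is <= 0."""
    x1, x2, x3, x4, x5 = x
    return 60 / x1**3 + 27 / x2**3 + 19 / x3**3 + 7 / x4**3 + 1 / x5**3 - 1.0

x_best = [5.9481, 4.8974, 4.4228, 3.5007, 2.1396]  # RSA-SO row of Table 8
print(cbd_weight(x_best), cbd_constraint(x_best))   # ~13.01, ~0.0
```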
In Table 8, the performance results of the RSA-SO for the CBD engineering problem are given when it is compared with other MH methods. As per Table 8, the best weight obtained by RSA-SO is the smallest compared to the other methods. MVO, WOA, and SO place second, third, and fourth, respectively, while SSA and RSA are in last place.
Based on the previous results and discussion, the developed RSA-SO has a high ability to explore the feasible region that contains the optimal solution. However, the time complexity of RSA-SO still needs improvement, especially when it is applied to high-dimensional data.

5. Conclusions and Future Works

FS is one of the key factors in improving classifier capability in classification problems. In this paper, an FS approach based on RSA and SO, named RSA-SO, is presented. The introduced RSA-SO approach employs both RSA and SO in a parallel mechanism to tackle the FS problem. We tested the RSA-SO approach on twelve public datasets from different fields and on two engineering problems. RSA-SO's capability was evaluated using a set of evaluation measures and compared with some recently reported MH methods for FS, including SO, RSA, SSA, WOA, MVO, GWO, and PSO. The results verify that RSA-SO performs competitively with other MH methods for FS, and it can provide practical and accurate solutions for the two engineering optimization problems. For future work, RSA-SO will be applied to address other problems in different fields, such as sentiment analysis, Big Data, smart cities, and other practical engineering problems.

Author Contributions

Conceptualization, I.A.-S., P.H.K. and M.A.E.; methodology, I.A.-S. and P.H.K.; software, I.A.-S.; validation, I.A.-S. and P.H.K.; formal analysis, I.A.-S., P.H.K., S.A., S.D. and M.A.E.; resources, I.A.-S. and P.H.K.; writing—original draft preparation, I.A.-S. and P.H.K.; writing—review and editing, I.A.-S., P.H.K., S.A., S.D., B.E. and M.A.E.; funding acquisition, S.A. All authors have read and agreed to the published version of the manuscript.

Funding

Princess Nourah bint Abdulrahman Researchers Supporting Project number (PNURSP2022R197), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Data Availability Statement

The data are available upon request from the authors.

Acknowledgments

Princess Nourah bint Abdulrahman Researchers Supporting Project number (PNURSP2022R197), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Conflicts of Interest

All authors declare that they have no conflicts of interest.

Nomenclature

Acronyms
FS: Feature Selection
MH: Meta-Heuristic
OFS: Optimal Features Subset
RSA: Reptile Search Algorithm
SO: Snake Optimizer
Symbols
x_{i,j}: ith candidate solution for the jth feature dimension
N: Number of candidate solutions
M: Feature dimension
G: Total number of iterations for an MH method
f_i: Fitness value of the ith candidate solution
n_{i,j}: Hunting operator for the jth feature in the ith solution in RSA
x_i^{RSA}: ith candidate solution vector for RSA
x_i^{SO}: ith candidate solution vector for SO

References

  1. Han, J.; Pei, J.; Kamber, M. Data Mining: Concepts and Techniques; Elsevier: Amsterdam, The Netherlands, 2011.
  2. Crone, S.F.; Lessmann, S.; Stahlbock, R. The impact of preprocessing on data mining: An evaluation of classifier sensitivity in direct marketing. Eur. J. Oper. Res. 2006, 173, 781–800.
  3. Nguyen, B.H.; Xue, B.; Zhang, M. A survey on swarm intelligence approaches to feature selection in data mining. Swarm Evol. Comput. 2020, 54, 100663.
  4. Qaraad, M.; Amjad, S.; Manhrawy, I.I.; Fathi, H.; Hassan, B.A.; El Kafrawy, P. A hybrid feature selection optimization model for high dimension data classification. IEEE Access 2021, 9, 42884–42895.
  5. Sawalha, R.; Doush, I.A. Face recognition using harmony search-based selected features. Int. J. Hybrid Inf. Technol. 2012, 5, 1–16.
  6. Shang, W.; Huang, H.; Zhu, H.; Lin, Y.; Qu, Y.; Wang, Z. A novel feature selection algorithm for text categorization. Expert Syst. Appl. 2007, 33, 1–5.
  7. Boutemedjet, S.; Bouguila, N.; Ziou, D. A hybrid feature extraction selection approach for high-dimensional non-Gaussian data clustering. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 31, 1429–1443.
  8. Vo, T.T.; Liu, M.K.; Tran, M.Q. Identification of Milling Stability by using Signal Analysis and Machine Learning Techniques. Intern. J. Robot. 2021, 4, 30–39.
  9. Tarhan, İ.; Oğuz, C. Generalized order acceptance and scheduling problem with batch delivery: Models and metaheuristics. Comput. Oper. Res. 2021, 134, 105414.
  10. Ikeda, S.; Nagai, T. A novel optimization method combining metaheuristics and machine learning for daily optimal operations in building energy and storage systems. Appl. Energy 2021, 289, 116716.
  11. Band, S.S.; Ardabili, S.; Danesh, A.S.; Mansor, Z.; AlShourbaji, I.; Mosavi, A. Colonial competitive evolutionary Rao algorithm for optimal engineering design. Alex. Eng. J. 2022, 61, 11537–11563.
  12. Zelinka, I. A survey on evolutionary algorithms dynamics and its complexity–Mutual relations, past, present and future. Swarm Evol. Comput. 2015, 25, 2–14.
  13. Braik, M.; Sheta, A.; Al-Hiary, H. A novel meta-heuristic search algorithm for solving optimization problems: Capuchin search algorithm. Neural Comput. Appl. 2021, 33, 2515–2547.
  14. Alam, S.; Dobbie, G.; Rehman, S.U. Analysis of particle swarm optimization based hierarchical data clustering approaches. Swarm Evol. Comput. 2015, 25, 36–51.
  15. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513.
  16. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95-International Conference on Neural Networks, Perth, WA, Australia, 27 November 1995.
  17. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  18. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  19. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191.
  20. Heidari, A.A.; Aljarah, I.; Faris, H.; Chen, H.; Luo, J.; Mirjalili, S. An enhanced associative learning-based exploratory whale optimizer for global optimization. Neural Comput. Appl. 2020, 32, 5185–5211.
  21. Moradi, P.; Gholampour, M. A hybrid particle swarm optimization for feature subset selection by integrating a novel local search strategy. Appl. Soft Comput. 2016, 43, 117–130.
  22. Al-Shourbaji, I.; Helian, N.; Sun, Y.; Alshathri, S.; Elaziz, M.A. Boosting Ant Colony Optimization with Reptile Search Algorithm for Churn Prediction. Mathematics 2022, 10, 1031.
  23. Abualigah, L.; Abd Elaziz, M.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158.
  24. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 2022, 242, 108320.
  25. Wang, X.H.; Zhang, Y.; Sun, X.Y.; Wang, Y.L.; Du, C.H. Multi-objective feature selection based on artificial bee colony: An acceleration approach with variable sample size. Appl. Soft Comput. 2020, 88, 106041.
  26. Ewees, A.A.; El Aziz, M.A.; Hassanien, A.E. Chaotic multi-verse optimizer-based feature selection. Neural Comput. Appl. 2019, 31, 991–1006.
  27. Ibrahim, R.A.; Elaziz, M.A.; Ewees, A.A.; El-Abd, M.; Lu, S. New feature selection paradigm based on hyper-heuristic technique. Appl. Math. Model. 2021, 98, 14–37.
  28. Song, X.F.; Zhang, Y.; Gong, D.W.; Gao, X.Z. A fast hybrid feature selection based on correlation-guided clustering and particle swarm optimization for high-dimensional data. IEEE Trans. Cybern. 2021, 1–14.
  29. Arora, S.; Anand, P. Binary butterfly optimization approaches for feature selection. Expert Syst. Appl. 2019, 116, 147–160.
  30. Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct. 2016, 169, 1–12.
  31. Gafar, M.G.; El-Sehiemy, R.A.; Sarhan, S. A Hybrid Fuzzy-Crow Optimizer for Unconstrained and Constrained Engineering Design Problems. Hum. Cent. Comput. Inf. Sci. 2022, 12, 1–24.
Figure 1. Structure of RSA-SO approach.
Figure 2. Example of optimizing the low-fitness candidate solutions in the proposed RSA-SO algorithm for dominance of (a) RSA, (b) SO, and (c) codominance of both, as shown in Equation (19).
Figure 3. Distribution of best selected candidate solutions between RSA and SO for different datasets.
Figure 4. Convergence curves of the RSA-SO and the other methods.
Figure 5. The PVD problem.
Figure 6. The CBD problem.
Table 1. List of the datasets used in the experiments.
No. | Dataset | Instances | Features | Classes | Domain
1 | Breastcancer | 699 | 9 | 2 | Biology
2 | BreastEW | 569 | 30 | 2 | Biology
3 | Churn | 3150 | 16 | 2 | Telecom
4 | HeartEW | 270 | 13 | 2 | Biology
5 | IonosphereEW | 351 | 34 | 2 | Electromagnetic
6 | KrvskpEW | 3196 | 36 | 2 | Game
7 | SonarEW | 208 | 60 | 2 | Biology
8 | SpectEW | 267 | 22 | 2 | Biology
9 | Tic-tac-toe | 958 | 9 | 2 | Game
10 | Vote | 300 | 16 | 2 | Politics
11 | Chemical Water | 178 | 13 | 3 | Chemistry
12 | Zoo | 101 | 16 | 6 | Artificial
Table 2. Parameter settings.
Algorithm | Parameters
PSO | c_1 = c_2 = 2, w_min = 0.1, and w_max = 0.9
GWO | α decreases linearly from 2 to 0, C is a random value in [0, 2], and A decreases linearly from 1 to −1
MVO | WEP_max = 1, WEP_min = 0.2, α decreases from 2 to 0, and p = 6
WOA | a decreases from 2 to 0 and a_2 decreases from −1 to −2
SSA | c_2 and c_3 are random values in [0, 1]
RSA | γ = 0.9, θ = 0.5; UB and LB vary according to the features in the dataset
SO | c_1 = 0.5, c_2 = 0.05, c_3 = 2; x_max and x_min vary according to the features in the dataset
RSA-SO | Uses the parameters of RSA and SO
Table 3. Results of RSA-SO and other methods in terms of classification accuracy (%).
Dataset | PSO | GWO | MVO | WOA | SSA | RSA | SO | RSA-SO
Breastcancer | 99.1401 | 99.1474 | 99.1473 | 99.1255 | 99.1620 | 99.1839 | 99.1766 | 99.2132
BreastEW | 95.4001 | 95.5495 | 95.3739 | 95.3726 | 95.6496 | 95.7337 | 95.3722 | 96.1579
Churn | 89.6476 | 95.6078 | 92.9740 | 95.5873 | 96.3150 | 94.5260 | 93.2201 | 96.4688
HeartEW | 73.7800 | 80.9395 | 76.1839 | 84.5661 | 99.2293 | 99.1479 | 99.3587 | 99.7400
IonosphereEW | 93.1179 | 93.4326 | 93.1345 | 92.3234 | 92.6408 | 92.9915 | 93.5485 | 92.5147
KrvskpEW | 96.3220 | 95.8335 | 96.7343 | 96.0780 | 96.4841 | 95.5679 | 97.0928 | 97.1065
SonarEW | 89.2195 | 91.0967 | 88.8637 | 87.3249 | 88.4737 | 87.5277 | 87.7426 | 90.2545
SpectEW | 87.0948 | 85.8567 | 87.2818 | 86.0475 | 86.8869 | 86.0594 | 87.4435 | 87.5323
Tic-tac-toe | 82.7718 | 82.7874 | 82.6882 | 82.7665 | 82.7874 | 82.6150 | 82.6934 | 82.8031
Vote | 64.0552 | 64.3368 | 64.3513 | 63.2006 | 63.9436 | 63.3290 | 62.4167 | 64.5815
Chemical Water | 99.9503 | 99.9671 | 99.9607 | 99.9944 | 99.9837 | 99.9888 | 99.9713 | 99.9944
Zoo | 96.7110 | 97.7109 | 96.9901 | 96.4038 | 97.3632 | 97.1310 | 96.8753 | 97.3928
Table 4. Comparison between RSA-SO and other methods in terms of average OFS.
Dataset | PSO | GWO | MVO | WOA | SSA | RSA | SO | RSA-SO
Breastcancer | 9 | 9 | 9 | 9 | 9 | 9 | 9 | 9
BreastEW | 3 | 3 | 3 | 9 | 3 | 7 | 3 | 2
Churn | 14 | 10 | 13 | 9 | 11 | 8 | 12 | 11
HeartEW | 13 | 10 | 11 | 5 | 3 | 2 | 5 | 1
IonosphereEW | 4 | 4 | 5 | 4 | 4 | 4 | 4 | 4
KrvskpEW | 31 | 32 | 29 | 31 | 29 | 29 | 27 | 23
SonarEW | 27 | 17 | 28 | 26 | 27 | 28 | 20 | 23
SpectEW | 11 | 11 | 11 | 14 | 11 | 13 | 10 | 8
Tic-tac-toe | 9 | 9 | 9 | 9 | 9 | 9 | 9 | 9
Vote | 6 | 5 | 7 | 5 | 7 | 6 | 3 | 6
Chemical Water | 9 | 6 | 7 | 2 | 3 | 2 | 5 | 1
Zoo | 13 | 8 | 11 | 8 | 9 | 6 | 10 | 5
Table 5. Best, worst, Avg, and STD fitness values obtained by different methods.
Dataset | Metric | PSO | GWO | MVO | WOA | SSA | RSA | SO | RSA-SO
Breastcancer | Best | 0.0160 | 0.1605 | 0.0160 | 0.1605 | 0.0160 | 0.1605 | 0.0160 | 0.1605
 | Worst | 0.1895 | 0.1895 | 0.1895 | 0.1895 | 0.1895 | 0.1895 | 0.1895 | 0.1895
 | Avg. | 0.0161 | 0.0161 | 0.0161 | 0.0161 | 0.0161 | 0.0161 | 0.0161 | 0.0161
 | STD. | 0.0008 | 0.0008 | 0.0010 | 0.0010 | 0.0011 | 0.0010 | 0.0011 | 0.0010
 | Rank | 1 | 2 | 3 | 4 | 5 | 4 | 5 | 4
BreastEW | Best | 0.0492 | 0.0439 | 0.0473 | 0.0436 | 0.0401 | 0.0385 | 0.0491 | 0.0382
 | Worst | 0.0562 | 0.0579 | 0.0579 | 0.0578 | 0.0579 | 0.0473 | 0.0578 | 0.0491
 | Avg. | 0.0492 | 0.0439 | 0.0473 | 0.0436 | 0.0421 | 0.0486 | 0.0491 | 0.0401
 | STD. | 0.0019 | 0.0045 | 0.0028 | 0.0044 | 0.0044 | 0.0019 | 0.0026 | 0.0018
 | Rank | 8 | 4 | 5 | 3 | 2 | 6 | 7 | 1
Churn | Best | 0.0418 | 0.0421 | 0.0422 | 0.0403 | 0.0406 | 0.0403 | 0.0415 | 0.0393
 | Worst | 0.1346 | 0.0638 | 0.1345 | 0.0817 | 0.0491 | 0.0996 | 0.1346 | 0.0484
 | Avg. | 0.0418 | 0.0421 | 0.0421 | 0.0403 | 0.0406 | 0.0403 | 0.0415 | 0.0393
 | STD. | 0.0354 | 0.0058 | 0.0353 | 0.0119 | 0.0025 | 0.0220 | 0.0276 | 0.0018
 | Rank | 5 | 6 | 7 | 2 | 3 | 2 | 4 | 1
HeartEW | Best | 0.2865 | 0.2828 | 0.2864 | 0.1909 | 0.0002 | 0.0001 | 0.0003 | 0.0000
 | Worst | 0.2692 | 0.1967 | 0.2440 | 0.1566 | 0.0001 | 0.0001 | 0.0002 | 0.0000
 | Avg. | 0.1983 | 0.1250 | 0.1323 | 0.1286 | 0.0000 | 0.0001 | 0.0001 | 0.0000
 | STD. | 0.0248 | 0.0748 | 0.0528 | 0.0191 | 0.0000 | 0.0000 | 0.0001 | 0.0000
 | Rank | 8 | 5 | 7 | 6 | 2 | 3 | 4 | 1
IonosphereEW | Best | 0.0904 | 0.0734 | 0.0848 | 0.1045 | 0.0903 | 0.0819 | 0.0706 | 0.0932
 | Worst | 0.0692 | 0.0661 | 0.0694 | 0.0773 | 0.0742 | 0.0706 | 0.0651 | 0.0753
 | Avg. | 0.0593 | 0.0594 | 0.0566 | 0.0650 | 0.0621 | 0.0621 | 0.0621 | 0.0594
 | STD. | 0.0072 | 0.0039 | 0.0075 | 0.0094 | 0.0080 | 0.0069 | 0.0034 | 0.0082
 | Rank | 2 | 3 | 1 | 8 | 7 | 6 | 5 | 4
KrvskpEW | Best | 0.0500 | 0.0577 | 0.0518 | 0.0519 | 0.0546 | 0.0596 | 0.0453 | 0.0497
 | Worst | 0.0451 | 0.0502 | 0.0404 | 0.0475 | 0.0428 | 0.0518 | 0.0362 | 0.0350
 | Avg. | 0.0264 | 0.0373 | 0.0236 | 0.0379 | 0.0264 | 0.0307 | 0.0230 | 0.0230
 | STD. | 0.0049 | 0.0047 | 0.0077 | 0.0036 | 0.0099 | 0.0068 | 0.0089 | 0.0107
 | Rank | 5 | 7 | 3 | 8 | 4 | 6 | 2 | 1
SonarEW | Best | 0.0917 | 0.0723 | 0.1010 | 0.0957 | 0.0966 | 0.0868 | 0.1005 | 0.0775
 | Worst | 0.1113 | 0.0910 | 0.1148 | 0.1291 | 0.1186 | 0.1282 | 0.1247 | 0.1004
 | Avg. | 0.0917 | 0.0723 | 0.1010 | 0.0957 | 0.0966 | 0.0868 | 0.1005 | 0.0775
 | STD. | 0.0111 | 0.0091 | 0.0128 | 0.0135 | 0.0140 | 0.0211 | 0.0154 | 0.0113
 | Rank | 4 | 2 | 8 | 5 | 6 | 3 | 7 | 1
SpectEW | Best | 0.1227 | 0.1300 | 0.1390 | 0.1338 | 0.1227 | 0.1301 | 0.1227 | 0.1190
 | Worst | 0.1328 | 0.1450 | 0.1307 | 0.1444 | 0.1350 | 0.1438 | 0.1286 | 0.1271
 | Avg. | 0.1227 | 0.1300 | 0.1191 | 0.1338 | 0.1227 | 0.1301 | 0.1227 | 0.1190
 | STD. | 0.0066 | 0.0102 | 0.0099 | 0.0088 | 0.0086 | 0.0117 | 0.0046 | 0.0061
 | Rank | 5 | 7 | 2 | 8 | 6 | 5 | 3 | 1
Tic-tac-toe | Best | 0.1832 | 0.1832 | 0.1832 | 0.1822 | 0.1832 | 0.1853 | 0.1832 | 0.1832
 | Worst | 0.1806 | 0.1804 | 0.1814 | 0.1806 | 0.1804 | 0.1821 | 0.1813 | 0.1802
 | Avg. | 0.1749 | 0.1775 | 0.1780 | 0.1771 | 0.1770 | 0.1780 | 0.1776 | 0.1739
 | STD. | 0.0022 | 0.0018 | 0.0021 | 0.0018 | 0.0025 | 0.0023 | 0.0017 | 0.0016
 | Rank | 2 | 5 | 7 | 4 | 3 | 8 | 6 | 1
Vote | Best | 0.3756 | 0.3688 | 0.3712 | 0.3824 | 0.3734 | 0.3848 | 0.3824 | 0.3620
 | Worst | 0.3597 | 0.3564 | 0.3574 | 0.3674 | 0.3615 | 0.3665 | 0.3742 | 0.3546
 | Avg. | 0.3461 | 0.3483 | 0.3484 | 0.3484 | 0.3461 | 0.3484 | 0.3575 | 0.3461
 | STD. | 0.0074 | 0.0055 | 0.0058 | 0.0082 | 0.0074 | 0.0101 | 0.0083 | 0.0046
 | Rank | 3 | 4 | 5 | 6 | 2 | 7 | 8 | 1
Chemical Water | Best | 0.0006 | 0.0005 | 0.0006 | 0.0001 | 0.0002 | 0.0002 | 0.0004 | 0.0001
 | Worst | 0.0005 | 0.0003 | 0.0004 | 0.0001 | 0.0002 | 0.0001 | 0.0003 | 0.0001
 | Avg. | 0.0003 | 0.0002 | 0.0002 | 0.0001 | 0.0001 | 0.0001 | 0.0001 | 0.0001
 | STD. | 0.0001 | 0.0001 | 0.0001 | 0.0000 | 0.0000 | 0.0000 | 0.0001 | 0.0000
 | Rank | 6 | 4 | 5 | 1 | 2 | 2 | 3 | 1
Zoo | Best | 0.0731 | 0.0420 | 0.0731 | 0.0620 | 0.0421 | 0.0421 | 0.0623 | 0.0412
 | Worst | 0.0406 | 0.0279 | 0.0369 | 0.0406 | 0.0315 | 0.0323 | 0.0373 | 0.0291
 | Avg. | 0.0318 | 0.0209 | 0.0214 | 0.0311 | 0.0210 | 0.0209 | 0.0213 | 0.0209
 | STD. | 0.0090 | 0.0077 | 0.0135 | 0.0109 | 0.0066 | 0.0074 | 0.0085 | 0.0078
 | Rank | 8 | 3 | 6 | 7 | 4 | 1 | 5 | 2
Table 6. Comparison between RSA-SO and other methods in terms of computation time (in seconds).
Dataset | PSO | GWO | MVO | WOA | SSA | RSA | SO | RSA-SO
Breastcancer | 15.8043 | 15.7762 | 15.7891 | 16.9361 | 18.8809 | 12.2260 | 13.9439 | 11.3005
BreastEW | 16.9211 | 17.0046 | 16.8496 | 15.4815 | 16.4994 | 17.4666 | 18.0835 | 20.7034
Churn | 46.4102 | 65.7563 | 46.2434 | 44.2000 | 45.3247 | 45.1050 | 44.5310 | 44.1699
HeartEW | 15.8490 | 16.1843 | 15.8615 | 16.1266 | 13.7198 | 14.8492 | 16.6837 | 14.7071
IonosphereEW | 16.4906 | 16.3805 | 16.4499 | 16.0545 | 20.2799 | 18.3783 | 17.7728 | 12.0760
KrvskpEW | 23.3375 | 22.7748 | 20.0801 | 20.7943 | 17.6028 | 26.9347 | 21.4032 | 15.1755
SonarEW | 16.0206 | 15.8344 | 15.9396 | 15.6063 | 14.8404 | 13.0648 | 17.7407 | 15.8526
SpectEW | 15.1340 | 12.7896 | 15.0302 | 14.5697 | 24.6677 | 13.6435 | 20.6525 | 15.6375
Tic-tac-toe | 8.1686 | 8.2544 | 8.3290 | 15.0991 | 8.0667 | 8.5040 | 12.8882 | 8.4919
Vote | 6.4341 | 6.4319 | 6.6094 | 6.9672 | 6.4030 | 6.7186 | 9.1398 | 6.2819
Chemical Water | 5.3900 | 4.7936 | 4.9758 | 4.8917 | 10.4849 | 16.3850 | 13.1670 | 14.0011
Zoo | 11.6261 | 13.6194 | 12.1658 | 12.9702 | 11.7885 | 24.3258 | 19.8634 | 15.4363
Table 7. Results of RSA-SO and other methods for solving the PVD problem.
Method | T_s | T_h | R | L | Best Cost
PSO | 1.0000 | 0.0000 | 1.0000 | 1.0000 | 2758.9974
GWO | 1.2591 | 0.0000 | 65.2298 | 10.0000 | 2613.1828
MVO | 1.2614 | 0.0000 | 65.2280 | 10.1553 | 2630.2904
WOA | 1.2679 | 0.0000 | 65.6966 | 13.7572 | 2878.7608
SSA | 1.2738 | 0.0000 | 64.9012 | 11.4029 | 2734.5819
RSA | 1.0000 | 0.0000 | 1.0000 | 1.0000 | 4277.1962
SO | 1.2667 | 0.0000 | 65.4471 | 10.0000 | 2650.2554
RSA-SO | 1.2588 | 0.0000 | 65.2252 | 10.0000 | 2611.9240
Table 8. Results of RSA-SO and other methods for solving the CBD problem.
Method | x_1 | x_2 | x_3 | x_4 | x_5 | Best Weight
PSO | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 13.6384
GWO | 5.5091 | 5.0942 | 4.5572 | 3.6607 | 2.2053 | 13.0869
MVO | 5.9006 | 4.8694 | 4.4550 | 3.4898 | 2.1957 | 13.0146
WOA | 5.9583 | 4.9565 | 4.4321 | 3.3923 | 2.1759 | 13.0176
SSA | 6.3791 | 3.9871 | 8.6664 | 3.6680 | 1.7987 | 15.2484
RSA | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 15.7689
SO | 5.9832 | 4.7939 | 4.6247 | 3.4697 | 2.0584 | 13.0268
RSA-SO | 5.9481 | 4.8974 | 4.4228 | 3.5007 | 2.1396 | 13.0135
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
