Article

A Modified Ant Lion Optimization Method and Its Application for Instance Reduction Problem in Balanced and Imbalanced Data

1 Mathematics Department, Faculty of Science, Al-Azhar University, Cairo 11754, Egypt
2 Department of Computer Engineering, Bandirma Onyedi Eylul University, Balikesir 10200, Turkey
3 Department of Computer Science and Information Technology, University of Kotli Azad Jammu and Kashmir, Kotli 11100, Pakistan
4 Examination Branch, Dibrugarh University, Dibrugarh 786004, India
5 Statistics Discipline, Khulna University, Khulna 9208, Bangladesh
6 Software Engineering, Firat University, Elazig 23100, Turkey
* Author to whom correspondence should be addressed.
Axioms 2022, 11(3), 95; https://doi.org/10.3390/axioms11030095
Submission received: 5 February 2022 / Revised: 19 February 2022 / Accepted: 22 February 2022 / Published: 24 February 2022
(This article belongs to the Special Issue Optimization Algorithms and Applications)

Abstract: Instance reduction is a pre-processing step devised to improve the task of classification. Instance reduction algorithms search for a reduced set of instances to mitigate the low computational efficiency and high storage requirements. Hence, finding the optimal subset of instances is of utmost importance. Metaheuristic techniques are used to search for the optimal subset of instances as a potential application. Antlion optimization (ALO) is a recent metaheuristic algorithm that simulates the antlion's foraging behavior in finding and attacking ants. However, the ALO algorithm suffers from local optima stagnation and slow convergence speed for some optimization problems. In this study, a new modified antlion optimization (MALO) algorithm is proposed to improve the basic ALO performance by adding a new parameter that depends on the step length of each ant while revising the antlion position. Furthermore, the suggested MALO algorithm is adapted to the challenge of instance reduction to obtain better results in terms of many metrics. The results based on twenty-three benchmark functions at 500 iterations and thirteen benchmark functions at 1000 iterations demonstrate that the proposed MALO algorithm escapes local optima and provides a better convergence rate compared to the basic ALO algorithm and some well-known and recent optimization algorithms. In addition, the results based on 15 balanced and imbalanced datasets and 18 oversampled imbalanced datasets show that the proposed instance reduction method can statistically outperform the basic ALO algorithm and is strongly competitive against other comparative algorithms in terms of four performance measures: Accuracy, Balanced Accuracy (BACC), Geometric mean (G-mean), and Area Under the Curve (AUC), in addition to run time. The MALO algorithm yields increments in Accuracy, BACC, G-mean, and AUC of up to 7%, 3%, 15%, and 9%, respectively, for some datasets over the basic ALO algorithm while requiring less computational time.

1. Introduction

Machine Learning plays a crucial role in extracting useful information in different research domains, for instance, medical data analysis [1,2,3,4], computer vision [5], road accident analysis [6], educational data mining [7], sentiment analysis [8], and many more. Instance reduction is one of the prime pre-processing tasks in machine learning applications. Before employing instance-based learning methods, alleviating noisy, erroneous, and redundant data is highly desirable. Instance reduction mitigates sensitivity to noise and high storage requirements; consequently, the computational complexity of learning a superior classification technique declines. There should be no noteworthy change in the between-class distribution before and after data reduction: a flawed data reduction technique may remove more instances of one class than of another, resulting in biased datasets.
Instance reduction can be applied to both balanced and imbalanced data to improve classification performance. Researchers have studied instance reduction on class-balanced data; however, there are not many studies on class-imbalanced data. The issue has attracted researchers in recent times because of practical applications in this domain [9]. Imbalanced datasets pose difficulties for learning tasks. Traditional methods exploited to learn from imbalanced data do not yield satisfactory outcomes, as they achieve excellent coverage of the majority class while the minority classes are disregarded. In some cases, high accuracies are reported, but the outcome is not trustworthy because the cardinality of the majority class is high compared to the minority class. For instance reduction, keeping the between-class distribution intact is vital. In imbalanced datasets, the minority class instances are also essential; such instances may be treated as noise or outliers, but they must not be removed when applying an instance reduction method. Hence, special techniques are needed to take care of that.
Among different instance reduction methods for imbalanced data, data-level methods are of utmost importance. These data-level techniques can be classified into two types: oversampling, in which the size of the minority class is increased, and under-sampling, in which the size of the majority class is reduced; ensemble-based and cost-sensitive methods are further alternatives. Various research papers have suggested that evolutionary-based techniques perform better than non-evolutionary ones in imbalanced dataset analysis and instance reduction.
Researchers have gained interest over the years in locating optimal values of the solution variables that meet definite conditions in global optimization problems. Classical optimization methods require enormous computational effort and are inclined to fail as the problem search space escalates. Meta-heuristic algorithms have come into the picture as they exhibit better computational efficacy in evading local minima [10]. These meta-heuristic algorithms have shown their superiority in tackling complex issues in different domains for the following reasons: (i) they can circumvent local minima; (ii) the gradient of the objective function is not needed; (iii) they are simple and easy to implement; and (iv) they can solve different problems from various fields. The increasing processing power of computers has had a positive impact on the development of such algorithms.
The metaphors applied in metaheuristic techniques include plants, humans, birds, the ecosystem, water, gravitational forces, and electromagnetic forces. As described in Figure 1, these techniques can be divided into two categories. The first category mimics physical or biological phenomena and comprises three sub-categories: swarm-based, physics-based, and evolution-based techniques. Human phenomena are the main inspiration behind the second category.
Abbreviations: ACO: Ant Colony Optimization, PSO: Particle Swarm Optimization, ABC: Artificial Bee Colony, AFSA: Artificial Fish Swarm Algorithm, DE: Differential Evolution, GA: Genetic Algorithm, BBO: Biogeography-Based Optimizer, ES: Evolution Strategy, GSA: Gravitational Search Algorithm, SA: Simulated Annealing, CFO: Central Force Optimization, BBBC: Big-Bang Big-Crunch, FBI: Forensic-Based Investigation, GSO: Group Search Optimizer, TLBO: Teaching-Learning-Based Optimization, HS: Harmony Search.
Metaheuristic techniques have been utilized for different real-world applications. Negi et al. [11] proposed a hybrid approach combining the PSO and GWO methods, dubbed HPSOGWO, to tackle optimization problems and the reliability allocation of life support systems and complex bridge systems. In [12], the authors devised a modified genetic algorithm (MGA) with a novel selection method, namely a generation-dependent mutation and an in-vitro-fertilization-based crossover. They applied their technique to commercial ship scheduling and routing in dynamic demand and supply environments; their model could lessen risks and abate port time with a static load factor. Ganguly [13] proposed a framework for simultaneous optimization of the Distributed Generation (DG) network performance index and penetration level to acquire the optimal sizes, numbers, and sites of DG units. He formulated two objective functions: the network performance index and the DG penetration level. His solution framework utilized multi-objective particle swarm optimization and was validated on a distribution system comprising 38 nodes.
In [14], the authors implemented a general type-2 fuzzy classification method for medical assistance, optimizing the general type-2 membership function parameters using ALO and comparing the two types of classifiers on the Framingham dataset. A general type-2 fuzzy classifier was deployed on the Jetson Nano hardware development board, and the execution times of the type-1 and type-2 fuzzy classification techniques were compared. A novel metaheuristic method called the Slime Mold Algorithm (SMA) was proposed in [15]; a fuzzy controller tuning technique was also offered, and the concept of enhancing the performance of metaheuristics with information feedback approaches was applied. The fuzzy controllers and their tuning methods were validated in real time with angular position control of a laboratory servo framework. In [16], a survey was presented on scientific literature dealing with Type-2 fuzzy logic controllers devised using nature-inspired optimization techniques. The review covered the most widespread optimizers used to attain the key parameters of Type-2 and Type-1 fuzzy controllers to enhance the obtained outcome. The PSO method was integrated with the Multi-Verse Optimizer (MVO) in [17] to classify endometrial carcinoma from gene expression by optimizing the parameters of an Elman neural network.
Swarm intelligence methods have also been utilized for feature reduction. Gupta et al. [18] presented a revised antlion optimization procedure to better identify thyroid infection. To mitigate the computational time and enhance the classification accuracy, the proposed method was exploited as a feature reduction technique to detect the vital attributes from a large set of attributes; it successfully eliminated 71.5% of irrelevant features. Based on Stochastic Fractal Search (SFS), El-Kenawy et al. [19] introduced a Modified Binary GWO (MbGWO) to determine key characteristics by attaining a balance between exploitation and exploration. They tested their MbGWO-SFS method on 19 machine learning datasets from the University of California, Irvine (UCI); comparison with state-of-the-art optimization methods demonstrated the superiority of the method. Lin et al. [20] applied a modified cat swarm optimization (CSO), dubbed ICSO, that outperformed PSO; the limitation of CSO is its long computation time. Their method selected features in big-data-related text classification. To propose a feature selection method, Wan et al. [21] utilized a customized binary-coded ant colony optimization (MBACO) method in combination with a genetic algorithm. Their technique comprised two models: the pheromone density model (PMBACO) and the visibility density model (VMBACO). The results acquired by GA were applied as initial pheromone evidence in the PMBACO model, whereas the solution attained by GA was employed as visibility information in the VMBACO model. Based on a modified grasshopper optimization method, Zakeri et al. [22] devised a new feature selection technique. The novel method, dubbed Grasshopper Optimization Algorithm for Feature Selection (GOFS), replaced duplicate features with promising features by applying statistical techniques during the iterations.
Instance reduction is a pre-processing task devised to improve learning jobs. Nanni et al. [23] developed an effective technique based on particle swarm optimization for prototype reduction; their technique minimized the error rate on the training set. Zhai et al. [24] introduced a novel immune binary particle swarm optimization technique for time series classification, which searched for the smallest instance combination with the highest classification accuracy. Hamidzadeh et al. [25] presented a Large Margin Instance Reduction Algorithm (LMIRA) that kept border instances and removed the non-border ones. The algorithm treated the instance reduction issue as a constrained binary optimization problem, and a filled function algorithm was exploited to tackle it. The reduction process relied on the hyperplane that separated the two-class data and provided large margin separation. Saidi et al. [26] proposed a novel instance selection method dubbed Ensemble Margin Instance Selection (EMIS), which employed the ensemble margin. They applied their method to the automatic recognition and selection of white blood cells (WBC) in cytological images.
Carbonera and Abel [27] devised an effective and simple density-based framework for instance selection termed local density-based instance selection (LDIS). Their technique kept the densest instances in an arbitrary neighborhood by examining the instances of each class; for evaluating the accuracy, they applied the K-Nearest Neighbor (KNN) algorithm. de Haro-García et al. [28] utilized a boosting method to obtain reduced instance sets with better accuracy, performing stepwise addition of instances by applying instance weighting derived from the construction of ensembles of classifiers.
Numerous modified versions of the Antlion optimizer have been proposed for solving different research problems. In [29], Wang et al. proposed an enhanced alternative to the Antlion Optimizer (ALO), also named MALO, incorporating opposition-based learning with two functional operators centered on differential evolution, to deal with the implicit vulnerabilities of the conventional ALO. Pierezan et al. [30] suggested four multi-objective ALO (MOALO) methods utilizing crowding distance, the dominance concept for choosing the elite, and tournament selection techniques with various schemes to pick the leader. Assiri et al. [31] reviewed the benefits and different categories of ALO algorithms, such as Modified, Hybrid, and Multi-Objective, after giving a detailed introduction to the procedure; the paper also discussed the applications and foundations of the method and finished with some suggestions and possible future directions.
In the literature, different metaheuristic and optimization algorithms have been proposed to enhance classification performance via instance reduction. However, as far as the authors are aware, this is the first time the ALO algorithm, or a modified version of it, has been proposed to solve the instance reduction problem in balanced and imbalanced data. In this paper, MALO is utilized to enhance ALO's ability to escape local optima while providing a better convergence rate and enhancing the classification performance on real-world datasets through optimized instance reduction.
The main contributions of this work are summarized as follows:
(1) Since the ALO algorithm suffers from local optima stagnation and slow convergence speed for some optimization problems [32], this study proposes a new modified antlion optimization (MALO) algorithm to enhance the optimization efficiency and accuracy of ALO by adding a new parameter that depends on the step length of each ant, updating the antlion position based on this parameter and the upper and lower bounds of the search space.
(2) The proposed MALO algorithm is tested on twenty-three benchmark functions at 500 iterations and thirteen benchmark functions at 1000 iterations. The results provide evidence that the suggested MALO escapes local optima and provides a better convergence rate compared to the basic ALO algorithm and some well-known and recent optimization algorithms.
(3) Furthermore, 15 balanced and imbalanced datasets were employed to test the performance of the proposed MALO algorithm on reducing instances of the training data and the results are compared with some well-known optimization algorithms.
(4) The Wilcoxon signed-rank test is also applied. The results showcase that the proposed instance reduction method statistically outperforms the basic ALO algorithm and other comparable optimization techniques based on the recorded Accuracy, BACC, G-Mean, and AUC metrics while requiring less computational time.
(5) Moreover, antlion optimization and MALO were used to perform training data reduction for 18 oversampled imbalanced datasets, with learning performed using the Support Vector Machine (SVM) classifier in all experiments. The results are compared with one novel resampling method and two recent algorithms.

2. Methodology

2.1. Antlion Optimizer

Seyedali Mirjalili proposed the Antlion Optimizer (ALO) in 2015, modeling the process by which antlions hunt ants [33]. This method consists of five major hunting steps: the random walk of agents, entrapment of ants in the trap, construction of traps, reconstruction of traps, and catching prey. The ALO algorithm follows the interactions of ants and antlions, where the antlions chase the ants using traps and the ants are allowed to move stochastically through the search area for food.
The following matrices store the positions of the p ants and p antlions, where q is the number of variables (dimension):
$$S_{Ant} = \begin{bmatrix} Ant_{1,1} & Ant_{1,2} & \cdots & Ant_{1,q} \\ Ant_{2,1} & Ant_{2,2} & \cdots & Ant_{2,q} \\ \vdots & \vdots & \ddots & \vdots \\ Ant_{p,1} & Ant_{p,2} & \cdots & Ant_{p,q} \end{bmatrix}$$
and
$$S_{Antlion} = \begin{bmatrix} Antlion_{1,1} & Antlion_{1,2} & \cdots & Antlion_{1,q} \\ Antlion_{2,1} & Antlion_{2,2} & \cdots & Antlion_{2,q} \\ \vdots & \vdots & \ddots & \vdots \\ Antlion_{p,1} & Antlion_{p,2} & \cdots & Antlion_{p,q} \end{bmatrix}$$
If f denotes the fitness (objective) function during optimization, the following matrices store the fitness values of the p ants ($SO_{Ant}$) and the p antlions ($SO_{Antlion}$):
$$SO_{Ant} = \begin{bmatrix} f(Ant_{1,1}, Ant_{1,2}, \ldots, Ant_{1,q}) \\ f(Ant_{2,1}, Ant_{2,2}, \ldots, Ant_{2,q}) \\ \vdots \\ f(Ant_{p,1}, Ant_{p,2}, \ldots, Ant_{p,q}) \end{bmatrix}$$
and
$$SO_{Antlion} = \begin{bmatrix} f(Antlion_{1,1}, Antlion_{1,2}, \ldots, Antlion_{1,q}) \\ f(Antlion_{2,1}, Antlion_{2,2}, \ldots, Antlion_{2,q}) \\ \vdots \\ f(Antlion_{p,1}, Antlion_{p,2}, \ldots, Antlion_{p,q}) \end{bmatrix}$$
The ALO algorithm contains six operators.
(i) Random Walks of Ants. At every stage of optimization, ants revise their positions with a random walk X(t), calculated according to Equation (5), where cumsum computes the cumulative sum, T is the maximum number of iterations, t indicates the present iteration, and rand is a random number drawn from the uniform distribution on [0, 1].
$$X(t) = \left[\,0,\ \text{cumsum}(2r(t_1)-1),\ \text{cumsum}(2r(t_2)-1),\ \ldots,\ \text{cumsum}(2r(t_T)-1)\,\right] \quad (5)$$
where the stochastic function r(t) is defined in Equation (6):
$$r(t) = \begin{cases} 1 & \text{if } rand > 0.5 \\ 0 & \text{if } rand \le 0.5 \end{cases} \quad (6)$$
Since each search space has boundaries, the random walks are normalized to keep them within the search area. The min-max normalization in Equation (7) is applied before updating the positions of the ants, where $a_i$ is the minimum of the random walk of the ith variable, $q_i$ is the maximum of the random walk of the ith variable, $c_i^t$ is the minimum of the ith variable at the tth iteration, and $q_i^t$ is the maximum of the ith variable at the tth iteration.
$$X_i^t = \frac{\left(X_i^t - a_i\right) \times \left(q_i - c_i^t\right)}{q_i^t - a_i} + c_i^t \quad (7)$$
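For concreteness, Equations (5)–(7) can be sketched in a few lines of Python. This is a minimal illustration with scalar bounds standing in for the per-iteration variable bounds, not the authors' implementation:

```python
import numpy as np

def random_walk(T, rng):
    """Cumulative random walk over T iterations (Equation (5));
    each step is +1 or -1 according to r(t) in Equation (6)."""
    steps = 2.0 * (rng.random(T) > 0.5) - 1.0
    return np.concatenate(([0.0], np.cumsum(steps)))

def normalize_walk(X, c_t, q_t):
    """Min-max normalization of Equation (7): map the walk from its own
    range [a, q] into the current variable bounds [c_t, q_t]."""
    a, q = X.min(), X.max()
    return (X - a) * (q_t - c_t) / (q - a) + c_t

rng = np.random.default_rng(42)
walk = random_walk(500, rng)
print(normalize_walk(walk, -5.0, 5.0)[:5])  # walk confined to [-5, 5]
```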
(ii) Trapping in Antlion's Pits. The traps of antlions affect the random walks of ants. Equations (8) and (9) model the ant's random walk in a hypersphere, specified by the vectors c and q, around a chosen antlion.
In these equations, $lc^t$ is the vector of the minimum of all variables at the tth iteration, $hq^t$ is the vector of the maximum of all variables at the tth iteration, $c_i^t$ is the minimum of all variables for the ith ant, $q_i^t$ is the maximum of all variables for the ith ant, and $Antlion_j^t$ indicates the position of the chosen jth antlion at the tth iteration; i is the index of the current ant and j is the index of the current antlion.
$$c_i^t = Antlion_j^t + lc^t \quad (8)$$
and
$$q_i^t = Antlion_j^t + hq^t \quad (9)$$
(iii) Building Trap. A roulette wheel is utilized in the ALO procedure to select the fitter antlions for capturing ants, based on the fitness values during optimization.
(iv) Sliding Ants towards Antlion. Antlions build traps proportional to their fitness, and ants move arbitrarily according to the fitness values. Antlions shoot sand outwards when an ant is in the trap, at the middle of the pit. This behavior slides down the trapped ant as it attempts to escape; to model this, the radius of the ants' random-walk hypersphere is reduced adaptively. Considering $rc^t$ and $rq^t$ as the shrunken vectors of the minimum and maximum random walks of all variables at the tth iteration, respectively:
$$rc^t = \frac{lc^t}{I} \quad (10)$$
and
$$rq^t = \frac{hq^t}{I} \quad (11)$$
where $lc^t$ is the vector of the minimum of all variables at the tth iteration, $hq^t$ is the vector of the maximum of all variables at the tth iteration, and I denotes a ratio calculated as:
$$I = 10^w \, \frac{t}{T}$$
where w is a constant set according to the present iteration t (w = 2 if t > 0.1T, w = 3 if t > 0.5T, w = 4 if t > 0.75T, w = 5 if t > 0.9T, and w = 6 if t > 0.95T). In essence, the constant w tunes the intensity of exploitation and the precision.
Equations (10) and (11) shrink the range within which the ants' positions are revised and simulate the sliding of an ant inside the pit.
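A small helper makes the adaptive shrinking concrete. Note that the text defines w only for t > 0.1T, so treating I as 1 below that threshold is an assumption in this sketch:

```python
def shrink_ratio(t, T):
    """Ratio I used in Equations (10) and (11); w follows the schedule in
    the text. For t <= 0.1*T, where w is unspecified, I = 1 is assumed."""
    if t <= 0.10 * T:
        return 1.0
    w = 2
    if t > 0.50 * T: w = 3
    if t > 0.75 * T: w = 4
    if t > 0.90 * T: w = 5
    if t > 0.95 * T: w = 6
    return 10 ** w * (t / T)

# The walk bounds then shrink around the chosen antlion:
# rc_t = lc_t / shrink_ratio(t, T);  rq_t = hq_t / shrink_ratio(t, T)
```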
(v) Catching Prey and Rebuilding the Pit. An ant is caught by the antlion as soon as it reaches the bottom of the pit. The antlion then revises its position to the most recent position of the caught ant, to enhance its probability of capturing new prey, as in Equation (12):
$$Antlion_j^t = Ant_i^t \quad \text{if } f\left(Antlion_j^t\right) < f\left(Ant_i^t\right) \quad (12)$$
where $Antlion_j^t$ indicates the position of the chosen jth antlion at the tth iteration and $Ant_i^t$ the position of the ith ant at the tth iteration.
(vi) Elitism. Elitism is the essential feature of evolutionary processes that preserves the best solution(s) obtained at each phase of the optimization procedure. In the ALO procedure, the best antlion is the elite. Every ant walks randomly around both an antlion chosen via the roulette wheel and the elite simultaneously, as defined in Equation (13).
$$Ant_i^t = \frac{R_A^t + R_E^t}{2} \quad (13)$$
In this equation, $R_A^t$ is the random walk around the antlion chosen by the roulette wheel at the tth iteration, and $R_E^t$ is the random walk around the elite at the tth iteration.
Let X be a function that generates the random initial solutions, Y a function that manipulates the initial population provided by X, and Z a function that returns true when the termination criterion is satisfied. Using the operators above, the ALO procedure can be defined as a three-tuple:
ALO (X, Y, Z)
where the functions X, Y, and Z are defined as follows:
$$X \rightarrow \{S_{Ant},\ SO_{Ant},\ S_{Antlion},\ SO_{Antlion}\}$$
$$\{S_{Ant},\ S_{Antlion}\} \xrightarrow{\ Y\ } \{S_{Ant},\ S_{Antlion}\}$$
$$\{S_{Ant},\ S_{Antlion}\} \xrightarrow{\ Z\ } \{true,\ false\}$$
Here, $S_{Ant}$ is the matrix of ant positions, $S_{Antlion}$ holds the antlion positions, $SO_{Ant}$ contains the corresponding fitness of the ants, and $SO_{Antlion}$ that of the antlions.

2.2. Modified Antlion Optimization (MALO) Method and Its Adaptation for Instance Reduction

The ALO algorithm updates the ants' positions based on random walks around the antlion selected by the roulette wheel and around the elite. Then, by updating the elite during the search, the fittest solution is retained. However, ALO suffers from local optima stagnation and slow convergence speed for some optimization problems [32]. MALO is proposed to enhance the optimization precision and effectiveness of ALO by adding a new parameter $T_u$, which depends on the step length of each ant, and updating the antlion position based on $T_u$ and the upper and lower bounds of the search space. The pseudocode of the MALO optimization method is presented in Algorithm 1.
Algorithm 1: Pseudocode of MALO.
Set number of antlions = number of ants = p, upper bounds of variables = $ub$, lower bounds of variables = $lb$, maximum number of iterations = T, present iteration = t; rand is a random number in [0, 1], $lc$ and $hq$ are the vectors of the lowest and highest ant random walks, $R_A^t$ is the random walk around the antlion chosen by the roulette wheel at the tth iteration, and $R_E^t$ is the random walk around the elite at the tth iteration.
Randomly initialize the first population of ants and antlions
Compute the fitness of the ants and the antlions
Pinpoint the best antlion and take it as the current optimum (the elite)
While  ( t < T )
For  i = 1   t o   p (number of ants or antlions)
$T_u = i/p$
$mu = 10^{T_u}/100$
Choose an antlion utilizing the Roulette wheel
Revise c and q by applying Equations (10) and (11)
Construct a random walk and normalize it by applying Equations (5) and (7)
% Generate a new position based on the lower and upper bounds and a random step proportional to the current position
$y^t = 2 \cdot rand\left(size\left(R_A^t\right)\right) - 1$
$dx = \dfrac{(1 + mu)^{\left|y^t\right|} - 1}{mu} \cdot sign\left(y^t\right) \cdot (ub - lb)$
The position of the ant is updated using the equation
$Ant_i^t = \dfrac{\left(R_A^t + R_E^t\right)/2}{dx}$
End For
Compute the fitness of all ants
An antlion is replaced by its corresponding ant if the ant becomes fitter (Equation (12))
If an antlion becomes fitter than the elite, then the elite is updated
End While
Return elite
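To make Algorithm 1 concrete, the modified position update can be sketched in Python as follows. This is our reading of the pseudocode, not a reference implementation; in particular, $mu = 10^{T_u}/100$ is an assumption about the intended grouping, and a full optimizer would clamp the result to [lb, ub]:

```python
import numpy as np

def malo_update(RA_t, RE_t, i, p, lb, ub, rng):
    """One MALO position update: the classic ALO average of the roulette-wheel
    walk RA_t and the elite walk RE_t, rescaled by the step-length factor dx."""
    Tu = i / p                              # new parameter: ant index ratio
    mu = 10 ** Tu / 100                     # assumed grouping of the pseudocode
    y = 2 * rng.random(RA_t.shape) - 1      # random vector in [-1, 1]
    dx = ((1 + mu) ** np.abs(y) - 1) / mu * np.sign(y) * (ub - lb)
    return (RA_t + RE_t) / 2 / dx

rng = np.random.default_rng(0)
RA, RE = rng.random(5), rng.random(5)       # toy walks around antlion and elite
print(malo_update(RA, RE, i=3, p=30, lb=-10.0, ub=10.0, rng=rng))
```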
To validate the performance of MALO, it is tested on benchmark functions and applied to instance reduction on many real-world datasets.
The benchmark-function tests demonstrate that the proposed MALO algorithm can escape local optima and provides a better convergence rate compared to the basic ALO algorithm and some other well-known optimizers. The tests cover two cases: Case I is performed on twenty-three benchmark functions with results recorded at 500 iterations, and Case II on thirteen benchmark functions with results recorded at 1000 iterations.
The application of MALO to the instance reduction problem is then performed in two scenarios: in the first, MALO reduces the instances of the training set of both balanced and imbalanced datasets in their original form; the second is performed on oversampled imbalanced datasets. In the first scenario, the proposed MALO starts the search with randomly generated search agents (antlions) for instance reduction. Binary encoding is used as the representation scheme of the proposed MALO in the instance reduction problem: each search agent is represented as a vector of binary elements, and each data instance is marked as either present ('1') or absent ('0'). The 1s represent the retained instances, while the 0s represent the removed instances. The search agents in MALO are evaluated using the G-mean, defined in Equation (21), as the fitness value. The G-mean is used as an accuracy metric for imbalanced data because it can simultaneously measure the accuracies of both classes (majority and minority).
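A minimal sketch of this fitness evaluation, assuming scikit-learn's SVC as the SVM (the paper does not specify an implementation), could look as follows:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import recall_score

def fitness(mask, X_train, y_train, X_val, y_val):
    """G-mean (Equation (21)) of an SVM trained only on the instances kept by
    a binary mask: 1 = instance retained, 0 = instance removed."""
    keep = mask.astype(bool)
    if keep.sum() == 0 or len(np.unique(y_train[keep])) < 2:
        return 0.0                                  # degenerate subset
    clf = SVC().fit(X_train[keep], y_train[keep])
    y_pred = clf.predict(X_val)
    tpr = recall_score(y_val, y_pred, pos_label=1)  # sensitivity
    tnr = recall_score(y_val, y_pred, pos_label=0)  # specificity
    return np.sqrt(tpr * tnr)
```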
Figure 2 depicts the flowchart of the MALO method and its application to the instance reduction problem for balanced and imbalanced real-world datasets. As shown in this figure, the original data (balanced or imbalanced) are first divided into three subsets: training (50%), validation (25%), and testing (25%). The proposed MALO tries to find an optimal subset of the training set, tested in each iteration using the validation set with the G-mean of the SVM classifier as the fitness function. The population consists of vectors of zeros and ones, each vector the same size as the number of training examples: a one denotes that the corresponding example stays in the training set, and a zero that it is removed. For each vector of the population, the SVM classifier is trained on the corresponding training subset and evaluated on the validation set. MALO uses the G-mean as its fitness function in each iteration until the optimal training set is found. When the termination criteria are satisfied, the best search agent (antlion) is assumed to have converged; this agent, with the highest fitness value, is decoded into the solution consisting of the optimal reduced training dataset. The SVM classifier is then trained on the optimal training subset resulting from MALO and tested on the testing subset.
Figure 2 also indicates the second scenario, which is performed on the oversampled imbalanced datasets. After dividing the dataset into training, validation, and testing subsets, the imbalanced training set is rebalanced by a specific oversampling algorithm, the Synthetic Minority Oversampling Technique (SMOTE) [34]; subsequently, the proposed MALO tries to find an optimal subset of the obtained balanced dataset.
State-of-the-art oversampling algorithms can thus be fully exploited to obtain an ideal training set. MALO can reduce the number of instances of both the majority and minority classes and provides a training set that is more suitable for a specified classifier.
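The second-scenario pipeline can be sketched as below, using the imbalanced-learn implementation of SMOTE and synthetic data for illustration; run_malo stands for the optimizer loop of Algorithm 1 and is hypothetical:

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, weights=[0.9, 0.1], random_state=0)
# 50/25/25 split into training, validation, and testing, as in Figure 2
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, stratify=y, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=0)

# Rebalance the training split first, then let MALO search for a reduced subset
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
# best_mask = run_malo(X_bal, y_bal, X_val, y_val)   # hypothetical optimizer loop
# X_red, y_red = X_bal[best_mask == 1], y_bal[best_mask == 1]
```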
To evaluate the performance of the MALO instance reduction technique, experiments were performed and the results were compared with state-of-the-art instance reduction methods, Grey Wolf Optimization (GWO) [35] and the Whale Optimization Algorithm (WOA) [36], as well as one recently published resampling method.
A Support Vector Machine (SVM) [37] was used in our experiments to measure the classification performance on the reduced training data produced by the MALO instance reduction algorithm, using the evaluation metrics defined in Equations (18)–(22).
For the evaluation metrics, let TN denote true negatives, TP true positives, FN false negatives, and FP false positives; TPR is the true positive rate and TNR is the true negative rate [9].
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \quad (18)$$
$$\text{Recall} = \text{Sensitivity} = TPR = \frac{TP}{TP + FN} \quad (19)$$
$$\text{Specificity} = TNR = \frac{TN}{TN + FP} \quad (20)$$
$$\text{G-mean} = \sqrt{TPR \times TNR} \quad (21)$$
$$\text{Balanced Accuracy (BACC)} = \frac{1}{2}\left(\frac{TP}{TP + FN} + \frac{TN}{TN + FP}\right) \quad (22)$$
The area under the ROC (Receiver Operating Characteristic) curve (AUC) is a performance measurement for classification problems at various threshold settings. The ROC is a probability curve, and the AUC measures the degree of separability.
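The four threshold-based measures follow directly from the confusion-matrix counts; a compact sketch:

```python
import numpy as np

def metrics_from_counts(tp, tn, fp, fn):
    """Equations (18)-(22): Accuracy, Recall/TPR, Specificity/TNR, G-mean, BACC."""
    acc   = (tp + tn) / (tp + tn + fp + fn)
    tpr   = tp / (tp + fn)
    tnr   = tn / (tn + fp)
    gmean = np.sqrt(tpr * tnr)
    bacc  = (tpr + tnr) / 2
    return acc, tpr, tnr, gmean, bacc

print(metrics_from_counts(tp=40, tn=30, fp=10, fn=20))
# AUC, by contrast, is computed from classifier scores over all thresholds,
# e.g. with sklearn.metrics.roc_auc_score(y_true, y_scores)
```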

3. Results and Discussion

To prove the efficiency of the suggested MALO algorithm, we tested its performance in two types of experiments: Experiment 1 was performed using benchmark functions, and Experiment 2 was conducted using the balanced and imbalanced real-world datasets given below.

3.1. Experiment 1: Results of MALO on Benchmark Functions

Generally, the benchmark functions can be classified into three types: unimodal functions (Table 1), multimodal functions (Table 2), and fixed-dimension multimodal functions (Table 3). In these tables, Dim represents the dimension of the variables, Range denotes the range of variation of the optimization variables, and $f_{min}$ represents the optimal value quoted in the literature. Figure 3 shows the 2D versions of the cost function for the F1, F8, F15, and F22 test problems used in this study.
In this paper, the experiments were performed on a 64-bit Windows 10 system with an Intel(R) Core(TM) i7 running at 2.40 GHz, 16 GB of RAM, and Matlab R2018a. The proposed MALO algorithm was run 30 times independently on each benchmark function using 30 candidate solutions (antlions). The average (AV) and standard deviation (STD) of the best solution obtained in the last iteration were recorded. To analyze the impact of the maximum number of iterations on the MALO algorithm, experiments were conducted in two cases: Table 4 reports the comparison for a maximum of 500 iterations (Case I), and Table 5 and Table 6 for a maximum of 1000 iterations (Case II).
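The 30-run bookkeeping behind Tables 4–6 amounts to the following; run_once is a hypothetical function returning the best fitness of one independent run:

```python
import numpy as np

def benchmark(run_once, n_runs=30):
    """Average (AV) and standard deviation (STD) of the best solutions found
    over n_runs independent runs, as reported in Tables 4-6."""
    best = np.array([run_once(seed) for seed in range(n_runs)])
    return best.mean(), best.std()
```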

3.1.1. Case I (Maximum Number of Iterations = 500)

In this case, the general control parameters of the algorithms, such as the number of candidate solutions and the maximum number of iterations, were chosen the same as those given by [36]. In Table 4, simulation results for 30 candidate solutions and a maximum of 500 iterations are presented. For verification of the results, the proposed MALO algorithm was compared to the basic ALO algorithm and to well-known and recent algorithms such as the Whale Optimization Algorithm (WOA) [36], the Gravitational Search Algorithm (GSA) [38], PSO [39], and the Archimedes optimization algorithm (AOA) [40]. The results of WOA, GSA, and PSO were obtained from [36], while MALO and the basic ALO were implemented with the same parameters taken from [33], and AOA was implemented with the parameters mentioned in its original paper [40].
Unimodal functions (F1–F7) have one global minimum. These functions are suitable for testing the convergence rate and the exploitation capability of algorithms. It can be inferred from Table 4 that the MALO algorithm is very competitive with the basic ALO, WOA, GSA, PSO, and AOA. In particular, for F1–F4, only MALO provides the exact optimum value. Moreover, the MALO algorithm shows the best optimization performance for functions F5 and F7 in terms of average and standard deviation. As a result, the MALO algorithm has a high exploitation capability.
In contrast to unimodal functions, multimodal functions (F8–F13) have a large number of local minima; these benchmark functions are therefore better suited for testing the exploration capability and local optima avoidance of algorithms. Fixed-dimension multimodal functions (F14–F23) have a pre-defined number of design variables and provide different search spaces compared to multimodal functions.
Table 4 shows that MALO, WOA, and AOA provide the exact optimum value for the multimodal function F9, while MALO and AOA achieve the exact optimum value for F11. For F10, F12, F13, F14, F21, and F22, the MALO algorithm is better than ALO, WOA, GSA, PSO, and AOA in terms of average and standard deviation.
Moreover, the MALO algorithm is competitive with GSA for function F23 in terms of average and standard deviation. Thus, MALO also has a high exploration capability, which leads the algorithm to explore the promising areas without disruption. In this case, out of the twenty-three benchmark functions, MALO achieves the best results for fourteen functions, GSA for five, the basic ALO for three, WOA for three, AOA for three, and PSO for two. The convergence curves of ALO and MALO for some of the functions with a maximum of 500 iterations are shown in Figure 4; the iterations are shown on the horizontal axis and the average function values on the vertical axis. As can be observed, the proposed MALO algorithm escapes the local optima and provides a better convergence rate compared to the basic ALO algorithm.

3.1.2. Case II (Maximum Number of Iterations = 1000)

In this case, the number of candidate solutions and the maximum number of iterations were selected the same as those given by [33]. Results for 30 candidate solutions and the maximum number of iterations = 1000 are shown in Table 5 and Table 6. Nine optimization algorithms, including the basic ALO algorithm, Bat Algorithm (BA) [41], States of Matter Search (SMS) [42,43], Particle Swarm Optimization (PSO) [39], Flower Pollination Algorithm (FPA) [44], Genetic Algorithms (GA) [45], Firefly Algorithm (FA) [46,47], Cuckoo Search (CS) [48] and Archimedes optimization algorithm (AOA) [40] were investigated in order to compare the optimization results. Most of the comparative algorithms’ results were taken from [33].
As can be seen from Table 5, the MALO algorithm is very competitive with the basic ALO algorithm, BA, SMS, PSO, FPA, GA, FA, CS, and AOA in optimizing the unimodal functions. MALO achieves exact optimum results for F1–F4 and is the most efficient optimizer for functions F5 and F7. Hence, the MALO algorithm retains a high exploitation capability at a maximum of 1000 iterations.
Table 6 shows the results for the multimodal functions. The MALO algorithm is the second most efficient optimizer for function F8. Moreover, for F9 only MALO gives the exact optimum value, and for F11, MALO and AOA achieve the exact optimum value. MALO and AOA are the most efficient optimizers for F10 in terms of average and standard deviation. Therefore, MALO also retains a high exploration capability at a maximum of 1000 iterations. In this case, out of the thirteen benchmark functions, MALO provides optimum results for nine functions, the basic ALO achieves good results for three, and AOA for three, while the other algorithms fail to provide better results. Figure 5 provides the convergence curves of ALO and MALO for some of the functions with a maximum of 1000 iterations. As illustrated, the proposed MALO algorithm escapes the local optima with a high convergence rate.

3.2. Experiment 2: Results of MALO on Balanced and Imbalanced Real-World Datasets

In this section, some state-of-the-art methods were compared with the proposed method, and the experimental results are recorded. The experiments were performed in two scenarios: the first contains experiments on balanced and imbalanced real-world datasets in their original form, and the second contains experiments on oversampled imbalanced datasets. In these experiments, the reduced datasets were classified by SVM, and datasets with different numbers of instances and attributes were chosen from the UCI machine learning repository [49] to assess the classification performance of the MALO algorithm on balanced and imbalanced data. Table 7 and Table 8 show the characteristics of the datasets.
All experiments were conducted by dividing the instances of each dataset randomly into three sets: training, testing, and validation. A hybridization of the proposed MALO algorithm and the SVM classifier was applied to perform instance reduction on the training set, and 30 independent runs of the algorithm were performed to record the performance measures Accuracy, BACC, G-mean, AUC, and run time for each test set.

3.2.1. First Scenario

This scenario was designed to assess the effectiveness of the proposed instance reduction method on both balanced and imbalanced datasets in their original form. The proposed method was implemented, and some experiments were conducted to enhance overall performance by removing redundant instances to obtain better values for four evaluation measures: Accuracy, BACC, G-mean, and AUC in addition to the run time.
Table 9 presents the overall performance of the MALO instance reduction method compared with the basic ALO algorithm and two state-of-the-art instance reduction techniques: Grey Wolf Optimization (GWO) and the Whale Optimization Algorithm (WOA). The results in Table 9 clearly show that MALO outperforms these methods in terms of classification accuracy and BACC values on almost all balanced and imbalanced datasets (12 datasets out of 15), which proves the stability of our proposal against other instance reduction methods.
It can also be noticed from Table 9 that our proposed instance reduction method ranks top in G-mean and AUC (9 datasets out of 15), where it provides a significantly higher number of best G-mean and AUC values than the other compared methods. This superiority shows that our method has improved the trade-off between sensitivity and specificity, indicating that our instance reduction method reduces false positives and false negatives relative to state-of-the-art methods.
The MALO algorithm also achieves increments of up to 4% in Accuracy, 3% in BACC, 15% in G-mean, and 9% in AUC for the "Breast cancer" dataset over the basic ALO algorithm.
Results in Table 9 also indicate that the run time of the proposed MALO is lower than that of the ALO algorithm for all datasets, and that MALO saves computational time (better in 9 datasets out of 15) while maintaining the best performance among the compared algorithms. The convergence curves of fitness function versus iterations for ALO, MALO, GWO, and WOA on the glass0 and Breast_tissue datasets are shown in Figure 6; the iterations are shown on the horizontal axis and the fitness values on the vertical axis. As can be observed, the proposed MALO algorithm provides a better convergence rate than the basic ALO algorithm and the other compared algorithms.
To perform the comparison, the non-parametric Wilcoxon signed-rank test (a paired-difference, two-sided signed-rank test) [50] was used to perform a statistical significance analysis and derive fairly strong conclusions. All the methods were compared with MALO on each dataset. For each pair of compared methods, the differences were calculated and ranked from 1 (smallest) to 15 (largest), and the signs ('+' or '−') were assigned to the corresponding ranked differences. R+ and R− denote the sums of all the positive and negative ranks, respectively. The T value, where T = min{R+, R−}, was compared against a critical value of 25 at a significance level of α = 0.05 for 15 datasets. The null hypothesis was that all performance differences between any two compared methods occur by chance, and it was rejected only if T ≤ 25 (the critical value).
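For reference, the same test is available in SciPy; the sketch below uses hypothetical per-dataset scores, not the values from Table 10:

```python
from scipy.stats import wilcoxon  # two-sided paired signed-rank test

# Hypothetical per-dataset G-mean values for MALO and one compared method (n = 15)
malo  = [0.91, 0.88, 0.95, 0.80, 0.77, 0.93, 0.85, 0.90, 0.82, 0.79, 0.96, 0.87, 0.84, 0.89, 0.92]
other = [0.88, 0.86, 0.94, 0.78, 0.79, 0.90, 0.83, 0.88, 0.80, 0.80, 0.93, 0.85, 0.83, 0.86, 0.90]

stat, p = wilcoxon(malo, other)    # stat is T = min(R+, R-)
print(f"T = {stat}, p = {p:.4f}")  # reject the null hypothesis if T <= 25 (n = 15, alpha = 0.05)
```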
Table 10, Table 11, Table 12, Table 13, Table 14, Table 15, Table 16 and Table 17 present the significance test results; the addition symbol '+' in Table 14, Table 15, Table 16 and Table 17 indicates that our proposal MALO outperforms the compared methods.
The G-mean comparison: Table 10 shows the significance test results of the average G-mean for MALO vs. ALO, MALO vs. GWO, and MALO vs. WOA, using the SVM classifier. In the case of MALO vs. ALO, MALO is better (positive difference) than ALO for 12 datasets, while ALO is better (negative difference) than MALO for three datasets; the total of the positive ranks is R+ = 100 and the total of the negative ranks is R− = 20. As 15 datasets were used, the T value must be ≤ 25 to reject the null hypothesis at the 0.05 significance level. We can conclude that MALO statistically outperforms ALO, as T = min{R+, R−} = min{100, 20} = 20 < 25. Likewise, in the case of MALO vs. GWO, MALO statistically outperforms GWO, as T = min{95, 24} = 24 < 25.
In the case of MALO vs. WOA, MALO obtains better differences for 13 datasets, while WOA obtains better differences for only two datasets. We can conclude that MALO statistically outperforms WOA, as T = min{106, 14} = 14 < 25.
The T values of Table 10 are listed as summary results in Table 14, which shows that our proposed instance reduction method statistically outperforms the compared methods according to the average G-mean values.
This result is consistent with our expectations, given that G-mean was used as the fitness function in our proposed instance reduction method in these experiments.
The accuracy comparison: Table 11 shows the significance test results of the average accuracy for MALO vs. ALO, MALO vs. GWO, and MALO vs. WOA, using the SVM classifier. In the case of MALO vs. ALO, MALO is better (positive difference) than ALO for 13 datasets, while ALO is better (negative difference) than MALO for two datasets; the total of the positive ranks is R+ = 105 and the total of the negative ranks is R− = 15. We can conclude that MALO statistically outperforms ALO, as T = min{105, 15} = 15 < 25. Likewise, in the case of MALO vs. GWO, MALO statistically outperforms GWO, as T = min{91, 23} = 23 < 25.
In the case of MALO vs. WOA, MALO obtains better differences for 14 datasets, while WOA obtains a better difference for just one dataset. We can conclude that MALO statistically outperforms WOA, as T = min{109, 10} = 10 < 25.
The T values of Table 11 are listed as summary results in Table 15, which shows that our proposed instance reduction method can statistically outperform the compared methods according to the average accuracy values.
The BACC comparison: Table 12 shows the significance test results of the average BACC for MALO vs. ALO, MALO vs. GWO, and MALO vs. WOA, using the SVM classifier. In the case of MALO vs. ALO, MALO is better (positive difference) than ALO for all datasets; the total of the positive ranks is R+ = 120 and the total of the negative ranks is R− = 0. We can conclude that MALO statistically outperforms ALO, as T = min{120, 0} = 0 < 25. In the case of MALO vs. GWO, MALO obtains better differences for 14 datasets, while GWO obtains a better difference for just one dataset; MALO statistically outperforms GWO, as T = min{107, 13} = 13 < 25. In the case of MALO vs. WOA, MALO obtains better differences for 13 datasets, while WOA obtains better differences for only two datasets; MALO statistically outperforms WOA, as T = min{104, 15} = 15 < 25.
The T values of Table 12 are listed as summary results in Table 16, which shows that our proposed instance reduction method can statistically outperform the compared methods according to the average BACC values.
The AUC comparison: Table 13 shows the significance test results of the average AUC for MALO vs. ALO, MALO vs. GWO, and MALO vs. WOA, using the SVM classifier. In the case of MALO vs. ALO, MALO is better (positive difference) than ALO for 13 datasets, while ALO is better (negative difference) than MALO for two datasets; the total of the positive ranks is R+ = 104 and the total of the negative ranks is R− = 16. We can conclude that MALO statistically outperforms ALO, as T = min{104, 16} = 16 < 25. Likewise, in the case of MALO vs. GWO, MALO statistically outperforms GWO, as T = min{103, 14} = 14 < 25. In the case of MALO vs. WOA, MALO obtains better differences for 12 datasets, while WOA is better than MALO for three datasets; MALO statistically outperforms WOA, as T = min{102, 18} = 18 < 25.
The T values of Table 13 are listed as summary results in Table 17, which shows that our proposed instance reduction method can statistically outperform the compared methods according to the average AUC values.
From the preceding results of the Wilcoxon signed-rank test, we can conclude that the proposed instance reduction method statistically outperforms the ALO algorithm and is strongly competitive against the other comparative methods in terms of all used performance measures.

3.2.2. Second Scenario

In this scenario, the overall performance of our MALO algorithm was compared with the results obtained in [51], ALO, GWO, and WOA, and tested again using the 18 imbalanced datasets shown in Table 18, to examine the instance reduction capability of our proposal and evaluate the effect of preprocessing the datasets with the SMOTE oversampling method [34] before applying the MALO algorithm. The authors in [51] proposed a hybrid algorithm named ACOR (Ant Colony Optimization Resampling) to improve the classification performance on imbalanced datasets by improving existing oversampling methods in two main steps: first, rebalancing the imbalanced datasets with one of four commonly used oversampling techniques (SMOTE [34], BSO [52], ADASYN [53], and ROS [54]); second, selecting an optimal subset from the rebalanced training datasets using ant colony optimization [55].
ACOR was evaluated on 18 real imbalanced datasets with three different classifiers: the naive Bayes classifier [56], the C4.5 classifier [57], and the SVM-RBF (Support Vector Machine with Radial Basis Function kernel) classifier [37,58].
In this experiment, we compared MALO with the SMOTE and ACOR-SMOTE results obtained in [51], as well as ALO, GWO, and WOA, using the SVM-RBF classifier. The imbalanced training datasets were resampled using the common SMOTE oversampling method; the proposed MALO was then used to reduce the training set instances. Table 18 shows that MALO outperforms the compared methods in classification accuracy (12 imbalanced datasets out of 18), proving our technique's stability against other instance reduction methods. Additionally, the results in Table 18 indicate that our proposed instance reduction method ranks top in BACC and G-mean (9 imbalanced datasets out of 18), providing a significantly higher number of best BACC and G-mean values than the other compared methods. In comparison, ACOR-SMOTE delivers better results in terms of AUC (11 imbalanced datasets out of 18). The MALO algorithm also shows increments of up to 7% in Accuracy and 3% in BACC for the "Sonar" dataset, 4% in G-mean for the "Sonar", "Wine(1-others)", and "Vowel(3-others)" datasets, and 4% in AUC for the "Sonar" and "Vowel(3-others)" datasets over the basic ALO algorithm.
Figure 7 shows the box-and-whisker plots for the Accuracy, BACC, G-mean, and AUC of the compared methods. The box-and-whisker plot is a standardized way of displaying the distribution of data based on the minimum value, lower quartile (25th percentile), median (50th percentile), upper quartile (75th percentile), and maximum value. The ends of the box are the lower and upper quartiles, which delimit the interquartile range (IQR); the two lines outside the box that extend to the minimum and maximum values are the whiskers, while the median and mean values are drawn as a horizontal line inside the box and a cross sign, respectively. The plots in Figure 7 demonstrate that MALO outperforms ALO in terms of all used measures except BACC, where they have almost the same performance. MALO also shows better performance in terms of accuracy and BACC than SMOTE and ACOR-SMOTE. In addition, MALO outperforms GWO and WOA in terms of all used measures. MALO's mean, median, and upper quartile values are very close to their ideal thresholds, indicating the proposed algorithm's stability against the compared algorithms.
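Plots of this kind are straightforward to reproduce; the sketch below draws box-and-whisker plots from random placeholder scores (not the values behind Figure 7):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
methods = ["SMOTE", "ACOR-SMOTE", "ALO", "MALO"]
scores = [rng.uniform(0.7, 1.0, 18) for _ in methods]  # placeholder G-means, 18 datasets

plt.boxplot(scores, labels=methods, showmeans=True)  # showmeans adds the mean marker
plt.ylabel("G-mean")
plt.show()
```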
The results in scenarios 1 and 2 demonstrate that MALO shows high superiority in minimizing the number of training set instances, consequently maximizing the overall classification performance, compared to the state-of-the-art methods used to reduce the original balanced and imbalanced datasets. Using MALO to reduce the instances of oversampled imbalanced datasets also yields good performance compared to the full oversampled dataset, the recent ACOR-SMOTE instance reduction method, ALO, GWO, and WOA.
The superiority of the MALO instance reduction algorithm in minimizing the number of imbalanced dataset instances in their original form gives it an important advantage, since it improves the performance on imbalanced data without the need for oversampling pre-processing methods, which consume computational time and memory space.

4. Conclusions

An optimization-based method was proposed in this paper for the problem of instance reduction to obtain better results in terms of many metrics in both balanced and imbalanced data. A new modified antlion optimization (MALO) method was adapted for this task after validating its ability in terms of optimization compared to state-of-the-art optimizers using benchmark functions. The results obtained at 500 and 1000 iterations for twenty-three and thirteen benchmark functions, respectively, demonstrated that the proposed MALO algorithm could escape the local optima and provide a better convergence rate as compared to the basic ALO algorithm and state-of-the-art optimizers.
Additionally, the instance reduction results of MALO were compared to the basic Antlion Optimization algorithm and some well-known optimization algorithms on 15 balanced and imbalanced datasets to test the performance in reducing instances of the training data. Furthermore, Antlion Optimization and MALO were used to perform training data reduction on 18 oversampled imbalanced datasets, and the reduced datasets were classified by SVM in all experiments. The results were also compared with one novel resampling method.
The obtained results demonstrated that the proposed MALO was superior in minimizing the number of training set instances, hence maximizing the classification performance while reducing the run time, compared to the state-of-the-art methods used to reduce the original balanced and imbalanced datasets, without the need for oversampling pre-processing methods that consume computational time and memory space. MALO reduced the instances of oversampled imbalanced datasets with better performance compared to the full oversampled dataset, the recently proposed ACOR instance reduction method, ALO, GWO, and WOA.
The MALO algorithm results showed an increment in Accuracy, BACC, G-mean, and AUC rates up to 7%, 3%, 15%, and 9%, respectively, for some datasets over the basic ALO algorithm while keeping less computational time.
The need to determine the best values of the MALO parameters may seem to be a limitation; however, the instance reduction problem in balanced and imbalanced data is a complex problem encountered in many real-world applications, and this limitation can be resolved by adjusting the parameters using different statistical concepts.
Owing to the encouraging outcomes and high performance of MALO in the instance reduction challenge, numerous new evolutionary optimization algorithms can be adjusted for improved outcomes in this hot research area. Multi-objective or many-objective versions of the evolutionary optimization methods can also be adapted to obtain a wider range of non-dominated and alternative solutions that better serve this important task by simultaneously satisfying different conflicting and contradictory objectives within a single optimization routine.
Current or new evolutionary optimization and search methods can also be hybridized for performance increments by employing two or more combined methods and eliminating their possible disadvantages. In this way, a good balance between exploration and exploitation can enhance the performance of the proposed MALO for the instance reduction problem in balanced and imbalanced data.
Different techniques for decreasing the computational cost can be embedded in the proposed MALO. Adaptive versions of the methods can also be proposed. Different initialization methods can be integrated into MALO in order to obtain better results for instance reduction problems and other complex real-world problems by obtaining a more uniform population. As another future direction of work, the proposed model can be adapted for real big data classification processes.

Author Contributions

Conceptualization, All authors; methodology, L.M.E.B. and A.S.D.; software, L.M.E.B. and A.S.D.; validation, All authors; formal analysis, L.M.E.B. and A.S.D.; investigation, S.H.; resources, M.A.I.; data curation, B.A.; writing—original draft preparation, L.M.E.B., A.S.D., S.H., B.A. and M.A.I.; writing—review and editing, B.A.; visualization, L.M.E.B., A.S.D., M.A.C. and S.K., A.S; supervision, L.M.E.B., A.S.D., B.A., S.H. and M.A.I.; project administration, L.M.E.B. and A.S.D.; funding acquisition, M.A.C. and S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data available at: https://archive.ics.uci.edu/.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abdar, M.; Nasarian, E.; Zhou, X.; Bargshady, G.; Wijayaningrum, V.N.; Hussain, S. Performance Improvement of Decision Trees for Diagnosis of Coronary Artery Disease Using Multi Filtering Approach. In Proceedings of the 2019 IEEE 4th International Conference on Computer and Communication Systems (ICCCS), Singapore, 23–25 February 2019; IEEE: Manhattan, NY, USA, 2019; pp. 26–30.
  2. Shoeibi, A.; Ghassemi, N.; Khodatars, M.; Jafari, M.; Hussain, S.; Alizadehsani, R.; Acharya, U.R. Epileptic seizure detection using deep learning techniques: A Review. Int. J. Environ. Res. Public Health 2021, 18, 5780.
  3. Khodatars, M.; Shoeibi, A.; Sadeghi, D.; Ghassemi, N.; Jafari, M.; Moridian, P.; Khadem, A.; Alizadehsani, R.; Zare, A.; Kong, Y.; et al. Deep learning for neuroimaging-based diagnosis and rehabilitation of Autism Spectrum Disorder: A review. Comput. Biol. Med. 2021, 139, 104949.
  4. Alizadehsani, R.; Sani, Z.A.; Behjati, M.; Roshanzamir, Z.; Hussain, S.; Abedini, N.; Hasanzadeh, F.; Khosravi, A.; Shoeibi, A.; Roshanzamir, M.; et al. Risk factors prediction, clinical outcomes, and mortality in COVID-19 patients. J. Med. Virol. 2021, 93, 2307–2320.
  5. Abdar, M.; Pourpanah, F.; Hussain, S.; Rezazadegan, D.; Liu, L.; Ghavamzadeh, M.; Fieguth, P.; Cao, X.; Khosravi, A.; Acharya, U.R.; et al. A review of uncertainty quantification in deep learning: Techniques, applications and challenges. Inf. Fusion 2021, 76, 243–297.
  6. Koohestani, A.; Abdar, M.; Hussain, S.; Khosravi, A.; Nahavandi, D.; Nahavandi, S.; Alizadehsani, R. Analysis of Driver Performance Using Hybrid of Weighted Ensemble Learning Technique and Evolutionary Algorithms. Arab. J. Sci. Eng. 2021, 46, 3567–3580.
  7. Hussain, S.; Hazarika, G. Educational Data Mining Model Using Rattle. Int. J. Adv. Comput. Sci. Appl. 2014, 5.
  8. Basiri, M.E.; Nemati, S.; Abdar, M.; Cambria, E.; Acharya, U.R. ABCDM: An Attention-based Bidirectional CNN-RNN Deep Model for sentiment analysis. Futur. Gener. Comput. Syst. 2021, 115, 279–294.
  9. Desuky, A.S.; Hussain, S. An Improved Hybrid Approach for Handling Class Imbalance Problem. Arab. J. Sci. Eng. 2021, 46, 3853–3864.
  10. Chou, J.-S.; Truong, D.-N. A novel metaheuristic optimizer inspired by behavior of jellyfish in ocean. Appl. Math. Comput. 2021, 389, 125535.
  11. Negi, G.; Kumar, A.; Pant, S.; Ram, M. Optimization of Complex System Reliability using Hybrid Grey Wolf Optimizer. Decis. Mak. Appl. Manag. Eng. 2021, 4, 241–256.
  12. Das, M.; Roy, A.; Maity, S.; Kar, S.; Sengupta, S. Solving fuzzy dynamic ship routing and scheduling problem through new genetic algorithm. Decis. Mak. Appl. Manag. Eng. 2021.
  13. Ganguly, S. Multi-objective distributed generation penetration planning with load model using particle swarm optimization. Decis. Mak. Appl. Manag. Eng. 2020, 3, 30–42.
  14. Carvajal, O.; Melin, P.; Miramontes, I.; Prado-Arechiga, G. Optimal design of a general type-2 fuzzy classifier for the pulse level and its hardware implementation. Eng. Appl. Artif. Intell. 2021, 97, 104069.
  15. Precup, R.-E.; David, R.-C.; Roman, R.-C.; Petriu, E.M.; Szedlak-Stinean, A.-I. Slime Mould Algorithm-Based Tuning of Cost-Effective Fuzzy Controllers for Servo Systems. Int. J. Comput. Intell. Syst. 2021, 14, 1042–1052.
  16. Valdez, F.; Castillo, O.; Cortes-Antonio, P.; Melin, P. A survey of Type-2 fuzzy logic controller design using nature inspired optimization. J. Intell. Fuzzy Syst. 2020, 39, 6169–6179.
  17. Hu, H.; Wang, H.; Bai, Y.; Liu, M. Determination of endometrial carcinoma with gene expression based on optimized Elman neural network. Appl. Math. Comput. 2019, 341, 204–214.
  18. Gupta, N.; Jain, R.; Gupta, D.; Khanna, A.; Khamparia, A. Modified Ant Lion Optimization Algorithm for Improved Diagnosis of Thyroid Disease. In Advances in Human Error, Reliability, Resilience, and Performance; Springer Science and Business Media LLC: Singapore, 2020; pp. 599–610.
  19. El-Kenawy, E.S.M.; Eid, M.M.; Saber, M.; Ibrahim, A. MbGWO-SFS: Modified binary grey wolf optimizer based on stochastic fractal search for feature selection. IEEE Access 2020, 8, 107635–107649.
  20. Lin, K.-C.; Zhang, K.-Y.; Huang, Y.-H.; Hung, J.C.; Yen, N.Y. Feature selection based on an improved cat swarm optimization algorithm for big data classification. J. Supercomput. 2016, 72, 3210–3221.
  21. Wan, Y.; Wang, M.; Ye, Z.; Lai, X. A feature selection method based on modified binary coded ant colony optimization algorithm. Appl. Soft Comput. 2016, 49, 248–258.
  22. Zakeri, A.; Hokmabadi, A. Efficient feature selection method using real-valued grasshopper optimization algorithm. Expert Syst. Appl. 2019, 119, 61–72.
  23. Nanni, L.; Lumini, A. Particle swarm optimization for prototype reduction. Neurocomputing 2009, 72, 1092–1097.
  24. Zhai, T.; He, Z. Instance selection for time series classification based on immune binary particle swarm optimization. Knowl. Based Syst. 2013, 49, 106–115.
  25. Hamidzadeh, J.; Monsefi, R.; Yazdi, H.S. LMIRA: Large Margin Instance Reduction Algorithm. Neurocomputing 2014, 145, 477–487.
  26. Saidi, M.; Bechar, M.E.A.; Settouti, N.; Chikh, M.A. Instances selection algorithm by ensemble margin. J. Exp. Theor. Artif. Intell. 2017, 30, 457–478.
  27. Carbonera, J.L.; Abel, M. A Density-Based Approach for Instance Selection. In Proceedings of the 2015 IEEE 27th International Conference on Tools with Artificial Intelligence (ICTAI), Vietri sul Mare, Italy, 9–11 November 2015; IEEE: Manhattan, NY, USA, 2015; pp. 768–774.
  28. De Haro-García, A.; Cerruela-García, G.; García-Pedrajas, N. Instance selection based on boosting for instance-based learners. Pattern Recognit. 2019, 96, 106959.
  29. Wang, M.; Heidari, A.A.; Chen, M.; Chen, H.; Zhao, X.; Cai, X. Exploratory differential ant lion-based optimization. Expert Syst. Appl. 2020, 159, 113548.
  30. Pierezan, J.; Coelho, L.d.S.; Mariani, V.C.; Goudos, S.K.; Boursianis, A.D.; Kantartzis, N.V.; Antonopoulos, C.S.; Nikolaidis, S. Multiobjective Ant Lion Approaches Applied to Electromagnetic Device Optimization. Technologies 2021, 9, 35.
  31. Assiri, A.S.; Hussien, A.G.; Amin, M. Ant Lion Optimization: Variants, Hybrids, and Applications. IEEE Access 2020, 8, 77746–77764.
  32. Tian, T.; Liu, C.; Guo, Q.; Yuan, Y.; Li, W.; Yan, Q. An improved ant lion optimization algorithm and its application in hydraulic turbine governing system parameter identification. Energies 2018, 11, 95.
  33. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98.
  34. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357.
  35. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  36. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  37. Wang, L. Support Vector Machines: Theory and Applications; Springer: Berlin/Heidelberg, Germany, 2005.
  38. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248.
  39. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
  40. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551.
  41. Yang, X.-S. A New Metaheuristic Bat-Inspired Algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); González, J.R., Pelta, D.A., Cruz, C., Terrazas, G., Krasnogor, N., Eds.; Studies in Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2010; pp. 65–74.
  42. Cuevas, E.; Echavarría, A.; Ortegón, M.A.R. An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation. Appl. Intell. 2014, 40, 256–272.
  43. Cuevas, E.; Echavarría, A.; Zaldívar, D.; Pérez-Cisneros, M. A novel evolutionary algorithm inspired by the states of matter for template matching. Expert Syst. Appl. 2013, 40, 6359–6373.
  44. Yang, X.-S. Flower Pollination Algorithm for Global Optimization. In Proceedings of Unconventional Computation and Natural Computation (UCNC 2012), Orléans, France, 3–7 September 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 240–249.
  45. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73.
  46. Yang, X.-S. Firefly algorithm, Lévy flights and global optimization. In Research and Development in Intelligent Systems XXVI; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2009; pp. 209–218.
  47. Yang, X.-S. Firefly algorithm, stochastic test functions and design optimisation. Int. J. Bio-Inspired Comput. 2010, 2, 78.
  48. Yang, X.S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214.
  49. Asuncion, A.; Newman, D. UCI Machine Learning Repository. 2007. Available online: https://archive.ics.uci.edu/ (accessed on 5 February 2022).
  50. Sheskin, D.J. Handbook of Parametric and Nonparametric Statistical Procedures; CRC Press: Boca Raton, FL, USA, 2003; Volume 51, p. 374. ISBN 1420036262.
  51. Li, M.; Xiong, A.; Wang, L.; Deng, S.; Ye, J. ACO Resampling: Enhancing the performance of oversampling methods for class imbalance classification. Knowl. Based Syst. 2020, 196, 105818.
  52. Han, H.; Wang, W.Y.; Mao, B.H. Borderline-SMOTE: A new over-sampling method in imbalanced data sets learning. In International Conference on Intelligent Computing; Springer: Berlin/Heidelberg, Germany, 2005; pp. 878–887.
  53. He, H.; Bai, Y.; Garcia, E.A.; Li, S. ADASYN: Adaptive synthetic sampling approach for imbalanced learning. In Proceedings of the IEEE International Joint Conference on Neural Networks, Hong Kong, China, 1–8 June 2008; pp. 1322–1328.
  54. Mease, D.; Wyner, A.J.; Buja, A. Boosted classification trees and class probability/quantile estimation. J. Mach. Learn. Res. 2007, 8, 409–439.
  55. Colorni, A.; Dorigo, M.; Maniezzo, V. Distributed optimization by ant colonies. In Proceedings of the First European Conference on Artificial Life (ECAL'91), Paris, France, 11–13 December 1991; Elsevier Publishing: Amsterdam, The Netherlands, 1991; Volume 142, pp. 134–142.
  56. Youn, E.; Jeong, M.K. Class dependent feature scaling method using naive Bayes classifier for text datamining. Pattern Recognit. Lett. 2009, 30, 477–485.
  57. Quinlan, J.R. C4.5: Programs for Machine Learning; Elsevier: Amsterdam, The Netherlands, 2014.
  58. Cristianini, N.; Shawe-Taylor, J. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods; Cambridge University Press: Cambridge, UK, 2000.
Figure 1. Different categories of meta-heuristic algorithms [10].
Figure 2. Flow chart of MALO method.
Figure 3. 2-D versions of the cost function for some benchmark problems.
Figure 4. Convergence curves of ALO and MALO on six of the benchmark functions (500 iterations).
Figure 5. Convergence curves of ALO and MALO on some of the benchmark functions (1000 iterations).
Figure 6. Convergence curves of ALO, MALO, GWO, and WOA on two datasets.
Figure 7. Box and whiskers plots for Accuracy, BACC, G-mean, and AUC of compared methods.
Table 1. Description of unimodal benchmark functions ($F_1$–$F_7$).

Function | Dim | Range | $f_{min}$
$F_1(x) = \sum_{i=1}^{n} x_i^2$ | 30 | [−100, 100] | 0
$F_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 30 | [−10, 10] | 0
$F_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | 30 | [−100, 100] | 0
$F_4(x) = \max_i \{ |x_i|,\ 1 \le i \le n \}$ | 30 | [−100, 100] | 0
$F_5(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | 30 | [−30, 30] | 0
$F_6(x) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2$ | 30 | [−100, 100] | 0
$F_7(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{rand}[0, 1)$ | 30 | [−1.28, 1.28] | 0
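These definitions translate directly into code; the following sketch implements three of them (F1, F5, and F7) for NumPy vector inputs, which may help when re-running the benchmark comparison.

```python
import numpy as np

def f1(x):   # sphere: sum of squares
    return np.sum(x ** 2)

def f5(x):   # Rosenbrock
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

def f7(x):   # quartic with additive uniform noise on [0, 1)
    return np.sum(np.arange(1, x.size + 1) * x ** 4) + np.random.rand()
```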
Table 2. Description of multimodal benchmark functions ($F_8$–$F_{13}$).

Function | Dim | Range | $f_{min}$
$F_8(x) = \sum_{i=1}^{n} -x_i \sin\left(\sqrt{|x_i|}\right)$ | 30 | [−500, 500] | −418.9829 × 5
$F_9(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]$ | 30 | [−5.12, 5.12] | 0
$F_{10}(x) = -20 \exp\left(-0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e$ | 30 | [−32, 32] | 0
$F_{11}(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | 30 | [−600, 600] | 0
$F_{12}(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, where $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k (-x_i - a)^m, & x_i < -a \end{cases}$ | 30 | [−50, 50] | 0
$F_{13}(x) = 0.1 \left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n} (x_i - 1)^2 \left[ 1 + \sin^2(3\pi x_i + 1) \right] + (x_n - 1)^2 \left[ 1 + \sin^2(2\pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ | 30 | [−50, 50] | 0
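Of the multimodal set, the Ackley function $F_{10}$ is representative; a short sketch for vector inputs:

```python
import numpy as np

def f10(x):  # Ackley; global minimum 0 at the origin
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)
```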
Table 3. Description of fixed-dimension multimodal benchmark functions ($F_{14}$–$F_{23}$).

Function | Dim | Range | $f_{min}$
$F_{14}(x) = \left( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right)^{-1}$ | 2 | [−65, 65] | 1
$F_{15}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$ | 4 | [−5, 5] | 0.00030
$F_{16}(x) = 4 x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4$ | 2 | [−5, 5] | −1.0316
$F_{17}(x) = \left( x_2 - \frac{5.1}{4\pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \frac{1}{8\pi} \right) \cos x_1 + 10$ | 2 | [−5, 5] | 0.398
$F_{18}(x) = \left[ 1 + (x_1 + x_2 + 1)^2 (19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2) \right] \times \left[ 30 + (2 x_1 - 3 x_2)^2 (18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2) \right]$ | 2 | [−2, 2] | 3
$F_{19}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \right)$ | 3 | [1, 3] | −3.86
$F_{20}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right)$ | 6 | [0, 1] | −3.32
$F_{21}(x) = -\sum_{i=1}^{5} \left[ (x - a_i)(x - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.1532
$F_{22}(x) = -\sum_{i=1}^{7} \left[ (x - a_i)(x - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.4028
$F_{23}(x) = -\sum_{i=1}^{10} \left[ (x - a_i)(x - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.5363
Table 4. Results of MALO and other algorithms for benchmark functions (500 iterations).

Fn. | MALO (AV, STD) | ALO (AV, STD) | WOA (AV, STD) | GSA (AV, STD) | PSO (AV, STD) | AOA (AV, STD)
F1008.23 × 10−096.10 × 10−091.41 × 10−304.91 × 10−302.53 × 10−169.67 × 10−170.0001360.0002023.74 × 10−831.67 × 10−82
F2005.52 × 10−011.06241.06 × 10−212.39 × 10−210.0556550.1940740.0421440.0454211.60 × 10−466.66 × 10−46
F3000.05530.10395.39 × 10−072.93 × 10−06896.5347318.955970.1256222.119241.33 × 10−735.79 × 10−73
F4002.70 × 10−030.00560.0725810.397477.354871.7414521.0864810.3170394.06 × 10−411.74 × 10−40
F50.00170.003199.4078209.434127.865580.76362667.5430962.2253496.7183260.115592.89 × 10018.33 × 10−02
F67.61 × 10−061.29 × 10−057.49 × 10−093.71 × 10−093.1162660.5324292.5 × 10−161.74 × 10−160.0001028.28 × 10−055.733.49 × 10−01
F70.00045110.00034580.0270.02020.0014250.0011490.0894410.043390.1228540.0449576.27 × 10−044.71 × 10−04
F8−4188.62.0863−2427.2490.0236−5080.76695.7968−2821.07493.0375−4841.291152.814−173,3373.00 × 1005
F90019.79968.54840025.968417.47006846.7042311.6293800
F108.882 × 10−1600.43670.76967.40439.8975720.0620870.236280.2760150.509011.60 × 10018.19 × 1000
F11000.20750.09870.0002890.00158627.701545.0403430.0092150.00772400
F124.143 × 10−067.564 × 10−062.47512.28150.3396760.2148641.7996170.951140.0069170.0263018.44 × 10−011.79 × 10−01
F131.675 × 10−052.125 × 10−050.00250.00541.8890150.2660888.8990847.1262410.0066750.0089072.911.27 × 10−01
F140.99910.00421.9551.43282.1119732.4985945.8598383.8312993.6271682.5608281.041.09 × 10−01
F150.00174.855 × 10−070.00320.00620.0005720.0003240.0036730.0016470.0005770.0002227.70 × 10−043.09 × 10−04
F16−0.21060.3333−1.03168.41 × 10−14−1.031634.2 × 10−07−1.031634.88 × 10−16−1.031636.25 × 10−16−1.031552.82 × 10−04
F170.79130.12440.39791.95 × 10−130.3979142.7 × 10−050.39788700.39788700.39801.0829 × 10−04
F1828.01448.94334.606 × 10−1334.22 × 10−1534.17 × 10−1531.33 × 10−153.431.14
F19−3.22310.4089−3.86287.48 × 10−12−3.856160.002706−3.862782.29 × 10−15−3.862782.58 × 10−15−3.851.14 × 10−02
F20−1.47660.3933−3.27410.0597−2.981050.376653−3.317780.023081−3.266340.060516−2.914942.18 × 10−01
F21−10.1190.0571−6.21233.1996−7.049183.629551−5.955123.737079−6.86513.019644−6.016141.84
F22−10.37230.0528−6.67343.4176−8.181783.829202−9.684472.014088−8.456533.087094−6.823241.97
F23−10.51160.0694−5.9573.6373−9.342382.414737−10.53642.6 × 10−15−9.952911.782786−6.041641.89
Table 5. Results of MALO and other algorithms for unimodal functions (1000 iterations).

Function | MALO (AV, STD) | ALO (AV, STD) | BA (AV, STD) | SMS (AV, STD) | PSO (AV, STD)
F1002.59 × 10−101.65 × 10−100.7736220.5281340.0569870.0146892.70 × 10−091.00 × 10−09
F2001.84 × 10−066.58 × 10−070.3345833.8160220.0068480.0015777.15 × 10−052.26 × 10−05
F3006.07 × 10−106.34 × 10−100.1153030.7660360.9598650.823454.71 × 10−061.49 × 10−06
F4001.36 × 10−081.81 × 10−090.1921850.8902660.2765940.0057383.25 × 10−071.02 × 10−08
F50.00053090.0008980.3467720.1095840.3340770.3000370.0853480.1401490.1234010.216251
F61.90 × 10−063.31 × 10−062.56 × 10−101.09 × 10−100.7788490.673920.1253230.0849985.23 × 10−072.74 × 10−06
F71.83 × 10−041.65 × 10−040.0042920.0050890.1374830.1126710.0003040.0002580.0013980.001269
Function | FPA (AV, STD) | GA (AV, STD) | FA (AV, STD) | CS (AV, STD) | AOA (AV, STD)
F11.06 × 10−071.27 × 10−070.1188420.1256060.0396150.014490.00650.0002051.2 × 10−2220
F20.00062420.0001760.1452240.0532270.0503460.0123480.2120.03983.5 × 10−1227.9 × 10−122
F35.67 × 10−083.90 × 10−080.139020.1211610.0492730.0194090.2470.02143 × 10−1960
F40.00383790.0021860.1579510.8620290.1455130.0311711.12 × 10−058.25 × 10−062.7 × 10−1065.9 × 10−106
F50.78120.3668910.7141570.9727112.1758921.4472510.0071970.0072228.76510.1116
F61.08 × 10−071.25 × 10−070.1679180.8686380.058730.0144775.95 × 10−051.08 × 10−060.94640.1763
F70.00310530.0013670.0100730.0032630.0008530.0005040.0013210.0007280.0005340.000218
Table 6. Results of MALO and other algorithms for multimodal functions (1000 iterations).

Function | MALO (AV, STD) | ALO (AV, STD) | BA (AV, STD) | SMS (AV, STD) | PSO (AV, STD)
F8−4189.40.6265−1606.28314.4302−1065.88858.498−4.207359.36 × 10−16−1367.01146.4089
F9007.71 × 10−068.45 × 10−061.2337480.6864471.325120.3262390.2785880.218991
F108.88 × 10−1603.73 × 10−151.50 × 10−150.1293590.0432518.88 × 10−068.56 × 10−091.11 × 10−092.39 × 10−11
F11000.0186040.0095451.4515750.5703090.706090.9079540.2736740.204348
F121.781 × 10−062.90 × 10−069.75 × 10−129.33 × 10−120.3959770.9933250.123340.0408989.42 × 10−092.31 × 10−10
F133.78 × 10−065.06 × 10−062.00 × 10−111.13 × 10−110.3866310.1219860.01350.0002881.35 × 10−072.88 × 10−08
Function | FPA (AV, STD) | GA (AV, STD) | FA (AV, STD) | CS (AV, STD) | AOA (AV, STD)
F8−1842.426250.42824−2091.642.47235−1245.59353.2667−2094.910.007616−6.287 × 10111.406 × 1012
F90.27329460.0685830.6592710.8157510.2634580.1828240.1273280.00265510.466110.068
F100.00739870.0070960.9561110.8077010.1683060.0507968.16 × 10−091.63 × 10−088.88 × 10−160
F110.08502170.0400460.4878090.2177820.0998150.0244660.1226780.04967300
F120.00026570.0005530.1107690.0021520.1260760.2632015.60 × 10−091.58 × 10−100.68040.496
F133.67 × 10−063.51 × 10−061.29 × 10−010.0688510.002130.0012384.88 × 10−066.09 × 10−070.75730.191
Table 7. Balanced and imbalanced datasets characteristics.

No. | Dataset | #Instances | #Features | Imbalance Ratio
1 | Breast cancer | 286 | 10 | 2.33
2 | Parkinsons | 195 | 22 | 3.06
3 | Crx | 307 | 15 | 1.20
4 | vehicle1 | 946 | 18 | 3.36
5 | vehicle2 | 946 | 18 | 3.19
6 | Heart | 267 | 13 | 1.27
7 | glass0 | 214 | 9 | 12.00
8 | Ionosphere | 351 | 34 | 1.78
9 | Breast_tissue | 106 | 9 | 1.94
10 | Tic-tac-toe | 958 | 9 | 1.86
11 | Pima | 768 | 8 | 1.86
12 | Wdbc | 569 | 30 | 1.70
13 | Liver | 345 | 6 | 1.38
14 | Wisconsin | 699 | 9 | 1.86
15 | Bupa | 345 | 6 | 1.38
Table 8. Imbalanced datasets characteristics.

No. | Dataset | #Instances | #Features | Imbalance Ratio
1 | Sonar | 208 | 60 | 1.28
2 | Ecoli (im-others) | 336 | 7 | 3.35
3 | Ionsphere | 351 | 34 | 1.78
4 | Pid | 768 | 8 | 1.86
5 | Segment (BRICKFACE-others) | 2310 | 19 | 6.14
6 | Libra (123-others) | 360 | 90 | 5
7 | Libra (456-others) | 360 | 90 | 5
8 | Vehicle (van-others) | 846 | 18 | 3.25
9 | Glass (tableware-others) | 214 | 9 | 22.78
10 | Wine (1-others) | 178 | 13 | 2.03
11 | Wine (3-others) | 178 | 13 | 2.7
12 | Yeast (POX-others) | 1484 | 8 | 99
13 | Yeast (ME2-others) | 1484 | 8 | 32.33
14 | Abalone (18-9) | 731 | 8 | 16.4
15 | Vowel (1-others) | 528 | 10 | 10.11
16 | Vowel (2-others) | 528 | 10 | 10.11
17 | Vowel (3-others) | 528 | 10 | 10.11
18 | German | 1000 | 24 | 2.33
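In both tables, the imbalance ratio is read here as the majority-class size divided by the minority-class size (the paper does not spell out the formula, so this is our assumption). A one-liner for integer-encoded labels:

```python
import numpy as np

def imbalance_ratio(y):
    counts = np.bincount(y)             # assumes integer-encoded class labels
    counts = counts[counts > 0]
    return counts.max() / counts.min()  # majority size / minority size
```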
Table 9. Performances of MALO, ALO, GWO, and WOA on balanced and imbalanced real-world datasets.

Metric | ALO | MALO | GWO | WOA

Breast cancer
Accuracy | 75.36 | 79.71 | 79.71 | 78.26
BACC | 73.62 | 76.23 | 74.78 | 75.36
G-mean | 53.07 | 69.47 | 65.00 | 63.57
AUC | 61.94 | 71.38 | 69.44 | 67.40
Time | 0.0232 | 0.0202 | 0.0165 | 0.0231

Wisconsin
Accuracy | 95.88 | 96.47 | 95.88 | 95.88
BACC | 95.65 | 96 | 95.88 | 95.53
G-mean | 96.05 | 96.5 | 96.05 | 96.05
AUC | 96.05 | 96.5 | 96.05 | 96.05
Time | 0.0351 | 0.0236 | 0.0188 | 0.0264

Crx
Accuracy | 65.64 | 66.26 | 65.64 | 64.42
BACC | 63.31 | 63.44 | 62.82 | 61.96
G-mean | 64.64 | 65.39 | 64.10 | 64.45
AUC | 65.01 | 65.68 | 64.78 | 64.45
Time | 0.0277 | 0.0248 | 0.0328 | 0.0296

Bupa
Accuracy | 63.49 | 61.86 | 65.12 | 63.26
BACC | 67.44 | 68.6 | 67.44 | 69.77
G-mean | 57.83 | 60.83 | 59.16 | 59.81
AUC | 63.06 | 64.06 | 63.06 | 65.44
Time | 0.0343 | 0.0238 | 0.0193 | 0.0276

Heart
Accuracy | 70.15 | 71.64 | 71.64 | 67.16
BACC | 65.37 | 65.67 | 64.78 | 65.67
G-mean | 68.58 | 69.75 | 70.46 | 67.12
AUC | 69.19 | 70.54 | 70.86 | 67.12
Time | 0.0339 | 0.0240 | 0.0258 | 0.0253

Breast_tissue
Accuracy | 75.38 | 78.46 | 69.23 | 70
BACC | 80.77 | 84.62 | 80.77 | 73.08
G-mean | 76.7 | 82.84 | 72.31 | 62.62
AUC | 77.45 | 83.01 | 74.84 | 66.34
Time | 0.0260 | 0.0228 | 0.0253 | 0.0282

Ionosphere
Accuracy | 95.40 | 96.55 | 96.55 | 95.40
BACC | 94.25 | 94.94 | 94.48 | 94.71
G-mean | 93.33 | 95.04 | 95.04 | 93.33
AUC | 93.55 | 95.16 | 95.16 | 93.55
Time | 0.0272 | 0.0242 | 0.0271 | 0.0271

glass0
Accuracy | 77.74 | 80 | 78.49 | 79.62
BACC | 81.13 | 83.02 | 81.13 | 81.13
G-mean | 79.83 | 81.15 | 80.03 | 79.83
AUC | 79.9 | 81.29 | 80.07 | 79.9
Time | 0.0272 | 0.0233 | 0.0259 | 0.0251

Tic-tac-toe
Accuracy | 99.16 | 98.47 | 98.47 | 98.47
BACC | 98.58 | 98.74 | 98.66 | 98.66
G-mean | 98.79 | 98.18 | 98.18 | 98.18
AUC | 98.80 | 98.19 | 98.19 | 98.19
Time | 0.0520 | 0.0274 | 0.0319 | 0.0373

vehicle1
Accuracy | 75.64 | 76.4 | 76.11 | 75.26
BACC | 76.78 | 77.25 | 76.78 | 76.3
G-mean | 53.55 | 55.48 | 57.88 | 50.48
AUC | 61.28 | 62.85 | 63.74 | 59.78
Time | 0.0333 | 0.0261 | 0.0276 | 0.0335

Wdbc
Accuracy | 96.48 | 97.81 | 96.48 | 96.48
BACC | 94.51 | 95.63 | 95.21 | 94.93
G-mean | 96.43 | 96.99 | 96.43 | 96.43
AUC | 96.43 | 96.99 | 96.43 | 96.43
Time | 0.0429 | 0.0250 | 0.0277 | 0.0288

vehicle2
Accuracy | 93.18 | 93.46 | 92.99 | 93.18
BACC | 93.84 | 93.84 | 93.36 | 94.31
G-mean | 89.82 | 90.12 | 89.82 | 90.41
AUC | 90.07 | 90.39 | 90.07 | 90.71
Time | 0.0353 | 0.0323 | 0.0356 | 0.0349

Pima
Accuracy | 69.27 | 69.79 | 69.27 | 69.79
BACC | 67.81 | 68.96 | 68.02 | 68.54
G-mean | 63.41 | 57.63 | 58.18 | 63.08
AUC | 64.01 | 62.26 | 62.15 | 65.03
Time | 0.0314 | 0.0327 | 0.0357 | 0.0328

Parkinsons
Accuracy | 85.42 | 87.5 | 88.33 | 85
BACC | 87.5 | 89.58 | 91.67 | 87.5
G-mean | 79.35 | 82.92 | 81.65 | 75.31
AUC | 80.56 | 83.33 | 83.33 | 77.78
Time | 0.0515 | 0.0232 | 0.0199 | 0.0258

Liver
Accuracy | 72.09 | 74.42 | 69.77 | 73.62
BACC | 69.77 | 71.40 | 67.44 | 68.84
G-mean | 70.24 | 69.92 | 63.90 | 66.67
AUC | 70.33 | 71.78 | 65.83 | 69.61
Time | 0.0272 | 0.0239 | 0.0259 | 0.0270
Table 10. Significance test of average G-mean between MALO vs. ALO, MALO vs. GWO, and MALO vs. WOA.

Dataset | MALO | ALO | Diff. | Rank | MALO | GWO | Diff. | Rank | MALO | WOA | Diff. | Rank
Breast cancer | 69.47 | 53.07 | +16.4 | 15 | 69.47 | 65 | +4.47 | 13 | 69.47 | 63.57 | +5.9 | 13
Crx | 65.39 | 64.64 | +0.75 | 6 | 65.39 | 64.1 | +1.29 | 9 | 65.39 | 64.45 | +0.94 | 5
Heart | 69.75 | 68.58 | +1.17 | 7 | 69.75 | 70.46 | −0.71 | −7 | 69.75 | 67.12 | +2.63 | 9
Ionosphere | 95.04 | 93.33 | +1.71 | 9 | 95.04 | 95.04 | 0 | 1 | 95.04 | 93.33 | +1.71 | 8
Tic-tac-toe | 98.18 | 98.79 | −0.61 | −5 | 98.18 | 98.18 | 0 | 1 | 98.18 | 98.18 | 0 | 1
Wdbc | 96.99 | 96.43 | +0.56 | 4 | 96.99 | 96.43 | +0.56 | 6 | 96.99 | 96.43 | +0.56 | 4
Pima | 57.63 | 63.41 | −5.78 | −13 | 57.63 | 58.18 | −0.55 | −5 | 57.63 | 63.08 | −5.45 | −12
Liver | 69.92 | 70.24 | −0.32 | −2 | 69.92 | 63.9 | +6.02 | 14 | 69.92 | 66.67 | +3.25 | 10
Wisconsin | 96.5 | 96.05 | +0.45 | 3 | 96.5 | 96.05 | +0.45 | 4 | 96.5 | 96.05 | +0.45 | 3
Bupa | 60.83 | 57.83 | +3 | 12 | 60.83 | 59.16 | +1.67 | 10 | 60.83 | 59.81 | +1.02 | 6
Breast_tissue | 82.84 | 76.7 | +6.14 | 14 | 82.84 | 72.31 | +10.53 | 15 | 82.84 | 62.62 | +20.22 | 15
glass0 | 81.15 | 79.83 | +1.32 | 8 | 81.15 | 80.03 | +1.12 | 8 | 81.15 | 79.83 | +1.32 | 7
vehicle1 | 55.48 | 53.55 | +1.93 | 10 | 55.48 | 57.88 | −2.4 | −12 | 55.48 | 50.48 | +5 | 11
vehicle2 | 90.12 | 89.82 | +0.3 | 1 | 90.12 | 89.82 | +0.3 | 3 | 90.12 | 90.41 | −0.29 | −2
Parkinsons | 83.33 | 80.56 | +2.77 | 11 | 83.33 | 81.65 | +1.68 | 11 | 83.33 | 75.31 | +8.02 | 14
T = min {100, 20} = 20 (vs. ALO) | T = min {95, 24} = 24 (vs. GWO) | T = min {106, 14} = 14 (vs. WOA)
Table 11. Significance test of average Accuracy between MALO vs. ALO, MALO vs. GWO, and MALO vs. WOA.

Dataset | MALO | ALO | Diff. | Rank | MALO | GWO | Diff. | Rank | MALO | WOA | Diff. | Rank
Breast cancer | 79.71 | 75.36 | +4.35 | 15 | 79.71 | 79.71 | 0 | 1 | 79.71 | 78.26 | +1.45 | 11
Crx | 66.26 | 65.64 | +0.62 | 4 | 66.26 | 65.64 | +0.62 | 9 | 66.26 | 64.42 | +1.84 | 12
Heart | 71.64 | 70.15 | +1.49 | 9 | 71.64 | 71.64 | 0 | 1 | 71.64 | 67.16 | +4.48 | 14
Ionosphere | 96.55 | 95.4 | +1.15 | 7 | 96.55 | 96.55 | 0 | 1 | 96.55 | 95.4 | +1.15 | 8
Tic-tac-toe | 98.47 | 99.16 | −0.69 | −5 | 98.47 | 98.47 | 0 | 1 | 98.47 | 98.47 | 0 | 1
Wdbc | 97.81 | 96.48 | +1.33 | 8 | 97.81 | 96.48 | +1.33 | 11 | 97.81 | 96.48 | +1.33 | 9
Pima | 69.79 | 69.27 | +0.52 | 2 | 69.79 | 69.27 | +0.52 | 7 | 69.79 | 69.79 | 0 | 1
Liver | 74.42 | 72.09 | +2.33 | 13 | 74.42 | 69.77 | +4.65 | 14 | 74.42 | 73.62 | +0.8 | 6
Wisconsin | 96.47 | 95.88 | +0.59 | 3 | 96.47 | 95.88 | +0.59 | 8 | 96.47 | 95.88 | +0.59 | 5
Bupa | 61.86 | 63.49 | −1.63 | −10 | 61.86 | 65.12 | −3.26 | −13 | 61.86 | 63.26 | −1.4 | −10
Breast_tissue | 78.46 | 75.38 | +3.08 | 14 | 78.46 | 69.23 | +9.23 | 15 | 78.46 | 70 | +8.46 | 15
glass0 | 80 | 77.74 | +2.26 | 12 | 80 | 78.49 | +1.51 | 12 | 80 | 79.62 | +0.38 | 4
vehicle1 | 76.4 | 75.64 | +0.76 | 6 | 76.4 | 76.11 | +0.29 | 5 | 76.4 | 75.26 | +1.14 | 7
vehicle2 | 93.46 | 93.18 | +0.28 | 1 | 93.46 | 92.99 | +0.47 | 6 | 93.46 | 93.18 | +0.28 | 3
Parkinsons | 87.5 | 85.42 | +2.08 | 11 | 87.5 | 88.33 | −0.83 | −10 | 87.5 | 85 | +2.5 | 13
T = min {105, 15} = 15 (vs. ALO) | T = min {91, 23} = 23 (vs. GWO) | T = min {109, 10} = 10 (vs. WOA)
Table 12. Significance test of average BACC between MALO vs. ALO, MALO vs. GWO, and MALO vs. WOA.

Dataset | MALO | ALO | Diff. | Rank | MALO | GWO | Diff. | Rank | MALO | WOA | Diff. | Rank
Breast cancer | 76.23 | 73.62 | +2.61 | 14 | 76.23 | 74.78 | +1.45 | 11 | 76.23 | 75.36 | +0.87 | 8
Crx | 63.44 | 63.31 | +0.13 | 2 | 63.44 | 62.82 | +0.62 | 7 | 63.44 | 61.96 | +1.48 | 11
Heart | 65.67 | 65.37 | +0.3 | 4 | 65.67 | 64.78 | +0.89 | 8 | 65.67 | 65.67 | 0 | 1
Ionosphere | 94.94 | 94.25 | +0.69 | 7 | 94.94 | 94.48 | +0.46 | 4 | 94.94 | 94.71 | +0.23 | 3
Tic-tac-toe | 98.74 | 98.58 | +0.16 | 3 | 98.74 | 98.66 | +0.08 | 1 | 98.74 | 98.66 | +0.08 | 2
Wdbc | 95.63 | 94.51 | +1.12 | 8 | 95.63 | 95.21 | +0.42 | 3 | 95.63 | 94.93 | +0.7 | 7
Pima | 68.96 | 67.81 | +1.15 | 9 | 68.96 | 68.02 | +0.94 | 9 | 68.96 | 68.54 | +0.42 | 4
Liver | 71.4 | 69.77 | +1.63 | 11 | 71.4 | 67.44 | +3.96 | 15 | 71.4 | 68.84 | +2.56 | 14
Wisconsin | 96 | 95.65 | +0.35 | 5 | 96 | 95.88 | +0.12 | 2 | 96 | 95.53 | +0.47 | 5
Bupa | 68.6 | 67.44 | +1.16 | 10 | 68.6 | 67.44 | +1.16 | 10 | 68.6 | 69.77 | −1.17 | −10
Breast_tissue | 84.62 | 80.77 | +3.85 | 15 | 84.62 | 80.77 | +3.85 | 14 | 84.62 | 73.08 | +11.54 | 15
glass0 | 83.02 | 81.13 | +1.89 | 12 | 83.02 | 81.13 | +1.89 | 12 | 83.02 | 81.13 | +1.89 | 12
vehicle1 | 77.25 | 76.78 | +0.47 | 6 | 77.25 | 76.78 | +0.47 | 5 | 77.25 | 76.3 | +0.95 | 9
vehicle2 | 93.84 | 93.84 | 0 | 1 | 93.84 | 93.36 | +0.48 | 6 | 93.84 | 94.31 | −0.47 | −5
Parkinsons | 89.58 | 87.5 | +2.08 | 13 | 89.58 | 91.67 | −2.09 | −13 | 89.58 | 87.5 | +2.08 | 13
T = min {120, 0} = 0 (vs. ALO) | T = min {107, 13} = 13 (vs. GWO) | T = min {104, 15} = 15 (vs. WOA)
Table 13. Significance test of average AUC between MALO vs. ALO, MALO vs. GWO, and MALO vs. WOA.

Dataset | MALO | ALO | Diff. | Rank | MALO | GWO | Diff. | Rank | MALO | WOA | Diff. | Rank
Breast cancer | 71.38 | 61.94 | +9.44 | 15 | 71.38 | 69.44 | +1.94 | 13 | 71.38 | 67.4 | +3.98 | 13
Crx | 65.68 | 65.01 | +0.67 | 5 | 65.68 | 64.78 | +0.9 | 10 | 65.68 | 64.45 | +1.23 | 5
Heart | 70.54 | 69.19 | +1.35 | 7 | 70.54 | 70.86 | −0.32 | −5 | 70.54 | 67.12 | +3.42 | 12
Ionosphere | 95.16 | 93.55 | +1.61 | 11 | 95.16 | 95.16 | 0 | 1 | 95.16 | 93.55 | +1.61 | 8
Tic-tac-toe | 98.19 | 98.8 | −0.61 | −4 | 98.19 | 98.19 | 0 | 1 | 98.19 | 98.19 | 0 | 1
Wdbc | 96.99 | 96.43 | +0.56 | 3 | 96.99 | 96.43 | +0.56 | 8 | 96.99 | 96.43 | +0.56 | 4
Pima | 62.26 | 64.01 | −1.75 | −12 | 62.26 | 62.15 | +0.11 | 4 | 62.26 | 65.03 | −2.77 | −10
Liver | 71.78 | 70.33 | +1.45 | 9 | 71.78 | 65.83 | +5.95 | 14 | 71.78 | 69.61 | +2.17 | 9
Wisconsin | 96.5 | 96.05 | +0.45 | 2 | 96.5 | 96.05 | +0.45 | 7 | 96.5 | 96.05 | +0.45 | 3
Bupa | 64.06 | 63.06 | +1 | 6 | 64.06 | 63.06 | +1 | 11 | 64.06 | 65.44 | −1.38 | −6
Breast_tissue | 83.01 | 77.45 | +5.56 | 14 | 83.01 | 74.84 | +8.17 | 15 | 83.01 | 66.34 | +16.67 | 15
glass0 | 81.29 | 79.9 | +1.39 | 8 | 81.29 | 80.07 | +1.22 | 12 | 81.29 | 79.9 | +1.39 | 7
vehicle1 | 62.85 | 61.28 | +1.57 | 10 | 62.85 | 63.74 | −0.89 | −9 | 62.85 | 59.78 | +3.07 | 11
vehicle2 | 90.39 | 90.07 | +0.32 | 1 | 90.39 | 90.07 | +0.32 | 6 | 90.39 | 90.71 | −0.32 | −2
Parkinsons | 83.33 | 80.56 | +2.77 | 13 | 83.33 | 83.33 | 0 | 1 | 83.33 | 77.78 | +5.55 | 14
T = min {104, 16} = 16 (vs. ALO) | T = min {103, 14} = 14 (vs. GWO) | T = min {102, 18} = 18 (vs. WOA)
Table 14. The T values of the significance test on averaged G-mean between MALO vs. ALO, MALO vs. GWO, and MALO vs. WOA.

 | MALO vs. ALO | MALO vs. GWO | MALO vs. WOA
R+ | 100 | 95 | 106
R− | 20 | 24 | 14
T = min {R+, R−} | 20 (+) | 24 (+) | 14 (+)
Table 15. The T values of the significance test on averaged Accuracy between MALO vs. ALO, MALO vs. GWO, and MALO vs. WOA.

 | MALO vs. ALO | MALO vs. GWO | MALO vs. WOA
R+ | 105 | 91 | 109
R− | 15 | 23 | 10
T = min {R+, R−} | 15 (+) | 23 (+) | 10 (+)
Table 16. The T values of the significance test on averaged BACC between MALO vs. ALO, MALO vs. GWO, and MALO vs. WOA.

 | MALO vs. ALO | MALO vs. GWO | MALO vs. WOA
R+ | 120 | 107 | 104
R− | 0 | 13 | 15
T = min {R+, R−} | 0 (+) | 13 (+) | 15 (+)
Table 17. The T values of the significance test on averaged AUC between MALO vs. ALO, MALO vs. GWO, and MALO vs. WOA.

 | MALO vs. ALO | MALO vs. GWO | MALO vs. WOA
R+ | 104 | 103 | 102
R− | 16 | 14 | 18
T = min {R+, R−} | 16 (+) | 14 (+) | 18 (+)
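The T statistic in Tables 10–17 is the Wilcoxon signed-rank statistic, T = min{R+, R−}, computed over the 15 per-dataset differences. As a sanity check, SciPy reproduces the G-mean comparison of Table 10 (values copied from that table):

```python
from scipy.stats import wilcoxon

# G-mean values copied from Table 10 (15 datasets, same order).
malo = [69.47, 65.39, 69.75, 95.04, 98.18, 96.99, 57.63, 69.92,
        96.50, 60.83, 82.84, 81.15, 55.48, 90.12, 83.33]
alo = [53.07, 64.64, 68.58, 93.33, 98.79, 96.43, 63.41, 70.24,
       96.05, 57.83, 76.70, 79.83, 53.55, 89.82, 80.56]

stat, p = wilcoxon(malo, alo)           # two-sided; stat is min{R+, R-}
print(f"T = {stat:.0f}, p = {p:.4f}")   # T = 20, matching Table 14
```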
Table 18. Performances of SMOTE and results in [51] compared to performances of ALO-SMOTE, MALO-SMOTE, GWO-SMOTE, and WOA-SMOTE.

Metric | SMOTE | ACOR-SMOTE | ALO-SMOTE | MALO-SMOTE | GWO-SMOTE | WOA-SMOTE

Sonar
Accuracy | 0.72115 | 0.73558 | 0.8077 | 0.8846 | 0.8077 | 0.7885
BACC | 0.72184 | 0.7438 | 0.7885 | 0.8269 | 0.8015 | 0.7800
G-mean | 0.72176 | 0.7337 | 0.7519 | 0.8042 | 0.7850 | 0.7483
AUC | 0.82293 | 0.84768 | 0.7806 | 0.8209 | 0.8015 | 0.7800

Wine (1-others)
Accuracy | 0.97753 | 0.99438 | 0.8182 | 0.8409 | 0.8182 | 0.8409
BACC | 0.97892 | 0.98732 | 0.8318 | 0.8318 | 0.7333 | 0.7667
G-mean | 0.97891 | 0.98731 | 0.6831 | 0.7303 | 0.6831 | 0.7303
AUC | 0.99772 | 0.99872 | 0.7333 | 0.7667 | 0.7333 | 0.7667

Ecoli (im-others)
Accuracy | 0.86012 | 0.86905 | 0.8929 | 0.9048 | 0.881 | 0.8929
BACC | 0.86364 | 0.85118 | 0.8857 | 0.8881 | 0.7844 | 0.7922
G-mean | 0.86361 | 0.85054 | 0.7698 | 0.7757 | 0.7624 | 0.7685
AUC | 0.93925 | 0.95262 | 0.7909 | 0.7959 | 0.7844 | 0.7922

Wine (3-others)
Accuracy | 0.98315 | 0.99438 | 0.7045 | 0.7273 | 0.75 | 0.7045
BACC | 0.98846 | 0.99615 | 0.7273 | 0.7318 | 0.5938 | 0.4844
G-mean | 0.98839 | 0.99615 | 0.2795 | 0.2841 | 0.4841 | 0
AUC | 0.99968 | 0.99952 | 0.5104 | 0.526 | 0.5938 | 0.4844

Ionsphere
Accuracy | 0.87464 | 0.91168 | 0.9425 | 0.954 | 0.9195 | 0.931
BACC | 0.86206 | 0.89444 | 0.931 | 0.9379 | 0.8871 | 0.9032
G-mean | 0.86091 | 0.89235 | 0.9158 | 0.9333 | 0.8799 | 0.898
AUC | 0.9222 | 0.95072 | 0.9194 | 0.9355 | 0.8871 | 0.9032

Yeast (POX-others)
Accuracy | 0.99124 | 0.99124 | 0.9946 | 0.9946 | 0.9919 | 0.9946
BACC | 0.77363 | 0.77363 | 0.9941 | 0.9941 | 0.7 | 0.8
G-mean | 0.74061 | 0.74061 | 0.7746 | 0.7746 | 0.6325 | 0.7746
AUC | 0.69962 | 0.69962 | 0.8 | 0.8 | 0.7 | 0.8

Pid
Accuracy | 0.74479 | 0.76042 | 0.7031 | 0.7031 | 0.6823 | 0.6979
BACC | 0.73388 | 0.74415 | 0.6906 | 0.6875 | 0.5967 | 0.6087
G-mean | 0.73299 | 0.7422 | 0.573 | 0.573 | 0.5252 | 0.5323
AUC | 0.82914 | 0.83262 | 0.6266 | 0.6266 | 0.5967 | 0.6087

Yeast (ME2-others)
Accuracy | 0.86725 | 0.8504 | 0.9865 | 0.9811 | 0.9757 | 0.9757
BACC | 0.8178 | 0.80908 | 0.9801 | 0.9784 | 0.635 | 0.5909
G-mean | 0.81608 | 0.80786 | 0.7385 | 0.603 | 0.5215 | 0.4264
AUC | 0.87168 | 0.87265 | 0.7727 | 0.6818 | 0.635 | 0.5909

Segment (BRICKFACE-others)
Accuracy | 0.98095 | 0.99177 | 0.9931 | 0.9965 | 0.9948 | 0.9965
BACC | 0.97879 | 0.98636 | 0.9955 | 0.9958 | 0.9819 | 0.988
G-mean | 0.97878 | 0.98633 | 0.9808 | 0.9879 | 0.9818 | 0.9879
AUC | 0.99717 | 0.99792 | 0.9809 | 0.988 | 0.9819 | 0.988

Abalone (18-9)
Accuracy | 0.79891 | 0.84131 | 0.9451 | 0.9451 | 0.9451 | 0.9451
BACC | 0.77035 | 0.78167 | 0.9451 | 0.9451 | 0.5 | 0.5
G-mean | 0.76968 | 0.77876 | 0 | 0 | 0 | 0
AUC | 0.80406 | 0.83779 | 0.5 | 0.5 | 0.5 | 0.5

Libra (123-others)
Accuracy | 0.875 | 0.81389 | 0.9667 | 0.9778 | 0.9778 | 0.9667
BACC | 0.82813 | 0.81597 | 0.9644 | 0.9689 | 0.9444 | 0.9167
G-mean | 0.82443 | 0.81596 | 0.9129 | 0.9428 | 0.9428 | 0.9129
AUC | 0.79352 | 0.84245 | 0.9167 | 0.9444 | 0.9444 | 0.9167

Vowel (1-others)
Accuracy | 0.8428 | 0.83333 | 0.8939 | 0.9167 | 0.8712 | 0.8864
BACC | 0.85729 | 0.87083 | 0.8712 | 0.8848 | 0.9292 | 0.9375
G-mean | 0.85711 | 0.86963 | 0.9399 | 0.9531 | 0.9265 | 0.9354
AUC | 0.90464 | 0.92214 | 0.9417 | 0.9542 | 0.9292 | 0.9375

Libra (456-others)
Accuracy | 0.93056 | 0.93611 | 0.9778 | 0.9889 | 0.9778 | 0.9889
BACC | 0.90972 | 0.91319 | 0.9711 | 0.98 | 0.9444 | 0.9722
G-mean | 0.90906 | 0.9124 | 0.9428 | 0.9718 | 0.9428 | 0.9718
AUC | 0.97796 | 0.97666 | 0.9444 | 0.9722 | 0.9444 | 0.9722

Vowel (2-others)
Accuracy | 0.84091 | 0.8428 | 0.9545 | 0.9545 | 0.9318 | 0.9394
BACC | 0.86563 | 0.87604 | 0.9485 | 0.9379 | 0.9625 | 0.9667
G-mean | 0.8651 | 0.8751 | 0.9704 | 0.9747 | 0.9618 | 0.9661
AUC | 0.9191 | 0.92374 | 0.9708 | 0.975 | 0.9625 | 0.9667

Vehicle (van-others)
Accuracy | 0.95035 | 0.94563 | 0.9573 | 0.9573 | 0.9573 | 0.9573
BACC | 0.95536 | 0.93488 | 0.9488 | 0.9507 | 0.9238 | 0.9238
G-mean | 0.95532 | 0.93465 | 0.9293 | 0.9293 | 0.9216 | 0.9216
AUC | 0.98235 | 0.98442 | 0.9307 | 0.9307 | 0.9238 | 0.9238

Vowel (3-others)
Accuracy | 0.85417 | 0.82765 | 0.9848 | 0.9924 | 0.9848 | 0.9924
BACC | 0.82604 | 0.83958 | 0.9848 | 0.9924 | 0.9167 | 0.9583
G-mean | 0.82533 | 0.83946 | 0.9129 | 0.9574 | 0.9129 | 0.9574
AUC | 0.9125 | 0.92925 | 0.9167 | 0.9583 | 0.9167 | 0.9583

Glass (tableware-others)
Accuracy | 0.82243 | 0.98598 | 0.9811 | 0.9811 | 0.9623 | 0.9623
BACC | 0.8542 | 0.99268 | 0.9774 | 0.9811 | 0.7402 | 0.7402
G-mean | 0.8535 | 0.99266 | 0.7071 | 0.7071 | 0.7001 | 0.7001
AUC | 0.90515 | 0.99675 | 0.75 | 0.75 | 0.7402 | 0.7402

German
Accuracy | 0.731 | 0.729 | 0.716 | 0.716 | 0.716 | 0.708
BACC | 0.7231 | 0.71405 | 0.7072 | 0.7064 | 0.5571 | 0.5438
G-mean | 0.72283 | 0.71307 | 0.4195 | 0.4195 | 0.3908 | 0.3567
AUC | 0.7891 | 0.79457 | 0.5648 | 0.5648 | 0.5571 | 0.5438