
Quantum Chaotic Honey Badger Algorithm for Feature Selection

by Samah Alshathri 1,*, Mohamed Abd Elaziz 2,3,4,5, Dalia Yousri 6, Osama Farouk Hassan 7 and Rehab Ali Ibrahim 4

1 Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
2 Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
3 Artificial Intelligence Research Center (AIRC), Ajman University, Ajman P.O. Box 346, United Arab Emirates
4 Department of Mathematics, Faculty of Science, Zagazig University, Zagazig 44519, Egypt
5 Department of Electrical and Computer Engineering, Lebanese American University, Byblos 13-5053, Lebanon
6 Electrical Engineering Department, Faculty of Engineering, Fayoum University, Faiyum 63514, Egypt
7 Department of Information System, Faculty of Computers and Informatics, Suez Canal University, Ismailia 41522, Egypt
* Author to whom correspondence should be addressed.
Electronics 2022, 11(21), 3463; https://doi.org/10.3390/electronics11213463
Submission received: 15 September 2022 / Revised: 20 October 2022 / Accepted: 20 October 2022 / Published: 26 October 2022
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications, Volume II)

Abstract

Determining the most relevant features is a critical pre-processing step in various fields to enhance prediction. To address this issue, a set of feature selection (FS) techniques has been proposed; however, these still have certain limitations. For example, they may focus on nearby points, which lowers classification accuracy because the chosen features may include noisy features. To take advantage of the benefits of the quantum-based optimization technique and the 2D chaotic Hénon map, we provide a modified version of the honey badger algorithm (HBA) called QCHBA. These strategies were employed to enhance the HBA because of their ability to strike a balance between exploitation and exploration while identifying a workable subset of relevant features. The effectiveness of QCHBA was evaluated in a series of experiments conducted using eighteen datasets, involving comparison with recognized FS techniques. The results indicate the high efficiency of QCHBA across the datasets under various performance criteria.

1. Introduction

The enormous increase in the volume of data has resulted in a range of difficulties and issues, including noisy, high-dimensional, and irrelevant data [1]. This results in significant processing costs and adversely affects the effectiveness and accuracy of machine learning systems. Approaches for feature selection (FS) have been used to lower computational costs and increase classification accuracy [2]. Feature selection is the process of reducing the collected features to a relevant subset that can be utilized to combat the curse of dimensionality. By choosing a sample of pertinent features, FS techniques are frequently employed to capture data qualities [3]. Additionally, they can eliminate irrelevant and distracting data [3]. FS techniques are frequently used in a variety of disciplines, including COVID-19 CT scan classification [4], text categorization, human activity detection [5,6], MR image segmentation [7,8], data analytics [9,10], parameter estimation of biochemical systems [10], investigation of neuromuscular diseases [11], and other applications [12,13,14].
There are three types of FS methods: wrapper, filter, and embedded [15]. Filter-based methods rely on dataset characteristics without depending on a learning technique, while wrapper-based methods evaluate the chosen features using the learning process. Embedded algorithms identify the elements that most effectively improve model accuracy. As a result, filter-based procedures are quicker and more efficient than wrapper-based methods. The optimal FS method is a strategy that minimizes the number of features that must be chosen while maximizing classifier accuracy [15].
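To make the wrapper idea concrete, the following minimal Python sketch (ours, for illustration; not code from this paper) scores a candidate feature mask by the cross-validated accuracy of a KNN classifier, the same learner used later in this work, trained only on the selected columns:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def wrapper_score(X, y, mask):
    """Score a binary feature mask by cross-validated KNN accuracy."""
    if mask.sum() == 0:
        return 0.0  # an empty subset carries no predictive information
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()
```

A filter method would instead rank the columns by a classifier-free statistic (e.g., mutual information), which is why filters are cheaper but often less accurate than wrappers.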
Numerous metaheuristic (MH) algorithms have been successfully applied to address FS issues. These include the particle swarm optimization (PSO) algorithm [16], the differential evolution (DE) algorithm [17], the genetic algorithm (GA) [18], the grasshopper optimization algorithm (GOA) [19], the gravitational search algorithm (GSA) [20], the slime mould algorithm (SMA) [3], the Harris hawk optimization (HHO) algorithm [21], the marine predator algorithm (MPA) [22], and others [7,8,9,10,23]. An FS strategy based on electric fish optimization (EFO) has also been proposed [24]. It was tested on a variety of challenging real-world datasets and performed very well, except on high-dimensional problems, where it exhibited delayed convergence and a tendency to become stuck in local minima.
The no-free-lunch (NFL) theorem states that no single algorithm can resolve every issue. As a result, hybridization approaches are frequently used to handle a variety of complicated issues, including FS. We utilize an optimizer known as the honey badger algorithm (HBA) [25], which takes its inspiration from the intelligent foraging behavior of honey badgers. The authors of [26] applied the HBA as an effective optimization technique for determining the ideal size and location of capacitors and other types of DG to reduce the overall active power loss. Since the proton exchange membrane fuel cell (PEMFC) design includes numerous variable quantities with various non-linear characteristics, which must be properly specified to ensure effective modelling, an alternative method to determine the PEMFC parameters was proposed in [27]; it relies on the HBA as a novel identification approach. A new approach was suggested in [28] to address certain drawbacks of the HBA, namely trapping in local optima, low convergence, and imbalance between the exploration and exploitation stages. The approach, known as mHBA, incorporates the dimensional learning hunting (DLH) method, an effective local search technique, into the HBA. In addition, the HBA has proved its efficiency in cloud computing applications, as reported in [29], where the authors proposed an IoT-based optimization scheme that uses the HBA for task scheduling to minimize energy consumption. The major objective of that study was to identify the most significant cloud scheduling criteria: a committee of decision-makers first chose the standards for evaluation, the optimal solution was then determined after assigning weights to each criterion using an optimization technique, and finally the evaluation metrics included in the CloudSim toolbox, such as makespan, energy consumption, and resource utilization, were applied. The results indicated that matching user demands with available cloud resources to improve performance and minimize energy use can be challenging; the study offered a task scheduling method that uses the HBA in a heterogeneous virtualized cloud to save energy.
While application of the HBA has demonstrated a number of benefits, it still suffers from limitations that we were motivated to address. This involved use of the quantum-based optimization (QBO) technique and the 2D chaotic Hénon map. These methods were used to enhance convergence towards a feasible solution and to improve the balance between exploitation and exploration. To evaluate the performance of the proposed method, a set of eighteen datasets was used, and the results were compared with other well-known FS methods. In general, the proposed technique, named QCHBA, starts by setting the values of a set of solutions using the QBO strategy, then evaluates the performance of each solution using the classification error and the ratio of selected features as the fitness value. The best solution, having the best fitness value, is then determined. The operators of the HBA and the 2D Hénon map are integrated to enhance the current set of solutions that represent the relevant features. This process is repeated until the termination conditions are met. The testing sample is then used to assess the performance of the returned best solution using different performance measures. To the best of our knowledge, no integration of the QBO, the 2D Hénon map, and the operators of MH techniques has been proposed to date, which motivated us to apply this integration as an FS method.
The main contributions of this study can be summarized as follows:
  • A modified version of the honey badger algorithm is proposed and applied as a feature selection technique.
  • Quantum-based optimization and the 2D Hénon chaotic map are used to improve the HBA during the process of selecting relevant features.
  • The efficiency of the developed method is evaluated using state-of-the-art FS methods applied to eighteen datasets.

2. Background

2.1. Honey Badger Algorithm (HBA)

The characteristics of the honey badger algorithm (HBA) are described in detail in this section [25]. The design of the HBA was influenced by the foraging behaviour of honey badgers. The honey badger uses its sense of smell as one method of locating its meal and uses digging as a second method; it also relies on honey-guide birds to locate and then enter beehives. The algorithm's designers named the first tactic the "digging phase" and the second the "honey phase". The honey badger's olfactory sensitivity controls how it moves; when the scent of the prey is potent, the badger moves quickly, and vice versa [25].
The HBA’s main stages and corresponding equations are as follows:
  • Initialization process: During this stage, the upper ($H_U$) and lower ($H_L$) boundaries of the problem space are used to generate the first potential solutions. The initial solutions are thus stochastic sets created according to Equation (1) [25]:

$$H_i = H_L + r_1(1, D) \times (H_U - H_L), \quad i = 1, 2, \ldots, N \tag{1}$$

    where $D$ refers to the dimension of the solution, $r_1(1, D)$ is a vector of $D$ random numbers in $[0, 1]$, $H_i$ represents the $i$-th potential solution, and $N$ is the number of honey badgers (agents).
  • Updating positions: The position of each candidate is updated at this point to produce $H_{new}$, using either the digging phase or the honey phase.
    - Digging phase: In this phase, the movement of a candidate is influenced by the potency of the prey's odor and the distance between the honey badger (agent) and the prey ($P$). The honey badger digs along a cardioid-like path, and its motion is formulated as:

$$H_{new} = P + F_g \times \beta \times I_n \times P + F_g \times r_3 \times (P - H_i) \times \cos(2\pi r_4) \times [1 - \cos(2\pi r_5)] \tag{2}$$

    where $\beta$ measures the honey badger's ability to gather food; Hashim et al. [25] reported a maximum value of $\beta$ of 6. $I_n$ is the smell intensity, and $r_3$, $r_4$, and $r_5$ are random variables selected from a uniform distribution over $[0, 1]$. $F_g$, which serves as a search-direction flag, is produced as follows:

$$F_g = \begin{cases} 1 & \text{if } r_6 \le 0.5 \\ -1 & \text{otherwise} \end{cases} \tag{3}$$
    - Honey phase: When looking for beehives, honey badgers utilize the honey phase to alter their position relative to the honey-guide bird. The following equation was used by Hashim et al. [25] for the honey phase:

$$H_{new} = P + F_g \times r_7 \times \sigma \times (P - H_i) \tag{4}$$

    where $P$ is the best solution obtained so far and $r_7$ is a random number between 0 and 1.
  • Modeling the intensity $I_n$: Since the honey badger's perception of the prey's scent governs its behavior, Hashim et al. [25] formulated the scent intensity $I_{n_i}$ of the prey for each candidate as:

$$I_{n_i} = r_2 \times \frac{(H_i - H_{i+1})^2}{4\pi (P - H_i)^2} \tag{5}$$

    where $P$ is the prey's location and $r_2$ is a stochastic number in the range $[0, 1]$.
  • Modeling the density parameter ($\sigma$): According to Hashim et al. [25], $\sigma$ regulates the transition between the local and global search stages. It decreases over the iterations as follows:

$$\sigma = C \times \exp\left(-\frac{IT}{IT_{max}}\right) \tag{6}$$

    where $IT$ and $IT_{max}$ refer to the current iteration and the total number of iterations, respectively, and $C$ is a constant recommended to be 2.
  • Escaping from local solutions: the algorithm developers [25] employed the flag $F_g$ to alter the search direction so that agents avoid becoming stuck in local solutions.
Based on the preceding description, the primary structure of the HBA is given in Algorithm 1.
Algorithm 1 Steps of HBA
1: Inputs: Agents size N, number of iterations Iter_max.
2: Outputs: The optimal solution.
3: Step 1: Generate the first set of N solutions H with dimension D (i.e., the number of unknown variables) using Equation (1).
4: Compute the fitness of each solution and set the best solution as P.
5: while (Iter ≤ Iter_max) do
6:     Update the decreasing factor σ through Equation (6).
7:     for (i = 1 to N) do
8:         Compute the intensity through Equation (5).
9:         if r < 0.5 then
10:            Update the location of H_new through Equation (2).
11:        else
12:            Update the location of H_new through Equation (4).
13:        Evaluate the new solution, compute Fit^{t+1}, and assign Fit_max^{t+1}.
14:        if Fit^{t+1} ≤ Fit^t then
15:            Set H_i = H_new and Fit^t = Fit^{t+1}.
16:        if Fit_max^{t+1} ≤ Fit_max^t then
17:            Set H_best = H_new and Fit_max^t = Fit_max^{t+1}.
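For readers who prefer code, the following Python sketch condenses Algorithm 1 and Equations (1)-(6) for a continuous minimization problem. It is a minimal illustration under our own reading of the equations (e.g., a small constant is added to the denominator of Equation (5) to avoid division by zero), not the authors' implementation:

```python
import numpy as np

def hba(f, lb, ub, dim, n=15, iters=50, beta=6.0, C=2.0):
    """Minimal honey badger algorithm for minimizing f over [lb, ub]^dim."""
    H = lb + np.random.rand(n, dim) * (ub - lb)          # Equation (1)
    fit = np.array([f(h) for h in H])
    best_idx = fit.argmin()
    P, best = H[best_idx].copy(), fit[best_idx]          # prey = best so far
    for it in range(1, iters + 1):
        sigma = C * np.exp(-it / iters)                  # Equation (6)
        for i in range(n):
            di = P - H[i]                                # distance to prey
            S = (H[i] - H[(i + 1) % n]) ** 2             # source strength
            In = np.random.rand() * S / (4 * np.pi * di**2 + 1e-12)  # Eq. (5)
            Fg = 1.0 if np.random.rand() <= 0.5 else -1.0            # Eq. (3)
            r3, r4, r5, r7 = np.random.rand(4)
            if np.random.rand() < 0.5:                   # digging phase, Eq. (2)
                Hn = (P + Fg * beta * In * P
                      + Fg * r3 * di * np.cos(2*np.pi*r4) * (1 - np.cos(2*np.pi*r5)))
            else:                                        # honey phase, Eq. (4)
                Hn = P + Fg * r7 * sigma * di
            Hn = np.clip(Hn, lb, ub)
            fn = f(Hn)
            if fn <= fit[i]:                             # greedy replacement
                H[i], fit[i] = Hn, fn
            if fn <= best:                               # track global best
                P, best = Hn.copy(), fn
    return P, best

# Example: minimize the sphere function in 10 dimensions.
x, fx = hba(lambda v: float(np.sum(v**2)), lb=-5.0, ub=5.0, dim=10)
```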

2.2. Two-Dimensional Hénon Map

The 2D Hénon map is one of the most popular discrete-time dynamical systems exhibiting chaotic behavior [30], and one of the most well-studied examples of chaos in dynamical systems. Also known as the Hénon-Pomeau attractor/map, it transforms a point $(x_t, y_t)$ in the plane into a new point. The map was first presented by Michel Hénon as a simplified model of the Poincaré section of the Lorenz system. The Hénon strange attractor is the set of points toward which an initial point of the plane either converges under repeated application of the map, or else diverges to infinity. The attractor is a fractal, smooth in one direction and with a Cantor-set structure in the other. The mathematical formula of the Hénon map is written as in Equation (7), and its distribution is represented in Figure 1:

$$x_{t+1} = 1 - 1.4\, x_t^2 + y_t, \qquad y_{t+1} = 0.3\, x_t \tag{7}$$
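A few lines of Python suffice to iterate Equation (7); scatter-plotting the resulting (x, y) pairs reproduces the attractor shown in Figure 1. This is an illustrative sketch, not the paper's code:

```python
import numpy as np

def henon(steps, x0=0.0, y0=0.0):
    """Iterate the 2D Hénon map of Equation (7) from (x0, y0)."""
    x = np.empty(steps)
    y = np.empty(steps)
    x[0], y[0] = x0, y0
    for t in range(steps - 1):
        x[t + 1] = 1.0 - 1.4 * x[t] ** 2 + y[t]
        y[t + 1] = 0.3 * x[t]
    return x, y

x, y = henon(5000)  # plotting (x, y) traces the Hénon attractor
```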

3. Quantum Chaotic Honey Badger Algorithm (QCHBA)

In this approach, the HBA is combined with the two-dimensional Hénon map to improve the fundamental performance of the algorithm. Moreover, the quantum-based optimization technique is used to improve the search ability and to balance exploration and exploitation. In general, the proposed QCHBA approach starts by using a training set, representing 70% of the given dataset, to determine the relevant features. The solutions are generated using the quantum-based optimization technique. This is followed by computing the performance of each solution and determining the optimal one. The next step is to update the solutions using the strengths of the 2D chaotic map and the QBO. This updating procedure is repeated until the termination criteria are met, and the best solution is the output of this stage. The next stage involves reducing the features in the testing set, which represents 30% of the dataset. The performance of the selected features is then evaluated using different performance metrics.
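The stage structure above can be summarized in a short Python sketch. Everything here is illustrative: `load_breast_cancer` stands in for one of the UCI datasets (S2), and the random mask is a placeholder for the mask returned by the QCHBA search loop described in the following subsections:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def test_accuracy(mask):
    """Fit KNN on the selected training features; score the reduced test set."""
    clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr[:, mask], y_tr)
    return accuracy_score(y_te, clf.predict(X_te[:, mask]))

# Placeholder: any optimizer returning a boolean feature mask plugs in here.
rng = np.random.default_rng(0)
mask = rng.random(X.shape[1]) < 0.5
print(test_accuracy(mask))
```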

3.1. Initial Solutions

The main purpose of this stage is to generate the set of N solutions (i.e., the population) using the QBO technique. Each solution consists of $D$ Q-bits ($D$ refers to the number of features) and can be written as:

$$H_i = \begin{bmatrix} q_{i1} & q_{i2} & \cdots & q_{iD} \\ q'_{i1} & q'_{i2} & \cdots & q'_{iD} \end{bmatrix} = \left[ \theta_{i1} \mid \theta_{i2} \mid \cdots \mid \theta_{iD} \right], \quad i = 1, 2, \ldots, N \tag{8}$$

In Equation (8), $H_i$ denotes the superposition of probabilities over the features: Q-bits that collapse to one correspond to selected features, and those that collapse to zero correspond to features that are not selected.
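As a concrete (and hedged) reading of Equation (8), each solution can be stored as a vector of D rotation angles; the paper does not state the initialization range, so the uniform draw over [0, 2π) below is our assumption:

```python
import numpy as np

def init_quantum_population(n, d, seed=0):
    """Return an n-by-d matrix of Q-bit angles theta, one row per solution."""
    rng = np.random.default_rng(seed)
    return rng.uniform(0.0, 2.0 * np.pi, size=(n, d))  # range is an assumption
```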

3.2. Updating Solution

In this stage, the QCHBA starts by obtaining the binary form of each solution $H_i$, $i = 1, 2, \ldots, N$, using the following formula:

$$BH_{i,j} = \begin{cases} 1 & \text{if } rand < \beta_j^2 \\ 0 & \text{otherwise} \end{cases} \tag{9}$$

In Equation (9), $rand \in [0, 1]$ refers to a random value and $\beta_j$ is the amplitude of the $j$-th Q-bit.
Thereafter, the fitness value of each $H_i$ is computed using the following equation:

$$Fit_i = \rho \times \gamma + (1 - \rho) \times \frac{|BH_i|}{D} \tag{10}$$

In Equation (10), $|BH_i|$ is the number of features whose bits equal one, $\gamma$ refers to the classification error obtained from applying the KNN classifier, and $\rho \in [0, 1]$ is a parameter used to balance the two objectives (i.e., the classification error and the ratio of selected features).
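Equations (9) and (10) can be sketched as follows. The amplitude convention β_j = sin(θ_j) and the weight ρ = 0.99 are our assumptions for illustration; the paper only states that ρ ∈ [0, 1]:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def binarize(theta, rng):
    """Collapse Q-bit angles to a 0/1 mask per Equation (9), assuming beta = sin(theta)."""
    return (rng.random(theta.shape) < np.sin(theta) ** 2).astype(int)

def fitness(mask, X, y, rho=0.99):
    """Equation (10): weighted sum of KNN error and the selected-feature ratio."""
    if mask.sum() == 0:
        return 1.0  # penalize the degenerate case of selecting nothing
    clf = KNeighborsClassifier(n_neighbors=5)
    err = 1.0 - cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()
    return rho * err + (1.0 - rho) * mask.sum() / mask.size
```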
The next step is to compute the fitness value of each $H_i$ and then determine the best solution $H_b$. Then, the modified version of the HBA based on the 2D Hénon map is used to update the current solutions. The proposed modification and its controlling equations, comprising two changes, are summarized below.
  • First modification: The two-dimensional Hénon map is applied to adjust the parameters $C$ and $\beta$, to improve the functionality of the fundamental HBA optimizer. The updated values of $C$ and $\beta$ follow Equation (11):

$$C(t) = 4\, y_{t+1}, \qquad \beta(t) = 7\, H_{t+1} \tag{11}$$
    where $H_{t+1}$ and $y_{t+1}$ are the vectors of the Hénon map and $t$ is the current iteration. The Hénon map vectors are scaled by the factors 4 and 7 to place them inside the range suggested by the HBA developers, who used $\beta = 6$ and $C = 2$, as mentioned in the previous section on the fundamental HBA. The variables $\beta$ and $C$ thus change over the course of the iterations, with values ranging from 0 to 7 and 0 to 4, respectively. The CHBA uses the factors 4 and 7 to provide broad diversity. When implemented, the Hénon map is initialized at zero (x(1) = 0; y(1) = 0). Figure 1 shows how the map's attractor behaves; a short sketch of this chaotic parameter schedule follows.
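The chaotic schedules can be pre-computed once per run. In the sketch below, the absolute value is our assumption for keeping C(t) and β(t) non-negative, since Equation (11) does not specify how the map's negative values are handled:

```python
import numpy as np

def chaotic_parameters(iters):
    """Pre-compute C(t) and beta(t) from the Hénon map per Equation (11)."""
    x = np.zeros(iters + 1)  # initialized at x(1) = 0, y(1) = 0 as stated above
    y = np.zeros(iters + 1)
    for t in range(iters):
        x[t + 1] = 1.0 - 1.4 * x[t] ** 2 + y[t]
        y[t + 1] = 0.3 * x[t]
    C = 4.0 * np.abs(y[1:])      # scaled toward the range [0, 4]
    beta = 7.0 * np.abs(x[1:])   # scaled toward the range [0, 7]
    return C, beta
```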
Then the digging phase and the density variable are re-modeled as:

$$H_{new} = P + F_g \times \beta(t) \times I_n \times P + F_g \times r_3 \times (P - H_i) \times \cos(2\pi r_4) \times [1 - \cos(2\pi r_5)] \tag{12}$$

$$\sigma = C(t) \times \exp\left(-\frac{Iter}{Iter_{max}}\right) \tag{13}$$
The process of updating the solutions is repeated until the stopping conditions are met. The best solution $H_b$ is then the output of this stage.

3.3. Evaluate Quality of $H_b$

Within this phase, the best solution $H_b$ is used to remove the irrelevant features from the testing set. This reduced testing set is then used as input to the trained KNN classifier to predict the targets of the testing set. The performance of the predicted targets is then computed using a set of performance criteria. The QCHBA steps are listed in Algorithm 2.
Algorithm 2 Pseudo code of QCHBA
1: Inputs: Agents size N, number of iterations Iter_max, and the dataset.
2: Outputs: The optimal solution.
3: Step 1: Generate the first set of N solutions H with dimension D (i.e., the number of unknown variables) using QBO, as in Equations (8) and (9).
4: Compute the fitness function as in Equation (10) and set the best solution as P.
5: Calculate C and β based on the Hénon map using Equation (11), each with dimension 1 × Iter_max.
6: while (Iter ≤ Iter_max) do
7:     Update the decreasing factor σ through Equation (13).
8:     for (i = 1 to N) do
9:         Compute the intensity through Equation (5).
10:        if r < 0.5 then
11:            Update the location of H_new through Equation (12).
12:        else
13:            Update the location of H_new through Equation (4).
14:        Evaluate the new solution, compute Fit^{t+1}, and assign Fit_max^{t+1}.
15:        if Fit^{t+1} ≤ Fit^t then
16:            Set H_i = H_new and Fit^t = Fit^{t+1}.
17:        if Fit_max^{t+1} ≤ Fit_max^t then
18:            Set H_best = H_new and Fit_max^t = Fit_max^{t+1}.
19: Evaluate the performance of the best solution using the testing set.
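Tying the pieces together, and reusing the helper sketches defined above (init_quantum_population, chaotic_parameters, binarize, fitness), a hedged outline of the QCHBA loop might look as follows. The key assumption, which the paper does not spell out, is that the HBA updates of Equations (12) and (4) are applied directly to the Q-bit angles:

```python
import numpy as np

def qchba_select(X, y, n=15, iters=50, seed=0):
    """Illustrative QCHBA loop returning a binary feature mask."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    theta = init_quantum_population(n, d, seed)          # Equation (8)
    C, beta = chaotic_parameters(iters)                  # Equation (11)
    fits = np.array([fitness(binarize(t, rng), X, y) for t in theta])
    P = theta[fits.argmin()].copy()                      # best angles so far
    for it in range(iters):
        sigma = C[it] * np.exp(-(it + 1) / iters)        # Equation (13)
        for i in range(n):
            di = P - theta[i]
            In = rng.random() * (theta[i] - theta[(i + 1) % n])**2 / (4*np.pi*di**2 + 1e-12)
            Fg = 1.0 if rng.random() <= 0.5 else -1.0
            r3, r4, r5, r7 = rng.random(4)
            if rng.random() < 0.5:                       # Equation (12)
                new = (P + Fg*beta[it]*In*P
                       + Fg*r3*di*np.cos(2*np.pi*r4)*(1 - np.cos(2*np.pi*r5)))
            else:                                        # Equation (4)
                new = P + Fg * r7 * sigma * di
            fn = fitness(binarize(new, rng), X, y)
            if fn <= fits[i]:                            # greedy replacement
                theta[i], fits[i] = new, fn
                if fn <= fits.min():                     # new global best
                    P = new.copy()
    return binarize(P, rng)
```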

4. Experimental Results

4.1. Data Description

The evaluation of the performance of QCHBA was performed using eighteen UCI datasets [31]. The characteristics of these datasets are detailed in Table 1. From these characteristics, it can be observed that there are different classes, features, and samples, and that they derive from different domains.

4.2. Performance Measures

Six performance metrics were used to assess how well the proposed methods performed: the average accuracy, the average fitness value, the minimum (Min) and maximum (Max) fitness values, the number of selected features, and the standard deviation (Std) of the fitness values. Each algorithm was run 25 times to obtain the averages of these metrics.
  • Accuracy (Acc): This metric gives the ratio of correctly classified samples and is determined using Equation (14):

$$Acc = \frac{TP + TN}{TP + FN + FP + TN} \tag{14}$$

    where FN, TN, FP, and TP denote the numbers of false negatives, true negatives, false positives, and true positives, respectively.
  • Fitness value: This metric assesses the effectiveness of the techniques using the fitness function of Equation (10).
  • Maximum fitness value: This metric captures the highest value of the fitness function achieved by each method:

$$Max = \max_{1 \le i \le N_r} Fit_b^i \tag{15}$$

  • Minimum fitness value: This metric captures the lowest value of the fitness function achieved by each method:

$$Min = \min_{1 \le i \le N_r} Fit_b^i \tag{16}$$

  • Selected features: This metric records how many features each algorithm selects.
  • Standard deviation: This metric assesses an algorithm's consistency over numerous executions and is calculated as in Equation (17):

$$Std = \sqrt{\frac{1}{N_r} \sum_{i=1}^{N_r} (Fit_i - Fit_a)^2} \tag{17}$$

    where $N_r$ refers to the number of runs, $Fit_i$ to the fitness value obtained at run $i$, $Fit_a$ to the average fitness over the runs, and $Fit_b^i$ to the best fitness at run $i$.
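Given the 25 best fitness values collected over the independent runs, the summary statistics of Equations (15)-(17) reduce to a few lines of Python (an illustrative sketch; note that Equation (17) uses the population form with 1/N_r rather than 1/(N_r − 1)):

```python
import numpy as np

def summarize_runs(best_fits):
    """Summarize Nr per-run best fitness values per Equations (15)-(17)."""
    f = np.asarray(best_fits, dtype=float)
    return {
        "avg": f.mean(),
        "max": f.max(),                                # Equation (15)
        "min": f.min(),                                # Equation (16)
        "std": np.sqrt(np.mean((f - f.mean()) ** 2)),  # Equation (17)
    }
```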
The results of the QCHBA were compared with the electric fish optimization (EFO) [24], the sinusoidal parameter adaptation incorporated with L-SHADE (LSEpSin) [32], the grey-wolf optimization algorithm (GWO) [33], the reptile search algorithm (RSA) [34], the L-SHADE with semi-parameter adaptation (LSHADECS) [35], and the self-adaptive differential evolution (SaDE) [36].

4.3. Results and Discussion

The results obtained for the proposed QCHBA and the other FS methods are introduced in this section. The comparison was performed with the traditional HBA, the chaotic HBA (CHBA), GWO, EFO, RSA, LSHADE, LSHADECS, and SaDE. In the initial investigations, the parameter settings of these FS algorithms were as in their original implementations. The common parameters were set as N = 15 and t_max = 50. To ensure a fair comparison, each method was run 25 times. The dataset was split into a training set representing 80% of the total and a testing set representing the remaining 20%.
The results for the QCHBA and the other methods based on the average fitness values are given in Table 2. It can be seen from these results that the QCHBA had the smallest value for seven datasets, representing nearly 39% of the tested data. The CHBA and the HBA each showed the best performance for nearly 22% of the datasets. Figure 2 shows the average of each algorithm over the tested datasets based on the average fitness value. From this figure, it can be seen that the QCHBA had the smallest average overall compared to the other algorithms, followed by the traditional HBA and the CHBA with the second and third best fitness-value averages, respectively. LSHADECS was the worst algorithm based on the average fitness values obtained in this study.
The stability of the competing methods is given in Table 3. From the standard deviation (STD) of the fitness values for each method, it can be seen that LSHADE was the most stable method overall. The STD results for the QCHBA indicate the second-best stability, better than that of the remaining methods. The EFO, SaDE, HBA, RSA, and LSHADECS methods occupied the subsequent ranks based on their STD values. The same conclusions may be drawn from Figure 3.
Based on the results for the best fitness values given in Table 4 and Figure 4, the following observations can be made: First, the QCHBA method had the lowest values for nearly 61% of the eighteen datasets. This was followed by the HBA and CHBA methods, with the best values for nearly 27% and 22% of the datasets, respectively. The GWO and RSA methods achieved the best fitness values for 16% and 11% of the datasets, respectively. This indicates that the operators of the HBA were able to reach smaller fitness values than the other algorithms, and that adding the quantum and chaotic-map operators further increased the probability of reaching optimal solutions.
From the worst fitness value results for each method presented in Table 5 and Figure 5, the following observations can be made: The HBA and RSA showed better fitness values in their worst instances than the other methods. However, the proposed QCHBA approach maintained its superiority, with better worst-case fitness values for eight datasets, representing nearly 45% of the total number of datasets tested. The convergence curves of the algorithms are shown in Figure 6. These curves indicate the high efficiency of the proposed method, which converged faster towards the optimal subset of features than the other methods.
The efficiency of the QCHBA based on the accuracy measure is shown in Table 6. It obtained the highest accuracy for 10 datasets, representing nearly 56% of the total. Each of the HBA, CHBA, GWO, EFO, and RSA methods had the best accuracy for only two datasets, including dataset S18 (which was common to all of them). The average accuracy over the tested datasets is given in Figure 7. It can be seen that the accuracy of the QCHBA was better than that of the close second algorithm, the CHBA, by nearly 1.45%.
Table 7 and Figure 8 show the average number of features selected from each dataset by each FS method. From these results, it can be seen that the QCHBA selected the smallest number of features for nearly 50% of the datasets. RSA was the second-best algorithm according to the number of selected features, with the smallest number for three datasets, followed by the GWO and CHBA methods with the smallest number for two datasets each.
From the previous results and discussion, it is evident that integrating the quantum-based optimization (QBO) and the 2D chaotic Hénon map with the honey badger algorithm enhanced the ability to discover the subset of relevant features. The QBO and the 2D Hénon map are able to balance the switching between exploration and exploitation during the search for relevant features. Despite these advantages, the proposed QCHBA still suffers from certain limitations that can affect its performance. For example, the stability of the QCHBA still requires improvement, and this is critical. In addition, the time complexity of the QCHBA needs to be decreased.

5. Conclusions

This paper proposes a modified version of the honey badger algorithm (HBA) using a quantum-based optimization technique and a 2D chaotic Hénon map. The 2D Hénon map was applied to adjust the parameters C and β, as well as to update the digging phase and the density variable. The quantum-based optimization was applied to improve convergence towards the optimal solution. To assess the performance of the proposed approach, a set of experimental investigations was conducted through the use of eighteen datasets, with the results compared with those of other well-known FS methods, including RSA, EFO, GWO, LSHADE, LSHADECS, SaDE, and the traditional HBA. The comparative results indicated the strong performance of the proposed method; moreover, the influence of the QBO and the chaotic map in enhancing the performance of the proposed method was apparent.
In the future, the proposed QCHBA can be applied to solve different optimization problems. For example, it can be used to improve the performance of medical classification, the quality of service in cloud computing environments, investigation of plant diseases and other applications. In addition, it can be applied to handle multi-objective optimization problems.

Author Contributions

Conceptualization, D.Y., R.A.I., O.F.H. and M.A.E.; methodology, D.Y., R.A.I. and M.A.E.; software, D.Y., R.A.I., O.F.H. and M.A.E.; validation, D.Y., R.A.I. and M.A.E.; formal analysis, S.A., D.Y., R.A.I. and M.A.E.; investigation, S.A., D.Y., R.A.I. and M.A.E.; writing—original draft preparation, D.Y., R.A.I. and M.A.E.; writing—review and editing, S.A., D.Y., R.A.I., O.F.H. and M.A.E.; visualization, S.A., D.Y., R.A.I. and M.A.E.; supervision, M.A.E., R.A.I. and O.F.H.; project administration, S.A., D.Y., R.A.I. and M.A.E.; funding acquisition, S.A. All authors have read and agreed to the published version of the manuscript.

Funding

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R197), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Data Availability Statement

The data are available upon request from the corresponding author.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

1. Tubishat, M.; Idris, N.; Shuib, L.; Abushariah, M.A.; Mirjalili, S. Improved Salp Swarm Algorithm based on opposition based learning and novel local search algorithm for feature selection. Expert Syst. Appl. 2020, 145, 113–122.
2. Hancer, E.; Xue, B.; Karaboga, D.; Zhang, M. A binary ABC algorithm based on advanced similarity scheme for feature selection. Appl. Soft Comput. 2015, 36, 334–348.
3. Ewees, A.A.; Abualigah, L.; Yousri, D.; Algamal, Z.Y.; Al-qaness, M.A.; Ibrahim, R.A.; Abd Elaziz, M. Improved Slime Mould Algorithm based on Firefly Algorithm for feature selection: A case study on QSAR model. Eng. Comput. 2021, 38, 2407–2421.
4. Yousri, D.; Abd Elaziz, M.; Abualigah, L.; Oliva, D.; Al-Qaness, M.A.; Ewees, A.A. COVID-19 X-ray images classification based on enhanced fractional-order cuckoo search optimizer using heavy-tailed distributions. Appl. Soft Comput. 2021, 101, 107052.
5. Al-qaness, M.A. Device-free human micro-activity recognition method using WiFi signals. Geo-Spat. Inf. Sci. 2019, 22, 128–137.
6. Dahou, A.; Elaziz, M.A.; Zhou, J.; Xiong, S. Arabic sentiment classification using convolutional neural network and differential evolution algorithm. Comput. Intell. Neurosci. 2019, 2019, 2537689.
7. Rundo, L.; Tangherloni, A.; Cazzaniga, P.; Nobile, M.S.; Russo, G.; Gilardi, M.C.; Vitabile, S.; Mauri, G.; Besozzi, D.; Militello, C. A novel framework for MR image segmentation and quantification by using MedGA. Comput. Methods Programs Biomed. 2019, 176, 159–172.
8. Ortiz, A.; Górriz, J.; Ramírez, J.; Salas-Gonzalez, D.; Llamas-Elvira, J.M. Two fully-unsupervised methods for MR brain image segmentation using SOM-based strategies. Appl. Soft Comput. 2013, 13, 2668–2682.
9. Cheng, S.; Ma, L.; Lu, H.; Lei, X.; Shi, Y. Evolutionary computation for solving search-based data analytics problems. Artif. Intell. Rev. 2021, 54, 1321–1348.
10. Nobile, M.S.; Tangherloni, A.; Rundo, L.; Spolaor, S.; Besozzi, D.; Mauri, G.; Cazzaniga, P. Computational intelligence for parameter estimation of biochemical systems. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8.
11. Ibrahim, R.A.; Abualigah, L.; Ewees, A.A.; Al-Qaness, M.A.; Yousri, D.; Alshathri, S.; Abd Elaziz, M. An electric fish-based arithmetic optimization algorithm for feature selection. Entropy 2021, 23, 1189.
12. Ibrahim, R.; Ewees, A.; Oliva, D.; Abd Elaziz, M.; Lu, S. Improved salp swarm algorithm based on particle swarm optimization for feature selection. J. Ambient Intell. Hum. Comput. 2019, 10, 3155–3169.
13. Aziz, M.A.E.; Hassanien, A.E. Modified cuckoo search algorithm with rough sets for feature selection. Neural Comput. Appl. 2018, 29, 925–934.
14. Abd Elaziz, M.; Moemen, Y.S.; Hassanien, A.E.; Xiong, S. Toxicity risks evaluation of unknown FDA biotransformed drugs based on a multi-objective feature selection approach. Appl. Soft Comput. 2020, 97, 105509.
15. Ibrahim, R.A.; Abd Elaziz, M.; Ewees, A.A.; El-Abd, M.; Lu, S. New feature selection paradigm based on hyper-heuristic technique. Appl. Math. Model. 2021, 98, 14–37.
16. Xue, B.; Zhang, M.; Browne, W.N. Particle swarm optimization for feature selection in classification: A multi-objective approach. IEEE Trans. Cybern. 2012, 43, 1656–1671.
17. Hancer, E. Differential evolution for feature selection: A fuzzy wrapper–filter approach. Soft Comput. 2019, 23, 5233–5248.
18. Tsai, C.F.; Eberle, W.; Chu, C.Y. Genetic algorithms in feature and instance selection. Knowl.-Based Syst. 2013, 39, 240–247.
19. Sayed, G.I.; Hassanien, A.E.; Azar, A.T. Feature selection via a novel chaotic crow search algorithm. Neural Comput. Appl. 2019, 31, 171–188.
20. Oh, I.S.; Lee, J.S.; Moon, B.R. Hybrid genetic algorithms for feature selection. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 1424–1437.
21. Abdel-Basset, M.; Ding, W.; El-Shahat, D. A hybrid Harris Hawks optimization algorithm with simulated annealing for feature selection. Artif. Intell. Rev. 2021, 54, 593–637.
22. Sahlol, A.T.; Yousri, D.; Ewees, A.A.; Al-Qaness, M.A.; Damasevicius, R.; Elaziz, M.A. COVID-19 image classification using deep features and fractional-order marine predators algorithm. Sci. Rep. 2020, 10, 15364.
23. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250.
24. Yilmaz, S.; Sen, S. Electric fish optimization: A new heuristic algorithm inspired by electrolocation. Neural Comput. Appl. 2020, 32, 11543–11578.
25. Hashim, F.A.; Houssein, E.H.; Hussain, K.; Mabrouk, M.S.; Al-Atabany, W. Honey Badger Algorithm: New metaheuristic algorithm for solving optimization problems. Math. Comput. Simul. 2022, 192, 84–110.
26. Elseify, M.A.; Kamel, S.; Abdel-Mawgoud, H.; Elattar, E.E. A Novel Approach Based on Honey Badger Algorithm for Optimal Allocation of Multiple DG and Capacitor in Radial Distribution Networks Considering Power Loss Sensitivity. Mathematics 2022, 10, 2081.
27. Almodfer, R.; Abd Elaziz, M.; Alshathri, S.; Abualigah, L.; Mudhsh, M.; Shahzad, K.; Issa, M. Improving Parameters Estimation of Fuel Cell Using Honey Badger Optimization Algorithm. Front. Energy Res. 2022, 10, 875332.
28. Nassef, A.M.; Houssein, E.H.; Helmy, B.E.D.; Rezk, H. Modified honey badger algorithm based global MPPT for triple-junction solar photovoltaic system under partial shading condition and global optimization. Energy 2022, 254, 124363.
29. Kumar, D.S.R.; Kumar, K.P.; Raju, K.G.; Gowsalya, S.; Balraj, L.; Srivastava, A.K. An IoT-based Optimization scheme on task scheduling for minimizing energy in Cloud Computing. In Proceedings of the 2022 8th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 25–26 March 2022; Volume 1, pp. 1–6.
30. Hénon, M. A two-dimensional mapping with a strange attractor. In The Theory of Chaotic Attractors; Springer: New York, NY, USA, 1976; pp. 94–102.
31. Asuncion, A.; Newman, D. UCI Machine Learning Repository. 2010. Available online: https://archive.ics.uci.edu/ml/datasets/SML2010 (accessed on 14 September 2022).
32. Awad, N.H.; Ali, M.Z.; Suganthan, P.N.; Reynolds, R.G. An ensemble sinusoidal parameter adaptation incorporated with L-SHADE for solving CEC2014 benchmark problems. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 2958–2965.
33. Ibrahim, R.A.; Abd Elaziz, M.; Lu, S. Chaotic opposition-based grey-wolf optimization algorithm based on differential evolution and disruption operator for global optimization. Expert Syst. Appl. 2018, 108, 1–27.
34. Abualigah, L.; Abd Elaziz, M.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158.
35. Mohamed, A.W.; Hadi, A.A.; Fattouh, A.M.; Jambi, K.M. LSHADE with semi-parameter adaptation hybrid with CMA-ES for solving CEC 2017 benchmark problems. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), San Sebastián, Spain, 5–8 June 2017; pp. 145–152.
36. Qin, A.K.; Suganthan, P.N. Self-adaptive differential evolution algorithm for numerical optimization. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, Scotland, UK, 2–5 September 2005; Volume 2, pp. 1785–1791.
Figure 1. Hénon attractor, and x, y distributions.
Figure 2. Competitive results of QCHBA and other methods in terms of average fitness values.
Figure 3. Average of stability results of QCHBA and other algorithms overall for the tested datasets.
Figure 4. The efficiency of QCHBA and other methods according to the best fitness value measure.
Figure 5. Performance of QCHBA and other methods in terms of worst fitness values.
Figure 6. Convergence curves of the competitive algorithms for S1, S2, S7, and S17.
Figure 7. Performance of QCHBA and other methods based on the average accuracy.
Figure 8. Efficiency of QCHBA and other methods based on the average number of selected features.
Table 1. Dataset descriptions.
| Dataset | Number of Features | Number of Instances | Number of Classes | Data Category |
|---|---|---|---|---|
| Breastcancer (S1) | 9 | 699 | 2 | Biology |
| BreastEW (S2) | 30 | 569 | 2 | Biology |
| CongressEW (S3) | 16 | 435 | 2 | Politics |
| Exactly (S4) | 13 | 1000 | 2 | Biology |
| Exactly2 (S5) | 13 | 1000 | 2 | Biology |
| HeartEW (S6) | 13 | 270 | 2 | Biology |
| IonosphereEW (S7) | 34 | 351 | 2 | Electromagnetic |
| KrvskpEW (S8) | 36 | 3196 | 2 | Game |
| Lymphography (S9) | 18 | 148 | 2 | Biology |
| M-of-n (S10) | 13 | 1000 | 2 | Biology |
| PenglungEW (S11) | 325 | 73 | 2 | Biology |
| SonarEW (S12) | 60 | 208 | 2 | Biology |
| SpectEW (S13) | 22 | 267 | 2 | Biology |
| Tic-tac-toc (S14) | 9 | 958 | 2 | Game |
| Vote (S15) | 16 | 300 | 2 | Politics |
| WaveformEW (S16) | 40 | 5000 | 3 | Physics |
| Water (S17) | 13 | 178 | 3 | Chemistry |
| Zoo (S18) | 16 | 101 | 6 | Artificial |
Table 2. Results of QCHBA and the other methods based on the average fitness values.
| Dataset | HBA | CHBA | QCHBA | bGWO | EFO | RSA | LSHADE | LSHADECS | SaDE |
|---|---|---|---|---|---|---|---|---|---|
| S1 | 0.0568 | 0.0551 | 0.0736 | 0.0627 | 0.0823 | 0.0824 | 0.0833 | 0.1325 | 0.0917 |
| S2 | 0.0672 | 0.0552 | 0.0318 | 0.0796 | 0.0884 | 0.0752 | 0.1254 | 0.1996 | 0.1114 |
| S3 | 0.0443 | 0.0707 | 0.0849 | 0.0473 | 0.0989 | 0.0191 | 0.0655 | 0.1609 | 0.1063 |
| S4 | 0.0578 | 0.0710 | 0.0539 | 0.0989 | 0.0793 | 0.1103 | 0.2504 | 0.2942 | 0.3853 |
| S5 | 0.2214 | 0.1783 | 0.2496 | 0.2152 | 0.3134 | 0.2868 | 0.2258 | 0.3748 | 0.2285 |
| S6 | 0.1883 | 0.1557 | 0.1835 | 0.1920 | 0.1662 | 0.1733 | 0.2019 | 0.3583 | 0.2417 |
| S7 | 0.0794 | 0.0705 | 0.0923 | 0.0973 | 0.1457 | 0.0887 | 0.1160 | 0.1660 | 0.1148 |
| S8 | 0.0737 | 0.0842 | 0.0840 | 0.1024 | 0.0992 | 0.0953 | 0.3904 | 0.4010 | 0.3658 |
| S9 | 0.1944 | 0.1878 | 0.1150 | 0.1050 | 0.2242 | 0.1424 | 0.2567 | 0.2667 | 0.2516 |
| S10 | 0.0546 | 0.0689 | 0.0519 | 0.0766 | 0.0808 | 0.1181 | 0.2118 | 0.3503 | 0.2706 |
| S11 | 0.0341 | 0.0815 | 0.0080 | 0.0810 | 0.2006 | 0.1308 | 0.3200 | 0.3330 | 0.3500 |
| S12 | 0.0823 | 0.0764 | 0.0655 | 0.0840 | 0.1374 | 0.1213 | 0.2833 | 0.4167 | 0.3333 |
| S13 | 0.1188 | 0.2002 | 0.1518 | 0.1235 | 0.2345 | 0.2058 | 0.1630 | 0.2731 | 0.2417 |
| S14 | 0.2290 | 0.2409 | 0.2342 | 0.2678 | 0.2420 | 0.2276 | 0.2635 | 0.3234 | 0.2992 |
| S15 | 0.0714 | 0.0572 | 0.0504 | 0.1114 | 0.1043 | 0.0353 | 0.0567 | 0.1450 | 0.0850 |
| S16 | 0.2728 | 0.2810 | 0.2708 | 0.2914 | 0.3115 | 0.2969 | 0.3574 | 0.4506 | 0.4094 |
| S17 | 0.0565 | 0.0732 | 0.0647 | 0.0634 | 0.0796 | 0.0692 | 0.1833 | 0.1819 | 0.1583 |
| S18 | 0.0303 | 0.0400 | 0.0303 | 0.0481 | 0.0425 | 0.0338 | 0.3333 | 0.2133 | 0.0833 |
Table 3. Stability of the competing algorithms based on the fitness values.
| Dataset | HBA | CHBA | QCHBA | bGWO | EFO | RSA | LSHADE | LSHADECS | SaDE |
|---|---|---|---|---|---|---|---|---|---|
| S1 | 0.0110 | 0.0074 | 0.0092 | 0.0088 | 0.0037 | 0.0073 | 0.0000 | 0.0106 | 0.0000 |
| S2 | 0.0139 | 0.0129 | 0.0108 | 0.0112 | 0.0105 | 0.0122 | 0.0000 | 0.0192 | 0.0236 |
| S3 | 0.0130 | 0.0101 | 0.0016 | 0.0102 | 0.0138 | 0.0056 | 0.0000 | 0.0293 | 0.0236 |
| S4 | 0.0180 | 0.0452 | 0.0041 | 0.0476 | 0.0282 | 0.0521 | 0.0645 | 0.0000 | 0.0000 |
| S5 | 0.0100 | 0.0364 | 0.0175 | 0.0242 | 0.0095 | 0.0073 | 0.0000 | 0.0148 | 0.0236 |
| S6 | 0.0240 | 0.0281 | 0.0141 | 0.0315 | 0.0340 | 0.0143 | 0.0000 | 0.0223 | 0.0092 |
| S7 | 0.0212 | 0.0236 | 0.0025 | 0.0107 | 0.0033 | 0.0140 | 0.0000 | 0.0236 | 0.0070 |
| S8 | 0.0126 | 0.0151 | 0.0107 | 0.0107 | 0.0109 | 0.0119 | 0.0054 | 0.0089 | 0.0000 |
| S9 | 0.0399 | 0.0348 | 0.0183 | 0.0224 | 0.0094 | 0.0070 | 0.0094 | 0.0236 | 0.0096 |
| S10 | 0.0083 | 0.0337 | 0.0018 | 0.0308 | 0.0208 | 0.0565 | 0.0000 | 0.0422 | 0.0211 |
| S11 | 0.0137 | 0.0491 | 0.0152 | 0.0346 | 0.0007 | 0.0399 | 0.0000 | 0.0123 | 0.0236 |
| S12 | 0.0211 | 0.0239 | 0.0196 | 0.0190 | 0.0113 | 0.0097 | 0.0000 | 0.0000 | 0.0000 |
| S13 | 0.0061 | 0.0239 | 0.0024 | 0.0215 | 0.0138 | 0.0155 | 0.0000 | 0.0144 | 0.0196 |
| S14 | 0.0093 | 0.0162 | 0.0020 | 0.0121 | 0.0119 | 0.0084 | 0.0000 | 0.0206 | 0.0026 |
| S15 | 0.0216 | 0.0166 | 0.0165 | 0.0133 | 0.0106 | 0.0014 | 0.0000 | 0.0471 | 0.0236 |
| S16 | 0.0160 | 0.0139 | 0.0106 | 0.0094 | 0.0077 | 0.0080 | 0.0000 | 0.0032 | 0.0000 |
| S17 | 0.0140 | 0.0107 | 0.0021 | 0.0112 | 0.0172 | 0.0061 | 0.0000 | 0.0137 | 0.0196 |
| S18 | 0.0058 | 0.0100 | 0.0043 | 0.0081 | 0.0112 | 0.0163 | 0.0000 | 0.0000 | 0.0236 |
Table 4. Best fitness values generated using the competing methods.
| Dataset | HBA | CHBA | QCHBA | bGWO | EFO | RSA | LSHADE | LSHADECS | SaDE |
|---|---|---|---|---|---|---|---|---|---|
| S1 | 0.0462 | 0.0415 | 0.0608 | 0.0462 | 0.0766 | 0.0766 | 0.0833 | 0.1250 | 0.0917 |
| S2 | 0.0482 | 0.0291 | 0.0133 | 0.0616 | 0.0782 | 0.0595 | 0.1254 | 0.1860 | 0.0947 |
| S3 | 0.0332 | 0.0560 | 0.0664 | 0.0291 | 0.0791 | 0.0166 | 0.0655 | 0.1402 | 0.0897 |
| S4 | 0.0462 | 0.0462 | 0.0462 | 0.0538 | 0.0538 | 0.0538 | 0.2048 | 0.2942 | 0.3853 |
| S5 | 0.2192 | 0.1562 | 0.2327 | 0.2057 | 0.2968 | 0.2794 | 0.2258 | 0.3643 | 0.2118 |
| S6 | 0.1474 | 0.1551 | 0.1205 | 0.1321 | 0.1282 | 0.1551 | 0.2019 | 0.3426 | 0.2352 |
| S7 | 0.0489 | 0.0518 | 0.0303 | 0.0781 | 0.1408 | 0.0751 | 0.1160 | 0.1493 | 0.1099 |
| S8 | 0.0586 | 0.0628 | 0.0615 | 0.0853 | 0.0864 | 0.0758 | 0.3866 | 0.3947 | 0.3658 |
| S9 | 0.1233 | 0.1400 | 0.0810 | 0.0800 | 0.2163 | 0.1322 | 0.2500 | 0.2500 | 0.2448 |
| S10 | 0.0462 | 0.0462 | 0.0462 | 0.0462 | 0.0660 | 0.0538 | 0.2118 | 0.3205 | 0.2557 |
| S11 | 0.0025 | 0.0083 | 0.0037 | 0.0246 | 0.1997 | 0.0760 | 0.3200 | 0.3242 | 0.3333 |
| S12 | 0.0300 | 0.0317 | 0.0217 | 0.0548 | 0.1262 | 0.1057 | 0.2833 | 0.4167 | 0.3333 |
| S13 | 0.0955 | 0.1515 | 0.1061 | 0.0848 | 0.2212 | 0.1818 | 0.1630 | 0.2630 | 0.2278 |
| S14 | 0.2214 | 0.2278 | 0.2009 | 0.2524 | 0.2243 | 0.2196 | 0.2635 | 0.3089 | 0.2974 |
| S15 | 0.0425 | 0.0338 | 0.0275 | 0.0888 | 0.0950 | 0.0338 | 0.0567 | 0.1117 | 0.0683 |
| S16 | 0.2501 | 0.2595 | 0.2440 | 0.2711 | 0.3031 | 0.2877 | 0.3574 | 0.4483 | 0.4094 |
| S17 | 0.0308 | 0.0538 | 0.0308 | 0.0462 | 0.0635 | 0.0615 | 0.1833 | 0.1722 | 0.1444 |
| S18 | 0.0250 | 0.0250 | 0.0188 | 0.0250 | 0.0250 | 0.0188 | 0.3333 | 0.2133 | 0.0667 |
Table 5. Worst fitness values generated using the competing methods.
| Dataset | HBA | CHBA | QCHBA | bGWO | EFO | RSA | LSHADE | LSHADECS | SaDE |
|---|---|---|---|---|---|---|---|---|---|
| S1 | 0.0795 | 0.0731 | 0.0941 | 0.0766 | 0.0860 | 0.0941 | 0.0833 | 0.1400 | 0.0917 |
| S2 | 0.0995 | 0.0782 | 0.0646 | 0.1032 | 0.1049 | 0.0907 | 0.1254 | 0.2132 | 0.1281 |
| S3 | 0.0769 | 0.0914 | 0.1203 | 0.0685 | 0.1164 | 0.0291 | 0.0655 | 0.1816 | 0.1230 |
| S4 | 0.1020 | 0.2164 | 0.1020 | 0.1939 | 0.1142 | 0.1894 | 0.2960 | 0.2942 | 0.3853 |
| S5 | 0.2640 | 0.2577 | 0.3167 | 0.3058 | 0.3199 | 0.2961 | 0.2258 | 0.3853 | 0.2452 |
| S6 | 0.2359 | 0.2103 | 0.2115 | 0.2385 | 0.2103 | 0.1910 | 0.2019 | 0.3741 | 0.2481 |
| S7 | 0.1272 | 0.1125 | 0.1093 | 0.1172 | 0.1496 | 0.1376 | 0.1160 | 0.1826 | 0.1197 |
| S8 | 0.1103 | 0.1060 | 0.1049 | 0.1214 | 0.1129 | 0.1116 | 0.3942 | 0.4073 | 0.3658 |
| S9 | 0.2522 | 0.2522 | 0.1489 | 0.1630 | 0.2389 | 0.1598 | 0.2633 | 0.2833 | 0.2583 |
| S10 | 0.0795 | 0.1600 | 0.0872 | 0.1804 | 0.1097 | 0.1804 | 0.2118 | 0.3802 | 0.2855 |
| S11 | 0.0646 | 0.1917 | 0.1065 | 0.1492 | 0.2015 | 0.1843 | 0.3200 | 0.3417 | 0.3667 |
| S12 | 0.1090 | 0.1310 | 0.1045 | 0.1195 | 0.1526 | 0.1295 | 0.2833 | 0.4167 | 0.3333 |
| S13 | 0.1212 | 0.2318 | 0.1939 | 0.1682 | 0.2561 | 0.2212 | 0.1630 | 0.2833 | 0.2556 |
| S14 | 0.2401 | 0.2870 | 0.2734 | 0.3057 | 0.2559 | 0.2418 | 0.2635 | 0.3380 | 0.3010 |
| S15 | 0.1225 | 0.0925 | 0.0925 | 0.1363 | 0.1225 | 0.0363 | 0.0567 | 0.1783 | 0.1017 |
| S16 | 0.3104 | 0.3081 | 0.2964 | 0.3093 | 0.3210 | 0.3062 | 0.3574 | 0.4528 | 0.4094 |
| S17 | 0.0788 | 0.0942 | 0.1038 | 0.0885 | 0.1019 | 0.0769 | 0.1833 | 0.1917 | 0.1722 |
| S18 | 0.0438 | 0.0563 | 0.0438 | 0.0625 | 0.0500 | 0.0563 | 0.3333 | 0.2133 | 0.1000 |
Table 6. Average accuracy obtained using QCHBA and the other methods.
| Dataset | HBA | CHBA | QCHBA | bGWO | EFO | RSA | LSHADE | LSHADECS | SaDE |
|---|---|---|---|---|---|---|---|---|---|
| S1 | 0.9479 | 0.9739 | 0.9764 | 0.9643 | 0.9629 | 0.9529 | 0.9286 | 0.9536 | 0.9429 |
| S2 | 0.9605 | 0.9702 | 0.9930 | 0.9491 | 0.9684 | 0.9579 | 0.8684 | 0.8816 | 0.9035 |
| S3 | 0.9310 | 0.9638 | 0.9747 | 0.9759 | 0.9609 | 0.9885 | 0.9540 | 0.9368 | 0.9655 |
| S4 | 0.9935 | 0.9823 | 0.9965 | 0.9615 | 0.9820 | 0.9510 | 0.7375 | 0.6750 | 0.6700 |
| S5 | 0.7655 | 0.8233 | 0.7470 | 0.7750 | 0.7270 | 0.7600 | 0.7250 | 0.6550 | 0.7300 |
| S6 | 0.8352 | 0.8546 | 0.8898 | 0.8269 | 0.8889 | 0.8519 | 0.7593 | 0.7500 | 0.7593 |
| S7 | 0.9254 | 0.9423 | 0.9577 | 0.9211 | 0.9127 | 0.9296 | 0.9296 | 0.9296 | 0.9437 |
| S8 | 0.9286 | 0.9609 | 0.9598 | 0.9513 | 0.9713 | 0.9275 | 0.5852 | 0.6414 | 0.5594 |
| S9 | 0.8333 | 0.8317 | 0.9361 | 0.9340 | 0.8299 | 0.9133 | 0.8000 | 0.8333 | 0.8333 |
| S10 | 0.9988 | 0.9838 | 0.9975 | 0.9845 | 0.9820 | 0.9440 | 0.7450 | 0.6900 | 0.7100 |
| S11 | 0.9733 | 0.9262 | 0.9967 | 0.9431 | 0.8667 | 0.8614 | 0.7333 | 0.7386 | 0.8846 |
| S12 | 0.9595 | 0.9643 | 0.9738 | 0.9560 | 0.9381 | 0.9000 | 0.6905 | 0.5476 | 0.5357 |
| S13 | 0.8185 | 0.8583 | 0.8769 | 0.8954 | 0.8111 | 0.8037 | 0.8148 | 0.8241 | 0.8519 |
| S14 | 0.8108 | 0.8207 | 0.8070 | 0.7766 | 0.8052 | 0.8188 | 0.7188 | 0.7760 | 0.8750 |
| S15 | 0.9467 | 0.9708 | 0.9783 | 0.9217 | 0.9467 | 0.9733 | 0.9667 | 0.9833 | 0.9083 |
| S16 | 0.7451 | 0.7525 | 0.7624 | 0.7368 | 0.7394 | 0.7118 | 0.5370 | 0.5230 | 0.5170 |
| S17 | 0.9764 | 0.9944 | 0.9819 | 0.9847 | 0.9833 | 0.9778 | 0.8333 | 0.9306 | 0.9861 |
| S18 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 0.6667 | 0.9333 | 0.8333 |
Table 7. Average number of selected features.
| Dataset | HBA | CHBA | QCHBA | bGWO | EFO | RSA | LSHADE | LSHADECS | SaDE |
|---|---|---|---|---|---|---|---|---|---|
| S1 | 2.85 | 3.2 | 2.4 | 2.75 | 4.4 | 3.6 | 5 | 5 | 7 |
| S2 | 8.5 | 9.5 | 7.65 | 10.15 | 18 | 11.2 | 13 | 13.5 | 14.5 |
| S3 | 6.1 | 3.45 | 3.65 | 4.1 | 10.2 | 5.4 | 6 | 6.5 | 7.5 |
| S4 | 7.15 | 6.75 | 6.6 | 8.35 | 8.2 | 8.6 | 8 | 6 | 12 |
| S5 | 2.5 | 4.35 | 2.85 | 1.65 | 8.8 | 9.2 | 8 | 9 | 9.5 |
| S6 | 7.35 | 5.2 | 6.85 | 4.7 | 8.6 | 5.2 | 6 | 8.5 | 8 |
| S7 | 9.95 | 14.05 | 8.55 | 8.95 | 22.8 | 8.6 | 10 | 9.5 | 13 |
| S8 | 17.65 | 16.35 | 17.2 | 21.1 | 26.4 | 10.8 | 20 | 19.5 | 21 |
| S9 | 6.55 | 14 | 10.35 | 8.2 | 12.8 | 11.6 | 11.5 | 10.5 | 15 |
| S10 | 7.05 | 6.6 | 6.8 | 8.15 | 8.4 | 8.8 | 10 | 11 | 12.5 |
| S11 | 48.9 | 26.4 | 23.95 | 96.95 | 262 | 19.6 | 22.5 | 21 | 26 |
| S12 | 26.55 | 27.5 | 25.15 | 26.6 | 49 | 26.8 | 32.5 | 31.5 | 36 |
| S13 | 8.1 | 6.75 | 5.35 | 6.45 | 14.2 | 6.4 | 7.5 | 9 | 9.5 |
| S14 | 6.35 | 6.1 | 5.45 | 6 | 6 | 5.8 | 8.5 | 9.5 | 12.5 |
| S15 | 4.95 | 4.75 | 3.95 | 6.55 | 9 | 6.8 | 9.7 | 10.7 | 11.2 |
| S16 | 20.7 | 19.35 | 22.75 | 21.8 | 30.8 | 15 | 20.4 | 20.9 | 23.4 |
| S17 | 6.75 | 6.7 | 6.3 | 6.45 | 8.4 | 6.4 | 8.5 | 8.5 | 11.5 |
| S18 | 6.4 | 4.85 | 4.85 | 7.7 | 6.8 | 5.4 | 11.45 | 11.45 | 13.95 |