Article

A Bootstrap Framework for Aggregating within and between Feature Selection Methods

Department of Mathematics and Statistics, American University of Sharjah, Sharjah P.O. Box 26666, United Arab Emirates
* Author to whom correspondence should be addressed.
Entropy 2021, 23(2), 200; https://doi.org/10.3390/e23020200
Submission received: 8 December 2020 / Revised: 1 February 2021 / Accepted: 3 February 2021 / Published: 6 February 2021
(This article belongs to the Collection Maximum Entropy and Its Applications)

Abstract

In the past decade, big data has become increasingly prevalent in a large number of applications. As a result, datasets suffering from noise and redundancy issues have necessitated the use of feature selection across multiple domains. However, a common concern in feature selection is that different approaches can give very different results when applied to similar datasets. Aggregating the results of different selection methods helps to resolve this concern and control the diversity of selected feature subsets. In this work, we implemented a general framework for the ensemble of multiple feature selection methods. Based on diversified datasets generated from the original set of observations, we aggregated the importance scores produced by multiple feature selection techniques in two ways: the Within Aggregation Method (WAM), which aggregates importance scores within a single feature selection method; and the Between Aggregation Method (BAM), which aggregates importance scores between multiple feature selection methods. We applied the proposed framework to 13 real datasets with diverse characteristics. The experimental evaluation showed that WAM provides an effective tool for determining the best feature selection method for a given dataset. WAM also showed greater stability than BAM in terms of identifying important features, while the computational demands of the two methods were comparable. The results of this work suggest that by applying both WAM and BAM, practitioners can gain a deeper understanding of the feature selection process.

1. Introduction

Over the years, feature selection has become a fundamental preprocessing tool in data mining and machine learning. Otherwise known as attribute or variable selection, feature selection refers to the process of reducing the number of input variables for predictive modeling. The objective of feature selection is three-fold: improving the prediction performance, providing faster and more cost-effective predictors, and gaining a better understanding of the underlying process that generated the data. This objective is achieved through the characterization and elimination of irrelevant and redundant features, leaving only a subset of the most useful features to be used in further analysis. As such, features represent the individual independent variables typically used as predictors within a model. Throughout this paper, we use the terms variable and feature interchangeably.
There are generally three classes of feature selection methods based on how the method interacts with the learning algorithm: filter, wrapper and embedded methods [1]. Filter methods evaluate the importance of features as a pre-processing operation to the learning algorithm and select the best feature subsets through some information metrics without direct input from the target variable. Filters are known to have high computational efficiency compared to the wrapper and embedded methods. Alternatively, the wrapper methods apply some search in the feature space and use the learning algorithm to evaluate the importance of feature subsets. Thus, wrappers are deemed to be computationally expensive and can be slow due to the need to apply the learning algorithm to each new feature subset. The embedded methods perform the feature selection internally to the learning algorithm. In this approach, a predefined importance criterion is integrated into the learning algorithm, and features which meet the set criteria are selected. Embedded methods have less computational cost than wrappers but are more likely to suffer from over-fitting.
One of the most common challenges encountered in feature selection is choosing the most suitable method for a given problem. In general, there is no single feature selection method that outperforms all others across most applications; the three classes of feature selection methods are frequently suitable under varying conditions. For instance, the greedy randomized adaptive heuristic (GRASP) filter technique introduced in [2] provides efficient results for problems with a large number of nominal features or with simulated instances, when the bias introduced by the classification methods is minimized. In this context, an overall understanding of the various types of feature selection algorithms is often needed before an appropriate choice can be made. Often, such a choice is based on massive empirical evaluations of diverse feature selection methods ([3,4]). Alternatively, meta-learning and ensemble methods are two widely used approaches for determining the most appropriate feature selection algorithm for a problem or for circumventing this choice entirely.
Meta-learning predicts the most appropriate feature selection method for a given dataset. In meta-learning, the feature selection task is treated as a supervised learning problem in which the datasets are objects and the target function maps a dataset to the feature selection method that shows the highest performance based on a certain criterion. With the increase in feature selection techniques and the introduction of new methods, the implementation of meta-learning can provide a valuable way of weighing the most suitable feature selection methods. However, like other learning techniques, in order to make reliable recommendations, meta-learning can be limited by its need for a suitable meta-database that is representative of the given problem domain [5]. An excellent review of meta-learning algorithms can be found in [6].
Ensemble methods were originally developed to enhance classification performance [7]. Ensemble feature selection has two main components: diversification, to create varying feature selection outputs; and aggregation, to combine the generated outputs. Diversification can be achieved by data resampling [8]. In data resampling, several randomly generated subsamples are drawn from the original dataset, and then a feature selection technique is applied on each generated subsample. The ensemble method combines the results generated by each feature selection method [9]. Bootstrap sampling is commonly used to generate the random subsamples [10,11,12]. When the amount of available data is sufficient, partitioning the data into non-overlapping chunks has also been proposed [13].
Although a number of aggregation techniques have been proposed in the literature, there is no clear rule to determine which of them should be chosen for a specific feature selection task. However, the simple approach of mean-based aggregation seems to be efficient and compelling in most cases [14]. In [15], Kolde et al. (2012) proposed a novel rank aggregation method based on order statistics and applied it to gene selection. The approach detects genes (features) that are ranked consistently better than the expected behavior of uncorrelated features and assigns a significance score to each gene. In [16], Ditzler et al. (2014) developed a statistical testing framework in which a statistical test is performed on the number of times each feature has been selected to determine whether it belongs to the relevant feature set or not.
The robustness or stability of the feature selection method is of paramount importance to reducing dimensionality and improving the performance of the learning algorithm. Stability measures the insensitivity of the feature selection method to variations in the training set. In other words, unstable feature selection methods can produce varied feature rankings when a single feature selection technique is applied to different training samples generated from the same dataset. Instability can also occur when very different feature selection techniques applied to the same dataset produce different feature rankings. The stability of feature selection has attracted a plethora of research in machine learning and data mining communities over the last decade [8]. It has become crucial to supplement the investigation of model performance with stability analysis in order to ensure the quality of feature selection [17].
This paper introduces a general framework for a bootstrap ensemble in which feature selection results are aggregated within and between multiple feature selection methods. Multiple subsamples are generated from the original dataset by bootstrapping, and the feature selection techniques are applied on each subsample. The aggregation is thus two-fold; first, the single feature selection method is aggregated across subsamples, and second, different feature selection methods are combined into a single set. The mean-based aggregation rule is used where the generated importance scores are combined within and between the feature selection methods. Because filter feature selection methods are known for their computational efficiency, we chose four traditional filter techniques for our experimental framework, although wrappers and embedded methods are also applicable. Our experimental results with real data sets demonstrated that ensembling multiple feature selection methods improves the performance of the learning algorithm, guides the selection of the optimal feature subset and facilitates the identification of the most appropriate feature selection method for a given dataset. Furthermore, comparisons of the two folds of aggregation revealed that, on the whole, aggregating within a single feature selection method outperforms aggregating between multiple feature selection methods.
The rest of this paper is organized as follows. In Section 2, we introduce the general framework for the bootstrap aggregation within and between multiple feature selection methods. In Section 3, we describe the robustness metrics used for the evaluation of the feature selection stability. In Section 4, we present and analyze the results of our experimental work. Finally, conclusions and some insights into future work are presented in Section 5.

2. Bootstrap Aggregation Framework

Let us consider a dataset $S(\mathbf{X}, Y)$ with $n$ observations and $p$ features, where $n, p \in \mathbb{Z}_{>0}$; that is, $\mathbf{X} = [x_{ij}]_{n \times p} \in \mathbb{R}^{n \times p}$ is the matrix of observations and $Y$ is the target variable (i.e., the rows are the observations and the columns are the variables). Moreover, $x_{ij}$ denotes observation $i$ of feature $j$. The goal of this work is to reduce the number of features in the dataset $\mathbf{X}$ in order to predict the target variable $Y$. Now, let $\{V_1, \ldots, V_p\}$ denote the set of features (variables) in $\mathbf{X}$. The dataset $S$ is divided into a training dataset $\mathbb{X}$ and a testing dataset $\mathbb{T}$, where $\mathbb{X} = \mathbf{X}_{rn, p}$ and $\mathbb{T} = \mathbf{X}_{n - rn, p}$ with $0 < r < 1$. For instance, consider the Philippine dataset from Table 1, which has $n = 5832$ observations and $p = 309$ features, so that $\{V_1, \ldots, V_{309}\}$ denotes its feature set. Two-thirds of the Philippine dataset is taken for training, whereas one-third is used for testing; the training data are then denoted by $\mathbb{X} = \mathbf{X}_{3888, 309}$ and the testing data by $\mathbb{T} = \mathbf{X}_{1944, 309}$. We use feature selection to reduce the 309 features in the Philippine dataset, leading to a dataset containing only the most relevant features.
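As a concrete illustration, a minimal R sketch of this two-thirds/one-third split is given below; the data frame name `philippine` and the seed are placeholders used only for this example.

```r
set.seed(2021)                                           # arbitrary seed for reproducibility
n <- nrow(philippine)                                    # n = 5832 observations
train_idx <- sample(seq_len(n), size = floor(2/3 * n))   # r = 2/3 of the rows
X_train <- philippine[train_idx, ]                       # training dataset X (3888 rows)
X_test  <- philippine[-train_idx, ]                      # testing dataset T (1944 rows)
```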
Let $FS_1, \ldots, FS_t$ denote the feature selection methods used, where $t \in \mathbb{Z}_{>0}$. In addition, we assume that each feature selection method $FS_q \in \{FS_1, \ldots, FS_t\}$ generates a feature importance score $\ell_j \in \mathbb{R}$ for every feature $V_j \in \{V_1, \ldots, V_p\}$. Although rank aggregation could also be used for the feature selection process as an alternative to score aggregation, aggregating the ranks from different feature selection algorithms might result in ties. For instance, three feature selection algorithms might rank one feature as 2, 1 and 3 and another feature as 2, 3 and 1; the average rank for both features would then be 2. A merit of this study is that it uses the importance scores to aggregate feature selection methods, as the scores are measured on a finer scale than the ranks and can better differentiate between the features. Furthermore, averaging the actual importance scores is simple to implement and perform. For a meaningful comparison of the scores derived from different feature selection algorithms, a normalization technique is implemented. In this section, we discuss the use of bootstrap techniques to combine the feature importance scores within each $FS_q$ and between the $FS_q$ for all $q \in \{1, \ldots, t\}$. For the convenience of presentation and interpretation, the following terminologies are used in the rest of the paper:
-
The Within Aggregation Method (WAM) refers to aggregating importance scores within a single feature selection method.
-
The Between Aggregation Method (BAM) refers to aggregating importance scores between different feature selection methods.
Due to the simplicity and the efficiency of the arithmetic mean [14], it is used in this paper to aggregate the feature importance scores resulting from the different feature selection methods. However, the proposed framework allows the application of other score or rank aggregation rules, including the median, geometric mean and others.
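To make the rank-tie issue discussed above concrete, the following small R sketch uses made-up importance scores for three features under three hypothetical feature selection methods; averaging the ranks leaves all three features tied, whereas averaging the scores still separates them.

```r
# Rows = features, columns = hypothetical feature selection methods (illustrative values only)
scores <- matrix(c(0.9, 0.3, 0.6,    # feature A
                   0.5, 0.8, 0.2,    # feature B
                   0.1, 0.4, 0.9),   # feature C
                 nrow = 3, byrow = TRUE,
                 dimnames = list(c("A", "B", "C"), paste0("FS", 1:3)))

ranks <- apply(-scores, 2, rank)     # rank 1 = most important, separately for each method
rowMeans(ranks)                      # A, B and C all tie with mean rank 2
rowMeans(scores)                     # score aggregation still separates the features
# (In the proposed framework, a min-max normalization is applied to each method's
#  scores before averaging so that the scales are comparable; see Section 2.2.)
```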

2.1. Feature Selection Based on WAM

Let us consider a training dataset $\mathbb{X}$ and a feature selection method $FS$. Let $\mathbb{X}_1, \mathbb{X}_2, \ldots, \mathbb{X}_m$ be bootstrapped samples from $\mathbb{X}$, where $m \in \mathbb{Z}_{>0}$. Then, we apply $FS$ on each $\mathbb{X}_s$, $s = 1, \ldots, m$, which in turn generates feature scores $\{\ell_{s1}, \ldots, \ell_{sp}\}$ that correspond to the set of features $\{V_1, \ldots, V_p\}$. Therefore, a score matrix $\mathbf{L} = [\ell_{sj}] \in \mathbb{R}^{m \times p}$ is generated after applying the feature selection method $FS$ on each bootstrap sample. In $\mathbf{L}$, column $j$ represents the $FS$ importance scores for variable $V_j$ over the $m$ bootstrap sample datasets. The final aggregated score for the feature $V_j$ is defined to be the mean of column $j$ in $\mathbf{L}$; we use the notation $\bar{\ell}_{.j} = \frac{1}{m} \sum_{s=1}^{m} \ell_{sj}$ to denote the aggregated score of $V_j$. Then, a rank vector $\mathbf{r} = (r_1, \ldots, r_p)$, $r_j \in \{1, 2, \ldots, p\}$, is assigned to the feature set $\{V_1, \ldots, V_p\}$ based on the aggregated scores $\{\bar{\ell}_{.1}, \ldots, \bar{\ell}_{.p}\}$. The feature set is then sorted from the most to the least important based on the rank vector $\mathbf{r}$. Now, based on a threshold parameter $0 < k \le 1$, we keep only the most important $100k\%$ of the feature set (determined by the rank vector $\mathbf{r}$). The WAM approach can be used to compare the performance of different feature selection techniques based on various supervised learning methods for a given dataset. The flowchart in Figure 1 explains the WAM. The following gives the associated computational Algorithm 1.
Algorithm 1 WAM Algorithm:
  • Given a training dataset $\mathbb{X}$ with $p$ features, a testing dataset $\mathbb{T}$, a feature selection method $FS$, a threshold parameter $k$, and a learning algorithm $M$:
    (i) For $s = 1, \ldots, m$, generate bootstrap samples $\mathbb{X}_1, \ldots, \mathbb{X}_m$ of the training dataset $\mathbb{X}$.
    (ii) Based on $FS$, get the feature score matrix $\mathbf{L}$.
    (iii) Get the aggregated score set $\{\bar{\ell}_{.1}, \ldots, \bar{\ell}_{.p}\}$.
    (iv) For the aggregated score set $\{\bar{\ell}_{.1}, \ldots, \bar{\ell}_{.p}\}$, get the corresponding rank vector $\mathbf{r} = (r_1, \ldots, r_p)$.
    (v) Based on the rank vector $\mathbf{r}$, keep only the top $100k\%$ of the variable set $\{V_1, \ldots, V_p\}$.
    (vi) Based on the selected feature set in (v), use the testing dataset $\mathbb{T}$ and a cross-validation technique to train and test the model $M$.
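A minimal R sketch of Algorithm 1 is given below. It assumes a generic scoring function `fs_score(features, target)` that returns one importance score per feature; this function and all object names are placeholders standing in for any of the filters used later (IG, SU, MRMR, CS). Step (vi), training and testing the learner with cross-validation, is omitted here and described in Section 4.2.

```r
# Sketch of the WAM algorithm, steps (i)-(v); fs_score is a user-supplied filter function.
wam <- function(X_train, target, fs_score, m = 1000, k = 0.5) {
  features <- setdiff(names(X_train), target)
  p <- length(features)
  # (i)-(ii): bootstrap the training data and build the m x p score matrix L
  L <- t(replicate(m, {
    boot <- X_train[sample(nrow(X_train), replace = TRUE), ]
    fs_score(boot[, features, drop = FALSE], boot[[target]])
  }))
  # (iii): aggregate by taking the mean score of each feature (column of L)
  agg <- colMeans(L)
  # (iv): rank the features from most to least important
  r <- rank(-agg, ties.method = "first")
  # (v): keep only the top 100k% of the features
  list(scores = agg, ranks = r, selected = features[r <= ceiling(k * p)])
}
```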

2.2. Feature Selection Based on BAM

Let us consider a training dataset $\mathbb{X}$ and feature selection methods $\{FS_1, \ldots, FS_t\}$. Let $\mathbb{X}_1, \mathbb{X}_2, \ldots, \mathbb{X}_m$ be bootstrapped samples from $\mathbb{X}$, where $m \in \mathbb{Z}_{>0}$. Then, we apply each $FS_q$, $q = 1, \ldots, t$, on each $\mathbb{X}_s$, $s = 1, \ldots, m$, which in turn generates feature scores $\{\ell_{s1}, \ldots, \ell_{sp}\}^{(q)}$ that correspond to the set of features $\{V_1, \ldots, V_p\}$. Therefore, a score matrix $\mathbf{L}^{(q)} = [\ell_{sj}^{(q)}] \in \mathbb{R}^{m \times p}$ is generated after applying the feature selection method $FS_q$ on each bootstrap sample. In $\mathbf{L}^{(q)}$, column $j$ represents the $FS_q$ scores for variable $V_j$ over the $m$ bootstrap sample datasets. Then, each column in the score matrix $\mathbf{L}^{(q)}$ is normalized using min–max normalization; that is,
$$\mathbf{L}^{*(q)} = [\ell_{sj}^{*(q)}], \quad \text{where} \quad \ell_{sj}^{*(q)} = \frac{\ell_{sj}^{(q)} - \min_{s} \ell_{sj}^{(q)}}{\max_{s} \ell_{sj}^{(q)} - \min_{s} \ell_{sj}^{(q)}}.$$
After the normalization, we use the arithmetic mean to combine the normalized score matrices across $FS_q$, $q = 1, \ldots, t$, into one score matrix $\bar{\mathbf{L}} = \frac{1}{t} \sum_{q=1}^{t} \mathbf{L}^{*(q)}$. That is, column $j$ in the score matrix $\bar{\mathbf{L}}$ represents the average of column $j$ over all considered feature selection methods $FS_1, \ldots, FS_t$. Then, the final aggregated scores $\{\bar{\ell}_{.1}, \ldots, \bar{\ell}_{.p}\}$ for the feature set $\{V_1, \ldots, V_p\}$ are the arithmetic means of columns $1, \ldots, p$ in $\bar{\mathbf{L}}$. The rank vector $\mathbf{r} = (r_1, \ldots, r_p)$, $r_j \in \{1, 2, \ldots, p\}$, is assigned to the feature set $\{V_1, \ldots, V_p\}$ based on the final aggregated scores. The feature set is then sorted from the most to the least important based on the rank vector $\mathbf{r}$, and the top $100k\%$ of features are retained for further analysis. The flowchart in Figure 2 explains the BAM. The following gives the associated computational Algorithm 2.
Algorithm 2 BAM Algorithm:
  • Given a training dataset $\mathbb{X}$ with $p$ features, a testing dataset $\mathbb{T}$, feature selection methods $\{FS_1, \ldots, FS_t\}$, a threshold parameter $k$, and a learning algorithm $M$:
    (i) For $s = 1, \ldots, m$, generate bootstrap samples $\mathbb{X}_1, \ldots, \mathbb{X}_m$ of the training dataset $\mathbb{X}$.
    (ii) For each feature selection method $FS_q \in \{FS_1, \ldots, FS_t\}$, get the feature score matrix $\mathbf{L}^{(q)}$.
    (iii) Normalize the score matrices in (ii) as $\mathbf{L}^{*(q)}$, $q = 1, \ldots, t$.
    (iv) Use the arithmetic mean to combine the matrices in (iii) into one score matrix $\bar{\mathbf{L}}$.
    (v) Use the score matrix in (iv) to compute the aggregated scores $\{\bar{\ell}_{.1}, \ldots, \bar{\ell}_{.p}\}$.
    (vi) Based on the aggregated scores in (v), compute the corresponding rank vector $\mathbf{r} = (r_1, \ldots, r_p)$.
    (vii) Based on the rank vector $\mathbf{r}$, keep the top $100k\%$ of the variable set $\{V_1, \ldots, V_p\}$.
    (viii) Based on the selected feature set in (vii), use the testing dataset $\mathbb{T}$ and a cross-validation technique to train and test the model $M$.
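Analogously, a minimal R sketch of Algorithm 2 is shown below, reusing the single set of bootstrap samples from step (i) across a list of scoring functions `fs_list`; the function names are placeholders for the four filters used in the paper.

```r
# Sketch of the BAM algorithm, steps (i)-(vii); fs_list is a list of filter functions.
bam <- function(X_train, target, fs_list, m = 1000, k = 0.5) {
  features <- setdiff(names(X_train), target)
  p <- length(features)
  # (i): bootstrap indices, reused for every feature selection method
  boots <- replicate(m, sample(nrow(X_train), replace = TRUE), simplify = FALSE)
  # (ii): one m x p score matrix per feature selection method
  L_list <- lapply(fs_list, function(fs)
    t(sapply(boots, function(idx) {
      boot <- X_train[idx, ]
      fs(boot[, features, drop = FALSE], boot[[target]])
    })))
  # (iii): min-max normalize each column (constant columns are mapped to 0)
  minmax <- function(x) {
    rng <- range(x)
    if (rng[1] == rng[2]) return(rep(0, length(x)))
    (x - rng[1]) / (rng[2] - rng[1])
  }
  L_star <- lapply(L_list, function(L) apply(L, 2, minmax))
  # (iv)-(v): average the normalized matrices, then aggregate by column means
  L_bar <- Reduce(`+`, L_star) / length(L_star)
  agg <- colMeans(L_bar)
  # (vi)-(vii): rank the features and keep the top 100k%
  r <- rank(-agg, ties.method = "first")
  list(scores = agg, ranks = r, selected = features[r <= ceiling(k * p)])
}
```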

3. Stability Analysis

In order to measure the stability of the feature rankings for each feature selection method in $\{FS_1, \ldots, FS_t\}$, we implement the similarity-based approach proposed in [18]. This approach depends on the representation language of the produced feature rankings. Considering a training dataset $\mathbb{X}$ and a feature selection method $FS$, let $\mathbb{X}_1, \mathbb{X}_2, \ldots, \mathbb{X}_m$ be bootstrapped samples from $\mathbb{X}$. Then, we apply $FS$ on each $\mathbb{X}_s$, $s = 1, \ldots, m$. This, in turn, produces any of the following three representations with respect to the feature set $\{V_1, \ldots, V_p\}$ and the sample dataset $\mathbb{X}_s$:
  • An importance score vector $\boldsymbol{\ell}_s = \{\ell_{s1}, \ldots, \ell_{sp}\}$, $\ell_{sj} \in \mathbb{R}$.
  • A rank vector $\mathbf{r}_s = \{r_{s1}, \ldots, r_{sp}\}$, $r_{sj} \in \{1, 2, \ldots, p\}$.
  • A subset of features represented by an index vector $\mathbf{w}_s = \{w_{s1}, \ldots, w_{sp}\}$, $w_{sj} \in \{0, 1\}$, where 1 indicates feature presence and 0 indicates feature absence.
Naturally, it is possible to transform any feature importance score vector into a rank vector $\mathbf{r}$ by sorting the importance scores. On the other hand, a rank vector $\mathbf{r}$ may be converted into an index vector $\mathbf{w}$ by selecting the top $100k\%$ features. The most common way to quantify the stability of a feature selection method is by simply taking the average of similarity comparisons between every pair of feature rankings derived from the different bootstrap samples as follows:
$$\mathrm{Stability} = \frac{2}{m(m-1)} \sum_{s=1}^{m-1} \sum_{v=s+1}^{m} \Phi(f_s, f_v),$$
where $\Phi(f_s, f_v)$ is the similarity measure between a pair of feature rankings from any two training samples $\mathbb{X}_s, \mathbb{X}_v$ ($1 \le s, v \le m$). Note that the feature rankings $(f_s, f_v)$ can be represented as a pair of importance score vectors, rank vectors or index vectors. Moreover, the multiplier $\frac{2}{m(m-1)}$ stems from the fact that there are $\frac{m(m-1)}{2}$ possible pairs of feature rankings among the $m$ samples.
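The pairwise averaging above can be expressed generically in R for any similarity function $\Phi$ operating on two rows of a representation matrix (score, rank or index vectors); a base-R sketch follows.

```r
# M:   an m x p matrix whose rows are score, rank or index vectors (one per bootstrap sample)
# phi: a similarity function taking two such rows and returning a scalar
stability <- function(M, phi) {
  m <- nrow(M)
  pairs <- combn(m, 2)    # all m(m-1)/2 pairs (s, v) with s < v
  sims <- apply(pairs, 2, function(sv) phi(M[sv[1], ], M[sv[2], ]))
  mean(sims)              # identical to 2/(m(m-1)) times the double sum
}
```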
Several similarity measures have been introduced in the literature [8]. In this paper, we use some popular measures of similarity for each of the representations described above. Accordingly, we will use feature selection to produce importance score vectors $\{\boldsymbol{\ell}_1, \ldots, \boldsymbol{\ell}_m\}$ that will be converted into rank vectors $\{\mathbf{r}_1, \ldots, \mathbf{r}_m\}$ and index vectors $\{\mathbf{w}_1, \ldots, \mathbf{w}_m\}$. We implement the following:
i.
Pearson’s correlation coefficient: In the case of similarity between two importance score vectors $(\boldsymbol{\ell}_s, \boldsymbol{\ell}_v)$ produced by one of the feature selection methods, Pearson’s correlation coefficient computes the similarity measure as
$$\Phi_{PCC}(\boldsymbol{\ell}_s, \boldsymbol{\ell}_v) = \frac{\sum_{j=1}^{p} (\ell_{sj} - \mu_s)(\ell_{vj} - \mu_v)}{\sqrt{\sum_{j=1}^{p} (\ell_{sj} - \mu_s)^2 \sum_{j=1}^{p} (\ell_{vj} - \mu_v)^2}},$$
where $\boldsymbol{\ell}_s$ is row $s$ of the score matrix $\mathbf{L}$, that is, the feature importance scores corresponding to the set of features $\{V_1, \ldots, V_p\}$ obtained from $\mathbb{X}_s$. Furthermore, $\mu_s$ is the mean of the row vector $\boldsymbol{\ell}_s$. Here, $\Phi_{PCC}(\boldsymbol{\ell}_s, \boldsymbol{\ell}_v) \in [-1, 1]$.
ii.
Spearman’s rank correlation coefficient: With regard to the similarity between two rank vectors $(\mathbf{r}_s, \mathbf{r}_v)$ produced by one of the feature selection methods, Spearman’s rank correlation coefficient measures the similarity between the two rank vectors as
$$\Phi_{SRCC}(\mathbf{r}_s, \mathbf{r}_v) = 1 - \frac{6 \sum_{j=1}^{p} (r_{sj} - r_{vj})^2}{p(p^2 - 1)},$$
where $\mathbf{r}_s$ is the rank vector corresponding to the set of features $\{V_1, \ldots, V_p\}$, derived from $\boldsymbol{\ell}_s$. Here, $\Phi_{SRCC}(\mathbf{r}_s, \mathbf{r}_v) \in [-1, 1]$.
iii.
Canberra’s distance: Another measure used to quantify the similarity between two rank vectors $(\mathbf{r}_s, \mathbf{r}_v)$ is Canberra’s distance [19]. This metric represents the absolute difference between two rank vectors as
$$\Phi_{CD}(\mathbf{r}_s, \mathbf{r}_v) = \sum_{j=1}^{p} \frac{|r_{sj} - r_{vj}|}{|r_{sj}| + |r_{vj}|}.$$
For easier interpretation, Canberra’s distance is normalized by dividing by $p$.
iv.
Jaccard’s index: Jaccard’s index measures the similarity between two finite sets; it is taken as the size of the intersection divided by the size of the union of the two sets. Given the index vectors $(\mathbf{w}_s, \mathbf{w}_v)$ used to represent the two sets, Jaccard’s index is given by
$$\Phi_{JI}(\mathbf{w}_s, \mathbf{w}_v) = \frac{|\mathbf{w}_s \cap \mathbf{w}_v|}{|\mathbf{w}_s \cup \mathbf{w}_v|} = \frac{|\mathbf{w}_s \cap \mathbf{w}_v|}{|\mathbf{w}_s| + |\mathbf{w}_v| - |\mathbf{w}_s \cap \mathbf{w}_v|},$$
with $\Phi_{JI}(\mathbf{w}_s, \mathbf{w}_v) \in [0, 1]$.
In addition to the stability scores based on similarity, one can compute the average standard deviation of the feature importance scores across all bootstrap samples for every feature selection method. By definition, the standard deviation measures the dispersion, or instability, of the feature selection scores under different training bootstraps. Similar to the work in [20], we define the average standard deviation for a feature selection method $FS$ as
$$ASD = \frac{1}{p} \sum_{j=1}^{p} SD(\mathbf{c}_j),$$
where $\mathbf{c}_j$ represents column $j$ in the standardized score matrix $\mathbf{L}^{*}$. In other words, $SD(\mathbf{c}_j)$ is the standard deviation of the standardized $FS$ importance scores for variable $V_j$ over the $m$ bootstrap samples. Generally, a low average standard deviation implies high stability, whereas a high average standard deviation suggests lower stability.
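Possible base-R implementations of the four similarity measures are sketched below, to be plugged into the generic `stability()` helper given earlier; the top fraction used for Jaccard's index is a parameter, set to 25% in the experiments of Section 4.3.2. The exact standardization behind the ASD follows [20] and is therefore not reproduced here.

```r
# Pearson's and Spearman's coefficients between two score / rank vectors
phi_pcc  <- function(l_s, l_v) cor(l_s, l_v, method = "pearson")
phi_srcc <- function(r_s, r_v) cor(r_s, r_v, method = "spearman")

# Canberra's distance between two rank vectors, normalized by p
phi_cd <- function(r_s, r_v) sum(abs(r_s - r_v) / (abs(r_s) + abs(r_v))) / length(r_s)

# Jaccard's index between the top 100k% feature subsets implied by two rank vectors
phi_ji <- function(r_s, r_v, k = 0.25) {
  top_s <- which(r_s <= ceiling(k * length(r_s)))
  top_v <- which(r_v <= ceiling(k * length(r_v)))
  length(intersect(top_s, top_v)) / length(union(top_s, top_v))
}

# Example usage with an m x p rank matrix R_mat:  stability(R_mat, phi_srcc)
```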

4. Experimental Evaluation

4.1. Experimental Datasets

Without loss of generality with respect to the applicability of the proposed framework, the experimental evaluation in this study focuses on the binary classification problems represented by the datasets listed in Table 1. The target variable for each dataset comprises two classes (e.g., “males” and “females”). Here, the class refers to the group categorization of the observations under the nominal target variable $Y$ (i.e., the value of $Y$). The datasets contain both numerical and nominal features with various dimensions and numbers of observations. Overall, the number of features across datasets ranges from 34 to 1617, while the number of observations ranges from 351 to 6598. The features/observations ratio ranges from as low as 0.007 to as high as 0.382. Furthermore, the binary class distribution spans from deeply imbalanced to perfectly balanced. On that account, these datasets provide an interesting benchmark for investigating the performance of the proposed framework and its characteristics.

4.2. Experimental Design

In this experiment, we compared the performance of the two aggregation methods, WAM and BAM. The experimental environment was Windows 10, 64-bit, with 16 GB RAM and an Intel(R) Xeon E-2124 (3.30 GHz) processor. For the implementation of the proposed framework (Section 2), we selected four filter selection methods: Information Gain (IG), Symmetrical Uncertainty (SU), Minimum Redundancy Maximum Relevance (MRMR) and the Chi-squared method (CS). For the Chi-squared method, numeric features were discretized using fixed-width binning. Each dataset was divided into training and testing datasets, where two-thirds of the dataset was used to obtain the feature rankings and one-third was used for testing. For the training phase, $m = 1000$ bootstrap samples were used. These bootstraps were utilized to obtain final rank vectors using the aggregation of feature importance scores within each feature selection method (WAM algorithm) and between the different feature selection methods (BAM algorithm). For the testing stage, 10 different $k$ thresholds were used, resulting in subsets containing the top $\{10\%, 20\%, \ldots, 100\%\}$ of the total features. Here, $k = 100\%$ refers to the baseline model in which all features were used and none of the feature selection methods were implemented. We should note here that other values of $k$ can be chosen. The existing literature provides some guidelines on how to choose $k$ for specific scenarios; for example, we refer interested readers to [21,22,23,24,25]. However, it is difficult to generalize directives on the basis of specific cases, and in practice, several values of $k$ are chosen and the classification accuracy, obtained using cross-validation or independent test data, is used to evaluate the quality of the chosen subsets [26,27].
In the experiment, a five-fold cross-validation procedure was implemented in the testing phase. Accordingly, we divided the testing set into five stratified samples; that is, the five folds were selected such that the distribution of the target variable $Y$ was approximately equal in each of the folds. For instance, given a testing set with 500 observations and a 7:3 distribution of the binary target variable, each fold in the stratified five-fold cross-validation would be expected to have 70 observations in one class and 30 in the other. Stratification was thus used to ensure that each class was equally represented among the different folds (a sketch of how such folds can be constructed is given after the list of classifiers below). For every iteration, one of these five stratified samples was used as a testing set, while the remaining four samples were used to train the model. On each iteration, the testing sample was used to evaluate the performance of the selected feature subsets based on the following classification algorithms:
  • Logistic regression: A statistical model used to model the probability of the occurrence of a class or an event using a logistic (sigmoid) function. It is a widely used classification algorithm in machine learning. The objective of logistic regression is to analyze the relationship between the categorical dependent variable and a set of independent variables (features) in order to predict the probability of the target variable. The maximum likelihood estimation method is usually used to estimate the logistic regression coefficients.
  • Naive Bayes: A probabilistic classifier based on the Bayes theorem [28]. It assumes that the occurrence of each input feature is independent from other features. It can be used for both binary and multiclass classification problems. Due to its simplicity, it is a fast machine learning algorithm which can be used with large datasets.
  • Random Forest: An ensemble model of decision trees in which every tree is trained on a random subsample to provide class prediction. The subsamples are drawn with replacements from the training dataset. The results from all the decision trees are then averaged to yield the model prediction [29]. Random Forest is useful to prevent over-fitting, but it can be complex to implement.
  • Support Vector Machine (SVM): A supervised learning algorithm in which each observation is plotted as a point in $p$-dimensional space (with $p$ being the number of features). SVM aims to identify the optimal hyperplane that segregates the data into separate classes; the selected hyperplane maximizes the margin, i.e., the distance between the hyperplane and the closest data points of the different classes [30].
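Returning to the stratified cross-validation described before the classifier list, a base-R sketch of how such stratified folds can be constructed is shown below (`y` is the binary target of the testing set; the function name and seed are illustrative).

```r
# Assign each observation to one of n_folds folds, stratified by class
stratified_folds <- function(y, n_folds = 5, seed = 1) {
  set.seed(seed)
  fold <- integer(length(y))
  for (cls in unique(y)) {
    idx <- which(y == cls)                        # observations of this class
    # spread the class evenly over the folds, in random order
    fold[idx] <- sample(rep(seq_len(n_folds), length.out = length(idx)))
  }
  split(seq_along(y), fold)                       # list of index vectors, one per fold
}
```

On each iteration, one fold serves as the evaluation sample and the remaining four are used to train the classifier.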
The entire experimental framework was implemented in the open-source statistical programming language R. Note that features with near-zero variance were removed prior to the analysis. After the final feature subsets were selected and used to build each classifier across the five folds, we evaluated the performance of the classifiers by estimating the area under the receiver operating characteristic (ROC) curve [31], hereafter referred to as the AUC. The AUC is a popular metric used to evaluate a model's ability to distinguish between classes.
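For reference, AUC values of the kind reported below can be computed with the ROCR package [31] roughly as follows; this is only a sketch, where `probs` denotes the predicted probabilities of the positive class and `labels` the true classes of a held-out fold.

```r
library(ROCR)

pred <- prediction(probs, labels)                         # pair predictions with true labels
auc  <- performance(pred, measure = "auc")@y.values[[1]]  # scalar AUC for this fold
```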

4.3. Discussion of the Results

Figures A1–A13 in Appendix A show the resulting AUC values after applying the WAM and the BAM to each dataset. The results of WAM are depicted by the curves corresponding to the individual feature selection methods: IG, SU, MRMR and CS. The results of BAM are represented by the single BAM curve in each plot. The AUC values, averaged over the five folds of the cross-validation, are plotted against the 10 different $100k\%$ thresholds (ranging from 10% to 100% of features used) in order to reveal the classification performance for different feature subsets in the testing stage. The running times for WAM and BAM are also reported in Figures A1–A13. The average running time was 2563 s for WAM and 2579 s for BAM. Overall, BAM was marginally slower than WAM across all datasets because BAM involves an additional aggregation step between the different feature selection methods. It should be emphasized that the computational costs of the proposed framework depend mostly on the feature selection methods used and the dataset composition. In the following two subsections, we analyze and compare the performance of the WAM and BAM algorithms in terms of their classification accuracy and their identification of optimal feature subsets (Section 4.3.1). In Section 4.3.2, we analyze and compare the stability behavior of the two algorithms and discuss the association between stability and accuracy.

4.3.1. Classification Performance

In most datasets, we note that the ensembling of bootstrap samples and the aggregation of feature importance scores within and between feature selection methods improved the baseline classification performance (the baseline being $k = 100\%$, where all features are used). In particular, the effectiveness of WAM was noticeable under the Naive Bayes classifier, where the AUC for most aggregated feature selection methods showed a steeper increase than under the other classifiers across the six datasets. For instance, the Naive Bayes accuracy increased from around 0.79 baseline AUC to nearly 0.86 in the Scene dataset (Figure A5) and from the baseline of 0.60 AUC to nearly 0.70 in the Fri dataset (Figure A4). Following the removal of the least significant features, a similar increase in the performance of the logistic regression model could be observed in the Jasmine and Philippine datasets (Figure A1 and Figure A7). At baseline performance, the Random Forest algorithm performed better than the other classifiers, with SVM achieving the lowest AUC scores in the majority of cases. Although the aggregated feature selection methods present some highly changing patterns depending on the number of features retained, the general trend showed an increase in accuracy scores. Moreover, the variability in the AUC scores between the different feature selection methods tended to be more pronounced in the high-dimensional datasets (e.g., Philippine, Jasmine, Scene, HIVA) than in the low-dimensional datasets (e.g., Satellite, Ada, Splice).
Likewise, it can be observed that BAM also improved the baseline accuracy of the classification models, especially under logistic regression. For instance, BAM improved the logistic regression AUC scores from around 0.83 at baseline to over 0.86 in Jasmine (Figure A1) and from around 0.55 at baseline to nearly 0.67 in Fri (Figure A4).
A general observation from the figures is that the aggregated feature selection methods under the WAM outperformed the BAM. In particular, for most of the datasets, there was at least one feature selection method aggregated under WAM that produced better AUC values than BAM. For instance, it is clear that the aggregated Chi-squared (CS) method outperformed BAM in the Philippine and Scene datasets (Figure A7 and Figure A5, respectively). In other datasets such as Fri (Figure A4) or Ionosphere (Figure A8), the aggregated IG and SU tended to produce the highest AUC scores across the classifiers. The performance of BAM appeared to be comparable to that of the aggregated feature selection methods in the datasets Image, Satellite, and Spectrometer. BAM was among the best-performing methods with respect to AUC in these datasets. The figures show that, in general, the aggregated IG and CS demonstrate the greatest correlation with BAM curves.
For the selection of the optimal $100k\%$ threshold based on AUC values, the WAM and BAM produced nearly consistent results. In the Philippine, Jasmine and Musk datasets, there was a noticeable pattern in which the classification AUC dropped sharply around the 30% mark, indicating that removing more than 70% of the features results in reduced accuracy of the trained model. Nevertheless, the exact feature reduction percentage is dependent on the dataset used. In the Scene dataset, most of the classifiers agreed on retaining around 50% of the top features. In the Splice and Optdigits datasets, the optimal $100k\%$ was 70%, while in the Spectrometer and HIVA datasets, it was almost 80% of the features.
It should be mentioned that the experimental results in this paper affirmed the fact that the performance of a given feature selection method is data-dependent. Overall, none of the feature selection methods produced the best AUC values across all datasets. For example, the aggregated MRMR showed the worst performance across multiple classifiers in the Scene, Spectrometer and Philippine datasets, whereas in the Optdigits dataset (Figure A9), it showed improved performance when most features were removed. A similar pattern was seen for the aggregated Chi-squared feature selection method, which showed the best performance in a number of datasets (e.g., Image, Scene, Philippine) and worse performance in others (e.g., Fri, Ionosphere). The BAM curve was seen to be the middle-best performing technique. This is of course because the BAM scores are simply the aggregated averages over the individual feature selection methods. In most cases, however, there was an obvious overlap in the AUC scores between the different feature selection techniques resulting from the fact that dominant features were able to maintain similar rankings under different feature selection algorithms.
In summary, the analysis of the AUC results reveals that WAM can be used as a powerful tool to help practitioners to identify the most suitable feature selection method for a given data set. It can also guide the selection of the optimal level of feature reduction while achieving the maximum level of learning accuracy.

4.3.2. Stability Analysis Results

Table 2 presents the results of the stability analysis of the four aggregated filters using WAM and BAM. Stability scores derived from the pairwise Pearson's correlation coefficient are calculated using the importance scores in the matrix $\mathbf{L} = [\ell_{sj}] \in \mathbb{R}^{m \times p}$ resulting from applying the feature selection method $FS$ on every bootstrap sample. The calculated Pearson's correlation coefficients are then averaged over all possible pairs. On the other hand, the averaged pairwise Spearman's rank correlation coefficients and Canberra's distances are calculated from the rank matrix $\mathbf{R} = [r_{sj}] \in \mathbb{R}^{m \times p}$, which is obtained by sorting the features from most to least important based on their importance scores in $\mathbf{L}$. The Jaccard's index is computed from the top 25% ranked feature subsets (represented by index vectors). Finally, the average standard deviation (ASD) is computed using the importance scores, after normalization, averaged over the 1000 bootstraps.
The bolded scores in Table 2 represent the best stability value for each dataset. For the Jaccard's index and the Spearman's and Pearson's correlation coefficients, this is the highest value of the measure, whereas for Canberra's distance and ASD it is the lowest value, since lower values of these two measures indicate higher stability. Unsurprisingly, none of the feature selection methods demonstrated the best stability behavior consistently across the 13 datasets. In other words, the stability of the aggregated feature selection methods was data-dependent. As such, even though all the stability measures agreed on a “winning” feature selection method in some datasets, it is not surprising that these methods varied throughout. For example, IG was the most stable method across all stability measures in the Ada and Philippine datasets, but Symmetrical Uncertainty was the most stable in Ionosphere and MRMR in the Spectrometer and HIVA datasets. Thus, none of the aggregated feature selection methods can be declared the most stable in every measure.
Contrasting the stability behavior of the BAM with that of the WAM, Table 2 shows that at least one of the four feature selection methods used generally exhibited higher stability when aggregated using WAM than when aggregated using BAM. With the exception of the Image dataset, the BAM stability scores were seen to fall in the middle of the range across every experimental stability measure. For the Image dataset, the BAM stability outperformed the stability of all the aggregated feature selection methods. It is noteworthy that BAM also achieved the best classification accuracy in the Image dataset (see Figure A3). Similarly, the IG and Chi-squared methods, which demonstrated the highest stability in the Fri and Scene datasets, also achieved the best classification performance in these two datasets, respectively (see Figure A4 and Figure A5). A similar pattern can be seen with respect to the higher stability of MRMR in the Optdigits and HIVA datasets. These observations suggest that there may exist a positive association between the classification performance of a feature selection method and its stability behavior. Intuitively, this presumed association depends on the stability metrics used as well as on the characteristics of the dataset and the learning algorithm, a topic that constitutes an interesting line of further research.
In summary, although BAM demonstrates a comparable stability behavior to that for each single feature selection method under WAM, it appears that, in 11 out of 12 datasets used in this paper, the stability of at least one individual method outperformed BAM. Interestingly, there appears to be a positive association between the classification performance of a feature selection method and its stability behavior. A feature selection method that outperforms others in terms of classification accuracy may also outperform them in terms of stability behavior.

5. Conclusions

Over the years, datasets have grown increasingly large in terms of their size and dimensionality. As a result, feature selection has become a necessary preprocessing tool in machine learning applications and the focus of a wide range of literature and research across many domains. This study has explored the potential of a bootstrap ensemble approach for feature selection in which the ensemble aggregation is performed within and between multiple feature selection methods.
The extensive experimental analysis of 13 different datasets selected from different domains has demonstrated that the Within Aggregation Method (WAM) is highly efficient in guiding the selection of the most suitable feature selection method for a given problem. As for reducing the dimensionality of the problem, our analysis showed that WAM and the Between Aggregation Method (BAM) were comparable in determining the optimal percentage reduction in the number of features; they were also comparable in terms of computational costs. It is important to emphasize that optimizing the feature subsets and the computational costs of the techniques depends largely on the dataset characteristics and the learning algorithm implemented.
In terms of stability, the WAM demonstrated better stability behavior than BAM in most datasets (11 out of 12). Overall, the BAM stability scores fell in the middle range of the computed values on each of the score-based and rank-based stability metrics, suggesting a desirable stability behavior of the method.
The experimental analysis also showed that there exists a positive association between the classification performance of a feature selection method and its stability behavior. In other words, the feature selection method that outperforms others in terms of classification accuracy may also outperform them in terms of stability behavior. This association can, however, depend on the stability metric used, the learning algorithm and the characteristics of the dataset. Due to the extent to which the feature selection results in terms of learning accuracy and stability can depend on the data composition and learning algorithm, it is essential that both BAM and WAM methods be implemented in order to achieve a better understanding of and more useful insights into the underlying application domain.
The observed association between learning accuracy and stability indeed merits further investigation in the future. Additionally, in future work, we will extend the application of the framework to other types of feature selection methods, such as wrappers. Particular interest will be given to applying a mix of filter and wrapper feature selection methods and to establishing a wider scope of comparisons to better evaluate the ensemble framework. Furthermore, datasets with diverse characteristics and various learning algorithms will be considered. Another line of future research is the application and analysis of additional aggregation methods, such as the median or the geometric mean, and their comparison against the mean aggregation used in this paper.

Author Contributions

R.S.: Formal analysis, Investigation, Methodology, Data curation, Software, Validation, writing-original draft, writing—review & editing. A.A.: Conceptualization, Formal analysis, Investigation, Methodology, Supervision, writing-original draft, writing—review & editing. H.S.: Conceptualization, Investigation, Methodology, Supervision, writing-original draft, writing—review & editing. S.F.: Investigation, writing—original draft. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by American University of Sharjah: AUS Open Access Fund.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

The authors are grateful for the comments and suggestions by the referees and the handling Editor. Their comments and suggestions have greatly improved the paper. The authors also gratefully acknowledge that the work in this paper was supported, in part, by the Open Access Program from the American University of Sharjah.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. Jasmine dataset classification results. Running time = 1838 s (Within Aggregation Method (WAM)); 1840 s (Between Aggregation Method (BAM)).
Figure A2. Spectrometer dataset classification results. Running time = 884 s (WAM); 889 s (BAM).
Figure A3. Image dataset classification results. Running time = 1816 s (WAM); 1843 s (BAM).
Figure A4. Fri classification results. Running time = 996 s (WAM); 1007 s (BAM).
Figure A5. Scene dataset classification results. Running time = 4277 s (WAM); 4332 s (BAM).
Figure A6. Musk dataset classification results. Running time = 4307 s (WAM); 4379 s (BAM).
Figure A7. Philippine dataset classification results. Running time = 7595 s (WAM); 7601 s (BAM).
Figure A8. Ionosphere dataset classification results. Running time = 377 s (WAM); 377 s (BAM).
Figure A9. Optdigits dataset classification results. Running time = 1112 s (WAM); 1117 s (BAM).
Figure A10. Satellite dataset classification results. Running time = 786 s (WAM); 799 s (BAM).
Figure A11. Ada dataset classification results. Running time = 698 s (WAM); 698 s (BAM).
Figure A12. Splice dataset classification results. Running time = 1009 s (WAM); 1019 s (BAM).
Figure A13. HIVA dataset classification results. Running time = 7622 s (WAM); 7624 s (BAM).

References

  1. Sulieman, H.; Alzaatreh, A. A Supervised Feature Selection Approach Based on Global Sensitivity. Arch. Data Sci. Ser. A (Online First) 2018, 5, 3. [Google Scholar]
  2. Bertolazzi, P.; Felici, G.; Festa, P.; Fiscon, G.; Weitschek, E. Integer programming models for feature selection: New extensions and a randomized solution algorithm. Eur. J. Oper. Res. 2016, 250, 389–399. [Google Scholar] [CrossRef]
  3. González-Navarro, F. Review and evaluation of feature selection algorithms in synthetic problems. arXiv 2011, arXiv:1101.2320. [Google Scholar]
  4. Liu, Y.; Schumann, M. Data mining feature selection for credit scoring models. J. Oper. Res. Soc. 2005, 56, 1099–1108. [Google Scholar] [CrossRef]
  5. Lemke, C.; Budka, M.; Gabrys, B. Metalearning: A survey of trends and technologies. Artif. Intell. Rev. 2015, 44, 117–130. [Google Scholar] [CrossRef] [Green Version]
  6. Parmezan, A.R.S.; Lee, H.D.; Wu, F.C. Metalearning for choosing feature selection algorithms in data mining: Proposal of a new framework. Expert Syst. Appl. 2017, 75, 1–24. [Google Scholar] [CrossRef]
  7. Dietterich, T.G. Ensemble methods in machine learning. In International Workshop on Multiple Classifier Systems; Springer: Berlin/Heidelberg, Germany, 2000; pp. 1–15. [Google Scholar]
  8. Khaire, U.M.; Dhanalakshmi, R. Stability of feature selection algorithm: A review. J. King Saud Univ. Comput. Inf. Sci. 2019. [Google Scholar] [CrossRef]
  9. Chatterjee, S. The scale enhanced wild bootstrap method for evaluating climate models using wavelets. Stat. Probab. Lett. 2019, 144, 69–73. [Google Scholar] [CrossRef]
  10. Abeel, T.; Helleputte, T.; Van de Peer, Y.; Dupont, P.; Saeys, Y. Robust biomarker identification for cancer diagnosis with ensemble feature selection methods. Bioinformatics 2010, 26, 392–398. [Google Scholar] [CrossRef]
  11. Zhou, Q.; Ding, J.; Ning, Y.; Luo, L.; Li, T. Stable feature selection with ensembles of multi-relieff. In Proceedings of the 2014 10th International Conference on Natural Computation (ICNC), Xiamen, China, 19–21 August 2014; pp. 742–747. [Google Scholar]
  12. Diren, D.D.; Boran, S.; Selvi, I.H.; Hatipoglu, T. Root cause detection with an ensemble machine learning approach in the multivariate manufacturing process. In Industrial Engineering in the Big Data Era; Springer: New York, NY, USA, 2019; pp. 163–174. [Google Scholar]
  13. Shen, Q.; Diao, R.; Su, P. Feature Selection Ensemble. Turing-100 2012, 10, 289–306. [Google Scholar]
  14. Wald, R.; Khoshgoftaar, T.M.; Dittman, D. Mean aggregation versus robust rank aggregation for ensemble gene selection. In Proceedings of the 2012 11th International Conference on Machine Learning and Applications, Boca Raton, FL, USA, 12–15 December 2012; Volume 1, pp. 63–69. [Google Scholar]
  15. Kolde, R.; Laur, S.; Adler, P.; Vilo, J. Robust rank aggregation for gene list integration and meta-analysis. Bioinformatics 2012, 28, 573–580. [Google Scholar] [CrossRef] [Green Version]
  16. Ditzler, G.; Polikar, R.; Rosen, G. A bootstrap based neyman-pearson test for identifying variable importance. IEEE Trans. Neural Netw. Learn. Syst. 2014, 26, 880–886. [Google Scholar] [CrossRef] [PubMed]
  17. Goh, W.W.B.; Wong, L. Evaluating feature-selection stability in next-generation proteomics. J. Bioinform. Comput. Biol. 2016, 14, 1650029. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Kalousis, A.; Prados, J.; Hilario, M. Stability of feature selection algorithms: A study on high-dimensional spaces. Knowl. Inf. Syst. 2007, 12, 95–116. [Google Scholar] [CrossRef] [Green Version]
  19. Jurman, G.; Riccadonna, S.; Visintainer, R.; Furlanello, C. Canberra distance on ranked lists. In Proceedings of the Advances in Ranking NIPS 09 Workshop, Citeseer, Whistler, BC, Canada, 11 December 2009; pp. 22–27. [Google Scholar]
  20. Shen, Z.; Chen, X.; Garibaldi, J.M. A Novel Weighted Combination Method for Feature Selection using Fuzzy Sets. In Proceedings of the 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), New Orleans, LA, USA, 23–26 June 2019; pp. 1–6. [Google Scholar]
  21. Seijo-Pardo, B.; Bolón-Canedo, V.; Alonso-Betanzos, A. On developing an automatic threshold applied to feature selection ensembles. Inf. Fusion 2019, 45, 227–245. [Google Scholar] [CrossRef]
  22. Seijo-Pardo, B.; Bolón-Canedo, V.; Alonso-Betanzos, A. Testing different ensemble configurations for feature selection. Neural Process. Lett. 2017, 46, 857–880. [Google Scholar] [CrossRef]
  23. Khoshgoftaar, T.M.; Golawala, M.; Van Hulse, J. An empirical study of learning from imbalanced data using random forest. In Proceedings of the 19th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2007), Patras, Greece, 29–31 October 2007; Volume 2, pp. 310–317. [Google Scholar]
  24. Bolón-Canedo, V.; Sánchez-Maroño, N.; Alonso-Betanzos, A. A review of feature selection methods on synthetic data. Knowl. Inf. Syst. 2013, 34, 483–519. [Google Scholar] [CrossRef]
  25. Hua, J.; Xiong, Z.; Lowey, J.; Suh, E.; Dougherty, E.R. Optimal number of features as a function of sample size for various classification rules. Bioinformatics 2005, 21, 1509–1515. [Google Scholar] [CrossRef] [Green Version]
  26. Sánchez-Marono, N.; Alonso-Betanzos, A.; Tombilla-Sanromán, M. Filter methods for feature selection–a comparative study. In Proceedings of the International Conference on Intelligent Data Engineering and Automated Learning, Birmingham, UK, 16–19 December 2007; pp. 178–187. [Google Scholar]
  27. Wang, J.; Xu, J.; Zhao, C.; Peng, Y.; Wang, H. An ensemble feature selection method for high-dimensional data based on sort aggregation. Syst. Sci. Control Eng. 2019, 7, 32–39. [Google Scholar] [CrossRef]
  28. John, G.H.; Langley, P. Estimating continuous distributions in Bayesian classifiers. arXiv 2013, arXiv:1302.4964. [Google Scholar]
  29. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  30. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 27:1–27:27. [Google Scholar] [CrossRef]
  31. Sing, T.; Sander, O.; Beerenwinkel, N.; Lengauer, T. ROCR: Visualizing classifier performance in R. Bioinformatics 2005, 21, 3940–3941. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Framework for the Within Aggregation Method (WAM).
Figure 2. Framework for the Between Aggregation Method (BAM).
Table 1. Description of datasets.
Dataset Name and Source | No. Observations | No. Features | No. Classes | Dimensionality *
Jasmine 1 | 2984 (1492/1492) | 145 | 2 | 0.048592
Spectrometer 2 | 531 (476/55) | 103 | 2 | 0.193974
Image 2 | 2000 (1420/580) | 140 | 2 | 0.07
Fri 2 | 1000 (564/436) | 101 | 2 | 0.101
Scene 3 | 2407 (1976/431) | 295 | 2 | 0.122559
Musk 4 | 6598 (5581/1017) | 170 | 2 | 0.025765
Philippine 1 | 5832 (2916/2916) | 309 | 2 | 0.052984
Ionosphere 4 | 351 (126/225) | 34 | 2 | 0.096866
Optdigits 2 | 5620 (572/5048) | 64 | 2 | 0.011388
Satellite 2 | 5100 (75/5025) | 37 | 2 | 0.007255
Ada 1 | 4147 (1029/3118) | 49 | 2 | 0.011816
Splice 2 | 3190 (1535/1655) | 62 | 2 | 0.019436
HIVA 2 | 4229 (149/4080) | 1617 | 2 | 0.382359
* Dimensionality is the ratio of features to number of observations. Superscripts indicate the data sources as follows: 1 automl.chalearn.org, 2 www.openml.org, 3 mulan.sourceforge.net, 4 archive.ics.uci.edu.
Table 2. Stability analysis results across all datasets. MRMR: Minimum Redundancy Maximum Relevance.
Dataset | Stability Measure | Information Gain | Symmetrical Uncertainty | MRMR | Chi-Squared | BAM
Jasmine | Average Pearson Correlation | 0.299705 | 0.258270 | 0.902748 | 0.360289 | 0.335806
 | Average Spearman Rank Correlation | 0.319009 | 0.378017 | 0.730414 | 0.231101 | 0.334655
 | Average Jaccard's Index | 0.254683 | 0.319411 | 0.308712 | 0.286586 | 0.252964
 | Average Canberra Distance | 0.278113 | 0.269500 | 0.124180 | 0.337122 | 0.294955
 | Average Standard Deviation | 0.744817 | 0.753504 | 0.149429 | 0.747054 | 0.765530
Spectrometer | Average Pearson Correlation | 0.898348 | 0.942554 | 0.698045 | 0.916711 | 0.927582
 | Average Spearman Rank Correlation | 0.831602 | 0.837903 | 0.765404 | 0.818745 | 0.833578
 | Average Jaccard's Index | 0.759908 | 0.917055 | 0.436905 | 0.851790 | 0.872271
 | Average Canberra Distance | 0.153674 | 0.152607 | 0.171177 | 0.186988 | 0.182109
 | Average Standard Deviation | 0.308687 | 0.223475 | 0.456242 | 0.281311 | 0.258388
Image | Average Pearson Correlation | 0.768404 | 0.760257 | 0.461867 | 0.784970 | 0.793634
 | Average Spearman Rank Correlation | 0.702970 | 0.690212 | 0.541442 | 0.716971 | 0.671209
 | Average Jaccard's Index | 0.534739 | 0.514374 | 0.275411 | 0.557456 | 0.560470
 | Average Canberra Distance | 0.140367 | 0.142500 | 0.236440 | 0.242640 | 0.254949
 | Average Standard Deviation | 0.437907 | 0.462153 | 0.643365 | 0.459268 | 0.434591
Fri | Average Pearson Correlation | 0.969642 | 0.912108 | 0.955966 | 0.818795 | 0.948385
 | Average Spearman Rank Correlation | 0.458619 | 0.459933 | 0.301028 | 0.327351 | 0.308725
 | Average Jaccard's Index | 0.600791 | 0.600791 | 0.263723 | 0.288660 | 0.280655
 | Average Canberra Distance | 0.057812 | 0.057752 | 0.329030 | 0.328557 | 0.330035
 | Average Standard Deviation | 0.146339 | 0.268018 | 0.201306 | 0.422568 | 0.218823
Scene | Average Pearson Correlation | 0.898425 | 0.887953 | 0.652895 | 0.933014 | 0.908673
 | Average Spearman Rank Correlation | 0.871633 | 0.863620 | 0.705409 | 0.904599 | 0.881263
 | Average Jaccard's Index | 0.725580 | 0.718157 | 0.429004 | 0.834032 | 0.761022
 | Average Canberra Distance | 0.150622 | 0.156575 | 0.206240 | 0.169175 | 0.182344
 | Average Standard Deviation | 0.309285 | 0.328335 | 0.501868 | 0.253048 | 0.296812
Musk | Average Pearson Correlation | 0.953028 | 0.939819 | 0.983622 | 0.972910 | 0.971086
 | Average Spearman Rank Correlation | 0.897172 | 0.920754 | 0.978164 | 0.958189 | 0.932817
 | Average Jaccard's Index | 0.254683 | 0.319411 | 0.308712 | 0.286586 | 0.252964
 | Average Canberra Distance | 0.278113 | 0.269500 | 0.124180 | 0.337122 | 0.294955
 | Average Standard Deviation | 0.198588 | 0.230549 | 0.106096 | 0.153881 | 0.164122
Philippine | Average Pearson Correlation | 0.992381 | 0.987185 | 0.949337 | 0.974312 | 0.990140
 | Average Spearman Rank Correlation | 0.948322 | 0.945942 | 0.876291 | 0.794429 | 0.826292
 | Average Jaccard's Index | 0.907578 | 0.895855 | 0.599476 | 0.882073 | 0.898057
 | Average Canberra Distance | 0.036655 | 0.037883 | 0.123565 | 0.216403 | 0.199559
 | Average Standard Deviation | 0.065557 | 0.093865 | 0.194033 | 0.133756 | 0.090117
Ionosphere | Average Pearson Correlation | 0.398351 | 0.583203 | 0.803445 | 0.689003 | 0.678480
 | Average Spearman Rank Correlation | 0.391580 | 0.583566 | 0.779300 | 0.634363 | 0.621247
 | Average Jaccard's Index | 0.322984 | 0.418490 | 0.588096 | 0.511871 | 0.514258
 | Average Canberra Distance | 0.284348 | 0.254220 | 0.185660 | 0.245600 | 0.249635
 | Average Standard Deviation | 0.731482 | 0.606503 | 0.397275 | 0.546923 | 0.549206
Optdigits | Average Pearson Correlation | 0.974733 | 0.956047 | 0.946192 | 0.978112 | 0.976264
 | Average Spearman Rank Correlation | 0.965357 | 0.959320 | 0.913443 | 0.968890 | 0.967125
 | Average Jaccard's Index | 0.776935 | 0.687190 | 0.621800 | 0.699440 | 0.740367
 | Average Canberra Distance | 0.087271 | 0.094572 | 0.112731 | 0.077847 | 0.090535
 | Average Standard Deviation | 0.150498 | 0.196308 | 0.188786 | 0.141531 | 0.146916
Satellite | Average Pearson Correlation | 0.962102 | 0.735536 | 0.735555 | 0.962324 | 0.932846
 | Average Spearman Rank Correlation | 0.913703 | 0.737037 | 0.886279 | 0.941141 | 0.912465
 | Average Jaccard's Index | 0.889171 | 0.523733 | 0.579217 | 0.711644 | 0.540189
 | Average Canberra Distance | 0.128159 | 0.206656 | 0.093391 | 0.117366 | 0.126052
 | Average Standard Deviation | 0.189240 | 0.429693 | 0.289669 | 0.186215 | 0.235742
Ada | Average Pearson Correlation | 0.998732 | 0.998655 | 0.997906 | 0.992348 | 0.998004
 | Average Spearman Rank Correlation | 0.956392 | 0.952797 | 0.823995 | 0.955461 | 0.952162
 | Average Jaccard's Index | 0.919947 | 0.866222 | 0.607214 | 0.830739 | 0.863409
 | Average Canberra Distance | 0.106299 | 0.108989 | 0.155215 | 0.122942 | 0.125797
 | Average Standard Deviation | 0.028835 | 0.031522 | 0.029137 | 0.083938 | 0.042076
Splice | Average Pearson Correlation | 0.992299 | 0.993156 | 0.990926 | 0.974606 | 0.989386
 | Average Spearman Rank Correlation | 0.841882 | 0.842385 | 0.738889 | 0.843115 | 0.836391
 | Average Jaccard's Index | 0.761442 | 0.762814 | 0.597770 | 0.760529 | 0.742453
 | Average Canberra Distance | 0.187747 | 0.187556 | 0.224846 | 0.187334 | 0.190992
 | Average Standard Deviation | 0.070557 | 0.065447 | 0.081051 | 0.157357 | 0.096031
HIVA | Average Pearson Correlation | 0.738293 | 0.764771 | 0.866538 | 0.746545 | 0.746723
 | Average Spearman Rank Correlation | 0.603392 | 0.621829 | 0.804467 | 0.648684 | 0.639793
 | Average Jaccard's Index | 0.654280 | 0.583914 | 0.752230 | 0.618569 | 0.623986
 | Average Canberra Distance | 0.277170 | 0.2369058 | 0.147782 | 0.260381 | 0.252760
 | Average Standard Deviation | 0.457420 | 0.395126 | 0.318563 | 0.466903 | 0.456952
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
