Article

Optimal Feature Aggregation and Combination for Two-Dimensional Ensemble Feature Selection

by Machmud Roby Alhamidi * and Wisnu Jatmiko
Faculty of Computer Science, Universitas Indonesia, Jawa Barat 16424, Indonesia
*
Author to whom correspondence should be addressed.
Information 2020, 11(1), 38; https://doi.org/10.3390/info11010038
Submission received: 11 November 2019 / Revised: 9 January 2020 / Accepted: 9 January 2020 / Published: 10 January 2020

Abstract

Feature selection is a way of reducing the features of data such that, when the classification algorithm runs, it produces better accuracy. In general, conventional feature selection is quite unstable when faced with changing data characteristics, and it would be inefficient to implement individual feature selection in some cases. Ensemble feature selection exists to overcome this problem. However, despite these advantages, some issues such as stability, thresholding, and feature aggregation still need to be addressed. We propose a new framework to deal with stability and feature aggregation, and we also applied an automatic threshold to see whether it was efficient. The results show that the proposed method always produced the best performance in terms of both accuracy and feature reduction: it improved accuracy over the other methods by 0.5–14% and reduced about 50% more features. The stability of the proposed method was also excellent, with an average of 0.9. However, applying the automatic threshold brought no beneficial improvement over not using one. Overall, the proposed method showed excellent performance compared to the previous work and standard ReliefF.

1. Introduction

Feature selection is a way of reducing the dimensions/features of data such that, when the classification algorithm runs, it produces better accuracy. The common approach is to study the domain of the data and to form a set of more relevant features. However, as the amount of data increases, it becomes exhausting to sort relevant features manually. Feature selection has several benefits, i.e., facilitating data visualization and data understanding, reducing computing time and data storage requirements, reducing the overfitting caused by the curse of dimensionality, and improving performance [1].
There are many ways of building feature selection algorithms, but most feature selection algorithms fall into three types. Filter types use a feature ranking to determine the relevance of each feature [2,3,4,5,6,7,8]. The feature ranking is obtained by calculating the correlation between each feature and its predictor class; consequently, this type has minimal computational time. The second type is the wrapper. In this type, a classification algorithm is used to determine the most relevant features, which are obtained by looking at the results of the classification algorithm [9,10,11,12]. In line with the wrapper type, the embedded type also uses a classification algorithm to determine the relevant features. The difference is that the feature selection algorithm is embedded in the classification algorithm, such as a decision tree, random forest, or neural network [13,14,15].
There has been much research on feature selection in recent years, focused on how to optimize the feature selection algorithm. Some studies added an optimization algorithm [16,17,18,19,20], e.g., a genetic algorithm or particle swarm optimization, some used a fuzzy approach [21,22,23,24,25], and, most recently, others used an ensemble approach [17,21,26,27,28,29,30,31,32,33]. In general, a conventional feature selection is quite unstable when faced with changing data characteristics, and each algorithm is mostly applied to different cases. Therefore, it would be inefficient to implement a conventional feature selection in some cases, especially when it concerns big data. Ensemble feature selection exists to overcome this problem with a reasonably simple approach, much like ensemble classification; according to Seijo-Pardo et al. [27], it comes in two categories, namely, homogeneous and heterogeneous.
Ensemble feature selection can reduce computing time and improve accuracy. The concept of ensemble feature selection is to divide the feature search space of the data into several subsets so as to reduce the complexity of the algorithm. In the end, the subsets are combined to obtain the full result. However, alongside these advantages, some problems need to be overcome. Bolón-Canedo and Alonso-Betanzos [28] mentioned that some of the problems with ensemble feature selection are as follows:
  • Optimal number of ensembles: because the basis of an ensemble is a partition, it is necessary to know the optimal number of partitions. Our research [32] on ensemble feature selection showed that five partitions are better than three and seven.
  • Stability of feature selection: this relates to how well the ensemble feature selection produces the same selected features each time.
  • Scalability: a conventional feature selection is less efficient in handling big data problems. Logically, ensemble feature selection can handle this problem because of the partition.
  • Threshold for rankers: the problem of each feature selection algorithm that uses a filter approach is determining the threshold for the ranker. This threshold determines the number of reduced features.
  • Feature aggregation: this problem is related to how to combine features from each subset in the ensemble to produce the most relevant features.
  • Explainability: the main problem faced by any algorithm, not only feature selection, is the clarity of the results obtained. Researchers usually use one of two approaches, i.e., mathematical proof or empirical proof.
Our previous research [32], which focused on how to improve accuracy and computational time, still had a few limitations. The first involved how to calculate the stability of the ensemble feature. The second involved the determination of the threshold for the ranker. The third involved how to aggregate the subsets of features to produce the best result. The focus of this research is creating a new framework that can overcome the problems of stability, threshold, and aggregation of features.
The organization of this paper is as follows: Section 2 describes the dataset, evaluation measurement, and the proposed technique. Section 3 displays the results obtained from several experiments and contains a discussion of the results obtained. Finally, Section 4 concludes the paper.

2. Materials and Methods

2.1. Resources

In this research, the experiments were carried out on a Hewlett-Packard laptop with an Intel(R) Core(TM) i5-7200U central processing unit (CPU) @ 2.50 GHz (2712 MHz), with two cores and four logical processors, and 8 GB of random-access memory (RAM). This research used MATLAB with several libraries included.

2.2. Dataset

The dataset used in this research was taken from four sources: the UCI Machine Learning Repository, the Arizona State University feature selection datasets, the NIPS 2003 challenge datasets, and Vanderbilt University’s gene expression datasets. There were 14 different datasets with multivariate characteristics and no missing data. These datasets were chosen based on differences in the number of samples, features, and classes, as well as because the datasets come from different fields of knowledge. There are three categories or fields of knowledge, namely, artificial data, image data, and medical record data. The aim was to see whether the proposed method could cope with variations in these characteristics. Table 1 shows the characteristics of the datasets and their sources.

2.2.1. Artificial Data

MADELON is an artificial dataset consisting of 32 clusters placed on a five-dimensional hypercube (the n-dimensional analog of a square or cube) and labeled +1 and −1 at random. The five dimensions represent five informative features. From these five features, 15 additional combinations are constructed to produce a total of 20 informative and redundant features. The order of features and patterns in this dataset is randomized. MADELON is also one of the five datasets of the NIPS 2003 challenge.

2.2.2. Image Data

In this research, the proposed method was tested on five image datasets with different criteria, one of which was the number of classes. The first dataset was the Columbia Object Image Library (COIL20). COIL20 is an image dataset consisting of 20 objects. Each object has 72 images, taken five degrees apart as the object rotated on a turntable. The size of each image is 32 × 32 pixels, represented by a 1024-dimensional vector.
The second dataset was GISETTE. GISETTE is a handwritten digit recognition dataset in which the problem is to differentiate between the digits four and nine. The data were preprocessed (size-normalized and centered) to a fixed size of 28 × 28. The order of features and patterns in this dataset is randomized, and information about the features is not provided in order to avoid bias in the feature selection process. GISETTE is one of the five datasets of the NIPS 2003 challenge.
The third dataset was USPS. USPS is also a handwritten digit dataset; it is similar to GISETTE, but USPS covers all digits from 0 to 9. The digits are converted to 16 × 16 images. Figure 1 shows sample images from the USPS dataset.
The fourth dataset was YALE. YALE is a face image dataset from 15 individuals. Each individual has 11 image variations, which are center-light, with glasses, happy, left-light, without glasses, normal, right-light, sad, sleepy, surprised, and winking. The total dataset includes 165 grayscale images in GIF format. Figure 2 shows sample images from the YALE dataset.
Similar to YALE, ORL is also a face image dataset. ORL contains 10 different images each of 40 distinct subjects. The images were taken several times, varying the illumination, facial looks (open/closed eyes), facial emotions (smiling/not smiling), and facial appearances (glasses/no glasses). The images were taken against a dark background with the subjects facing the camera (with tolerance for some side movement). Figure 3 shows sample images from the ORL dataset.

2.2.3. Medical Record Data

The proposed method was also tested using medical record datasets. Six datasets were tested, five of which were gene expression datasets. The first one was a cardiotocography (CTG) dataset. CTG includes medical record data for fetal heart rate and uterine contractions: it measures the fetal heart rate and, at the same time, monitors contractions in the uterus. CTG is different from an electrocardiogram (ECG). An ECG detects the heart rate by measuring the electrical activity produced by the heart during contractions, whereas CTG uses ultrasound (Doppler) waves to measure fetal movements. It works by sending ultrasound waves into the mother’s body; when they hit the fetus, the waves bounce back with varying strength, and the bounced waves are measured as the fetal heart rate. Contractions are measured using the tocodynamometer found on the CTG device, which measures the tension in the mother’s abdominal wall.
The 11-TUMORS dataset was from the Gene Expression Model Selector. The 11-TUMORS consists of 11 types of tumors in humans placed in a microarray. The 11 classes in this dataset included prostate, bladder/ureter, breast, colorectal, gastroesophageal, kidney, liver, ovary, and pancreatic cancer, as well as lung adenocarcinoma and lung squamous cell carcinoma.
LUNG CANCER was a dataset from the Gene Expression Model Selector. This dataset consisted of four types of lung cancer and normal samples. The total data is 203 specimens with 186 lung tumors and 17 healthy lung specimens. Of these, 125 adenocarcinoma samples were associated with clinical data and with histological slides from adjacent parts.
The other gene expression datasets were TOX_171, PROSTATE_GE, GLI_85, LYMPHOMA, and SMK_CAN_187. The TOX_171 dataset concerns the effect of influenza on plasmacytoid dendritic cells. PROSTATE_GE is a prostate cancer dataset. GLI_85 stands for glioma, a malignant tumor of the glial tissue of the nervous system. LYMPHOMA concerns cancer of the lymph nodes. SMK_CAN_187 concerns cancer caused by smoking.

2.3. Methods

Firstly, the training data were partitioned into several subsets. Then, feature selection was performed on each subset of the data. The results of feature selection and feature ranking were then aggregated to obtain several new subsets of selected features. These subsets of selected features were then combined to obtain the most optimal feature subset. Guyon and Elisseeff [1] showed that selecting a subset of features together is more useful for excluding redundant features than simply selecting the individually most relevant features. Figure 4 shows a detailed illustration of the proposed framework.

2.3.1. Data Partitioning

Data normalization was carried out before partitioning the data. The purpose of data normalization is to make the distribution of data values uniform. Equation (1) shows the simplest way of achieving data normalization:
\mathrm{norm\_data} = \frac{\mathrm{data} - \min(\mathrm{data})}{\max(\mathrm{data}) - \min(\mathrm{data})}. \quad (1)
The normalized data were then divided into training data and testing data with a ratio of 7:3. The training data, with dimension N_samples × M_features, were then divided into several subsets. Equation (2) shows how the data partition was achieved:
\mathrm{tr.data}_{partition} = \frac{1}{p}(N) \times \frac{1}{q}(M), \quad (2)
where p | N and q | M; both p and q are non-zero positive integers taken from {1, 2, …, N} and {1, 2, …, M}, respectively. If p = q, then the equation becomes
\mathrm{tr.data}_{partition} = \frac{1}{p}(N \times M) = \frac{1}{q}(N \times M). \quad (3)
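As a concrete illustration, the Python sketch below performs the min–max normalization of Equation (1) and the two-dimensional partitioning of Equation (2). The original experiments were run in MATLAB, so this is only an assumed re-implementation; the function names, the random data, and the p = q = 4 setting are illustrative, not the authors' code.

import numpy as np

def min_max_normalize(data):
    # Equation (1): scale all values into [0, 1] (per-feature scaling is a common variant).
    d_min, d_max = data.min(), data.max()
    return (data - d_min) / (d_max - d_min)

def partition_2d(tr_data, p, q):
    # Equation (2): split an N x M training matrix into p x q blocks.
    # Block D[i][j] holds roughly N/p samples and M/q features, so every
    # block in column j covers the same feature indices.
    row_blocks = np.array_split(tr_data, p, axis=0)               # split the samples
    return [np.array_split(rb, q, axis=1) for rb in row_blocks]   # split the features

# Illustrative usage with random data: 700 samples x 500 features, p = q = 4.
rng = np.random.default_rng(0)
X = min_max_normalize(rng.random((700, 500)))
blocks = partition_2d(X, p=4, q=4)
print(len(blocks), len(blocks[0]), blocks[0][0].shape)            # 4 4 (175, 125)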

2.3.2. Feature Ranker

ReliefF [5] is a filter method derived from Relief [3]. ReliefF is an improvement of Relief that can deal with noisy, multiclass datasets with low bias. The algorithm works by estimating feature weights according to how well the features distinguish between neighboring samples. ReliefF is a ranker method; thus, a threshold is needed to obtain the subset of features. The following equation shows how the weight is calculated in Relief:
W_i = W_i - \mathrm{diff}(x, \mathrm{nearHit}) + \mathrm{diff}(x, \mathrm{nearMiss}), \quad (4)
where W is the weight, x is the feature vector, nearHit is the feature vector closest to x within the same class, and nearMiss is the feature vector closest to x from a different class. The weight W decreases if the difference between feature vectors of the same class is higher than the difference between feature vectors of different classes, and vice versa.
The calculation of diff(x, nearHit) and diff(x, nearMiss) in ReliefF is different from that in standard Relief: whereas standard Relief uses the Euclidean distance, ReliefF uses the Manhattan distance. Equation (5) shows the Manhattan-distance formulation used by ReliefF:
\mathrm{diff}(x, \mathrm{nearHit}|\mathrm{nearMiss}) = \sum_{i=1}^{n} |x_i - \mathrm{nearHit}_i| + |x_i - \mathrm{nearMiss}_i|. \quad (5)
After the weight W is obtained, the next step is to sort W by the most significant value to obtain the feature ranking using the following equation:
\mathrm{ReliefF}_{ranking}(p, q) = \sum_{i=1}^{p} \sum_{j=1}^{q} \mathrm{sort}(w_{i,j}, \mathrm{ascending}). \quad (6)
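To make the weight update of Equations (4) and (5) concrete, the sketch below implements a simplified Relief-style weighting with a single nearest hit and nearest miss per sample under the Manhattan distance. It is an assumed simplification for illustration only, not the full ReliefF of Kononenko et al. [5], which averages over k neighbors per class and handles missing values.

import numpy as np

def relief_weights(X, y):
    # Simplified Relief weighting (Equations (4) and (5)).
    # For every sample, find its nearest hit (same class) and nearest miss
    # (different class) under the Manhattan distance, then penalize features
    # that differ within the class and reward features that differ across classes.
    n_samples, n_features = X.shape
    W = np.zeros(n_features)
    for i in range(n_samples):
        dist = np.abs(X - X[i]).sum(axis=1)        # Manhattan distance to every sample
        dist[i] = np.inf                           # ignore the sample itself
        same = np.where(y == y[i])[0]
        other = np.where(y != y[i])[0]
        near_hit = same[np.argmin(dist[same])]
        near_miss = other[np.argmin(dist[other])]
        W += -np.abs(X[i] - X[near_hit]) + np.abs(X[i] - X[near_miss])
    return W

def rank_features(W):
    # Equation (6): order feature indices by weight, most significant first.
    return np.argsort(W)[::-1]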

2.3.3. Ranked Feature Aggregator

After ranking features in all subsets, the next step is to aggregate each of these features according to the index. Let us assume that the number of partitions in a row and column is the same (p = q). If the number of partitions is four, then there are 16 subsets formed {D11, D12, D13, D14, D21, D22, D23, D24, …, D44}. Aggregation is performed for each subset of the same column {(D11, D21, D31, D41), (D12, D22, D32, D42), …}; this is because the same column has the same feature index and, thus, they can be compared. Figure 5 shows an illustration of feature aggregation.
As illustrated in Figure 5, a group of new features, “New.Feat.Idx_j”, is obtained by finding the mode value of the features in each subset D in the i-th row and j-th column. Equation (7) shows how the feature aggregation works:
\mathrm{New.Feat.Idx}_j = \sum_{j=1}^{q} \sum_{i=1}^{p} \mathrm{mode}(D_{ij}). \quad (7)
The threshold k was then applied to these groups. This threshold is a percentage value specifying how many features are removed. There is a difference between the use of thresholds in ensemble and non-ensemble feature selection: in non-ensemble feature selection, the threshold is applied to all features, whereas in ensemble feature selection, the threshold is applied to each subset of features.
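The sketch below illustrates one possible reading of this column-wise, mode-based aggregation followed by the per-subset threshold k; the function name, the SciPy mode call, and the k = 0.5 example are assumptions for illustration rather than the authors' implementation.

import numpy as np
from scipy import stats

def aggregate_column(rankings, k=0.5):
    # Equation (7), one interpretation: `rankings` is a (p, m) array in which
    # row i lists the feature indices of subset D_ij ordered by relevance.
    # At every rank position take the most frequent (mode) index across the
    # p subsets, drop duplicates, and keep the top k percent of the column.
    p, m = rankings.shape
    modes = stats.mode(rankings, axis=0, keepdims=False).mode   # mode per rank position
    new_feat_idx = list(dict.fromkeys(modes.tolist()))          # deduplicate, keep order
    n_keep = max(1, int(np.ceil(k * m)))                        # per-subset threshold k
    return new_feat_idx[:n_keep]

# Illustrative usage: p = 4 rankings over the same six feature indices of one column.
rankings = np.array([[2, 0, 5, 1, 3, 4],
                     [2, 5, 0, 1, 4, 3],
                     [0, 2, 5, 3, 1, 4],
                     [2, 0, 5, 4, 1, 3]])
print(aggregate_column(rankings, k=0.5))                        # [2, 0, 5]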

2.3.4. Feature Combinator

In our previous research [32], the combination was done by combining all features in each subset. Apparently, combining all subsets of features does not produce the best performance. Thus, to solve this problem, we searched for the minimum loss over all possible combinations of the subsets of features. Figure 6 shows all possible feature combinations if n = 4. Equations (8) and (9) give the number of possible combinations for n subsets of features and the selection of the best subset:
\mathrm{All.Comb} = \sum_{k=1}^{n} \frac{n!}{k!(n-k)!}, \quad (8)
\mathrm{Best.FeatSubs} = \mathrm{min.Loss}(\mathrm{All.Comb}), \quad (9)
where n has the same value as p and q. If n = 4, the total number of possible combinations is 2^4 − 1 = 15.
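A minimal sketch of this exhaustive search over Equations (8) and (9): it enumerates every non-empty combination of the n aggregated subsets and keeps the union of feature indices with the smallest loss. The loss function is left as a parameter because this section does not fix a particular classifier; the dummy loss in the usage example is purely illustrative.

import numpy as np
from itertools import combinations

def best_feature_subset(subsets, loss_fn):
    # Equations (8) and (9): try every non-empty combination of the n aggregated
    # feature subsets and return the union of indices with the smallest loss.
    #   subsets : list of n lists of feature indices (one per ensemble column)
    #   loss_fn : callable mapping a list of feature indices to a loss value
    n = len(subsets)
    best_loss, best_feats, best_combo = np.inf, None, None
    for k in range(1, n + 1):                       # sum_{k=1..n} C(n, k) candidates
        for combo in combinations(range(n), k):
            feats = sorted(set().union(*(subsets[c] for c in combo)))
            loss = loss_fn(feats)
            if loss < best_loss:
                best_loss, best_feats, best_combo = loss, feats, combo
    return best_feats, best_combo, best_loss

# Illustrative usage with n = 4 subsets (2**4 - 1 = 15 candidates) and a dummy loss.
subsets = [[0, 3], [1, 4], [2], [5, 6]]
feats, combo, loss = best_feature_subset(subsets, loss_fn=lambda f: abs(len(f) - 3))
print(feats, combo, loss)                           # [0, 2, 3] (0, 2) 0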

2.4. Evaluations

There are several ways to evaluate the performance of ensemble feature selection. The first involves the overall performance of the algorithm. In this evaluation, we can use calculation metrics such as accuracy, precision, recall, specificity, and F1-score.
\mathrm{Accuracy\ (ACC)} = \frac{TP + TN}{TP + TN + FP + FN}, \quad (10)
\mathrm{Precision\ (PRE)} = \frac{TP}{TP + FP}, \quad (11)
\mathrm{Recall\ (REC)} = \frac{TP}{TP + FN}, \quad (12)
\mathrm{Specificity\ (SPE)} = \frac{TN}{TN + FP}, \quad (13)
\mathrm{F1\text{-}score\ (F1)} = 2\left(\frac{\mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}\right), \quad (14)
where TP is true positive, TN is true negative, FN is false negative, and FP is false positive.
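For reference, a minimal sketch of Equations (10)–(14) computed directly from the confusion-matrix counts; undefined ratios are returned as NaN, which is how such cases appear in Tables 2 and 3.

def classification_metrics(tp, tn, fp, fn):
    # Equations (10)-(14): performance metrics from the confusion-matrix counts.
    nan = float('nan')
    acc = (tp + tn) / (tp + tn + fp + fn)
    pre = tp / (tp + fp) if (tp + fp) else nan      # undefined ratio -> NaN
    rec = tp / (tp + fn) if (tp + fn) else nan
    spe = tn / (tn + fp) if (tn + fp) else nan
    f1 = 2 * pre * rec / (pre + rec) if (pre + rec) else nan
    return {'ACC': acc, 'PRE': pre, 'REC': rec, 'SPE': spe, 'F1': f1}

# Example: 90 true positives, 80 true negatives, 10 false positives, 20 false negatives.
print(classification_metrics(90, 80, 10, 20))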
The second evaluation approach involves the stability of the ensemble feature selection itself. There are three categories of stability measurement, namely, stability by index/subset, stability by rank, and stability by weight [38,39]. Stability by rank and by weight has a major drawback: it does not allow stability calculations on two subsets of features that have different numbers of features. On the contrary, stability by index/subset can deal with feature vectors of different sizes. The mechanism represents a subset of features as a binary vector, where selected features are represented as 1 and non-selected features as 0. However, stability by rank and by weight is more representative when measuring the stability of ranking-based feature selection.
We used these three types of stability to see variations in their stability values. Equation (15) shows a measure of stability by index/subset, i.e., Hamming distance.
\mathrm{Hamming}(S_i, S_j) = \sum_{k=1}^{M} |S_{ik} - S_{jk}|, \quad (15)
\mathrm{Normalize\_Hamming}(S_i, S_j) = 1 - \frac{\mathrm{Hamming}(S_i, S_j)}{M}, \quad (16)
where S_i is feature subset i, S_j is feature subset j, and M is the total number of features in the dataset. The drawback of this stability measure is that it does not depend on feature rank.
Equation (17) shows a measure of stability by rank, i.e., Spearman’s correlation.
\mathrm{Spearman}(R_i, R_j) = 1 - \frac{6 \sum d^2}{M(M^2 - 1)}, \quad (17)
where R_i is ranked feature subset i, and R_j is ranked feature subset j. The distance between the same feature in R_i and R_j is denoted by d. The drawback of this stability measure is that it cannot handle feature subsets of different cardinality; the two subsets must be the same size.
Equation (18) shows a measure of stability by weight, i.e., Pearson’s correlation. For Spearman and Pearson correlations, we use the interpolation method to overcome the problem of differences in the number of features.
\mathrm{Pearson}(W_i, W_j) = \frac{\sum_t (W_{it} - \mu_{W_i})(W_{jt} - \mu_{W_j})}{\sqrt{\sum_t (W_{it} - \mu_{W_i})^2 \sum_t (W_{jt} - \mu_{W_j})^2}}, \quad (18)
where W_i is the weight vector of feature subset i, W_j is the weight vector of feature subset j, and μ_{W_i} is the mean of W_i. The drawback of this stability measure is that the two subsets of features must have the same size.
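The sketch below implements the three stability measures of Equations (15)–(18), together with the binary encoding of a selected-feature list used by the Hamming measure (the same 0/1 representation used later in Table 5). The helper names are illustrative, and the interpolation step used for subsets of different sizes is omitted.

import numpy as np

def to_binary(selected, n_features):
    # Encode a list of selected feature indices (1-based, as in Table 5)
    # as a 0/1 vector of length n_features.
    b = np.zeros(n_features, dtype=int)
    b[np.asarray(selected) - 1] = 1
    return b

def hamming_stability(s_i, s_j, n_features):
    # Equations (15)-(16): normalized Hamming similarity of two feature subsets.
    b_i, b_j = to_binary(s_i, n_features), to_binary(s_j, n_features)
    return 1.0 - np.abs(b_i - b_j).sum() / n_features

def spearman_stability(r_i, r_j):
    # Equation (17): Spearman correlation of two equally sized rank vectors,
    # where r_i[t] and r_j[t] are the ranks of the same feature t.
    r_i, r_j = np.asarray(r_i, dtype=float), np.asarray(r_j, dtype=float)
    m = len(r_i)
    d = r_i - r_j
    return 1.0 - 6.0 * np.sum(d ** 2) / (m * (m ** 2 - 1))

def pearson_stability(w_i, w_j):
    # Equation (18): Pearson correlation of two equally sized weight vectors.
    w_i, w_j = np.asarray(w_i, dtype=float), np.asarray(w_j, dtype=float)
    c_i, c_j = w_i - w_i.mean(), w_j - w_j.mean()
    return np.sum(c_i * c_j) / np.sqrt(np.sum(c_i ** 2) * np.sum(c_j ** 2))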

3. Results and Discussion

In this section, we describe some of the results obtained. We evaluated the proposed method based on several criteria. First, the overall performance was judged based on the values of accuracy, recall, specificity, precision, F1-score, and the number of features selected. In this evaluation, we compared the proposed method with previous two-dimensional (2D) ensemble methods and the standard ReliefF. The most important thing from feature selection is knowing which features/subsets of features are relevant. By using a combination method to combine subsets of features and obtain features that produce the smallest loss, we could deduce which subset of features was the most relevant. The next evaluation approach involved measuring the stability of the proposed method. The last evaluation approach involved looking at the effect of the automatic threshold on the proposed method.

3.1. Feature Selection Performance

Table 2 shows the performance evaluation of the compared feature selection methods: ReliefF as a baseline, correlation feature selection (CFS), minimum-redundancy maximum-relevancy (mRMR), and the fast correlation-based filter (FCBF). We tested them on five datasets representing the different fields of knowledge. From the comparison, ReliefF had the best performance among the compared methods on three of the five datasets. Therefore, ReliefF was used as the baseline in this paper.

3.2. Overall Performance

Table 3 shows the performance evaluation of the proposed method. The proposed method was compared with the previous two-dimensional (2D) ensemble methods and the standard ReliefF. We can see that the proposed method outperformed the two comparison methods on all datasets except one, MADELON. On the MADELON dataset, the proposed method improved the accuracy of the previous method by 3%, although it was still inferior to standard ReliefF by a difference of 2%. Exploring further, we found some unsatisfactory results, especially for the F1-score. The F1-scores for the YALE and ORL datasets were very low, ranging from 0.05 to 0.17. These results were obtained because the recall was high but the precision was very small. This problem could be overcome using other classification methods.
Another aspect of the performance evaluation was the number of relevant features selected. The proposed method produced the fewest features compared to the other two methods. This result relates to the aggregation and combination methods used. As stated earlier, aggregation was done for each subset of features, not for the full set of features in the data; this mechanism is akin to applying multiple thresholds across the ensemble partitions. For the combination step, the mechanism chooses the subset combination with minimum loss, and the selected combination tends to be the smallest one, automatically having the fewest features. Overall, the proposed method outperformed the two other methods with a difference of 0.5–14% in terms of accuracy and reduced about 50% more features than the other methods.

3.3. Subset of Relevant Features

The primary purpose of feature selection is to determine the features/subset of features that are relevant and not relevant in a dataset. Therefore, in this evaluation, we described which subsets of features were relevant in the tested dataset. Table 4 shows the results of the most relevant subsets of features (with a minimum loss) for 10 trials of each dataset.
For the MADELON dataset, the #1 run resulted in minimum loss with the 10th combination; referring to Figure 6, this means that the feature subsets contained in the combination were the second and fourth feature subsets. Over the 10 trials, we found that the highest intersection involved the second and fourth feature subsets. For the CTG dataset, most intersections were in the first and fourth feature subsets. The feature listed in the first subset was the first feature, and those listed in the fourth subset were the 20th and 22nd features. Observed further, the first feature in the CTG dataset is the fetal heart rate (FHR) baseline, the 20th feature is the histogram variance, and the 22nd feature is the FHR pattern. These results indicate that, by using this combination, we could also determine which subsets of features were most relevant in a dataset.

3.4. Stability Measurement

Each stability measurement has its advantages and disadvantages. This evaluation was carried out to measure the stability of the proposed method and also to elaborate on the capabilities of the considered stability measures. For the Hamming distance, we converted the feature ranking into a binary representation. Table 5 shows the features generated by the proposed method on the CTG dataset over 10 iterations.
Table 6 shows the comparison of the stability measurements on the CTG dataset. From this result, it can be said that the stability measured using the Hamming distance had an outstanding value; this is because the difference is based only on binary values. The Spearman stability showed that, if the subsets had the same number of features and the same ranking, the result was 1. The stability using Pearson's correlation had more varied values in this experiment. Overall, the proposed method had excellent stability, ranging from 0.8 to 1.

3.5. Applying Automatic Threshold

We also applied an automatic threshold to the proposed method. The automatic threshold used was the mean of the ranking weights. Figure 7 and Figure 8 show a comparison between the proposed method without the automatic threshold and with the automatic threshold. The difference was not significant; in some cases, the results with the automatic threshold surpassed those without it, and vice versa.
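As a sketch of this rule, the snippet below keeps every feature whose ranking weight is at least the mean weight; the function name and the example weights are illustrative assumptions.

import numpy as np

def automatic_threshold(weights):
    # Keep every feature whose ranking weight is at least the mean weight.
    # This mirrors the 'mean of the ranking weights' rule of Section 3.5 and
    # replaces the fixed percentage threshold k of Section 2.3.3.
    weights = np.asarray(weights, dtype=float)
    return np.where(weights >= weights.mean())[0]   # indices of retained features

# Example: only features clearly above the average weight survive.
print(automatic_threshold([0.9, 0.1, 0.4, 0.8, 0.05]))   # [0 3]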

4. Conclusions and Future Works

In this paper, we presented an improvement of the homogeneous distributed ensemble feature selection with a two-dimensional partition method. The improvement lies in the feature aggregation and feature combination. From the results obtained, the proposed method consistently produced the best performance in terms of both accuracy and feature reduction: it improved accuracy over the other methods by 0.5–14% and reduced about 50% more features. The stability of the proposed method was also excellent, with an average of 0.95. Finally, using the proposed method, we could determine which combination of feature subsets produced a better result.
Although the proposed method gave excellent performance, there are still some limitations that need to be addressed. Future work will focus on how to implement an effective and efficient automatic threshold using this method. We will also study how to improve the F1-scores by implementing other classification methods, such as deep learning.

Author Contributions

Conceptualization, M.R.A.; methodology, M.R.A.; validation, W.J.; formal analysis, M.R.A.; writing—original draft preparation, M.R.A.; writing—review and editing, M.R.A.; visualization, M.R.A.; supervision, W.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Research, Technology, and Higher Education of the Republic of Indonesia.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Guyon, I.; Elisseeff, A. An Introduction to Variable and Feature Selection. J. Mach. Learn. Res. 2003, 3, 1157–1182.
2. Durgabai, R.P.L. Feature Selection using ReliefF Algorithm. Int. J. Adv. Res. Comput. Commun. Eng. 2014, 3, 8215–8218.
3. Kira, K.; Rendell, L.A. Feature selection problem: Traditional methods and a new algorithm. In Proceedings of the Tenth National Conference on Artificial Intelligence, San Jose, CA, USA, 12–16 July 1992.
4. Kononenko, I. Estimating attributes: Analysis and extensions of RELIEF. In Proceedings of the Machine Learning: ECML-94; Bergadano, F., De Raedt, L., Eds.; Springer: Berlin/Heidelberg, Germany, 1994; pp. 171–182.
5. Kononenko, I.; Šimec, E.; Robnik-Šikonja, M. Overcoming the Myopia of Inductive Learning Algorithms with RELIEFF. Appl. Intell. 1997, 7, 39–55.
6. Urbanowicz, R.J.; Meeker, M.; La Cava, W.; Olson, R.S.; Moore, J.H. Relief-based feature selection: Introduction and review. J. Biomed. Inform. 2018, 85, 189–203.
7. Hall, M. Correlation-Based Feature Selection for Machine Learning. Master's Thesis, University of Waikato, Hamilton, New Zealand, 1999.
8. Chormunge, S.; Jena, S. Correlation based feature selection with clustering for high dimensional data. J. Electr. Syst. Inf. Technol. 2018, 5, 542–549.
9. Kohavi, R.H.; John, G. Wrappers for feature subset selection. Artif. Intell. 1997, 97, 273–324.
10. Foithong, S.; Pinngern, O.; Attachoo, B. Feature subset selection wrapper based on mutual information and rough sets. Expert Syst. Appl. 2012, 39, 574–584.
11. Lee, S.J.; Xu, Z.; Li, T.; Yang, Y. A novel bagging C4.5 algorithm based on wrapper feature selection for supporting wise clinical decision making. J. Biomed. Inform. 2018, 78, 144–155.
12. Wang, A.; An, N.; Chen, G.; Li, L.; Alterovitz, G. Accelerating wrapper-based feature selection with K-nearest-neighbor. Knowl.-Based Syst. 2015, 83, 81–91.
13. Chen, Y.; Wang, Z.B. Feature selection based convolutional neural network pruning and its application in calibration modeling for NIR spectroscopy. Chemom. Intell. Lab. Syst. 2019, 191, 103–108.
14. Zhang, X.; Wu, G.; Dong, Z.; Crawford, C. Embedded feature-selection support vector machine for driving pattern recognition. J. Frankl. Inst. 2015, 352, 669–685.
15. Rajeswari, K.; Vaithiyanathan, V.; Neelakantan, T.R. Feature Selection in Ischemic Heart Disease identification using feed forward neural networks. Procedia Eng. 2012, 41, 1818–1823.
16. Ghaemi, M.; Feizi-Derakhshi, M.-R. Feature selection using Forest Optimization Algorithm. Pattern Recognit. 2016, 60, 121–129.
17. Das, A.K.; Das, S.; Ghosh, A. Ensemble feature selection using bi-objective genetic algorithm. Knowl.-Based Syst. 2017, 123, 116–127.
18. Singh, S.; Singh, A.K. Web-Spam Features Selection Using CFS-PSO. Procedia Comput. Sci. 2018, 125, 568–575.
19. Kar, S.; Sharma, K.D.; Maitra, M. Gene selection from microarray gene expression data for classification of cancer subgroups employing PSO and adaptive K-nearest neighborhood technique. Expert Syst. Appl. 2015, 42, 612–627.
20. Vafaee Sharbaf, F.; Mosafer, S.; Moattar, M.H. A hybrid gene selection approach for microarray data classification using cellular learning automata and ant colony optimization. Genomics 2016, 107, 231–238.
21. Ebrahimpour, M.K.; Eftekhari, M. Ensemble of feature selection methods: A hesitant fuzzy sets approach. Appl. Soft Comput. J. 2017, 50, 300–312.
22. Sheeja, T.K.; Kuriakose, A.S. A novel feature selection method using fuzzy rough sets. Comput. Ind. 2018, 97, 111–121.
23. Wang, L.; Meng, J.; Huang, R.; Zhu, H.; Peng, K. Incremental feature weighting for fuzzy feature selection. Fuzzy Sets Syst. 2019, 368, 1–19.
24. Chen, J.; Mi, J.; Lin, Y. A graph approach for fuzzy-rough feature selection. Fuzzy Sets Syst. 2019, 1.
25. Liu, Z.; Zhao, X.; Li, L.; Wang, X.; Wang, D. A novel multi-attribute decision making method based on the double hierarchy hesitant fuzzy linguistic generalized power aggregation operator. Information 2019, 10, 339.
26. Xia, J.; Liao, W.; Chanussot, J.; Du, P.; Song, G.; Philips, W. Improving Random Forest With Ensemble of Features and Semisupervised Feature Extraction. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1471–1475.
27. Seijo-Pardo, B.; Porto-Díaz, I.; Bolón-Canedo, V.; Alonso-Betanzos, A. Ensemble feature selection: Homogeneous and heterogeneous approaches. Knowl.-Based Syst. 2017, 118, 124–139.
28. Bolón-Canedo, V.; Alonso-Betanzos, A. Ensembles for feature selection: A review and future trends. Inf. Fusion 2019, 52, 1–12.
29. Seijo-Pardo, B.; Bolón-Canedo, V.; Alonso-Betanzos, A. On developing an automatic threshold applied to feature selection ensembles. Inf. Fusion 2019, 45, 227–245.
30. Drotár, P.; Gazda, M.; Vokorokos, L. Ensemble feature selection using election methods and ranker clustering. Inf. Sci. 2019, 480, 365–380.
31. Chiew, K.L.; Tan, C.L.; Wong, K.S.; Yong, K.S.C.; Tiong, W.K. A new hybrid ensemble feature selection framework for machine learning-based phishing detection system. Inf. Sci. 2019, 484, 153–166.
32. Alhamidi, M.R.; Arsa, D.M.S.; Rachmadi, M.F.; Jatmiko, W. 2-Dimensional homogeneous distributed ensemble feature selection. In Proceedings of the 2018 International Conference on Advanced Computer Science and Information Systems, ICACSIS 2018, Yogyakarta, Indonesia, 27–28 October 2018.
33. Dowlatshahi, M.B.; Derhami, V.; Nezamabadi-Pour, H. Ensemble of filter-based rankers to guide an epsilon-greedy swarm optimizer for high-dimensional feature subset selection. Information 2017, 8, 152.
34. Guyon, I. NIPS 2003 Workshop on Feature Extraction and Feature Selection Challenge. Available online: http://clopinet.com/isabelle/Projects/NIPS2003/#links (accessed on 17 July 2018).
35. Feature Selection Dataset. Available online: http://featureselection.asu.edu/datasets.php (accessed on 2 April 2018).
36. Dheeru, D.; Karra Taniskidou, E. Machine Learning Repository. Available online: http://archive.ics.uci.edu/ml (accessed on 2 April 2018).
37. Gene Expression Model Selector. Available online: http://gems-system.org (accessed on 10 May 2018).
38. Mohana Chelvan, P.; Perumal, K. A Survey on Feature Selection Stability Measures. Int. J. Comput. Inf. Technol. 2016, 5, 98–103.
39. Khaire, U.M.; Dhanalakshmi, R. Stability of feature selection algorithm: A review. J. King Saud Univ. Comput. Inf. Sci. 2019, in press.
Figure 1. Sample images in USPS dataset (http://www.cad.zju.edu.cn/home/dengcai/Data/USPS/images.html).
Figure 2. Sample images in YALE dataset (http://www.cad.zju.edu.cn/home/dengcai/Data/Yale/images.html).
Figure 3. Sample images in ORL dataset (http://www.cad.zju.edu.cn/home/dengcai/Data/ORL/images.html).
Figure 4. 2-dimensional distribution ensemble feature selection framework.
Figure 5. Illustration of feature aggregation in ensemble feature selection.
Figure 6. All possible feature combinations in ensemble feature selection with p = q = 4.
Figure 7. Performance (accuracy, recall, specificity, precision) comparison of the proposed method without automatic threshold with the proposed method + automatic threshold.
Figure 8. F1-Score comparison of the proposed method without automatic threshold with proposed method + automatic threshold.
Table 1. Datasets.
No | Datasets | Categories | # of Samples | # of Features | # of Classes | Source
1 | MADELON | Artificial data | 2600 | 500 | 2 | [34]
2 | COIL20 | Image data | 1440 | 1024 | 20 | [35]
3 | GISETTE | Image data | 7000 | 5000 | 2 | [34]
4 | USPS | Image data | 9298 | 256 | 10 | [35]
5 | YALE | Image data | 165 | 1024 | 15 | [35]
6 | ORL | Image data | 400 | 1024 | 40 | [35]
7 | CTG | Medical record data | 2126 | 23 | 3 | [36]
8 | 11-TUMORS | Medical record data | 174 | 12,533 | 11 | [37]
9 | LUNG CANCER | Medical record data | 203 | 12,600 | 5 | [37]
10 | TOX_171 | Medical record data | 171 | 5748 | 4 | [35]
11 | PROSTATE_GE | Medical record data | 102 | 5966 | 2 | [35]
12 | GLI_85 | Medical record data | 85 | 22,283 | 2 | [35]
13 | LYMPHOMA | Medical record data | 96 | 4026 | 9 | [35]
14 | SMK_CAN_187 | Medical record data | 187 | 19,993 | 2 | [35]
Table 2. Feature selection performance.
Datasets | Algorithms | ACC | REC | SPE | PRE | F1
MADELON | ReliefF | 75.26 | 73.85 | 76.67 | 75.99 | 0.75
MADELON | CFS | 48.21 | 51.79 | 44.62 | 48.33 | 0.50
MADELON | mRMR | 70.00 | 73.08 | 66.92 | 68.84 | 0.71
MADELON | FCBF | 69.87 | 67.95 | 71.79 | 70.67 | 0.69
COIL20 | ReliefF | 94.44 | 100.00 | 94.15 | 47.83 | 0.65
COIL20 | CFS | 93.98 | 100.00 | 93.66 | 45.83 | 0.63
COIL20 | mRMR | 94.21 | 100.00 | 93.90 | 46.81 | 0.64
COIL20 | FCBF | 92.59 | 100.00 | 92.20 | 40.74 | 0.58
USPS | ReliefF | 88.45 | 95.48 | 87.05 | 59.60 | 0.73
USPS | CFS | 89.17 | 96.13 | 87.78 | 61.15 | 0.75
USPS | mRMR | 89.60 | 94.84 | 88.55 | 62.38 | 0.75
USPS | FCBF | 86.20 | 93.98 | 84.64 | 55.04 | 0.69
CTG | ReliefF | 98.27 | 99.40 | 94.33 | 98.40 | 0.99
CTG | CFS | 96.86 | 98.19 | 92.20 | 97.79 | 0.98
CTG | mRMR | 86.03 | 94.15 | 57.45 | 88.61 | 0.91
CTG | FCBF | 97.65 | 98.39 | 95.04 | 98.59 | 0.98
11-TUMORS | ReliefF | 67.31 | 50.00 | 68.00 | 5.88 | 0.11
11-TUMORS | CFS | 61.54 | 0.00 | 64.00 | 0.00 | NaN
11-TUMORS | mRMR | 76.92 | 50.00 | 78.00 | 8.33 | 0.14
11-TUMORS | FCBF | 55.77 | 0.00 | 58.00 | 0.00 | NaN
Table 3. Performance evaluation.
Datasets | Algorithms | ACC | REC | SPE | PRE | F1 | # of Selected Features
MADELON | ReliefF | 75.59 | 75.54 | 75.64 | 75.67 | 0.76 | 125
MADELON | 2D ensemble | 69.77 | 69.74 | 69.79 | 69.75 | 0.70 | 108.5
MADELON | Proposed Method | 73.15 | 73.18 | 73.13 | 73.15 | 0.73 | 58.9
COIL20 | ReliefF | 93.43 | 98.61 | 93.15 | 43.90 | 0.61 | 256
COIL20 | 2D ensemble | 95.42 | 100.00 | 95.17 | 52.60 | 0.69 | 224.1
COIL20 | Proposed Method | 96.00 | 99.09 | 95.83 | 55.81 | 0.71 | 157.1
GISETTE | ReliefF | 93.29 | 92.74 | 93.84 | 93.78 | 0.93 | 1250
GISETTE | 2D ensemble | 93.28 | 93.05 | 93.51 | 93.49 | 0.93 | 1091.9
GISETTE | Proposed Method | 93.87 | 93.86 | 93.88 | 93.88 | 0.94 | 700.7
USPS | ReliefF | 88.64 | 95.88 | 87.19 | 60.06 | 0.74 | 64
USPS | 2D ensemble | 90.12 | 95.85 | 88.97 | 63.56 | 0.76 | 57.6
USPS | Proposed Method | 90.12 | 95.85 | 88.97 | 63.56 | 0.76 | 57.6
YALE | ReliefF | 54.69 | 58.33 | 54.41 | 9.25 | 0.16 | 256
YALE | 2D ensemble | 56.33 | 70.00 | 55.43 | 9.38 | NaN | 222.9
YALE | Proposed Method | 60.00 | 66.67 | 59.57 | 9.88 | 0.17 | 143
ORL | ReliefF | 60.83 | 50.00 | 61.11 | 3.16 | 0.07 | 256
ORL | 2D ensemble | 64.08 | 43.33 | 64.62 | 2.99 | 0.06 | 224.4
ORL | Proposed Method | 66.00 | 56.67 | 66.24 | 4.04 | 0.08 | 127.9
CTG | ReliefF | 98.43 | 99.23 | 95.59 | 98.76 | 0.99 | 6.00
CTG | 2D ensemble | 98.60 | 99.38 | 95.88 | 98.84 | 0.99 | 4.5
CTG | Proposed Method | 98.85 | 99.52 | 96.52 | 99.02 | 0.99 | 2.7
11-TUMORS | ReliefF | 71.54 | 80.00 | 71.20 | 13.33 | 0.23 | 3134
11-TUMORS | 2D ensemble | 74.04 | 55.00 | 75.04 | 9.08 | 0.22 | 2715.6
11-TUMORS | Proposed Method | 77.12 | 50.00 | 78.47 | 9.13 | 0.22 | 1320.5
LUNG CANCER | ReliefF | 89.50 | 74.00 | 90.91 | 47.30 | 0.56 | 3150
LUNG CANCER | 2D ensemble | 90.00 | 82.00 | 90.73 | 44.23 | 0.57 | 3151.5
LUNG CANCER | Proposed Method | 93.33 | 94.00 | 93.27 | 56.35 | 0.70 | 866.8
TOX_171 | ReliefF | 64.12 | 68.35 | 62.60 | 40.14 | 0.50 | 1437
TOX_171 | 2D ensemble | 52.94 | 60.27 | 50.29 | 30.49 | 0.40 | 1360.30
TOX_171 | Proposed Method | 65.88 | 66.15 | 65.81 | 41.29 | 0.50 | 678.2
PROSTATE_GE | ReliefF | 85.67 | 85.33 | 86.00 | 86.78 | 0.86 | 1492
PROSTATE_GE | 2D ensemble | 80.67 | 80.67 | 80.67 | 82.04 | 0.81 | 1363.4
PROSTATE_GE | Proposed Method | 91.33 | 90.00 | 92.67 | 92.61 | 0.91 | 470.4
GLI_85 | ReliefF | 80.40 | 64.11 | 87.03 | 72.27 | 0.66 | 5571
GLI_85 | 2D ensemble | 80.40 | 63.75 | 87.52 | 70.72 | 0.65 | 5572
GLI_85 | Proposed Method | 84.80 | 73.04 | 89.80 | 76.07 | 0.74 | 1671.6
LYMPHOMA | ReliefF | 66.07 | 83.13 | 51.24 | 60.38 | 0.70 | 1007
LYMPHOMA | 2D ensemble | 64.29 | 72.20 | 57.24 | 60.83 | 0.66 | 874.9
LYMPHOMA | Proposed Method | 76.79 | 89.73 | 64.95 | 70.51 | 0.79 | 412.1
SMK_CAN_187 | ReliefF | 57.68 | 52.22 | 62.76 | 57.03 | 0.54 | 4999
SMK_CAN_187 | 2D ensemble | 63.21 | 61.11 | 65.17 | 62.44 | 0.62 | 5000
SMK_CAN_187 | Proposed Method | 71.07 | 68.52 | 73.45 | 70.95 | 0.69 | 2750
Table 4. Subset of selected features.
Dataset#1 Run#2 Run#3 Run#4 Run#5 Run#6 Run#7 Run#8 Run#9 Run#10 RunIntersection
MADELON1081115108141410132 and 4
YALE1211910111315105151 and 4
ORL53637127123121 and 3
CTG131291189911991 and 4
TOX_17111374581151391 and 3
PROSTATE_GE888521788131 and 4
GLI_8518615211211 and 2
LYMPHOMA1789116487121 and 4
SMK_CAN_187214933721215102 and 4
Table 5. Feature generation of the proposed method on the CTG dataset.
Iteration | Selected Feature | Feature Representative
1 | [1 13 22 20] | [1000000000001000000101]
2 | [13 22 18] | [0000000000001000010001]
3 | [1 22 20] | [1000000000000000000101]
4 | [1 7 22] | [1000001000000000000001]
5 | [22] | [0000000000000000000001]
6 | [1 22 20] | [1000000000000000000101]
7 | [1 22] | [1000000000000000000001]
8 | [1 7 22 20] | [1000001000000000000101]
9 | [3 22] | [0010000000000000000001]
10 | [3 22] | [0010000000000000000001]
Table 6. Stability measurement comparison on the CTG dataset.
Pair | Selected Features | Feature Representative | Hamming | Spearman | Pearson
A / B | [1 13 22 20] / [13 22 18] | [1000000000001000000101] / [0000000000001000010001] | 0.991 | 0.800 | 0.908
A1 / B1 | [1 7 22 20] / [3 22] | [1000001000000000000101] / [0010000000000000000001] | 0.991 | 0.800 | 0.915
A2 / B2 | [1 22 20] / [13 22 18] | [1000000000000000000101] / [0000000000001000010001] | 0.991 | 1.000 | 0.931
A1 / B2 | [3 22] / [1 22] | [0010000000000000000001] / [1000000000000000000001] | 0.995 | 1.000 | 1.000
