Article

Proposing Enhanced Feature Engineering and a Selection Model for Machine Learning Processes

1 School of Computer Science and Engineering, University of Bridgeport, 126 Park Ave, Bridgeport, CT 06604, USA
2 Information Science and Technologies, Penn State University, 3000 Ivyside Park, Altoona, PA 16601, USA
3 Computer Systems, School of Business, Farmingdale State College, 2350 Broadhollow Rd, Farmingdale, NY 11735, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2018, 8(4), 646; https://doi.org/10.3390/app8040646
Submission received: 6 March 2018 / Revised: 10 April 2018 / Accepted: 10 April 2018 / Published: 20 April 2018
(This article belongs to the Special Issue Socio-Cognitive and Affective Computing)

Featured Application

This module can be used independently in any Machine Learning project, or in a model engineered by boosting and blending algorithms for better accuracy and fitness.

Abstract

Machine Learning (ML) requires a certain number of features (i.e., attributes) to train the model. One of the main challenges is to determine the right number and type of such features out of the given dataset’s attributes. It is not uncommon for the ML process to use all available features in the dataset without computing the predictive value of each. Such an approach makes the process vulnerable to overfitting, predictive errors, bias, and poor generalization. Each feature in the dataset is either uniquely predictive, redundant, or irrelevant. The key to better accuracy and fitting in ML is to identify the optimum set (i.e., grouping) of features with the finest matching of each feature’s value. This paper proposes a novel approach to enhance the Feature Engineering and Selection (eFES) optimization process in ML. eFES is built using a unique scheme to regulate error bounds and to parallelize the addition and removal of a feature during training. eFES also introduces local gain (LG) and global gain (GG) functions, using 3D visualizing techniques to assist the feature grouping function (FGF). FGF scores and optimizes the participating features, so the ML process can evolve into deciding which features to accept or reject for improved generalization of the model. To support the proposed model, this paper presents mathematical models, illustrations, algorithms, and experimental results. Miscellaneous datasets are used to validate the model building process in the Python, C#, and R languages. Results show the promising state of eFES as compared to the traditional feature selection process.

Graphical Abstract

1. Introduction

One of the most important research directions in Machine Learning (ML) is Feature Optimization (FO) (collectively grouped as Feature Engineering (FE), Feature Selection (FS), and Filtering) [1]. For FS, the saying “Less is More” captures the essence of this research. Dimensionality reduction [2] has become a focus in the ML process to avoid unnecessary computing power/cost, overlearning, and predictive errors. In this regard, redundant features, which may have predictive value similar to that of other feature(s), may be excluded without negatively affecting the learning process. Similarly, irrelevant features should be excluded as well. FS and FE focus not only on extracting a subset from the optimal feature set but also on building new feature sets previously overlooked by ML techniques. This also includes reducing higher dimensions into lower ones to extract each feature’s value. Recent research has shown noteworthy progress in FE. In [3], the authors reviewed the latest progress in FS and associated algorithms. Among others, principal component analysis (PCA) [4] and the Karhunen-Loeve expansion [5] are widely used with the eigenvalues and eigenvectors of the data covariance matrix for FO. The squared error is calculated as well in the mapping of the orthonormal transformation to reduce general errors. Another approach is the Bayes error probability [6] to evaluate a feature set. However, Bayes errors are generally unknown. Discriminant analysis is also used in FE. Hence, in line with the latest progress and related study (see Section 2), the work proposed in this paper uses ML and mathematical techniques, such as statistical pattern classification [7], orthonormalization [8], probability theory [9], the Jacobian [7], the Laplacian [3], and the Lagrangian distribution [10], to build the mathematical constructs and underlying algorithms (1 and 2). To advance such developments, a unique engineering of the features is proposed in which the classifier learns to group an optimum set of features without consuming excessive computing power, regardless of the anatomy of the underlying datasets and predictive goals. This work also effectively addresses the known challenges of the ML process such as overfitting, underfitting, predictive errors, poor generalization, and low accuracy.

1.1. Background and Motivation

Despite using the best models and algorithms, FO remains crucial to the performance of the ML process and its predictions. FS has been a focus in the fields of data mining [11], data discovery, text classification [12], and image processing [13]. Unfortunately, raw datasets offer no clear guidance or insight into which variables must be focused on. Usually, datasets contain several variables/features, but not all of them contribute towards predictive modeling. Another motivation for such research is to determine the intra- and inter-relationships between the features. Their internal dependence and correlation/relevance greatly impact the way a model learns from the data. To keep the process computationally inexpensive and the accuracy high, features should be categorized by the algorithm itself. The existing literature shows that such work is rarely undertaken in ML research.

1.2. Parent Research

The proposed model eFES is a participating module of the enhanced machine learning engine engineering (eMLEE) model, which is based on parallel processing and learns from its mistakes (i.e., processing and storing the wrong predictions). Other than eFES, the remaining modules shown in Figure 1 are beyond the scope of this paper. Specifically, the eMLEE modules are: (i) enhanced algorithm blend and tuning (eABT) to optimize classifier performance; (ii) enhanced feature engineering and selection (eFES) to optimize feature handling; (iii) enhanced weighted performance metric (eWPM) to validate the fitting of the model; and (iv) enhanced cross validation and split (eCVS) to tune the validation process. Of these, eCVS is in its infancy in our research work. Existing research, as discussed in Section 2, has shown the limitations of general-purpose algorithms in Supervised Learning (SL) for predictive analytics, decision making, and data mining. Thus, eFES (i.e., as part of eMLEE) fills the gaps that Section 2 discusses.

1.3. Our Contributions

Our contributions are the following.
  • Improved feature search and quantification of unknown or previously unlabeled features in datasets, for new insights and relevance in predictive modeling.
  • Outlier identification to minimize the effects on classifier learning.
  • Constructing a feature grouping function (FGF) to add or remove a feature once we have scored them on the correlation, relevance, and non-redundancy of their predictive value. Identifying the true nature of a feature vs. an attribute so bias can be reduced. Features tend to gain or lose their significance (predictive value) from one dataset to another. A perfect example is the attribute “Gender”: Gender/Sex may not have any predictive value in a certain type of dataset/prediction, yet it may have significant value in a different dataset.
  • Constructing a logical 3D space where each feature is observed for its fitness value. Each feature can be quantified based on a logical point in 3D space. Its contribution towards overfitting (x), underfitting (y), and optimum-fitting (z) can be scored, recorded, and then weighted for adding or removing in FGF.
  • Developing a unique approach of utilizing an important metric in ML (i.e., error). We have trained our model to be governed by maximum and minimum bounds of the error, so we can maintain acceptable bias and fitness, including overlearning. Our maximum and minimum bounds for errors are 80% and 20%, respectively. These error bounds can be considered one of the novel ideas in the proposed work. The logic goes thus: models are prone to overfitting, bias, high errors, and low accuracy. We tried to envision whether the proposed model could be governed by some limits on the errors. Errors above 80% or below 20% are considered red flags; such readings may indicate bias, overlearning, or under-learning of the model. Picking 80% and 20% was our rule of thumb, validated with experiments on diverse datasets (discussed in the appendix). A minimal sketch of this rule is given at the end of this section.
  • Finally, engineering local gain (LG) and global gain (GG) functions to improve the feature tuning and optimization.
Figure 2 shows the elevated-level block diagram of the eFES unit.
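As a toy illustration of the error-bound rule from the contributions above, the following is a minimal Python sketch (not the eMLEE API), assuming error readings normalized to [0, 1]:

    def check_error_bounds(err, lower=0.20, upper=0.80):
        # Flag a reading that falls outside the acceptable error band.
        if err > upper:
            return "red flag: above upper bound (possible under-learning or bias)"
        if err < lower:
            return "red flag: below lower bound (possible overlearning)"
        return "within acceptable bounds"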

2. Related Study

To identify the gaps in the state of the art in the field of FO, we considered areas of ML where FO is of high relevance. In general, every ML problem is affected by feature selection and feature processing. Predictive modeling, our focus for the consumption of the proposed model, is a great candidate to examine for FO opportunities. One of the challenges in FO is to mine hidden features that are previously unknown and may hold great predictive value. If such knowledge can be extracted and quantified, the ML process can be dramatically improved. On the other hand, new features can also be created by aggregating existing features. Two irrelevant features can even be combined so that their weighted function becomes a productive feature with higher predictive value. Clearly, an in-depth, comprehensive review of FO and the related state of the art is outside the scope of this paper. However, in this section we provide a study of noteworthy related references and then list the gaps we identified. We also present comparisons of some of the techniques in Section 5.
Li et al. [3] presented a detailed review of the latest developments in the feature selection segment of machine learning. They provided various frameworks, methods, and comparisons in both Supervised Learning (SL) and Unsupervised Learning (UL). However, their comparative study did not reveal any development where each feature can achieve a run-time predictive scoring and can be added or removed algorithmically as the learning process continues. Vergara and Estevez [14] reviewed feature selection methods. They presented updated results in a unifying framework to retrofit successful heuristic criteria. The goal was to justify the need for the feature selection problem through in-depth concepts of relevance and redundancy. However, their work focused only on unifying frameworks and placed generalization on a broader scale. Mohsenzadeh et al. [15] utilized a sparse Bayesian learning approach for feature sample selection. Their proposed relevance sample feature machine (RSFM) is an extension of the RVM algorithm. Their results showed improvement in removing irrelevant features and producing better classification accuracy. Additionally, their results demonstrated better generalization, less system complexity, reduced overfitting, and lower computational cost. However, their work needs to be extended to more SL algorithms. Ma et al. [16] utilized the Particle Swarm Optimization (PSO) algorithm to develop their proposed approach for the detection of falling elderly people. Their research enhances the selection of variables (such as hidden neurons, input weights, etc.). The experiments showed higher sensitivity, specificity, and accuracy readings. Their work, though in the healthcare domain, does not address the application of the approach to a different industry with an entirely different dataset. Lam et al. [17] proposed an unsupervised feature-learning process to improve speed and accuracy, using the Unsupervised Feature Learning (UFL) algorithm and a fast radial basis function (RBF) for further feature training. However, UFL may not fit well when applied to SL. Han et al. [18] used a circle convolutional restricted Boltzmann machine method for 3D feature learning in the unsupervised ML process. The goal was to learn from raw 3D shapes and to overcome the challenges of irregular vertex topology, orientation ambiguity on the surface, and rigid transformation invariances in shapes. Their work using 3D modeling needs to be extended to SL domains and feature learning. Zeng et al. [19] used deep perceptual features for traffic sign recognition in kernel extreme learning machines. Their proposed DP-KELM algorithm showed high efficiency and generalization. However, the algorithm needs to be tested across different traffic systems in the world with more distinctive features than those they considered. Wang et al. [20] discussed the process of purchase decisions in subjects’ minds using MRI scanning images through ML methods. Using the recursive cluster elimination-based SVM method, they obtained higher accuracy (71%) compared to previous findings. They utilized filter (GML) and wrapper (RCE) methods for feature selection. Their work also needs to be extended to other imaging techniques in healthcare. Lara et al. [21] provided a survey on ML applications for wearable sensors, based on human activity recognition. They provided a taxonomy of learning approaches and the related response times in their experiments.
Their work also supported feature extraction as an important phase of the ML process. ML has also shown a promising role in engineering, mechanical, and thermodynamic systems. Zhang et al. [22] worked on ML techniques for prediction of system components in thermal systems. Besides many different units and technique adoptions, they utilized FS methods based on a correlation feature selection algorithm. They used Weka data-mining tools and arrived at a reduced feature set of 16 for improved accuracy. However, their study did not reveal how exactly they arrived at this number and whether a different number of features would have helped further. Wang et al. [23] used a supervised feature method to remove redundant features and retain the important ones for their gender classification. However, they used the neural network method for feature extraction, which is mostly common in unsupervised learning. Their work is yet to be tested on more computer vision tasks, including image recognition tasks in which bimodal vein modeling becomes significant. Liu et al. [24] utilized the concept of F-measure optimization for FS. They developed a cost-sensitive feature approach to determine the best F-measure-based feature for selection by the ML process. They argued that the F-measure is better than accuracy for purposes of performance measurement; indeed, accuracy alone is not sufficient to be considered a baseline for the performance of any model or process. Abbas et al. [25] proposed solutions for IoT-based feature models using a multi-objective optimum approach. They enhanced the binary pattern for nested cardinality constraints using three paths. The second path was observed to increase the time complexity due to the increasing group of features. Though their work was not directly within ML methodologies, it showed performance improvement in the third path when the optional features were removed.
Here are the gaps that we identified based on a comprehensive literature review and comparisons made, evaluated, and presented in Section 5 later in this paper.
  • Parallel processing of features, in which features are evaluated one by one while the model metrics are measured and recorded simultaneously to see the real-time effect, has not been done.
  • Improved grouping of features is needed across diverse feature types in datasets for improved performance and generalization.
  • 3D modeling is rarely done. The 3D approach can help for side-by-side metric evaluation.
  • Accuracy is taken for granted as the measure of model performance. However, we question the relevance of this metric alone and support other metrics in conjunction with it. We engineer our model to incline towards the metrics that are found relevant for a given problem based on the classifier learning.
  • Feature quantification and function building governed by algorithms in the way we have presented is not found in the literature; the dynamic ability of such a design, as our work indicates, can fill this gap in the state of the art.
  • Finally, FO has not been directly addressed. FO helps to regulate error bias and outlier detection and to reduce poor generalization.

3. Groundwork of eFES

This section presents background on the underlying theory, mathematical constructs, definitions, algorithms, and the framework.
The elevated-level framework shown in Figure 3 elaborates on the importance of each definition, incorporated with the other units of eMLEE, and on the ability to implement parallel processing by design. In general computing, parallel processing is done by dividing program instructions among multiple processors, so that time efficiency can be improved. This also ensures the maximum utilization of otherwise idle processors. Similar concepts can be implemented in algorithms and ML models. ML algorithms depend on the problem and data types and require sequential training of each of the data models. However, parallel processing can dramatically improve the learning process, especially for a blended model such as eMLEE. Considering the latest work on parallel processing in ML: in [26], the authors introduced a parallel framework for ML algorithms on large graphs. They experimented with aggregation and sequential steps in their model to allow researchers to improve the usage of various algorithms. Another study was done in [27], where the authors used induction to improve parallelism in decision trees. Python and R libraries have come a long way in providing useful libraries and classes to develop various ML techniques. The authors in [28] introduced a Python library, Qjan, to parallelize ML algorithms in compliance with MapReduce. A PhD thesis [29] at the University of California at Berkeley used a concurrency control method to parallelize the ML process. Another study [30] utilized parallel processing approaches in ML techniques for detection in big-data networks. Such progress has motivated us to incorporate parallel processing in the proposed model.
Our proposed model parallelism is done in two layers:
(i)
Outer layer to eFES, where the eFES unit communicates with other units of eMLEE such as eABT, eWPM, eCVS, and LT. Parallelism is achieved through real-time metric measurement with the LT object; based on classifier learning, eFES reacts to the inner layer (defined next). Other units such as eABT and eCVS enhance the algorithm blend and test-training split in parallel while eFES is being trained. In other words, all four units including eFES, regulated by the LT unit, run in parallel to improve the speed of the learning process and validation for every feature processed in the eFES unit and every algorithm processed in the eABT unit. However, eFES can also work without being tied to the other units, should researchers and industrialists choose so.
(ii)
Inner layer to eFES, where the adding and removing of features are done in parallel. When a qualifying feature is added, the metrics are measured by the model to see whether fitness improves, and features are then added and removed one by one to see the effect on the fitness function. This may be done sequentially, but parallelism better ensures that each feature is evaluated at the same time, that the classifier incorporates metric readings from the LT object, and that the process stays fast, especially when a huge dataset is being processed. A minimal sketch of this inner-layer idea follows this list.
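To make the inner-layer idea concrete, the following is a minimal Python sketch (not the eMLEE API; score_feature is a hypothetical placeholder metric) in which candidate features are scored concurrently while the results are collected in one shared record:

    import statistics
    from concurrent.futures import ThreadPoolExecutor

    def score_feature(values, target):
        # Placeholder predictive-value score: absolute Pearson correlation.
        n = len(target)
        mf, mt = statistics.mean(values), statistics.mean(target)
        cov = sum((f - mf) * (t - mt) for f, t in zip(values, target)) / n
        sf = statistics.pstdev(values) or 1.0
        st = statistics.pstdev(target) or 1.0
        return abs(cov / (sf * st))

    def parallel_scores(columns, target):
        # columns: {feature name: list of values}; returns {feature name: score}.
        with ThreadPoolExecutor() as pool:
            futures = {name: pool.submit(score_feature, vals, target)
                       for name, vals in columns.items()}
        return {name: fut.result() for name, fut in futures.items()}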
Figure 3 illustrates the inner layer parallel processing of each construct that constitutes the eFES unit. It shows the high-level block diagram of eFES unit modeling and related functions. Each definition is explained in plain English next.
Definition 1 covers the theory of the LT unit, which works as the main coordinator assisting and regulating all the sub-units of the eMLEE engine, such as eFES. It is inherently based on parallel processing at a low level. While the classifier is in the learning process, the LT object (in parallel) performs the measurements, records them, updates them as needed, and then feeds them back into the learning process. During classifier learning, the LT object (governed by the LT algorithm, outside the scope of this paper but to be available as an API) creates a logical table (with rows and columns) where it adds or removes the entry of each feature as a weighted function, while constantly measuring the outcome of the classifier learning.
Definition 2 covers the creation of the feature set from raw input via random process, as shown above. As discussed earlier, it uses 3D modeling using x, y, and z coordinates. Each feature is quantized based on the scoring of these coordinates (x being representative of overfit, y being underfit and z being an optimum fit).
Definition 3 covers the core functions of this unit, which quantify the scoring of the features based on their redundancy and irrelevancy. It does this in a unique manner. It should be noted that not every irrelevant feature with a high score will be removed by the algorithm; that is the beauty of it. To increase the generalization of the model on a diverse dataset that it has not seen during testing (i.e., prediction), features are quantified and either kept, removed, or put on a waiting list for re-processing of the addition or removal evaluation. The benefit of this approach is that it does not do injustice to any feature without giving it a second chance later in the learning process. This is because a feature, once aggregated with a new or previously unknown feature, can dramatically improve its score and participate in the learning process. However, the deeper work of “unknown feature extraction” is left for future work, as discussed in the future works section.
Definition 4 utilizes Definitions 1 to 3 and introduces global and local gain functions to evaluate the optimum feature set. Thereby, the predictor features, accepted features, and rejected features can be scored and processed.
Definition 5 finally covers the weight function, observing the 3D weighted approach for each feature that passes through all the layers before it is added to the list of final participating features.
The rest of the section is dedicated to the theoretical foundation of mathematical constructs and underlying algorithms.

3.1. Theory

The eFES model manifests itself in specialized optimization goals for the features in the datasets. The most crucial step of all is the Extended Feature Engineering (EFE) to which we refer when we build upon existing FE techniques. The following five definitions build the technical mechanics of the proposed eFES unit.
Definition 1.
Let there be a Logical Table (LT) module that regulates the ML process during eFES construction. Let LT have 3D coordinates x, y, and z to track, parallelize, and update x ← overfit(0:1), y ← underfit(0:1), z ← optimumfit(−1:+1). Let there be two functions, a Feature Adder +𝔽 and a Feature Remover −𝔽, based on the linearity of the classifier for each feature under test for which RoOpF (Rule 1) is valid. Let LT.RoOpF > 0.5 be considered of acceptable predictive value.
eFES LT module builds very important functions at initial layers for adding a good fit feature and removing a bad fit feature from the set of features available to it, especially when algorithm blend is being engineered. Clearly, not all features will have an optimum predictive value and thus identifying them will count towards optimization. The feature adder function is built as:
$$+\mathbb{F}(x,y,z) = +\mathbb{F}_{F_n} = (F_n \rightarrow F_{n+1}) \sum_{i=1}^{z} LT.score(i) + \sum_{j,k=1}^{x,y} LT.score(j,k) \tag{1}$$
The feature remover function is built as:
$$-\mathbb{F}(x,y,z) = -\mathbb{F}_{F_n} = (F_n \rightarrow F_{n+1}) \sum_{j,k=1}^{x,y} LT.score(j,k) - \sum_{i=1}^{z} LT.score(i) \tag{2}$$
Very similar to the k-means clustering concept [12], which is widely used in unsupervised learning, LT implements a feature weights mechanism (FWM) so it can report, in quantized form, a feature with a high relevancy score that is non-redundant. Thus, we define:
$$FWM(X,Y,Z) = \sum_{x=1}^{X}\sum_{y=1}^{Y}\sum_{z=1}^{Z} (u_x w_x \cdot u_y w_y \cdot u_z w_z)\,\big(\Delta(x,y,z)\big) \tag{3}$$

$$\Delta(x,y,z) = \begin{cases} \sum_{l=1}^{L} u_{lx} w_{lx}, & \text{if } z \neq 0 \text{ AND } z > (0.5,\, y) \\ u_i \in \{0,1\}, & 1 \le i \le L \\ \sum_{l=1}^{L} u_{ly} w_{ly}, & \text{if } z \neq 0 \text{ AND } z > (0.5,\, x) \end{cases} \tag{4}$$
The illustration in Figure 4 shows the concept of the LT modular elements in 3D space, as discussed earlier. Figure 5 shows the variance of the LT. Figure 6 shows that it is based on a binary weighted classification scheme to identify the algorithm for blending and then assign a binary weight accordingly in the LT logical blocks. The diamond shape shows the err distribution that is observed and recorded by the LT module as a new algorithm is added or an existing one is removed. The complete mathematical model for eFES LT is beyond the scope of this paper. We finally provide the eFES LT function as:
$$eFES = \left[\frac{1}{N_e}\sum\left(\frac{err}{err + Err}\right)^{2}\right] \times \sum_{n=1}^{N} F_n\left( f(x,y,z)\,\Big|\, \exp\Big(\frac{+\mathbb{F}_{F_n}}{+\mathbb{F}_{F_n} + (-\mathbb{F}_{F_n})}\Big) \right) \tag{5}$$

where err = local error (LE), Err = global error (GE), and f(x, y, z) is the main feature set in ‘F’ for 3D.
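As a plain-language illustration of Definition 1 and Equations (1) and (2), the following is a hypothetical Python sketch of the LT bookkeeping (names are illustrative, not the eMLEE API): each feature carries an (x, y, z) = (overfit, underfit, optimum-fit) score, and the add/remove decision follows the 0.5 rule of Rule 1 below.

    class LogicalTable:
        def __init__(self):
            self.scores = {}                 # feature -> (x, y, z) fitness scores

        def score(self, feature, x, y, z):
            self.scores[feature] = (x, y, z)

        def feature_adder(self, feature):
            # Accept when the optimum-fit score dominates and passes Rule 1.
            x, y, z = self.scores[feature]
            return z > 0.5 and z > max(x, y)

        def feature_remover(self, feature):
            # Remove when the overfit or underfit contribution dominates.
            x, y, z = self.scores[feature]
            return max(x, y) >= z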
RULE 1: If (LTObject.ScoFunc(A(i)) > 0.5) Then
              Assign “1”
        Else
              Assign “0”

        PROCEDURE 1
        Execute LT.ScoreMetrics(Un.F, Ov.F)
        Compute LT.Quantify(*LT)
        Execute LT.Bias(Bias.F, *)
        * denotes the pointer to the LT object.
Definition 2.
$F_n = \{F_1, F_2, F_3, \ldots, F_n\}$ indicates all the features appearing in the dataset, where each feature $F_i \in F_n \mid f_w \geq 0$; $f_w$ indicates the weighted feature value in the set. Let $F_{ran}(x,y,z)$ indicate the randomized feature set.
We estimate the cost function based on randomized functions. Las Vegas and Monte Carlo algorithms are popular randomized algorithms. The key property of a Las Vegas algorithm is that it always arrives at a correct solution; the process involved is stochastic (i.e., not deterministic) only in its running time, and thus the outcome is guaranteed. In the case of feature selection, this means the algorithm must produce the smallest subset of optimized features based on some criterion, such as classification accuracy. The Las Vegas Filter (LVF) is widely used to achieve this step. Here we set a criterion in which we expect each feature, at random, to get a random maximum predictive value in each run. ∅ denotes the maximum inconsistency allowed per experiment. Figure 5 shows the cost function variation in the LT object for each coordinate.
PROCEDURE 2
       Score_best ← Import all attributes as ‘n’
       Cost_best ← n
       For j ← 1 to Iteration_max Do
          Cost ← Generate random number between 0 and Cost_best
          Score ← Randomly select item from Cost features
          If LT.InConsistance(Score_best, Training Set) ≤ ∅ Then
              Score_best ← Score
              Cost_best ← Cost
       Return (Score_best)
Definition 3.
Let lt.IrrF and lt.RedF be two functions to store the irrelevancy and redundancy score of each feature for a given dataset.
Let us define a Continuous Random Vector $CRV_{QN}$ and a Discrete Random Variable $DRV_H = \{h_1, h_2, h_3, \ldots, h_n\}$. The density function of the random vector, based on cumulative probability, is $P(CRV) = \sum_{i=1}^{N} P_H(h_i)\, p(CRV \mid DRV)$, with $P_H(h_i)$ being the a priori probability of the class.
We observe that the higher error limit (e) (err, green line, round symbol) and the lower error limit (E) (Err, blue line, square symbol) bound the feature correlation in this process. Our aim is to spread the distribution in the z-dimension for optimum fitting as features are added. The red line (diamond symbol) separates the binary distribution of Redundant Features (Red.F) and Irrelevant Features (Irr.F) based on the error bounds. The green and red lines define the upper and lower limits of the error, within which all features correlate. Here, we build a mutual information (MI) function [14] so we can quantify the relevance of one feature upon another in the random set; this information is used to build the construct for Irr.F, since once our classifier learns, it will mature the Irr.F learning module as defined in Algorithm 1.
$$MI(Irr.F(x,y,z) \mid f_i, f_{i+1}) = \sum_{a=1}^{N}\sum_{b=1}^{N} p\big(f_i(a), f_{i+1}(b)\big) \cdot \log \frac{p\big(f_i(a), f_{i+1}(b)\big)}{p\big(f_i(a)\big) \cdot p\big(f_{i+1}(b)\big)} \tag{6}$$
We expect MI → 0 for features that are statistically independent, so we build the construct in which the MI is linearly related to the entropies of the features under test for Irr.F and Red.F; thus:
$$M.I(f_i, f_{i+1}) = \begin{cases} H(f_i) - H(f_i \mid f_{i+1}) \\ H(f_{i+1}) - H(f_{i+1} \mid f_i) \\ H(f_i) + H(f_{i+1}) - H(f_i, f_{i+1}) \end{cases} \tag{7}$$
We use the following construct to develop the relation of ‘Irr.F’ and ‘Red.F’, showing the irrelevancy factor and redundancy factor based on the binary correlation and conflict mechanism illustrated above.
$$Irr.F = \sum_{i,j}^{K} \begin{Bmatrix} f_{ii} & f_{ij} \\ f_{ji} & f_{jj} \end{Bmatrix} \qquad Red.F = \begin{cases} MI(f_i;\, Irr.F) > 0.5 & \text{Strongly Relevant Feature} \\ MI(f_i;\, Irr.F) < 0.5 & \text{Weakly Relevant Feature} \\ MI(f_i;\, Irr.F) = 0.5 & \text{Neutral Relevant Feature} \end{cases} \tag{8}$$
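As an illustration of the MI scoring that feeds Irr.F and Red.F, the following is a short sketch (assuming discrete-valued feature columns; not the eMLEE implementation) of the empirical computation in Equation (6):

    import numpy as np

    def mutual_information(f1, f2):
        # Empirical MI(f1; f2); values near 0 suggest statistical independence.
        f1, f2 = np.asarray(f1), np.asarray(f2)
        mi = 0.0
        for a in np.unique(f1):
            for b in np.unique(f2):
                p_ab = np.mean((f1 == a) & (f2 == b))   # joint probability
                if p_ab > 0:
                    p_a, p_b = np.mean(f1 == a), np.mean(f2 == b)
                    mi += p_ab * np.log(p_ab / (p_a * p_b))
        return mi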
Definition 4.
Globally in 3D space, there exist three types of features (variables): predictor features $PF = \{pf_1, pf_2, pf_3, \ldots, pf_n\}$, accepted features $AF = \{af_1, af_2, af_3, \ldots, af_n\}$, and rejected features $RF = \{rf_1, rf_2, rf_3, \ldots, rf_n\}$, in which $G \leq (g+1)$ for all experimental occurrences of data samples, $G$ being the global gain (GG) and $g$ being the local gain (LG). Let PV be the predictive value. Accepted features $af_n \in PV$ are strongly relevant to the sample data set $\Delta S$ if there exists at least one x-z or y-z plane with score $\geq 0.8$, AND a single feature $f \in F$ is strongly relevant to the objective function ‘ObF’ in distribution ‘d’ if there exists at least one pair of examples in the data set $\{\Delta S_1, \Delta S_2, \Delta S_3, \ldots, \Delta S_n \in I\}$ such that $d(\Delta S_i) \neq 0$ and $d(\Delta S_{i+1}) \neq 0$. Let $(\varphi, \rho, \omega)$ correspond to the acceptable maximum 3-axis function for the possible optimum values of x, y, and z, respectively.
We need to build an ideal classifier that learns from the data during training and estimates the predictive accuracy, so that it generalizes well to the testing data. We can use Bayesian probability theory [31] to develop a construct similar to a direct table lookup. We assume a random variable ‘rV’ that takes many values in the set $\{rV_1, rV_2, rV_3, \ldots, rV_n\}$, each appearing as a class. We will use the prior probability $P(rV_i)$. Thus, we represent a class or set of classes as $rV_i$ and select the greatest $P(rV_i)$ for the given pattern of evidence (pE) that the classifier learns on: $P(rV_i \mid pE) > P(rV_j \mid pE)$, valid for all $i \neq j$.
Because we know that
$$P(rV_i \mid pE) = \frac{P(pE \mid rV_i)\, P(rV_i)}{P(pE)} \tag{9}$$
Therefore, we can write the conditional inequality, where P(pE) is considered with regard to the probability of (pE): $P(pE \mid rV_i)\,P(rV_i) > P(pE \mid rV_j)\,P(rV_j)$, valid for all $i \neq j$. Finally, we can write the probability of error for the above given pattern as P(pE)|error. Assuming the cost for every correct classification is 0 and for every incorrect one is 1, then, as stated earlier, Bayesian classification will put the instance into the class with the highest posterior probability, with $P(pE) = \sum_{i=1}^{k} P(rV_i)\, P(pE \mid rV_i)$. Therefore, the construct can be determined as $P(pE)|_{error} = Error\big[1 - \max\{P(rV_1 \mid pE), \ldots, P(rV_k \mid pE)\}\big]$.
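A small sketch of this decision rule (illustrative only, with priors and likelihoods passed in as dictionaries) may help; it picks the class with the greatest posterior and returns the associated error term from the construct above:

    def bayes_decide(priors, likelihoods):
        # priors: {class: P(rV_i)}; likelihoods: {class: P(pE | rV_i)}.
        joint = {c: likelihoods[c] * priors[c] for c in priors}
        p_evidence = sum(joint.values())      # P(pE) = sum_i P(rV_i) P(pE|rV_i)
        posteriors = {c: v / p_evidence for c, v in joint.items()}
        best = max(posteriors, key=posteriors.get)
        error = 1 - posteriors[best]          # P(pE)|error under 0/1 cost
        return best, error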
Let us construct the matrix function of all features, accepted and rejected, based on GG and LG, as

$$G(x,y,z) = \frac{1}{N} \sum_{i=1}^{n} \big( g_i \times M_H \big) \tag{10}$$

$$M_H = \begin{Bmatrix} pf_{x_1y_1} & \cdots & pf_{x_1y_n} \\ \vdots & \ddots & \vdots \\ pf_{x_ny_1} & \cdots & pf_{x_ny_n} \end{Bmatrix} = \begin{Bmatrix} af_{11} & af_{12} & \cdots & af_{1n} \\ \vdots & & & \vdots \\ af_{n1} & af_{n2} & \cdots & af_{nm} \end{Bmatrix} \times \begin{Bmatrix} rf_{11} & \cdots & rf_{1n} \\ rf_{21} & \cdots & rf_{2n} \\ \vdots & & \vdots \\ rf_{m1} & \cdots & rf_{mn} \end{Bmatrix} \pm (\varphi, \rho, \omega) \tag{11}$$
Table 1 shows the various ranges for the minimum, maximum, and middle points of all three functions as discussed earlier.
Using the Naïve Bayes multicategory equality:

$$P_{1,2,3,\ldots,N}\left[\prod_j x_j\right] + \left[\prod_j y_j\right] + \left[\prod_j z_j\right] = \sum_{k} Var(x,y,z)\,[z^{*}_{i}] \tag{12}$$

where $z^{*}(n) \leftarrow \arg\max_{z} P(z) \prod_{k=1}^{n} p([z])\,.\,z_k$, and the Fisher score algorithm [3] can be used in FS to measure the relevance of each feature based on the Laplacian score, such that $B(i,j) = \begin{cases} \frac{1}{N_l} & \text{if } u_i = u_j = l \\ 0 & \text{otherwise} \end{cases}$.

$N_l$ denotes the number of data samples in the test class indicated by subscript ‘l’. Generally, we know that, based on a specific affinity matrix, $FISHER_{score}(f_i) = \frac{1}{LAPLACIAN_{score}(f_i)} - 1$.
To group the features based on the relevancy score, we must ensure that each group member exhibits low variance and medium stability and that its score is based on optimum fitness; thus each member follows $k\ (k \in K$, where $K \in f(0{:}1))$. This also ensures that we address the high-dimensionality issue: when features appear in high dimensions, they tend to change their value during training. Thus, we determine the information gain using an entropy function as:
$$Entropy(F_n) = -\sum_{t=1}^{V_1} p_t \log p_t \tag{13}$$
$V_1$ indicates the number of distinct values of the target features in the set F, and $p_t$ is the probability of value type t in the complete subset of the feature tested. Similarly, we can calculate the entropy for each feature in the x, y, z dimensions as:
$$Entropy(F_n \mid x,y,z) = \sum_{t \in T(x,y,z)} \frac{|F(t{:}x,y,z)|}{|F_t|}\, Entropy(F_n) \tag{14}$$
Consequently, the gain function for the probability of entropy in 3D is determined as:
$$Gain(I, F(t{:}x,y,z)) = Entropy(F_n) - Entropy(F_n \mid x,y,z) \tag{15}$$
We develop a gain ratio for each feature in the z-dimension, as this ensures the maximum fitness of the feature set for the given predictive modeling task in the given dataset on which the ML algorithm is to be trained. Thus, gR indicates the ratio:
$$gR(z) = \frac{Gain(I, F(t{:}x,y,z))}{G(x,y,z)} \;\Big|\; P(pE) > P(pE)|_{error} \tag{16}$$
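The entropy and gain quantities of Equations (13)-(15) can be sketched in a few lines of Python (illustrative only, for discrete columns):

    import numpy as np

    def entropy(y):
        # Entropy(F_n) of a discrete column, Equation (13).
        _, counts = np.unique(y, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def information_gain(f, y):
        # Gain from splitting target y on feature f, Equations (14) and (15).
        f, y = np.asarray(f), np.asarray(y)
        cond = sum(np.mean(f == v) * entropy(y[f == v]) for v in np.unique(f))
        return entropy(y) - cond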
Figure 7 shows the displacement of the local gain and global gain functions based on probability distributions. As discussed earlier, the LG and GG functions are developed to regulate the accuracy; thus, the validity of the classifier is initially measured based on the accuracy metric.
Table 2 shows the probability of local and global error limits based on probability function (Figure 7) in terms of local (g) and global gain function (G).
RULE 2: If (g (err) < 0.2) Then
            Flag ‘O.F’
        Elseif (g (err) > 0.8) Then
            Flag ‘U.F’
Assume $\{\Delta S_1, \Delta S_2, \Delta S_3, \ldots, \Delta S_n \in I\}$, such that $d(\Delta S_i) \neq 0$ and $d(\Delta S_{i+1}) \neq 0$, where ‘I’ is the global input of testing data. We also confirm the relevance of the feature in the set using the objective function construct in distribution ‘d’; thus:
$$ObF(d, I) = \frac{\log\big(Gain(I, F(t{:}x,y,z))\big)}{\big(err[max{:}1],\, err[min{:}0]\big)} \;\Big|\; d(\Delta S_i) \neq 0 \;\; \text{for every } F_i \text{ in the group} \tag{17}$$
Then, using Equations (14)–(17), we can finally get
$$F.Eng(x,y,z) = \frac{1}{(k \times M)} \sum_{t=1}^{K}\sum_{t=k}^{M} \big( ObF(d, I) \times M_{H_t} \big) \tag{18}$$

$$F.Grp(x,y,z) = F.Eng(x,0,0) + F.Eng(0,y,0) - F.Eng(0,0,z) \tag{19}$$
Figure 8 illustrates Feature Engineering and Feature Grouping as constructed in the mathematical model and governed by Algorithms 1 and 2, defined later. The Metrics API is available from the eMLEE package. The white, yellow, and red orbital shapes indicate the local gain progression through 3D space. The little 3D shapes (x, y, and z) in the accepted feature space in the grouping indicate several (theoretically unlimited) instances of the optimized values as the quantization progresses.
Definition 5.
Feature selection is governed by satisfying the scoring function (score) in 3D space (x: Over-Fitness, y: Under-Fitness, z: Optimum-Fitness), for which the evaluation criterion needs to be maximized, such that $Evaluation\ Criterion: f$. There exists a weighted function $W(\cdot) \in \{(\varphi, \rho, \omega),\, 1\}$ that quantifies the score for each feature, based on the response from the eMLEE engine with the function $eMLEE_{return}$, such that each feature in $\{f_1, f_2, f_3, \ldots, f_n\}$ has an associated score for $(\varphi{:}x,y,z,\ \rho{:}x,y,z,\ \omega{:}x,y,z)$.
Two or more features may have the same predictive value and will be considered redundant. Non-linear relationships may exist between two or more features (variables), which affect the stability and linearity of the learning process. If the incremental accuracy is improved, the non-linearity of a variable is ignored. As features are added to or removed from the given set, the OF, UF, and B (bias) change. Thus, we need to quantify their convergence, relevance, and covariance distribution across the 3D space. We implement a weighted function for each metric using the LVQ technique [1], in which we measure each metric over several experimental runs for the enhanced feature set, as reported back from the function explained in Theorems 1 and 2, such that we optimize the z-dimension for optimum fitness and reduce the x and y dimensions for over-fitness and under-fitness. Let us define:
$$W(\cdot) = \frac{1}{\int_{S_t} p(x)\,dx} \sum_{\gamma=1}^{\sigma} N_\gamma^{T} \cdot N_\gamma \int_{S_\gamma} p(x)\,dx \tag{20}$$

where the piecewise effective decision border is $S_t = \bigcup_{\gamma=1}^{\sigma} S_\gamma$. In addition, the unit normal vector $N_\gamma$ for border $S_\gamma$, $\gamma = 1, 2, 3, 4, \ldots, \sigma$, is valid for all cases in the space. Let us define the probability distribution of data on $S_\gamma$: $\gamma = \int_{S_\gamma} p(x)\,dx$. Here, we can use the Parzen method [32], which restores the nonparametric density estimation method, to estimate $\gamma$.
$$\hat{\gamma}(\Delta) = \sum_{j=1}^{K} \delta\left( d(x_i, S_\gamma) - \frac{\Delta}{2} \right) \tag{21}$$

where $d(x_i, S_\gamma)$ is the Euclidean distance function. Figure 9 shows the Euclidean distance function based on binary weights for the $W(\cdot)$ function.
Table 3 lists a quick comparison of the values of the two functions as developed to observe the minimum and maximum bounds.
We used the Las Vegas algorithm approach, which guarantees a correct solution at the end; we used it to validate the correctness of our gain function. This algorithm guarantees a correct outcome if a solution is returned or created. It uses probability approximate functions to implement runnable time-based instances. For our feature selection problem, this yields a set of features that guarantees the optimum minimum set for acceptable classification accuracy. We use linear regression to compute the value of features and to detect the non-linear relationships between features; we thus implement a function $Func(U(t)) = a + bt$, where a and b are two test features whose values can be determined using linear regression techniques, so $b = \frac{\sum_{t=1}^{T} (t - \bar{t})(U(t) - \bar{u})}{\sum_{t=1}^{T} (t - \bar{t})^{2}}$, where $a = \bar{u} - b\,\bar{t}$, $\bar{u} = \frac{1}{T}\sum_{t=1}^{T} U(t)$, and $\bar{t} = \frac{1}{T}\sum_{t=1}^{T} t$. These equations also minimize the squared error. To compute the weighted function, we use the feature ranking technique [12]. In this method, we score each feature based on a quality measure such as information gain. Eventually, the large feature set is reduced to a small, usable feature set. Feature selection can be enhanced in several ways, such as pre-processing, calculating information gain, error estimation, redundant feature or term removal, and outlier quantification.
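The least-squares slope and intercept above translate directly into a few lines of Python (a minimal sketch for a feature series U(t), t = 1..T):

    import numpy as np

    def linear_fit(u):
        # Least-squares a + b*t fit used to probe (non-)linearity of U(t).
        u = np.asarray(u, dtype=float)
        t = np.arange(1, len(u) + 1, dtype=float)
        b = np.sum((t - t.mean()) * (u - u.mean())) / np.sum((t - t.mean()) ** 2)
        a = u.mean() - b * t.mean()
        return a, b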
The information gain can be determined as:

$$Gain_I(w) = -\sum_{j=1}^{M} P(M_j) \log P(M_j) + P(w)\sum_{j=1}^{M} P(M_j \mid w) \log P(M_j \mid w) + P(\bar{w})\sum_{j=1}^{M} P(M_j \mid \bar{w}) \log P(M_j \mid \bar{w}) \tag{22}$$
‘M’ denotes the number of classes and ‘P’ the probability. ‘w’ is the term contained as a feature, and $P(M_j \mid w)$ is the conditional probability. In practice, the gain is normalized using entropy, such as
$$Norm.Gain_I(w) = \frac{Gain_I(w)}{-\frac{n(w)}{n} \log \frac{n(w)}{n}} \tag{23}$$
Here we apply conventional variance-mean techniques. We can assume $\max \sum_{i=1}^{n} \varphi_i \rho_i \omega_i \equiv \sum_{i=1}^{n} \log \varphi_i \rho_i \omega_i$. The algorithm will ensure that $EC\{F.Sco(x,y,z),\ F.Opt(x,y,z) \geq 0.5\}$ stays within optimum bounds. A linear combination of Shannon information terms [7] and conditional mutual information maximization (CMIM) [3], with $U_{MAX}(Z_k) = \max_{Z_k \in \Delta s} [Inf(Z_k : X, Y \mid (XY)_k)]$, builds the functions as
$$Score(X \mid Y) = \sum_{y_k \in Y} G(y_k) \cdot \sum_{x_k \in X} G(x_k) \times \log(g(z)) \tag{24}$$

$$J_{MIN}(Z)_d = \beta \sum_{(k, k' \in K(0))} S(X{:}Y)_k + \gamma \sum_{(k, k' \in K(0))} S(Y{:}X)_k \tag{25}$$
By using Equations (23)–(27), we get
$$F.Sco(x,y,z) = Score(X \mid Y) + \sum_{i=1}^{n} W(\cdot)_i \sum_{j=1}^{n} Gain_j(w) \tag{26}$$

$$F.Opt(x,y,z) = J_{MIN}(Z)_d \cdot \frac{F.Sco(x,y,z)}{N} \left\{ \frac{F.Sco(x,y,z)}{1 + T \cdot Norm.Gain_I(w)} \right\} - \sum_{j=1}^{n} \Delta Err(j) \tag{27}$$

3.2. eFES Algorithms

The following algorithms aim: (i) to compute functions for raw feature extraction, related-feature identification, redundancy, and irrelevancy, to prepare the layer for feature pre-processing; (ii) to compute and quantify the selection and grouping factor for acceptance as the model incorporates them; and (iii) to compute the optimization function of the model, based on the weight and scoring functions. objeMLEE is the API call for accessing public functions.
Following are the pre-requisites for the algorithms.
Initialization: Set the algorithm libraries, create subset of the dataset for random testing and then correlating (overlapping) tests.
Create: LTObject for eFES
Create: ObjeMLEE (h)/*create an object reference of eMLEE API */
Set: ObjeMLEE.PublicFunctions (h.eABT,h.eFES,h.eWPM,h.eCVS)/* Handles for all four constructs*/
Global Input: $A_n = \{A_1, A_2, A_3, \ldots, A_n\}$, $F_n = \{F_1, F_2, F_3, \ldots, F_n\}$, DataSet(signal, noise)
Dataset Selection: These algorithms require the dataset to be formatted and labelled with supervised learning in mind. They have been tested on miscellaneous datasets selected from different domains, as listed in the appendix. Some preliminary clean-up may be needed depending on the sources and raw format of the data. For our model building, we developed a Microsoft SQL Server-based data warehouse; however, raw data files such as TXT and CSV are valid input files.
Overall Goal (Big Picture): The foremost goal of these two algorithms is to govern the mathematical model built into the eFES unit. These algorithms must work in chronological order, as the output of Algorithm 1 is required for Algorithm 2. The core idea the algorithms utilize is to quantify each feature in its original, revealed, or aggregated state. Based on such scoring, which is very dynamic and parallelized while classifier learning is being governed by these algorithms, a feature is removed, added, or put on the waiting list for a second round of screening. This is the beauty of it: for example, Feature-X may be scored low in the first round, but once Feature-Y is added, the scoring of Feature-X is impacted, and Feature-X is thus upgraded by the scoring function and included accordingly. Finally, the algorithms accomplish the optimum grouping of the features from the dataset. This scheme maximizes relevance, reduces redundancy, and improves the fitness, accuracy, and generalization of the model for improved predictive modeling on any dataset.
Algorithm 1 aims to compute the low-level function F.Prep (x, y, z), based on the final Equations (26) and (27) developed in the model earlier. It uses the conditions of the irrelevant-feature and redundant-feature functions and runs the logic if the values are below 50% as a check criterion. This algorithm splits the training data using the popular cross-validation approach. However, it must be noted, in line 6, that we use our model API to improve the value of k in the process, which we call enhanced cross validation. The LT object regulates it and optimizes the value of k based on the classifier performance in real time. It then follows the error rule (80%, 20%) and keeps track of each corresponding feature as it is added or removed. Finally, it computes the gain function in 3D space for each fitting factor, since our model is based on 3D scoring of each feature in the space, where a point moves through x, y, and z values (logical tracking during classifier learning).
Algorithm 2 aims to use the output of Algorithm 1, in conjunction with computing many other crucial functions, to compute a final feature grouping function (FGF). It uses the weighted function to analyze each participating feature, including the ones that were rejected. It also utilizes the LT object and its internal functions via the API. This algorithm slices the data into various non-overlapping segments. It uses one segment at a time and then randomly mixes them for more slices to improve the classifier’s generalization ability during the training phase. It uses eFES as an LT object from the eMLEE library and records the coordinates for each feature. This way, an entry is made in the LT class, corresponding to the gain function, as shown in lines 6 to 19. From lines 29 to 35, it also uses the probability distribution function, as explained earlier. It computes two crucial functions, $(\varphi, \rho, \omega)$ and $G(x,y,z)$. For the global gain (GG) function, each distribution of local gain g(x, y, z) must be considered as features come in for each test. All the low-probability readings are discarded from active computation but kept on the waiting list in the LT object for a second run. This way, the algorithm does justice to each feature and gives it a second chance before finally discarding it. The features that qualify in the first or second run are then added to the FGF. A minimal sketch of this two-pass scheme follows.
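The following is a hypothetical Python sketch (illustrative names, not the eMLEE API) of the two-pass add/reject/wait-list scheme just described; score_fn stands in for the LT-based scoring, whose result for a waiting feature can change once the accepted group has grown:

    def feature_grouping(features, score_fn, threshold=0.5):
        # First pass: accept features that score above the threshold.
        accepted, waiting = [], []
        for f in features:
            (accepted if score_fn(f, accepted) > threshold else waiting).append(f)
        # Second pass: re-score rejected features before final removal.
        for f in waiting:
            if score_fn(f, accepted) > threshold:
                accepted.append(f)
        return accepted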
Example 1.
In one of the experiments (such as Figure 13), on a dataset with features including ‘RELIGION’, we discovered something very interesting and intuitive. The data were based on a survey of students, as listed in the appendix. We formatted some of the sets from different regions and ran our Good Fit Student (GFS) and Good Fit job Candidate (GFjC) algorithms (briefly discussed in future works). GFS and GFjC are based on the eMLEE model and utilize eFES. We noticed, as a pleasant surprise, that in some cases it rejected the RELIGION feature for GFS prediction; this made sense, as religion will not normally influence a student’s success in their studies. We then discovered that it gave acceptable scoring to the same feature when it came from a different GEOGRAPHICAL region of the world. This made sense as well, because religion’s influence on an individual may vary depending on his or her background. We noticed that it successfully correlated, with Correlation Factor (CF) > 0.83, with other features in the set and gave the associated feature a high score due to its appearing with other features of collateral importance (i.e., geographical information). CF is one of the crucial factors in the GFS and GFjC algorithms. GFS and GFjC are out of the scope of this paper.
Algorithm 1. Feature Preparation Function—F.Prep (x, y, z)
Applsci 08 00646 i001
Another example was encountered where this feature played a significant role in the job industry, where a candidate’s progress can be impacted by their religious background. Another example is the GENDER feature/attribute, which we discuss in Section 6.1. This also explains our motivation for creating the FGF (Algorithm 2).
The FGF function determines the right number and type of features from a given data set during classifier learning and reports accordingly if satisfactory accuracy and generalization have not been reached. The eFES unit, as explained in the model earlier, uses a 3D array to store the scoring via the LT object in the inner layer of the model. Therefore, the eFES algorithms can tell the model whether more features are needed to finally train the classifier for acceptable prediction in a real-world test.
Other examples come from healthcare data, where a health condition (a feature) may become highly relevant if a certain disease is being predicted. For example, to predict the likelihood of cancer in a patient, DIABETES can have a higher predictive score, because imbalanced sugar can feed cancerous cells. During learning, the classifier function starts identifying the features and then starts adding or removing them based on cost and time effectiveness. Compared to other approaches, where such proactive quantification is not done, the eFES scheme dominates.
Algorithm 2. Feature Grouping Function (FGF)
Applsci 08 00646 i002
The simulations in Figure 10 demonstrate the coherence of the global gain and local gain functions with respect to the Loss and Cost functions. Figure 10a shows that the gain function is unstable when eFES randomly creates the feature set. Figure 10b shows that the gain functions stabilize when eFES uses the weighted function, as derived in the model.

3.3. eFES Framework

Figure 11 illustrates the internal functions of the proposed module on a granular level. eMLEE API refers to the available functions that eMLEE model provides to each module such as eFES. Each grey box is a function. The diamond shapes represent a decision-making point in the flow.

4. Results and Discussions

This section provides simulated results in 3D and 2D views to give an in-depth analysis of the outcome of the proposed model for various functions and metrics. Significant samples of the entire experimental results are provided for the latest stage of eFES model development. These simulations elaborate on processing features to observe the optimum fitness (i.e., the z dimension). 3D visuals are selected for better analysis of how the curve moves in space when the learner is optimized in the dimensions. The equation below drives the experimental run for monitoring the z-dimension in correspondence with each of x, y, and z. It should be noted that the results shown are a snapshot of 100+ experimental runs over several data samples of the datasets. The equation shown for each indicates the sampling construct for the analysis being envisioned. Features were included in the experiments from the raw datasets. To improve the generalization of the model, various experiments were performed with standard numbers of features such as 5, 10, 15, 20, and 40. Clearly, less is more, as we stated earlier, but we leave it up to the model to finally group (FGF) the features that have the highest predictive value for learning, ensuring the maximum fitness and generalization. For each experiment, a miscellaneous dataset was used to improve the generalization ability of the model and underlying algorithms.
Figure 12 shows the 3D variance simulations of the functions. Figure 13 shows the comparison between features that were engineered (Enhanced Feature Engineering (EFE)) and those that were not (in blue). It is observed that EFE outperformed plain FE. “No FE” indicates that the experiment took the feature set as per the standard pick and ran the process. EFE indicates enhanced feature engineering incorporating the mathematical constructs and algorithms, where features were added and removed based on metric readings, eventually creating an optimum feature set, as engineered by eMLEE.
Figure 14a–d shows the tests on a 20-experiment run. It should be noted that as the number of experiments increased, classifier learning improved as per the proposed model. The selection of 20 features was based on the optimum number for the grouping function (FGF). Clearly, each dataset brings in a different number of features. Among these features, some are irrelevant, redundant, or outliers, and some are not known at the beginning of classifier learning. However, we standardized around the number 20 for experimental purposes; it remains up to the algorithm to tell the model how many features qualify to be included in the learning process.
Figure 15a–e shows the tests on feature sets of 5, 10, 15, 20, and 50. It compares the EFE and FE correlation for the Fitness Factor (FF). FF is computed by the eFES algorithms explained earlier.
Figure 15 shows the set of experiments for observing a diverse set of features to study the model’s fitness factor. It is observed that EFE keeps the linearity (stability) of the model. Panel (e) was a special test of various metrics; “Engineered” refers to all the metrics of the eFES model incorporated.
Figure 16 shows the three sets of 20-grouped feature sets. The goal of these experiments was to study the model’s ability to improve the accuracy for the features (Accepted, Rejected, and Mixed) from the given data set.
Figure 17 shows a candlestick analysis (commonly used for stock analysis) of the LE and GE bounds. It is observed that the model learned to stay within the bounds of 20% and 80% for LE and GE. The negative 50 range is shown to illustrate potential error swings (i.e., invalid values). The green and purple sticks are for reference only.
Figure 18 and Figure 19 show the observation of the bias of the model for a 20-experiment analysis. The Correlation Factor (CF) is computed by the eFES algorithms. Figure 18 shows that error and accuracy increase at the higher end of the quantification range, as shown. Figure 19, on the other hand, shows that the model has achieved the desired correlation of the Err and Accuracy functions.
Table 4 shows the outcome of our 10-experiment analysis test, in which we tuned the model to discover the maximum possible practical measures as shown. We used 500+ iterations on several different datasets to validate the model’s stability on real-world data with the functions built into our proposed model. Red values were found to be in error; further work is needed to investigate them.

5. Comparative Analysis

This section provides a brief comparison of the latest techniques with the proposed eFES model. The data sources are detailed in the Appendix. Table 5 lists the datasets used. Table 6 lists the methods considered for the comparisons. Table 7 lists the table structure of the division for the results in Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14, Table 15, Table 16, Table 17, Table 18 and Table 19. The Python and R packages used are detailed in the tools sub-section of the Appendix. All values are normalized to the range between 0 and 1 for our functions’ standard outcome measures. It should be noted that the data shown in Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14, Table 15, Table 16, Table 17, Table 18 and Table 19 are a subset of our total experimental analysis. The remaining results are left for our literature review and model comparison paper as future work.

6. Final Remarks

6.1. Conclusions

This paper reports the latest progress on the proposed model for enhanced Feature Engineering and Selection (eFES), including mathematical constructs, a framework, and the algorithms. eFES is a module of the enhanced Machine Learning Engine Engineering (eMLEE) parent model. eFES is based on the following building blocks: (a) a feature set is processed through standard methods, and the measured metrics are recorded; (b) features are weighted based on the learning process, where accepted features and rejected features are separated using 3D-based training by building Local Gain (LG) and Global Gain (GG) functions; (c) features are then scored and optimized so the ML process can evolve into deciding which features to accept or reject for improved generalization of the model; (d) finally, features are evaluated and tested, and the model is completed with the feature grouping function (FGF). This paper reports observations from several hundred experiments and then implements 10 experimental approaches to tune the model. The 10-experiment rule was adopted to narrow down (i.e., slice) the result extraction from several hundred runs. The LG and GG functions were built and optimized in 3D space. The included results show promising outcomes of the proposed eFES scheme. It supports the use of feature sets to further optimize the learning process of ML models for supervised learning. Using the novel approach of Local Error and Global Error bounds of 20% to 80%, we could tune our model more realistically: if the errors were above 80% or below 20%, we flagged the fit as invalid. This unique approach to engineering a model turned out to be very effective in our experiments and observations, as reported and discussed in this paper. Although this model is based on parallel processing, using high-speed hardware or a Hadoop-based system would help further.
Features (i.e., attributes) in datasets are often irrelevant or redundant and may have little predictive value. We therefore constructed two functions: (A) Irrelevant, Irr.F (Equation (8)), and (B) Redundant, Red.F (Algorithm 1). Real-world data may contain many more such features, and it is exactly this fact that revealed the gap our work fills. For classifier learning, features play a crucial role in the speed, performance, predictive accuracy, and reliability of the model; too many or too few features may overfit or underfit it. The question then becomes: what is the optimum (i.e., right-sized) feature set that should be filtered for an ML process? That is where our work comes in. We wanted the model to decide for itself as it continues to learn from more data. A feature such as "Gender" or "Sex" may carry extreme predictive value (i.e., weight) when building a predictive model on academic data from a part of the world where gender bias is high, yet the same feature may not play a significant role in a dataset from a domain where gender bias does not exist. We do not anticipate this from our own assumptions; instead, we let the model tell us which features should be included or removed. Thus, we have two functions, the Adder (+F(x, y, z), Equation (1)) and the Remover (−F(x, y, z), Equation (2)). The number 20 was selected as the feature count to optimize around; Figure 15a–e shows the tests for 5, 10, 15, and 20 features, used to observe the fitness factor. Tables 8–19 in Section 5 show the promising state of eFES compared to the existing techniques; it generally performed very well. Parallel processing and 3D engineering of the feature functions greatly improved the FO, as we set out to investigate, and future work will further enhance its internals.
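To make the Adder/Remover idea concrete, the sketch below implements a greatly simplified greedy stand-in: a feature is tentatively added, kept if cross-validated accuracy improves, and in effect removed otherwise. This is an illustration of the concept only, assuming a scikit-learn environment and a public dataset; it is not the eFES implementation of Equations (1) and (2).

```python
# Hedged sketch of the Adder (+F) / Remover (-F) concept: greedily keep a
# feature when it raises cross-validated accuracy, drop it otherwise.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

selected, best = [], 0.0
for j in range(X.shape[1]):
    trial = selected + [j]               # Adder: tentatively include feature j
    score = cross_val_score(model, X[:, trial], y, cv=5).mean()
    if score > best:                     # keep feature j only if accuracy improves
        selected, best = trial, score
    # otherwise the Remover, in effect, drops feature j again
print(f"kept {len(selected)} of {X.shape[1]} features, CV accuracy = {best:.3f}")
```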
In some of the experimental tests we came across invalid outcomes and had to re-tune the model. Every model-building process encounters such issues, and further investigation is always needed. We found that these issues do not reflect any large inaccuracy in the results or instability of the model; even in our diverse and stress testing, errors and unexpected readings were rare compared to stable, expected results. The behavior should nonetheless be watched closely through future enhancements so that it does not grow into a real defect. This model is based on supervised learning algorithms.

6.2. Future Works

To further improve the current state of eMLEE and its components (such as the one reported in this paper), we will test more data, specifically from http://www.kaggle.com, www.data.gov, and www.mypersonality.org. We will develop and test more algorithms, especially in the domain of unsupervised learning, for new insights into feature engineering and selection. eFES also needs further extension toward exploring and engineering unknown features that are normally not encountered by the learning process but may have great predictive value. We are developing a model known as the "Predicting Educational Relevance For an Efficient Classification of Talent (PERFECT)" Algorithm Engine (PAE). PAE is based on eMLEE and incorporates three algorithms: Noise Removal and Structured Data Detection (NR-SDD), Good Fit Student (GFS), and Good Fit job Candidate (GFjC). We have published preliminary results [33] and are working to apply the eFES (i.e., eMLEE) model in its latest form to study, explore, and validate further enhancements.

Author Contributions

All authors contributed to the research and related literature. F.U. wrote the manuscript as part of his Ph.D. dissertation work. J.L. advised F.U. throughout his research. All authors discussed the research with F.U. during the production of this paper. J.L., S.R., and S.H. reviewed the work presented in this paper and advised changes and improvements throughout the paper preparation and the model experimental process; F.U. incorporated the changes. F.U. conducted the experiments and simulations. All authors reviewed and approved the work.

Conflicts of Interest

The authors declare no conflict of interest.

Key Notations

x: Point of overfitting (OF)
y: Point of underfitting (UF)
z: Point of optimum-fitting (OpF)
F_n: Complete raw feature set
+F: Feature Adder Function
−F: Feature Remover Function
LT: Logical Table Function
A(i): ith ML algorithm, such as SVM
eFES ratio: Ratio of normalized error between local and global errors
F_ran(x, y, z): Randomized feature set
f_w: Weighted feature value
Δ(x, y, z): Regulating function in the LT object to obey the 50% reference for training
err (e), LE: Local error
Err (E), GE: Global error
Maximum inconsistency
Q_N: Nth random generator
f_i: ith position of a feature in 2D space
g (LG): Local gain
G (GG): Global gain
af_i: ith accepted feature
rf_i: ith rejected feature
pf_i: ith predictive feature
ΔS_i: ith dataset item
(φ, ρ, ω): Acceptable parameter function for x, y, z
ObF: Objective function
k ∈ K: Predictor ID in the group of K
EC: Evaluation Criterion
W(): Weighted Function
N_γ, S_γ: Border unit normal vectors
γ̂(Δ): Probability distribution based on nonparametric density estimation
Gain_I(w): Information gain
J_MIN(Z)_d: Jacobian minimization

Appendix A

Appendix A.1. Dataset Sources

We utilized data from the domains listed below. Some datasets came in raw, CSV, or SQLite formats with parameters and field definitions; we transformed all input data into a SQL Server data warehouse. Some of the datasets are well suited to prediction tasks in preventive healthcare, the stock market, epidemics, and crime control.

Appendix A.2. Tools

Owing to our years of background in databases and data architecture, we selected Microsoft SQL Server [34] (Business Intelligence, SQL Server Analysis Services, and Data Mining) as our data warehouse. Preliminary work is being conducted in the Microsoft Azure machine learning tools, and we used the Microsoft Excel data mining tools [35,36]. Given our programming background, we used Microsoft C# (mostly for learning in the beginning) and the Python and R languages for the main construction of this model and its algorithms. There are various popular and useful Python data analysis and scientific libraries (https://wiki.python.org/moin/NumericAndScientific) such as Pandas, NumPy, SciPy (https://www.scipy.org/), Matplotlib, scikit-learn, Statsmodels, ScientificPython, Fuel, SKdata, and MILK. For the R language (https://cran.r-project.org/), there are various libraries such as gbm, KlaR, tree, RWeka, ipred, CORElearn, the MICE package, rpart, party, caret, and randomForest. We used those relevant to our work and are in the process of learning, experimenting with, and using more of them for future work. We also used GraphPad Prism (https://www.graphpad.com/scientific-software/prism/) to produce simulated results. Some of the Python and R packages used are the following: the FSelector package, sklearn.feature_extraction, sklearn.decomposition, sklearn.ensemble, the nsprcomp R package, R RFE, R varSelRF, the R Boruta package, calc.relaimpo (relative importance, R), the earth package, step-wise regression, and Weight of Evidence (WOE).
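As a brief illustration of one of the listed packages, the example below applies scikit-learn's Recursive Feature Elimination (RFE), one of the comparison methods in Table 6. The dataset and parameter choices here are illustrative only and are not tied to the experiments reported above.

```python
# Example use of scikit-learn's RFE (Recursive Feature Elimination),
# one of the comparison methods listed in Table 6.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
rfe = RFE(LogisticRegression(max_iter=5000), n_features_to_select=10)
rfe.fit(X, y)
print("selected feature mask:", rfe.support_)   # True for kept features
print("feature ranking:", rfe.ranking_)         # 1 = selected, higher = eliminated earlier
```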

References

  1. Guyon, I.; Elisseeff, A. An Introduction to Variable and Feature Selection. J. Mach. Learn. Res. 2003, 3, 1157–1182. [Google Scholar]
  2. Globerson, A.; Tishby, N. Sufficient Dimensionality Reduction. J. Mach. Learn. Res. 2003, 3, 1307–1331. [Google Scholar]
  3. Li, J.; Cheng, K.; Wang, S.; Morstatter, F.; Trevino, R.P.; Tang, J.; Liu, H. Feature Selection: A Data Perspective. ACM Comput. Surv. 2017, 50, 94. [Google Scholar] [CrossRef]
  4. Dayan, P. Unsupervised learning. In The Elements of Statistical Learning; Springer: New York, NY, USA, 2009; pp. 1–7. [Google Scholar]
  5. Tuia, D.; Volpi, M.; Copa, L.; Kanevski, M.; Munoz-Mari, J. A Survey of Active Learning Algorithms for Supervised Remote Sensing Image Classification. IEEE J. Sel. Top. Signal Process. 2011, 5, 606–617. [Google Scholar] [CrossRef]
  6. Chai, K.M.A.; Ng, H.T.; Chieu, H.L. Bayesian online classifiers for text classification and filtering. In Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Tampere, Finland, 11–15 August 2002; pp. 97–104. [Google Scholar]
  7. Kaggle. Feature Engineering; Kaggle: San Francisco, CA, USA, 2010; pp. 1–11. [Google Scholar]
  8. Lin, C. Optimization and Machine Learning; MIT Press: Cambridge, MA, USA, 2013. [Google Scholar]
  9. Armstrong, H. Machines that Learn in the Wild; NESTA: London, UK, 2015; pp. 1–18. [Google Scholar]
  10. Shalev-Shwartz, S.; Ben-David, S. Understanding Machine Learning; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar]
  11. Liu, H.; Motoda, H. Feature Selection for Knowledge Discovery and Data Mining; Springer Science & Business Media: Berlin, Germany, 1998. [Google Scholar]
  12. Forman, G. Feature Selection for Text Classification. Comput. Methods Feature Sel. 2007, 16, 257–274. [Google Scholar]
  13. Nixon, M.S.; Aguado, A.S. Feature Extraction & Image Processing for Computer Vision; Academic Press: Cambridge, MA, USA, 2012. [Google Scholar]
  14. Vergara, J.R.; Estévez, P.A. A Review of Feature Selection Methods Based on Mutual Information. Neural Comput. Appl. 2015, 24, 175–186. [Google Scholar] [CrossRef]
  15. Mohsenzadeh, Y.; Sheikhzadeh, H.; Reza, A.M.; Bathaee, N.; Kalayeh, M.M. The relevance sample-feature machine: A sparse bayesian learning approach to joint feature-sample selection. IEEE Trans. Cybern. 2013, 43, 2241–2254. [Google Scholar] [CrossRef] [PubMed]
  16. Ma, X.; Wang, H.; Xue, B.; Zhou, M.; Ji, B.; Li, Y. Depth-based human fall detection via shape features and improved extreme learning machine. IEEE J. Biomed. Health Inform. 2014, 18, 1915–1922. [Google Scholar] [CrossRef] [PubMed]
  17. Lam, D.; Wunsch, D. Unsupervised Feature Learning Classification with Radial Basis Function Extreme Learning Machine Using Graphic Processors. IEEE Trans. Cybern. 2017, 47, 224–231. [Google Scholar] [CrossRef] [PubMed]
  18. Han, Z.; Liu, Z.; Han, J.; Vong, C.M.; Bu, S.; Li, X. Unsupervised 3D Local Feature Learning by Circle Convolutional Restricted Boltzmann Machine. IEEE Trans. Image Process. 2016, 25, 5331–5344. [Google Scholar] [CrossRef] [PubMed]
  19. Zeng, Y.; Xu, X.; Shen, D.; Fang, Y.; Xiao, Z. Traffic Sign Recognition Using Kernel Extreme Learning Machines With Deep Perceptual Features. IEEE Trans. Intell. Transp. Syst. 2016, 18, 1–7. [Google Scholar] [CrossRef]
  20. Wang, Y.; Chattaraman, V.; Kim, H.; Deshpande, G. Predicting Purchase Decisions Based on Spatio-Temporal Functional MRI Features Using Machine Learning. IEEE Trans. Auton. Ment. Dev. 2015, 7, 248–255. [Google Scholar] [CrossRef]
  21. Lara, O.D.; Labrador, M.A. A Survey on Human Activity Recognition using Wearable Sensors. IEEE Commun. Surv. Tutor. 2013, 15, 1192–1209. [Google Scholar] [CrossRef]
  22. Zhang, K.; Guliani, A.; Ogrenci-Memik, S.; Memik, G.; Yoshii, K.; Sankaran, R.; Beckman, P. Machine Learning-Based Temperature Prediction for Runtime Thermal Management Across System Components. IEEE Trans. Parallel Distrib. Syst. 2018, 29, 405–419. [Google Scholar] [CrossRef]
  23. Wang, J.; Wang, G.; Zhou, M. Bimodal Vein Data Mining via Cross-Selected-Domain Knowledge Transfer. IEEE Trans. Inf. Forensics Secur. 2018, 13, 733–744. [Google Scholar] [CrossRef]
  24. Liu, M.; Xu, C.; Luo, Y.; Xu, C.; Wen, Y.; Tao, D. Cost-Sensitive Feature Selection by Optimizing F-measures. IEEE Trans. Image Process. 2017, 27, 1323–1335. [Google Scholar] [CrossRef]
  25. Abbas, A.; Siddiqui, I.F.; Lee, S.U.-J. Multi-Objective Optimum Solutions for IoT-Based Feature Models of Software Product Line. IEEE Access 2018, in press. [Google Scholar] [CrossRef]
  26. Haller, P.; Miller, H. Parallelizing Machine Learning-Functionally. In Proceedings of the 2nd Annual Scala Workshop, Stanford, CA, USA, 2 June 2011. [Google Scholar]
  27. Srivastava, A.; Han, E.-H.S.; Singh, V.; Kumar, V. Parallel formulations of decision-tree classification algorithms. In Proceedings of the 1998 International Conference Parallel Process, Las Vegas, NV, USA, 15–17 June 1998; pp. 1–24. [Google Scholar]
  28. Batiz-Benet, J.; Slack, Q.; Sparks, M.; Yahya, A. Parallelizing Machine Learning Algorithms. In Proceedings of the 24th ACM Symposium on Parallelism in Algorithms and Architectures, Pittsburgh, PA, USA, 25–27 June 2012. [Google Scholar]
  29. Pan, X. Parallel Machine Learning Using Concurrency Control. Ph.D. Thesis, University of California, Berkeley, CA, USA, 2017. [Google Scholar]
  30. Siddique, K.; Akhtar, Z.; Lee, H.; Kim, W.; Kim, Y. Toward Bulk Synchronous Parallel-Based Machine Learning Techniques for Anomaly Detection in High-Speed Big Data Networks. Symmetry 2017, 9, 197. [Google Scholar] [CrossRef]
  31. Kirk, M. Thoughtful Machine Learning; O’Reilly Media: Newton, MA, USA, 2015. [Google Scholar]
  32. Kubat, M. An Introduction to Machine Learning; Springer: Berlin, Germany, 2015. [Google Scholar]
  33. Uddin, M.F.; Lee, J. Proposing stochastic probability-based math model and algorithms utilizing social networking and academic data for good fit students prediction. Soc. Netw. Anal. Min. 2017, 7, 29. [Google Scholar] [CrossRef]
  34. Tang, Z.; Maclennan, J. Data Mining With SQL Server 2005; John Wiley & Sons: Hoboken, NJ, USA, 2005. [Google Scholar]
  35. Linoff, G.S. Data Analysis Using SQL and Excel; John Wiley & Sons: Hoboken, NJ, USA, 2008. [Google Scholar]
  36. Fouché, G.; Langit, L. Data Mining with Excel. In Foundations of SQL Server 2008 R2 Business Intelligence; Apress: Berkeley, CA, USA, 2011; pp. 301–328. [Google Scholar]
Figure 1. This illustration shows the elevated system externals of eMLEE. The Logical Table (LT) interacts primarily with eFES and eABT, as compared to the other two modules. It coordinates and regulates the metrics of the learning process in the parallel mode.
Figure 2. eFES elevated level.
Figure 3. Theoretical foundation illustration at the elevated level.
Figure 4. Illustration of the conceptual view of the LT modules in 3D space.
Figure 5. (a) This test shows the variance of the LT module for the cost function for all three coordinates and then z (optimum-fitness); this is the ideal behavior. (b) This test shows the real (experimental) behavior. (c) The ideal shift of all three coordinates in space while they are tested by the model in parallel; each coordinate (x, y, z) lies on the black lines in each direction and, based on the scoring reported by the LT object (and cost function), sits on either a positive or a negative point as shown. (d) The ideal spread of each point when z is optimized with the lowest cost function.
Figure 6. Illustration of probability-based feature binary classification. Overlapped matrices for Red.F and Irr.F, for which the probability scope resulted in acceptable errors by stepping into vector space, where 0.8 > err > 0.2.
Figure 7. Illustration of the probability of the LG and GG functions.
Figure 8. Illustration of Feature Engineering and Feature Grouping as constructed in the mathematical model.
Figure 9. Illustration of the function based on the Euclidean distance function.
Figure 10. Gain function correlation with the loss and cost functions.
Figure 11. Framework illustrating the internal processes of the eFES module.
Figure 12. (a) Variance in z is at its minimum on random datasets; (b) the ideal variance in all axes, which is what we wanted to observe; (c) the real (practical) variance in all axes, which is what we observed.
Figure 13. A random experiment on 15 features for the FE vs. EFE correlation study of the observed fitness factor.
Figure 14. (a) LG and GG were very random throughout the tests; (b) LG showed a linear correlation (regression) when x (overfitting) was found to be low and z was kept random in 3D space, while GG remained random; (c) observations at low y, where we found GG to be close to a linear response; (d) finally, as the model is optimized (high z), we saw the expected and desired linear regression, along with some unexpected peaks suspected to be caused by outliers.
Figure 15. Set of experiments observing a diverse set of features to study the model's fitness factor: (a) with features in the higher range of 15, we observe consistent stability; (b) 10 features are considered to evaluate the fitness function for both EFE and FE, and we clearly observe the improvement in the fitness function; (c) with features in the higher range of 15, we observe consistent stability; (d) as expected, with up to 20 features the maximum of the fitness function is around 80%; (e) comparison of the various metrics, with the relevant fitness-factor value for each metric shown in a distinct color.
Figure 16. Accuracy validation for feature optimization.
Figure 17. Observation of the candlestick analysis for global (Err) error bounds.
Figure 18. Poor correlation of Err and Accuracy during high bias.
Figure 19. Expected and desired correlation of the Err and Accuracy functions.
Table 1. Typical observations of the functions.
Function | Min | Mid | Max
(φ, ρ, ω) | (0.21, 0.71, 0.44) | (0.43, 0.55, 0.49) | (0.81, 0.76, 0.58)
g(x, y, z) | (0.34, 0.51, −0.11) | (0.55, 0.51, 0.68) | (0.67, 0.71, 0.89)
G(x, y, z) | (0.44, 0.55, 0.45) | (0.49, 0.59, 0.58) | (0.52, 0.63, 0.94)
Table 2. Typical observations.
 | g (max) | G (min) | G (max) | g (min)
P (err) | 0.25 | 0.45 | 0.31 | 0.56
P (Err) | 0.32 | 0.41 | 0.49 | 0.59
Table 3. Average observations.
Function | Min | Max
W() | 28.31% | 78.34%
γ | +0.2834 | −0.1893
Table 4. 10th experimental values for the functions shown.
Internal Functions | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
MI | 0.009 | 0.012 | 0.023 | 0.034 | −0.931 | 0.563 | 0.611 | 0.678 | 0.712 | 0.731
Irr.F(x, y, z) | 0.119 | 0.217 | 0.241 | 0.298 | 0.381 | 0.383 | 0.512 | 0.629 | 0.672 | 0.681
Red.F | 0.191 | 0.200 | −0.001 | 0.289 | 0.321 | 0.341 | 0.440 | 0.512 | 0.525 | 0.591
err | 0.821 | 0.781 | 0.732 | 0.612 | 0.529 | 0.489 | 0.410 | 0.371 | 0.330 | 0.319
Err | 0.901 | 0.900 | 0.871 | 0.844 | 0.731 | −0.321 | 0.620 | 0.521 | 0.420 | 0.381
F.Sco(x, y, z) | 0.390 | 0.421 | 0.498 | 0.534 | 0.634 | 0.721 | 0.770 | 0.812 | 0.856 | 0.891
F.Opt(x, y, z) | 0.110 | 0.230 | 0.398 | 0.491 | 0.540 | 0.559 | −0.210 | 0.639 | 0.776 | 0.791
Table 5. Data Sources (DS).
1. Breast Cancer Wisconsin Data Set
2. Car Evaluation
3. Iris Species
4. Twitter User Gender Classification
5. College Scorecard
6. Pima Indians Diabetes Database
7. Student Alcohol Consumption
8. Education Statistics
9. Storm Prediction Center
10. Fatal Police Shootings
11. 2015 Flight Delays and Cancellations
12. Credit Card Fraud Detection
13. Heart Disease Data Set
14. Japan Census Data
15. US Mass Shootings
16. Adult Census Income
17. 1.88 Million US Wildfires
18. S&P 500 Stock Data
19. Zika Virus Epidemic
20. Retail Data Analytics
Table 6. Methods.
1. Information Gain (IG)
2. Chi-squared (CS)
3. Pearson Correlation (PC)
4. Analysis of Variance (ANOVA)
5. Weight of Evidence (WOE)
6. Recursive Feature Elimination (RFE)
7. Sequential Feature Selector (SFS)
8. Univariate Selection (US)
9. Principal Component Analysis (PCA)
10. Random Forest (RF)
11. Least Absolute Shrinkage and Selection Operator (LASSO)
12. Ridge Regression (RR)
13. Elastic Net (EN)
14. Gradient Boosted Machines (GBM)
15. Linear Discriminant Analysis (LDA) / Multiple Discriminant Analysis (MDA)
16. Joint Mutual Information (JMI)
17. Non-negative Matrix Factorization (NNMF)
Table 7. Table structures for Tables 8 to 19.
Table Number | Measure | Datasets | Methods
8 | Accuracy | 1–10 | 1–10
9 | Accuracy | 11–20 | 11–17
10 | Error | 1–10 | 1–10
11 | Error | 11–20 | 11–17
12 | Precision Score | 1–10 | 1–10
13 | Precision Score | 11–20 | 11–17
14 | C-Index | 1–10 | 1–10
15 | C-Index | 11–20 | 11–17
16 | AUC | 1–10 | 1–10
17 | AUC | 11–20 | 11–17
18 | Cohen's Kappa | 1–10 | 1–10
19 | Cohen's Kappa | 11–20 | 11–17
Table 8. Performance Evaluation (PE) = Accuracy, ideally higher values are desired.
DS | IG | CS | PC | ANOVA | WOE | RFE | SFS | US | PCA | RF | eFES | Winner
1 | 0.489 | 0.581 | 0.493 | 0.821 | 0.432 | 0.718 | 0.562 | 0.321 | 0.612 | 0.937 | 0.827 | RF
2 | 0.391 | 0.501 | 0.452 | 0.732 | 0.479 | 0.700 | 0.590 | 0.476 | 0.842 | 0.842 | 0.863 | eFES
3 | 0.429 | 0.543 | 0.572 | 0.683 | 0.481 | 0.681 | 0.623 | 0.419 | 0.693 | 0.534 | 0.633 | PCA
4 | 0.444 | 0.492 | 0.365 | 0.601 | 0.538 | 0.710 | 0.512 | 0.478 | 0.793 | 0.792 | 0.825 | eFES
5 | 0.492 | 0.572 | 0.392 | 0.621 | 0.593 | 0.661 | 0.652 | 0.563 | 0.824 | 0.312 | 0.881 | eFES
6 | 0.482 | 0.563 | 0.592 | 0.910 | 0.582 | 0.633 | 0.692 | 0.673 | 0.847 | 0.118 | 0.792 | ANOVA
7 | 0.523 | 0.557 | 0.673 | 0.666 | 0.523 | 0.502 | 0.619 | 0.693 | 0.734 | 0.492 | 0.700 | PCA
8 | 0.542 | 0.599 | 0.962 | 0.732 | 0.710 | 0.682 | 0.638 | 0.478 | 0.892 | 0.692 | 0.983 | eFES
9 | 0.423 | 0.630 | 0.921 | 0.802 | 0.504 | 0.623 | 0.742 | 0.732 | 0.872 | 0.631 | 0.902 | PC
10 | 0.491 | 0.612 | 0.683 | 0.678 | 0.864 | 0.644 | 0.699 | 0.535 | 0.418 | 0.683 | 0.789 | WOE
Table 9. Performance Evaluation (PE) = Accuracy.
DS | LASSO | RR | EN | GBM | LDA | MDA | JMI | NNMF | eFES | Winner
11 | 0.662 | 0.572 | 0.723 | 0.882 | 0.704 | 0.772 | 0.663 | 0.823 | 0.801 | GBM
12 | 0.606 | 0.623 | 0.221 | 0.803 | 0.772 | 0.828 | 0.920 | 0.613 | 0.934 | eFES
13 | 0.512 | 0.691 | 0.378 | 0.723 | 0.834 | 0.583 | 0.612 | 0.593 | 0.860 | eFES
14 | 0.731 | 0.612 | 0.143 | 0.703 | 0.234 | 0.524 | 0.683 | 0.444 | 0.792 | eFES
15 | 0.771 | 0.745 | 0.123 | 0.856 | 0.803 | 0.890 | 0.583 | 0.798 | 0.812 | MDA
16 | 0.924 | 0.703 | 0.426 | 0.323 | 0.866 | 0.597 | 0.421 | 0.231 | 0.690 | LASSO
17 | 0.832 | 0.791 | 0.484 | 0.428 | 0.792 | 0.890 | 0.792 | 0.166 | 0.942 | eFES
18 | 0.883 | 0.723 | 0.923 | 0.573 | 0.723 | 0.748 | 0.842 | 0.772 | 0.793 | EN
19 | 0.811 | 0.596 | 0.573 | 0.803 | 0.436 | 0.800 | 0.723 | 0.724 | 0.942 | eFES
20 | 0.698 | 0.582 | 0.590 | 0.777 | 0.494 | 0.784 | 0.683 | 0.682 | 0.825 | eFES
Table 10. Performance Evaluation (PE) = Error, ideally lower values are desired.
DS | IG | CS | PC | ANOVA | WOE | RFE | SFS | US | PCA | RF | eFES | Winner
1 | 0.254 | 0.443 | 0.623 | 0.234 | 0.112 | 0.213 | 0.487 | 0.126 | 0.111 | 0.173 | 0.223 | PCA
2 | 0.231 | 0.193 | 0.423 | 0.278 | 0.321 | 0.183 | 0.215 | 0.193 | 0.213 | 0.213 | 0.201 | RFE
3 | 0.593 | 0.318 | 0.283 | 0.318 | 0.294 | 0.143 | 0.368 | 0.216 | 0.172 | 0.229 | 0.045 | eFES
4 | 0.443 | 0.244 | 0.342 | 0.087 | 0.221 | 0.193 | 0.125 | 0.259 | 0.193 | 0.281 | 0.112 | ANOVA
5 | 0.183 | 0.289 | 0.124 | 0.213 | 0.084 | 0.426 | 0.258 | 0.442 | 0.145 | 0.342 | 0.014 | eFES
6 | 0.392 | 0.173 | 0.192 | 0.282 | 0.103 | 0.083 | 0.159 | 0.044 | 0.039 | 0.293 | 0.023 | eFES
7 | 0.361 | 0.111 | 0.009 | 0.045 | 0.115 | 0.063 | 0.193 | 0.310 | 0.135 | 0.183 | 0.216 | PC
8 | 0.183 | 0.325 | 0.289 | 0.183 | 0.183 | 0.222 | 0.329 | 0.331 | 0.173 | 0.312 | 0.128 | eFES
9 | 0.498 | 0.310 | 0.423 | 0.192 | 0.435 | 0.215 | 0.229 | 0.283 | 0.132 | 0.073 | 0.024 | eFES
10 | 0.389 | 0.300 | 0.528 | 0.216 | 0.392 | 0.376 | 0.402 | 0.194 | 0.081 | 0.082 | 0.006 | eFES
Table 11. Performance Evaluation (PE) = Error.
DS | LASSO | RR | EN | GBM | LDA | MDA | JMI | NNMF | eFES | Winner
11 | 0.092 | 0.175 | 0.134 | 0.097 | 0.281 | 0.145 | 0.113 | 0.130 | 0.073 | eFES
12 | 0.125 | 0.179 | 0.111 | 0.173 | 0.173 | 0.100 | 0.193 | 0.193 | 0.034 | eFES
13 | 0.214 | 0.231 | 0.190 | 0.239 | 0.151 | 0.182 | 0.231 | 0.210 | 0.111 | eFES
14 | 0.163 | 0.200 | 0.138 | 0.166 | 0.088 | 0.163 | 0.009 | 0.003 | 0.183 | NNMF
15 | 0.193 | 0.193 | 0.083 | 0.129 | 0.210 | 0.219 | 0.122 | 0.122 | 0.178 | EN
16 | 0.236 | 0.437 | 0.238 | 0.321 | 0.134 | 0.110 | 0.177 | 0.191 | 0.088 | eFES
17 | 0.173 | 0.432 | 0.110 | 0.146 | 0.443 | 0.325 | 0.212 | 0.253 | 0.134 | EN
18 | 0.113 | 0.267 | 0.193 | 0.191 | 0.392 | 0.283 | 0.154 | 0.099 | 0.004 | eFES
19 | 0.172 | 0.167 | 0.183 | 0.219 | 0.108 | 0.200 | 0.121 | 0.111 | 0.312 | LDA
20 | 0.283 | 0.045 | 0.128 | 0.183 | 0.214 | 0.204 | 0.231 | 0.131 | 0.021 | eFES
Table 12. Performance Evaluation (PE) = Precision Score, ideally higher values are desired.
DS | IG | CS | PC | ANOVA | WOE | RFE | SFS | US | PCA | eFES | Winner
1 | 0.677 | 0.899 | 0.961 | 0.520 | 0.755 | 0.816 | 0.820 | 0.639 | 0.792 | 0.723 | CS
2 | 0.936 | 0.755 | 0.553 | 0.600 | 0.522 | 0.690 | 0.776 | 0.764 | 0.841 | 0.802 | IG
3 | 0.861 | 0.874 | 0.779 | 0.834 | 0.647 | 0.844 | 0.677 | 0.907 | 0.744 | 0.689 | CS
4 | 0.545 | 0.603 | 0.882 | 0.850 | 0.725 | 0.637 | 0.887 | 0.554 | 0.754 | 0.956 | eFES
5 | 0.767 | 0.584 | 0.894 | 0.861 | 0.761 | 0.915 | 0.753 | 0.513 | 0.909 | 0.932 | eFES
6 | 0.753 | 0.557 | 0.664 | 0.707 | 0.706 | 0.732 | 0.622 | 0.714 | 0.804 | 0.762 | PCA
7 | 0.814 | 0.585 | 0.865 | 0.667 | 0.790 | 0.620 | 0.781 | 0.773 | 0.933 | 0.903 | PCA
8 | 0.546 | 0.706 | 0.852 | 0.902 | 0.619 | 0.710 | 0.732 | 0.738 | 0.638 | 0.967 | eFES
9 | 0.859 | 0.760 | 0.610 | 0.627 | 0.617 | 0.673 | 0.591 | 0.803 | 0.575 | 0.992 | eFES
10 | 0.698 | 0.710 | 0.702 | 0.674 | 0.821 | 0.691 | 0.503 | 0.781 | 0.746 | 0.710 | US
Table 13. Performance Evaluation (PE) = Precision Score.
DS | RF | LASSO | RR | EN | GBM | LDA | MDA | JMI | NNMF | eFES | Winner
11 | 0.733 | 0.763 | 0.824 | 0.473 | 0.610 | 0.786 | 0.522 | 0.731 | 0.862 | 0.893 | eFES
12 | 0.838 | 0.864 | 0.549 | 0.772 | 0.584 | 0.910 | 0.760 | 0.706 | 0.631 | 0.845 | LDA
13 | 0.611 | 0.964 | 0.928 | 0.781 | 0.565 | 0.703 | 0.550 | 0.827 | 0.908 | 0.923 | LASSO
14 | 0.905 | 0.923 | 0.754 | 0.807 | 0.643 | 0.670 | 0.605 | 0.531 | 0.650 | 0.982 | eFES
15 | 0.640 | 0.601 | 0.820 | 0.950 | 0.512 | 0.948 | 0.827 | 0.786 | 0.662 | 0.734 | EN
16 | 0.530 | 0.690 | 0.621 | 0.622 | 0.808 | 0.934 | 0.630 | 0.537 | 0.931 | 0.956 | eFES
17 | 0.547 | 0.825 | 0.512 | 0.711 | 0.740 | 0.877 | 0.766 | 0.697 | 0.561 | 0.893 | eFES
18 | 0.632 | 0.642 | 0.891 | 0.670 | 0.864 | 0.665 | 0.774 | 0.902 | 0.702 | 0.962 | eFES
19 | 0.728 | 0.671 | 0.720 | 0.726 | 0.743 | 0.582 | 0.550 | 0.781 | 0.631 | 0.704 | JMI
20 | 0.836 | 0.746 | 0.574 | 0.585 | 0.979 | 0.872 | 0.758 | 0.941 | 0.952 | 0.942 | GBM
Table 14. Performance Evaluation (PE) = C-Index, ideally lower values are desired.
DS | IG | CS | PC | ANOVA | WOE | RFE | SFS | US | PCA | eFES | Winner
1 | 0.412 | 0.333 | 0.236 | 0.294 | 0.226 | 0.062 | 0.421 | 0.427 | 0.215 | 0.118 | eFES
2 | 0.319 | 0.256 | 0.393 | 0.106 | 0.433 | 0.078 | 0.361 | 0.128 | 0.235 | 0.056 | eFES
3 | 0.207 | 0.251 | 0.271 | 0.118 | 0.134 | 0.307 | 0.222 | 0.338 | 0.211 | 0.312 | ANOVA
4 | 0.523 | 0.220 | 0.058 | 0.052 | 0.203 | 0.325 | 0.061 | 0.439 | 0.040 | 0.189 | PCA
5 | 0.134 | 0.534 | 0.476 | 0.137 | 0.144 | 0.387 | 0.199 | 0.114 | 0.105 | 0.210 | PCA
6 | 0.627 | 0.425 | 0.285 | 0.243 | 0.448 | 0.274 | 0.488 | 0.186 | 0.181 | 0.034 | eFES
7 | 0.113 | 0.193 | 0.152 | 0.498 | 0.200 | 0.036 | 0.025 | 0.149 | 0.071 | 0.092 | SFS
8 | 0.167 | 0.273 | 0.410 | 0.128 | 0.105 | 0.435 | 0.139 | 0.193 | 0.148 | 0.192 | WOE
9 | 0.291 | 0.221 | 0.096 | 0.291 | 0.326 | 0.448 | 0.161 | 0.235 | 0.211 | 0.073 | eFES
10 | 0.093 | 0.293 | 0.407 | 0.488 | 0.200 | 0.179 | 0.341 | 0.472 | 0.040 | 0.002 | eFES
Table 15. Performance Evaluation (PE) = C-Index.
DS | RF | LASSO | RR | EN | GBM | LDA | MDA | JMI | NNMF | eFES | Winner
11 | 0.054 | 0.314 | 0.314 | 0.036 | 0.150 | 0.219 | 0.201 | 0.190 | 0.019 | 0.112 | NNMF
12 | 0.268 | 0.217 | 0.293 | 0.252 | 0.552 | 0.209 | 0.318 | 0.394 | 0.219 | 0.243 | LDA
13 | 0.542 | 0.325 | 0.320 | 0.327 | 0.346 | 0.245 | 0.249 | 0.322 | 0.403 | 0.296 | LDA
14 | 0.282 | 0.141 | 0.251 | 0.132 | 0.136 | 0.360 | 0.217 | 0.224 | 0.232 | 0.106 | eFES
15 | 0.043 | 0.060 | 0.210 | 0.021 | 0.093 | 0.264 | 0.091 | 0.247 | 0.136 | 0.129 | RF
16 | 0.060 | 0.053 | 0.200 | 0.100 | 0.055 | 0.252 | 0.155 | 0.056 | 0.078 | 0.031 | eFES
17 | 0.200 | 0.053 | 0.331 | 0.140 | 0.040 | 0.107 | 0.216 | 0.335 | 0.247 | 0.013 | eFES
18 | 0.327 | 0.233 | 0.258 | 0.295 | 0.290 | 0.346 | 0.334 | 0.378 | 0.329 | 0.297 | LASSO
19 | 0.094 | 0.196 | 0.312 | 0.309 | 0.066 | 0.216 | 0.128 | 0.164 | 0.258 | 0.032 | eFES
20 | 0.073 | 0.263 | 0.204 | 0.064 | 0.053 | 0.206 | 0.010 | 0.239 | 0.047 | 0.024 | MDA
Table 16. Performance Evaluation (PE) = AUC, ideally higher values are desired.
DS | IG | CS | PC | ANOVA | WOE | RFE | SFS | US | PCA | eFES | Winner
1 | 0.569 | 0.808 | 0.739 | 0.633 | 0.848 | 0.563 | 0.518 | 0.540 | 0.874 | 0.810 | PCA
2 | 0.513 | 0.796 | 0.800 | 0.643 | 0.610 | 0.659 | 0.618 | 0.664 | 0.589 | 0.762 | CS
3 | 0.784 | 0.636 | 0.781 | 0.589 | 0.499 | 0.585 | 0.539 | 0.858 | 0.717 | 0.96 | eFES
4 | 0.592 | 0.834 | 0.498 | 0.788 | 0.789 | 0.713 | 0.911 | 0.830 | 0.645 | 0.976 | eFES
5 | 0.655 | 0.698 | 0.805 | 0.504 | 0.880 | 0.574 | 0.638 | 0.885 | 0.742 | 0.699 | WOE
6 | 0.590 | 0.741 | 0.791 | 0.825 | 0.654 | 0.826 | 0.698 | 0.679 | 0.962 | 0.892 | PCA
7 | 0.802 | 0.626 | 0.680 | 0.510 | 0.896 | 0.745 | 0.646 | 0.735 | 0.974 | 0.740 | PCA
8 | 0.805 | 0.560 | 0.550 | 0.826 | 0.609 | 0.812 | 0.659 | 0.704 | 0.814 | 0.894 | eFES
9 | 0.642 | 0.802 | 0.769 | 0.891 | 0.504 | 0.482 | 0.629 | 0.830 | 0.734 | 0.836 | ANOVA
10 | 0.872 | 0.898 | 0.858 | 0.785 | 0.921 | 0.573 | 0.831 | 0.754 | 0.868 | 0.971 | eFES
Table 17. Performance Evaluation (PE) = AUC.
DS | RF | LASSO | RR | EN | GBM | LDA | MDA | JMI | NNMF | eFES | Winner
11 | 0.725 | 0.835 | 0.889 | 0.751 | 0.545 | 0.706 | 0.676 | 0.562 | 0.518 | 0.774 | RR
12 | 0.889 | 0.819 | 0.532 | 0.555 | 0.890 | 0.751 | 0.946 | 0.688 | 0.778 | 0.903 | MDA
13 | 0.568 | 0.835 | 0.520 | 0.525 | 0.502 | 0.764 | 0.605 | 0.651 | 0.487 | 0.952 | eFES
14 | 0.780 | 0.728 | 0.606 | 0.870 | 0.792 | 0.545 | 0.553 | 0.855 | 0.990 | 0.962 | NNMF
15 | 0.602 | 0.615 | 0.833 | 0.700 | 0.804 | 0.493 | 0.645 | 0.616 | 0.899 | 0.867 | NNMF
16 | 0.736 | 0.649 | 0.589 | 0.665 | 0.848 | 0.847 | 0.905 | 0.621 | 0.897 | 0.952 | eFES
17 | 0.541 | 0.711 | 0.777 | 0.511 | 0.868 | 0.884 | 0.691 | 0.904 | 0.665 | 0.962 | eFES
18 | 0.796 | 0.525 | 0.768 | 0.762 | 0.755 | 0.513 | 0.759 | 0.910 | 0.599 | 0.852 | eFES
19 | 0.873 | 0.481 | 0.606 | 0.639 | 0.558 | 0.575 | 0.783 | 0.842 | 0.675 | 0.820 | RF
20 | 0.860 | 0.365 | 0.893 | 0.603 | 0.893 | 0.840 | 0.829 | 0.646 | 0.496 | 0.824 | GBM
Table 18. Performance Evaluation (PE) = Cohen's Kappa, ideally higher values are desired.
DS | IG | CS | PC | ANOVA | WOE | RFE | SFS | US | PCA | eFES | Winner
1 | 0.666 | 0.816 | 0.913 | 0.621 | 0.206 | 0.656 | 0.930 | 0.978 | 0.586 | 0.912 | US
2 | 0.762 | 0.754 | 0.502 | 0.926 | 0.959 | 0.774 | 0.915 | 0.566 | 0.875 | 0.925 | WOE
3 | 0.921 | 0.207 | 0.691 | 0.757 | 0.920 | 0.520 | 0.846 | 0.932 | 0.758 | 0.623 | US
4 | 0.693 | 0.542 | 0.673 | 0.500 | 0.765 | 0.924 | 0.647 | 0.501 | 0.824 | 0.957 | eFES
5 | 0.773 | 0.533 | 0.775 | 0.615 | 0.814 | 0.535 | 0.682 | 0.536 | 0.878 | 0.856 | PCA
6 | 0.685 | 0.910 | 0.568 | 0.606 | 0.698 | 0.831 | 0.646 | 0.902 | 0.851 | 0.945 | eFES
7 | 0.635 | 0.716 | 0.676 | 0.793 | 0.593 | 0.802 | 0.843 | 0.671 | 0.930 | 0.991 | eFES
8 | 0.667 | 0.877 | 0.918 | 0.751 | 0.854 | 0.930 | 0.794 | 0.527 | 0.936 | 0.875 | PCA
9 | 0.897 | 0.644 | 0.454 | 0.517 | 0.762 | 0.802 | 0.685 | 0.865 | 0.650 | 0.834 | IG
10 | 0.599 | 0.824 | 0.803 | 0.802 | 0.827 | 0.875 | 0.933 | 0.851 | 0.724 | 0.925 | eFES
Table 19. Performance Evaluation (PE) = Cohen's Kappa.
DS | RF | LASSO | RR | EN | GBM | LDA | MDA | JMI | NNMF | eFES | Winner
11 | 0.892 | 0.929 | 0.819 | 0.874 | 0.537 | 0.662 | 0.833 | 0.581 | 0.857 | 0.983 | eFES
12 | 0.954 | 0.660 | 0.944 | 0.489 | 0.582 | 0.869 | 0.753 | 0.786 | 0.771 | 0.973 | eFES
13 | 0.576 | 0.952 | 0.686 | 0.588 | 0.744 | 0.712 | 0.658 | 0.927 | 0.671 | 0.910 | LASSO
14 | 0.519 | 0.780 | 0.505 | 0.850 | 0.603 | 0.731 | 0.942 | 0.975 | 0.958 | 0.846 | JMI
15 | 0.985 | 0.846 | 0.903 | 0.591 | 0.584 | 0.750 | 0.617 | 0.945 | 0.892 | 0.904 | RF
16 | 0.786 | 0.804 | 0.605 | 0.673 | 0.814 | 0.635 | 0.909 | 0.573 | 0.732 | 0.973 | eFES
17 | 0.827 | 0.567 | 0.814 | 0.772 | 0.867 | 0.890 | 0.670 | 0.771 | 0.763 | 0.734 | LDA
18 | 0.751 | 0.733 | 0.820 | 0.813 | 0.760 | 0.637 | 0.871 | 0.739 | 0.867 | 0.923 | eFES
19 | 0.698 | 0.645 | 0.636 | 0.801 | 0.727 | 0.886 | 0.969 | 0.954 | 0.781 | 0.845 | MDA
20 | 0.583 | 0.646 | 0.795 | 0.930 | 0.953 | 0.523 | 0.681 | 0.565 | 0.524 | 0.578 | EN
