Article

Impact of Imbalanced Datasets Preprocessing in the Performance of Associative Classifiers

by Adolfo Rangel-Díaz-de-la-Vega 1, Yenny Villuendas-Rey 2,*, Cornelio Yáñez-Márquez 1,*, Oscar Camacho-Nieto 2 and Itzamá López-Yáñez 2

1 Centro de Investigación en Computación del Instituto Politécnico Nacional, Ciudad de Mexico 07700, Mexico
2 Centro de Innovación y Desarrollo Tecnológico en Cómputo del Instituto Politécnico Nacional, Ciudad de Mexico 07700, Mexico
* Authors to whom correspondence should be addressed.
Appl. Sci. 2020, 10(8), 2779; https://doi.org/10.3390/app10082779
Submission received: 26 February 2020 / Revised: 2 April 2020 / Accepted: 9 April 2020 / Published: 16 April 2020

Abstract: In this paper, an experimental study was carried out to determine the influence of imbalanced dataset preprocessing on the performance of associative classifiers, in order to find better computational solutions to the problem of credit scoring. To do this, six undersampling algorithms, six oversampling algorithms and four hybrid algorithms were evaluated on 13 imbalanced datasets related to credit scoring. Then, the performance of four associative classifiers was analyzed. The experiments carried out allowed us to determine which sampling algorithms had the best results, as well as their impact on the associative classifiers evaluated. Accordingly, we determined that the Hybrid Associative Classifier with Translation, the Extended Gamma Associative Classifier and the Naïve Associative Classifier do not improve their performance when sampling algorithms are used to balance the credit data. On the other hand, the Smallest Normalized Difference Associative Memory classifier benefited from oversampling and hybrid algorithms.

1. Introduction

Credit scoring is a two-class classification problem (whether or not to grant credit to the applicant). This problem is imbalanced by nature because, in practice, more credits are granted than rejected. However, the classification costs are not the same for both classes, due to the inner nature of credit assignment [1,2]. For example, if a potentially good applicant is denied credit, the financial institution loses a potential client. On the other hand, if a bad applicant is granted credit, the financial institution suffers monetary losses, and possibly the legal expenses required to recover the money invested.
That is why the class of greatest interest in this phenomenon is the detection of potentially bad applicants, who should not be granted credit [3]. Paradoxically, this class of greatest interest is the minority class, which adds complexity for those seeking solutions to the credit scoring problem in the context of Computational Intelligence [4].
In the recent scientific literature, there is a wide variety of pattern classification algorithms applied in a wide range of applications, including Deep Neural Networks [5,6], models that show good performance. Regarding the topic of our research, it is possible to find papers reporting attempts to solve the credit scoring problem. Various supervised classification models have been used in these investigations; the use of Support Vector Machines [7,8,9], Artificial Neural Networks [10,11,12] and Classifier Ensembles [13,14,15,16], among others [17,18,19], stands out. Some of the experimental comparisons made to determine the performance of classifiers for credit assignment [20,21,22,23] exhibit, in our opinion, certain problems that prevent generalizing the published results.
The main task addressed in this paper is to overcome two problems [24]. On the one hand, existing studies incorporate few datasets, and those datasets are often neither public nor available for use; in addition, there are almost no datasets common to the different investigations. On the other hand, in our survey of the state of the art, we observed that if a research group has used a certain supervised classifier, other investigations tend not to take it into account, using other supervised classifiers instead.
The No Free Lunch Theorems [25] state that there is no superiority of one classifier over others, over all datasets and all performance measures. However, recent studies point to the existence of good performance of associative classifiers in solving problems of supervised classification of the financial field [26].
It is a fact known by the scientific community that, on numerous occasions, the preprocessing of the data contributes to the improvement of the performance of certain supervised classifiers; in particular, when the datasets present imbalance between classes [27]. Several investigations have been reported in the literature that have been carried out in order to determine the impact of data preprocessing in improving solutions to the problem of granting credit [28]. In particular, the computational problem related to the selection of instances (applicants) [2] has aroused great interest in the scientific community, so that in recent years the emphasis has been placed on the study of instance selection techniques for imbalanced data [1].
In this paper, we address two challenges: first, in the comparative studies reviewed [20,21,22,23] there is no consensus on the best preprocessing techniques for the different classifiers in credit assignment; second, and most relevant, to the best of our knowledge there is no scientific research assessing the impact of instance sampling on the performance of associative classifiers. Addressing these gaps justifies this research.
The aim of this paper is to successfully attack the two problems raised in the previous paragraph. Therefore, this research carries out an extensive experimental study to assess the impact of instance selection by sampling on the performance of associative classifiers for credit scoring.

2. Previous Works

2.1. Credit Scoring

Credit scoring is one of the main income sources of financial institutions. Therefore, not having the necessary tools for customer segmentation can cause them to go bankrupt, due to the high default rate of their customers. That is why intelligent credit granting systems (customer segmentation) are increasingly required to ensure, with high probability, that the future borrower will be able to meet their credit obligations, using intelligent models that facilitate and improve the approval process.
Credit scoring [29] refers to any customer credit evaluation system that allows the risk inherent in each credit application to be automatically assessed or parameterized. This risk will depend on the solvency of the client, the type of credit, the terms, and other characteristics of each client. These characteristics will define whether each credit application is approved or rejected.
Credit scoring is, therefore, a classification problem. Given a set of observations belonging to classes known a priori, a set of rules is sought that allows the classification of new observations into two groups: those that, with high probability, will be able to meet their credit obligations, and those that, on the contrary, will fail to meet them.
For this, an analysis of the applicant’s personal characteristics (profession, age, heritage, gender, place of residence, and others) and the characteristics of the operation (destination of credit, percentage financed, rate, term, to mention a few) will have to be carried out, which will allow the system to induce the rules that will subsequently be applied to new applications, thus determining their classification. In any case, the credit scoring models mainly use the client information evaluated and contained in the credit applications or in internal or external sources of information. In general, Credit scoring models assign the future borrower a score (individuals and SMEs) or a rating (Business) [29].
When credit scoring techniques are used in origination (or placement), that is, to resolve credit applications, they are known as reactive or Application Scoring models. Instead, when they are used to manage the loan portfolio, they are known as proactive or Behavioral Scoring models. In the case of models used in credit placement, financial institutions generally determine a cutoff point to decide which applications are accepted (those obtaining a rating higher than the cutoff) and which are not. The cutoff setting does not respond exclusively to risk considerations, but also depends on the percentage of benefits desired by the entity and its ability to manage risk.

2.2. Computational Intelligence Models for Financial Applications

The Computational Intelligence algorithms have been successfully applied in various branches of science and engineering [30]. Regarding the topic of our research, starting in 1968 and as a result of Beaver’s studies (one of the pioneers in the investigation of bankruptcy prediction models for companies) [31], several researchers began working with multivariable models with the objective of determining more precisely which companies were heading for bankruptcy and which were not. In this context, the Z-Score was proposed in 1968 by Altman [32] and has been applied in many companies in the financial sector. For credit scoring, several new techniques have appeared in recent years, namely: Decision Trees [33], Artificial Neural Networks [12], Support Vector Machines [9], Rough Sets [19], Deep Learning [15], and Metaheuristic algorithms [34], among others.
There are several comparative studies assessing the performance of supervised classifiers for credit scoring. Perhaps the first was carried out by Srinivasan and Kim [35], who compared various methodologies and found that Decision Trees outperform Logistic Regression, which in turn yields better results than Discriminant Analysis. In addition, they suggested that the superiority of trees is directly related to the complexity of the data under study.
Other interesting comparative studies are [7,21,22,29,36,37]. In addition, recent studies point to the good performance of associative classifiers in solving supervised classification problems in the financial field [26].

2.3. Data Preprocessing for Financial Applications

One of the first analyses of instance sampling for credit scoring was the one by Greene [38]. In his research paper he addressed the issue of selecting instances for predicting credit card default, analyzed the most common technique used in credit rating, Linear Discriminant Analysis (LDA), and provided alternatives to it.
García et al. [2] conducted an investigation to analyze the impact of the presence of noise and outliers in credit risk data, and established how to improve the information through data preprocessing by filtering. In the research work of López et al. [27], a comparative study was conducted addressing class imbalance through instance preprocessing techniques, cost-sensitive learning and classifier ensemble methods. In addition, they analyzed the impact of the intrinsic characteristics of the data on the classification task, such as small disjuncts, lack of density, overlapping and separability of classes, noise and class boundaries.
Crone and Finlay [39] conducted an empirical study analyzing sample size and class balance. They propose that 1500 to 2000 samples per class are sufficient to build and validate a credit scoring model. Bischl et al. [1] studied different strategies for correcting class imbalance through instance sampling, and noted that in some cases the correction worsened the performance of the classifiers, perhaps due to overfitting of the training sets.
Marqués et al. [3] showed in their experimental results that the use of sampling methods consistently improved the performance obtained with the original (imbalanced) data. In addition, they mentioned that oversampling techniques worked better than any undersampling approach. Dal Pozzolo et al. [40] analyzed when undersampling is effective for imbalanced data, and proposed that its effectiveness depends on the degree of imbalance and the non-separability of the classes.
García et al. [20] explored the effects of sample types on the predictive performance of classifier ensembles for credit risk and corporate bankruptcy prediction problems. They focused on characterizing positive (risky) instances, and showed that there is a correlation between the classifier ensembles performance and the dominant type of positive instances.
In conclusion, we can affirm that, although several studies have been carried out on the influence of preprocessing financial data, none of them addressed its impact on associative classifiers. The subsequent sections address this important topic.

3. Materials and Methods

This section describes the datasets and associative classifiers used in this investigation. Special emphasis is placed on datasets related to the financial environment, as they constitute a central part of this paper (Section 3.1). Additionally, the operation of the associative classifiers addressed in this research is described in detail in Section 3.2.

3.1. Datasets

This section describes the datasets that will be used to assess the impact of preprocessing financial data on the performance of associative classifiers. Some of these datasets are well known in the literature and serve as a reference, since they are widely used in much of the research work carried out so far.
The used datasets are: Australian Credit Approval (Australian) (https://archive.ics.uci.edu/ml/datasets/Statlog+(Australian+Credit+Approval)), German Credit data (German) (https://archive.ics.uci.edu/ml/datasets/Statlog+(German+Credit+Data)), Japanese Credit Approval (Japanese) (https://archive.ics.uci.edu/ml/datasets/Credit+Approval), Default of credit card clients (Default credit) (https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients), Iranian (Shared personally by Hassan Sabzevari), Polish bankruptcy (Polish_1 to Polish_5) (https://archive.ics.uci.edu/ml/datasets/Polish+companies+bankruptcy+data), The PAKDD 2009 dataset (The PAKDD) (https://www.cs.purdue.edu/commugrate/data/credit_card/), Give me some credit (Give me credit) (https://www.kaggle.com/c/GiveMeSomeCredit/data) and Qualitative bankruptcy data (Qualitative) (https://archive.ics.uci.edu/ml/datasets/qualitative_bankruptcy). These datasets are very interesting, due to the fact that they include both numeric and categorical attributes, and missing data.
As a summary, Table 1 shows a description of the datasets used in this investigation. The abbreviations (Num.) and (Cat.) refer to the number of numerical and categorical attributes, respectively. IR represents the imbalance ratio of the datasets.
As can be seen, 10 of the 13 datasets are imbalanced (with IR > 1.5), six of them have mixed descriptions (numerical and categorical attributes), and eight contain absences of information (missing values). It should be noted that in all cases there are only two classes.
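For reference, the imbalance ratio reported in Table 1 follows the usual definition: the number of majority-class instances divided by the number of minority-class instances. A minimal Python sketch with illustrative counts (not taken from Table 1):

```python
from collections import Counter

def imbalance_ratio(labels):
    """Imbalance ratio: majority-class count divided by minority-class count."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

# Illustrative two-class label vector: 383 negative and 307 positive instances
labels = [0] * 383 + [1] * 307
print(round(imbalance_ratio(labels), 2))   # 1.25
```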

3.2. Associative Classifiers

In this section, the associative classifiers that will be evaluated in this paper are analyzed. In each case, their operation is detailed and a brief reference is made to their main characteristics, as well as to whether or not they have been applied to the financial field.

3.2.1. Hybrid Associative Classifier with Translation

The Hybrid Associative Classifier with Translation (HACT or CHAT by its Spanish acronym) was proposed by Santiago-Montero as a classification model [41], and has been used successfully in the financial field [42].
The HACT has two phases: association (training), and recovery (classification). This classifier assumes that the dataset is complete (that is, there are no absences of information in the data), and that it is described only by numerical attributes. In addition, it assumes that classes are represented by consecutive integers.
Let the dataset contain $p$ association pairs of the form $(x^{\mu}, y^{\mu})$, where $\mu = 1, 2, \ldots, p$, $x^{\mu} \in \mathbb{R}^{n}$ and $y^{\mu} \in \mathbb{R}^{m}$. Each instance $x^{\mu}$ is composed of $n$ components, where $x_{j}^{\mu}$ represents the $j$-th component of that instance. The classes $y^{\mu}$ take the form of binary vectors of size $m$, where $y_{i}^{\mu}$ represents the $i$-th component of the vector.
Before starting the training or association, the HACT classifier performs an axis translation process. To do this, the average of all training instances is calculated, component by component, and then all instances are translated by that average. Let $\hat{x}$ be the average of all instances. Each instance $x^{\mu}$ is translated as $\hat{x}^{\mu} \leftarrow x^{\mu} - \hat{x}$.
After translation, class vectors are formed. To do this, a binary vector $y^{\mu}$ is formed for each instance, and the component corresponding to its class is set to 1. For example, with three classes (1, 2, 3), the vectors corresponding to each class would be [1, 0, 0], [0, 1, 0] and [0, 0, 1], respectively. After the instances have been translated and the classes encoded, the training phase of the HACT model constructs a matrix $W$ so that when an input pattern $\hat{x}^{\mu}$ is presented, the stored pattern $y^{\mu}$ associated with it is recovered. This process comprises two basic steps:
  • For each association $(\hat{x}^{\mu}, y^{\mu})$ in the training set, compute the outer product $y^{\mu} (\hat{x}^{\mu})^{T}$, where $(\hat{x}^{\mu})^{T}$ is the transpose of the input vector $\hat{x}^{\mu}$.
  • Sum the $p$ outer products to obtain the matrix $W = \alpha \sum_{\mu=1}^{p} y^{\mu} (\hat{x}^{\mu})^{T}$, where $\alpha$ is a normalization parameter (usually $\alpha = 1/p$). Each component of the matrix $W$ is defined as $w_{i,j} = \alpha \sum_{\mu=1}^{p} y_{i}^{\mu} \hat{x}_{j}^{\mu}$.
On the other hand, the classification phase of HACT consists of two phases:
  • Translate the pattern to classify, $o$, by the average of the training patterns, as $\hat{o} \leftarrow o - \hat{x}$.
  • Determine the components of the output vector (class) for the translated pattern $\hat{o}$. To do so, set $y_{i}^{o} = 1$ if $\sum_{j=1}^{n} w_{i,j} \hat{o}_{j} = \max_{h=1,\ldots,m} \sum_{j=1}^{n} w_{h,j} \hat{o}_{j}$, and $y_{i}^{o} = 0$ otherwise. Thus, the class $k$ will be returned if and only if the obtained vector has value 1 in its $k$-th component, and 0 in the remaining components.
Although the HACT algorithm is unable to handle qualitative data or missing information, it has obtained good results in the financial field [42].
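To make the two phases concrete, the following Python sketch illustrates HACT training and recall as described above. It is our own simplification for purely numerical data with integer class labels 0, …, c−1, not a reference implementation:

```python
import numpy as np

def hact_train(X, y, n_classes, alpha=None):
    """Training sketch: translate instances by their mean and accumulate the
    outer products of one-hot class vectors with the translated instances."""
    X = np.asarray(X, dtype=float)
    x_mean = X.mean(axis=0)                  # axis translation vector (x hat)
    Xt = X - x_mean                          # translated instances
    Y = np.eye(n_classes)[np.asarray(y)]     # one-hot class vectors y^mu
    if alpha is None:
        alpha = 1.0 / len(X)                 # normalization parameter
    W = alpha * Y.T @ Xt                     # W = alpha * sum_mu y^mu (x^mu)^T
    return W, x_mean

def hact_classify(W, x_mean, o):
    """Recall sketch: translate the query and return the class (row of W)
    with the maximum response."""
    o_hat = np.asarray(o, dtype=float) - x_mean
    return int(np.argmax(W @ o_hat))
```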

3.2.2. Extended Gamma Classifier

The Gamma Associative Classifier was proposed by López-Yáñez as a prediction model [43], and has been used successfully in supervised classification tasks [44]. In its original version, this algorithm was unable to handle qualitative data, or absence of information in the data.
That is why an extension of this classifier was made, to overcome these limitations [45]. This extension modifies the way in which the similarity between instances is calculated, and allows the direct application of this classification model to databases with mixed and incomplete attributes, very common in the financial field.
Let $X$ and $P$ be the training and testing sets, respectively, from a universe $U$, where each instance $x \in X$, $p \in P$ is described by a set of attributes $A = \{A_1, A_2, \ldots, A_m\}$; each attribute $A_i$ has a definition domain $dom(A_i)$, which can be numeric or categorical.
As a particular case, if the value of attribute $A_i$ in an instance $x$ is unknown, it is considered missing data and denoted as $x_i = ?$. It is assumed that there is a set of classes $K = \{K_1, \ldots, K_c\}$ associated with the training instances.
The Extended Gamma Associative Classifier (EG) consists of two phases: training and classification. The training phase of this classifier begins with the storage of the training set, and includes the subsequent calculation of various parameters (Table 2).
In the classification phase, EG uses an iterative process based on the calculation of the average similarity of the instance to be classified to each class. To analyze the similarity between test and training instances, the extended generalized similarity $\gamma_{ext}$ is used. After obtaining the similarities, the average generalized similarity of the test pattern to each class $k_l \in K$ (Equation (1)) is calculated. Let $p \in P$ be an instance to classify and let $p_j$ be the value of its $j$-th attribute.
The number of instances belonging to class $k_l$ in the training set is given by $n$, $x_j^i$ represents the value of the $j$-th attribute of the $i$-th instance of class $k_l$, and $w_j$ represents the weight of the $j$-th attribute.
$$c_{k_l} = \frac{\sum_{i=1}^{n} \sum_{j=1}^{m} w_j \, \gamma_{ext}(x_j^i, p_j)}{n} \qquad (1)$$
$$\gamma_{ext}(x_j, y_j, \theta) = \begin{cases} \gamma_{num}(x_j, y_j, \theta) & \text{if the } j\text{-th attribute is numeric} \\ \gamma_{cat}(x_j, y_j) & \text{if the } j\text{-th attribute is categorical} \\ \gamma_{miss}(x_j, y_j) & \text{if } x_j \text{ or } y_j \text{ are missing} \end{cases}$$
where $\gamma_{num}(x_j, y_j, \theta) = \begin{cases} 1 & \text{if } |x_j - y_j| \le \theta \\ 0 & \text{otherwise} \end{cases}$, $\gamma_{cat}(x_j, y_j) = \begin{cases} 1 & \text{if } x_j = y_j \\ 0 & \text{otherwise} \end{cases}$ and $\gamma_{miss}(x_j, y_j) = \begin{cases} 1 & \text{if } x_j = y_j = ? \\ 0 & \text{otherwise} \end{cases}$
If a single maximum is found among all the values of $c_{k_l}$, the process ends. If not, the values of the stop and pause parameters, as well as the value of the $\theta$ parameter, are taken into account in an iterative process. Further details of the functioning of this classifier can be found in the original paper [45].
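As an illustration of the similarity computation in Equation (1), the following Python sketch computes the average extended similarity of a test instance to one class. It is a simplification under our reading of the formulas above: the "?" missing-value marker and the numeric_mask argument are our own conventions, and the iterative stop/pause procedure of the full classifier is omitted:

```python
MISSING = "?"

def gamma_ext(x_j, y_j, theta, is_numeric):
    """Extended gamma similarity for a single attribute (Equation (1))."""
    if x_j == MISSING or y_j == MISSING:
        return 1.0 if x_j == y_j else 0.0        # gamma_miss: 1 only if both are missing
    if is_numeric:
        return 1.0 if abs(float(x_j) - float(y_j)) <= theta else 0.0
    return 1.0 if x_j == y_j else 0.0            # categorical attribute

def class_similarity(class_instances, p, weights, theta, numeric_mask):
    """Average weighted similarity c_{k_l} of test instance p to one class."""
    total = 0.0
    for x in class_instances:                    # instances of class k_l
        total += sum(w * gamma_ext(x_j, p_j, theta, num)
                     for w, x_j, p_j, num in zip(weights, x, p, numeric_mask))
    return total / len(class_instances)
```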
The Extended Gamma Associative Classifier has been successfully applied to solving social problems with mixed and incomplete data (particularly in estimating the voting intentions of Mexican citizens [45]). However, to the best of the authors’ knowledge, this classifier has not been applied to the financial field.

3.2.3. Naïve Associative Classifier

The Naïve Associative Classifier (NAC) was recently proposed to solve classification problems in the financial field [26]. This classifier surpassed several state-of-the-art classifiers in this type of problem; in addition, it has its own methodology to estimate the weights of its attributes [46].
The NAC directly handles mixed and incomplete data, and is also transportable and transparent [26]. In its training phase, the NAC stores the training set and calculates, for each numerical attribute, the standard deviation.
Let $p \in P$ be an instance to classify and let $p_j$ be the value of its $j$-th attribute. To analyze the similarity between the test instance and the training instances, two operators are used: the Mixed and Incomplete Data Similarity Operator (MIDSO) and the total similarity operator $s_t$ (Equation (2)). After obtaining the similarities, the average generalized similarity of the test instance to each class $k_l$, denoted as $s_l(p)$ (Equation (6)), is calculated.
Again, the number of instances belonging to class $k_l$ in the training set is given by $n$, $x_j^i$ represents the value of the $j$-th attribute of the $i$-th instance of class $k_l$, and $w_j$ represents the weight of the $j$-th attribute.
$$s_t(x, y) = \sum_{i=1}^{m} w_i \, MIDSO(x, y, A_i) \qquad (2)$$
$$MIDSO(x, y, A_i) = \begin{cases} s_c(x, y, A_i) & \text{if } A_i \text{ is categorical} \\ s_n(x, y, A_i) & \text{if } A_i \text{ is numeric} \end{cases} \qquad (3)$$
$$s_c(x, y, A_i) = \begin{cases} 0 & \text{if } x_i \neq y_i \lor x_i = ? \lor y_i = ? \\ 1 & \text{otherwise} \end{cases} \qquad (4)$$
$$s_n(x, y, A_i) = \begin{cases} 0 & \text{if } |x_i - y_i| > \sigma_i \lor x_i = ? \lor y_i = ? \\ 1 & \text{otherwise} \end{cases} \qquad (5)$$
$$s_l(p) = \frac{1}{n} \sum_{y \in k_l} s_t(p, y) \qquad (6)$$
If a single maximum is found among all the values of $s_l(p)$, the process ends. If not, any of the classes with maximum similarity is assigned.
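A minimal Python sketch of NAC recall under the equations above follows; sigmas are the per-attribute standard deviations computed in the training phase, while the "?" marker and numeric_mask are our own conventions for mixed and incomplete data:

```python
MISSING = "?"

def midso(x_i, y_i, sigma_i, is_numeric):
    """MIDSO operator for a single attribute (Equations (3)-(5))."""
    if x_i == MISSING or y_i == MISSING:
        return 0.0
    if is_numeric:
        return 1.0 if abs(float(x_i) - float(y_i)) <= sigma_i else 0.0
    return 1.0 if x_i == y_i else 0.0

def nac_classify(train_by_class, p, weights, sigmas, numeric_mask):
    """Assign the class with the highest average weighted similarity s_l(p)
    (Equations (2) and (6))."""
    best_label, best_score = None, float("-inf")
    for label, instances in train_by_class.items():
        per_instance = [sum(w * midso(x_i, p_i, s, num)
                            for w, x_i, p_i, s, num in zip(weights, x, p, sigmas, numeric_mask))
                        for x in instances]
        score = sum(per_instance) / len(instances)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```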
Although the NAC has been successfully applied to the solution of problems in the financial field, the impact of the data imbalance and its preprocessing in its performance has not been explored.

3.2.4. Smallest Normalized Difference Associative Memory

The Smallest Normalized Difference Associative Memory classifier (SNDAM) is also a newly created classification algorithm within the associative approach. It was proposed by Ramírez-Rubio and collaborators [47], and aims to reduce the limitations of the classic Alpha-Beta associative memories.
This classification model assumes that the training and test data are described by numerical attributes, without missing information. The SNDAM model is based on two operators: the generalized alpha operator $\alpha_R$ and the generalized beta operator $\beta_R$, in its MAX ($\check{\beta}_R$) and MIN ($\hat{\beta}_R$) variants. Let $c$ and $d$ be two real numbers; the generalized alpha and beta operators are:
$$\alpha_R(c, d) = c - d + 1 \qquad (7)$$
$$\check{\beta}_R(c, d) = \begin{cases} d - c - 1 & \text{if } c \neq d \\ c & \text{if } c = d \end{cases} \qquad (8)$$
$$\hat{\beta}_R(c, d) = \begin{cases} c - d - 1 & \text{if } c \neq d \\ d & \text{if } c = d \end{cases} \qquad (9)$$
For the training of this classifier, there are two aspects: the behavior as an associative memory type MAX, or as an associative memory type MIN. Depending on this, the SNDAM training phase begins as follows:
For each instance $x \in X$ in the training set, an auto-associative matrix $M_x$ is created using the generalized alpha operator. Each component $m_{x_{i,j}}$, with $i, j \in \{1, \ldots, m\}$, of this matrix is given by:
$$m_{x_{i,j}} = \alpha_R(x_i, x_j) \qquad (10)$$
Subsequently, if it is a MAX-type memory, an association matrix $M$ is created whose components $m\_max_{i,j}$ are calculated according to Equation (11). In the MIN case, the components of the association matrix are created according to Equation (12). In other words, the matrices obtained for each object of the training set are combined into a single matrix, through the maximum and minimum operators, component by component.
$$m\_max_{i,j} = \bigvee_{x \in X} m_{x_{i,j}} \qquad (11)$$
$$m\_min_{i,j} = \bigwedge_{x \in X} m_{x_{i,j}} \qquad (12)$$
After the matrix $M$ is constructed, the maximum values of each of the attributes are calculated, considering the objects in the training set. Each attribute $A_i$ is then associated with a value $MAX_i$.
Let $p$ be a test instance to be recovered from the associative memory. The recovery phase of this model returns an artificial object $z$, described by real-valued attributes. In the recovery phase, there are also two variants: MAX and MIN. In the first case, the components $z\_max_i$ of the artificial object $z$ are obtained using Equation (13), while in the second case Equation (14) is used.
$$z\_max_i = \bigwedge_{j=1}^{m} \hat{\beta}_R(m\_max_{i,j}, p_j) \qquad (13)$$
$$z\_min_i = \bigvee_{j=1}^{m} \check{\beta}_R(m\_min_{i,j}, p_j) \qquad (14)$$
Then, the normalized difference $\delta$ between the recovered instance $z$ and each of the instances of the training set $x \in X$ is calculated. Again, there are two possibilities, MAX and MIN. These differences are calculated as:
$$\delta\_max(x, z) = \sum_{i=1}^{m} \frac{|z\_max_i - x_i|}{MAX_i} \qquad (15)$$
$$\delta\_min(x, z) = \sum_{i=1}^{m} \frac{|z\_min_i - x_i|}{MAX_i} \qquad (16)$$
Finally, the class of the training instance whose normalized difference with the instance to be classified is smallest is assigned.
This classifier has been successfully applied to solving medical problems [47]. However, to the best of the authors’ knowledge, the SNDAM classifier has not been applied to the financial field.
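The following Python sketch outlines the MAX-type variant of SNDAM as reconstructed above. Because the exact forms of the generalized operators are defined in the original paper [47], alpha_r and beta_min are passed in as functions rather than hard-coded; the use of the attribute maxima MAX_i for normalization follows the description above, and strictly positive maxima are assumed:

```python
import numpy as np

def sndam_train_max(X, alpha_r):
    """MAX-type training sketch: one auto-associative matrix per training
    instance via the generalized alpha operator, combined component-wise
    with the maximum; also store the per-attribute maxima MAX_i."""
    X = np.asarray(X, dtype=float)
    per_instance = [alpha_r(x[:, None], x[None, :]) for x in X]   # m x m matrices
    M_max = np.maximum.reduce(per_instance)
    attr_max = X.max(axis=0)                  # MAX_i, assumed strictly positive
    return M_max, attr_max

def sndam_classify_max(M_max, attr_max, X, y, p, beta_min):
    """MAX-type recall sketch: recover an artificial object z with the
    generalized beta operator, then return the class of the training
    instance with the smallest normalized difference to z."""
    p = np.asarray(p, dtype=float)
    z = np.min(beta_min(M_max, p[None, :]), axis=1)               # z_max_i
    diffs = [np.sum(np.abs(z - x) / attr_max) for x in np.asarray(X, dtype=float)]
    return y[int(np.argmin(diffs))]
```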

3.3. Sampling Algorithms for Imbalanced Data

In a large number of papers, novel methods have been proposed to address the problem of imbalance between classes; these are classified into three groups [27]: algorithm-level approaches (in which a new algorithm is created or an existing one is modified), data-level approaches (in which the data are modified in order to lessen the impact of class imbalance on the performance of the classification algorithms), and cost-sensitive classification (which considers different costs with respect to the class distribution).
This section deals with the class balancing algorithms that will be evaluated in the present investigation. All of them belong to the data-level approach to imbalanced classification. First, we address oversampling algorithms, then undersampling algorithms and, finally, hybrid approaches.
It is possible to find several state-of-the-art articles [48,49,50,51,52] in which the preprocessing of datasets is employed to reduce the impact caused by the distribution of classes. In such research, it has been empirically demonstrated that the application of a preprocessing stage to balance the distribution of classes is usually a useful solution to improve the quality of the identification of new instances. Data preprocessing techniques for imbalanced data can be divided into three groups:
1. Oversampling algorithms. These techniques are based on the creation of synthetic instances of the minority class through replication, or on creating new instances based on the existing ones.
2. Undersampling algorithms. These methods are based on the elimination of instances of the majority class.
3. Hybrid algorithms. These methods are a combination of oversampling and undersampling techniques.
Oversampling algorithms seek to match the quantities of instances in each class by oversampling minority classes. In this way, the quantity of instances in these classes will be artificially increased, making all classes have approximately the same number of objects. The techniques for selecting instances for oversampling that will be used for the comparative analysis carried out in this work are listed below (Table 3).
As mentioned earlier, undersampling algorithms seek to match the amounts of instances in each class, by sampling the majority classes [25]. Thus, objects that are considered less relevant are eliminated, so that all classes have approximately the same number of instances. Next, the undersampling algorithms evaluated in the present investigation are listed (Table 4).
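To make the data-level idea concrete, the sketch below shows the two simplest representatives of these two families, random oversampling (ROS) and random undersampling (RUS); it is our own illustration, not the KEEL implementation used in the experiments:

```python
import random
from collections import Counter

def random_oversample(X, y, seed=0):
    """ROS sketch: replicate randomly chosen instances of each non-majority
    class until every class matches the majority-class count."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    X_out, y_out = list(X), list(y)
    for label, count in counts.items():
        pool = [x for x, lbl in zip(X, y) if lbl == label]
        X_out.extend(rng.choice(pool) for _ in range(target - count))
        y_out.extend([label] * (target - count))
    return X_out, y_out

def random_undersample(X, y, seed=0):
    """RUS sketch: keep a random subset of each class of the size of the
    minority class."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = min(counts.values())
    X_out, y_out = [], []
    for label in counts:
        pool = [x for x, lbl in zip(X, y) if lbl == label]
        X_out.extend(rng.sample(pool, target))
        y_out.extend([label] * target)
    return X_out, y_out
```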
Hybrid algorithms use oversampling and undersampling techniques. The hybrid algorithms evaluated in the present investigation are (Table 5):
In this section, we analyzed the datasets to be used, as well as some of the most representative associative classifiers. In addition, we mentioned the sampling algorithms for class balancing that we will use. The next section explains the proposed experimental methodology to assess the impact of preprocessing techniques for imbalanced financial data sampling on the performance of classifiers of the associative approach.

4. Experimental Methodology

This section describes the proposed methodology to assess the impact of imbalanced data preprocessing on associative classifiers. For this, the phases of this methodology are described, as well as its adaptation to the financial environment.
The proposed methodology is organized in eight steps or stages. These stages were defined by taking into account the particularities of the financial environment, the data preprocessing algorithms, and the associative classifiers.
A general description of the first six stages is given below, which will be explained in detail in the following subsections of this paper. Figure 1 shows a graphic representation of the stages of this methodology. Stages seven (execution of the experiments) and eight (analysis of the results) will be addressed in Section 5, as they correspond to the results obtained, as well as their analysis and discussion.

4.1. Dataset Selection

As mentioned earlier, 13 datasets were considered in this investigation, all of them referring to the problem of credit scoring. The number of attributes of these datasets varies between six and 64 attributes, while the number of instances is between 250 and 150,000. On the other hand, 10 of the 13 databases are imbalanced (with IR > 1.5), six of them have mixed descriptions (numerical and categorical attributes), and eight contain absences of information. It should be noted that in all cases there are only two classes. These classes correspond to clients that do not represent a risk to banking institutions, and clients that do represent such risk.

4.2. Validation Methods Selection

There are various methods or techniques to compare the results obtained from classification algorithms. One of these techniques is cross-validation. The pioneer in using cross-validation was Larson [66] in 1931. In his research work, he divided the data into two parts: one sample was used for regression and a second sample was used for prediction. In the 1970s, Stone [67] and Geisser [68] formally developed the concept of cross-validation, a statistical method to divide a dataset into different subsets. Its application is very useful when the number of data is relatively small, due to the complexity or even impossibility of obtaining more. Among the different variants of validation are:
Hold-Out: the complete set of data is taken and divided into two subsets, the first one dedicated to the training phase and the second to the test phase. The partition of the data is done by taking random elements.
K-fold cross validation: the data set is divided into K partitions that give K mutually exclusive subsets. That is, one of the K subsets is used as a test set and the remaining sets are grouped to form the training set. The procedure is repeated K times by exchanging the test set to ensure that the K subsets have been used in the test phase.
5 × 2 cross validation: It is important to highlight the work of Dietterich [69], who compared various evaluation methods. He proposed applying 5 × 2 cross-validation, which consists of repeating a cross-validation five times with K = 2. In each of the five executions, the data are randomly divided into two subsets, one for training and the other for testing. Each training partition is taken as input to the classification algorithm, and the test partitions are used to evaluate the final solution.
Considering the high imbalance of some of the datasets used (for example, Polish_1, with an IR = 24.9), it was decided to use the 5 × 2 cross validation method, to compare the results obtained in the investigation.
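For illustration, a 5 × 2 cross-validation loop can be sketched as follows (assuming scikit-learn, NumPy arrays X and y, and a classifier exposing predict_proba; the actual experiments used the partitions generated by KEEL):

```python
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def five_by_two_cv(clf, X, y):
    """5 x 2 CV sketch: five repetitions of stratified 2-fold cross-validation,
    returning the AUC of each of the 10 train/test splits."""
    scores = []
    for repetition in range(5):
        folds = StratifiedKFold(n_splits=2, shuffle=True, random_state=repetition)
        for train_idx, test_idx in folds.split(X, y):
            clf.fit(X[train_idx], y[train_idx])
            y_score = clf.predict_proba(X[test_idx])[:, 1]
            scores.append(roc_auc_score(y[test_idx], y_score))
    return scores
```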
For this, the datasets were divided using the KEEL software [70], which allows storing, in each case, the partitions obtained. This is a clear advantage, since the different classification algorithms are compared on the same test sets in each case.

4.3. Performance Measure Selection

The evaluation of the performance of supervised classifiers has been a subject of study since the very emergence of classification algorithms. One of the first measures used considers the number of instances of the test set correctly classified with respect to the total number of instances in that set. This measure is known as accuracy or correct classification rate [71].
However, this is not the only way to evaluate the performance of the classifiers. Let us consider a two-class scenario (Positive and Negative), as shown in Figure 2. In this case, a classification algorithm has four possibilities:
1. Correctly classify a positive instance (True Positive, TP)
2. Correctly classify a negative instance (True Negative, TN)
3. Incorrectly classify a positive instance (False Negative, FN)
4. Incorrectly classify a negative instance (False Positive, FP)
It should be noted that the costs of the two types of incorrect classification (False Positives and False Negatives) are not always the same. In the particular case of credit scoring, if clients that represent a risk to the financial institution are considered the positive class, and those that do not represent a risk the negative class, it is easy to deduce that the cost of classifying a risky (positive) client as negative is greater than that of classifying a good (negative) customer as risky. In these scenarios, the aim is, above all, to reduce the number of False Negatives.
This type of situation is aggravated when considering imbalanced datasets, since standard performance measures such as accuracy are not considered adequate [27]. This is due to the bias of these measures towards the majority class, since they do not distinguish between the number of correct classifications of the different classes, which can lead to erroneous conclusions. In this investigation, we will consider the Area under the ROC curve to evaluate the performance of the classifiers after applying data balancing algorithms.
Below, we list some of the most commonly used performance measures [71] for imbalanced dataset scenarios, considering a two-class confusion matrix, as shown in Figure 3.
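As a concrete reference, the sketch below computes the four confusion-matrix counts and two measures that are less biased toward the majority class than accuracy; for a classifier producing crisp labels, the Area under the ROC curve reduces to the average of sensitivity and specificity:

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Counts of the four outcomes of a two-class confusion matrix."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def imbalance_aware_measures(y_true, y_pred, positive=1):
    """Sensitivity, specificity, and the single-point AUC of a crisp classifier."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred, positive)
    tpr = tp / (tp + fn)            # sensitivity: recall of the positive (risky) class
    tnr = tn / (tn + fp)            # specificity: recall of the negative class
    return {"sensitivity": tpr, "specificity": tnr, "auc": (tpr + tnr) / 2}
```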

4.4. Sampling Algorithms Selection

As mentioned earlier, there are numerous algorithms for data balancing. These algorithms are divided into three large groups: undersampling algorithms, oversampling algorithms and hybrid algorithms. In this investigation, the KEEL tool [72] was used to apply these algorithms to the different datasets.
KEEL version 3.0 contains 20 sampling algorithms. Of them, 17 were selected for the experiments, based on their computational efficiency. The Agglomerative Hierarchical Clustering (AHC), Hybrid Preprocessing using SMOTE and Rough Sets Theory (SMOTE-RSB) and Class Purity Maximization (CPM) algorithms were not considered, due to the impossibility of obtaining results within 24 h for some datasets.
We used a personal computer with 2 GB of RAM, the Windows 7 operating system, and a 5th generation Intel Core i3 processor. This computer was not exclusively dedicated to the execution of the experiments; therefore, we cannot analyze the execution time of the algorithms under study.
As some of these algorithms do not handle mixed or incomplete data, for the datasets that contain such data, in this research the missing values were imputed and the categorical attributes were converted to numerical ones. For this, it is possible to use the KEEL software [66] through its “Concept Most Common Attribute Value” and “Min Max ranging” functionalities.
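For readers reproducing this step outside KEEL, an equivalent preprocessing can be sketched with pandas as follows (most-common-value imputation, categorical-to-numerical coding, and min-max scaling to [0, 1]; the column lists are supplied by the user):

```python
import pandas as pd

def impute_and_scale(df, numeric_cols, categorical_cols):
    """Sketch of the preprocessing applied before the sampling algorithms."""
    df = df.copy()
    for col in numeric_cols + categorical_cols:
        df[col] = df[col].fillna(df[col].mode().iloc[0])   # most common attribute value
    for col in categorical_cols:
        df[col] = df[col].astype("category").cat.codes     # categorical -> numerical codes
    for col in numeric_cols:
        lo, hi = df[col].min(), df[col].max()
        df[col] = (df[col] - lo) / (hi - lo) if hi > lo else 0.0
    return df
```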

4.5. Classifiers Selection

From the associative classifiers mentioned, the following algorithms were chosen for conducting the experiments: HACT, NAC, EG and SNDAM. HACT [54] and NAC [23] have previously been applied successfully to the financial field [26,42].
For the execution of the associative classification algorithms, we use the EPIC tool [73,74], currently under development. This tool contains the above-mentioned associative classifiers, is compatible with the dataset files produced after applying the sampling algorithms offered by KEEL, and has a visual environment that facilitates running the experiments. In addition, the EPIC tool provides a summary of the results obtained according to numerous performance measures, and also stores the class assigned to each test instance in each case.

4.6. Statistical Test Selection

In the proposed methodology, it is necessary to evaluate the performance of several supervised classifiers on different datasets. To establish the existence or not of differences between performances, it is necessary to carry out statistical tests. Among the statistical tests recommended for this task [75] is the Friedman test [76].
In this case, a null hypothesis H0 is defined, which states that there are no differences in the performance of the compared algorithms, and an alternative hypothesis H1, which states that there are differences in the performance of the compared algorithms.
Friedman’s test consists of ranking the performances of the algorithms (from best to worst), replacing them with their respective ranks. The best result corresponds to rank 1, the second best to rank 2, and so on. In the case of ties, an average rank is assigned. Then, the test computes its statistic, which is used to find the corresponding probability value p in the statistical tables and compare it with a significance value α. In statistical hypothesis testing, the p-value represents the probability of obtaining a result as extreme as the one observed, assuming that the null hypothesis is true. The lower the p-value, the more evidence there is against the null hypothesis. If the p-value is less than the significance level α, the null hypothesis is rejected and significant differences are considered to exist.
If the null hypothesis of equal performance is rejected by the Friedman test, it is necessary to apply post-hoc tests to determine between which algorithms the differences lie. Among the post-hoc tests recommended for analyzing algorithm performance over multiple datasets is the Holm test [77]. To adjust the significance value α, this test uses a step-down procedure. There are automated tools for computing the Friedman test, as well as the post-hoc tests. In this paper, we use the KEEL tool [72]. As mentioned at the beginning of this section, the first six stages of the proposed methodology have been detailed. Considering the above, the next section describes stages seven and eight of the proposal, corresponding to the execution of the experiments and their analysis.
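Outside KEEL, the same two-step procedure can be sketched in Python with SciPy and statsmodels (the AUC matrix and the pairwise p-values below are illustrative placeholders, not the values reported in this paper):

```python
import numpy as np
from scipy.stats import friedmanchisquare
from statsmodels.stats.multitest import multipletests

# auc[i, j]: AUC of algorithm j on dataset i (illustrative random values)
rng = np.random.default_rng(0)
auc = rng.uniform(0.6, 0.9, size=(13, 4))

# Friedman test: do the four algorithms perform equally over the 13 datasets?
stat, p_value = friedmanchisquare(*[auc[:, j] for j in range(auc.shape[1])])
print(f"Friedman statistic = {stat:.3f}, p-value = {p_value:.4f}")

# If H0 is rejected, adjust the pairwise post-hoc p-values with Holm's procedure
raw_pairwise_p = [0.002, 0.030, 0.047]         # illustrative unadjusted p-values
reject, adjusted_p, _, _ = multipletests(raw_pairwise_p, alpha=0.05, method="holm")
print(list(zip(adjusted_p, reject)))
```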

5. Experimental Results

This section describes the experiments performed. First, an analysis is carried out on the results obtained by the sampling algorithms (Section 5.1). Then, the impact of the results obtained by these algorithms in the performance of associative classifiers is evaluated (Section 5.2). Finally, the statistical analysis is made (Section 5.3).

5.1. Results of the Sampling Algorithms

As expected, the oversampling algorithms were able to perfectly balance the datasets, obtaining an imbalance ratio of one in all cases (Table 6).
Considering the experiments performed, we can conclude that oversampling methods tend to obtain perfect balances. However, these results come at the cost of artificially increasing the cardinality of the datasets. The next section will analyze how these results impact the performance of classifiers of the associative approach, and whether this computational cost is justified by better performance.
Below, the results of the undersampling algorithms are offered in Table 7, for each of the datasets analyzed. The cases in which the classes were inverted are shown in italics (the minority class became the majority), and in bold the good results (those with an imbalance closer to 1).
The RUS method obtained a perfectly balanced dataset in all cases. CNN obtained good results for five datasets, but inverted the classes in two of them. It also showed an interesting behavior for the Australian, Japanese and Qualitative datasets (original IRs of 1.25, 1.26 and 1.34, respectively). In these datasets, the CNN method inverted the classes and increased the imbalance ratio (to 3.37, 3.45 and 6.36, respectively). In the remaining datasets, CNN obtained IRs from 1.12 to 2.62.
The CNNTL algorithm maintained the behavior of CNN in the Australian, Japanese and Qualitative datasets (returning IRs of 8.59, 9.07 and 5.85, respectively). In the remaining datasets, it had good performances (IRs from 1.15 to 2.96), but inverted the classes in the Default credit, German and Give me credit datasets.
The NCL algorithm obtained good results for the datasets with an original imbalance ratio lower than four, and did not obtain a balanced dataset for the remaining ones. It also inverted the classes in the Australian, Japanese and Qualitative datasets.
The OSS algorithm showed the same behavior as CNN and CNNTL in the Australian, Japanese and Qualitative datasets (with IRs of 5.50, 5.30 and 6.27, respectively). In the remaining datasets, it obtained good results (IRs from 1.21 to 2.56). As with CNNTL, it inverted the classes in the Default credit, German and Give me credit datasets.
The SBC algorithm had a disastrous behavior on the financial data. It inverted the classes in all datasets, and in nine cases it deleted the entire majority class (results marked with -). TL had good results for the almost balanced datasets (IR < 2), and did not obtain balanced results for the remaining data.
In general, all the balancing methods evaluated except RUS had a poor performance. CNN, CNNTL and OSS obtained highly imbalanced results for almost balanced data (IR < 2). We consider that those methods should not be applied to balanced or nearly balanced data. For the remaining datasets, their results range from 1.12 to 2.62 (CNN), 1.15 to 2.96 (CNNTL) and 1.21 to 2.56 (OSS). On the other hand, the NCL algorithm showed good results for datasets with IR < 4, and bad results for the remaining datasets.
In addition, all the compared algorithms except RUS inverted the classes in several datasets (converting the majority class into the minority one).
In the following, the results of the hybrid algorithms are offered in Table 8, for each of the datasets analyzed in the present investigation. The cases in which the classes were inverted are shown in italics (the minority class became the majority), and the good results (those with an imbalance closer to 1) in bold.
The SMOTE-ENN and SMOTE-TL algorithms obtained very good balances in all cases (IRs from 1.01 to 1.66). The SPIDER and SPIDER2 algorithms obtained good results for the datasets with an imbalance ratio lower than four, and bad results for the remaining datasets.
However, the SPIDER2 algorithm obtained better results than SPIDER for the datasets with high imbalance (IR > 4). The IRs for SPIDER2 range from 2.51 to 4.14, while those of SPIDER range from 3.98 to 6.87.
In addition, we made a diagram summarizing some of the main characteristics of the compared methods (Figure 4). We include some of the positive and negative characteristics, according to the results obtained for instance sampling.
Considering the experiments performed, we can conclude that some hybrid methods tend to obtain good balances; however, in many cases these results come at the expense of inverting the majority class, making it a minority. The next section will analyze how these results impact the performance of classifiers of the associative approach, and whether this computational cost is justified by better performance.

5.2. Impact of the Sampling Algorithms in the Performance of Associative Classifiers

This section evaluates the impact of the results obtained by the different data balancing algorithms on the performance of associative classifiers. For each of the datasets, after applying the balancing algorithms, the performance (Area under the ROC Curve) of the four associative classifiers HACT, EG, NAC and SNDAM was calculated.

5.2.1. Impact on the Performance of the HACT Classifier

The AUC results for the HACT classifier are presented in Table 9, Table 10 and Table 11 below. The good results (AUC improvements) are highlighted in bold, while the results that present less AUC than the original set (imbalanced) are underlined.
The oversampling algorithms had slight drops and increases in the classifier performance, but no clear advantage was shown in the results. However, to determine if these differences in performance are significant or not, statistical tests were applied (Section 5.3).
A similar behavior was observed for the undersampling algorithms, with slight drops and increases in classifier performance, but with no clear advantages. Because the SBC algorithm deleted the majority class in several datasets, its results on such data were not computed. Again, to determine whether these differences in performance are significant or not, statistical tests were applied (Section 5.3).
As can be seen, the differences in performance for the HACT classifier were obtained in a few datasets, and never exceeded the original AUC by more than 2%. These results point to the inefficiency of the sampling algorithms for the improvement of this classifier.
Similarly, for the oversampling and hybrid algorithms, the slight improvements in performance in some datasets do not justify, in the opinion of the authors, the increase in the cardinality of the datasets.

5.2.2. Impact on the Performance of the Extended Gamma Classifier

The AUC results for the Extended Gamma classifier are presented in Table 12, Table 13 and Table 14 below. The good results (AUC improvements) are highlighted in bold, while the results that present less AUC than the original set (imbalanced) are underlined.
For the Extended Gamma classifier, the results of oversampling algorithms were similar to those obtained by the HACT classifier. There is no clear advantage of applying the oversampling algorithms, in the performance of the classifier.
For the undersampling algorithms, there is an improvement of classifier performance after applying OSS and CNNTL in six and five of the compared datasets, respectively. To determine if these differences in performance are significant or not, statistical tests were applied (Section 5.3).
For the Extended Gamma classifier, the hybrid algorithms showed a subtle improvement in performance in some datasets (e.g., Australian, Default credit, Give me credit, Polish_1 and Polish_2). In the remaining datasets, the differences in favor of the AUC are less than 1%. However, the SMOTE-ENN and SMOTE-TL algorithms had very unfavorable results in bankruptcy detection, showing a loss of more than 20% of AUC with respect to the original.
As with the HACT classifier, in the case of the Extended Gamma classifier the slight improvements in performance obtained with the oversampling algorithms do not justify, in the authors’ opinion, the increase in the cardinality of the datasets.

5.2.3. Impact on the Performance of the NAC Classifier

The AUC results for the NAC classifier are presented below, in Table 15, Table 16 and Table 17. The good results (AUC improvements) are highlighted in bold, while the results that present less AUC than the original (imbalanced) set are underlined. The oversampling algorithms outperformed the results of using the original data in just five datasets. In all of them, the AUC improvements were of only 1%. Considering that oversampling algorithms increase the computational complexity by augmenting the number of instances, we consider that their potential benefits for AUC do not justify the added complexity.
Regarding the undersampling algorithms, the best results were obtained by NCL and RUS, improving the AUC in four datasets. However, as can be seen, on numerous occasions the use of undersampling techniques negatively impacts the performance of the NAC classifier.
As can be seen in Table 17, the hybrid algorithms showed a slight improvement in the performance of the classifier in seven of the analyzed databases. The greatest improvement was obtained in the Iranian dataset, where the Area under the ROC Curve increased from 0.61 to 0.73 with the SMOTE-ENN algorithm. In the rest of the datasets analyzed, at least no strong drops in classification performance were observed.
For the NAC classifier, the data balancing algorithms did not show an obvious improvement over the original performance. It should be noted that, in addition to involving a computational cost, the hybrid and oversampling algorithms also increase the cardinality of the datasets.

5.2.4. Impact on the Performance of the SNDAM Classifier

The AUC results for the SNDAM classifier are presented below, in Table 18, Table 19 and Table 20. The good results (AUC improvements) are highlighted in bold, while the results that present less AUC than the original set (imbalanced) are underlined.
Unlike the previously analyzed classifiers, SNDAM showed an increase in AUC in 11 of the 13 compared datasets after applying oversampling algorithms. These results point to the benefits of using such sampling techniques to increase SNDAM performance. However, to establish whether such AUC differences are significant or not, statistical tests are applied in Section 5.3.
In addition, undersampling algorithms also seem to increase the AUC of SNDAM, again for 11 of the 13 datasets. NCL, OSS, RUS and TL showed good results, although the increases in AUC were small (1%–4%) except for the Japanese dataset (7%).
The hybrid sampling algorithms obtained the best results, increasing SNDAM performance in 12 of the 13 datasets (SMOTE-ENN and SMOTE-TL) and in eight datasets (SPIDER and SPIDER2). For the SNDAM classifier, the data balancing algorithms showed an evident improvement with respect to the original performance, unlike the other associative classifiers analyzed. Again, the next section will address the statistical tests to determine whether the differences in performance found are significant or not.

5.3. Statistical Analysis

To establish whether the differences in AUC found in the previous section are significant or not, statistical tests were carried out.
The Friedman tests applied did not reject the hypothesis of equal performance when comparing the AUC of the balancing methods for the HACT classifier, with p-values of 0.9799 for oversampling, 0.2116 for undersampling, and 0.9212 for hybrid algorithms. In this case, it is possible to conclude, with 95% certainty, that using class balancing methods DOES NOT improve the performance of the HACT classifier on imbalanced datasets belonging to the financial field.
The test also did not reject the null hypothesis for oversampling and hybrid algorithms for NAC classifier (p-values of 0.2853 and 0.4980, respectively). For undersampling algorithms, the test did reject the null hypothesis, with a p-value of 0.0207. In the Friedman test, the best ranked algorithm was the original classifier, without instance sampling. For the undersampling algorithms, we applied the Holm’s test (Table 21). Holm’s procedure rejects those hypotheses that have an unadjusted p-value ≤ 0.01.
As shown by the test, we can conclude with 95% certainty that the sampling algorithms, except SBC, DID NOT improve the performance of the NAC classifier on imbalanced financial data. In addition, the SBC algorithm decreased the NAC performance.
Regarding the Extended Gamma classifier, the Friedman tests did reject the null hypothesis of equal performance for oversampling, undersampling and hybrid algorithms. The corresponding p-values were 0.000048, 0.015356 and 0.013584, respectively. The best ranked algorithms were ROS, TL and SPIDER2, respectively. In these cases, post-hoc tests were performed to determine between which algorithms the differences lay. Table 22, Table 23 and Table 24 show the results of the tests applied for oversampling, undersampling and hybrid methods, respectively.
The ROS algorithm had no significant differences with respect to the original classifier, nor with the SMOTE-SL algorithm. We can therefore conclude, with 95% confidence, that using oversampling algorithms DID NOT increase the performance of the Extended Gamma classifier on imbalanced financial data.
The best ranked undersampling algorithm, TL, had no significant difference in performance with any of the remaining undersampling algorithms except SBC, nor with respect to using the original imbalanced dataset. The SBC algorithm was indeed significantly worse than TL in terms of Extended Gamma classifier performance. Again, we can conclude, with 95% confidence, that using undersampling algorithms DID NOT increase the performance of the Extended Gamma classifier on imbalanced financial data.
The best ranked hybrid algorithm, SPIDER2, had no significant difference in performance with SPIDER and SMOTE-TL, nor with respect to using the original imbalanced dataset. The SMOTE-ENN algorithm was significantly worse than SPIDER2 in terms of Extended Gamma classifier performance. Again, we can conclude, with 95% confidence, that using hybrid algorithms DID NOT increase the performance of the Extended Gamma classifier on imbalanced financial data.
With respect to the SNDAM classifier, the best ranked algorithms according to the Friedman tests were SMOTE, RUS and SMOTE-ENN. The results of the corresponding Holm’s tests for oversampling, undersampling and hybrid algorithms are shown in Table 25, Table 26 and Table 27.
The SMOTE algorithm did not have significant differences with respect to the SMOTE-BL, ADASYN and ADOMS algorithms, since the null hypotheses were not rejected in those cases. However, SMOTE showed a significantly better performance than SMOTE-SL, ROS and the original classifier without instance selection. The statistical tests allow us to state that using oversampling techniques such as SMOTE DID increase the Area under the ROC curve of the SNDAM classifier over imbalanced financial data. Such improvement comes at the additional computational cost of creating artificial instances and therefore increasing the cardinality of the datasets.
The best-ranked undersampling algorithm, RUS, showed no significant performance difference with respect to any of the remaining undersampling algorithms except SBC, nor with respect to the original imbalanced dataset. Again, we can conclude with 95% confidence that undersampling algorithms DID NOT increase the performance of the SNDAM classifier on imbalanced financial data.
Holm’s test found no significant difference in performance between the SMOTE-ENN and SMOTE-TL algorithms, but it did find SMOTE-ENN to be significantly better than SPIDER2, SPIDER and the original classifier without instance sampling. We can therefore conclude with 95% confidence that hybrid algorithms such as SMOTE-ENN DID increase the performance of the SNDAM classifier on imbalanced financial data.
From the experiments performed, it is possible to establish that the HACT, Extended Gamma and NAC classifiers DO NOT benefit from data balancing. On the other hand, the SNDAM classifier DOES improve its performance when the datasets are balanced with oversampling (such as SMOTE) or hybrid (such as SMOTE-ENN) algorithms, whereas undersampling algorithms DO NOT improve its performance on imbalanced financial data.
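The overall evaluation scheme implied by these comparisons can be sketched as follows: the sampler is fitted on the training partition only, the test partition is left untouched, and the AUC is averaged over stratified folds. The classifier below is a stand-in (the associative models evaluated in this paper are not available in scikit-learn) and the data are synthetic, so the snippet only illustrates the protocol, not the actual experimental setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score
from sklearn.neighbors import KNeighborsClassifier
from imblearn.combine import SMOTEENN

def cv_auc(X, y, sampler=None, n_splits=5, seed=0):
    """Mean test-fold AUC; the sampler (if any) is fitted on the training folds only."""
    aucs = []
    skf = StratifiedKFold(n_splits, shuffle=True, random_state=seed)
    for tr, te in skf.split(X, y):
        X_tr, y_tr = X[tr], y[tr]
        if sampler is not None:                      # balance the training fold only
            X_tr, y_tr = sampler.fit_resample(X_tr, y_tr)
        clf = KNeighborsClassifier().fit(X_tr, y_tr)
        aucs.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))
    return float(np.mean(aucs))

# Hypothetical imbalanced data used only to exercise the pipeline.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=1)
print(cv_auc(X, y), cv_auc(X, y, SMOTEENN(random_state=0)))
```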

6. Conclusions and Future Work

In this paper, an in-depth study was carried out on data balancing techniques, their application to financial data, and their impact on the performance of associative classifiers. This study allowed us to reach the following conclusions:
  • About sampling methods:
    • All of the oversampling methods tested obtained balanced datasets, although at the cost of increasing the cardinality of the data.
    • With the exception of RUS, the undersampling methods analyzed (CNN, CNNTL, NCL, OSS, SBC and TL) failed to obtain balanced datasets when the imbalance ratio of the original set was greater than 4.0. However, for moderate imbalance ratios (below 4.0), the NCL and TL algorithms obtained good results.
    • The CNN, CNNTL, OSS and SBC algorithms systematically reversed the class proportions of the datasets, turning the majority class into a minority.
    • The SBC algorithm behaved very poorly on the financial data, since it systematically eliminated all the instances of the majority class.
    • Both SMOTE-ENN and SMOTE-TL obtained good results in terms of data balancing.
    • SPIDER2 obtained better-balanced datasets than SPIDER.
  • About the impact of the sampling in the associative classifiers:
    • The HACT, Extended Gamma and NAC classifiers do not benefit from financial data balancing.
    • Undersampling algorithms do not benefit the SNDAM classifier. However, oversampling and hybrid methods do increase the performance of SNDAM over imbalanced financial data.
    • There is a significant improvement, with 95% confidence, in the Area under the ROC curve of SNDAM when imbalanced financial data are sampled with SMOTE or SMOTE-ENN.
Considering the above, the following future work is proposed:
  • To design undersampling algorithms that are robust to high imbalance ratios, in order to overcome the limitations found in the evaluated algorithms.
  • To apply the proposed methodology to other supervised classifiers, for instance Deep Neural Networks and other algorithms related to Deep Learning.
  • To select datasets from areas of interest other than finance, in order to perform experiments similar to those presented in this paper.

Author Contributions

Conceptualization, C.Y.-M. and Y.V.-R.; methodology, Y.V.-R.; software, A.R.-D.-d.-l.-V.; validation, O.C.-N.; formal analysis, Y.V.-R. and C.Y.-M.; investigation, I.L.-Y.; writing—original draft preparation, Y.V.-R.; writing—review and editing, C.Y.-M.; visualization, A.R.-D.-d.-l.-V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors gratefully acknowledge the Instituto Politécnico Nacional (Secretaría Académica, Comisión de Operación y Fomento de Actividades Académicas, Secretaría de Investigación y Posgrado, CIC and CIDETEC), the Consejo Nacional de Ciencia y Tecnología (Conacyt), and Sistema Nacional de Investigadores for their economic support to develop this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bischl, B.; Kühn, T.; Szepannek, G. On Class Imbalance Correction for Classification Algorithms in Credit Scoring. In Operations Research Proceedings 2014; Springer: Basel, Switzerland, 2014; pp. 37–43. [Google Scholar]
  2. García, V.; Marqués, A.; Sánchez, J.S. On the use of data filtering techniques for credit risk prediction with instance-based models. Expert Syst. Appl. 2012, 39, 13267–13276. [Google Scholar] [CrossRef]
  3. Marqués, A.I.; García, V.; Sánchez, J.S. On the suitability of resampling techniques for the class imbalance problem in credit scoring. J. Oper. Res. Soc. 2013, 64, 1060–1070. [Google Scholar] [CrossRef] [Green Version]
  4. Banasik, J.; Crook, J.; Thomas, L. Sample selection bias in credit scoring models. J. Oper. Res. Soc. 2003, 54, 822–832. [Google Scholar] [CrossRef]
  5. Su, H.; Qi, W.; Yang, C.; Aliverti, A.; Ferrigno, G.; De Momi, E. Deep Neural Network Approach in Human-Like Redundancy Optimization for Anthropomorphic Manipulators. IEEE Access 2019, 7, 124207–124216. [Google Scholar] [CrossRef]
  6. Su, H.; Yang, C.; Mdeihly, H.; Rizzo, A.; Ferrigno, G.; De Momi, E. Neural Network Enhanced Robot Tool Identification and Calibration for Bilateral Teleoperation. IEEE Access 2019, 7, 122041–122051. [Google Scholar] [CrossRef]
  7. Goh, R.; Lee, L. Credit Scoring: A Review on Support Vector Machines and Metaheuristic Approaches. Adv. Oper. Res. 2019, 2019, 1–30. [Google Scholar] [CrossRef]
  8. Wang, T.; Li, J. An improved support vector machine and its application in P2P lending personal credit scoring. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2019; p. 062041. [Google Scholar]
  9. Luo, J.; Yan, X.; Tian, Y. Unsupervised quadratic surface support vector machine with application to credit risk assessment. Eur. J. Oper. Res. 2020, 280, 1008–1017. [Google Scholar] [CrossRef]
  10. Akkoç, S. Exploring the Nature of Credit Scoring: A Neuro Fuzzy Approach. Fuzzy Econ. Rev. 2019, 24, 3–24. [Google Scholar]
  11. Livieris, I.E. Forecasting economy-related data utilizing weight-constrained recurrent neural networks. Algorithms 2019, 12, 85. [Google Scholar] [CrossRef] [Green Version]
  12. Munkhdalai, L.; Lee, J.Y.; Ryu, K.H. A Hybrid Credit Scoring Model Using Neural Networks and Logistic Regression. In Advances in Intelligent Information Hiding and Multimedia Signal Processing; Springer: Basel, Switzerland, 2020; pp. 251–258. [Google Scholar]
  13. Feng, X.; Xiao, Z.; Zhong, B.; Dong, Y.; Qiu, J. Dynamic weighted ensemble classification for credit scoring using Markov Chain. Appl. Intell. 2019, 49, 555–568. [Google Scholar] [CrossRef]
  14. Guo, S.; He, H.; Huang, X. A multi-stage self-adaptive classifier ensemble model with application in credit scoring. IEEE Access 2019, 7, 78549–78559. [Google Scholar] [CrossRef]
  15. Pławiak, P.; Abdar, M.; Acharya, U.R. Application of new deep genetic cascade ensemble of SVM classifiers to predict the Australian credit scoring. Appl. Soft Comput. 2019, 84, 105740. [Google Scholar] [CrossRef]
  16. Xiao, J.; Zhou, X.; Zhong, Y.; Xie, L.; Gu, X.; Liu, D. Cost-sensitive semi-supervised selective ensemble model for customer credit scoring. Knowl.-Based Syst. 2020, 189, 105118. [Google Scholar] [CrossRef]
  17. Shen, K.-Y.; Sakai, H.; Tzeng, G.-H. Comparing two novel hybrid MRDM approaches to consumer credit scoring under uncertainty and fuzzy judgments. Int. J. Fuzzy Syst. 2019, 21, 194–212. [Google Scholar] [CrossRef]
  18. Zhang, W.; He, H.; Zhang, S. A novel multi-stage hybrid model with enhanced multi-population niche genetic algorithm: An application in credit scoring. Expert Syst. Appl. 2019, 121, 221–232. [Google Scholar] [CrossRef]
  19. Maldonado, S.; Peters, G.; Weber, R. Credit scoring using three-way decisions with probabilistic rough sets. Inf. Sci. 2020, 507, 700–714. [Google Scholar] [CrossRef]
  20. García, V.; Marqués, A.I.; Sánchez, J.S. Exploring the synergetic effects of sample types on the performance of ensembles for credit risk and corporate bankruptcy prediction. Inf. Fusion 2019, 47, 88–101. [Google Scholar] [CrossRef]
  21. Louzada, F.; Ara, A.; Fernandes, G.B. Classification methods applied to credit scoring: Systematic review and overall comparison. Surv. Oper. Res. Manag. Sci. 2016, 21, 117–134. [Google Scholar] [CrossRef] [Green Version]
  22. Lessmann, S.; Baesens, B.; Seow, H.-V.; Thomas, L.C. Benchmarking state-of-the-art classification algorithms for credit scoring: An update of research. Eur. J. Oper. Res. 2015, 247, 124–136. [Google Scholar] [CrossRef] [Green Version]
  23. Brown, I.; Mues, C. An experimental comparison of classification algorithms for imbalanced credit scoring data sets. Expert Syst. Appl. 2012, 39, 3446–3453. [Google Scholar] [CrossRef] [Green Version]
  24. Su, H.; Yang, C.; Ferrigno, G.; De Momi, E. Improved human–robot collaborative control of redundant robot for teleoperated minimally invasive surgery. IEEE Robot. Autom. Lett. 2019, 4, 1447–1453. [Google Scholar] [CrossRef] [Green Version]
  25. Wolpert, D.H. The supervised learning no-free-lunch theorems. In Soft Computing and Industry; Springer: London, UK, 2002; pp. 25–42. [Google Scholar]
  26. Villuendas-Rey, Y.; Rey-Benguría, C.F.; Ferreira-Santiago, Á.; Camacho-Nieto, O.; Yáñez-Márquez, C. The naïve associative classifier (NAC): A novel, simple, transparent, and accurate classification model evaluated on financial data. Neurocomputing 2017, 265, 105–115. [Google Scholar] [CrossRef]
  27. López, V.; Fernández, A.; García, S.; Palade, V.; Herrera, F. An insight into classification with imbalanced data: Empirical results and current trends on using data intrinsic characteristics. Inf. Sci. 2013, 250, 113–141. [Google Scholar] [CrossRef]
  28. Piramuthu, S. On preprocessing data for financial credit risk evaluation. Expert Syst. Appl. 2006, 30, 489–497. [Google Scholar] [CrossRef]
  29. Abdou, H.A.; Pointon, J. Credit scoring, statistical techniques and evaluation criteria: A review of the literature. Intell. Syst. Account. Financ. Manag. 2011, 18, 59–88. [Google Scholar] [CrossRef] [Green Version]
  30. Su, H.; Ovur, S.E.; Zhou, X.; Qi, W.; Ferrigno, G.; De Momi, E. Depth vision guided hand gesture recognition using electromyographic signals. Adv. Robot. 2020, 1–13. [Google Scholar] [CrossRef]
  31. Beaver, W.H. Financial ratios as predictors of failure. J. Account. Res. 1966, 4, 71–111. [Google Scholar] [CrossRef]
  32. Altman, E.I. Financial ratios, discriminant analysis and the prediction of corporate bankruptcy. J. Financ. 1968, 23, 589–609. [Google Scholar] [CrossRef]
  33. Damrongsakmethee, T.; Neagoe, V.-E. Principal component analysis and relieff cascaded with decision tree for credit scoring. In Proceedings of the Computer Science On-line Conference, Zlin, Czech Republic, 24–27 April 2019; pp. 85–95. [Google Scholar]
  34. Kozodoi, N.; Lessmann, S.; Papakonstantinou, K.; Gatsoulis, Y.; Baesens, B. A multi-objective approach for profit-driven feature selection in credit scoring. Decis. Support Syst. 2019, 120, 106–117. [Google Scholar] [CrossRef]
  35. Srinivasan, V.; Kim, Y.H. Credit granting: A comparative analysis of classification procedures. J. Financ. 1987, 42, 665–681. [Google Scholar] [CrossRef]
  36. Abellán, J.; Castellano, J.G. A comparative study on base classifiers in ensemble methods for credit scoring. Expert Syst. Appl. 2017, 73, 1–10. [Google Scholar] [CrossRef]
  37. Boughaci, D.; Alkhawaldeh, A.A. Appropriate machine learning techniques for credit scoring and bankruptcy prediction in banking and finance: A comparative study. Risk Decis. Anal. 2018, 1–10. [Google Scholar] [CrossRef]
  38. Greene, W. Sample selection in credit-scoring models. Jpn. World Econ. 1998, 10, 299–316. [Google Scholar] [CrossRef]
  39. Crone, S.F.; Finlay, S. Instance sampling in credit scoring: An empirical study of sample size and balancing. Int. J. Forecast. 2012, 28, 224–238. [Google Scholar] [CrossRef]
  40. Dal Pozzolo, A.; Caelen, O.; Bontempi, G. When is undersampling effective in unbalanced classification tasks? In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Porto, Portugal, 7–11 September 2015; pp. 200–215. [Google Scholar]
  41. Santiago-Montero, R. Hybrid Associative Pattern Classifier with Translation (In Spanish: Clasificador Híbrido de Patrones Basado en la Lernmatrix de Steinbuch y el Linear Associator de Anderson Kohonen). Master’s Thesis, Centro de Investigación en Computación, Instituto Politécnico Nacional, Mexico City, Mexico, 2003. [Google Scholar]
  42. Cleofas-Sánchez, L.; García, V.; Marqués, A.; Sánchez, J.S. Financial distress prediction using the hybrid associative memory with translation. Appl. Soft Comput. 2016, 44, 144–152. [Google Scholar] [CrossRef] [Green Version]
  43. López-Yáñez, I.; Argüelles-Cruz, A.J.; Camacho-Nieto, O.; Yáñez-Márquez, C. Pollutants time-series prediction using the Gamma classifier. Int. J. Comput. Int. Syst. 2011, 4, 680–711. [Google Scholar] [CrossRef]
  44. Ramirez, A.; Lopez, I.; Villuendas, Y.; Yanez, C. Evolutive improvement of parameters in an associative classifier. IEEE Lat. Am. Trans. 2015, 13, 1550–1555. [Google Scholar] [CrossRef]
  45. Villuendas-Rey, Y.; Yanez-Marquez, C.; Anton-Vargas, J.A.; Lopez-Yanez, I. An extension of the gamma associative classifier for dealing with hybrid data. IEEE Access 2019, 7, 64198–64205. [Google Scholar] [CrossRef]
  46. Serrano-Silva, Y.O.; Villuendas-Rey, Y.; Yáñez-Márquez, C. Automatic feature weighting for improving financial Decision Support Systems. Decis. Support Syst. 2018, 107, 78–87. [Google Scholar] [CrossRef]
  47. Ramírez-Rubio, R.; Aldape-Pérez, M.; Yáñez-Márquez, C.; López-Yáñez, I.; Camacho-Nieto, O. Pattern classification using smallest normalized difference associative memory. Pattern Recogn. Lett. 2017, 93, 104–112. [Google Scholar] [CrossRef]
  48. Cleofas-Sánchez, L.; Sánchez, J.S.; García, V.; Valdovinos, R.M. Associative Learning on imbalanced environments: An empirical study. Expert Syst. Appl. 2016, 54, 387–397. [Google Scholar] [CrossRef] [Green Version]
  49. González, S.; García, S.; Li, S.-T.; Herrera, F. Chain based sampling for monotonic imbalanced classification. Inf. Sci. 2019, 474, 187–204. [Google Scholar] [CrossRef]
  50. Nejatian, S.; Parvin, H.; Faraji, E. Using sub-sampling and ensemble clustering techniques to improve performance of imbalanced classification. Neurocomputing 2018, 276, 55–66. [Google Scholar] [CrossRef]
  51. Yan, Y.; Liu, R.; Ding, Z.; Du, X.; Chen, J.; Zhang, Y. A parameter-free cleaning method for SMOTE in imbalanced classification. IEEE Access 2019, 7, 23537–23548. [Google Scholar] [CrossRef]
  52. Li, Y.; Wang, J.; Wang, S.; Liang, J.; Li, J. Local dense mixed region cutting+ global rebalancing: A method for imbalanced text sentiment classification. Int. J. Mach. Learn. Cybern. 2019, 10, 1805–1820. [Google Scholar] [CrossRef]
  53. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357. [Google Scholar] [CrossRef]
  54. He, H.; Bai, Y.; Garcia, E.A.; Li, S. ADASYN: Adaptive synthetic sampling approach for imbalanced learning. In Proceedings of the 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–8 June 2008; pp. 1322–1328. [Google Scholar]
  55. Han, H.; Wang, W.-Y.; Mao, B.-H. Borderline-SMOTE: A new over-sampling method in imbalanced data sets learning. In Proceedings of the International Conference on Intelligent Computing, Hefei, China, 23–26 August 2005; pp. 878–887. [Google Scholar]
  56. Bunkhumpornpat, C.; Sinapiromsaran, K.; Lursinsap, C. Safe-level-smote: Safe-level-synthetic minority over-sampling technique for handling the class imbalanced problem. In Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining, Bangkok, Thailand, 27–30 April 2009; pp. 475–482. [Google Scholar]
  57. Batista, G.E.; Prati, R.C.; Monard, M.C. A study of the behavior of several methods for balancing machine learning training data. ACM SIGKDD Explor. Newsl. 2004, 6, 20–29. [Google Scholar] [CrossRef]
  58. Tang, S.; Chen, S.-P. The generation mechanism of synthetic minority class examples. In Proceedings of the 2008 International Conference on Information Technology and Applications in Biomedicine, Shenzhen, China, 30–31 May 2008; pp. 444–447. [Google Scholar]
  59. Tomek, I. Two modifications of CNN. IEEE Trans. Syst. Man Cybern. 1976, 6, 769–772. [Google Scholar]
  60. Hart, P. The condensed nearest neighbor rule. IEEE Trans. Inf. Theory 1968, 14, 515–516. [Google Scholar] [CrossRef]
  61. Kubat, M.; Matwin, S. Addressing the curse of imbalanced training sets: One-sided selection. In Proceedings of the 14th International Conference on Machine Learning (ICML), Nashville, TN, USA, 8–12 July 1997; ICML: Nashville, TN, USA, 1997; pp. 179–186. [Google Scholar]
  62. Laurikkala, J. Improving identification of difficult small classes by balancing class distribution. In Proceedings of the Conference on Artificial Intelligence in Medicine in Europe, Cascais, Portugal, 1–4 July 2001; pp. 63–66. [Google Scholar]
  63. Yen, S.-J.; Lee, Y.-S. Under-sampling approaches for improving prediction of the minority class in an imbalanced dataset. In Intelligent Control and Automation; Springer: Berlin, Heidelberg, 2006; pp. 731–740. [Google Scholar]
  64. Stefanowski, J.; Wilk, S. Selective pre-processing of imbalanced data for improving classification performance. In Proceedings of the International Conference on Data Warehousing and Knowledge Discovery, Turin, Italy, 1–5 September 2008; pp. 283–292. [Google Scholar]
  65. Napierała, K.; Stefanowski, J.; Wilk, S. Learning from imbalanced data in presence of noisy and borderline examples. In Proceedings of the International Conference on Rough Sets and Current Trends in Computing, Warsaw, Poland, 28–30 June 2010; pp. 158–167. [Google Scholar]
  66. Larson, S.C. The shrinkage of the coefficient of multiple correlation. J. Educ. Psychol. 1931, 22, 45–55. [Google Scholar] [CrossRef]
  67. Stone, M. Cross-Validatory Choice and Assessment of Statistical Predictions. J. R. Stat. Soc. 1974, 36, 111–147. [Google Scholar] [CrossRef]
  68. Geisser, S. The predictive sample reuse model method with applications. J. Am. Stat. Assoc. 1975, 70, 320–328. [Google Scholar] [CrossRef]
  69. Dietterich, T.G. Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms. Neural Comput. 1998, 10, 1895–1923. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  70. Alcalá-Fdez, J.; Sánchez, L.; García, S.; del Jesus, M.J.; Ventura, S.; Garrell, J.M.; Otero, J.; Romero, C.; Bacardit, J.; Rivas, V.M.; et al. KEEL: A software tool to assess evolutionary algorithms for data mining problems. Soft Comput. 2009, 13, 307–318. [Google Scholar]
  71. Sokolova, M.; Lapalme, G. A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 2009, 45, 427–437. [Google Scholar] [CrossRef]
  72. Triguero, I.; González, S.; Moyano, J.M.; García López, S.; Alcalá Fernández, J.; Luengo Martín, J.; Fernández, A.; del Jesús, M.J.; Sánchez, L.; Herrera, F. KEEL 3.0: An open source software for multi-stage analysis in data mining. Int. J. Comput. Intell. Syst. 2017, 10, 1238–1249. [Google Scholar] [CrossRef] [Green Version]
  73. Hernández-Castaño, J.A.; Villuendas-Rey, Y.; Camacho-Nieto, O.; Yáñez-Márquez, C. Experimental platform for intelligent computing (epic). Computación y Sistemas 2018, 22, 245–253. [Google Scholar] [CrossRef]
  74. Hernández-Castaño, J.A.; Villuendas-Rey, Y.; Camacho-Nieto, O.; Rey-Benguría, C.F. A New Experimentation Module for the EPIC Software. Res. Comput. Sci. 2018, 147, 243–252. [Google Scholar] [CrossRef]
  75. Garcia, S.; Herrera, F. An extension on “statistical comparisons of classifiers over multiple data sets” for all pairwise comparisons. J. Mach. Learn. Res. 2008, 9, 2677–2694. [Google Scholar]
  76. Friedman, M. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J. Am. Stat. Assoc. 1937, 32, 675–701. [Google Scholar] [CrossRef]
  77. Holm, S. A simple sequentially rejective multiple test procedure. Scand. J. Stat. 1979, 6, 65–70. [Google Scholar]
Figure 1. Stages of the proposed methodology.
Figure 2. Two class confusion matrix.
Figure 3. Some performance measures for two class imbalanced classification.
Figure 4. Characteristics of the compared methods.
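As a reference for Figures 2 and 3, the sketch below computes sensitivity, specificity and the AUC of a crisp (non-scoring) two-class classifier from its confusion matrix, using the common single-operating-point approximation AUC = (TPR + TNR)/2; the counts used here are hypothetical, not taken from the experiments.

```python
def imbalance_measures(tp, fn, fp, tn):
    """Sensitivity, specificity and single-point AUC from a two-class confusion matrix."""
    sensitivity = tp / (tp + fn)          # true positive rate (minority class)
    specificity = tn / (tn + fp)          # true negative rate (majority class)
    auc = (sensitivity + specificity) / 2.0
    return sensitivity, specificity, auc

# Hypothetical counts for a credit-like problem (positives = rejected applicants).
print(imbalance_measures(tp=38, fn=22, fp=110, tn=830))
```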
Table 1. Description of the datasets.
Dataset | Instances | Num. Attributes | Cat. Attributes | Missing | IR
Australian | 690 | 8 | 6 | No | 1.25
Default credit | 30,000 | 13 | 10 | No | 3.52
German | 1000 | 7 | 13 | No | 2.33
Give me credit | 150,000 | 10 | 0 | No | 13.96
Iranian | 1002 | 28 | 0 | Yes | 19.04
Japanese | 690 | 6 | 9 | Yes | 1.21
Polish_1 | 7027 | 64 | 0 | Yes | 24.93
Polish_2 | 10,173 | 64 | 0 | Yes | 24.43
Polish_3 | 10,503 | 64 | 0 | Yes | 20.22
Polish_4 | 9792 | 64 | 0 | Yes | 18.01
Polish_5 | 5910 | 64 | 0 | Yes | 13.41
Qualitative | 250 | 0 | 6 | No | 1.34
The PAKDD | 20,000 | 10 | 9 | Yes | 4.12
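A summary like Table 1 can be recomputed from a local copy of each dataset; the sketch below does so with pandas, where the file name and the label column are placeholders rather than the actual data repositories used in the paper.

```python
import pandas as pd

df = pd.read_csv("australian.csv")            # hypothetical local copy of a dataset
y = df["class"]                               # hypothetical name of the label column
features = df.drop(columns=["class"])
numeric = features.select_dtypes("number").shape[1]
categorical = features.shape[1] - numeric
counts = y.value_counts()
print(len(df),                                # instances
      numeric, categorical,                   # attribute types, as in Table 1
      df.isna().any().any(),                  # missing values present?
      round(counts.max() / counts.min(), 2))  # imbalance ratio (IR)
```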
Table 2. Empirical values for the parameters of the Gamma classifier.
Parameter | Meaning | Recommendation
w | Vector of attribute weights, indicating the importance of each attribute. | Computed by Differential Evolution [40]
θ | Initial value taken by θ; it indicates how different two numerical values can be while the extended generalized similarity operator still considers them similar. | θ = 0 (initial value)
ρ | Stop parameter: the maximum value allowed for θ, which permits continuing the search for the disambiguation of patterns near the border; when θ = ρ, the Gamma classifier stops iterating and disambiguates the class. | ρ = ∑_{j=1}^{m} ( ∑_{i=1}^{p} x_j^i ) if there is at least one numeric attribute; otherwise, use ρ = 1
ρ0 | Pause parameter. At this pause, the pattern to be classified is evaluated in order to determine whether or not it belongs to the unknown class; this determines whether the normal operation of the algorithm continues. | ρ0 = ∑_{j=1}^{m} ∑_{i=1}^{p} x_j^i if there is at least one numeric attribute; otherwise, use ρ0 = 1
u | Threshold used to decide whether the pattern to be classified belongs to the unknown class or to one of the known classes. | u = 1
Table 3. Oversampling algorithms used.
Name | Acronym | Reference
Synthetic Minority Over-sampling TEchnique | SMOTE | [53]
ADAptive SYNthetic Sampling | ADASYN | [54]
Borderline-Synthetic Minority Over-sampling TEchnique | SMOTE-BL | [55]
Safe Level Synthetic Minority Over-sampling TEchnique | SMOTE-SL | [56]
Random Oversampling | ROS | [57]
Adjusting the Direction Of the synthetic Minority clasS examples | ADOMS | [58]
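Several of the oversampling methods in Table 3 have counterparts in the imbalanced-learn library, which can be convenient for replication. This mapping is an assumption about tooling (ADOMS and Safe-Level SMOTE are not included there), not the software used for the experiments in this paper.

```python
from imblearn.over_sampling import SMOTE, ADASYN, BorderlineSMOTE, RandomOverSampler

# Over-samplers roughly matching Table 3; each exposes fit_resample(X, y).
oversamplers = {
    "SMOTE": SMOTE(random_state=0),
    "ADASYN": ADASYN(random_state=0),
    "SMOTE-BL": BorderlineSMOTE(random_state=0),
    "ROS": RandomOverSampler(random_state=0),
    # ADOMS and SMOTE-SL have no imbalanced-learn counterpart.
}
```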
Table 4. Undersampling algorithms used.
Name | Acronym | Reference
Tomek’s modification of Condensed Nearest Neighbor | TL | [59]
Condensed Nearest Neighbor | CNN | [60]
Condensed Nearest Neighbor + Tomek’s modification of Condensed Nearest Neighbor | CNNTL | [57]
One Side Selection | OSS | [61]
Random Undersampling | RUS | [57]
Neighborhood Cleaning Rule | NCL | [62]
Under-Sampling Based on Clustering | SBC | [63]
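Likewise, most of the undersampling methods in Table 4 are available in imbalanced-learn; CNNTL and SBC are not, and would require their original implementations. Again, this is only a hedged tooling suggestion, not the setup used in the experiments.

```python
from imblearn.under_sampling import (CondensedNearestNeighbour, NeighbourhoodCleaningRule,
                                     OneSidedSelection, RandomUnderSampler, TomekLinks)

# Under-samplers roughly matching Table 4; each exposes fit_resample(X, y).
undersamplers = {
    "TL": TomekLinks(),
    "CNN": CondensedNearestNeighbour(random_state=0),
    "OSS": OneSidedSelection(random_state=0),
    "RUS": RandomUnderSampler(random_state=0),
    "NCL": NeighbourhoodCleaningRule(),
    # CNNTL and SBC have no imbalanced-learn counterpart.
}
```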
Table 5. Hybrid algorithms used.
Name | Acronym | Reference
Synthetic Minority Over-sampling Technique + Edited Nearest Neighbor | SMOTE-ENN | [57]
Synthetic Minority Over-sampling Technique + Tomek’s modification of Condensed Nearest Neighbor | SMOTE-TL | [57]
Selective Preprocessing of Imbalanced Data | SPIDER | [64]
Selective Preprocessing of Imbalanced Data 2 | SPIDER2 | [65]
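For the hybrid methods of Table 5, imbalanced-learn provides SMOTE-ENN and SMOTE-TL; SPIDER and SPIDER2 are not included and would require their original implementations. As before, this mapping is an assumption offered for replication purposes only.

```python
from imblearn.combine import SMOTEENN, SMOTETomek

# Hybrid samplers roughly matching Table 5; each exposes fit_resample(X, y).
hybrids = {
    "SMOTE-ENN": SMOTEENN(random_state=0),
    "SMOTE-TL": SMOTETomek(random_state=0),
    # SPIDER and SPIDER2 have no imbalanced-learn counterpart.
}
```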
Table 6. Imbalance ratio for oversampling algorithms.
Datasets | Original | ADASYN | ADOMS | SMOTE-BL | ROS | SMOTE-SL | SMOTE
Australian | 1.25 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
Default credit | 3.52 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
German | 2.33 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
Give me credit | 13.96 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
Iranian | 19.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
Japanese | 1.26 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
Polish_1 | 24.93 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
Polish_2 | 24.43 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
Polish_3 | 20.22 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
Polish_4 | 18.01 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
Polish_5 | 13.41 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
Qualitative | 1.34 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
The PAKDD | 4.12 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
Table 7. Imbalance ratio for undersampling algorithms.
Datasets | Original | CNN | CNNTL | NCL | OSS | RUS | SBC | TL
Australian | 1.25 | 3.37 | 8.59 | 1.35 | 5.50 | 1.00 | 2.22 | 1.04
Default credit | 3.52 | 1.13 | 2.24 | 1.78 | 1.21 | 1.00 | - | 2.69
German | 2.33 | 1.12 | 2.96 | 1.08 | 1.75 | 1.00 | 1.96 | 1.68
Give me credit | 13.96 | 1.59 | 1.36 | 10.84 | 1.39 | 1.00 | 2.00 | 12.95
Iranian | 19.00 | 2.37 | 1.47 | 14.53 | 2.18 | 1.00 | - | 17.91
Japanese | 1.26 | 3.45 | 9.07 | 1.33 | 5.30 | 1.00 | 1.96 | 1.01
Polish_1 | 24.93 | 2.53 | 1.40 | 21.14 | 2.50 | 1.00 | - | 23.70
Polish_2 | 24.43 | 2.62 | 1.41 | 20.31 | 2.56 | 1.00 | - | 22.97
Polish_3 | 20.22 | 2.58 | 1.33 | 16.33 | 2.45 | 1.00 | - | 18.86
Polish_4 | 18.01 | 2.38 | 1.25 | 14.42 | 2.22 | 1.00 | - | 16.72
Polish_5 | 13.41 | 1.84 | 1.15 | 10.23 | 1.62 | 1.00 | - | 12.28
Qualitative | 1.34 | 6.36 | 5.85 | 1.28 | 6.27 | 1.00 | - | 1.32
The PAKDD | 4.12 | 1.51 | 1.15 | 2.41 | 1.41 | 1.00 | 2.00 | 3.84
Table 8. Imbalance ratio for hybrid algorithms.
Datasets | SMOTE-ENN | SMOTE-TL | SPIDER | SPIDER2 | Original
Australian | 1.03 | 1.31 | 1.17 | 1.49 | 1.25
Default credit | 1.39 | 1.50 | 1.45 | 1.02 | 3.52
German | 1.19 | 1.66 | 1.06 | 1.30 | 2.33
Give me credit | 1.13 | 1.17 | 4.22 | 2.62 | 13.96
Iranian | 1.18 | 1.14 | 6.18 | 3.89 | 19.00
Japanese | 1.02 | 1.28 | 1.15 | 1.46 | 1.26
Polish_1 | 1.24 | 1.19 | 6.87 | 4.14 | 24.93
Polish_2 | 1.28 | 1.21 | 6.49 | 3.84 | 24.43
Polish_3 | 1.32 | 1.26 | 5.50 | 3.31 | 20.22
Polish_4 | 1.30 | 1.24 | 5.08 | 3.13 | 18.01
Polish_5 | 1.29 | 1.25 | 3.98 | 2.51 | 13.41
Qualitative | 1.01 | 1.01 | 1.29 | 1.31 | 1.34
The PAKDD | 1.33 | 1.16 | 1.64 | 1.05 | 4.12
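The imbalance ratios reported in Tables 6–8 can be recomputed directly from the class label vector before and after resampling (IR = size of the majority class divided by the size of the minority class), as in the short sketch below with hypothetical labels.

```python
from collections import Counter

def imbalance_ratio(labels):
    """IR = majority class size / minority class size."""
    counts = sorted(Counter(labels).values())
    return counts[-1] / counts[0]

# Hypothetical label vectors before and after undersampling.
y_before = ["good"] * 952 + ["bad"] * 50
y_after = ["good"] * 50 + ["bad"] * 50
print(round(imbalance_ratio(y_before), 2), round(imbalance_ratio(y_after), 2))  # 19.04 1.0
```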
Table 9. AUC for HACT classifier after oversampling algorithms.
Datasets | Original | ADASYN | ADOMS | SMOTE-BL | ROS | SMOTE-SL | SMOTE
Australian | 0.59 | 0.59 | 0.59 | 0.59 | 0.61 | 0.59 | 0.60
Default credit | 0.95 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.97
German | 0.64 | 0.63 | 0.64 | 0.64 | 0.64 | 0.64 | 0.64
Give me credit | 0.63 | 0.63 | 0.60 | 0.63 | 0.64 | 0.63 | 0.62
Iranian | 0.61 | 0.60 | 0.61 | 0.60 | 0.61 | 0.60 | 0.60
Japanese | 0.64 | 0.65 | 0.64 | 0.66 | 0.64 | 0.62 | 0.63
Polish_1 | 0.70 | 0.65 | 0.66 | 0.62 | 0.65 | 0.64 | 0.66
Polish_2 | 0.58 | 0.58 | 0.58 | 0.58 | 0.58 | 0.58 | 0.58
Polish_3 | 0.52 | 0.53 | 0.52 | 0.51 | 0.52 | 0.53 | 0.52
Polish_4 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50
Polish_5 | 0.53 | 0.52 | 0.52 | 0.53 | 0.52 | 0.52 | 0.52
Qualitative | 0.55 | 0.55 | 0.55 | 0.55 | 0.55 | 0.56 | 0.55
The PAKDD | 0.62 | 0.62 | 0.62 | 0.62 | 0.62 | 0.62 | 0.62
Table 10. AUC for HACT classifier after undersampling algorithms.
Datasets | Original | CNNTL | NCL | OSS | RUS | SBC | TL
Australian | 0.59 | 0.58 | 0.59 | 0.57 | 0.59 | 0.64 | 0.59
Default credit | 0.95 | 0.89 | 0.95 | 0.88 | 0.96 | - | 0.95
German | 0.64 | 0.64 | 0.63 | 0.64 | 0.64 | 0.50 | 0.64
Give me credit | 0.64 | 0.62 | 0.63 | 0.61 | 0.63 | 0.64 | 0.63
Iranian | 0.61 | 0.61 | 0.61 | 0.61 | 0.61 | - | 0.61
Japanese | 0.64 | 0.64 | 0.65 | 0.67 | 0.65 | 0.50 | 0.64
Polish_1 | 0.66 | 0.62 | 0.61 | 0.61 | 0.66 | - | 0.62
Polish_2 | 0.58 | 0.58 | 0.58 | 0.58 | 0.58 | - | 0.58
Polish_3 | 0.52 | 0.50 | 0.52 | 0.51 | 0.51 | - | 0.52
Polish_4 | 0.50 | 0.49 | 0.50 | 0.50 | 0.50 | - | 0.50
Polish_5 | 0.53 | 0.52 | 0.53 | 0.52 | 0.53 | - | 0.53
Qualitative | 0.55 | 0.55 | 0.56 | 0.55 | 0.56 | - | 0.55
The PAKDD | 0.62 | 0.63 | 0.62 | 0.63 | 0.62 | 0.53 | 0.63
Table 11. AUC for HACT classifier after hybrid algorithms.
Datasets | Original | SMOTE-ENN | SMOTE-TL | SPIDER2 | SPIDER
Australian | 0.59 | 0.60 | 0.57 | 0.59 | 0.59
Default credit | 0.95 | 0.96 | 0.96 | 0.95 | 0.95
German | 0.64 | 0.63 | 0.63 | 0.62 | 0.63
Give me credit | 0.63 | 0.63 | 0.63 | 0.63 | 0.63
Iranian | 0.61 | 0.61 | 0.61 | 0.61 | 0.61
Japanese | 0.64 | 0.64 | 0.65 | 0.65 | 0.65
Polish_1 | 0.70 | 0.65 | 0.62 | 0.60 | 0.62
Polish_2 | 0.58 | 0.58 | 0.58 | 0.58 | 0.58
Polish_3 | 0.52 | 0.52 | 0.52 | 0.53 | 0.52
Polish_4 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50
Polish_5 | 0.53 | 0.52 | 0.52 | 0.52 | 0.53
Qualitative | 0.55 | 0.56 | 0.56 | 0.55 | 0.55
The PAKDD | 0.62 | 0.62 | 0.62 | 0.62 | 0.63
Table 12. AUC for the Extended Gamma classifier after oversampling algorithms.
Datasets | Original | ADASYN | ADOMS | SMOTE-BL | ROS | SMOTE-SL | SMOTE
Australian | 0.84 | 0.85 | 0.83 | 0.85 | 0.83 | 0.83 | 0.83
Default credit | 0.99 | 0.98 | 0.99 | 0.97 | 0.99 | 0.99 | 0.99
German | 0.67 | 0.63 | 0.61 | 0.64 | 0.67 | 0.67 | 0.64
Give me credit | 0.68 | 0.67 | 0.54 | 0.67 | 0.69 | 0.69 | 0.65
Iranian | 0.59 | 0.51 | 0.50 | 0.52 | 0.59 | 0.59 | 0.51
Japanese | 0.68 | 0.63 | 0.57 | 0.58 | 0.67 | 0.64 | 0.63
Polish_1 | 0.82 | 0.85 | 0.83 | 0.85 | 0.82 | 0.82 | 0.82
Polish_2 | 0.57 | 0.55 | 0.50 | 0.60 | 0.61 | 0.58 | 0.61
Polish_3 | 0.76 | 0.50 | 0.50 | 0.50 | 0.76 | 0.58 | 0.50
Polish_4 | 0.71 | 0.50 | 0.50 | 0.50 | 0.71 | 0.55 | 0.50
Polish_5 | 0.71 | 0.50 | 0.50 | 0.50 | 0.72 | 0.62 | 0.50
Qualitative | 0.75 | 0.50 | 0.50 | 0.50 | 0.75 | 0.67 | 0.50
The PAKDD | 0.79 | 0.50 | 0.50 | 0.51 | 0.78 | 0.71 | 0.50
Table 13. AUC for the Extended Gamma classifier after undersampling algorithms.
Datasets | Original | CNNTL | NCL | OSS | RUS | SBC | TL
Australian | 0.84 | 0.84 | 0.87 | 0.84 | 0.83 | 0.84 | 0.85
Default credit | 0.99 | 0.94 | 0.99 | 0.94 | 0.99 | - | 0.99
German | 0.67 | 0.69 | 0.66 | 0.68 | 0.67 | 0.53 | 0.66
Give me credit | 0.68 | 0.68 | 0.68 | 0.69 | 0.68 | 0.67 | 0.70
Iranian | 0.59 | 0.60 | 0.59 | 0.60 | 0.59 | - | 0.59
Japanese | 0.68 | 0.70 | 0.68 | 0.69 | 0.67 | 0.50 | 0.68
Polish_1 | 0.82 | 0.85 | 0.86 | 0.83 | 0.82 | - | 0.85
Polish_2 | 0.61 | 0.60 | 0.60 | 0.61 | 0.61 | - | 0.61
Polish_3 | 0.76 | 0.76 | 0.75 | 0.74 | 0.75 | - | 0.76
Polish_4 | 0.71 | 0.68 | 0.71 | 0.69 | 0.70 | - | 0.71
Polish_5 | 0.71 | 0.72 | 0.71 | 0.72 | 0.71 | - | 0.71
Qualitative | 0.75 | 0.73 | 0.75 | 0.73 | 0.74 | - | 0.75
The PAKDD | 0.79 | 0.78 | 0.79 | 0.79 | 0.79 | 0.55 | 0.78
Table 14. AUC for Extended Gamma classifier after hybrid algorithms.
Datasets | Original | SMOTE-ENN | SMOTE-TL | SPIDER2 | SPIDER
Australian | 0.84 | 0.83 | 0.85 | 0.86 | 0.85
Default credit | 0.99 | 1.00 | 1.00 | 0.98 | 0.98
German | 0.67 | 0.65 | 0.65 | 0.65 | 0.65
Give me credit | 0.68 | 0.69 | 0.69 | 0.68 | 0.68
Iranian | 0.59 | 0.51 | 0.51 | 0.59 | 0.59
Japanese | 0.68 | 0.63 | 0.63 | 0.67 | 0.67
Polish_1 | 0.82 | 0.82 | 0.85 | 0.87 | 0.86
Polish_2 | 0.57 | 0.56 | 0.57 | 0.60 | 0.61
Polish_3 | 0.76 | 0.50 | 0.50 | 0.76 | 0.76
Polish_4 | 0.71 | 0.50 | 0.50 | 0.71 | 0.71
Polish_5 | 0.71 | 0.50 | 0.50 | 0.71 | 0.72
Qualitative | 0.75 | 0.50 | 0.50 | 0.75 | 0.74
The PAKDD | 0.79 | 0.50 | 0.50 | 0.80 | 0.80
Table 15. AUC for NAC classifier after oversampling algorithms.
Datasets | Original | ADASYN | ADOMS | SMOTE-BL | ROS | SMOTE-SL | SMOTE
Australian | 0.83 | 0.83 | 0.84 | 0.84 | 0.84 | 0.84 | 0.83
Default credit | 0.99 | 0.98 | 1.00 | 0.98 | 0.99 | 0.99 | 0.99
German | 0.67 | 0.65 | 0.67 | 0.67 | 0.67 | 0.66 | 0.67
Give me credit | 0.69 | 0.69 | 0.68 | 0.68 | 0.70 | 0.68 | 0.69
Iranian | 0.61 | 0.59 | 0.60 | 0.61 | 0.61 | 0.60 | 0.59
Japanese | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50
Polish_1 | 0.84 | 0.84 | 0.84 | 0.84 | 0.83 | 0.84 | 0.83
Polish_2 | 0.60 | 0.61 | 0.61 | 0.61 | 0.61 | 0.61 | 0.61
Polish_3 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50
Polish_4 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50
Polish_5 | 0.50 | 0.50 | 0.50 | 0.50 | 0.51 | 0.50 | 0.50
Qualitative | 0.53 | 0.51 | 0.52 | 0.50 | 0.53 | 0.50 | 0.51
The PAKDD | 0.62 | 0.59 | 0.62 | 0.60 | 0.62 | 0.59 | 0.60
Table 16. AUC for NAC classifier after undersampling algorithms.
Datasets | Original | CNNTL | NCL | OSS | RUS | SBC | TL
Australian | 0.83 | 0.84 | 0.83 | 0.83 | 0.84 | 0.82 | 0.83
Default credit | 0.99 | 0.94 | 0.99 | 0.94 | 0.99 | - | 0.99
German | 0.67 | 0.65 | 0.58 | 0.67 | 0.67 | 0.52 | 0.57
Give me credit | 0.69 | 0.64 | 0.68 | 0.68 | 0.69 | 0.68 | 0.69
Iranian | 0.61 | 0.53 | 0.68 | 0.54 | 0.61 | - | 0.62
Japanese | 0.50 | 0.50 | 0.50 | 0.49 | 0.54 | 0.50 | 0.50
Polish_1 | 0.83 | 0.83 | 0.83 | 0.81 | 0.85 | - | 0.83
Polish_2 | 0.61 | 0.55 | 0.61 | 0.59 | 0.59 | - | 0.61
Polish_3 | 0.50 | 0.50 | 0.50 | 0.50 | 0.52 | - | 0.50
Polish_4 | 0.50 | 0.50 | 0.52 | 0.51 | 0.50 | - | 0.50
Polish_5 | 0.50 | 0.49 | 0.55 | 0.50 | 0.50 | - | 0.53
Qualitative | 0.53 | 0.54 | 0.53 | 0.53 | 0.52 | - | 0.54
The PAKDD | 0.62 | 0.50 | 0.65 | 0.52 | 0.50 | 0.50 | 0.57
Table 17. AUC for NAC classifier after hybrid algorithms.
Datasets | Original | SMOTE-ENN | SMOTE-TL | SPIDER2 | SPIDER
Australian | 0.83 | 0.84 | 0.82 | 0.82 | 0.83
Default credit | 0.99 | 0.98 | 1.00 | 0.98 | 0.98
German | 0.67 | 0.60 | 0.58 | 0.51 | 0.51
Give me credit | 0.69 | 0.70 | 0.70 | 0.68 | 0.69
Iranian | 0.61 | 0.73 | 0.65 | 0.61 | 0.62
Japanese | 0.50 | 0.50 | 0.50 | 0.50 | 0.50
Polish_1 | 0.84 | 0.83 | 0.84 | 0.82 | 0.83
Polish_2 | 0.60 | 0.61 | 0.61 | 0.61 | 0.61
Polish_3 | 0.50 | 0.50 | 0.50 | 0.51 | 0.50
Polish_4 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50
Polish_5 | 0.50 | 0.53 | 0.52 | 0.51 | 0.50
Qualitative | 0.53 | 0.53 | 0.52 | 0.53 | 0.52
The PAKDD | 0.62 | 0.53 | 0.56 | 0.61 | 0.61
Table 18. AUC for SNDAM classifier after oversampling algorithms.
Datasets | Original | ADASYN | ADOMS | SMOTE-BL | ROS | SMOTE-SL | SMOTE
Australian | 0.80 | 0.79 | 0.79 | 0.81 | 0.80 | 0.80 | 0.80
Default credit | 1.00 | 1.00 | 0.99 | 1.00 | 1.00 | 1.00 | 1.00
German | 0.61 | 0.62 | 0.61 | 0.62 | 0.61 | 0.61 | 0.62
Give me credit | 0.58 | 0.60 | 0.59 | 0.62 | 0.58 | 0.59 | 0.58
Iranian | 0.57 | 0.58 | 0.59 | 0.59 | 0.57 | 0.58 | 0.59
Japanese | 0.65 | 0.67 | 0.66 | 0.66 | 0.65 | 0.65 | 0.68
Polish_1 | 0.82 | 0.82 | 0.83 | 0.83 | 0.82 | 0.82 | 0.82
Polish_2 | 0.91 | 0.91 | 0.91 | 0.90 | 0.91 | 0.78 | 0.91
Polish_3 | 0.54 | 0.58 | 0.55 | 0.56 | 0.54 | 0.54 | 0.57
Polish_4 | 0.52 | 0.52 | 0.53 | 0.52 | 0.52 | 0.52 | 0.53
Polish_5 | 0.52 | 0.55 | 0.54 | 0.54 | 0.52 | 0.53 | 0.55
Qualitative | 0.54 | 0.59 | 0.56 | 0.56 | 0.54 | 0.54 | 0.57
The PAKDD | 0.58 | 0.62 | 0.60 | 0.61 | 0.58 | 0.58 | 0.61
Table 19. AUC for SNDAM classifier after undersampling algorithms.
Datasets | Original | CNNTL | NCL | OSS | RUS | SBC | TL
Australian | 0.80 | 0.71 | 0.81 | 0.77 | 0.80 | 0.81 | 0.82
Default credit | 1.00 | 0.97 | 1.00 | 0.96 | 0.99 | - | 1.00
German | 0.61 | 0.59 | 0.65 | 0.62 | 0.62 | 0.52 | 0.64
Give me credit | 0.58 | 0.58 | 0.63 | 0.59 | 0.59 | 0.62 | 0.61
Iranian | 0.57 | 0.57 | 0.62 | 0.58 | 0.63 | - | 0.60
Japanese | 0.65 | 0.72 | 0.68 | 0.68 | 0.70 | 0.50 | 0.67
Polish_1 | 0.82 | 0.70 | 0.81 | 0.79 | 0.82 | - | 0.84
Polish_2 | 0.91 | 0.78 | 0.81 | 0.81 | 0.80 | - | 0.85
Polish_3 | 0.54 | 0.56 | 0.56 | 0.58 | 0.65 | - | 0.54
Polish_4 | 0.52 | 0.55 | 0.53 | 0.55 | 0.60 | - | 0.53
Polish_5 | 0.52 | 0.58 | 0.54 | 0.56 | 0.60 | - | 0.53
Qualitative | 0.54 | 0.57 | 0.58 | 0.57 | 0.62 | - | 0.56
The PAKDD | 0.58 | 0.62 | 0.65 | 0.61 | 0.68 | 0.53 | 0.61
Table 20. AUC for SNDAM classifier after hybrid algorithms.
Datasets | Original | SMOTE-ENN | SMOTE-TL | SPIDER2 | SPIDER
Australian | 0.80 | 0.85 | 0.81 | 0.84 | 0.84
Default credit | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
German | 0.61 | 0.64 | 0.64 | 0.64 | 0.64
Give me credit | 0.58 | 0.64 | 0.64 | 0.61 | 0.60
Iranian | 0.57 | 0.65 | 0.64 | 0.59 | 0.58
Japanese | 0.65 | 0.66 | 0.68 | 0.65 | 0.65
Polish_1 | 0.82 | 0.87 | 0.84 | 0.85 | 0.85
Polish_2 | 0.91 | 0.91 | 0.77 | 0.84 | 0.86
Polish_3 | 0.54 | 0.59 | 0.60 | 0.54 | 0.54
Polish_4 | 0.52 | 0.54 | 0.54 | 0.52 | 0.52
Polish_5 | 0.52 | 0.55 | 0.55 | 0.53 | 0.53
Qualitative | 0.54 | 0.62 | 0.61 | 0.54 | 0.54
The PAKDD | 0.58 | 0.67 | 0.67 | 0.59 | 0.59
Table 21. Results of the Holm test comparing the performance of the NAC classifier after undersampling algorithms.
i | Algorithm | z | p | Holm
6 | SBC | 2.768916 | 0.005624 | 0.008333
5 | CNNTL | 1.861075 | 0.062734 | 0.010000
4 | OSS | 1.770291 | 0.076679 | 0.012500
3 | TL | 0.408529 | 0.682886 | 0.016667
2 | NCL | 0.136176 | 0.891682 | 0.025000
1 | RUS | 0.045392 | 0.963795 | 0.050000
Table 22. Results of the Holm test comparing the performance of the Extended Gamma classifier after oversampling algorithms. The test rejects the hypotheses having an unadjusted p-value ≤ 0.025.
i | Algorithm | z | p | Holm
6 | ADOMS | 3.994502 | 0.000065 | 0.008333
5 | SMOTE | 3.041268 | 0.002356 | 0.010000
4 | ADASYN | 2.995876 | 0.002737 | 0.012500
3 | SMOTE-BL | 2.496564 | 0.012540 | 0.016667
2 | SMOTE-SL | 0.998625 | 0.317976 | 0.025000
1 | Original | 0.136176 | 0.891682 | 0.050000
Table 23. Results of the Holm test comparing the performance of the Extended Gamma classifier after undersampling algorithms. The test rejects the hypotheses having an unadjusted p-value ≤ 0.01.
i | Algorithm | z | p | Holm
6 | SBC | 3.404405 | 0.000663 | 0.008333
5 | RUS | 1.361762 | 0.173273 | 0.010000
4 | CNNTL | 0.817057 | 0.413896 | 0.012500
3 | OSS | 0.635489 | 0.525110 | 0.016667
2 | Original | 0.499313 | 0.617559 | 0.025000
1 | NCL | 0.272352 | 0.785351 | 0.050000
Table 24. Results of the Holm test comparing the performance of the Extended Gamma classifier after hybrid algorithms. The test rejects the hypotheses having an unadjusted p-value ≤ 0.016667.
i | Algorithm | z | p | Holm
4 | SMOTE-ENN | 2.790782 | 0.005258 | 0.012500
3 | SMOTE-TL | 2.108590 | 0.034980 | 0.016667
2 | Original | 0.496139 | 0.619796 | 0.025000
1 | SPIDER | 0.186052 | 0.852404 | 0.050000
Table 25. Results of the Holm test comparing the performance of the SNDAM classifier after oversampling algorithms. The test rejects the hypotheses having an unadjusted p-value ≤ 0.016667.
i | Algorithm | z | p | Holm
6 | Original | 2.995876 | 0.002737 | 0.008333
5 | ROS | 2.995876 | 0.002737 | 0.010000
4 | SMOTE-SL | 2.814308 | 0.004888 | 0.012500
3 | ADOMS | 1.225586 | 0.220355 | 0.016667
2 | ADASYN | 0.226960 | 0.820455 | 0.025000
1 | SMOTE-BL | 0.226960 | 0.820455 | 0.050000
Table 26. Results of the Holm test comparing the performance of the SNDAM classifier after undersampling algorithms. The test rejects the hypotheses having an unadjusted p-value ≤ 0.01.
i | Algorithm | z | p | Holm
6 | SBC | 2.950484 | 0.003173 | 0.008333
5 | CNNTL | 2.314995 | 0.020613 | 0.010000
4 | Original | 2.088035 | 0.036795 | 0.012500
3 | OSS | 1.679506 | 0.093053 | 0.016667
2 | TL | 0.726273 | 0.467671 | 0.025000
1 | NCL | 0.090784 | 0.927664 | 0.050000
Table 27. Results of the Holm test comparing the performance of the SNDAM classifier after hybrid algorithms. The test rejects the hypotheses having an unadjusted p-value ≤ 0.05.
i | Algorithm | z | p | Holm
4 | Original | 4.279198 | 0.000019 | 0.012500
3 | SPIDER | 2.914816 | 0.003559 | 0.016667
2 | SPIDER2 | 2.790782 | 0.005258 | 0.025000
1 | SMOTE-TL | 1.178330 | 0.238665 | 0.050000
