Article

Multi-Objective Evolutionary Rule-Based Classification with Categorical Data

by Fernando Jiménez 1,*, Carlos Martínez 1, Luis Miralles-Pechuán 2, Gracia Sánchez 1 and Guido Sciavicco 3
1 Department of Information and Communication Engineering, University of Murcia, 30071 Murcia, Spain
2 Centre for Applied Data Analytics Research (CeADAR), University College Dublin, D04 Dublin 4, Ireland
3 Department of Mathematics and Computer Science, University of Ferrara, 44121 Ferrara, Italy
* Author to whom correspondence should be addressed.
Entropy 2018, 20(9), 684; https://doi.org/10.3390/e20090684
Submission received: 30 July 2018 / Revised: 3 September 2018 / Accepted: 6 September 2018 / Published: 7 September 2018
(This article belongs to the Special Issue Statistical Machine Learning for Human Behaviour Analysis)

Abstract:
The ease of interpretation of a classification model is essential for the task of validating it. Sometimes it is necessary to explain clearly the classification process behind a model's predictions. Models which are inherently easier to interpret can be effortlessly related to the context of the problem, and their predictions can be, if necessary, ethically and legally evaluated. In this paper, we propose a novel method to generate rule-based classifiers from categorical data that can be readily interpreted. Classifiers are generated using a multi-objective optimization approach with two main objectives: maximizing the performance of the learned classifier and minimizing its number of rules. The multi-objective evolutionary algorithms ENORA and NSGA-II have been adapted to optimize the performance of the classifier based on three different machine learning metrics: accuracy, area under the ROC curve, and root mean square error. We have extensively compared the classifiers generated by our proposed method with classifiers generated by classical methods such as PART, JRip, OneR and ZeroR. The experiments have been conducted in full training mode, in 10-fold cross-validation mode, and in train/test splitting mode. To make the results reproducible, we have used the well-known and publicly available datasets Breast Cancer, Monk’s Problem 2, Tic-Tac-Toe-Endgame, Car, kr-vs-kp and Nursery. After performing an exhaustive statistical test on our results, we conclude that the proposed method is able to generate highly accurate and easy-to-interpret classification models.


1. Introduction

Supervised Learning is the branch of Machine Learning (ML) [1] focused on modeling the behavior of systems found in the environment. Supervised models are created from a set of past records, each of which usually consists of an input vector labeled with an output. A supervised model is an algorithm that approximates the function that maps inputs to outputs [2]. The best models are those that predict the output of new inputs most accurately. Thanks to modern computing capabilities, and to the digitization of ever-increasing quantities of data, supervised learning techniques nowadays play a leading role in many applications. The first classification systems date back to the 1990s; in those days, researchers focused on both precision and interpretability, and the systems to be modeled were relatively simple. Years later, when it became necessary to model more complex behaviors, researchers concentrated on developing increasingly precise models, leaving interpretability aside. Artificial Neural Networks (ANN) [3] and, more recently, Deep Learning Neural Networks (DLNN) [4], as well as Support Vector Machines (SVM) [5] and Instance-based Learning (IBL) [6], are archetypical examples of this approach. A DLNN, for example, is a large mesh of ordered nodes arranged in a hierarchical manner and composed of a huge number of variables. DLNNs are capable of modeling very complex behaviors, but it is extremely difficult to understand the logic behind their predictions, and similar considerations apply to SVMs and IBL, although the underlying principles are different. These models are known as black-box methods. While there are applications in which knowing the rationale behind a prediction is not necessarily relevant (e.g., predicting a currency’s future value, whether or not a user clicks on an advert, or the amount of rain in a certain area), there are other situations where the interpretability of a model plays a key role.
The interpretability of a classification system refers to its ability to explain its behavior in a way that is easily understandable by a user [7]. In other words, a model is considered interpretable when a human is able to understand the logic behind its predictions. In this way, interpretable classification models allow external validation by an expert. Additionally, there are certain disciplines, such as medicine, where it is essential to provide information about decision making for ethical and human reasons. Likewise, when a public institution asks an authority for permission to investigate an alleged offender, or when the CEO of a company wants to make a difficult decision which can seriously change the direction of the company, some kind of explanation to justify these decisions may be required. In these situations, using transparent (also called grey-box) models is recommended. While there is a general consensus on how the performance of a classification system is measured (popular metrics include accuracy, area under the ROC curve, and root mean square error), there is no universally accepted metric to measure the interpretability of a model. Nor is there an ideal balance between the interpretability and performance of classification systems; this depends on the specific application domain. However, the rule of thumb says that the simpler a classification system is, the easier it is to interpret. Rule-based Classifiers (RBC) [8,9] are among the most popular interpretable models, and some authors define the degree of interpretability of an RBC as the number of its rules or the number of conditions that the rules have. These metrics tend to reward models with as few and as simple rules as possible [10,11]. In general, RBCs are classification learning systems that achieve a high level of interpretability because they are based on a human-like logic. Rules follow a very simple schema:
IF (Condition 1) and (Condition 2) and … (Condition N) THEN (Statement)
and the fewer rules the models have and the fewer conditions and attributes the rules have, the easier it will be for a human to understand the logic behind each classification. In fact, RBCs are so natural in some applications that they are used to interpret other classification models such as Decision Trees (DT) [12]. RBCs constitute the basis of more complex classification systems based on fuzzy logic [13] such as LogitBoost or AdaBoost [14].
Our approach investigates the conflict between accuracy and interpretability as a multi-objective optimization problem. We define a solution as a set of rules (that is, a classifier), and establish two objectives to be maximized: interpretability and accuracy. We decided to solve this problem by applying multi-objective evolutionary algorithms (MOEA) [15,16] as meta-heuristics, and, in particular, two known algorithms: NSGA-II [15] and ENORA [17]. They are both state-of-the-art evolutionary algorithms which have been applied, and compared, on several occasions [18,19,20]. NSGA-II is very well-known and has the advantage of being available in many implementations, while ENORA generally has a higher performance. In the current literature, MOEAs are mainly used for learning RBCs based on fuzzy logic [18,21,22,23,24,25,26]. However, Fuzzy RBCs are designed for numerical data, from which fuzzy sets are constructed and represented by linguistic labels. In this paper, on the contrary, we are interested in RBCs for categorical data, for which a novel approach is necessary.
This paper is organized as follows. In Section 2, we introduce multi-objective constrained optimization, the evolutionary algorithms ENORA and NSGA-II, and the well-known rule-based classifier learning systems PART, JRip, OneR and ZeroR. In Section 3, we describe the structure of an RBC for categorical data, and we propose the use of multi-objective optimization for the task of learning a classifier. In Section 4, we show the result of our experiments, performed on the well-known publicly accessible datasets Breast Cancer, Monk’s Problem 2, Tic-Tac-Toe-Endgame, Car, kr-vs-kp and Nursery. The experiments allow a comparison among the performance of the classifiers learned by our technique against those of classifiers learned by PART, JRip, OneR and ZeroR, as well as a comparison between ENORA and NSGA-II for the purposes of this task. In Section 5, the results are analyzed and discussed, before concluding in Section 6. Appendix A and Appendix B show the tables of the statistical tests results. Appendix C shows the symbols and the nomenclature used in the paper.

2. Background

2.1. Multi-Objective Constrained Optimization

The term optimization [27] refers to the selection of the best element, with regard to some criteria, from a set of alternative elements. Mathematical programming [28] deals with the theory, algorithms, methods and techniques to represent and solve optimization problems. In this paper, we are interested in a class of mathematical programming problems called multi-objective constrained optimization problems [29], which can be formally defined, for l objectives and m constraints, as follows:
Min./Max.  f_i(x),  i = 1, …, l
subject to  g_j(x) ≤ 0,  j = 1, …, m        (1)
where f_i(x) (usually called objectives) and g_j(x) are arbitrary functions. Optimization problems can be naturally separated into two categories: those with discrete variables, which we call combinatorial, and those with continuous variables. In combinatorial problems, we are looking for objects from a finite, or countably infinite, set X, where objects are typically integers, sets, permutations, or graphs. In problems with continuous variables, instead, we look for real parameters belonging to some continuous domain. In Equation (1), x = (x_1, x_2, …, x_w) ∈ X^w represents the vector of decision variables, where X is the domain of each variable x_k, k = 1, …, w.
Now, let F = {x ∈ X^w | g_j(x) ≤ 0, j = 1, …, m} be the set of all feasible solutions to Equation (1). We want to find the subset of solutions S ⊆ F called the non-dominated set (or Pareto optimal set). A solution x ∈ F is non-dominated if there is no other solution x′ ∈ F that dominates x, where a solution x′ dominates x if and only if there exists i (1 ≤ i ≤ l) such that f_i(x′) improves f_i(x), and for every i (1 ≤ i ≤ l), f_i(x) does not improve f_i(x′). In other words, x′ dominates x if and only if x′ is better than x for at least one objective, and not worse than x for any other objective. The set S of non-dominated solutions of Equation (1) can be formally defined as:
S = {x ∈ F | ¬∃x′ (x′ ∈ F ∧ D(x′, x))}
where:
D(x′, x) = ∃i (1 ≤ i ≤ l, f_i(x′) < f_i(x)) ∧ ∀i (1 ≤ i ≤ l, f_i(x′) ≤ f_i(x)).
Once the set of optimal solutions is available, the most satisfactory one can be chosen by applying a preference criterion. When all the functions f_i are linear, the problem is a linear programming problem [30], the classical mathematical programming problem, for which extremely efficient algorithms to obtain the optimal solution exist (e.g., the simplex method [31]). When any of the functions f_i is non-linear, we have a non-linear programming problem [32]. A non-linear programming problem in which the objectives are arbitrary functions is, in general, intractable. In principle, any search algorithm can be used to solve combinatorial optimization problems, although it is not guaranteed to find an optimal solution. Metaheuristic methods such as evolutionary algorithms [33] are typically used to find approximate solutions for complex multi-objective optimization problems, including feature selection and fuzzy classification.
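The definitions above translate directly into code. The following Python sketch (illustrative, not part of the paper's implementation) computes the non-dominated set of a list of objective vectors, assuming all objectives are minimized:

```python
def dominates(a, b):
    """True if solution a dominates solution b (all objectives minimized):
    a is at least as good as b in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(solutions):
    """Return the Pareto optimal subset S of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Two objectives, both minimized: (classification error, number of rules).
front = non_dominated([(0.10, 4), (0.15, 3), (0.30, 2), (0.20, 5), (0.30, 6)])
print(front)  # the solutions with 5 and 6 rules are dominated
```

Note that this brute-force check performs the l · N · (N − 1) objective comparisons discussed in the next section; the fast non-dominated sorting used by ENORA and NSGA-II avoids the duplicate comparisons.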

2.2. The Multi-Objective Evolutionary Algorithms ENORA and NSGA-II

The MOEAs ENORA [17] and NSGA-II [15] use a (μ + λ) strategy (Algorithm 1) with μ = λ = popsize, where μ is the number of parents, λ is the number of children, and popsize is the population size, with binary tournament selection (Algorithm 2) and a rank function based on Pareto fronts and crowding (Algorithms 3 and 4). The difference between NSGA-II and ENORA lies in how the ranking of the individuals in the population is calculated. In ENORA, each individual belongs to a slot (as established in [34]) of the objective search space, and the rank of an individual in a population is the non-domination level of the individual in its slot. In NSGA-II, on the other hand, the rank of an individual in a population is the non-domination level of the individual in the whole population. Both ENORA and NSGA-II use the same non-dominated sorting algorithm, the fast non-dominated sort [35]. It compares each solution with the rest of the solutions and stores the results so as to avoid duplicate comparisons between pairs of solutions. For a problem with l objectives and a population with N solutions, this method needs to conduct l · N · (N − 1) objective comparisons, which means that it has a time complexity of O(l · N²) [36]. However, ENORA distributes the population into N slots (in the best case); therefore, the time complexity of ENORA is O(l · N²) in the worst case and O(l · N) in the best case.
Algorithm 1 μ + λ strategy for multi-objective optimization.
Require: T > 1 {Number of generations}
Require: N > 1 {Number of individuals in the population}
 1: Initialize P with N individuals
 2: Evaluate all individuals of P
 3: t ← 0
 4: while t < T do
 5:   Q ← ∅
 6:   i ← 0
 7:   while i < N do
 8:     Parent1 ← Binary tournament selection from P
 9:     Parent2 ← Binary tournament selection from P
10:     Child1, Child2 ← Crossover(Parent1, Parent2)
11:     Offspring1 ← Mutation(Child1)
12:     Offspring2 ← Mutation(Child2)
13:     Evaluate Offspring1
14:     Evaluate Offspring2
15:     Q ← Q ∪ {Offspring1, Offspring2}
16:     i ← i + 2
17:   end while
18:   R ← P ∪ Q
19:   P ← N best individuals from R according to the rank-crowding function in population R
20:   t ← t + 1
21: end while
22: return Non-dominated individuals from P
Algorithm 2 Binary tournament selection.
Require: P {Population}
 1: I ← Random selection from P
 2: J ← Random selection from P
 3: if I is better than J according to the rank-crowding function in population P then
 4:   return I
 5: else
 6:   return J
 7: end if
Algorithm 3 Rank-crowding function.
Require: P {Population}
Require: I, J {Individuals to compare}
 1: if rank(P, I) < rank(P, J) then
 2:   return True
 3: end if
 4: if rank(P, J) < rank(P, I) then
 5:   return False
 6: end if
 7: return Crowding_distance(P, I) > Crowding_distance(P, J)
The main reason ENORA and NSGA-II behave differently is the following. NSGA-II never selects the individual dominated by the other in the binary tournament, while, in ENORA, the individual dominated by the other may win the tournament. Figure 1 shows this behavior graphically. For example, if individuals B and C are selected for a binary tournament with NSGA-II, individual B beats C because B dominates C. Conversely, with ENORA, individual C beats B because C has a better rank in its slot than B has in its own. In this way, ENORA allows the individuals in each slot to evolve towards the Pareto front, encouraging diversity. Even though the individuals of each slot may not be the best of the whole population, this approach generates a better hypervolume than that of NSGA-II throughout the evolution process.
ENORA is our MOEA, on which we have been working intensively over the last decade. We have applied ENORA to constrained real-parameter optimization [17], fuzzy optimization [37], fuzzy classification [18], feature selection for classification [19] and feature selection for regression [34]. In this paper, we apply it to rule-based classification. The NSGA-II algorithm was designed by Deb et al. and has proven to be a very powerful and fast algorithm in multi-objective optimization contexts of all kinds. Most researchers in multi-objective evolutionary computation use NSGA-II as a baseline to compare the performance of their own algorithms. Although NSGA-II was developed in 2002, it remains a state-of-the-art algorithm, and it is still a challenge to improve on it. A recently updated version for many-objective optimization problems, called NSGA-III [38], is also available.
Algorithm 4 Crowding_distance function.
Require: P {Population}
Require: I {Individual}
Require: l {Number of objectives}
 1: for j = 1 to l do
 2:   f_j^max ← max_{A ∈ P} {f_j(A)}
 3:   f_j^min ← min_{A ∈ P} {f_j(A)}
 4:   f_j^sup(I) ← value of the jth objective for the individual adjacent above I in the jth objective
 5:   f_j^inf(I) ← value of the jth objective for the individual adjacent below I in the jth objective
 6: end for
 7: for j = 1 to l do
 8:   if f_j(I) = f_j^max or f_j(I) = f_j^min then
 9:     return ∞
10:   end if
11: end for
12: CD ← 0.0
13: for j = 1 to l do
14:   CD ← CD + (f_j^sup(I) − f_j^inf(I)) / (f_j^max − f_j^min)
15: end for
16: return CD

2.3. PART

PART (Partial DT Method [39]) is a widely used rule learning algorithm that was developed at the University of Waikato in New Zealand [40]. Experiments show that it is a very efficient algorithm in terms of both computational performance and results. PART combines the divide-and-conquer strategy typical of decision tree learning with the separate-and-conquer strategy [41] typical of rule learning, as follows. A decision tree is first constructed (using the C4.5 algorithm [42]), and the leaf with the highest coverage is converted into a rule. Then, the instances covered by that rule are discarded, and the process starts over. The result is an ordered set of rules, completed by a default rule that applies to instances that do not meet any previous rule.
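The separate-and-conquer loop underlying PART (and similar rule learners) can be sketched as follows; `build_best_rule` and `covers` stand in for the partial-tree construction and rule-matching steps, and are assumptions of this sketch rather than PART's actual code:

```python
def separate_and_conquer(instances, build_best_rule, covers):
    """Generic separate-and-conquer rule learning: repeatedly learn the best
    rule for the remaining instances, then discard the instances it covers."""
    rules = []
    while instances:
        rule = build_best_rule(instances)
        if rule is None:   # no useful rule found: stop; a default rule applies
            break
        rules.append(rule)
        instances = [x for x in instances if not covers(rule, x)]
    return rules
```

With `build_best_rule` implemented as in PART (a partial C4.5 tree whose highest-coverage leaf becomes the rule), this loop yields the ordered rule list described above.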

2.4. JRip

JRip is a fast and optimized implementation in Weka of the famous RIPPER (Repeated Incremental Pruning to Produce Error Reduction) algorithm [43]. RIPPER was proposed in [44] as a more efficient version of the incrementally reduced error pruning (IREP) rule learner developed in [45]. IREP and RIPPER work in a similar manner. They begin with a default rule and, using a training dataset, attempt to learn rules that predict exceptions to the default. Each rule learned is a conjunction of propositional literals, where each literal corresponds to a split of the data based on the value of a single feature. This family of algorithms, similar to decision trees, has the advantage of being easy to interpret, and experiments show that JRip is particularly efficient on large datasets. RIPPER and IREP use a strategy based on the separate-and-conquer method to generate an ordered set of rules that are extracted directly from the dataset. The classes are examined one by one, prioritizing those that have more elements. These algorithms are based on four basic steps (growing, pruning, optimizing and selecting) applied repetitively to each class until a stopping condition is met [44]. These steps can be summarized as follows. In the growing phase, rules are created taking into account an increasing number of predictors until the stopping criterion is satisfied (in the Weka implementation, the procedure selects the condition with the highest information gain). In the pruning phase, redundancy is eliminated and long rules are reduced. In the optimization phase, the rules generated in the previous steps are improved (if possible) by adding new attributes or new rules. Finally, in the selection phase, the best rules are selected and the others discarded.

2.5. OneR

OneR (One Rule) is a very simple, while reasonably accurate, classifier based on a frequency table. First, OneR generates a set of rules for each attribute of the dataset, and, then, it selects only one rule set: the one with the lowest error rate [46]. The rules are created using a frequency table constructed for each predictor of the class, and numerical attributes are converted into categorical values.
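As a minimal sketch of this frequency-table logic (illustrative, not Weka's actual implementation), assuming rows of categorical attribute values:

```python
from collections import Counter, defaultdict

def one_r(rows, labels):
    """OneR: build a value -> majority-class rule set for every attribute and
    keep the attribute whose rule set has the lowest training error rate."""
    best = None
    for a in range(len(rows[0])):
        table = defaultdict(Counter)            # frequency table for attribute a
        for row, y in zip(rows, labels):
            table[row[a]][y] += 1
        rule = {v: cnt.most_common(1)[0][0] for v, cnt in table.items()}
        errors = sum(y != rule[row[a]] for row, y in zip(rows, labels))
        if best is None or errors < best[0]:
            best = (errors, a, rule)
    return best[1], best[2]                     # chosen attribute and its rules

# The second attribute predicts the label without error, so OneR selects it.
attr, rules = one_r([("a", "x"), ("a", "y"), ("b", "x")], ["p", "q", "p"])
```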

2.6. ZeroR

Finally, ZeroR (Zero Rules [40]) is a classifier learner that does not create any rules and uses no attributes. ZeroR simply creates the class classification table by selecting the most frequent value. Such a classifier is obviously the simplest possible one, and its capabilities are limited to the prediction of the majority class. In the literature, it is not used for practical classification tasks, but as a generic reference to measure the performance of other classifiers.

3. Multi-Objective Optimization for Categorical Rule-Based Classification

In this section, we propose a general schema for an RBC specifically designed for categorical data. Then, we propose and describe a multi-objective optimization solution to obtain optimal categorical RBCs.

3.1. Rule-Based Classification for Categorical Data

Let Γ be a classifier composed of M rules, where each rule R_i^Γ ∈ Γ, i = 1, …, M, has the following structure:
R_i^Γ: IF x_1 = b_i1^Γ AND … AND x_p = b_ip^Γ THEN y = c_i^Γ
where, for j = 1, …, p, the antecedent b_ij^Γ takes values in a set {1, …, v_j} (v_j > 1), and the consequent c_i^Γ takes values in {1, …, w} (w > 1). Now, let x = (x_1, …, x_p) be an observed example, with x_j ∈ {1, …, v_j} for each j = 1, …, p. We propose maximum matching as the reasoning method, where the compatibility degree of the rule R_i^Γ for the example x (denoted by φ_i^Γ(x)) is calculated as the number of attributes whose value coincides with that of the corresponding antecedent in R_i^Γ, that is:
φ_i^Γ(x) = ∑_{j=1}^{p} μ_ij^Γ(x)
where:
μ_ij^Γ(x) = { 1 if x_j = b_ij^Γ; 0 if x_j ≠ b_ij^Γ }
The association degree of the example x with a class c ∈ {1, …, w} is computed by adding the compatibility degrees for the example x of each rule R_i^Γ whose consequent c_i^Γ is equal to the class c, that is:
λ_c^Γ(x) = ∑_{i=1}^{M} η_ic^Γ(x)
where:
η_ic^Γ(x) = { φ_i^Γ(x) if c = c_i^Γ; 0 if c ≠ c_i^Γ }
Therefore, the classification (or output) of the classifier Γ for the example x corresponds to the class whose association degree is maximum, that is:
f_Γ(x) = arg max_{c ∈ {1, …, w}} λ_c^Γ(x)
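The maximum matching reasoning method can be sketched in Python; here a classifier is simply a list of (antecedents, consequent) pairs (the function and variable names are illustrative, not from the authors' implementation):

```python
from collections import defaultdict

def classify(rules, x):
    """Maximum matching: each rule is (antecedents, consequent), with
    antecedents = (b_i1, ..., b_ip) and example x = (x_1, ..., x_p).
    Returns the class c with the maximum association degree lambda_c(x)."""
    assoc = defaultdict(int)
    for antecedents, consequent in rules:
        # Compatibility degree phi: number of attribute values matching the rule.
        phi = sum(1 for xj, bij in zip(x, antecedents) if xj == bij)
        assoc[consequent] += phi
    return max(assoc, key=assoc.get)

# Toy classifier with M = 3 rules, p = 2 attributes and w = 2 classes.
rules = [((1, 2), 1), ((2, 2), 2), ((2, 1), 2)]
print(classify(rules, (1, 2)))   # rule 1 matches both attributes, so class 1
```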

3.2. A Multi-Objective Optimization Solution

Let D be a dataset of K instances with p categorical input attributes, p > 0, and one categorical output attribute. Each input attribute j can take a category x_j ∈ {1, …, v_j}, v_j > 1, j = 1, …, p, and the output attribute can take a class c ∈ {1, …, w}, w > 1. The problem of finding an optimal classifier Γ, as described in the previous section, can be formulated as an instance of the multi-objective constrained problem in Equation (1) with two objectives and two constraints:
Max./Min.  F_D(Γ)
Min.       NR(Γ)
subject to:
  NR(Γ) ≥ w
  NR(Γ) ≤ M_max        (3)
In the problem (Equation (3)), the function F_D(Γ) is a performance measure of the classifier Γ over the dataset D, the function NR(Γ) is the number of rules of the classifier Γ, and the constraints NR(Γ) ≥ w and NR(Γ) ≤ M_max limit the number of rules of the classifier Γ to the interval [w, M_max], where w is the number of classes of the output attribute and M_max is a user-given maximum. The objectives F_D(Γ) and NR(Γ) are in conflict: the fewer rules the classifier has, the fewer instances it can cover, that is, a simpler classifier has less capacity for prediction. There is, therefore, an intrinsic conflict between the problem objectives (e.g., maximize accuracy and minimize model complexity) which cannot be easily aggregated into a single objective. Both objectives are also optimized simultaneously in many other classification systems, such as neural networks or decision trees [47,48]. Figure 2 shows the Pareto front of a dummy binary classification problem described as in Equation (3), with M_max = 6 rules, where F_D(Γ) is maximized. This front is composed of three non-dominated solutions (three possible classifiers) with two, three and four rules, respectively. The solutions with five and six rules are dominated (both by the solution with four rules).
Both ENORA and NSGA-II have been adapted to solve the problem described in Equation (3) with a variable-length representation based on the Pittsburgh approach, uniform random initialization, binary tournament selection, constraint handling, ranking based on non-domination level with crowding distance, and self-adaptive variation operators. The self-adaptive variation operators work at different levels of the classifier: rule crossover, rule incremental crossover, rule incremental mutation, and integer mutation.

3.2.1. Representation

We use a variable-length representation based on the Pittsburgh approach [49], where each individual I of a population contains a variable number of rules M_I, and each rule R_i^I, i = 1, …, M_I, is codified by the following components:
  • Integer values associated with the antecedents: b_ij^I ∈ {1, …, v_j}, for i = 1, …, M_I and j = 1, …, p.
  • Integer values associated with the consequents: c_i^I ∈ {1, …, w}, for i = 1, …, M_I.
Additionally, to carry out self-adaptive crossover and mutation, each individual has two discrete parameters d_I ∈ {0, …, δ} and e_I ∈ {0, …, ε} associated with crossover and mutation, where δ ≥ 0 is the number of crossover operators and ε ≥ 0 is the number of mutation operators. The values d_I and e_I for self-adaptive variation are randomly generated from {0, …, δ} and {0, …, ε}, respectively. Table 1 summarizes the representation of an individual.
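Under the representation above, an individual can be sketched as a small Python structure (the names are illustrative; this is not the authors' code, and this sketch does not yet enforce the one-rule-per-class condition of the initialization algorithm):

```python
import random
from dataclasses import dataclass

@dataclass
class Individual:
    """Pittsburgh-style variable-length individual: a list of rules plus the
    self-adaptive variation parameters d_I and e_I."""
    rules: list   # each rule is (antecedents [b_i1, ..., b_ip], consequent c_i)
    d: int        # crossover operator index in {0, ..., delta}
    e: int        # mutation operator index in {0, ..., epsilon}

def random_individual(v, w, m_max, delta=2, epsilon=2):
    """v = list of category counts per input attribute, w = number of classes."""
    m = random.randint(w, m_max)   # number of rules constrained to [w, M_max]
    rules = [([random.randint(1, vj) for vj in v], random.randint(1, w))
             for _ in range(m)]
    return Individual(rules, random.randint(0, delta), random.randint(0, epsilon))
```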

3.2.2. Constraint Handling

The constraints NR(Γ) ≥ w and NR(Γ) ≤ M_max are satisfied by means of specialized initialization and variation operators, which always generate individuals with a number of rules between w and M_max.

3.2.3. Initial Population

The initial population (Algorithm 5) is randomly generated with the following conditions:
  • Individuals are uniformly distributed with respect to the number of rules with values between w and M m a x , and with an additional constraint that specifies that there must be at least one individual for each number of rules (Steps 4–8). This ensures an adequate initial diversity in the search space in terms of the second objective of the optimization model.
  • All individuals contain at least one rule for any output class between 1 and w (Steps 16–20).
Algorithm 5 Initialize population.
Require: p > 0 {Number of categorical input attributes}
Require: v_1, …, v_p, v_j > 1, j = 1, …, p {Number of categories for the input attributes}
Require: w > 1 {Number of classes for the output attribute}
Require: δ > 0 {Number of crossover operators}
Require: ε > 0 {Number of mutation operators}
Require: M_max ≥ w {Maximum number of rules}
Require: N > 1 {Number of individuals in the population}
 1: P ← ∅
 2: for k = 1 to N do
 3:   I ← new Individual
 4:   if k ≤ M_max − w + 1 then
 5:     M_I ← k + w − 1
 6:   else
 7:     M_I ← Random(w, M_max)
 8:   end if
 9:   {Random rules R_i^I}
10:   for i = 1 to M_I do
11:     {Random integer values associated with the antecedents}
12:     for j = 1 to p do
13:       b_ij^I ← Random(1, v_j)
14:     end for
15:     {Random integer value associated with the consequent}
16:     if i ≤ w then
17:       c_i^I ← i
18:     else
19:       c_i^I ← Random(1, w)
20:     end if
21:   end for
22:   {Random integer values for self-adaptive variation}
23:   d_I ← Random(0, δ)
24:   e_I ← Random(0, ε)
25:   P ← P ∪ {I}
26: end for
27: return P

3.2.4. Fitness Functions

Since the optimization model encompasses two objectives, each individual must be evaluated with two fitness functions, which correspond to the objective functions F D ( Γ ) and NR ( Γ ) of the problem (Equation (3)). The selection of the best individuals is done using the Pareto concept in a binary tournament.

3.2.5. Variation Operators

We use self-adaptive crossover and mutation, which means that the selection of the operators is made by means of an adaptive technique. As explained in Section 3.2.1, each individual I has two integer parameters d_I ∈ {0, …, δ} and e_I ∈ {0, …, ε} which indicate the crossover and mutation to be carried out. In our case, δ = 2 and ε = 2, that is, there are two crossover operators and two mutation operators, so that d_I, e_I ∈ {0, 1, 2}. Note that the value 0 indicates that no crossover or no mutation is performed. Self-adaptive variation (Algorithm 6) generates two children from two parents by self-adaptive crossover (Algorithm 7) and self-adaptive mutation (Algorithm 8). Self-adaptive crossover of individuals I, J and self-adaptive mutation of individual I are similar to each other. First, with a probability p_v, the values d_I and e_I are replaced by a random value. Additionally, in the case of crossover, the value d_J is replaced by d_I. Then, the crossover indicated by d_I or the mutation indicated by e_I is performed. In summary, if an individual comes from a given crossover or a given mutation, that specific crossover and mutation are preserved in its offspring (d_I and e_I change only with probability p_v), so the value of p_v must be small enough to ensure a controlled evolution (in our case, we use p_v = 0.1). Although the probability of each crossover and mutation operator is not explicitly represented, it can be computed as the ratio of the individuals for which the crossover and mutation values are set to that operator. As the population evolves, individuals with more successful types of crossover and mutation become more common, so that the probability of selecting the more successful crossover and mutation types increases. Using self-adaptive crossover and mutation operators helps to maintain diversity in the population and to sustain the convergence capacity of the evolutionary algorithm, while eliminating the need to set an a priori probability for each operator. In other approaches (e.g., [50]), the probabilities of crossover and mutation vary depending on the fitness value of the solutions.
Both ENORA and NSGA-II have been implemented with two crossover operators, rule crossover (Algorithm 9) and rule incremental crossover (Algorithm 10), and two mutation operators: rule incremental mutation (Algorithm 11) and integer mutation (Algorithm 12). Rule crossover randomly exchanges two rules selected from the parents, and rule incremental crossover adds to each parent a rule randomly selected from the other parent if its number of rules is less than the maximum number of rules. On the other hand, rule incremental mutation adds a new rule to the individual if the number of rules of the individual is less than the maximum number of rules, while integer mutation carries out a uniform mutation of a random antecedent belonging to a randomly selected rule.
Algorithm 6 Variation.
Require: Parent1, Parent2 {Individuals for variation}
 1: Child1 ← Parent1
 2: Child2 ← Parent2
 3: Self-adaptive crossover(Child1, Child2)
 4: Self-adaptive mutation(Child1)
 5: Self-adaptive mutation(Child2)
 6: return Child1, Child2
Algorithm 7 Self-adaptive crossover.
Require: I, J {Individuals for crossing}
Require: p_v (0 < p_v < 1) {Probability of variation}
Require: δ > 0 {Number of different crossover operators}
 1: if a random Bernoulli variable with probability p_v takes the value 1 then
 2:   d_I ← Random(0, δ)
 3: end if
 4: d_J ← d_I
 5: Carry out the type of crossover specified by d_I:
    {0: No crossover}
    {1: Rule crossover}
    {2: Rule incremental crossover}
Algorithm 8 Self-adaptive mutation.
Require: I {Individual for mutation}
Require: p_v (0 < p_v < 1) {Probability of variation}
Require: ε > 0 {Number of different mutation operators}
1: if a random Bernoulli variable with probability p_v takes the value 1 then
2:   e_I ← Random(0, ε)
3: end if
4: Carry out the type of mutation specified by e_I {0: no mutation; 1: rule incremental mutation; 2: integer mutation}
Algorithm 9 Rule crossover.
Require: I, J {Individuals for crossing}
1: i ← Random(1, M_I)
2: j ← Random(1, M_J)
3: Exchange rules R_i^I and R_j^J
Algorithm 10 Rule incremental crossover.
Require: I, J {Individuals for crossing}
Require: M_max {Maximum number of rules}
1: if M_I < M_max then
2:   j ← Random(1, M_J)
3:   Add R_j^J to individual I
4: end if
5: if M_J < M_max then
6:   i ← Random(1, M_I)
7:   Add R_i^I to individual J
8: end if
Algorithm 11 Rule incremental mutation.
Require: I {Individual for mutation}
Require: M_max {Maximum number of rules}
1: if M_I < M_max then
2:   Add a new random rule to I
3: end if
Algorithm 12 Integer mutation.
Require: I {Individual for mutation}
Require: p > 0 {Number of categorical input attributes}
Require: v_1, …, v_p, v_j > 1, j = 1, …, p {Number of categories for the input attributes}
1: i ← Random(1, M_I)
2: j ← Random(1, p)
3: b_{ij}^I ← Random(1, v_j)
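The interaction between the self-adaptive operator code and the two crossover operators (Algorithms 7, 9 and 10) can be sketched in Python. This is a minimal sketch, not the authors' implementation: individuals are reduced to a dictionary holding a rule list and the operator code d, and the rule representation and M_MAX value are simplifying assumptions.

```python
import random

P_V = 0.1    # probability of re-sampling the operator code (p_v in the text)
DELTA = 2    # number of crossover operators (delta)
M_MAX = 12   # maximum number of rules per classifier (assumed value)

def rule_crossover(I, J):
    """Algorithm 9: exchange one randomly chosen rule between the parents."""
    i = random.randrange(len(I["rules"]))
    j = random.randrange(len(J["rules"]))
    I["rules"][i], J["rules"][j] = J["rules"][j], I["rules"][i]

def rule_incremental_crossover(I, J):
    """Algorithm 10: copy a random rule from the other parent if below M_MAX."""
    if len(I["rules"]) < M_MAX:
        I["rules"].append(random.choice(J["rules"]))
    if len(J["rules"]) < M_MAX:
        J["rules"].append(random.choice(I["rules"]))

def self_adaptive_crossover(I, J):
    """Algorithm 7: the operator code d travels with the individual."""
    if random.random() < P_V:
        I["d"] = random.randint(0, DELTA)  # re-sample the operator code
    J["d"] = I["d"]                        # the second parent inherits I's code
    if I["d"] == 1:
        rule_crossover(I, J)
    elif I["d"] == 2:
        rule_incremental_crossover(I, J)
    # d == 0: no crossover is performed
```

With a small p_v, an individual produced by a successful operator usually keeps that operator's code, so successful operators spread through the population, as described above.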

4. Experiment and Results

To ensure the reproducibility of the experiments, we have used publicly available datasets. In particular, we have designed two sets of experiments, one using the Breast Cancer [51] dataset, and the other using the Monk’s Problem 2 [52] dataset.

4.1. The Breast Cancer Dataset

The Breast Cancer dataset encompasses 286 instances. Each instance corresponds to a patient who suffered from breast cancer and is described by nine attributes. The class to be predicted is binary and represents whether the patient has suffered a recurring cancer event. In this dataset, 85 instances are positive and 201 are negative. Table 2 summarizes the attributes of the dataset. Nine instances present some missing values; in the pre-processing phase, these have been replaced by the mode of the corresponding attribute.
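The mode-imputation step of the pre-processing phase can be reproduced in a few lines of Python. This is a sketch under assumptions: the `"?"` missing-value marker and the helper name `impute_mode` are not from the paper.

```python
from collections import Counter

def impute_mode(rows, missing="?"):
    """Replace each missing categorical value by the mode of its column."""
    cols = list(zip(*rows))
    modes = []
    for col in cols:
        counts = Counter(v for v in col if v != missing)
        modes.append(counts.most_common(1)[0][0])  # most frequent category
    return [[modes[j] if v == missing else v for j, v in enumerate(row)]
            for row in rows]
```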

4.2. The Monk’s Problem 2 Dataset

In July 1991, the monks of Corsendonk Priory attended a summer course that was being held in their priory, namely the 2nd European Summer School on Machine Learning. After a week, the monks could not yet clearly identify the best ML algorithms, nor which algorithms to avoid in which cases. For this reason, they decided to create the three so-called Monk's problems, and used them to determine which ML algorithms were the best. These problems, rather simple and completely artificial, later became famous (because of their peculiar origin), and have been used as a benchmark for many algorithms on several occasions. In particular, in [53], they were used to test the performance of (then) state-of-the-art learning algorithms such as AQ17-DCI, AQ17-HCI, AQ17-FCLS, AQ14-NT, AQ15-GA, Assistant Professional, mFOIL, ID5R, IDL, ID5R-hat, TDIDT, ID3, AQR, CN2, WEB CLASS, ECOBWEB, PRISM, Backpropagation, and Cascade Correlation. For our research, we have used Monk's Problem 2, which contains six categorical input attributes and a binary output attribute, summarized in Table 3. The target concept associated with Monk's Problem 2 is the binary outcome of the logical formula:
Exactly two of: {head_shape = round, body_shape = round, is_smiling = yes, holding = sword, jacket_color = red, has_tie = yes}
In this dataset, the original training and testing sets were merged to allow other sampling procedures. The set contains a total of 601 instances, and no missing values.
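The "exactly two of six" target concept above translates directly into code. A minimal sketch, where the function name and attribute parameter names are illustrative:

```python
def monks2_class(head_shape, body_shape, is_smiling, holding,
                 jacket_color, has_tie):
    """True iff exactly two of the six conditions hold (Monk's Problem 2 target)."""
    conditions = [
        head_shape == "round",
        body_shape == "round",
        is_smiling == "yes",
        holding == "sword",
        jacket_color == "red",
        has_tie == "yes",
    ]
    return sum(conditions) == 2
```

The concept is a symmetric function of the six attributes, which is why it is hard to express compactly as a small set of conjunctive rules.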

4.3. Optimization Models

We have conducted different experiments with different optimization models to calculate the overall performance of our proposed technique and to see the effect of optimizing different objectives for the same problem. First, we have designed a multi-objective constrained optimization model based on the accuracy:
Max. ACC_D(Γ)
Min. NR(Γ)
subject to:
NR(Γ) ≥ w
NR(Γ) ≤ M_max
where ACC D ( Γ ) is the proportion of correctly classified instances (both true positives and true negatives) among the total number of instances [54] obtained with the classifier Γ for the dataset D . ACC D ( Γ ) is defined as:
ACC_D(Γ) = (1/K) Σ_{i=1}^{K} T_D(Γ, i)
where K is the number of instances of the dataset D , and T D ( Γ , i ) is the result of the classification of the instance i in D with the classifier Γ , that is:
T_D(Γ, i) = 1 if ĉ_i^Γ = c_i^D, and T_D(Γ, i) = 0 if ĉ_i^Γ ≠ c_i^D
where ĉ_i^Γ is the predicted value of the ith instance by Γ, and c_i^D is the corresponding true value in D. Our second optimization model is based on the area under the ROC curve:
Max. AUC_D(Γ)
Min. NR(Γ)
subject to:
NR(Γ) ≥ w
NR(Γ) ≤ M_max
where AUC D ( Γ ) is the area under the ROC curve obtained with the classifier Γ with the dataset D . The ROC (Receiver Operating Characteristic) curve [55] is a graphical representation of the sensitivity versus the specificity index for a classifier varying the discrimination threshold value. Such a curve can be used to generate statistics that summarize the performance of a classifier, and it has been shown in [54] to be a simple, yet complete, empirical description of the decision threshold effect, indicating all possible combinations of the relative frequencies of the various kinds of correct and incorrect decisions. The area under the ROC curve can be computed as follows [56]:
AUC_D(Γ) = ∫_0^1 S_D(Γ, E_D^{-1}(Γ, v)) dv
where S_D(Γ, t) (sensitivity) is the proportion of positive instances classified as positive by the classifier Γ in D, 1 − E_D(Γ, t) (specificity) is the proportion of negative instances classified as negative by Γ in D, and t is the discrimination threshold. Finally, our third constrained optimization model is based on the root mean square error (RMSE):
Min. RMSE_D(Γ)
Min. NR(Γ)
subject to:
NR(Γ) ≥ w
NR(Γ) ≤ M_max
where RMSE D ( Γ ) is defined as the square root of the mean square error obtained with a classifier Γ in the dataset D :
RMSE_D(Γ) = sqrt( (1/K) Σ_{i=1}^{K} (ĉ_i^Γ − c_i^D)² )
where ĉ_i^Γ is the predicted value of the ith instance for the classifier Γ, and c_i^D is the corresponding output value in the dataset D. Accuracy, area under the ROC curve, and root mean square error are all well-accepted measures of classifier performance. Therefore, it is natural to use them as fitness functions. In this way, we can establish which one behaves better in the optimization phase, and we can compare the results with those in the literature.
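The two closed-form measures, ACC and RMSE, can be computed directly from the predictions; AUC requires ranking scores and is omitted here. A minimal sketch for a binary class encoded as 0/1:

```python
import math

def accuracy(y_true, y_pred):
    """ACC: fraction of instances whose prediction equals the true class."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """RMSE over 0/1-encoded class labels."""
    return math.sqrt(sum((t - p) ** 2
                         for t, p in zip(y_true, y_pred)) / len(y_true))
```

Note that, for 0/1 labels, RMSE² equals the error rate, so minimizing RMSE and maximizing ACC rank crisp classifiers identically; they differ only when predictions carry graded confidence.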

4.4. Choosing the Best Pareto Front

To compare the performance of ENORA and NSGA-II as metaheuristics in this particular optimization task, we use the hypervolume metric [57,58]. The hypervolume measures, simultaneously, the diversity and the optimality of the non-dominated solutions. The main advantage of the hypervolume over other standard measures, such as the error ratio, the generational distance, the maximum Pareto-optimal front error, the spread, the maximum spread, or the chi-square-like deviation, is that it can be computed without knowing the optimal population, which is not always available [15]. The hypervolume is defined as the volume of the search space dominated by a population P, and is formulated as:
HV(P) = volume( ⋃_{i=1}^{|Q|} v_i )
where Q ⊆ P is the set of non-dominated individuals of P, and v_i is the volume dominated by individual i. The hypervolume ratio (HVR) is then defined as the ratio of the volume of the non-dominated search space to the volume of the entire search space, and is formulated as follows:
HVR(P) = HV(P) / V(S)
where V(S) is the volume of the search space. Computing the HVR requires reference points that identify the maximum and minimum values of each objective. For the RBC optimization proposed in this work, the following minimum (F_D^lower, NR^lower) and maximum (F_D^upper, NR^upper) points are set for each objective in the multi-objective optimization models in Equations (4)–(6):
F_D^lower = 0, F_D^upper = 1, NR^lower = w, NR^upper = M_max
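For the two-objective case used here, the HVR can be computed with a simple sweep over the normalized objective space. This is a sketch under the stated assumptions: F is maximized in [0, 1], NR is minimized in [w, M_max], and the reference points above normalize the search space so that V(S) = 1 and HVR equals the hypervolume.

```python
def hvr(points, w, m_max):
    """Hypervolume ratio of a list of (F, NR) points: maximize F, minimize NR."""
    # keep only non-dominated points (simple O(n^2) filter; assumes no duplicates)
    nd = [p for p in points
          if not any(q[0] >= p[0] and q[1] <= p[1] and q != p for q in points)]
    # transform to a maximize/maximize problem with reference point (0, 0):
    # g = 1 - normalized NR, so larger g means fewer rules
    pts = sorted(((f, 1 - (nr - w) / (m_max - w)) for f, nr in nd), reverse=True)
    area, prev_g = 0.0, 0.0
    for f, g in pts:           # sweep in decreasing F, accumulating rectangles
        if g > prev_g:
            area += f * (g - prev_g)
            prev_g = g
    return area
```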
A first single execution of all six models (three driven by ENORA and three by NSGA-II), over both datasets, has been designed with the purpose of showing the shape of the final Pareto fronts and of comparing the hypervolume ratio of the models. The results of this single execution, with population size equal to 50 and 20,000 generations (1,000,000 evaluations in total), are shown in Figure 3 and Figure 4 (by default, M_max is set to 10, to which we add 2 because both datasets have a binary class). Regarding the configuration of the number of generations and the size of the population, our criterion was established as follows: once the number of evaluations is set to 1,000,000, we can use a population of 100 individuals with 10,000 generations, or a population of 50 individuals with 20,000 generations. The first configuration (100 × 10,000) allows a greater diversity with respect to the number of rules of the classifiers, while the second one (50 × 20,000) allows a better adjustment of the classifier parameters and, therefore, a greater precision. Given that the maximum number of rules of the classifiers is not greater than 12, we consider that 50 individuals are sufficient to represent four classifiers on average for each number of rules (4 × 12 = 48 ≈ 50). Thus, we prefer the second configuration (50 × 20,000), because having more generations increases the chances of building classifiers with a higher precision.
Experiments were executed on an x64-based PC with one Intel64 Family 6 Model 60 Stepping 3 GenuineIntel processor at 3201 MHz and 8131 MB of RAM. Table 4 shows the run time of each method over both datasets. Note that, although ENORA has lower algorithmic complexity than NSGA-II, it took longer in our experiments. This is because the evaluation time of individuals in ENORA is higher: ENORA maintains more diversity than NSGA-II and therefore evaluates classifiers with more rules.
From these results, we can deduce that, first, ENORA maintains a higher diversity of the population, and achieves a better hypervolume ratio with respect to NSGA-II, and, second, using accuracy as the first objective generates better fronts than using the area under the ROC curve, which, in turn, performs better than using the root mean square error.

4.5. Comparing Our Method with Other Classifier Learning Systems (Full Training Mode)

To perform an initial comparison between the performance of the classifiers obtained with the proposed method and that of the classifiers obtained with classical methods (PART, JRip, OneR and ZeroR), we executed the six models again in full training mode.
The parameters have been configured as in the previous experiment (population size equal to 50 and 20,000 generations), except for the M_max parameter, which was set to 2 for the Breast Cancer dataset and to 9 for Monk's Problem 2. Observe that, since M_min = 2 in both cases, executing the optimization models with M_max = 2 leads to a single-objective search for the Breast Cancer dataset. In fact, preliminary experiments showed that the classical classifier learning systems tend to return very small, although not very precise, sets of rules on Breast Cancer, which justifies our choice. On the other hand, executing the classical rule learners on Monk's Problem 2 returns more diverse sets of rules, which justifies choosing a higher M_max in that case. To decide, a posteriori, which individual is chosen from the final front, we have used the default criterion: the individual with the best value on the first objective is returned. In the case of Monk's Problem 2, that individual has seven rules. The comparison is shown in Table 5 and Table 6, which report, for each classifier, the following information: number of rules, percent correct, true positive rate, false positive rate, precision, recall, F-measure, Matthews correlation coefficient, area under the ROC curve, area under the precision-recall curve, and root mean square error. For the Breast Cancer dataset, the best result emerged from the proposed method, in the optimization model driven by NSGA-II with root mean square error as the first objective (see Table 7); only PART was able to achieve similar, although slightly worse, results, but at the price of having 15 rules, making the system clearly not interpretable. In the case of the Monk's Problem 2 dataset, PART returned a model with 47 rules, which is not interpretable by any standard, although it is very accurate.
The best interpretable result is the one with seven rules returned by ENORA, driven by the root mean square error (see Table 8). The experiments for classical learners have been conducted using the default parameters.

4.6. Comparing Our Method with Other Classifier Learning Systems (Cross-Validation and Train/Test Percentage Split Mode)

To test the capabilities of our methodology in a more significant way, we proceeded as follows. First, we designed a cross-validated experiment for the Breast Cancer dataset, in which we iterated three times a 10-fold cross-validation learning process [59] and considered the average value of the performance metrics percent correct, area under the ROC curve, and serialized model size over all results. Second, we designed a train/test percentage split experiment for the Monk's Problem 2 dataset, in which we iterated ten times a 66% (training) versus 33% (testing) split and considered, again, the average of the same metrics. Finally, we performed a statistical test on the results, to understand whether they show any statistically significant differences. An execution of our methodology, and of the standard classical learners, has been performed to obtain the models to be tested under precisely the same conditions as the experiments in Section 4.5. It is worth observing that using two different types of evaluation allows us to make sure that our results are not influenced by the type of experiment. The results of the experiments are shown in Table 9 and Table 10.
The statistical tests aim to verify whether there are significant differences among the means of each metric: percent correct, area under the ROC curve, and serialized model size. We proceeded as follows. First, we checked the normality of each sample by means of the Shapiro–Wilk test. Then, if the normality and sphericity conditions were met, we applied one-way repeated measures ANOVA; otherwise, we applied the Friedman test. In the latter case, when statistically significant differences were detected, we applied the Nemenyi post-hoc test to locate them. Tables A1–A12 in Appendix A show the results of the performed tests for the Breast Cancer dataset for each of the three metrics, and Tables A13–A24 in Appendix B show the results for the Monk's Problem 2 dataset.
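The test-selection logic can be sketched with SciPy. This is a sketch, not the authors' script: the sample layout (one list of per-run metric values per algorithm) is an assumption, the sphericity check is omitted, and the Nemenyi post-hoc procedure is not in SciPy and would follow separately after a rejection.

```python
from scipy import stats

def omnibus_test(samples, alpha=0.05):
    """samples: dict mapping algorithm name -> list of per-run metric values
    (all lists of equal length). Returns (all_normal, friedman_p)."""
    # Shapiro-Wilk normality check on each algorithm's sample
    all_normal = all(stats.shapiro(v).pvalue > alpha for v in samples.values())
    # Friedman omnibus test across the paired samples (used when any sample
    # is non-normal; otherwise repeated-measures ANOVA would apply)
    stat, p = stats.friedmanchisquare(*samples.values())
    return all_normal, p
```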

4.7. Additional Experiments

Finally, we show the results of the evaluation with 10-fold cross-validation for Monk’s problem 2 dataset and for the following four other datasets:
  • Tic-Tac-Toe-Endgame dataset, with 9 input attributes, 958 instances, and binary class (Table 11).
  • Car dataset, with 6 input attributes, 1728 instances, and 4 output classes (Table 12).
  • Chess (King-Rook vs. King-Pawn) (kr-vs-kp), with 36 input attributes, 3196 instances, and binary class (Table 13).
  • Nursery dataset, with 8 input attributes, 12,960 instances, and 5 output classes (Table 14).
We have used the ENORA algorithm together with the ACC D and RMSE D objective functions in this case because these combinations have produced the best results for the Breast Cancer and Monk’s problem 2 datasets evaluated in 10-fold cross-validation (population size equal to 50, 20,000 generations and M m a x = 10 + number of classes). Table 15 shows the results of the best combination ENORA-ACC or ENORA-RMSE together with the results of the classical rule-based classifiers.

5. Analysis of Results and Discussion

The results of our tests allow for several considerations. The first interesting observation is that NSGA-II identifies fewer solutions than ENORA on the Pareto front, which implies less diversity and therefore a worse hypervolume ratio, as shown in Figure 3 and Figure 4. This is not surprising: on several other occasions [19,34,60], it has been shown that ENORA maintains a higher diversity in the population than other well-known evolutionary algorithms, with a generally positive influence on the final results. Comparing the results in full training mode against those in cross-validation or in splitting mode makes it evident that our solution produces classification models that are more resilient to over-fitting. For example, the classifier learned by PART on Monk's Problem 2 presents a 94.01% accuracy in full training mode that drops to 73.51% in splitting mode. A similar behavior, although with a smaller drop in accuracy, is shown by the classifier learned on the Breast Cancer dataset; at the same time, the classifier learned by ENORA driven by accuracy shows only a 5.57% drop in one case, and even an improvement in the other (see Table 5, Table 6, Table 9, and Table 10). This phenomenon is easily explained by looking at the number of rules: the more rules in a classifier, the higher the risk of over-fitting; PART produces very accurate classifiers, but at the price of adding many rules, which affects not only the interpretability of the model but also its resilience to over-fitting. Full training results seem to indicate that when the optimization model is driven by RMSE the classifiers are more accurate; nevertheless, they are also more prone to over-fitting, indicating that, on average, the optimization models driven by accuracy are preferable.
From the statistical tests (whose results are shown in Appendix A and Appendix B), we conclude that there are no statistically significant differences among the six variants of the proposed optimization model, which suggests that the advantages of our method do not depend directly on a specific evolutionary algorithm or on the specific performance measure used to drive the evolution. Significant statistical differences between our method and very simple classical methods such as OneR were expected. Significant statistical differences between our method and a well-consolidated one such as PART have not been found, but the price to be paid for using PART to obtain results similar to ours is a very high number of rules (15 vs. 2 in one case, and 47 vs. 7 in the other).
We would like to highlight that both the Breast Cancer dataset and the Monk's Problem 2 dataset are difficult to approximate with interpretable classifiers, and that none of the analyzed classifiers obtains high accuracy rates under cross-validation. Even powerful black-box classifiers, such as Random Forest and Logistic, obtain success rates below 70% in 10-fold cross-validation for these datasets. However, ENORA obtains a better balance (trade-off) between precision and interpretability than the rest of the classifiers. For the rest of the analyzed datasets, the accuracy obtained with ENORA is substantially higher. For example, for the Tic-Tac-Toe-Endgame dataset, ENORA obtains a 98.3299% success percentage with only two rules in cross-validation, while PART obtains 94.2589% with 49 rules, and JRip obtains 97.8079% with nine rules. With respect to the results obtained on the Car, kr-vs-kp and Nursery datasets, better success percentages can be obtained if the maximum number of evaluations is increased. However, better success percentages imply a greater number of rules, to the detriment of the interpretability of the models.

6. Conclusions and Future Works

In this paper, we have proposed a novel technique for categorical classifier learning. Our proposal is based on defining the problem of learning a classifier as a multi-objective optimization problem, and solving it by suitably adapting an evolutionary algorithm to this task; our two objectives are minimizing the number of rules (for a better interpretability of the classifier) and maximizing a metric of performance. Depending on the particular metric that is chosen, (slightly) different optimization models arise. We have tested our proposal, in a first instance, on two different publicly available datasets, Breast Cancer (in which each instance represents a patient that has suffered from breast cancer and is described by nine attributes, and the class to be predicted represents the fact that the patient has suffered a recurring event) and Monk’s Problem 2 (which is an artificial, well-known dataset in which the class to be predicted represents a logical function), using two different evolutionary algorithms, namely ENORA and NSGA-II, and three different choices as a performance metric, i.e., accuracy, the area under the ROC curve, and the root mean square error. Additionally, we have shown the results of the evaluation in 10-fold cross-validation of the publicly available Tic-Tac-Toe-Endgame, Car, kr-vs-kp and Nursery datasets.
Our initial motivation was to design a classifier learning system that produces interpretable, yet accurate, classifiers: since interpretability is a direct function of the number of rules, we conclude that such an objective has been achieved. As an aside, observe that our approach allows the user to decide, beforehand, a maximum number of rules; this can also be done in PART and JRip, but only indirectly. Finally, the idea underlying our approach is that multiple classifiers are explored at the same time in the same execution, and this allows us to choose the best compromise between the performance and the interpretability of a classifier a posteriori.
As future work, we envisage that our methodology can benefit from an embedded feature selection mechanism. In fact, all attributes are (ideally) used in every rule of a classifier learned by our optimization model. By simply relaxing such a constraint, and by suitably re-defining the first objective in the optimization model (e.g., by minimizing the sum of the lengths of all rules, or similar measures), the resulting classifiers will naturally present rules that use more features as well as rules that use fewer (clearly, the implementation must be adapted to obtain an initial population in which the classifiers have rules of different lengths, as well as mutation operators that allow a rule to grow or to shrink). Although this approach does not follow the classical definition of feature selection (in which a subset of features is selected to reduce the dataset over which a classifier is learned), it is natural to imagine that it may produce classifiers that are even more accurate and, at the same time, more interpretable.
Currently, we are implementing our own version of multi-objective differential evolution (MODE) for rule-based classification for inclusion in the Weka Open Source Software issued under the GNU General Public License. The implementation of other algorithms, such as MOEA/D, their adaptation in the Weka development platform and subsequent analysis and comparison are planned for future work.

Author Contributions

Conceptualization, F.J. and G.S. (Gracia Sánchez); Methodology, F.J. and G.S. (Guido Sciavicco); Software, G.S. (Gracia Sánchez) and C.M.; Validation, F.J., G.S. (Gracia Sánchez) and C.M.; Formal Analysis, F.J. and G.S. (Guido Sciavicco); Investigation, F.J. and G.S. (Gracia Sánchez); Resources, L.M.; Data Curation, L.M.; Writing—Original Draft Preparation, F.J., L.M. and G.S. (Guido Sciavicco); Writing—Review and Editing, F.J., L.M. and G.S. (Guido Sciavicco); Visualization, F.J.; Supervision, F.J.; Project Administration, F.J.; and Funding Acquisition, F.J., L.M., G.S. (Gracia Sánchez) and G.S. (Guido Sciavicco).

Funding

This research received no external funding.

Acknowledgments

This study was partially supported by computing facilities of Extremadura Research Centre for Advanced Technologies (CETA-CIEMAT), funded by the European Regional Development Fund (ERDF). CETA-CIEMAT belongs to CIEMAT and the Government of Spain.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ML: Machine learning
ANN: Artificial neural networks
DLNN: Deep learning neural networks
CEO: Chief executive officer
SVM: Support vector machines
IBL: Instance-based learning
DT: Decision trees
RBC: Rule-based classifiers
ROC: Receiver operating characteristic
RMSE: Root mean square error performance metric
FL: Fuzzy logic
MOEA: Multi-objective evolutionary algorithms
NSGA-II: Non-dominated sorting genetic algorithm, 2nd version
ENORA: Evolutionary non-dominated radial slots based algorithm
PART: Partial decision tree classifier
JRip: RIPPER classifier of Weka
RIPPER: Repeated incremental pruning to produce error reduction
OneR: One rule classifier
ZeroR: Zero rule classifier
ENORA-ACC: ENORA with objective function defined as accuracy
ENORA-AUC: ENORA with objective function defined as area under the ROC curve
ENORA-RMSE: ENORA with RMSE objective function
NSGA-II-ACC: NSGA-II with objective function defined as accuracy
NSGA-II-AUC: NSGA-II with objective function defined as area under the ROC curve
NSGA-II-RMSE: NSGA-II with RMSE objective function
HVR: Hypervolume ratio
TP: True positive
FP: False positive
MCC: Matthews correlation coefficient
PRC: Precision-recall curve

Appendix A. Statistical Tests for Breast Cancer Dataset

Table A1. Shapiro–Wilk normality test p-values for percent correct metric—Breast Cancer dataset.
Algorithm | p-Value | Null Hypothesis
ENORA-ACC | 0.5316 | Not Rejected
ENORA-AUC | 0.3035 | Not Rejected
ENORA-RMSE | 0.7609 | Not Rejected
NSGA-II-ACC | 0.1734 | Not Rejected
NSGA-II-AUC | 0.3802 | Not Rejected
NSGA-II-RMSE | 0.6013 | Not Rejected
PART | 0.0711 | Not Rejected
JRip | 0.5477 | Not Rejected
OneR | 0.316 | Not Rejected
ZeroR | 3.818 × 10^-6 | Rejected
Table A2. Friedman p-value for percent correct metric—Breast Cancer dataset.
Test | p-Value | Null Hypothesis
Friedman | 5.111 × 10^-4 | Rejected
Table A3. Nemenyi post-hoc procedure for percent correct metric—Breast Cancer dataset.
 | ENORA-ACC | ENORA-AUC | ENORA-RMSE | NSGA-II-ACC | NSGA-II-AUC | NSGA-II-RMSE | PART | JRip | OneR
ENORA-AUC | 0.2597 | - | - | - | - | - | - | - | -
ENORA-RMSE | 0.9627 | 0.9627 | - | - | - | - | - | - | -
NSGA-II-ACC | 0.9981 | 0.8047 | 1.0000 | - | - | - | - | - | -
NSGA-II-AUC | 0.2951 | 1.0000 | 0.9735 | 0.8386 | - | - | - | - | -
NSGA-II-RMSE | 1.0000 | 0.2169 | 0.9436 | 0.9960 | 0.2486 | - | - | - | -
PART | 0.1790 | 1.0000 | 0.9186 | 0.6997 | 1.0000 | 0.1461 | - | - | -
JRip | 0.9909 | 0.8956 | 1.0000 | 1.0000 | 0.9186 | 0.9840 | 0.8164 | - | -
OneR | 0.0004 | 0.6414 | 0.0451 | 0.0108 | 0.5961 | 0.0002 | 0.7546 | 0.0212 | -
ZeroR | 0.2377 | 1.0000 | 0.9538 | 0.7803 | 1.0000 | 0.1973 | 1.0000 | 0.8783 | 0.6709
Table A4. Summary of statistically significant differences for percent correct metric—Breast Cancer dataset.
 | ENORA-ACC | ENORA-RMSE | NSGA-II-ACC | NSGA-II-RMSE | JRip
OneR | ENORA-ACC | ENORA-RMSE | NSGA-II-ACC | NSGA-II-RMSE | JRip
Table A5. Shapiro–Wilk normality test p-values for area under the ROC curve metric—Breast Cancer dataset.
Algorithm | p-Value | Null Hypothesis
ENORA-ACC | 0.6807 | Not Rejected
ENORA-AUC | 0.3171 | Not Rejected
ENORA-RMSE | 0.6125 | Not Rejected
NSGA-II-ACC | 0.0871 | Not Rejected
NSGA-II-AUC | 0.5478 | Not Rejected
NSGA-II-RMSE | 0.6008 | Not Rejected
PART | 0.6066 | Not Rejected
JRip | 0.2978 | Not Rejected
OneR | 0.4531 | Not Rejected
ZeroR | 0.0000 | Rejected
Table A6. Friedman p-value for area under the ROC curve metric—Breast Cancer dataset.
Test | p-Value | Null Hypothesis
Friedman | 8.232 × 10^-10 | Rejected
Table A7. Nemenyi post-hoc procedure for area under the ROC curve metric—Breast Cancer dataset.
 | ENORA-ACC | ENORA-AUC | ENORA-RMSE | NSGA-II-ACC | NSGA-II-AUC | NSGA-II-RMSE | PART | JRip | OneR
ENORA-AUC | 1.0000 | - | - | - | - | - | - | - | -
ENORA-RMSE | 0.9972 | 0.9990 | - | - | - | - | - | - | -
NSGA-II-ACC | 0.9999 | 1.0000 | 1.0000 | - | - | - | - | - | -
NSGA-II-AUC | 1.0000 | 1.0000 | 1.0000 | 1.0000 | - | - | - | - | -
NSGA-II-RMSE | 0.9990 | 0.9997 | 1.0000 | 1.0000 | 1.0000 | - | - | - | -
PART | 0.9999 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | - | - | -
JRip | 1.0000 | 1.0000 | 0.9992 | 1.0000 | 1.0000 | 0.9998 | 1.0000 | - | -
OneR | 0.0041 | 0.0062 | 0.0790 | 0.0323 | 0.0281 | 0.0582 | 0.0345 | 0.0067 | -
ZeroR | 3.8 × 10^-7 | 7.2 × 10^-7 | 4.6 × 10^-5 | 9.8 × 10^-6 | 7.8 × 10^-6 | 2.7 × 10^-5 | 1.1 × 10^-5 | 8.1 × 10^-7 | 0.6854
Table A8. Summary of statistically significant differences for area under the ROC curve metric—Breast Cancer dataset.
 | ENORA-ACC | ENORA-AUC | ENORA-RMSE | NSGA-II-ACC | NSGA-II-AUC | NSGA-II-RMSE | PART | JRip
OneR | ENORA-ACC | ENORA-AUC | - | NSGA-II-ACC | NSGA-II-AUC | - | PART | JRip
ZeroR | ENORA-ACC | ENORA-AUC | ENORA-RMSE | NSGA-II-ACC | NSGA-II-AUC | NSGA-II-RMSE | PART | JRip
Table A9. Shapiro–Wilk normality test p-values for serialized model size metric—Breast Cancer dataset.
Algorithm | p-Value | Null Hypothesis
ENORA-ACC | 5.042 × 10^-5 | Rejected
ENORA-AUC | 2.997 × 10^-7 | Rejected
ENORA-RMSE | 4.762 × 10^-4 | Rejected
NSGA-II-ACC | 4.88 × 10^-6 | Rejected
NSGA-II-AUC | 2.339 × 10^-7 | Rejected
NSGA-II-RMSE | 2.708 × 10^-6 | Rejected
PART | 0.3585 | Not Rejected
JRip | 9.086 × 10^-3 | Rejected
OneR | 1.007 × 10^-7 | Rejected
ZeroR | 0.0000 | Rejected
Table A10. Friedman p-value for serialized model size metric—Breast Cancer dataset.
Test | p-Value | Null Hypothesis
Friedman | 2.2 × 10^-16 | Rejected
Table A11. Nemenyi post-hoc procedure for serialized model size metric—Breast Cancer dataset.
 | ENORA-ACC | ENORA-AUC | ENORA-RMSE | NSGA-II-ACC | NSGA-II-AUC | NSGA-II-RMSE | PART | JRip | OneR
ENORA-AUC | 0.9998 | - | - | - | - | - | - | - | -
ENORA-RMSE | 0.0053 | 0.0004 | - | - | - | - | - | - | -
NSGA-II-ACC | 0.3871 | 0.0942 | 0.8872 | - | - | - | - | - | -
NSGA-II-AUC | 0.8872 | 0.4894 | 0.3871 | 0.9988 | - | - | - | - | -
NSGA-II-RMSE | 4.1 × 10^-5 | 1.3 × 10^-6 | 0.9860 | 0.2169 | 0.0244 | - | - | - | -
PART | 4.7 × 10^-9 | 5.6 × 10^-11 | 0.1973 | 0.0013 | 3.3 × 10^-5 | 0.8689 | - | - | -
JRip | 0.2712 | 0.6997 | 1.2 × 10^-8 | 7.0 × 10^-5 | 0.0025 | 6.3 × 10^-12 | 6.9 × 10^-14 | - | -
OneR | 0.0062 | 0.0546 | 1.5 × 10^-12 | 5.5 × 10^-8 | 5.5 × 10^-6 | 8.3 × 10^-14 | 8.3 × 10^-14 | 0.9584 | -
ZeroR | 1.9 × 10^-5 | 0.0004 | 7.3 × 10^-14 | 8.6 × 10^-12 | 2.3 × 10^-9 | 8.5 × 10^-14 | <2 × 10^-16 | 0.2377 | 0.9584
Table A12. Summary of statistically significant differences for serialized model size metric—Breast Cancer dataset.
 | ENORA-ACC | ENORA-AUC | ENORA-RMSE | NSGA-II-ACC | NSGA-II-AUC | NSGA-II-RMSE | PART
ENORA-RMSE | ENORA-ACC | NSGA-II-AUC | - | - | - | - | -
NSGA-II-RMSE | ENORA-ACC | ENORA-AUC | - | - | NSGA-II-AUC | - | -
PART | ENORA-ACC | ENORA-AUC | - | NSGA-II-ACC | NSGA-II-AUC | - | -
JRip | - | - | JRip | JRip | JRip | JRip | JRip
OneR | OneR | - | OneR | OneR | OneR | OneR | OneR
ZeroR | ZeroR | ZeroR | ZeroR | ZeroR | ZeroR | ZeroR | ZeroR

Appendix B. Statistical Tests for Monk’s Problem 2 Dataset

Table A13. Shapiro–Wilk normality test p-values for percent correct metric—Monk’s Problem 2 dataset.

Algorithm       p-Value      Null Hypothesis
ENORA-ACC       0.6543       Not Rejected
ENORA-AUC       0.6842       Not Rejected
ENORA-RMSE      0.0135       Rejected
NSGA-II-ACC     0.979        Not Rejected
NSGA-II-AUC     0.382        Not Rejected
NSGA-II-RMSE    0.0486       Rejected
PART            0.5671       Not Rejected
JRip            0.075        Rejected
OneR            4.672×10⁻⁶   Rejected
ZeroR           4.672×10⁻⁶   Rejected
Table A14. Friedman p-value for percent correct metric—Monk’s Problem 2 dataset.

            p-Value       Null Hypothesis
Friedman    1.292×10⁻⁷    Rejected
Table A15. Nemenyi post-hoc procedure for percent correct metric—Monk’s Problem 2 dataset.

              ENORA-ACC   ENORA-AUC   ENORA-RMSE   NSGA-II-ACC   NSGA-II-AUC   NSGA-II-RMSE   PART      JRip     OneR
ENORA-AUC     0.8363      -           -            -             -             -              -         -        -
ENORA-RMSE    1.0000      0.9471      -            -             -             -              -         -        -
NSGA-II-ACC   0.1907      0.9902      0.3481       -             -             -              -         -        -
NSGA-II-AUC   0.0126      0.6294      0.0342       0.9958        -             -              -         -        -
NSGA-II-RMSE  0.0126      0.6294      0.0342       0.9958        1.0000        -              -         -        -
PART          0.8714      1.0000      0.9631       0.9841        0.5769        0.5769         -         -        -
JRip          2.1×10⁻⁶    0.0048      1.0×10⁻⁵     0.1341        0.6806        0.6806         0.0036    -        -
OneR          0.0001      0.0743      0.0006       0.6032        0.9875        0.9875         0.0601    0.9984   -
ZeroR         0.0001      0.0743      0.0006       0.6032        0.9875        0.9875         0.0601    0.9984   1.0000
Table A16. Summary of statistically significant differences for percent correct metric—Monk’s Problem 2 dataset.

              ENORA-ACC   ENORA-AUC   ENORA-RMSE   PART
NSGA-II-AUC   ENORA-ACC   -           ENORA-RMSE   -
NSGA-II-RMSE  ENORA-ACC   -           ENORA-RMSE   -
JRip          ENORA-ACC   ENORA-AUC   ENORA-RMSE   PART
OneR          ENORA-ACC   -           ENORA-RMSE   -
ZeroR         ENORA-ACC   -           ENORA-RMSE   -
Table A17. Shapiro–Wilk normality test p-values for area under the ROC curve metric—Monk’s Problem 2 dataset.

Algorithm       p-Value    Null Hypothesis
ENORA-ACC       0.4318     Not Rejected
ENORA-AUC       0.7044     Not Rejected
ENORA-RMSE      0.0033     Rejected
NSGA-II-ACC     0.3082     Not Rejected
NSGA-II-AUC     0.0243     Rejected
NSGA-II-RMSE    0.7802     Not Rejected
PART            0.1641     Not Rejected
JRip            0.3581     Not Rejected
OneR            0.0000     Rejected
ZeroR           0.0000     Rejected
Table A18. Friedman p-value for area under the ROC curve metric—Monk’s Problem 2 dataset.

            p-Value       Null Hypothesis
Friedman    1.051×10⁻⁸    Rejected
Table A19. Nemenyi post-hoc procedure for area under the ROC curve metric—Monk’s Problem 2 dataset.

              ENORA-ACC   ENORA-AUC   ENORA-RMSE   NSGA-II-ACC   NSGA-II-AUC   NSGA-II-RMSE   PART       JRip     OneR
ENORA-AUC     0.8363      -           -            -             -             -              -          -        -
ENORA-RMSE    1.0000      0.7054      -            -             -             -              -          -        -
NSGA-II-ACC   0.8870      0.0539      0.9556       -             -             -              -          -        -
NSGA-II-AUC   1.0000      0.8544      1.0000       0.8713        -             -              -          -        -
NSGA-II-RMSE  0.5504      0.0084      0.7054       0.9999        0.5239        -              -          -        -
PART          0.7054      1.0000      0.5504       0.0269        0.7295        0.0036         -          -        -
JRip          0.0238      2.3×10⁻⁵    0.0482       0.6806        0.0211        0.9471         7.0×10⁻⁶   -        -
OneR          0.0084      4.7×10⁻⁶    0.0186       0.4715        0.0073        0.8363         1.4×10⁻⁶   1.0000   -
ZeroR         0.0084      4.7×10⁻⁶    0.0186       0.4715        0.0073        0.8363         1.4×10⁻⁶   1.0000   1.0000
Table A20. Summary of statistically significant differences for area under the ROC curve metric—Monk’s Problem 2 dataset.

              ENORA-ACC   ENORA-AUC   ENORA-RMSE   NSGA-II-ACC   NSGA-II-AUC   NSGA-II-RMSE   PART
NSGA-II-RMSE  -           ENORA-AUC   -            -             -             -              -
PART          -           -           -            PART          -             PART           -
JRip          ENORA-ACC   ENORA-AUC   ENORA-RMSE   -             NSGA-II-AUC   -              PART
OneR          ENORA-ACC   ENORA-AUC   ENORA-RMSE   -             NSGA-II-AUC   -              PART
ZeroR         ENORA-ACC   ENORA-AUC   ENORA-RMSE   -             NSGA-II-AUC   -              PART
Table A21. Shapiro–Wilk normality test p-values for serialized model size metric—Monk’s Problem 2 dataset.

Algorithm       p-Value     Null Hypothesis
ENORA-ACC       4.08×10⁻⁵   Rejected
ENORA-AUC       0.0002      Rejected
ENORA-RMSE      0.0094      Rejected
NSGA-II-ACC     0.0192      Rejected
NSGA-II-AUC     0.0846      Rejected
NSGA-II-RMSE    0.0037      Rejected
PART            0.9721      Not Rejected
JRip            0.0068      Rejected
OneR            0.0000      Rejected
ZeroR           0.0000      Rejected
Table A22. Friedman p-value for serialized model size metric—Monk’s Problem 2 dataset.

            p-Value       Null Hypothesis
Friedman    2.657×10⁻¹³   Rejected
Table A23. Nemenyi post-hoc procedure for serialized model size metric—Monk’s Problem 2 dataset.

              ENORA-ACC   ENORA-AUC   ENORA-RMSE   NSGA-II-ACC   NSGA-II-AUC   NSGA-II-RMSE   PART       JRip     OneR
ENORA-AUC     1.0000      -           -            -             -             -              -          -        -
ENORA-RMSE    1.0000      1.0000      -            -             -             -              -          -        -
NSGA-II-ACC   1.0000      1.0000      1.0000       -             -             -              -          -        -
NSGA-II-AUC   0.9925      0.9696      0.9984       0.9841        -             -              -          -        -
NSGA-II-RMSE  0.8870      0.9556      0.7966       0.9267        0.2622        -              -          -        -
PART          0.2824      0.1752      0.3957       0.2246        0.9015        0.0027         -          -        -
JRip          0.1752      0.2824      0.1110       0.2246        0.0084        0.9752         1.0×10⁻⁵   -        -
OneR          0.0211      0.0431      0.0110       0.0304        0.0004        0.6552         1.5×10⁻⁷   0.9993   -
ZeroR         0.0012      0.0031      0.0006       0.0020        1.0×10⁻⁵     0.1907         1.3×10⁻⁹   0.9015   0.9993
Table A24. Summary of statistically significant differences for serialized model size metric—Monk’s Problem 2 dataset.

              ENORA-ACC   ENORA-AUC   ENORA-RMSE   NSGA-II-ACC   NSGA-II-AUC   NSGA-II-RMSE   PART
PART          -           -           -            -             -             NSGA-II-RMSE   -
JRip          -           -           -            -             JRip          -              JRip
OneR          OneR        OneR        OneR         OneR          OneR          -              OneR
ZeroR         ZeroR       ZeroR       ZeroR        ZeroR         ZeroR         -              ZeroR

Appendix C. Nomenclature

Table A25. Nomenclature table (Part I).

Symbol           Definition
Equation (1): Multi-objective constrained optimization
x_k              k-th decision variable
x                Set of decision variables
f_i(x)           i-th objective function
g_j(x)           j-th constraint
l > 0            Number of objectives
m > 0            Number of constraints
w > 0            Number of decision variables
X                Domain of each decision variable x_k
X^w              Domain of the set of decision variables
F                Set of all feasible solutions
S                Set of non-dominated solutions, or Pareto optimal set
D(x, x')         Pareto domination function
Equation (2): Rule-based classification for categorical data
D                Dataset
x_i              i-th categorical input attribute in the dataset D
x                Categorical input attributes in the dataset D
y                Categorical output attribute in the dataset D
{1, ..., v_i}    Domain of the i-th categorical input attribute in the dataset D
{1, ..., w}      Domain of the categorical output attribute in the dataset D
p > 0            Number of categorical input attributes in the dataset D
Γ                Rule-based classifier
R_i^Γ            i-th rule of classifier Γ
b_ij^Γ           Category for the j-th categorical input attribute in the i-th rule of classifier Γ
c_i^Γ            Category for the categorical output attribute in the i-th rule of classifier Γ
φ_i^Γ(x)         Compatibility degree of the i-th rule of classifier Γ for the example x
μ_ij^Γ(x)        Result of the i-th rule of classifier Γ and the j-th categorical input attribute x_j
λ_c^Γ(x)         Association degree of classifier Γ for the example x with the class c
η_ic^Γ(x)        Result of the i-th rule of classifier Γ for the example x with the class c
f^Γ(x)           Classification, or output, of the classifier Γ for the example x
Equation (3): Multi-objective constrained optimization problem for rule-based classification
F_D(Γ)           Performance objective function of the classifier Γ in the dataset D
NR(Γ)            Number of rules of the classifier Γ
M_max            Maximum number of rules allowed for classifiers
Equations (4)–(6): Optimization models
ACC_D(Γ)         Accuracy: proportion of correctly classified instances with the classifier Γ in the dataset D
K                Number of instances in the dataset D
T_D(Γ, i)        Result of the classification of the i-th instance in the dataset D with the classifier Γ
ĉ_i^Γ            Predicted value for the i-th instance in the dataset D with the classifier Γ
c_D(i)           Corresponding true value for the i-th instance in the dataset D
AUC_D(Γ)         Area under the ROC curve obtained with the classifier Γ in the dataset D
S_D(Γ, t)        Sensitivity: proportion of positive instances classified as positive with the classifier Γ in the dataset D
1 − E_D(Γ, t)    Specificity: proportion of negative instances classified as negative with the classifier Γ in the dataset D
t                Discrimination threshold
RMSE_D(Γ)        Square root of the mean square error obtained with the classifier Γ in the dataset D
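The two performance objectives most used in the paper, ACC_D and RMSE_D, reduce to a few lines of arithmetic. The following toy computation is illustrative only; the prediction vectors are synthetic, not drawn from any of the paper's datasets.

```python
# Toy computation of the ACC and RMSE objectives defined above on a
# synthetic binary prediction vector (illustrative data only).
import math

true_labels = [1, 0, 1, 1, 0, 1]   # c_D(i): true classes of K instances
predicted   = [1, 0, 0, 1, 0, 1]   # predicted classes
K = len(true_labels)               # number of instances in the dataset

# ACC: proportion of correctly classified instances.
acc = sum(int(c == c_hat) for c, c_hat in zip(true_labels, predicted)) / K

# RMSE: square root of the mean squared prediction error.
rmse = math.sqrt(sum((c - c_hat) ** 2
                     for c, c_hat in zip(true_labels, predicted)) / K)

print(round(acc, 4), round(rmse, 4))   # 0.8333 0.4082
```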
Table A26. Nomenclature table (Part II).

Symbol           Definition
Equations (7) and (8): Hypervolume metric
P                Population
Q ⊆ P            Set of non-dominated individuals of P
v_i              Volume of the search space dominated by the individual i
HV(P)            Hypervolume: volume of the search space dominated by population P
H(P)             Volume of the search space not dominated by population P
HVR(P)           Hypervolume ratio: ratio of H(P) to the volume of the entire search space
VS               Volume of the search space
F_D^lower        Minimum value for objective F_D
F_D^upper        Maximum value for objective F_D
NR^lower         Minimum value for objective NR
NR^upper         Maximum value for objective NR
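For a two-objective minimization front, the quantities above can be computed with a simple sweep. This is a minimal sketch under our own assumptions: the point set and reference corner are synthetic, and HVR is taken as H(P)/VS following the definition in Table A26.

```python
# Minimal 2-objective hypervolume sketch (synthetic front and bounds).
def hypervolume_2d(front, ref):
    """Area dominated by a mutually non-dominated minimisation front up to ref."""
    pts = sorted(front)                  # ascending in the first objective
    xs = [p[0] for p in pts] + [ref[0]]  # strip boundaries along f1
    return sum((xs[i + 1] - f1) * (ref[1] - f2)
               for i, (f1, f2) in enumerate(pts))

front = [(0.1, 0.9), (0.3, 0.5), (0.6, 0.2)]  # synthetic Pareto front
ref = (1.0, 1.0)                 # worst corner of the normalised search space
vs = ref[0] * ref[1]             # VS: volume of the search space
hv = hypervolume_2d(front, ref)  # HV(P): volume dominated by the front
hvr = (vs - hv) / vs             # HVR(P) = H(P)/VS, per the nomenclature above
print(hv, hvr)
```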

References

  1. Bishop, C.M. Pattern Recognition and Machine Learning (Information Science and Statistics); Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  2. Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach, 3rd ed.; Prentice Hall Press: Upper Saddle River, NJ, USA, 2009. [Google Scholar]
  3. Davalo, É. Neural Networks; MacMillan Computer Science; Macmillan Education: London, UK, 1991. [Google Scholar]
  4. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  5. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef] [Green Version]
  6. Aha, D.W.; Kibler, D.; Albert, M.K. Instance-based learning algorithms. Mach. Learn. 1991, 6, 37–66. [Google Scholar] [CrossRef] [Green Version]
  7. Gacto, M.; Alcalá, R.; Herrera, F. Interpretability of linguistic fuzzy rule-based systems: An overview of interpretability measures. Inf. Sci. 2011, 181, 4340–4360. [Google Scholar] [CrossRef]
  8. Cano, A.; Zafra, A.; Ventura, S. An EP algorithm for learning highly interpretable classifiers. In Proceedings of the 11th International Conference on Intelligent Systems Design and Applications, Cordoba, Spain, 22–24 November 2011; pp. 325–330. [Google Scholar]
  9. Liu, H.; Gegov, A. Collaborative Decision Making by Ensemble Rule Based Classification Systems. In Granular Computing and Decision-Making: Interactive and Iterative Approaches; Pedrycz, W., Chen, S.-M., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 245–264. [Google Scholar]
  10. Sulzmann, J.N.; Fürnkranz, J. Rule Stacking: An Approach for Compressing an Ensemble of Rule Sets into a Single Classifier; Elomaa, T., Hollmén, J., Mannila, H., Eds.; Discovery Science; Springer: Berlin/Heidelberg, Germany, 2011; pp. 323–334. [Google Scholar]
  11. Jin, Y. Fuzzy Modeling of High-Dimensional Systems: Complexity Reduction and Interpretability Improvement. IEEE Trans. Fuzzy Syst. 2000, 8, 212–220. [Google Scholar] [CrossRef]
  12. Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; Wadsworth and Brooks: Monterey, CA, USA, 1984. [Google Scholar]
  13. Novák, V.; Perfilieva, I.; Mockor, J. Mathematical Principles of Fuzzy Logic; Springer Science + Business Media: Heidelberg, Germany, 2012. [Google Scholar]
  14. Freund, Y.; Schapire, R.E. A Short Introduction to Boosting. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 31 July–6 August 1999; pp. 1401–1406. [Google Scholar]
  15. Deb, K. Multi-Objective Optimization Using Evolutionary Algorithms; John Wiley and Sons: London, UK, 2001. [Google Scholar]
  16. Coello, C.A.C.; van Veldhuizen, D.A.; Lamont, G.B. Evolutionary Algorithms for Solving Multi-Objective Problems; Kluwer Academic/Plenum Publishers: New York, NY, USA, 2002. [Google Scholar]
  17. Jiménez, F.; Gómez-Skarmeta, A.; Sánchez, G.; Deb, K. An evolutionary algorithm for constrained multi-objective optimization. In Proceedings of the 2002 Congress on Evolutionary Computation, Honolulu, HI, USA, 12–17 May 2002; pp. 1133–1138. [Google Scholar]
  18. Jiménez, F.; Sánchez, G.; Juárez, J.M. Multi-objective evolutionary algorithms for fuzzy classification in survival prediction. Artif. Intell. Med. 2014, 60, 197–219. [Google Scholar] [CrossRef] [PubMed]
  19. Jiménez, F.; Marzano, E.; Sánchez, G.; Sciavicco, G.; Vitacolonna, N. Attribute selection via multi-objective evolutionary computation applied to multi-skill contact center data classification. In Proceedings of the 2015 IEEE Symposium Series on Computational Intelligence, Cape Town, South Africa, 7–10 December 2015; pp. 488–495. [Google Scholar]
  20. Jiménez, F.; Jódar, R.; del Pilar Martín, M.; Sánchez, G.; Sciavicco, G. Unsupervised feature selection for interpretable classification in behavioral assessment of children. Expert Syst. 2017, 34, e12173. [Google Scholar] [CrossRef]
  21. Rey, M.; Galende, M.; Fuente, M.; Sainz-Palmero, G. Multi-objective based Fuzzy Rule Based Systems (FRBSs) for trade-off improvement in accuracy and interpretability: A rule relevance point of view. Knowl.-Based Syst. 2017, 127, 67–84. [Google Scholar] [CrossRef]
  22. Ducange, P.; Lazzerini, B.; Marcelloni, F. Multi-objective genetic fuzzy classifiers for imbalanced and cost-sensitive datasets. Soft Comput. 2010, 14, 713–728. [Google Scholar] [CrossRef]
  23. Gorzalczany, M.B.; Rudzinski, F. A multi-objective genetic optimization for fast, fuzzy rule-based credit classification with balanced accuracy and interpretability. Appl. Soft Comput. 2016, 40, 206–220. [Google Scholar] [CrossRef]
  24. Ducange, P.; Mannara, G.; Marcelloni, F.; Pecori, R.; Vecchio, M. A novel approach for internet traffic classification based on multi-objective evolutionary fuzzy classifiers. In Proceedings of the 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Naples, Italy, 9–12 July 2017; pp. 1–6. [Google Scholar]
  25. Antonelli, M.; Bernardo, D.; Hagras, H.; Marcelloni, F. Multiobjective Evolutionary Optimization of Type-2 Fuzzy Rule-Based Systems for Financial Data Classification. IEEE Trans. Fuzzy Syst. 2017, 25, 249–264. [Google Scholar] [CrossRef] [Green Version]
  26. Carmona, C.J.; González, P.; Deljesus, M.J.; Herrera, F. NMEEF-SD: Non-dominated multiobjective evolutionary algorithm for extracting fuzzy rules in subgroup discovery. IEEE Trans. Fuzzy Syst. 2010, 18, 958–970. [Google Scholar] [CrossRef]
  27. Hubertus, T.; Klaus, M.; Eberhard, T. Optimization Theory; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2004. [Google Scholar]
  28. Sinha, S. Mathematical Programming: Theory and Methods; Elsevier: Amsterdam, The Netherlands, 2006. [Google Scholar]
  29. Collette, Y.; Siarry, P. Multiobjective Optimization: Principles and Case Studies; Springer-Verlag Berlin Heidelberg: New York, NY, USA, 2004. [Google Scholar]
  30. Karloff, H. Linear Programming; Birkhauser Basel: Boston, MA, USA, 1991. [Google Scholar]
  31. Maros, I.; Mitra, G. Simplex algorithms. In Advances in Linear and Integer Programming; Beasley, J.E., Ed.; Oxford University Press: Oxford, UK, 1996; pp. 1–46. [Google Scholar]
  32. Bertsekas, D. Nonlinear Programming, 2nd ed.; Athena Scientific: Cambridge, MA, USA, 1999. [Google Scholar]
  33. Jiménez, F.; Verdegay, J.L. Computational Intelligence in Theory and Practice. In Advances in Soft Computing; Reusch, B., Temme, K.-H., Eds.; Springer: Heidelberg, Germany, 2001; pp. 167–182. [Google Scholar]
  34. Jiménez, F.; Sánchez, G.; García, J.; Sciavicco, G.; Miralles, L. Multi-objective evolutionary feature selection for online sales forecasting. Neurocomputing 2017, 234, 75–92. [Google Scholar] [CrossRef]
  35. Deb, K.; Pratab, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef] [Green Version]
  36. Bao, C.; Xu, L.; Goodman, E.D.; Cao, L. A novel non-dominated sorting algorithm for evolutionary multi-objective optimization. J. Comput. Sci. 2017, 23, 31–43. [Google Scholar] [CrossRef]
  37. Jiménez, F.; Sánchez, G.; Vasant, P. A Multi-objective Evolutionary Approach for Fuzzy Optimization in Production Planning. J. Intell. Fuzzy Syst. 2013, 25, 441–455. [Google Scholar]
  38. Deb, K.; Jain, H. An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems With Box Constraints. IEEE Trans. Evol. Comput. 2014, 18, 577–601. [Google Scholar] [CrossRef]
  39. Frank, E.; Witten, I.H. Generating Accurate Rule Sets without Global Optimization; Department of Computer Science, University of Waikato: Waikato, New Zealand, 1998; pp. 144–151. [Google Scholar]
  40. Witten, I.H.; Frank, E.; Hall, M.A. Introduction to Weka. In Data Mining: Practical Machine Learning Tools and Techniques, 3rd ed.; Witten, I.H., Frank, E., Hall, M.A., Eds.; The Morgan Kaufmann Series in Data Management Systems; Morgan Kaufmann: Boston, MA, USA, 2011; pp. 403–406. [Google Scholar]
  41. Michalski, R.S. On the quasi-minimal solution of the general covering problem. In Proceedings of the V International Symposium on Information Processing (FCIP 69), Bled, Yugoslavia, 8–11 October 1969; pp. 125–128. [Google Scholar]
  42. Quinlan, J.R. C4.5: Programs for Machine Learning; Elsevier: Amsterdam, The Netherlands, 2014. [Google Scholar]
  43. Rajput, A.; Aharwal, R.P.; Dubey, M.; Saxena, S.; Raghuvanshi, M. J48 and JRIP rules for e-governance data. IJCSS 2011, 5, 201. [Google Scholar]
  44. Cohen, W.W. Fast effective rule induction. In Proceedings of the Twelfth International Conference on Machine Learning, Tahoe City, CA, USA, 9–12 July 1995; pp. 115–123. [Google Scholar]
  45. Fürnkranz, J.; Widmer, G. Incremental reduced error pruning. In Proceedings of the Eleventh International Conference, New Brunswick, NJ, USA, 10–13 July 1994; pp. 70–77. [Google Scholar]
  46. Holte, R.C. Very simple classification rules perform well on most commonly used datasets. Mach. Learn. 1993, 11, 63–90. [Google Scholar] [CrossRef]
  47. Mukhopadhyay, A.; Maulik, U.; Bandyopadhyay, S.; Coello, C.A.C. A Survey of Multiobjective Evolutionary Algorithms for Data Mining: Part I. IEEE Trans. Evol. Comput. 2014, 18, 4–19. [Google Scholar] [CrossRef]
  48. Mukhopadhyay, A.; Maulik, U.; Bandyopadhyay, S.; Coello, C.A.C. Survey of Multiobjective Evolutionary Algorithms for Data Mining: Part II. IEEE Trans. Evol. Comput. 2014, 18, 20–35. [Google Scholar] [CrossRef]
  49. Ishibuchi, H.; Murata, T.; Turksen, I. Single-objective and two-objective genetic algorithms for selecting linguistic rules for pattern classification problems. Fuzzy Sets Syst. 1997, 89, 135–150. [Google Scholar] [CrossRef]
  50. Srinivas, M.; Patnaik, L. Adaptive probabilities of crossover and mutation in genetic algorithms. IEEE Trans. Syst. Man Cybern B Cybern. 1994, 24, 656–667. [Google Scholar] [CrossRef]
  51. Zwitter, M.; Soklic, M. Breast Cancer Data Set. Yugoslavia. Available online: http://archive.ics.uci.edu/ml/datasets/Breast+Cancer (accessed on 5 September 2018).
  52. Thrun, S. MONK’s Problem 2 Data Set. Available online: https://www.openml.org/d/334 (accessed on 5 September 2018).
  53. Thrun, S.B.; Bala, J.; Bloedorn, E.; Bratko, I.; Cestnik, B.; Cheng, J.; Jong, K.D.; Dzeroski, S.; Fahlman, S.E.; Fisher, D.; et al. The MONK’s Problems: A Performance Comparison of Different Learning Algorithms. Available online: http://digilib.gmu.edu/jspui/bitstream/handle/1920/1685/91-46.pdf?sequence=1 (accessed on 5 September 2018).
  54. Metz, C.E. Basic principles of ROC analysis. Semin. Nucl. Med. 1978, 8, 283–298. [Google Scholar] [CrossRef]
  55. Fawcett, T. An Introduction to ROC Analysis. Pattern Recogn. Lett. 2006, 27, 861–874. [Google Scholar] [CrossRef]
  56. Hand, D.J. Measuring classifier performance: A coherent alternative to the area under the ROC curve. Mach. Learn. 2009, 77, 103–123. [Google Scholar] [CrossRef]
  57. Zitzler, E.; Deb, K.; Thiele, L. Comparison of multiobjective evolutionary algorithms: Empirical results. Evol. Comput. 2000, 8, 173–195. [Google Scholar] [CrossRef] [PubMed]
  58. Zitzler, E.; Thiele, L.; Laumanns, M.; Fonseca, C.; Grunert da Fonseca, V. Performance Assessment of Multiobjective Optimizers: An Analysis and Review. IEEE Trans. Evol. Comput. 2002, 7, 117–132. [Google Scholar] [CrossRef]
  59. Kohavi, R. A study of cross-validation and bootstrap for accuracy estimation and model selection. In Proceedings of the 14th International Joint Conference on Artificial Intelligence (II), Montreal, QC, Canada, 20–25 August 1995; pp. 1137–1143. [Google Scholar]
  60. Jiménez, F.; Jodár, R.; Sánchez, G.; Martín, M.; Sciavicco, G. Multi-Objective Evolutionary Computation Based Feature Selection Applied to Behaviour Assessment of Children. In Proceedings of the 9th International Conference on Educational Data Mining EDM 2016, Raleigh, NC, USA, 29 June–2 July 2016; pp. 1888–1897. [Google Scholar]
Figure 1. Rank assignment of individuals with ENORA vs. NSGA-II.
Figure 2. A Pareto front of a binary classification problem as formulated in Equation (3) where F D ( Γ ) is minimized and NR ( Γ ) is minimized.
Figure 3. Pareto fronts of one execution of ENORA and NSGA-II, with M m a x = 12 , on the Breast Cancer dataset, and their respective HVR. Note that in the case of multi-objective classification where F D is maximized ( ACC D and AUC D ), function F D has been converted to minimization for a better understanding of the Pareto front.
Figure 4. Pareto fronts of one execution of ENORA and NSGA-II, with M m a x = 12 , on the Monk’s Problem 2 dataset, and their respective HVR. Note that in the case of multi-objective classification where F D is maximized ( ACC D and AUC D ), function F D has been converted to minimization for a better understanding of the Pareto front.
Table 1. Chromosome coding for an individual I.

Codification for Rule Set                                      Codification for Adaptive Crossover and Mutation
Antecedents                          Consequent                Associated Crossover        Associated Mutation
b_11^I  b_21^I  ...  b_q1^I          c_1^I
...                                  ...                       d^I                         e^I
b_{1M_I}^I  b_{2M_I}^I  ...  b_{qM_I}^I   c_{M_I}^I
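The chromosome in Table 1 (a variable-length rule set of category codes plus the two self-adaptive variation parameters d^I and e^I) can be sketched as a small data structure. Field names and the random initialization below are our assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of the chromosome in Table 1: an individual I holds a
# variable-length rule set (integer category codes) plus two self-adaptive
# parameters, the associated crossover d^I and associated mutation e^I.
from dataclasses import dataclass
from typing import List
import random

@dataclass
class Individual:
    rules: List[List[int]]   # each rule: q antecedent codes + 1 consequent code
    d: float                 # associated crossover parameter d^I
    e: float                 # associated mutation parameter e^I

def random_individual(q, domains, n_classes, max_rules, rng=random):
    """Build a random individual with 1..max_rules rules (illustrative only)."""
    n_rules = rng.randint(1, max_rules)
    rules = [[rng.randrange(domains[j]) for j in range(q)]
             + [rng.randrange(n_classes)] for _ in range(n_rules)]
    return Individual(rules, d=rng.random(), e=rng.random())

# Domain sizes loosely modelled on Monk's Problem 2 (6 inputs, 2 classes).
ind = random_individual(q=6, domains=[3, 3, 2, 3, 4, 2], n_classes=2, max_rules=12)
```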
Table 2. Attribute description of the Breast Cancer dataset.

#    Attribute Name   Type          Possible Values
1    age              categorical   10–19, 20–29, 30–39, 40–49, 50–59, 60–69, 70–79, 80–89, 90–99
2    menopause        categorical   lt40, ge40, premeno
3    tumour-size      categorical   0–4, 5–9, 10–14, 15–19, 20–24, 25–29, 30–34, 35–39, 40–44, 45–49, 50–54, 55–59
4    inv-nodes        categorical   0–2, 3–5, 6–8, 9–11, 12–14, 15–17, 18–20, 21–23, 24–26, 27–29, 30–32, 33–35, 36–39
5    node-caps        categorical   yes, no
6    deg-malig        categorical   1, 2, 3
7    breast           categorical   left, right
8    breast-quad      categorical   left-up, left-low, right-up, right-low, central
9    irradiat         categorical   yes, no
10   class            categorical   no-recurrence-events, recurrence-events
Table 3. Attribute description of the MONK’s Problem 2 dataset.

#   Attribute Name   Type          Possible Values
1   head_shape       categorical   round, square, octagon
2   body_shape       categorical   round, square, octagon
3   is_smiling       categorical   yes, no
4   holding          categorical   sword, balloon, flag
5   jacket_color     categorical   red, yellow, green, blue
6   has_tie          categorical   yes, no
7   class            categorical   yes, no
Table 4. Run times of ENORA and NSGA-II for Breast Cancer and Monk’s Problem 2 datasets.

Method         Breast Cancer   Monk’s Problem 2
ENORA-ACC      244.92 s        428.14 s
ENORA-AUC      294.75 s        553.11 s
ENORA-RMSE     243.30 s        414.42 s
NSGA-II-ACC    127.13 s        260.83 s
NSGA-II-AUC    197.07 s        424.83 s
NSGA-II-RMSE   134.87 s        278.19 s
Table 5. Comparison of the performance of the learning models in full training mode—Breast Cancer dataset.

Learning Model   Number of Rules   Percent Correct   TP Rate   FP Rate   Precision   Recall   F-Measure   MCC     ROC Area   PRC Area   RMSE
ENORA-ACC        2                 79.02             0.790     0.449     0.796       0.790    0.762       0.455   0.671      0.697      0.458
ENORA-AUC        2                 75.87             0.759     0.374     0.751       0.759    0.754       0.402   0.693      0.696      0.491
ENORA-RMSE       2                 77.62             0.776     0.475     0.778       0.776    0.744       0.410   0.651      0.680      0.473
NSGA-II-ACC      2                 77.97             0.780     0.501     0.805       0.780    0.738       0.429   0.640      0.679      0.469
NSGA-II-AUC      2                 75.52             0.755     0.368     0.749       0.755    0.752       0.399   0.693      0.696      0.495
NSGA-II-RMSE     2                 79.37             0.794     0.447     0.803       0.794    0.765       0.467   0.673      0.700      0.454
PART             15                78.32             0.783     0.397     0.773       0.783    0.769       0.442   0.777      0.793      0.398
JRip             3                 76.92             0.769     0.471     0.762       0.769    0.740       0.389   0.650      0.680      0.421
OneR             1                 72.72             0.727     0.563     0.703       0.727    0.680       0.241   0.582      0.629      0.522
ZeroR            -                 70.27             0.703     0.703     0.494       0.703    0.580       0.000   0.500      0.582      0.457
Table 6. Comparison of the performance of the learning models in full training mode—Monk’s Problem 2 dataset.

Learning Model   Number of Rules   Percent Correct   TP Rate   FP Rate   Precision   Recall   F-Measure   MCC     ROC Area   PRC Area   RMSE
ENORA-ACC        7                 75.87             0.759     0.370     0.753       0.759    0.745       0.436   0.695      0.680      0.491
ENORA-AUC        7                 68.71             0.687     0.163     0.836       0.687    0.687       0.523   0.762      0.729      0.559
ENORA-RMSE       7                 77.70             0.777     0.360     0.777       0.777    0.762       0.481   0.708      0.695      0.472
NSGA-II-ACC      7                 68.38             0.684     0.588     0.704       0.684    0.597       0.203   0.548      0.580      0.562
NSGA-II-AUC      7                 66.38             0.664     0.175     0.830       0.664    0.661       0.497   0.744      0.715      0.580
NSGA-II-RMSE     7                 68.71             0.687     0.591     0.737       0.687    0.595       0.226   0.548      0.583      0.559
PART             47                94.01             0.940     0.087     0.940       0.940    0.940       0.866   0.980      0.979      0.218
JRip             1                 65.72             0.657     0.657     0.432       0.657    0.521       0.000   0.500      0.549      0.475
OneR             1                 65.72             0.657     0.657     0.432       0.657    0.521       0.000   0.500      0.549      0.585
ZeroR            -                 65.72             0.657     0.657     0.432       0.657    0.521       0.000   0.500      0.549      0.475
Table 7. Rule-based classifier obtained with NSGA-II-RMSE for Breast Cancer dataset.

R1: IF age = 50–59 AND inv-nodes = 0–2 AND node-caps = no AND deg-malig = 1 AND breast = right AND breast-quad = left-low THEN class = no-recurrence-events
R2: IF age = 60–69 AND inv-nodes = 18–20 AND node-caps = yes AND deg-malig = 3 AND breast = left AND breast-quad = right-up THEN class = recurrence-events
Table 8. Rule-based classifier obtained with ENORA-RMSE for Monk’s Problem 2 dataset.

R1: IF head_shape = round AND body_shape = round AND is_smiling = no AND holding = sword AND jacket_color = red AND has_tie = yes THEN class = yes
R2: IF head_shape = octagon AND body_shape = round AND is_smiling = no AND holding = sword AND jacket_color = red AND has_tie = no THEN class = yes
R3: IF head_shape = round AND body_shape = round AND is_smiling = no AND holding = sword AND jacket_color = yellow AND has_tie = yes THEN class = yes
R4: IF head_shape = round AND body_shape = round AND is_smiling = no AND holding = sword AND jacket_color = red AND has_tie = no THEN class = yes
R5: IF head_shape = square AND body_shape = square AND is_smiling = yes AND holding = flag AND jacket_color = yellow AND has_tie = no THEN class = no
R6: IF head_shape = octagon AND body_shape = round AND is_smiling = yes AND holding = balloon AND jacket_color = blue AND has_tie = no THEN class = no
R7: IF head_shape = octagon AND body_shape = octagon AND is_smiling = yes AND holding = sword AND jacket_color = green AND has_tie = no THEN class = no
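A crisp rule base like the one in Table 8 classifies an example by firing any rule whose antecedent categories all match the example. The sketch below encodes two of the rules above (R1 and R5); the fallback class when no rule fires is our assumption for illustration, not part of the paper's method.

```python
# Sketch of rule firing for a categorical rule base: a rule is compatible
# with an example when every antecedent category matches. Rules R1 and R5
# are taken from Table 8; the default class is a hypothetical fallback.
RULES = [
    ({"head_shape": "round", "body_shape": "round", "is_smiling": "no",
      "holding": "sword", "jacket_color": "red", "has_tie": "yes"}, "yes"),   # R1
    ({"head_shape": "square", "body_shape": "square", "is_smiling": "yes",
      "holding": "flag", "jacket_color": "yellow", "has_tie": "no"}, "no"),   # R5
]

def classify(example, rules, default="no"):
    for antecedent, consequent in rules:
        if all(example.get(attr) == cat for attr, cat in antecedent.items()):
            return consequent
    return default  # assumed fallback when no rule is compatible

x = {"head_shape": "round", "body_shape": "round", "is_smiling": "no",
     "holding": "sword", "jacket_color": "red", "has_tie": "yes"}
print(classify(x, RULES))   # matches R1, prints "yes"
```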
Table 9. Comparison of the performance of the learning models in 10-fold cross-validation mode (three repetitions)—Breast Cancer dataset.

Learning Model   Percent Correct   ROC Area   Serialized Model Size
ENORA-ACC        73.45             0.61       9554.80
ENORA-AUC        70.16             0.62       9554.63
ENORA-RMSE       72.39             0.60       9557.77
NSGA-II-ACC      72.50             0.60       9556.20
NSGA-II-AUC      70.03             0.61       9555.70
NSGA-II-RMSE     73.34             0.60       9558.60
PART             68.92             0.61       55,298.13
JRip             71.82             0.61       7664.07
OneR             67.15             0.55       1524.00
ZeroR            70.30             0.50       915.00
Table 10. Comparison of the performance of the learning models in split mode—Monk’s problem 2 dataset.

Learning Model   Percent Correct   ROC Area   Serialized Model Size
ENORA-ACC        76.69             0.70       9586.50
ENORA-AUC        72.82             0.79       9589.30
ENORA-RMSE       75.66             0.68       9585.30
NSGA-II-ACC      70.07             0.59       9590.60
NSGA-II-AUC      67.08             0.70       9619.70
NSGA-II-RMSE     67.63             0.54       9565.90
PART             73.51             0.79       73,115.90
JRip             64.05             0.50       5956.90
OneR             65.72             0.50       1313.00
ZeroR            65.72             0.50       888.00
Table 11. Attribute description of the Tic-Tac-Toe-Endgame dataset.

#    Attribute Name        Type          Possible Values
1    top-left-square       categorical   x, o, b
2    top-middle-square     categorical   x, o, b
3    top-right-square      categorical   x, o, b
4    middle-left-square    categorical   x, o, b
5    middle-middle-square  categorical   x, o, b
6    middle-right-square   categorical   x, o, b
7    bottom-left-square    categorical   x, o, b
8    bottom-middle-square  categorical   x, o, b
9    bottom-right-square   categorical   x, o, b
10   class                 categorical   positive, negative
Table 12. Attribute description of the Car dataset.

#   Attribute Name   Type          Possible Values
1   buying           categorical   vhigh, high, med, low
2   maint            categorical   vhigh, high, med, low
3   doors            categorical   2, 3, 4, 5-more
4   persons          categorical   2, 4, more
5   lug_boot         categorical   small, med, big
6   safety           categorical   low, med, high
7   class            categorical   unacc, acc, good, vgood
Table 13. Attribute description of the kr-vs-kp dataset.

#    Attribute Name   Type          Possible Values
1    bkblk            categorical   t, f
2    bknwy            categorical   t, f
3    bkon8            categorical   t, f
4    bkona            categorical   t, f
5    bkspr            categorical   t, f
6    bkxbq            categorical   t, f
7    bkxcr            categorical   t, f
8    bkxwp            categorical   t, f
9    blxwp            categorical   t, f
10   bxqsq            categorical   t, f
11   cntxt            categorical   t, f
12   dsopp            categorical   t, f
13   dwipd            categorical   g, l
14   hdchk            categorical   t, f
15   katri            categorical   b, n, w
16   mulch            categorical   t, f
17   qxmsq            categorical   t, f
18   r2ar8            categorical   t, f
19   reskd            categorical   t, f
20   reskr            categorical   t, f
21   rimmx            categorical   t, f
22   rkxwp            categorical   t, f
23   rxmsq            categorical   t, f
24   simpl            categorical   t, f
25   skach            categorical   t, f
26   skewr            categorical   t, f
27   skrxp            categorical   t, f
28   spcop            categorical   t, f
29   stlmt            categorical   t, f
30   thrsk            categorical   t, f
31   wkcti            categorical   t, f
32   wkna8            categorical   t, f
33   wknck            categorical   t, f
34   wkovl            categorical   t, f
35   wkpos            categorical   t, f
36   wtoeg            categorical   n, t, f
37   class            categorical   won, nowin
Table 14. Attribute description of the Nursery dataset.

#   Attribute Name   Type          Possible Values
1   parents          categorical   usual, pretentious, great_pret
2   has_nurs         categorical   proper, less_proper, improper, critical, very_crit
3   form             categorical   complete, completed, incomplete, foster
4   children         categorical   1, 2, 3, more
5   housing          categorical   convenient, less_conv, critical
6   finance          categorical   convenient, inconv
7   social           categorical   nonprob, slightly_prob, problematic
8   health           categorical   recommended, priority, not_recom
9   class            categorical   not_recom, recommend, very_recom, priority, spec_prior
Table 15. Comparison of the performance of the learning models in 10-fold cross-validation mode—Monk’s Problem 2, Tic-Tac-Toe-Endgame, Car, kr-vs-kp and Nursery datasets.

Learning Model | Number of Rules | Percent Correct | TP Rate | FP Rate | Precision | Recall | F-Measure | MCC | ROC Area | PRC Area | RMSE

Monk’s Problem 2
ENORA-ACC | 7 | 77.70 | 0.777 | 0.360 | 0.777 | 0.777 | 0.762 | 0.481 | 0.708 | 0.695 | 0.472
PART | 47 | 79.53 | 0.795 | 0.253 | 0.795 | 0.795 | 0.795 | 0.544 | 0.884 | 0.893 | 0.380
JRip | 1 | 62.90 | 0.629 | 0.646 | 0.526 | 0.629 | 0.535 | −0.034 | 0.478 | 0.537 | 0.482
OneR | 1 | 65.72 | 0.657 | 0.657 | 0.432 | 0.657 | 0.521 | 0.000 | 0.500 | 0.549 | 0.586
ZeroR | - | 65.72 | 0.657 | 0.657 | 0.432 | 0.657 | 0.521 | 0.000 | 0.491 | 0.545 | 0.457

Tic-Tac-Toe-Endgame
ENORA-ACC/RMSE | 2 | 98.33 | 0.983 | 0.031 | 0.984 | 0.983 | 0.983 | 0.963 | 0.976 | 0.973 | 0.129
PART | 49 | 94.26 | 0.943 | 0.076 | 0.942 | 0.943 | 0.942 | 0.873 | 0.974 | 0.969 | 0.220
JRip | 9 | 97.81 | 0.978 | 0.031 | 0.978 | 0.978 | 0.978 | 0.951 | 0.977 | 0.977 | 0.138
OneR | 1 | 69.94 | 0.699 | 0.357 | 0.701 | 0.699 | 0.700 | 0.340 | 0.671 | 0.651 | 0.548
ZeroR | - | 65.35 | 0.653 | 0.653 | 0.427 | 0.653 | 0.516 | 0.000 | 0.496 | 0.545 | 0.476

Car
ENORA-RMSE | 14 | 86.57 | 0.866 | 0.089 | 0.866 | 0.866 | 0.846 | 0.766 | 0.889 | 0.805 | 0.259
PART | 68 | 95.78 | 0.958 | 0.016 | 0.959 | 0.958 | 0.958 | 0.929 | 0.990 | 0.979 | 0.1276
JRip | 49 | 86.46 | 0.865 | 0.064 | 0.881 | 0.865 | 0.870 | 0.761 | 0.947 | 0.899 | 0.224
OneR | 1 | 70.02 | 0.700 | 0.700 | 0.490 | 0.700 | 0.577 | 0.000 | 0.500 | 0.543 | 0.387
ZeroR | - | 70.02 | 0.700 | 0.700 | 0.490 | 0.700 | 0.577 | 0.000 | 0.497 | 0.542 | 0.338

kr-vs-kp
ENORA-RMSE | 10 | 94.87 | 0.949 | 0.050 | 0.950 | 0.949 | 0.949 | 0.898 | 0.950 | 0.927 | 0.227
PART | 23 | 99.06 | 0.991 | 0.010 | 0.991 | 0.991 | 0.991 | 0.981 | 0.997 | 0.996 | 0.088
JRip | 16 | 99.19 | 0.992 | 0.008 | 0.992 | 0.992 | 0.992 | 0.984 | 0.995 | 0.993 | 0.088
OneR | 1 | 66.46 | 0.665 | 0.350 | 0.675 | 0.665 | 0.655 | 0.334 | 0.657 | 0.607 | 0.579
ZeroR | - | 52.22 | 0.522 | 0.522 | 0.273 | 0.522 | 0.358 | 0.000 | 0.499 | 0.500 | 0.500

Nursery
ENORA-ACC | 15 | 88.41 | 0.884 | 0.055 | 0.870 | 0.884 | 0.873 | 0.824 | 0.915 | 0.818 | 0.2153
PART | 220 | 99.21 | 0.992 | 0.003 | 0.992 | 0.992 | 0.992 | 0.989 | 0.999 | 0.997 | 0.053
JRip | 131 | 96.84 | 0.968 | 0.012 | 0.968 | 0.968 | 0.968 | 0.957 | 0.993 | 0.974 | 0.103
OneR | 1 | 70.97 | 0.710 | 0.137 | 0.695 | 0.710 | 0.702 | 0.570 | 0.786 | 0.632 | 0.341
ZeroR | - | 33.33 | 0.333 | 0.333 | 0.111 | 0.333 | 0.167 | 0.000 | 0.500 | 0.317 | 0.370
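The per-row figures in Table 15 are the standard confusion-matrix statistics reported by Weka (TP rate, FP rate, precision, recall, F-measure, MCC). As a minimal illustration of how these quantities relate to one another, the following pure-Python sketch computes them for the binary case; the toy label vectors are invented for demonstration and are not taken from any of the benchmark datasets:

```python
def binary_metrics(y_true, y_pred):
    """Confusion-matrix statistics for a binary problem (positive class = 1)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tp_rate = tp / (tp + fn)  # recall (sensitivity)
    fp_rate = fp / (fp + tn)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * tp_rate / (precision + tp_rate)
    # Matthews correlation coefficient; defined as 0 when the denominator vanishes
    mcc_den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    mcc = (tp * tn - fp * fn) / mcc_den if mcc_den else 0.0
    return {
        "accuracy": (tp + tn) / len(y_true),
        "tp_rate": tp_rate,
        "fp_rate": fp_rate,
        "precision": precision,
        "f_measure": f_measure,
        "mcc": mcc,
    }

# Toy example: 8 instances, 6 classified correctly.
m = binary_metrics([1, 1, 1, 0, 0, 0, 1, 0], [1, 1, 0, 0, 0, 1, 1, 0])
print(m)  # accuracy 0.75, tp_rate 0.75, fp_rate 0.25, precision 0.75, mcc 0.5
```

For the multi-class datasets in Table 15 (e.g., Car, Nursery), Weka reports these statistics as weighted averages over the classes; the sketch above covers only the two-class case.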

Share and Cite

MDPI and ACS Style

Jiménez, F.; Martínez, C.; Miralles-Pechuán, L.; Sánchez, G.; Sciavicco, G. Multi-Objective Evolutionary Rule-Based Classification with Categorical Data. Entropy 2018, 20, 684. https://doi.org/10.3390/e20090684

