Article

Investigation of Combining Logitboost(M5P) under Active Learning Classification Tasks

Department of Mathematics, University of Patras, 26500 Patras, Greece
*
Author to whom correspondence should be addressed.
Informatics 2020, 7(4), 50; https://doi.org/10.3390/informatics7040050
Submission received: 16 October 2020 / Accepted: 29 October 2020 / Published: 3 November 2020
(This article belongs to the Section Machine Learning)

Abstract

Active learning is a category of partially supervised algorithms distinguished by its strategy of combining the predictive ability of a base learner with human knowledge so as to exploit adequately the available unlabeled data. Its ambition is to build powerful learning algorithms that would otherwise have to rely on an insufficient number of labeled samples. Since human annotation can incur considerable monetary and time costs, the human contribution should be strictly limited compared with the contribution of the base learner. For this reason, we investigate the use of the Logitboost wrapper classifier, a popular ensemble algorithm that adopts the boosting technique together with a regression base learner based on model trees, under 3 different active learning query strategies. We study its efficiency against 10 separate learners under a well-described active learning framework over 91 datasets, split into binary and multi-class problems. We also included the typical Logitboost variant with a different internal regressor so as to assess the benefits of adopting a more accurate regression tree instead of one-node trees, and we examined the effect of one hyperparameter of the proposed algorithm. Since the application of the boosting technique may provide overall less biased predictions, we assume that the proposed algorithm, named Logitboost(M5P), can provide both accurate and robust decisions under active learning scenarios, which would be beneficial for real-life weakly supervised classification tasks. Its smoother weighting of the misclassified cases during training, as well as the accurate behavior of M5P, are the main factors that lead to this performance. Appropriate statistical comparisons over the metric of classification accuracy verify our assumptions, while the adoption of M5P instead of weak decision trees proved more competitive for the majority of the examined problems. We present our results through suitable summarization approaches and explanatory visualizations, commenting on each case.

1. Introduction

Without a doubt, the last two decades have been characterized by massive production of data in the fields of Computer Science (CS) and Artificial Intelligence (AI). Several real-life applications contribute to this phenomenon, operating as rich sources of data of all possible kinds: structured, semi-structured or unstructured [1,2]. We distinguish the following fields: social media platforms, economic transactions, medical recordings and the Internet of Things (IoT), where Industry 4.0 constitutes a heavily affected application coming from the latter field [3,4,5]. Although these applications offer advanced mechanisms for producing the necessary data under automated protocols and/or mechanisms, the majority of them still cannot perform data annotation in a similarly automated and yet accurate manner.
Therefore, the most widely known variant of Machine Learning (ML) algorithms, the supervised ones, do not stand as a proper solution for obtaining informative insights over the collected data, since the restricted number of annotated examples that may be acquired, even by a manual procedure, does not establish a sufficient training subset. Weakly Supervised Learning (WSL) and/or Partially Supervised Learning (PSL) approaches tackle this problem, trying to exploit the clearly larger amounts of non-annotated examples in order to mine useful information that might otherwise stay hidden, starting from the aforementioned available annotated examples [6,7].
Different approaches of WSL and/or PSL approaches have been recorded in the related literature since their emergence. The common factor is the much smaller size of the labeled subset (L) against the corresponding unlabeled subset (U), while the most important points over which they differentiate are the following [8]:
  • inductive/transductive approaches, where, in the former, an explicit learning rule is formed using the training set and is then applied to a distinct test set, while, in the latter, these two sets are both provided in advance;
  • incomplete/inaccurate supervision, where, in the first category, both labeled and unlabeled examples are initially gathered, in contrast with the second one, which is distinguished by the noise that may govern the provided labeled examples, a fact that can cause intense deterioration when learning a specific task; and
  • active/semi-supervised learning, where there is a straightforward separation between the approaches that need or demand human intervention, so as to blend human knowledge into their learning kernel and acquire safer decisions, and those that rely solely on a base learner's predictions, building a more automated learning chain but with greater risks.
Following the recipes of WSL/PSL approaches, the abundance of collected unlabeled data may act as a valuable source of information, reducing the negative effect of the difficulty of obtaining large amounts of labeled data, caused by the inherent obstacles that arise in several domains. Ethical issues, scarcity of specific incidents and highly expensive labeling procedures are some of the obstacles that usually prevent us from successfully creating a large enough L per case [9]. However, based on the assumption that both L and U are produced from the same underlying data distribution $P$, the connection between the conditional distribution $P(y|x)$ and the marginal distribution $P(x)$ could potentially lead to more accurate learning functions $f: X \to Y$, where $x = \{x_1, x_2, \ldots, x_N\}$ stands for a typical example with N features, while y symbolizes the label that accompanies any such sample, being either known or unknown in the cases of L and U, respectively. Furthermore, $X$ and $Y$ denote the space of the examples and the labels, respectively.
In this work, we aim to propose an accurate and robust batch-based inductive Active Learning (AL) algorithm for the pool-based scenario, regarding the manner in which the data are initially treated. The base learner of the proposed AL algorithm adopts an ensemble learner into its learning structure so as to efficiently cope with the shortage of labeled data ($l_i \in L$), given the existence of a human oracle ($H_{oracle}$) that may provide us with trustworthy annotations. At the same time, a considerably larger pool of unlabeled examples ($u_i \in U$) is available for mining its content [10]. Since the need for well-established predictions during the labeling stage of the $u_i$ is one of the most crucial points of AL strategies, the exploitation of ensemble learners seems mandatory in order to better capture the insights of the examined data. Several recent works are also directed towards introducing ensemble learners into AL or other WSL variants, such as Semi-supervised Learning (SSL) [11], Cooperative Learning (CL) [12]—also known as AL + SSL [13,14]—or even Transfer Learning (TL) [15], while the field of supervised ensemble learners is still in bloom [16].
In our case, we prefer the adoption of the Logitboost ensemble learner, a product of the well-known procedure of generating ensemble learners through serially formatting a Generalized Additive Model (GAM), which does not prevent us from adopting additive tree models, as follows:
$f_{additive}(x) = \sum_{m=1}^{M} w_m \cdot f_m(x)$,  (1)
According to its strong theory, Logitboost manipulates the distribution of the training dataset based on the errors that occur during categorization, where M depicts the number of learning rounds of this iterative procedure [17]. Following the generic concept of boosting, during the training stage we fit a number of learners $f_m(\cdot)$ which try to place more emphasis on the misclassified instances. This is achieved through the weighting vector $w$, which enables the fitted learner to modify its decisions towards covering the most difficult cases based on their weight factors. After M such rounds, we reach an iteratively boosted learning function, which has ideally been transformed into a strong learner by continuously reducing the errors that the initial weak model and its previous variants faced.
Although the AdaBoost variant is the most popular product of the boosting family of algorithms, Logitboost actually constitutes a choice that may reward us with respect to some defects that AdaBoost presents [18]. To be more specific, Logitboost uses a smoother weighting function than the default AdaBoost classifier, a fact that allows it to handle better the examples that are heavily misclassified, since the direction towards learning the proper mapping function in each learning round is not dominated by them. Instead, their importance does not overwhelm the corresponding importance of the examples with smaller misclassification errors, providing more robust confidence scores $p(y|x)$. Taking into consideration that the most informative $u_i$ examples are more often than not selected through suitable metrics that depend on $p(y_i|u_i)$, such a behavior may prove quite successful in practice [19,20]. Additionally, the convergence of the Logitboost scheme is not violated, as the general boosting procedure guarantees, since the logit-loss function, which is described later, is asymptotically minimized.
The favorable properties of the Logitboost ensemble learner have also been noticed in the related literature, although the corresponding works are relatively few. To be more specific, apart from using only univariate regressors inside this scheme, generalized functions could also be applied, increasing the total predictive ability but probably disrupting interpretability [21]. Otero and Sánchez proposed the use of descriptive fuzzy learners inside Logitboost, slightly modifying the usual structure of default fuzzy learners and surpassing the behavior of a similar fuzzy-based AdaBoost version [22], while a modification of the internal scoring mechanism based on the distance from the decision regions, using weak learners under the Logitboost scheme, was tested in [23]. Naive Bayes (NB) has also been combined appropriately with this scheme, improving its total performance against other popular variants of Bayesian Networks [24], while Logitboost's operation was fully integrated into the learning procedure proposed by Leathart et al., introducing Probability Calibration Trees (PCT) in the context of a regression task, separating the input task space and fitting local predictors [25].
Logitboost autoregressive networks made use of the same scheme for modeling conditional distributions, offering a procedure that can be parallelized and exploits the advantages of boosting ensembles, whose hyperparameters are clearly fewer than those of Neural Networks (NNs) and appeared to converge to the same values for several of the examined cases, at least for the shrinkage factor [26]. More sophisticated multi-class expansions of Logitboost could further improve its applicability, as has been mentioned by the corresponding authors. This direction has actually been studied recently by some works, providing interesting expansions of the default multi-class operation of the Logitboost scheme: Adaptive Base class (ABC) [27] and Adaptive One vs. One (AOSO) Logitboost [28].
Moreover, since pool-based inductive AL strategies are inherently iterative procedures which are based on a few initially provided data, exploiting appropriately at least one $H_{oracle}$ for detecting the most informative $u_i$—further analysis is presented in the next section—the importance of obtaining accurate predictions is paramount, but time limitations may occur when overly complex learning models are embedded into these strategies. Trying to satisfy this trade-off under the Logitboost wrapper, we propose the use of M5P, a model tree regressor that tackles high-dimensional data efficiently, since it builds linear models after having grown its preferred decision tree structure, taking advantage of its widely accepted learning performance over various scientific fields [29,30]. On the other hand, due to the greedy manner in which Logitboost acts, although it is applied over a number of learning rounds (M), its total complexity does not deviate much from that of other state-of-the-art classification algorithms [27]. Thus, its integration under AL strategies would not induce prohibitive response times in practice. Furthermore, since we maintain a regression tree as its base learner, both binary and multi-class classification problems can be addressed efficiently without inserting further modifications that would probably raise the computational complexity of the total algorithm.
Consequently, we propose the adoption of Logitboost(M5P) under pool-based AL classification problems, exploiting its favorable properties both for selecting informative unlabeled instances and for evaluating the final learning hypothesis, built on the gradually augmented L, based on the annotations that a reliable human oracle provides. This combination has recently been examined in the SSL scenario [31], presenting remarkable performance. For investigating the overall ability of Logitboost(M5P) under the AL scenario, we examined 91 different datasets, separated into binary and multi-class, under 3 different query strategies against the baseline strategy of Random Sampling (RS), comparing its performance against 10 other well-known learning algorithms as well as the default use of weak Decision Trees (DTs)—to be more particular, one-node trees [32]—inside Logitboost, as it is usually met in the literature and in related ML packages [33]. A further study tuning one hyperparameter of the proposed algorithm was also made, showing that its learning performance may still improve under suitable preprocessing stages, which, however, are not easy to trust when only limited training instances exist.
More details regarding AL and a description of the proposed framework for examining the efficacy of Logitboost(M5P) in the case of an AL ecosystem are provided in the next two sections, along with the experimental procedure, the results and our comments, following the structure of the current journal. The last section summarizes our contributions and the pros and cons of our proposed combination, based mainly on our results, while future directions are posed.

2. Materials and Methods

The main reason that we resort to PSL methods is the coexistence of both L and U, while the amount of the latter, size(U), is much larger than that of the former, size(L): $size(U) \gg size(L)$ [6]. One of the subcategories of PSL algorithms is AL, for which Settles has provided a comprehensive survey [10]. Without presenting excessive detail, we highlight the most important parts of such a learning strategy.
First of all, we employ a probabilistic classifier (f) acting in two different directions: searching for the most suitable $u_i$'s and evaluating the final model after a predefined number of iterations is reached or until any other stopping criterion is satisfied. The first part is handled by exploiting a proper sampling Query Strategy (QS), which defines a specific criterion or metric ($metric_{usefulness}$) so as to measure the informativeness or the utility of all the available $u_i$'s. In order to detect the most suitable of them for creating a batch (B) of potentially informative $u_i$'s, so as to increase the learning ability of the total AL learning procedure, we select the top-b highly ranked instances:
$QS: U \times \mathit{number\ of\ classes} \to B$, with $B \subset U$,  (2)
$B$: select top-$b$ $u_i$'s $\in U$ from vector $rank(metric_{usefulness}(U, f(U)))$,  (3)
where B is actually a subset of the applied U during each iteration. The second is resolved through employing one or more human oracles or sources of information, like known crowdsourcing platforms, e.g., Amazon Mechanical Turk and CrowdFlower [34].
This means that, after having detected the batch B, we ask the available $H_{oracle}$ to assign the corresponding labels based on its background knowledge. We then merge the pairs $\{B, H_{oracle}(B)\}$ with the initially collected L during the first iteration, or with the current version of L, denoted $L_{iter}$, for the next iterations, where iter depicts the current iteration. Then, we refine f and repeat this procedure until a terminating condition is satisfied. In contrast with pure SSL approaches, or in general with the wide spectrum of WSL approaches, the terminating condition of an empty U pool is not a realistic one here, since this would demand too much effort on the side of the human factor. The participation of the latter introduces several trade-offs that should be considered carefully.
According to the related literature [20], there are 3 general kinds of QS models on the field of pool-based AL and a group of hybrid ones that combine more than one strategy:
  • heterogeneity-based,
  • performance-based,
  • representativeness-based, and
  • hybrid ones,
where more details are provided in [35]. A quite important research orientation of the related community is the proposal of a new QS, either introducing new metrics which may measure a behavior that seems more favorable for specific tasks [36,37] or trying to capture better the reasoning of some choices made by similar methods [38]. One representative work related to this last category is the work of Vu-Linh Nguyen et al. [39], exploring further Uncertainty Sampling (UncS) QS, discriminating this into epistemic and aleatoric sampling strategies, highlighting their differences and proposing the first variant as more promising.
We actually adopted UncS in our AL framework, which tries to distinguish the $u_i$ instances for which the applied learning algorithm, trained on $L_{iter}$, is least confident. For a binary problem, such an instance would induce $p(y_i=0|u_i) \approx p(y_i=1|u_i) \approx 0.5$. This strategy favors time-efficient solutions for the majority of ML algorithms, because its time complexity requires one training stage of f over $L_{iter}$ and one evaluation stage over $U_{iter}$. Since the cardinality of the former is smaller than that of the latter, especially for low labeled Ratios (R)—where R is defined as the ratio of the initial L's cardinality to the total amount of both L and U—the needed computational resources can be bounded based on the computational complexity of the base learner. Of course, the size of the batches (b) and the number of executed iterations (k) also play important roles.
In order to investigate the efficiency of Logitboost(M5P) under UncS, we employed 3 separate metrics inside this wrapper strategy, comparing them each time with the baseline of RS, in which no sophisticated criterion is assessed for selecting the participating $u_i$ of each batch; instead, a random pick takes place before the corresponding batch is provided to $H_{oracle}$. This strategy comes with no time cost during the mining of U. This means that any examined QS should exceed this performance in order to be qualified as a valid one for the concept of AL. The formula of each of the utilized $metric_{usefulness}$ (Least Confident: LConf, Smallest Margin: SMar and Entropy: Ent) is given here:
$f_{LConf}(u_i) = \underset{u_i \in U}{\arg\min}\ p(y|u_i)$,  (4)
$f_{SMar}(u_i) = \underset{u_i \in U}{\arg\min}\ [p(y_1|u_i) - p(y_2|u_i)]$,  (5)
$f_{Ent}(u_i) = \underset{u_i \in U}{\arg\max}\ \left(-\sum_{y} p(y|u_i) \log p(y|u_i)\right)$,  (6)
where $p(y|u_i)$ is the confidence of the base learner for the examined $u_i$, while Equation (5) computes the difference between the two most probable classes $y_1$ and $y_2$ of the same $u_i$, so as to return the most suitable choice from those available in U.
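To make these criteria concrete, the following minimal sketch (our illustration, not part of the original implementation) computes the three scores from a vector of class probabilities, such as the one returned by any probabilistic classifier; class and method names are ours.

public final class UncertaintyMetrics {

    // Least Confident: the probability of the most likely class; smaller = more informative.
    static double leastConfident(double[] p) {
        double max = 0.0;
        for (double v : p) max = Math.max(max, v);
        return max;                       // select instances with the smallest value
    }

    // Smallest Margin: difference between the two most probable classes; smaller = more informative.
    static double smallestMargin(double[] p) {
        double first = 0.0, second = 0.0;
        for (double v : p) {
            if (v > first) { second = first; first = v; }
            else if (v > second) { second = v; }
        }
        return first - second;            // select instances with the smallest margin
    }

    // Entropy of the class distribution; larger = more informative.
    static double entropy(double[] p) {
        double h = 0.0;
        for (double v : p) if (v > 0) h -= v * Math.log(v);
        return h;                         // select instances with the largest value
    }

    public static void main(String[] args) {
        double[] confident = {0.9, 0.05, 0.05};
        double[] uncertain = {0.4, 0.35, 0.25};
        System.out.printf("Entropy: %.3f vs %.3f%n", entropy(confident), entropy(uncertain));
    }
}

All three criteria rank the same pool differently only when more than two classes are present; for binary problems they induce the same ordering of the unlabeled instances.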
With regards to the proposed base learner, Logitboost(M5P), more details are given here. Logitboost is an additive logistic regression algorithm whose fitting can be seen as a convex optimization problem. An additive model, built from simple linear models or regression trees, for solving a binary problem has the following form:
$f_{Logitboost}(x) = \mathrm{sign}(f_{additive}(x)) = \mathrm{sign}\left(\sum_{m=1}^{M} w_m \cdot h_m(x; \gamma_m)\right)$,  (7)
where M is the number of classifiers, $w_m$ are the constants to be determined and $h_m$ are the chosen base functions along with their internal parameters $\gamma_m$. Assuming now that $f_{additive}(x)$ is the mapping that we need for fitting our strong aggregate hypothesis and the $h_m$ are the separate weak hypotheses, then the two-class boosting algorithm is fit by minimizing the following criterion:
$J_{LogitBoost}(f_{additive}(x)) = E\left[\log\left(1 + e^{-2\, y\, f_{additive}(x)}\right)\right]$,  (8)
where y is the true class label and $y \cdot f_{additive}(x)$ is the voting margin term [40], while $E[\cdot]$ denotes the expected value.
Adopting the negative binomial log-likelihood does not affect the minimizer of Equation (8) compared with the typical boosting loss function, while at the same time it enables a smoother weighting of examples whose Logitboost predictions lie far away from the discriminating decision threshold, which constitutes a great asset. This procedure takes place using Newton-like steps, a more complicated optimization process than in the case of the exponential loss function of the AdaBoost algorithm. However, this fact does not affect the goal of minimizing the chosen loss function during the training process.
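To make the smoothness argument concrete, one can compare how the two losses penalize an increasingly negative voting margin $z = y \cdot f_{additive}(x)$; the short derivation below follows the standard analysis of boosting losses and is added here only for illustration:

$$ L_{Ada}(z) = e^{-z}, \qquad L_{Logit}(z) = \log\left(1 + e^{-2z}\right). $$

For a badly misclassified example ($z \to -\infty$), $L_{Ada}$ grows exponentially, whereas $L_{Logit}(z) \approx -2z$ grows only linearly; hence, such outliers receive a far smaller relative weight in the Newton-like reweighting of Logitboost than in the exponential reweighting of AdaBoost.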
Regression trees are known for their ability to deal efficiently with large datasets, in terms of both features and instances, in addition to their simplicity and robustness [41]. The M5P regressor is a reconstruction of the M5 algorithm [42], where the portion of the dataset that reaches a leaf is predicted by a linear regression model stored at that leaf of the tree. For splitting the dataset, attributes are chosen using the standard deviation reduction (SDR) as the criterion for the best attribute on which to split the dataset at each node. The chosen attribute is the one with the maximum expected error reduction:
$SDR = SD(Tree) - \sum_{i} \frac{|Tree_i|}{|Tree|} \cdot SD(Tree_i)$,  (9)
where $Tree_i$ refers to the subset of examples that have the i-th outcome of the potential test and $SD(\cdot)$ refers to the standard deviation of its argument. The stopping criterion is met either when the number of remaining instances drops below a certain threshold or when the class values vary only slightly.
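As a small illustration (our sketch with illustrative names, not the WEKA implementation of M5P), the SDR of Equation (9) for a candidate split into subsets can be computed as follows:

public final class SdrSketch {

    static double sd(double[] values) {
        double mean = 0.0;
        for (double v : values) mean += v;
        mean /= values.length;
        double var = 0.0;
        for (double v : values) var += (v - mean) * (v - mean);
        return Math.sqrt(var / values.length);
    }

    // SDR = SD(parent) - sum_i |child_i|/|parent| * SD(child_i), Equation (9).
    static double sdr(double[] parent, double[][] children) {
        double reduction = sd(parent);
        for (double[] child : children)
            reduction -= ((double) child.length / parent.length) * sd(child);
        return reduction;
    }

    public static void main(String[] args) {
        double[] parent = {1.0, 1.2, 0.9, 5.0, 5.2, 4.8};
        double[][] split = {{1.0, 1.2, 0.9}, {5.0, 5.2, 4.8}};
        System.out.printf("SDR of the candidate split: %.3f%n", sdr(parent, split));
    }
}

In M5 this score is evaluated for every candidate attribute and split point, and the split with the largest SDR is chosen.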
The successful competition of M5P against other regression trees and other conventional ML learners has been noted in the recent literature [30,43,44]. Its exploitation under the wrapper scheme of Logitboost could lead to a robust classifier that operates in the field of AL, both for choosing informative $u_i$ instances and for providing remarkable classification performance. Moreover, possible inaccurate predictions that would appear because of either a shortage of $l_i$ covering the total range of the output values or weak indicators originally existing in the feature space of a dataset could be alleviated by the smooth weighting function that Logitboost applies, avoiding the overfitting phenomena that might undermine its decisions [45].
The last mechanism that needs to be described, before more technical details of the experimental procedure are given, is the growth of L during the iterative process of a typical AL environment. To be more specific, the queried instances are chosen in batches of size b. The size of each batch depends on the size of the initial labeled set of each dataset and on the number of predefined iterations (k). This happens because we adopted an augmenting strategy that aims to double the size of the labeled set by the final (kth) iteration of the experiment. According to this concept, the constant value of the parameter b per dataset is computed as follows:
$b = \frac{size(L)}{k}$,  (10)
Thus, to execute a complete AL experiment fairly, we adopted a flexible process whose pseudocode is given in Algorithm 1. Initially, we start with size(L) collected labeled instances and, before the final evaluation, 2·size(L) labeled instances have been gathered, where the additional labeled instances have been assigned pseudo-labels by $H_{oracle}$ after their selection through the combined interaction of the chosen QS and our selected base learner. For example, with size(L) = 30 and k = 15, Equation (10) gives b = 2, so roughly 30 additional instances are labeled before the final evaluation. During each iteration, a batch B consisting of b instances is extracted from the U subset and is added, along with the decisions of the employed $H_{oracle}$, to the current L subset. For the evaluation process, a test set is used to examine the accuracy of Logitboost(M5P), which is now trained on the labeled set augmented through the AL process. The total procedure is as follows:
Algorithm 1 Active learning scheme
  1:  Mode:
  2:  Pool-based scenario over a provided dataset D = X_{n×N} ∪ Y_{n×1}
  3:  x_i—the i-th instance vector with N features, x_i: <x_1, x_2, …, x_N> ∀ 1 ≤ i ≤ n
  4:  y_i—scalar class variable with y_i ∈ {0, 1} or unknown ∀ 1 ≤ i ≤ n
  5:  n—number of instances, n = size(L) + size(U)
  6:  B—batch of unlabeled samples that are labeled per iteration
  7:  Input:
  8:  L_iter (U_iter)—(un)labeled instances during the iter-th iteration, L_iter ⊂ D, U_iter ⊂ D
  9:  k—number of executed iterations
  10: base learner—the selected classifier
  11: QS(metric)—the selected Query Strategy along with its embedded metric
  12: Preprocess:
  13: b—size of batch B computed by Equation (10)
  14: Main Procedure:
  15: Set iter = 1
  16: While iter < k do
  17:   Train base learner on L_iter
  18:   Assign class probabilities to each u_i ∈ U_iter
  19:   Rank the u_i according to QS(metric)
  20:   Select the top-b ranked u_i, forming the current B
  21:   Provide batch B to the human oracle and obtain its pseudo-labels: H_oracle(B)
  22:   Update L: L_{iter+1} ← L_iter ∪ {B, H_oracle(B)}
  23:   Update U: U_{iter+1} ← U_iter \ B
  24:   iter = iter + 1
  25: Output:
  26: Train base learner on L_k to predict the class labels of the test data
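To illustrate how Algorithm 1 maps onto code, the following hedged sketch (our own illustration; the class and method names are ours, only the WEKA types and calls are standard) implements the main loop for the entropy-based metric, with the oracle simulated by the already known labels of the pool instances:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import weka.classifiers.Classifier;
import weka.core.Instances;

public class PoolBasedALSketch {

    // Entropy of a class-probability vector, Equation (6); higher = more uncertain.
    static double entropy(double[] p) {
        double h = 0.0;
        for (double v : p) if (v > 0) h -= v * Math.log(v);
        return h;
    }

    // One run of the scheme of Algorithm 1 with the entropy-based uncertainty metric.
    static Classifier run(Classifier base, Instances labeled, Instances unlabeled, int k)
            throws Exception {
        int b = Math.max(1, labeled.numInstances() / k);   // batch size, Equation (10)
        for (int iter = 1; iter < k; iter++) {
            base.buildClassifier(labeled);
            // Score every pooled instance and rank by decreasing entropy.
            double[] score = new double[unlabeled.numInstances()];
            List<Integer> order = new ArrayList<>();
            for (int i = 0; i < score.length; i++) {
                score[i] = entropy(base.distributionForInstance(unlabeled.instance(i)));
                order.add(i);
            }
            order.sort(Comparator.comparingDouble((Integer i) -> score[i]).reversed());
            // Move the top-b instances (with their oracle-provided labels) from U to L.
            List<Integer> batch = new ArrayList<>(order.subList(0, Math.min(b, order.size())));
            batch.sort(Comparator.reverseOrder());          // delete highest indices first
            for (int idx : batch) {
                labeled.add(unlabeled.instance(idx));
                unlabeled.delete(idx);
            }
        }
        base.buildClassifier(labeled);                      // final model trained on L_k
        return base;
    }
}

The sketch assumes two WEKA Instances objects with their class index already set; the oracle's answer is simply each instance's true label, following the noise-free oracle assumption adopted in this work.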

3. Results

In this section, more technical details are revealed, both to better describe the volume of the executed experiments, clarifying the importance of Logitboost(M5P) as a robust inductive learner under the AL concept, and to favor the reproducibility of the total experimental procedure. Therefore, we first describe the basic properties of the examined datasets, providing afterwards the details of the experimental phase, regarding mainly the parameters of the compared algorithms as well as the open-source platform that we utilized to execute them. Finally, due to lack of space, we share a link to all the produced results and visualizations.

3.1. Data

All of the mined datasets come from the well-known University of California-Irvine (UCI) repository [46]. The next two tables summarize their structure, having separated them, based on the number of classes, into binary and multi-class datasets. Thus, in Table 1, the third column—depicting the number (#) of classes—contains recordings equal to 2 for all datasets, while in Table 2, this parameter varies from 3 to 28 for our case. The last column depicts the percentage of instances captured by the class with the most instances (majority class) and by the class with the fewest instances (minority class). In the former case (Table 1), these two values sum up to 100%, while in the latter case, we added the corresponding ratios of the two smallest classes, since some datasets contain extremely rare classes, and depicting only the contribution of a single minority class would be neither convenient nor informative for the reader. Therefore, the last column of all datasets with exactly 3 classes presented in Table 2 also sums up to 100% (contribution of the largest class/contribution of the two smallest classes), while for the rest of the multi-class datasets, these values are not constrained. The fact that the majority of the datasets are imbalanced poses some difficulty for the examined AL algorithms, but this property is at the same time met in the most challenging real-life problems. Thus, no preprocessing stage was applied for balancing the datasets based on the cardinality of their classes. One exception is the “texture” dataset, which contains 11 classes with exactly 500 instances each, leading to perfect balance.
Concerning the rest of the information that characterizes the structure of our examined datasets, we mention that their cardinalities range from 57 (“labor”) to 67,557 instances (“connect-4”), while their feature space ranges from 2 (“banana”) to 90 features (“movement-libras”), covering a wide spectrum of cases with regard to these two properties. Additionally, the 5th column distinguishes and counts accordingly the numbers of categorical and numerical features. Depending on their composition, there are datasets whose features belong to only one category (numerical or nominal) and datasets containing both, usually called mixed.

3.2. Active Learning Components

3.2.1. Classifiers

In order to properly investigate the discriminative ability of the proposed combination of Logitboost with M5P under the AL concept, 7 different classifiers were selected so as to operate under the same AL framework, as formulated in Algorithm 1. Some brief details of these classifiers, along with their original references, are recorded here:
  • k-Nearest Neighbors [47], the most representative classification algorithm from the family of lazy learners, also referred to as an instance-based algorithm since it does not consume any resources during the training stage. Instead, based on appropriate distance metrics, it computes the k nearest neighbors of each test instance and exports its decision about the class of the latter through a simple majority vote. Three different variants of this algorithm were included, increasing the value of the k parameter: 1-NN, 3-NN and 5-NN;
  • Decision Trees (DTs) [48], where J48 and Random tree algorithms from this category were preferred. The first one constitutes a popular implementation of C4.5 generating a pruned variant that exploits Gain Ratio to determine how to split the tree, while the second one considers just a randomly chosen subset of the initial feature space before growing an unpruned decision tree. Logistic Model Trees (LMT) [49] was also employed as a powerful ensemble algorithm. Based on this, a tree structure is suitably grown, but proper logistic regression models are built at its leaves, exploiting in this manner only the most relevant attributes;
  • JRip [50], a rule-based learner that tries to produce rules so as to capture all the included instances into the provided training set;
  • Naive Bayes (NB) [51], a simple Bayesian method that assumes that all the features of the original feature space are independent. Although this assumption seldom holds, especially in real-life cases, this generative approach has found great acceptance in the literature; and
  • AdaBoost (Ada) [18], the most popular boosting algorithm that minimizes exponential loss.
All the mentioned algorithms, as well as the proposed combined Logitboost(M5P) learner, were obtained from the Waikato Environment for Knowledge Analysis (WEKA) [33], keeping the default parameters of their original implementations, thus supporting reproducibility and fair comparisons.
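For illustration, a hedged sketch of how the proposed base learner can be assembled through WEKA's Java API is given below (our own example, not the authors' code; the dataset path is a placeholder):

import weka.classifiers.meta.LogitBoost;
import weka.classifiers.trees.M5P;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class LogitboostM5PExample {
    public static void main(String[] args) throws Exception {
        // Load any ARFF dataset; "iris.arff" is only a placeholder path.
        Instances data = DataSource.read("iris.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // LogitBoost with an M5P model tree as its internal regression base learner,
        // keeping WEKA's default number of boosting iterations.
        LogitBoost logitboostM5P = new LogitBoost();
        logitboostM5P.setClassifier(new M5P());

        logitboostM5P.buildClassifier(data);
        // Class-probability estimates p(y|x), as used by the query strategies above.
        double[] dist = logitboostM5P.distributionForInstance(data.instance(0));
        System.out.println(java.util.Arrays.toString(dist));
    }
}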

3.2.2. Experiment Details

Considering the evaluation of our results, we applied a 3-fold cross-validation procedure (3-CV), where 2 out of the 3 folds were used for training and the remaining fold for testing. Afterwards, the training data were split into L and U under 4 different R values: 5%, 10%, 15% and 20%. After executing 15 iterations, the size of the labeled data compared to the total training data increased to 10%, 20%, 30% and 40%, respectively. This procedure was repeated 3 times per case, yielding a 3 × 3-CV evaluation stage (a short sketch of this splitting step is given at the end of this subsection). For implementing our pool-based experiments with UncS and the aforementioned metrics, along with the RS strategy, we used the ‘Java Class Library for Active Learning’ (JCLAL), a Java-based library that interacts directly with WEKA, facilitating the use of its classifiers in this framework [52]. Furthermore, we conducted the following comparisons (the proposed learner versus 1. simple or 2. ensemble learners) with regard to the classification accuracy (Acc%) metric, verified later by a nonparametric statistical comparison process:
  • Logitboost(M5P) vs. 1-NN vs. 3-NN vs. 5-NN vs. J48 vs. JRip vs. NB vs. RandomTree
  • Logitboost(M5P) vs. Logitboost(DStump) vs. Bagging(J48) vs. Ada(DStump) vs. LMT
The second set of comparisons investigates the learning behavior of the proposed combination of M5P under the Logitboost scheme against the default Logitboost, which exploits the Decision Stump algorithm (DStump) [32], a simple one-node tree that discriminates each example using only one feature, which acts as the root of this simplified tree. Additionally, the well-known Bagging scheme along with the J48 algorithm has also been examined for consistency reasons, comparing our proposed base learner with another ensemble approach again based on DTs [53,54], along with the Ada and LMT learning approaches, which have found great acceptance in practice.
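As referenced above, a hedged sketch of one 3-CV repetition with an R = 5% labeled split, using standard WEKA calls (class names and the dataset path are illustrative), might look as follows:

import java.util.Random;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class SplitSketch {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("banana.arff");   // placeholder dataset path
        data.setClassIndex(data.numAttributes() - 1);
        data.randomize(new Random(1));
        data.stratify(3);                                   // prepare 3-fold CV

        double R = 0.05;                                    // initial labeled ratio (5%)
        for (int fold = 0; fold < 3; fold++) {
            Instances train = data.trainCV(3, fold, new Random(1));
            Instances test  = data.testCV(3, fold);
            int nL = Math.max(1, (int) Math.round(R * train.numInstances()));
            Instances labeled   = new Instances(train, 0, nL);
            Instances unlabeled = new Instances(train, nL, train.numInstances() - nL);
            // ...run the AL loop of Algorithm 1 on (labeled, unlabeled),
            //    then evaluate the final model on the test fold.
        }
    }
}

The split above only illustrates the mechanics; the exact way the labeled subset is drawn in the reported experiments is governed by JCLAL's configuration.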

3.3. Figures, Tables and Schemes

Here, we record the results of the smallest R-based scenario (R = 5%) only for the UncS(Ent) QS. We provide two tables for each comparison setting, thus separating binary and multi-class datasets (Table 3, Table 4, Table 5 and Table 6).
Due to the volume of produced results, and in order to maintain a balance between the extent of the presented results and the main body of the manuscript, our included tables and figures present only a small portion of our total experiments. The following link contains all results: http://ml.math.upatras.gr/wp-content/uploads/2020/10/MDPI_Informatics_AL_Logitboost_M5P.7z.
In the preceding 4 tables, we highlighted in bold the best value per dataset (row) achieved among all the included algorithms (columns), to facilitate visualization of these results. However, some complications occurred during our experiments, and we have to record them here. Initially, it has to be mentioned that the JRip algorithm did not manage to export decisions for the majority of the datasets under the SMar metric. This behavior is due to JRip’s inherent inability to produce rules for all the existing classes, especially when the training data are not sufficient. Thus, we have removed this algorithm from the corresponding files, but it is still included for the other QSs. Furthermore, the LMT algorithm as well as Ada(DStump) did not manage to export predictions for 20 (15 binary and 5 multi-class) datasets, which made us remove all of these datasets from the second set of comparisons: the proposed algorithm versus the ensemble ones.
Continuing with our results, in order to provide better insight into the total experiments, we have gathered the victory frequencies for these two cases into Table 7 and Table 8, separated internally depending on the number of existing classes and summarizing the performance per learner for all the examined R-based scenarios and the distinct query strategies applied. We have ignored the RS case, since this acts as the baseline in the AL concept, but we have taken this strategy into consideration in the statistical comparisons that follow. In the aforementioned case of JRip under the UncS(SMar) strategy, we did not record any value, since it was excluded from this kind of experiment.
Considering the statistical comparison of the first part of the results, we applied the well-known nonparametric Friedman test, which examines whether the null hypothesis that the participating algorithms perform similarly holds [55]. Since both the Friedman and Iman–Davenport statistics strongly favored the rejection of the null hypothesis, a proper post hoc test was applied to investigate further the statistical importance of the acquired results. In our case, the Nemenyi post hoc test was selected, using an alpha level equal to 0.05 [56]. The critical difference (CD) that should be exceeded between the learning behaviors of two algorithms for them to be considered statistically different is 1.21. The next figure depicts the achieved scores per base learner for both the binary and multi-class datasets, discriminating the performance of the 3 examined metrics under UncS against RS. A violin plot was chosen.
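For reference, in the standard formulation of the Nemenyi post hoc test the critical difference depends only on the number of compared algorithms $k$ and the number of datasets $N$ (we reproduce the general expression here for the reader; the concrete value of 1.21 above stems from the authors' setting):

$$ CD = q_{\alpha} \sqrt{\frac{k(k+1)}{6N}}, $$

where $q_{\alpha}$ is the critical value of the Studentized range statistic (divided by $\sqrt{2}$) for the chosen significance level.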
Regarding the second part of the experiments, a more targeted comparison was made to verify the predictive ability of the proposed variant of Logitboost against its default setup, as implemented in the WEKA API, which actually uses a one-node tree as the weak learner, while the other 3 ensemble learners were also included. Therefore, we applied a twofold comparison, examining their efficacy on both the binary and multi-class datasets, following the same statistical verification as previously. We have already measured the frequency of the best achieved performance for all examined datasets per ensemble learner, QS and R-based scenario in Table 9, while in Table 10, we present the corresponding Friedman rankings. For acquiring better insight into the relative importance of the learning performances in this kind of comparison, we recorded the corresponding statistical ranking per separate R-based scenario.
Indeed, we can verify that different behaviors are recorded in the largest R-based scenario compared with the rest for the binary datasets, a fact that would not have been noticed if the separate rankings had been merged into an average, leading to erroneous conclusions. Despite this fact, the underlying CD value for this set of experiments is equal to 0.568, a score that establishes that the proposed approach is significantly superior to the other examined algorithms under the same AL framework in 5 out of the 8 cases.
Meanwhile, the next 3 figures depict the performance of Logitboost(M5P) against its 4 ensemble opponents for the 9 largest datasets, across both kinds of datasets, through compatible error-bar plots that summarize the performance of each evaluation during the 3-fold CV process as well as the corresponding average value for each iteration.

4. Discussion

In this section, we briefly discuss the obtained results from both comparisons to summarize the overall findings and to better appreciate the assets of the proposed AL approach, which is based on the combination of the Logitboost scheme with the M5P regressor. First, application of Logitboost under AL has not been recorded in the literature, in contrast to several other ML models [57]. Consequently, it was reasonable to adopt a common AL framework in order to make fair comparisons with the selected approaches, which are based on state-of-the-art algorithms, during both of the examined settings. Thus, no tuning stages were inserted into the learning pipeline. The total experimental procedure was conducted over 4 different values of the R (%) parameter, trying to investigate further the behavior of all the examined approaches, assuming that the human oracle did not introduce any noisy decisions at all. Thus, we do not insert noisy decisions during the augmentation of the initial labeled set, which shifts the responsibility of selecting informative instances to the learning ability of the base learner per case. Additionally, we are more interested in the lowest R-based scenarios, since they constitute more realistic simulations of real-life WSL problems.
From the accuracy scores recorded in Table 3 through Table 6, we can see that the proposed combination highly outperformed its rivals on the majority of the 91 datasets. The aggregated numbers of victories for both sets of comparisons—versus simple and ensemble base learners—have been placed in Table 7 and Table 8, where the proposed approach managed to capture the best performance in 774 out of 1112 cases (69.6%) against 6 approaches and in 499 out of 866 cases (57.6%) against 4 approaches. Moreover, its defects seem to appear on specific datasets (e.g., “iris” and “post-operative” from the binary datasets as well as “heart-h”, “saheart” and “newthyroid” from the multi-class ones). The structure of these datasets should be examined further, but probably a tuning stage of the Logitboost scheme, whose parameter space is enlarged by the presence of the internal regressor, might lead to better performance against algorithms like kNN or learners that are based on DTs, either individually or in an ensemble fashion.
Concerning the first experimental scenario, the distribution of the number of victories of the proposed AL approach was similar across the different query strategies and the labeled ratios, denoting its general efficacy against the other approaches. The 3 distinct kNN learners also cumulatively achieved 243 victories, constituting useful proof of their robustness despite the restricted number of labeled examples [58]. On the other hand, the performance of NB as a base learner was disappointing, affected by the aforementioned shortage of initial examples, since it did not record any victory.
During the second and, of course, more challenging scenario against ensemble base learners, the proposed algorithm was again more competitive and outperformed the rest in the majority of the grouped experiments. However, the LMT-based approaches also managed to score several winning accuracies per dataset, especially in binary problems when larger R-based experiments were conducted. In fact, LMT expands the Logitboost procedure internally in its main learning kernel but, at its final stage, exploits only a subset of the initial feature space so as to build its logistic model. This property seems favorable for the aforementioned case, as the statistical rankings placed in Table 10 prove. Specifically, for binary problems, the proposed algorithm performed significantly better than the LMT-based AL approach only for the case of R = 5%, while for the next two comparisons, no statistical difference was recorded, ranking the Logitboost(M5P)-based approaches first with a slight lead; in the last scenario, where R = 20%, LMT significantly outperformed the proposed one. However, in multi-class problems, this behavior was not repeated. This kind of result possibly denotes the existence of noisy features that highly affect the binary problems and/or highlights the overfitting phenomena that Logitboost may face when outliers are inserted into its training stage.
Returning to the first set of experiments, a statistical comparison was executed to verify the results obtained from the examined query strategies, also against the baseline of random sampling. Figure 2 captures the performance per distinct learner, where the proposed base learner recorded statistically significant behavior against its baseline as well as against the rest of the AL approaches, always being ranked the best across all the conducted scenarios. It is remarkable that the RS(Logitboost(M5P)) approach also managed to outreach the other simple algorithms on average, proving the overall predictive ability of the proposed ensemble learner. This last note also highlights the complications that may occur when ranking the available unlabeled examples, a fact that may deteriorate the total learning behavior, since the less informative the selected instances are, the more redundancy is introduced into the gradually augmented training subset.
Discussing again the second experimental setup, a similar procedure was implemented, where, besides the comparison with the LMT ensemble learner, useful conclusions were drawn from the conducted comparisons. The replacement of a weak one-node tree with the M5P model tree under the Logitboost scheme was substantially examined, along with the use of the AdaBoost procedure. This amendment helped us to clarify even better the overall benefits of the proposed boosting approach, since the improvement that was noticed, mainly against these two approaches, was impressive. Additionally, two separate figures (Figure 3 and Figure 4) were produced to better visualize the discriminative ability of the proposed approach against the other ensemble learners, which is clearly established from the initial iterations in 8 out of 9 selected datasets, also recording more robust learning behavior judging by its fluctuations along the iterative procedure of the applied AL framework. The unstable behavior of AdaBoost(DStump) is also remarkable, clearly showing its untrustworthy behavior compared with the proposed one, as intense fluctuations were recorded.
Finally, we also conducted a study of one hyperparameter of the Logitboost scheme, without further tuning the internal M5P base learner, in order to verify its optimality with regard to at least one parameter of this scheme. This hyperparameter was selected to be the number of iterations executed during its training stage, a property that affects both the spent computational resources and the main drawback of boosting procedures: overfitting. We noticed that, more often than not, the default approach with 10 internal iterations did not achieve the best performance. This fact leaves much space for further investigation of the parameters of the Logitboost scheme under AL learning scenarios, which, however, is not easy to explore because of the limited initial data that are provided in practice. We present the corresponding statistical rankings in Table 10.

5. Conclusions

To sum up, in this work, we proposed the use of the Logitboost scheme along with an M5P regressor under a properly designed pool-based AL learning scheme. We assumed that the smooth learning behavior of Logitboost could lead to safer predictions, especially when it selects the most informative unlabeled instances from the corresponding U pool, thus favoring the overall learning rates of an AL algorithm. Its combination with M5P managed to increase the overall accuracy compared with the performance of simpler tree-based models, leading also to superior performance against various state-of-the-art ensemble algorithms evaluated in the same AL framework over 4 R-based scenarios under 3 separate metrics embedded into the uncertainty sampling query strategy. The performed statistical comparisons verified the significantly better performance of the proposed batch-based inductive active learning algorithm, recording its better generalization ability through a wide range of experiments.
Our future directions are mainly related to internal investigation of the Logitboost scheme, since its application to both the AL and SSL fields has proven to be really promising, while at the same time the related community has not yet fully exploited it. Feature selection could be really useful in several real-life cases, since removing noisy or irrelevant variables would further improve the predictive ability of the Logitboost scheme. A similar preprocessing strategy was considered in [59], before creating an ensemble of Logitboost that exploits random forest as a base learner in the field of anomaly detection. The use of metrics of informativeness that are popular in the field of AL could boost the predictive performance of SSL methods and vice versa, as the authors of [60] demonstrated, studying the exploitation of centrality measures that stem from graph-based representations of data for capturing data heterogeneity. The use of AL + SSL based on ensemble learners, either with UncS or with more targeted query strategies, could boost the overall performance on classification tasks without demanding much effort from human annotators, while reducing the expenses induced by the corresponding crowdsourcing services [14]. The aspect of applying query strategies that avoid uncertainty-based directions and instead prefer guidance by interactions among the decisions of multiple learners seems really promising, either for obtaining decisions through distinct iterations of Logitboost-based classifiers or for blending this powerful classifier into a pool of available classification algorithms [61].
Furthermore, a combination of the proposed base learner, or the adoption of related boosting learners [62], with more recently proposed query strategies could help us reach competitive performance in more complex tasks that stem from real-life applications [63]. Expansion towards online AL frameworks should also be further investigated on our side [64].

Author Contributions

Conceptualization, V.K., S.K. (Stamatis Karlos) and S.K. (Sotiris Kotsiantis); methodology, V.K., S.K. (Stamatis Karlos) and S.K. (Sotiris Kotsiantis); software, V.K.; validation, V.K. and S.K. (Stamatis Karlos); formal analysis, S.K. (Stamatis Karlos) and S.K. (Sotiris Kotsiantis); investigation, V.K., S.K. (Stamatis Karlos) and S.K. (Sotiris Kotsiantis); resources, V.K. and S.K. (Stamatis Karlos); data curation, V.K. and S.K. (Stamatis Karlos); writing—original draft preparation, S.K. (Stamatis Karlos); writing—review and editing, V.K., S.K. (Stamatis Karlos) and S.K. (Sotiris Kotsiantis); visualization, V.K. and S.K. (Stamatis Karlos); supervision, S.K. (Sotiris Kotsiantis); project administration, V.K. and S.K. (Stamatis Karlos). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Papadakis, G.; Tsekouras, L.; Thanos, E.; Giannakopoulos, G.; Palpanas, T.; Koubarakis, M. The return of jedAI: End-to-End Entity Resolution for Structured and Semi-Structured Data. Proc. VLDB Endow. 2018, 11, 1950–1953.
  2. Charton, E.; Meurs, M.-J.; Jean-Louis, L.; Gagnon, M. Using Collaborative Tagging for Text Classification: From Text Classification to Opinion Mining. Informatics 2013, 1, 32–51.
  3. Vanhoeyveld, J.; Martens, D.; Peeters, B. Value-added tax fraud detection with scalable anomaly detection techniques. Appl. Soft Comput. 2020, 86, 105895.
  4. Masood, A.; Al-Jumaily, A. Semi advised learning and classification algorithm for partially labeled skin cancer data analysis. In Proceedings of the 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), Nanjing, China, 24–26 November 2017; pp. 1–4.
  5. Haseeb, M.; Hussain, H.I.; Ślusarczyk, B.; Jermsittiparsert, K. Industry 4.0: A Solution towards Technology Challenges of Sustainable Business Performance. Soc. Sci. 2019, 8, 154.
  6. Schwenker, F.; Trentin, E. Pattern classification and clustering: A review of partially supervised learning approaches. Pattern Recognit. Lett. 2014, 37, 4–14.
  7. Jain, S.; Kashyap, R.; Kuo, T.-T.; Bhargava, S.; Lin, G.; Hsu, C.-N. Weakly supervised learning of biomedical information extraction from curated data. BMC Bioinform. 2016, 17, 1–12.
  8. Zhou, Z.-H. A brief introduction to weakly supervised learning. Natl. Sci. Rev. 2017, 5, 44–53.
  9. Ullmann, S.; Tomalin, M. Quarantining online hate speech: Technical and ethical perspectives. Ethics Inf. Technol. 2019, 22, 69–80.
  10. Settles, B. Active Learning; Morgan & Claypool Publishers: San Rafael, CA, USA, 2012; Volume 6. [Google Scholar]
  11. Karlos, S.; Fazakis, N.; Kotsiantis, S.B.; Sgarbas, K.; Karlos, G. Self-Trained Stacking Model for Semi-Supervised Learning. Int. J. Artif. Intell. Tools 2017, 26. [Google Scholar] [CrossRef]
  12. Zhang, Z.; Cummins, N.; Schuller, B. Advanced Data Exploitation in Speech Analysis: An overview. IEEE Signal Process. Mag. 2017, 34, 107–129. [Google Scholar] [CrossRef]
  13. Sabata, T.; Pulc, P.; Holena, M. Semi-supervised and Active Learning in Video Scene Classification from Statistical Features. In IAL@PKDD/ECML; Springer: Dublin, Ireland, 2018; Volume 2192, pp. 24–35. Available online: http://ceur-ws.org/Vol-2192/ialatecml_paper1.pdf (accessed on 30 October 2020).
  14. Karlos, S.; Kanas, V.G.; Aridas, C.; Fazakis, N.; Kotsiantis, S. Combining Active Learning with Self-Train Algorithm for Classification of Multimodal Problems. In Proceedings of the 2019 10th International Conference on Information, Intelligence, Systems and Applications (IISA), Patras, Greece, 15–17 July 2019; pp. 1–8. [Google Scholar] [CrossRef]
  15. Senthilnath, J.; Varia, N.; Dokania, A.; Anand, G.; Benediktsson, J.A. Deep TEC: Deep Transfer Learning with Ensemble Classifier for Road Extraction from UAV Imagery. Remote Sens. 2020, 12, 245. [Google Scholar] [CrossRef] [Green Version]
  16. Menze, B.H.; Kelm, B.M.; Splitthoff, D.N.; Koethe, U.; Hamprecht, F.A. On Oblique Random Forests; Springer Science and Business Media LLC: Heidelberg/Berlin, Germany, 2011; pp. 453–469. [Google Scholar]
  17. Friedman, J.; Hastie, T.; Tibshirani, R. Additive logistic regression: A statistical view of boosting. Ann. Stat. 2000, 28, 337–407. Available online: http://projecteuclid.org/euclid.aos/1016218223 (accessed on 15 March 2016). [CrossRef]
  18. Freund, Y.; Schapire, R.E. Experiments with a New Boosting Algorithm. In ICML; Morgan Kaufmann: Bari, Italy, 1996; pp. 148–156. Available online: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.133.1040 (accessed on 1 October 2019).
  19. Reitmaier, T.; Sick, B. Let us know your decision: Pool-based active training of a generative classifier with the selection strategy 4DS. Inf. Sci. 2013, 230, 106–131. [Google Scholar] [CrossRef]
  20. Sharma, M.; Bilgic, M. Evidence-based uncertainty sampling for active learning. Data Min. Knowl. Discov. 2016, 31, 164–202. [Google Scholar] [CrossRef]
  21. Grau, I.; Sengupta, D.; Lorenzo, M.M.G.; Nowe, A. An Interpretable Semi-Supervised Classifier Using Two Different Strategies for Amended Self-Labeling 2020. Available online: http://arxiv.org/abs/2001.09502 (accessed on 7 June 2020).
  22. Otero, J.; Sánchez, L. Induction of descriptive fuzzy classifiers with the Logitboost algorithm. Soft Comput. 2005, 10, 825–835. [Google Scholar] [CrossRef]
  23. Burduk, R.; Bożejko, W. Modified Score Function and Linear Weak Classifiers in LogitBoost Algorithm. In Advances in Intelligent Systems and Computing; Springer: Bydgoszcz, Poland, 2019; pp. 49–56. [Google Scholar] [CrossRef]
  24. Kotsiantis, S.B.; Pintelas, P.E. Logitboost of simple bayesian classifier. Informatica 2005, 29, 53–59. Available online: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.136.2277 (accessed on 18 October 2019).
  25. Leathart, T.; Frank, E.; Holmes, G.; Pfahringer, B.; Noh, Y.-K.; Zhang, M.-L. Probability Calibration Trees. In Proceedings of the Ninth Asian Conference on Machine Learning, Seoul, Korea, 15–17 November 2017; pp. 145–160. Available online: http://proceedings.mlr.press/v77/leathart17a/leathart17a.pdf (accessed on 30 October 2020).
  26. Goessling, M. LogitBoost autoregressive networks. Comput. Stat. Data Anal. 2017, 112, 88–98. [Google Scholar] [CrossRef] [Green Version]
  27. Li, P. Robust LogitBoost and Adaptive Base Class (ABC) LogitBoost. arXiv 2012, arXiv:1203.3491. [Google Scholar]
  28. Reid, M.D.; Zhou, J. An improved multiclass LogitBoost using adaptive-one-vs-one. Mach. Learn. 2014, 97, 295–326. [Google Scholar] [CrossRef]
  29. Quinlan, J.R. Learning with continuous classes. Mach. Learn. 1992, 92, 343–348. [Google Scholar] [CrossRef]
  30. Deshpande, N.; Londhe, S.; Kulkarni, S. Modeling compressive strength of recycled aggregate concrete by Artificial Neural Network, Model Tree and Non-linear Regression. Int. J. Sustain. Built Environ. 2014, 3, 187–198. [Google Scholar] [CrossRef] [Green Version]
31. Karlos, S.; Fazakis, N.; Kotsiantis, S.; Sgarbas, K. Self-Train LogitBoost for Semi-supervised Learning. In Engineering Applications of Neural Networks; Communications in Computer and Information Science; Springer: Rhodes, Greece, 2015; Volume 517, pp. 139–148. [Google Scholar] [CrossRef] [Green Version]
  32. Iba, W.; Langley, P. Induction of One-Level Decision Trees (Decision Stump). In Proceedings of the Ninth International Conference on Machine Learning, Aberdeen, Scotland, 1–3 July 1992; pp. 233–240. [Google Scholar]
  33. Hall, M.; Frank, E.; Holmes, G.; Pfahringer, B.; Reutemann, P.; Witten, I.H. The WEKA data mining software. ACM SIGKDD Explor. Newsl. 2009, 11, 10–18. [Google Scholar] [CrossRef]
  34. Fung, G. Active Learning from Crowds. In ICML; Springer: Bellevue, WA, USA, 2011; pp. 1161–1168. [Google Scholar] [CrossRef]
  35. Aggarwal, C.C. Data Classification: Algorithms and Applications; CRC Press: Boca Raton, FL, USA, 2015. [Google Scholar]
  36. Elakkiya, R.; Selvamani, K. An Active Learning Framework for Human Hand Sign Gestures and Handling Movement Epenthesis Using Enhanced Level Building Approach. Procedia Comput. Sci. 2015, 48, 606–611. [Google Scholar] [CrossRef] [Green Version]
  37. Pozo, M.; Chiky, R.; Meziane, F.; Métais, E. Exploiting Past Users’ Interests and Predictions in an Active Learning Method for Dealing with Cold Start in Recommender Systems. Informatics 2018, 5, 35. [Google Scholar] [CrossRef] [Green Version]
  38. Souza, R.R.; Dorn, A.; Piringer, B.; Wandl-Vogt, E. Towards A Taxonomy of Uncertainties: Analysing Sources of Spatio-Temporal Uncertainty on the Example of Non-Standard German Corpora. Informatics 2019, 6, 34. [Google Scholar] [CrossRef] [Green Version]
  39. Nguyen, V.-L.; Destercke, S.; Hüllermeier, E. Epistemic Uncertainty Sampling. In Lecture Notes in Computer Science; Springer: Split, Croatia, 2019; pp. 72–86. [Google Scholar] [CrossRef] [Green Version]
  40. Tang, E.K.; Suganthan, P.N.; Yao, X. An analysis of diversity measures. Mach. Learn. 2006, 65, 247–271. [Google Scholar] [CrossRef] [Green Version]
  41. Olson, D.L.; Wu, D. Regression Tree Models. In Predictive Data Mining Models; Springer: Singapore, 2017; pp. 45–54. [Google Scholar]
  42. Wang, Y.; Witten, I.H. Inducing Model Trees for Continuous Classes. In European Conference on Machine Learning; Springer: Athens, Greece, 1997; pp. 1–10. [Google Scholar]
  43. Alipour, A.; Yarahmadi, J.; Mahdavi, M. Comparative Study of M5 Model Tree and Artificial Neural Network in Estimating Reference Evapotranspiration Using MODIS Products. J. Clim. 2014, 2014, 1–11. [Google Scholar] [CrossRef] [Green Version]
  44. Behnood, A.; Olek, J.; Glinicki, M.A. Predicting modulus elasticity of recycled aggregate concrete using M5′ model tree algorithm. Constr. Build. Mater. 2015, 94, 137–147. [Google Scholar] [CrossRef]
45. Schapire, R.E. The Boosting Approach to Machine Learning: An Overview. In Nonlinear Estimation and Classification; Denison, D.D., Hansen, M.H., Holmes, C.C., Mallick, B., Yu, B., Eds.; Lecture Notes in Statistics; Springer: New York, NY, USA, 2003; Volume 171. [CrossRef]
46. Lichman, M. UCI Machine Learning Repository. 2013. Available online: http://archive.ics.uci.edu/ml/ (accessed on 30 October 2020).
  47. Zhang, S.; Li, X.; Zong, M.; Zhu, X.; Wang, R. Efficient kNN Classification with Different Numbers of Nearest Neighbors. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 1774–1785. [Google Scholar] [CrossRef]
  48. Kotsiantis, S.B. Decision trees: A recent overview. Artif. Intell. Rev. 2013, 39, 261–283. [Google Scholar] [CrossRef]
  49. Landwehr, N.; Hall, M.; Frank, E. Logistic Model Trees. Mach. Learn. 2005, 59, 161–205. [Google Scholar] [CrossRef] [Green Version]
  50. Hühn, J.; Hüllermeier, E. FURIA: An algorithm for unordered fuzzy rule induction. Data Min. Knowl. Discov. 2009, 19, 293–319. [Google Scholar] [CrossRef] [Green Version]
  51. Zhang, H.; Jiang, L.; Yu, L. Class-specific attribute value weighting for Naive Bayes. Inf. Sci. 2020, 508, 260–274. [Google Scholar] [CrossRef]
52. Reyes, O.; Pérez, E.; Rodríguez-Hernández, M.d.C.; Fardoun, H.M.; Ventura, S. JCLAL: A Java Framework for Active Learning. J. Mach. Learn. Res. 2016, 17, 1–5. Available online: http://www.jmlr.org/papers/volume17/15-347/15-347.pdf (accessed on 20 April 2017).
  53. Quinlan, J.R. Bagging, Boosting, and C4.5; AAAI Press: Portland, OR, USA, 1996. [Google Scholar]
  54. Baumgartner, D.; Serpen, G. Performance of global–local hybrid ensemble versus boosting and bagging ensembles. Int. J. Mach. Learn. Cybern. 2012, 4, 301–317. [Google Scholar] [CrossRef]
  55. Eisinga, R.; Heskes, T.; Pelzer, B.; Grotenhuis, M.T. Exact p-values for pairwise comparison of Friedman rank sums, with application to comparing classifiers. BMC Bioinform. 2017, 18, 68. [Google Scholar] [CrossRef] [Green Version]
56. Hollander, M.; Wolfe, D.A.; Chicken, E. Nonparametric Statistical Methods, 3rd ed.; Wiley: Hoboken, NJ, USA, 2013. [Google Scholar]
  57. Ramirez-Loaiza, M.E.; Sharma, M.; Kumar, G.; Bilgic, M. Active learning: An empirical study of common baselines. Data Min. Knowl. Discov. 2016, 31, 287–313. [Google Scholar] [CrossRef]
  58. Li, J.; Zhu, Q. A boosting Self-Training Framework based on Instance Generation with Natural Neighbors for K Nearest Neighbor. Appl. Intell. 2020, 50, 3535–3553. [Google Scholar] [CrossRef]
  59. Kamarudin, M.H.; Maple, C.; Watson, T.; Safa, N.S. A LogitBoost-Based Algorithm for Detecting Known and Unknown Web Attacks. IEEE Access 2017, 5, 26190–26200. [Google Scholar] [CrossRef]
  60. Araújo, B.; Zhao, L. Data heterogeneity consideration in semi-supervised learning. Expert Syst. Appl. 2016, 45, 234–247. [Google Scholar] [CrossRef]
  61. Platanios, E.A.; Kapoor, A.; Horvitz, E. Active Learning amidst Logical Knowledge. arXiv 2017, arXiv:1709.08850. [Google Scholar]
  62. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar] [CrossRef] [Green Version]
  63. Santos, D.; Prudêncio, R.B.C.; Carvalho, A.C.P.L.F.; Dos Santos, D.P. Empirical investigation of active learning strategies. Neurocomputing 2019, 326–327, 15–27. [Google Scholar] [CrossRef]
  64. Lughofer, E. On-line active learning: A new paradigm to improve practical useability of data stream modeling methods. Inf. Sci. 2017, 415, 356–376. [Google Scholar] [CrossRef]
Figure 1. Violin plot of the distribution of Friedman rankings for all the examined learners, comparing the performance of the selected query strategies against the active learning baseline and clarifying the overall performance of all the included approaches.
Figure 2. Comparison of the proposed Logitboost(M5P) against 4 ensemble learners over the 9 largest datasets, drawn from both the binary and the multi-class collections, for UncS(Ent) with R = 5%.
Figure 3. Comparison of the proposed Logitboost(M5P) against 4 ensemble learners over the 9 largest datasets, drawn from both the binary and the multi-class collections, for UncS(LConf) with R = 5%.
Figure 4. Comparison of the proposed Logitboost(M5P) against 4 ensemble learners over the 9 largest datasets, drawn from both the binary and the multi-class collections, for UncS(SMar) with R = 5%.
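Figures 2–4 refer to the three pool-based uncertainty query strategies examined in the study: entropy sampling (Ent), least confidence (LConf) and smallest margin (SMar). The fragment below is only a minimal illustrative sketch, written with NumPy for clarity, of how these three scores can be computed from a probabilistic classifier's class-membership estimates; it is not the implementation used in the study, and the probability matrix shown is a hypothetical placeholder.

```python
import numpy as np

def uncertainty_scores(probs: np.ndarray):
    """Compute the three pool-based uncertainty scores for each unlabeled instance,
    given class-membership probabilities of shape (n_samples, n_classes).
    Higher score = more informative instance (queried first)."""
    eps = 1e-12
    # Entropy sampling: UncS(Ent)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    # Least confidence: UncS(LConf) -- one minus the probability of the most likely class
    least_conf = 1.0 - probs.max(axis=1)
    # Smallest margin: UncS(SMar) -- negative gap between the two most likely classes
    sorted_p = np.sort(probs, axis=1)
    margin = -(sorted_p[:, -1] - sorted_p[:, -2])
    return entropy, least_conf, margin

# Hypothetical usage: query the single most uncertain instance under each strategy.
probs = np.array([[0.70, 0.20, 0.10],
                  [0.40, 0.35, 0.25],
                  [0.95, 0.03, 0.02]])
ent, lconf, smar = uncertainty_scores(probs)
for name, score in [("Ent", ent), ("LConf", lconf), ("SMar", smar)]:
    print(name, "-> query instance", int(np.argmax(score)))
```

Instances with the highest scores are the ones an active learner would forward to the human annotator at each query round.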
Table 1. Formulation of the examined binary class datasets.
Dataset | n | # of Classes | N | Categorical/Numerical Features | Majority/Minority Class
appendicitis | 106 | 2 | 7 | 0/7 | 80.189/19.811%
banana | 5300 | 2 | 2 | 0/2 | 55.17/44.83%
bands | 365 | 2 | 19 | 0/19 | 63.014/36.986%
breast-cancer | 286 | 2 | 9 | 9/0 | 70.28/29.72%
breast-w | 699 | 2 | 9 | 0/9 | 65.522/34.478%
breast | 277 | 2 | 9 | 9/0 | 70.758/29.242%
bupa | 345 | 2 | 6 | 0/6 | 57.971/42.029%
chess | 3196 | 2 | 36 | 36/0 | 52.222/47.778%
coil2000 | 9822 | 2 | 85 | 0/85 | 94.034/5.966%
colic | 368 | 2 | 22 | 15/7 | 63.043/36.957%
colic.orig | 368 | 2 | 27 | 20/7 | 66.304/33.696%
credit-a | 690 | 2 | 15 | 9/6 | 55.507/44.493%
credit-g | 1000 | 2 | 20 | 13/7 | 70.0/30.0%
crx | 653 | 2 | 15 | 9/6 | 54.671/45.329%
diabetes | 768 | 2 | 8 | 0/8 | 65.104/34.896%
german | 1000 | 2 | 20 | 13/7 | 70.0/30.0%
haberman | 306 | 2 | 3 | 0/3 | 73.529/26.471%
heart-statlog | 270 | 2 | 13 | 0/13 | 55.556/44.444%
heart | 270 | 2 | 13 | 0/13 | 55.556/44.444%
hepatitis | 155 | 2 | 19 | 13/6 | 79.355/20.645%
housevotes | 232 | 2 | 16 | 16/0 | 53.448/46.552%
ionosphere | 351 | 2 | 34 | 0/34 | 64.103/35.897%
kr-vs-kp | 3196 | 2 | 36 | 36/0 | 52.222/47.778%
labor | 57 | 2 | 16 | 8/8 | 64.912/35.088%
magic | 19,020 | 2 | 10 | 0/10 | 64.837/35.163%
mammographic | 830 | 2 | 5 | 0/5 | 51.446/48.554%
monk-2 | 432 | 2 | 6 | 0/6 | 52.778/47.222%
mushroom | 8124 | 2 | 22 | 22/0 | 51.797/48.203%
phoneme | 5404 | 2 | 5 | 0/5 | 70.651/29.349%
pima | 768 | 2 | 8 | 0/8 | 65.104/34.896%
ring | 7400 | 2 | 20 | 0/20 | 50.486/49.514%
saheart | 462 | 2 | 9 | 1/8 | 65.368/34.632%
sick | 3772 | 2 | 29 | 22/7 | 93.876/6.124%
sonar | 208 | 2 | 60 | 0/60 | 53.365/46.635%
spambase | 4597 | 2 | 57 | 0/57 | 60.583/39.417%
spectfheart | 267 | 2 | 44 | 0/44 | 79.401/20.599%
tic-tac-toe | 958 | 2 | 9 | 9/0 | 65.344/34.656%
titanic | 2201 | 2 | 3 | 0/3 | 67.697/32.303%
twonorm | 7400 | 2 | 20 | 0/20 | 50.041/49.959%
vote | 435 | 2 | 16 | 16/0 | 61.379/38.621%
wdbc | 569 | 2 | 30 | 0/30 | 62.742/37.258%
wisconsin | 683 | 2 | 9 | 0/9 | 65.007/34.993%
Table 2. Formulation of the examined multi-class datasets.
Dataset | n | # of Classes | N | Categorical/Numerical Features | Majority/Minority Class
abalone | 4174 | 28 | 8 | 1/7 | 16.507/0.048%
anneal | 898 | 6 | 38 | 32/6 | 76.169/0.891%
anneal.orig | 898 | 6 | 38 | 32/6 | 76.169/0.891%
audiology | 226 | 24 | 69 | 69/0 | 25.221/0.884%
automobile | 159 | 6 | 25 | 10/15 | 30.189/10.063%
autos | 205 | 7 | 25 | 10/15 | 32.683/1.463%
balance-scale | 625 | 3 | 4 | 0/4 | 46.08/53.92%
balance | 625 | 3 | 4 | 0/4 | 46.08/53.92%
car | 1728 | 4 | 6 | 6/0 | 70.023/7.755%
cleveland | 297 | 5 | 13 | 0/13 | 53.872/16.162%
connect-4 | 67,557 | 3 | 42 | 42/0 | 65.83/34.17%
dermatology | 358 | 6 | 34 | 0/34 | 31.006/18.995%
ecoli | 336 | 8 | 7 | 0/7 | 42.56/1.19%
flare | 1066 | 6 | 11 | 11/0 | 31.051/12.946%
glass | 214 | 7 | 9 | 0/9 | 35.514/4.206%
hayes-roth | 160 | 3 | 4 | 0/4 | 40.625/59.375%
heart-c | 303 | 5 | 13 | 7/6 | 54.455/0.0%
heart-h | 294 | 5 | 13 | 7/6 | 63.946/0.0%
hypothyroid | 3772 | 4 | 29 | 22/7 | 92.285/2.572%
iris | 150 | 3 | 4 | 0/4 | 33.333/66.666%
kr-vs-kp | 28,056 | 18 | 6 | 6/0 | 16.228/0.374%
led7digit | 500 | 10 | 7 | 0/7 | 11.4/16.4%
letter | 20,000 | 26 | 16 | 0/16 | 4.065/7.34%
lymph | 148 | 4 | 18 | 15/3 | 54.73/4.054%
lymphography | 148 | 4 | 18 | 15/3 | 54.73/4.054%
marketing | 6876 | 9 | 13 | 0/13 | 18.252/15.008%
movement_libras | 360 | 15 | 90 | 0/90 | 6.667/13.334%
newthyroid | 215 | 3 | 5 | 0/5 | 69.767/30.232%
nursery | 12,960 | 5 | 8 | 8/0 | 33.333/2.546%
optdigits | 5620 | 10 | 64 | 0/64 | 10.178/19.716%
page-blocks | 5472 | 5 | 10 | 0/10 | 89.784/2.102%
penbased | 10,992 | 10 | 16 | 0/16 | 10.408/19.196%
post-operative | 87 | 3 | 8 | 8/0 | 71.264/28.735%
primary-tumor | 339 | 22 | 17 | 17/0 | 24.779/0.295%
satimage | 6435 | 7 | 36 | 0/36 | 23.823/9.728%
segment | 2310 | 7 | 19 | 0/19 | 14.286/28.572%
shuttle | 57,999 | 7 | 9 | 0/9 | 78.598/0.039%
soybean | 683 | 19 | 35 | 35/0 | 13.47/3.221%
tae | 151 | 3 | 5 | 0/5 | 34.437/65.563%
texture | 5500 | 11 | 40 | 0/40 | 9.091/18.182%
thyroid | 7200 | 3 | 21 | 0/21 | 92.583/7.417%
vehicle | 846 | 4 | 18 | 0/18 | 25.768/48.581%
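Tables 1 and 2 characterize every benchmark by its size (n), number of classes, total feature count (N), categorical/numerical split and the percentages of the majority and minority classes. As a hedged illustration of how such a summary could be reproduced for an arbitrary dataset, the short pandas sketch below computes the same quantities; the DataFrame and the name of its target column are assumptions of the example, not artifacts of the original study.

```python
import pandas as pd

def describe_dataset(df: pd.DataFrame, target: str = "class"):
    """Summarize a classification dataset in the style of Tables 1 and 2."""
    X = df.drop(columns=[target])
    y = df[target]
    categorical = X.select_dtypes(exclude="number").shape[1]
    numerical = X.select_dtypes(include="number").shape[1]
    freq = y.value_counts(normalize=True) * 100
    return {
        "n": len(df),
        "# of Classes": y.nunique(),
        "N": categorical + numerical,
        "Categorical/Numerical": f"{categorical}/{numerical}",
        "Majority/Minority Class": f"{freq.max():.3f}/{freq.min():.3f}%",
    }

# Hypothetical usage with a toy frame standing in for one of the benchmark datasets.
toy = pd.DataFrame({"a": [1.0, 2.0, 3.0, 4.0],
                    "b": ["x", "y", "x", "x"],
                    "class": ["pos", "pos", "pos", "neg"]})
print(describe_dataset(toy))
```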
Table 3. Classification accuracy of the Logitboost(M5P) against the selected simple base learners for binary datasets under UncS(Ent) strategy with R = 5%.
Datasets | Logitboost(M5P) | 1NN | 3NN | 5NN | J48 | JRip | Random Tree | NB
appendicitis | 75.84 | 80.56 | 82.07 | 81.13 | 80.16 | 80.47 | 79.59 | 78.63
banana | 87.06 | 84.86 | 84.69 | 86.64 | 71.54 | 78.87 | 83.30 | 83.08
bands | 58.63 | 46.76 | 40.08 | 39.00 | 50.96 | 39.09 | 42.56 | 46.76
breast-cancer | 66.08 | 72.15 | 70.63 | 69.82 | 70.15 | 70.40 | 67.03 | 67.84
breast-w | 95.66 | 88.75 | 93.66 | 94.90 | 87.94 | 86.55 | 88.08 | 90.10
breast | 69.92 | 72.31 | 71.71 | 72.68 | 69.67 | 69.18 | 69.32 | 69.47
bupa | 61.26 | 48.21 | 44.25 | 44.06 | 49.37 | 43.29 | 52.17 | 52.24
chess | 97.68 | 80.44 | 79.81 | 79.62 | 92.98 | 88.64 | 82.64 | 89.66
coil2000 | 92.63 | 92.49 | 93.71 | 93.99 | 94.03 | 94.02 | 91.54 | 92.73
colic.ORIG | 66.39 | 64.77 | 66.75 | 65.67 | 66.13 | 69.85 | 67.21 | 67.82
colic | 78.54 | 72.47 | 66.56 | 69.84 | 63.05 | 64.95 | 62.05 | 68.51
credit-a | 81.74 | 67.83 | 63.86 | 67.54 | 84.30 | 70.58 | 64.30 | 72.21
credit-g | 69.47 | 69.43 | 70.53 | 70.13 | 67.27 | 69.87 | 67.27 | 68.87
crx | 79.48 | 68.30 | 62.62 | 67.59 | 86.17 | 70.48 | 60.59 | 70.18
diabetes | 71.22 | 67.62 | 65.93 | 65.67 | 68.32 | 66.62 | 68.84 | 68.89
german | 69.23 | 69.34 | 70.57 | 70.47 | 69.97 | 69.83 | 67.14 | 68.73
haberman | 72.22 | 43.14 | 32.57 | 31.92 | 42.16 | 31.70 | 40.74 | 48.22
heart-c | 77.23 | 68.32 | 66.12 | 70.85 | 64.25 | 58.64 | 60.62 | 65.49
heart-h | 73.36 | 73.47 | 74.94 | 69.05 | 65.99 | 67.12 | 66.78 | 69.09
heart-statlog | 75.06 | 72.96 | 63.21 | 63.95 | 70.25 | 59.26 | 65.68 | 66.67
heart | 74.81 | 72.35 | 61.98 | 65.93 | 63.95 | 62.35 | 63.58 | 66.91
hepatitis | 78.28 | 59.84 | 61.76 | 64.37 | 54.01 | 46.23 | 44.93 | 56.48
housevotes | 96.11 | 90.51 | 89.50 | 90.80 | 88.49 | 93.81 | 87.04 | 92.32
ionosphere | 84.52 | 79.68 | 73.03 | 69.80 | 65.62 | 53.37 | 59.45 | 65.78
kr-vs-kp | 97.48 | 80.09 | 80.03 | 80.04 | 90.93 | 91.28 | 83.07 | 90.61
labor | 76.02 | 53.80 | 74.27 | 49.71 | 42.69 | 44.44 | 52.05 | 57.50
magic | 84.53 | 76.82 | 74.21 | 77.20 | 80.64 | 79.67 | 79.44 | 81.21
mammographic | 81.65 | 65.46 | 63.38 | 61.21 | 66.75 | 64.58 | 70.20 | 72.14
monk-2 | 98.30 | 73.53 | 71.60 | 70.14 | 97.22 | 95.52 | 81.40 | 91.74
mushroom | 99.77 | 99.98 | 99.98 | 99.98 | 98.52 | 99.20 | 99.10 | 99.36
phoneme | 81.49 | 78.23 | 74.17 | 76.17 | 73.30 | 72.48 | 76.49 | 76.82
pima | 71.09 | 65.45 | 66.71 | 65.58 | 66.84 | 65.32 | 69.70 | 68.71
ring | 89.20 | 76.50 | 73.08 | 69.57 | 81.28 | 65.74 | 79.53 | 78.16
saheart | 62.63 | 63.35 | 65.44 | 66.09 | 64.86 | 66.02 | 63.28 | 63.97
sick | 98.37 | 94.72 | 95.23 | 95.09 | 95.64 | 96.34 | 94.87 | 96.53
sonar | 65.71 | 55.77 | 49.35 | 48.56 | 56.89 | 48.22 | 56.07 | 56.67
spambase | 90.80 | 75.26 | 71.53 | 71.03 | 85.87 | 84.48 | 79.52 | 84.93
spectfheart | 73.03 | 47.44 | 49.56 | 44.57 | 56.05 | 45.44 | 49.94 | 56.14
tic-tac-toe | 83.61 | 76.23 | 71.99 | 70.84 | 65.62 | 68.23 | 68.09 | 73.31
titanic | 77.92 | 77.62 | 77.09 | 77.16 | 73.80 | 73.30 | 77.34 | 76.19
twonorm | 96.23 | 91.39 | 91.25 | 93.58 | 78.45 | 79.34 | 77.03 | 84.20
vote | 95.10 | 92.03 | 93.33 | 93.87 | 93.03 | 89.35 | 88.58 | 91.01
wdbc | 95.66 | 88.69 | 88.70 | 91.04 | 84.29 | 81.43 | 86.18 | 87.76
wisconsin | 96.10 | 87.85 | 95.17 | 95.90 | 87.26 | 87.94 | 88.96 | 91.00
Bold: the best value per dataset (row) achieved by all the included algorithms (columns).
Table 4. Classification accuracy of the Logitboost(M5P) against the selected simple base learners for multi-class datasets under UncS(Ent) strategy with R = 5%.
Datasets | Logitboost(M5P) | 1NN | 3NN | 5NN | J48 | JRip | Random Tree | NB
abalone | 22.03 | 17.42 | 17.38 | 21.92 | 20.84 | 11.48 | 17.25 | 16.92
anneal.ORIG | 84.71 | 83.93 | 84.52 | 83.48 | 75.54 | 73.64 | 81.99 | 80.12
anneal | 92.80 | 75.24 | 82.14 | 86.23 | 85.86 | 84.59 | 86.37 | 87.92
audiology | 48.84 | 34.93 | 39.56 | 34.67 | 55.76 | 26.72 | 32.02 | 35.86
automobile | 47.59 | 34.38 | 33.54 | 26.21 | 36.27 | 17.82 | 39.83 | 35.08
autos | 43.74 | 28.61 | 21.95 | 18.84 | 37.88 | 11.21 | 35.93 | 30.29
balance-scale | 85.60 | 73.39 | 77.60 | 79.78 | 64.37 | 61.55 | 67.68 | 71.61
balance | 87.62 | 71.57 | 77.39 | 79.68 | 65.23 | 64.47 | 67.89 | 73.33
car | 89.91 | 79.24 | 80.71 | 80.34 | 71.74 | 71.28 | 73.53 | 78.24
cleveland | 50.84 | 55.44 | 55.56 | 55.22 | 52.97 | 53.20 | 52.19 | 52.08
connect-4 | 76.32 | 70.65 | 72.57 | 73.07 | 71.36 | 69.24 | 64.40 | 69.99
dermatology | 93.95 | 80.43 | 90.79 | 91.72 | 66.88 | 53.79 | 63.98 | 70.57
ecoli | 73.12 | 58.83 | 69.35 | 69.44 | 62.00 | 52.08 | 57.84 | 61.01
flare | 72.95 | 66.57 | 67.10 | 63.44 | 61.92 | 67.86 | 64.20 | 68.33
glass | 51.87 | 39.38 | 38.13 | 40.96 | 36.93 | 36.47 | 41.00 | 43.11
hayes-roth | 51.89 | 44.54 | 43.13 | 40.61 | 41.69 | 41.88 | 49.64 | 47.80
hypothyroid | 99.43 | 91.25 | 92.82 | 92.54 | 97.92 | 97.68 | 94.29 | 97.13
iris | 83.78 | 85.33 | 85.11 | 83.56 | 64.22 | 43.78 | 71.56 | 66.37
kr-vs-kp | 47.88 | 39.74 | 40.16 | 39.96 | 30.66 | 15.50 | 28.88 | 30.75
led7digit | 56.00 | 61.34 | 51.48 | 47.53 | 41.33 | 25.39 | 47.40 | 42.93
letter | 88.49 | 79.07 | 75.23 | 73.98 | 62.63 | 58.19 | 56.75 | 67.81
lymph | 74.77 | 65.51 | 69.82 | 69.57 | 55.90 | 55.64 | 61.49 | 63.96
lymphography | 70.72 | 66.41 | 72.29 | 69.45 | 59.46 | 57.19 | 61.71 | 63.21
marketing | 26.88 | 26.87 | 25.16 | 26.84 | 26.52 | 23.27 | 25.56 | 25.24
movement_libras | 38.15 | 39.54 | 33.98 | 32.22 | 24.44 | 10.37 | 20.56 | 23.02
newthyroid | 83.84 | 81.87 | 83.41 | 85.86 | 76.87 | 71.48 | 79.86 | 78.39
nursery | 99.57 | 85.59 | 87.92 | 86.25 | 88.12 | 82.81 | 83.16 | 88.51
optdigits | 97.16 | 92.89 | 96.52 | 97.11 | 72.25 | 68.79 | 61.92 | 75.96
page-blocks | 96.52 | 93.07 | 94.67 | 94.54 | 94.35 | 94.24 | 93.75 | 94.84
penbased | 99.05 | 95.60 | 98.39 | 98.38 | 86.70 | 82.17 | 82.27 | 87.83
post-operative | 68.20 | 68.58 | 70.88 | 71.26 | 71.26 | 71.26 | 67.43 | 68.97
primary-tumor | 30.29 | 29.99 | 29.79 | 27.63 | 24.29 | 25.66 | 28.12 | 28.02
satimage | 87.38 | 68.38 | 85.92 | 85.39 | 70.01 | 64.72 | 61.53 | 71.21
segment | 94.49 | 86.81 | 88.74 | 86.58 | 85.11 | 77.52 | 78.14 | 83.38
shuttle | 99.98 | 99.72 | 99.83 | 99.79 | 99.79 | 99.81 | 99.69 | 99.82
soybean | 76.53 | 78.67 | 66.96 | 57.30 | 46.18 | 46.91 | 47.28 | 56.91
tae | 45.26 | 38.41 | 34.24 | 37.95 | 36.41 | 33.80 | 40.59 | 39.88
texture | 98.21 | 90.79 | 94.53 | 95.28 | 80.82 | 75.14 | 70.76 | 81.37
thyroid | 99.60 | 62.75 | 81.70 | 87.46 | 98.92 | 98.60 | 93.14 | 97.11
vehicle | 70.57 | 45.90 | 40.70 | 45.11 | 45.63 | 39.95 | 46.22 | 52.25
vowel | 49.43 | 26.33 | 14.04 | 15.79 | 35.35 | 18.72 | 26.73 | 31.63
waveform-5000 | 82.65 | 63.79 | 68.40 | 75.93 | 68.51 | 62.35 | 59.89 | 68.30
wine | 96.63 | 77.70 | 84.62 | 86.15 | 60.07 | 46.28 | 55.10 | 66.00
winequalityRed | 51.53 | 33.25 | 47.07 | 47.59 | 37.92 | 35.81 | 31.08 | 39.48
winequalityWhite | 49.16 | 37.31 | 45.12 | 45.15 | 41.00 | 35.08 | 34.18 | 39.47
yeast | 51.84 | 37.71 | 39.92 | 46.45 | 36.07 | 30.21 | 33.76 | 38.61
zoo | 26.43 | 75.55 | 81.55 | 68.26 | 60.24 | 41.24 | 50.69 | 39.45
Bold: the best value per dataset (row) achieved by all the included algorithms (columns).
Table 5. Classification accuracy of the Logitboost(M5P) against the selected ensemble base learners for binary datasets under UncS(Ent) strategy with R = 5%.
Datasets | Logitboost(M5P) | Logitboost(DStump) | Bagging(J48) | Ada(DStump) | LMT
banana | 87.06 | 84.48 | 83.35 | 59.25 | 71.91
bands | 58.63 | 49.32 | 47.21 | 55.14 | 57.90
breast-w | 95.66 | 91.28 | 90.32 | 92.61 | 95.52
chess | 97.68 | 90.00 | 95.78 | 84.38 | 98.14
coil2000 | 92.63 | 92.30 | 93.49 | 94.03 | 94.03
credit-a | 81.74 | 72.75 | 81.06 | 84.88 | 79.76
credit-g | 69.47 | 68.53 | 69.53 | 70.63 | 69.80
german | 69.23 | 68.37 | 68.73 | 70.47 | 69.47
heart-statlog | 75.06 | 69.14 | 60.25 | 70.49 | 70.25
housevotes | 96.11 | 91.82 | 94.67 | 94.82 | 95.25
ionosphere | 84.52 | 69.92 | 62.20 | 74.45 | 83.86
kr-vs-kp | 97.48 | 90.39 | 95.58 | 86.57 | 98.04
magic | 84.53 | 81.73 | 82.88 | 77.14 | 84.33
mammographic | 81.65 | 74.66 | 79.52 | 79.92 | 80.97
monk-2 | 98.30 | 90.48 | 97.22 | 95.76 | 93.98
mushroom | 99.77 | 99.41 | 99.36 | 97.58 | 99.60
phoneme | 81.49 | 78.26 | 80.47 | 72.25 | 79.42
pima | 71.09 | 69.84 | 69.01 | 70.66 | 73.13
ring | 89.20 | 82.30 | 87.62 | 49.51 | 83.88
sick | 98.37 | 96.59 | 98.12 | 97.52 | 98.34
sonar | 65.71 | 59.48 | 52.57 | 60.73 | 60.28
spambase | 90.80 | 85.08 | 89.66 | 83.76 | 92.70
spectfheart | 73.03 | 59.70 | 49.44 | 69.91 | 72.28
tic-tac-toe | 83.61 | 75.01 | 67.40 | 69.07 | 72.41
titanic | 77.92 | 77.15 | 78.27 | 77.66 | 77.56
twonorm | 96.23 | 85.82 | 86.36 | 84.81 | 97.79
vote | 95.10 | 91.56 | 88.35 | 92.72 | 85.06
wdbc | 95.66 | 89.87 | 85.76 | 94.38 | 97.01
wisconsin | 96.10 | 92.02 | 92.97 | 92.97 | 96.58
Bold: the best value per dataset (row) achieved by all the included algorithms (columns).
Table 6. Classification accuracy of the Logitboost(M5P) against the selected ensemble base learners for multi-class datasets under UncS(Ent) strategy with R = 5%.
Datasets | Logitboost(M5P) | Logitboost(DStump) | Bagging(J48) | Ada(DStump) | LMT
abalone | 22.03 | 18.73 | 19.94 | 16.73 | 22.73
anneal.ORIG | 84.71 | 82.27 | 76.80 | 75.57 | 82.75
anneal | 92.80 | 89.03 | 85.15 | 77.02 | 92.28
audiology | 48.84 | 38.91 | 49.10 | 33.90 | 54.27
automobile | 47.59 | 40.83 | 37.11 | 23.48 | 43.40
balance-scale | 85.60 | 74.96 | 71.84 | 49.81 | 84.00
balance | 87.62 | 76.28 | 66.98 | 55.90 | 85.49
car | 89.91 | 80.56 | 78.34 | 70.79 | 87.29
cleveland | 50.84 | 51.70 | 53.87 | 53.98 | 53.20
connect-4 | 76.32 | 70.24 | 72.67 | 65.83 | 73.44
dermatology | 93.95 | 76.17 | 86.31 | 48.70 | 93.02
ecoli | 73.12 | 63.99 | 68.15 | 62.00 | 71.73
flare | 72.95 | 68.49 | 68.73 | 53.47 | 69.20
glass | 51.87 | 45.33 | 40.80 | 38.95 | 42.39
hayes-roth | 51.89 | 49.78 | 45.24 | 48.12 | 46.27
hypothyroid | 99.43 | 96.95 | 99.51 | 93.83 | 99.11
iris | 83.78 | 73.90 | 70.22 | 80.00 | 73.56
kr-vs-kp | 47.88 | 35.84 | 33.56 | 10.04 | 40.32
led7digit | 56.00 | 48.78 | 47.00 | 14.94 | 57.00
letter | 88.49 | 71.02 | 68.15 | 6.91 | 78.27
marketing | 26.88 | 25.89 | 26.59 | 18.64 | 29.35
movement_libras | 38.15 | 27.24 | 24.35 | 10.93 | 40.37
newthyroid | 83.84 | 80.70 | 78.79 | 81.26 | 89.15
nursery | 99.57 | 90.41 | 90.74 | 64.54 | 95.40
optdigits | 97.16 | 78.35 | 81.60 | 18.74 | 95.07
page-blocks | 96.52 | 95.04 | 96.06 | 92.60 | 96.64
penbased | 99.05 | 89.72 | 91.95 | 20.52 | 97.45
primary-tumor | 30.29 | 28.81 | 24.39 | 25.86 | 24.98
satimage | 87.38 | 73.37 | 81.66 | 33.63 | 83.01
segment | 94.49 | 85.34 | 89.25 | 28.51 | 91.53
shuttle | 99.98 | 99.83 | 99.96 | 84.23 | 99.92
soybean | 76.53 | 60.24 | 43.88 | 13.47 | 74.73
tae | 45.26 | 41.91 | 35.76 | 35.74 | 37.08
texture | 98.21 | 83.45 | 84.93 | 16.08 | 99.59
thyroid | 99.60 | 96.62 | 99.36 | 96.87 | 99.54
vehicle | 70.57 | 56.34 | 52.44 | 26.04 | 68.36
vowel | 49.43 | 35.93 | 39.66 | 14.14 | 50.10
waveform-5000 | 82.65 | 70.28 | 75.00 | 55.37 | 86.33
wine | 96.63 | 72.58 | 64.28 | 69.14 | 80.56
winequalityRed | 51.53 | 40.70 | 46.32 | 42.21 | 50.49
winequalityWhite | 49.16 | 40.94 | 47.39 | 31.19 | 49.24
yeast | 51.84 | 41.40 | 47.15 | 21.29 | 54.09
Bold: the best value per dataset (row) achieved by all the included algorithms (columns).
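Tables 5 and 6 contrast the proposed Logitboost(M5P) with its stump-based counterpart Logitboost(DStump) and other ensembles. To make the role of the internal regressor concrete, the fragment below is only an illustrative sketch of the two-class LogitBoost loop of Friedman et al. [17] with a pluggable regression base learner; scikit-learn's DecisionTreeRegressor is used purely as a stand-in, since neither M5P nor the WEKA implementation employed in the study is available in that library, and the multi-class extension is omitted.

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeRegressor

def logitboost_fit(X, y, base=DecisionTreeRegressor(max_depth=3), n_iter=20):
    """Two-class LogitBoost with a regression base learner.
    y must contain 0/1 labels; returns the list of fitted stage regressors."""
    y = np.asarray(y, dtype=float)
    F = np.zeros(len(y))          # additive model, initially 0
    p = np.full(len(y), 0.5)      # class-1 probabilities, initially 1/2
    stages = []
    for _ in range(n_iter):
        w = np.clip(p * (1.0 - p), 1e-6, None)   # Newton weights
        z = (y - p) / w                          # working responses
        f = clone(base).fit(X, z, sample_weight=w)
        stages.append(f)
        F += 0.5 * f.predict(X)
        p = 1.0 / (1.0 + np.exp(-2.0 * F))
    return stages

def logitboost_predict(stages, X):
    F = sum(0.5 * f.predict(X) for f in stages)
    return (F > 0).astype(int)

# Hypothetical usage on a small synthetic problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = logitboost_fit(X, y, n_iter=10)
print("training accuracy:", (logitboost_predict(model, X) == y).mean())
```

Swapping the base regressor (a one-node stump versus a deeper model tree) is exactly the design axis that the Logitboost(M5P) versus Logitboost(DStump) comparison explores, while the number of boosting iterations is the hyperparameter revisited in Table 10.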
Table 7. Number of best achieved performances over the examined datasets per base learner for the included query strategies and R-based experiments in the comparison of simple learners.
QS (metric) | Logitboost(M5P) | 1NN | 3NN | 5NN | J48 | Random Tree | JRip | NB
Binary Datasets
R = 5%
UncS(Ent) | 32 | 2 | 4 | 3 | 3 | 0 | 1 | 0
UncS(LConf) | 33 | 2 | 4 | 3 | 2 | 0 | 1 | 0
UncS(SMar) | 33 | 2 | 4 | 3 | 2 | 1 | 0 | 0
R = 10%
UncS(Ent) | 30 | 0 | 3 | 7 | 2 | 0 | 3 | 0
UncS(LConf) | 30 | 0 | 3 | 7 | 2 | 0 | 3 | 0
UncS(SMar) | 31 | 0 | 3 | 7 | 3 | 1 | 0 | 0
R = 15%
UncS(Ent) | 26 | 0 | 6 | 8 | 3 | 0 | 2 | 0
UncS(LConf) | 26 | 0 | 6 | 8 | 3 | 0 | 2 | 0
UncS(SMar) | 27 | 0 | 6 | 9 | 3 | 0 | 0 | 0
R = 20%
UncS(Ent) | 23 | 2 | 6 | 4 | 5 | 1 | 4 | 0
UncS(LConf) | 23 | 2 | 6 | 4 | 5 | 1 | 4 | 0
UncS(SMar) | 26 | 2 | 6 | 5 | 5 | 1 | 0 | 0
Multi-Class Datasets
R = 5%
UncS(Ent) | 37 | 4 | 3 | 2 | 2 | 0 | 1 | 0
UncS(LConf) | 38 | 3 | 3 | 1 | 2 | 1 | 1 | 0
UncS(SMar) | 37 | 3 | 4 | 1 | 2 | 1 | - | 0
R = 10%
UncS(Ent) | 36 | 2 | 4 | 5 | 0 | 0 | 0 | 0
UncS(LConf) | 38 | 1 | 6 | 2 | 0 | 0 | 0 | 0
UncS(SMar) | 38 | 0 | 7 | 1 | 1 | 0 | - | 0
R = 15%
UncS(Ent) | 34 | 5 | 3 | 4 | 1 | 0 | 1 | 0
UncS(LConf) | 35 | 3 | 6 | 2 | 1 | 0 | 1 | 0
UncS(SMar) | 35 | 3 | 3 | 5 | 1 | 0 | - | 0
R = 20%
UncS(Ent) | 35 | 1 | 3 | 3 | 4 | 1 | 0 | 0
UncS(LConf) | 35 | 1 | 4 | 2 | 5 | 1 | 0 | 0
UncS(SMar) | 36 | 2 | 2 | 2 | 4 | 1 | - | 0
Total | 774 | 40 | 105 | 98 | 61 | 10 | 24 | 0
Table 8. Number of best achieved performances over the examined datasets per base learner for the included query strategies and R-based experiments in the comparison with ensemble learners.
QS (metric) | Logitboost(M5P) | Logitboost(DStump) | Bagging(J48) | Ada(DStump) | LMT
Binary Datasets
R = 5%
UncS(Ent) | 17 | 0 | 1 | 4 | 8
UncS(LConf) | 16 | 0 | 1 | 2 | 10
UncS(SMar) | 16 | 0 | 1 | 2 | 10
R = 10%
UncS(Ent) | 16 | 0 | 3 | 3 | 9
UncS(LConf) | 18 | 0 | 3 | 3 | 7
UncS(SMar) | 18 | 0 | 2 | 4 | 7
R = 15%
UncS(Ent) | 14 | 0 | 4 | 3 | 9
UncS(LConf) | 12 | 0 | 4 | 4 | 11
UncS(SMar) | 12 | 0 | 4 | 3 | 12
R = 20%
UncS(Ent) | 10 | 1 | 1 | 3 | 14
UncS(LConf) | 10 | 1 | 1 | 3 | 14
UncS(SMar) | 12 | 0 | 1 | 3 | 14
Multi-Class Datasets
R = 5%
UncS(Ent) | 28 | 1 | 0 | 1 | 12
UncS(LConf) | 27 | 1 | 0 | 1 | 13
UncS(SMar) | 27 | 1 | 0 | 0 | 14
R = 10%
UncS(Ent) | 28 | 5 | 0 | 0 | 9
UncS(LConf) | 28 | 3 | 0 | 0 | 11
UncS(SMar) | 26 | 2 | 0 | 0 | 14
R = 15%
UncS(Ent) | 29 | 1 | 0 | 1 | 12
UncS(LConf) | 26 | 1 | 0 | 0 | 15
UncS(SMar) | 27 | 2 | 0 | 0 | 13
R = 20%
UncS(Ent) | 28 | 1 | 0 | 0 | 13
UncS(LConf) | 26 | 4 | 0 | 0 | 12
UncS(SMar) | 28 | 3 | 0 | 0 | 11
Total | 499 | 51 | 2 | 40 | 274
Table 9. Friedman ranking scores of the second experimental setup.
Active Learning Approaches | Binary: R = 5% | 10% | 15% | 20% | Multiclass: R = 5% | 10% | 15% | 20% | Average
UncS(Logitboost(M5P)) | 2.138 | 2.511 | 2.477 | 3.040 | 2.270 | 1.948 | 1.921 | 2.270 | 2.322
UncS(LMT) | 3.787 | 3.000 | 2.563 | 2.328 | 3.321 | 2.813 | 2.528 | 3.321 | 2.958
RS(LMT) | 4.437 | 4.230 | 4.540 | 3.632 | 3.468 | 3.603 | 3.659 | 3.468 | 3.880
RS(Logitboost(M5P)) | 4.851 | 5.109 | 4.839 | 4.983 | 3.079 | 3.016 | 3.452 | 3.079 | 4.051
RS(Ada(DStump)) | 5.437 | 6.132 | 6.943 | 6.977 | 8.238 | 8.889 | 9.246 | 8.238 | 7.513
RS(Bagging(J48)) | 5.833 | 6.672 | 6.483 | 6.316 | 5.921 | 6.298 | 6.286 | 5.921 | 6.216
UncS(Ada(DStump)) | 6.626 | 5.707 | 5.931 | 6.575 | 9.381 | 9.381 | 8.937 | 9.381 | 7.740
UncS(Bagging(J48)) | 6.339 | 5.282 | 4.695 | 4.908 | 6.560 | 5.583 | 5.440 | 6.560 | 5.671
RS(Logitboost(DStump)) | 7.080 | 7.678 | 7.793 | 8.241 | 5.516 | 6.111 | 6.214 | 5.516 | 6.769
UncS(Logitboost(DStump)) | 8.471 | 8.678 | 8.736 | 8.000 | 7.246 | 7.357 | 7.317 | 7.246 | 7.881
Bold: the best (lowest) ranking value per column achieved among all the included approaches (rows).
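Tables 9 and 10 report average Friedman ranking scores: for every dataset the competing approaches are ranked by accuracy (rank 1 = best, with ties typically receiving the average rank), and the ranks are then averaged over all datasets, so lower values indicate better overall behavior. A small sketch of this ranking step over an assumed accuracy matrix (datasets as rows, approaches as columns) is given below; the numbers are illustrative only.

```python
import numpy as np
from scipy.stats import rankdata

def friedman_ranks(acc: np.ndarray):
    """Average Friedman ranks of competing approaches.
    acc has shape (n_datasets, n_approaches); higher accuracy = better (rank 1)."""
    ranks = np.vstack([rankdata(-row, method="average") for row in acc])
    return ranks.mean(axis=0)

# Hypothetical accuracies of three approaches on four datasets.
acc = np.array([[0.91, 0.88, 0.85],
                [0.72, 0.75, 0.70],
                [0.83, 0.83, 0.80],
                [0.95, 0.90, 0.92]])
print(friedman_ranks(acc))   # lower average rank = better overall
```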
Table 10. Friedman ranking scores for hyperparameter tuning of the Logitboost scheme.
Active Learning Approaches | Binary: R = 5% | 10% | 15% | 20% | Multiclass: R = 5% | 10% | 15% | 20% | Average
Ent(5 iterations) | 3.32 | 3.88 | 3.69 | 3.50 | 3.94 | 3.82 | 3.95 | 3.83 | 3.74
Ent(10 iterations) | 3.38 | 2.86 | 3.09 | 2.93 | 2.47 | 3.07 | 2.89 | 3.39 | 3.01
Ent(15 iterations) | 3.01 | 2.98 | 2.58 | 3.20 | 3.15 | 3.07 | 3.02 | 2.60 | 3.00
Ent(20 iterations) | 2.36 | 2.75 | 2.74 | 2.51 | 2.73 | 2.74 | 2.66 | 2.78 | 2.66
Ent(25 iterations) | 2.93 | 2.53 | 2.90 | 2.85 | 2.71 | 2.29 | 2.48 | 2.40 | 2.64
LConf(5 iterations) | 3.39 | 3.83 | 3.65 | 3.51 | 3.46 | 4.00 | 3.95 | 3.86 | 3.71
LConf(10 iterations) | 3.31 | 2.95 | 3.05 | 3.18 | 2.90 | 2.82 | 2.99 | 2.66 | 2.98
LConf(15 iterations) | 3.02 | 2.93 | 2.94 | 2.94 | 3.20 | 2.88 | 3.20 | 3.59 | 3.09
LConf(20 iterations) | 2.30 | 2.77 | 2.63 | 2.51 | 2.81 | 2.73 | 2.51 | 2.60 | 2.61
LConf(25 iterations) | 2.99 | 2.51 | 2.74 | 2.85 | 2.63 | 2.56 | 2.35 | 2.30 | 2.62
SMar(5 iterations) | 3.35 | 3.78 | 3.65 | 3.47 | 3.83 | 3.99 | 3.82 | 3.89 | 3.72
SMar(10 iterations) | 3.33 | 2.95 | 3.09 | 2.90 | 2.98 | 2.95 | 3.17 | 3.23 | 3.08
SMar(15 iterations) | 3.03 | 2.93 | 2.60 | 3.15 | 2.72 | 2.80 | 2.63 | 2.57 | 2.80
SMar(20 iterations) | 2.30 | 2.80 | 2.73 | 2.49 | 2.78 | 2.83 | 3.07 | 2.76 | 2.72
SMar(25 iterations) | 2.99 | 2.53 | 2.93 | 3.00 | 2.69 | 2.44 | 2.31 | 2.54 | 2.68
Bold: the best (lowest) ranking value per column achieved among all the included configurations (rows).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
