Article

How Is a Data-Driven Approach Better than Random Choice in Label Space Division for Multi-Label Classification?

by Piotr Szymański 1,2,*, Tomasz Kajdanowicz 1 and Kristian Kersting 3

1 Department of Computational Intelligence, Wrocław University of Technology, Wybrzeże Stanisława Wyspiańskiego 27, 50-370 Wrocław, Poland
2 Illimites Foundation, Gajowicka 64 lok. 1, 53-422 Wrocław, Poland
3 Department of Computer Science, TU Dortmund University, August-Schmidt-Straße 4, 44221 Dortmund, Germany
* Author to whom correspondence should be addressed.
Entropy 2016, 18(8), 282; https://doi.org/10.3390/e18080282
Submission received: 1 February 2016 / Revised: 12 July 2016 / Accepted: 19 July 2016 / Published: 30 July 2016

Abstract

We propose using five data-driven community detection approaches from social networks to partition the label space in the task of multi-label classification, as an alternative to random partitioning into equal subsets as performed by RAkELd. We evaluate modularity maximization using fast greedy and leading eigenvector approximations, as well as the infomap, walktrap and label propagation algorithms. For this purpose, we construct a label co-occurrence graph (in both weighted and unweighted versions) based on the training data and perform community detection to partition the label set. Each partition then constitutes a label space for a separate multi-label classification sub-problem. As a result, we obtain an ensemble of multi-label classifiers that jointly covers the whole label space. Based on the binary relevance and label powerset classification methods, we compare community detection methods for label space division against random baselines on 12 benchmark datasets over five evaluation measures. We find that data-driven approaches are more efficient and more likely to outperform RAkELd than binary relevance or label powerset is, in every evaluated measure. For all measures apart from Hamming loss, data-driven approaches are significantly better than RAkELd (α = 0.05), and at least one data-driven approach is more likely to outperform RAkELd than a priori methods, even in the case of RAkELd's best performance. This is the largest RAkELd evaluation published to date, with 250 samplings per value for 10 values of the RAkELd parameter k on 12 datasets.

Graphical Abstract

1. Introduction

Shannon's work on the unpredictability of information content inspired us to look for an area of multi-label classification that requires more insight: where has the field still been using random approaches to handling data uncertainty when non-random methods could shed light on the problem and provide better predictions?
Interestingly enough, random methods are prevalent in well-cited multi-label classification approaches, especially in the problem of label space partitioning, which is a core issue in the problem-transformation approach to multi-label classification.
A large family of multi-label classification methods, called problem transformation approaches, converts an instance of a multi-label classification problem into one or more single-label single-class or multi-class classification problems, performs such classification and then converts the results back into multi-label classification results.
Such a situation stems from the fact that, historically, the field of classification started out with solving single-label classification problems. In general, a classification problem consists of understanding the relationship (a function) between a set of objects and a set of categories that should be assigned to them. If each object is allocated to at most one category, the problem is called single-label classification. When multiple assignments per instance are allowed, we are dealing with a multi-label classification scenario.
In the single-label scenario, one variant deals with the case when there is only one category, i.e., the problem is a binary choice of whether to assign the category or not; such a scenario is called single-class classification, e.g., classifying whether or not there is a car in a picture. The other case is when we have to choose at most one of many possible classes; this is called multi-class classification, e.g., classifying a picture by the dominant car brand present in it. The multi-label variant of this example would concern classifying a picture with all car brands present in it.
As both single- and multi-class classification problems have been researched extensively over the last few decades, it is natural to transform the multi-label classification case, by dividing the label space, into a single- or multi-class scenario. A great introduction to the field can be found in [1].
The two basic approaches to such a transformation are binary relevance and label powerset. Binary Relevance (BR) takes an a priori assumption that the label space is entirely separable, thus converting the multi-label problem into a family of single-class problems, one for every label, and making a decision whether to apply it or not. Converting the results back to multi-label classification is based just on taking a union of all assigned labels. Regarding our example, binary relevance assumes that correlations between car brands are not important enough and discards them a priori by classifying with each brand separately.
Label Powerset (LP) makes the opposite a priori assumption: the label space is non-divisible label-wise, and each possible label combination becomes a unique class. Such an approach yields a multi-class classification problem on a set of classes equal to the powerset of the label set, i.e., growing exponentially if one treats all label combinations as possible. In practice, this would be intractable. Thus, as [2] note, label powerset is most commonly implemented to handle only those label combinations that occur in the training set and, as such, is prone to overfitting. It is also, per [3], prone to differences in label combination distributions between the training set and the test set, as well as to an imbalance in how label combinations are distributed in general.
To remedy the overfitting of label powerset, Tsoumakas et al. [3] propose to divide the label space into smaller subspaces and use label powerset in these subspaces. The proposed improvement comes from the fact that it should be easier for label powerset to handle the label combinations occurring in a smaller space. The proposed approach is called random k-label sets (RAkEL) and comes in two variants: a label space partitioning variant, RAkELd, which divides the label set into k disjoint subsets, and RAkELo, a sampling approach that allows label subspaces to overlap. In our example, RAkEL would randomly select subsets of brands and use the label powerset approach for brand combinations in each of the subspaces.
While these methods were being developed, advances in other fields brought more and more tools to explore relations between entities in data. Research on social and complex networks has flourished since most of the well-established multi-label methods were published. In this paper, we propose a data-driven approach to label space partitioning in multi-label classification. While we tackle the problem of classification, our goal is to spark a reflection on how data-driven approaches to machine learning, using new methods from complex/social networks, can improve established procedures. We show that this direction is worth pursuing by comparing method-driven and data-driven approaches to partitioning the label space.
Why should one rely on label space division at random? Should not a data-driven approach be better than random choice? Are methods that perform simplistic a priori assumptions truly worse than the random approach? What are the variances of result quality upon label space partitioning? Instead of selecting random subspaces of brands, we could consider that some city brands occur more often with each other and less so with other suburban brands. Based on such a premise, we could build a weighted graph depicting the frequency of how often two brands occur together in photos. Then, using well-established community detection methods on this graph, we could provide a data-driven partition for the label space.
In this paper, we wish to follow Shannon's ambition to search for a data-driven solution, an approach of finding structure instead of accepting uncertainty. We run RAkELd on 12 benchmark datasets with different values of the parameter k, taking k to be equal to 10%, 20%, …, 90% of the label set size. We draw 250 distinct partitions of the label set into subsets of k labels, per every value of k. In case there are fewer than 250 possible partitions of the label space (e.g., because there are fewer than 10 labels), we consider all possible partitions. We then compare these results against the performance of methods based on a priori assumptions—binary relevance and label powerset—as well as well-established community detection methods employed in social and complex network analysis to detect patterns in the label space. For each of the measures, we state the following research hypotheses:
RH1:
a data-driven approach performs statistically better than the average random baseline;
RH2:
a data-driven approach is more likely to outperform RAkELd than methods based on a priori assumptions;
RH3:
a data-driven approach has a higher likelihood to outperform RAkELd in the worst case than methods based on a priori assumptions;
RH4:
a data-driven approach is more likely to perform better than RAkELd than otherwise, i.e., the worst-case likelihood is greater than 0.5;
RH5:
the data-driven approach is more time efficient than RAkELd.
We describe the multi-label classification problem and existing methods in Section 3, our new proposed data-driven approach in Section 4 and compare the results of the likelihood of a data-driven approach being better than randomness in Section 6. We provide the technical detail of the experimental scenario in Section 5. We conclude with the main findings of the paper and future work in Section 7.

2. Related Work

Our study builds on two kinds of approaches to multi-label classification: problem transformation and ensembles. We extend the label powerset problem transformation method by employing an ensemble of classifiers to classify the obtained partitions of the label space separately. We show that partitioning the label space randomly, as is done in the RAkEL approach, can be improved upon by using a variety of methods to infer the structure of partitions from the training data. We extend the original evaluation of random k-label sets' performance using a larger sampling of the label space, providing deeper insight into how RAkELd performs. We also provide some insights into the nature of random label space partitioning in RAkELd. Finally, we provide alternatives to random label space partitioning that are very likely to yield better results than RAkELd depending on the selected measure, and we show which methods to use depending on the generalization strategy.
The classifier chains [4] approach to label space partitioning is based on a Bayesian conditioning scheme, in which labels are ordered in a chain, and the n-th classification is performed taking into account the output of the previous n − 1 classifications. These methods suffer from a variety of challenges: the results are not stable when the ordering of labels in the chain changes, and finding the optimal chain is NP-hard. Existing methods that optimize towards the best quality cannot handle more than 15 labels in a dataset (e.g., Bayes-optimal Probabilistic Classifier Chains (PCC) [5]). Furthermore, in every classifier chain approach, one always needs to train at least as many classifiers as there are labels, and if ensemble approaches are applied, many more. In our approach, we use community detection methods to divide the label space into a smaller number of cases to classify as multi-class problems, instead of transforming into a large number of interdependent single-class problems. We also do not strive to find the directly optimal solution to community detection problems on the label co-occurrence graph, to avoid overfitting; instead, we compute approximations of the optimal solutions. This approach provides a large advantage over random approaches. We note that it would be an interesting question whether random orderings in classifier chains are as suboptimal a solution as random partitioning turns out to be in label space partitioning. Yet, this is not the subject of this study and remains open to further research.
Tsoumakas et al.'s [6] Hierarchy Of Multilabel classifiERs (HOMER) is a method of two-step hierarchical multi-label classification in which the label space is divided based on label assignment vectors; observations are then classified first with cluster meta-labels; and finally, for each cluster they were labeled with, they are classified with the labels of that cluster. We do not compare to HOMER directly in this article, due to the different nature of the classification scheme: the subject of this study is to evaluate how data-driven label space partitioning using complex/social network methods, which can be seen as weak classifiers (as all objects are assigned automatically to all subsets), can improve on random-partitioning multi-label classification, whereas HOMER uses a strong classifier to decide which object should be classified in which subspace. Although we do not compare directly to HOMER, due to the difference in the classification scheme and base classifier, our research shows similarities to Tsoumakas et al.'s result that abandoning random label space partitioning for a k-means-based data-driven approach improves classification results. Thus, our results are in accord, yet we provide a much wider study, as we have performed a much larger sampling of the random space than the authors of HOMER did in their method-describing paper.
Madjarov et al. [7] compare the performance of 12 multi-label classifiers on 11 benchmark sets evaluated by 16 measures. To provide statistical support, they use a Friedman multiple comparison procedure with a post hoc Nemenyi test. They include the RAkELo procedure in their study, i.e., the random label subsetting instead of partitioning. They do not evaluate the partitioning strategy RAkELd, which is the main subject of this study. Our main contribution, the study of how RAkELd performs against more informed approaches, therefore fills the unexplored space of Madjarov et al.’s extensive comparison. Note that, due to computational limits, we use Classification and Regression Trees (CART) instead of Support Vector Machines (SVM) as the single-label base classifier, as explained in Section 5.2.
Zhang et al. [8] review the theoretical aspects and reported experimental performance of eight multi-label algorithms and categorize them by the order of label correlations taken into account and the evaluation measure that they try to optimize.

3. Multi-Label Classification

In this section, we aim to provide a more rigorous description of the methods we use in the experimental scenario. We start by formalizing the notion of classification. Classification aims to understand a phenomenon, a relationship (a function $f : X \to Y$) between objects and categories, by generalizing from empirically-collected data D:
  • objects are represented as feature vectors $\bar{x}$ from the input space X;
  • categories, i.e., labels or classes, come from a set L, and it spans the output space Y:
     in the case of single-label single-class classification, $|L| = 1$ and $Y = \{0, 1\}$
     in the case of single-label multi-class classification, $|L| > 1$ and $Y = \{0, 1, \ldots, |L|\}$
     in the case of multi-label classification, $Y = 2^L$
  • the empirical evidence collected: $D = (D_x, D_y) \subseteq X \times Y$;
  • a quality criterion function q.
In practice, the empirical evidence D is split into two groups: the training set for learning the classifier and the test set used for evaluating the quality of the classifier's performance. For the purposes of this section, we denote the training set by $D_{train}$.
The goal of classification is to learn a classifier $h : X \to Y$, such that h generalizes $D_{train}$ in a way that maximizes q.
We focus on problem-transformation approaches that perform multi-label classification by transforming it into single-label classification and converting the results back to the multi-label case. In this paper, we use CART decision trees as the single-label base classifier. CART decision trees are a single-label classification method capable of both single- and multi-class classification. A decision tree constructs a binary tree in which every node performs a split based on the value of a chosen feature $X_i$ from the feature space X. For every feature, a threshold is found that minimizes an impurity function calculated for that threshold on the data available in the current node's subtree. For the selected pair of a feature $X_i$ and a threshold θ, a new split is performed in the current node. Objects with values of the feature lower than θ are evaluated in the left subtree of the node, while the rest are evaluated in the right. The process is repeated recursively for every new node in the binary tree until the depth limit selected for the method is reached or there is just one observation left to evaluate. In all of our scenarios, we use CART as the single-label single- and multi-class base classifier.
Binary relevance learns $|L|$ single-label single-class base classifiers $b_j : X \to \{0, 1\}$, one for each $L_j \in L$, and outputs the multi-label classification using the classifier $h(\bar{x}) = \{ L_j \in L : b_j(\bar{x}) = 1 \}$.
Label powerset constructs a bijection $lp : 2^L \to C$ between each subset of labels $L_i$ and a class $C_i$. Label powerset then learns a single-label multi-class base classifier $b : X \to C$ and transforms its output into a multi-label classification result $lp^{-1}(b(\bar{x}))$.
RAkELd performs a random partition of the label set L into k subsets $L_1, L_2, \ldots, L_k$. For each $L_j$, a label powerset classifier $b_j : X \to 2^{L_j}$ is learned. For a given input vector $\bar{x}$, the RAkELd classifier h performs multi-label classification with each $b_j$ classifier and then sums the results, which can be formally described as $h(\bar{x}) = \bigcup_{j=1}^{k} b_j(\bar{x})$. Following the RAkELd scenario from [3], all partitions of the set L into k subsets are equally probable.
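As a concrete illustration of these transformations, the sketch below shows how binary relevance and label powerset could be implemented around a CART base classifier. It is a simplified sketch for illustration only; the experiments themselves use the scikit-multilearn implementations described in Section 5.3, and the function names here are hypothetical.

# Simplified sketch of the binary relevance and label powerset transformations
# around a CART base classifier (illustration only).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_binary_relevance(X, Y):
    # Y is a binary indicator matrix of shape (n_samples, n_labels).
    classifiers = []
    for j in range(Y.shape[1]):
        clf = DecisionTreeClassifier(criterion="gini")
        clf.fit(X, Y[:, j])  # one single-class problem per label
        classifiers.append(clf)
    return classifiers

def predict_binary_relevance(classifiers, X):
    # Column j is 1 iff the j-th per-label classifier assigns label j.
    return np.column_stack([clf.predict(X) for clf in classifiers])

def train_label_powerset(X, Y):
    # Each distinct label combination seen in training becomes one class.
    unique_rows = np.unique(Y, axis=0)
    combo_to_class = {tuple(row): idx for idx, row in enumerate(unique_rows)}
    y_classes = np.array([combo_to_class[tuple(row)] for row in Y])
    clf = DecisionTreeClassifier(criterion="gini").fit(X, y_classes)
    class_to_combo = {idx: np.array(combo) for combo, idx in combo_to_class.items()}
    return clf, class_to_combo

def predict_label_powerset(model, X):
    clf, class_to_combo = model
    return np.vstack([class_to_combo[c] for c in clf.predict(X)])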

4. The Data-Driven Approach

Having described the baseline random scenario of RAkELd, we now turn to explaining how complex/social network community detection methods fit into a data-driven perspective for label space division. In this scenario, we are transforming the problem exactly like RAkELd, but instead of performing random space partitioning, we construct a label co-occurrence graph from the training data and perform community detection on this graph to obtain a label space division.

4.1. Label Co-Occurrence Graph

We construct the label co-occurrence graph as follows. We start with an undirected co-occurrence graph G with the label set L as the vertex set and the edge set constructed from all pairs of labels that were assigned together at least once to an input object $\bar{x}$ in the training set (here, $\lambda_i, \lambda_j, \ldots$ denote labels, i.e., elements of the set L):
$E = \{ \{\lambda_i, \lambda_j\} : \exists (\bar{x}, \Lambda) \in D_{train}\;\; \lambda_i \in \Lambda \wedge \lambda_j \in \Lambda \}$
One can also extend this unweighted graph G to a weighted graph by defining a weight function $w : L \times L \to \mathbb{N}$ that counts the number of input objects $\bar{x}$ that have both labels assigned:
$w(\lambda_i, \lambda_j) = | \{ \bar{x} : (\bar{x}, \Lambda) \in D_{train}\;\; \lambda_i \in \Lambda \wedge \lambda_j \in \Lambda \} |$
Using such a graph G, weighted or unweighted, we find a label space division by using one of the following community detection methods to partition the graph’s vertex set, which is equal to the label set.
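As an illustration, a minimal sketch of this construction with python-igraph (the library used in our environment, see Section 5.3) could look as follows; the function name and the indicator-matrix input format are assumptions made for the example.

# Minimal sketch of the weighted label co-occurrence graph of Section 4.1,
# assuming Y is a binary indicator matrix (rows: training objects, columns: labels).
from collections import Counter
from itertools import combinations
import igraph as ig

def build_cooccurrence_graph(Y, label_names):
    weights = Counter()
    for row in Y:
        present = [j for j, v in enumerate(row) if v]  # labels assigned to this object
        for i, j in combinations(present, 2):
            weights[(i, j)] += 1  # count how many objects carry both labels
    g = ig.Graph()
    g.add_vertices(len(label_names))
    g.vs["name"] = list(label_names)
    g.add_edges(list(weights.keys()))
    g.es["weight"] = list(weights.values())  # drop this attribute for the unweighted variant
    return g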

4.2. Dividing the Label Space

Community detection methods are based on different principles, as different fields have defined communities differently. We employ a variety of methods.
Modularity-based approaches, such as the fast greedy [9] and the spectral leading eigenvector algorithms, are based on finding a partition of the label set that maximizes the modularity measure of [10]. Behind this measure lies the assumption that true community structure in a network corresponds to a statistically-surprising arrangement of edges [10], i.e., that the community structure of a real phenomenon should exhibit a structure different from the average case of a random graph generated under a given null model. A well-established null model is the configuration model, which joins vertices together at random, but maintains the degree distribution of vertices in the graph.
For a given partition of the label set, the modularity measure is the difference between the number of edges of the empirically-observed graph that have both ends inside a given community, i.e., $e(C) = |\{ (u, v) \in E : u \in C \wedge v \in C \}|$, and the number of edges starting in this community that would end in a different one in the random case: $r(C) = \sum_{v \in C} deg(v) / |E|$. More formally, $Q(\mathcal{C}) = \sum_{c \in \mathcal{C}} \left( e(c) - r(c) \right)$. In the case of weights, instead of counting the number of edges, the total weight of edges is used, and instead of taking vertex degrees in r, the vertex strengths are used; a precise description of weighted modularity can be found in Newman's paper [11].
Finding $\bar{C} = \operatorname{argmax}_{C} Q(C)$ is NP-hard, as shown by Brandes et al. [12]. We thus employ three different approximation-based techniques instead: a greedy, a multi-level hierarchical and a spectral recursively-dividing algorithm.
The fast greedy approach is based on greedy aggregation of communities, starting with singletons and merging the communities iteratively. In each iteration, the algorithm performs the merge of two communities that achieves the highest contribution to modularity. The algorithm stops when there is no possible merge that would increase the value of the current partition's modularity. Its complexity is $O(N \log^2 N)$.
The leading eigenvector approximation method depends on calculating a modularity matrix for a split of the graph into two communities. Such a matrix allows one to rewrite the two-community definition of modularity in a matrix form that can then be maximized using the largest positive eigenvalue and the signs of the corresponding elements in the eigenvector of the modularity matrix: vertices with negative entries are assigned to one community, those with positive entries to the other. The algorithm starts with all labels in one community and performs consecutive splits recursively until all elements of the eigenvector have the same sign or the community is a singleton. The method is based on the simplest variant of spectral modularity approximation as proposed by Newman [13]. Its complexity is $O(M + N^2)$.
The infomap algorithm concentrates on finding the community structure of the network with respect to flow and exploits the inference-compression duality to do so [14]. It relies on finding a partition of the vertex set that minimizes the map equation. The map equation divides flows through the graph into intra-community and inter-community ones and takes into consideration an entropy-based, frequency-weighted average length of the codewords used to label nodes within and between communities. Its complexity is $O(M)$.
The label propagation algorithm [15] assigns a unique tag to every vertex in the graph. Next, it iteratively updates the tag of every vertex with the tag assigned to the majority of the vertex's neighbors. The updating order is randomly selected at each iteration. The algorithm stops when all vertices have tags that are in accord with the dominant tag in their neighborhood. Its complexity is $O(N + M)$.
The walktrap algorithm is based on the intuition that random walks on a graph tend to get “trapped” in densely-connected parts corresponding to communities [16]. It starts with a set of singleton communities and agglomerates the obtained communities in a greedy iterative approach based on how close two vertices are in terms of random-walk distance. More precisely, each step merges the two communities that maximize the decrease of the mean (averaged over vertices) of squared distances between a vertex and all of the vertices in the vertex's community. The random-walk distance between two nodes is measured as the $L_2$ distance between the random-walk probability distributions starting in each of the nodes; the random walks are of the same maximum length, provided as a parameter to the method. Its expected complexity is $O(N^2 \log N)$.
In complexity notation, N is the number of nodes and M the number of edges.
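The sketch below illustrates how these five algorithms can be invoked on the label co-occurrence graph via python-igraph; the helper name and the walktrap step count are assumptions made for the example, not the exact settings of the study.

# Sketch: obtaining a label space partition from the co-occurrence graph with the
# five community detection algorithms via python-igraph. The walktrap "steps"
# value is igraph's default and only illustrative here.
def detect_communities(g, method, use_weights=True):
    w = g.es["weight"] if use_weights and "weight" in g.es.attributes() else None
    if method == "fastgreedy":
        clustering = g.community_fastgreedy(weights=w).as_clustering()
    elif method == "leading_eigenvector":
        clustering = g.community_leading_eigenvector(weights=w)
    elif method == "infomap":
        clustering = g.community_infomap(edge_weights=w)
    elif method == "label_propagation":
        clustering = g.community_label_propagation(weights=w)
    elif method == "walktrap":
        clustering = g.community_walktrap(weights=w, steps=4).as_clustering()
    else:
        raise ValueError("unknown method: %s" % method)
    # Each community is a list of vertex (label) indices, i.e., one label subspace.
    return [list(community) for community in clustering]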

4.3. Classification Scheme

In our data-driven scheme, the training phase is performed as follows:
  • the label co-occurrence graph is constructed based on the training dataset;
  • the selected community detection algorithm is executed on the label co-occurrence graph;
  • for every community $L_i$, a new training dataset $D_i$ is created by taking the original input space with only the label columns that are present in $L_i$;
  • for every community, a classifier $h_i$ is learned on the training set $D_i$.
The classification phase is performed by classifying a new object with every sub-classifier obtained in the training phase and taking the union of assigned labels: $h(\bar{x}) = \bigcup_{i} h_i(\bar{x})$.
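A minimal sketch of this training and classification scheme, reusing the hypothetical helpers sketched in the previous sections (build_cooccurrence_graph, detect_communities, train_label_powerset, predict_label_powerset), might look as follows.

# Sketch of the data-driven scheme of Section 4.3: one label powerset
# sub-classifier per detected community, predictions joined by union.
import numpy as np

def train_data_driven(X, Y, label_names, method="fastgreedy"):
    g = build_cooccurrence_graph(Y, label_names)
    communities = detect_communities(g, method)
    models = []
    for community in communities:
        # New training set: original inputs, only the label columns of this community.
        models.append((community, train_label_powerset(X, Y[:, community])))
    return models

def predict_data_driven(models, X, n_labels):
    Y_pred = np.zeros((X.shape[0], n_labels), dtype=int)
    for community, model in models:
        # Union of the sub-classifiers' assignments over all communities.
        Y_pred[:, community] = predict_label_powerset(model, X)
    return Y_pred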

5. Experiments and Materials

To prepare the ground for results, in this section, we describe which datasets we have selected for evaluation and why. Then, we present and justify model selection decisions for our experimental scheme. Next, we describe the configuration of our experimental environment. Finally, we describe the measures used for evaluation.

5.1. Datasets

Following Madjarov’s study [7], we have selected 12 different well-cited multi-label classification benchmark datasets. The basic statistics of datasets used in experiments, such as the number of data instances, the number of attributes, the number of labels, the labels’ cardinality, density and the distinct number of label combinations, are available online [17]. We selected the datasets to obtain a balanced representation of problems in terms of the number of objects, the number of labels and domains. At the moment of publishing, this is one of the largest studies of RAkELd, both in terms of datasets examined and in terms of random label partitioning sample count. This study also exhibits a higher ratio of the number of datasets to the number of methods than other studies.
The text domain is represented by five datasets: bibtex, delicious, enron, medical and tmc2007-500. Bibtex [18] comes from the ECML/PKDD 2008 Discovery Challenge and is based on data from the Bibsonomy.org publication sharing and bookmarking website. It exemplifies the problem of assigning tags to publications represented as an input space of bibliographic metadata, such as authors, paper/book title, journal volume, etc. Delicious [6] is another user-tagged dataset. It spans 983 labels obtained by scraping the 140 most popular tags from the del.icio.us bookmarking website, retrieving the 1000 most recent bookmarks, selecting the 200 most popular, deduplicating them and filtering out tags that were used to tag fewer than 10 websites. For those labels, the websites tagged with them were scraped, and from their contents, the top 500 words ranked by the χ² method were selected as input features. Tmc2007-500 [6] contains an input space consisting of the similarly selected top 500 words appearing in flight safety reports. The labels represent the problems being described in these reports. Enron [19] contains emails from senior Enron Corporation employees categorized into topics by the UC Berkeley Enron E-mail Analysis Project [20], with the input space being a bag-of-words representation of the e-mails. The Medical [4] dataset is from the Medical Natural Language Processing Challenge [21]. The input space is a bag-of-words representation of patient symptom history, and labels represent diseases following the International Classification of Diseases.
The multimedia domain consists of five datasets: scene, corel5k, mediamill, emotions and birds. The image dataset scene [22] semantically indexes still scenes annotated with any of the following categories: beach, fall-foliage, field, mountain, sunset and urban. The birds dataset [23] represents the problem of matching features extracted from bird voice recordings with the subset of 19 bird species present in each recording; each label represents one species. This dataset was introduced [24] during the 9th Annual MLSP competition. A larger image set, corel5k [25], contains normalized-cut segmented images clustered into 499 bins and labeled with subsets of 374 labels. The mediamill dataset of annotated video segments [26] was introduced during the 2005 NIST TRECVID challenge [27]. It is annotated with 101 labels referring to elements observable in the video. The emotions dataset [28] represents the problem of the automated detection of emotion in music, assigning to each song a subset of music emotions based on the Tellegen–Watson–Clark model.
The biological domain is represented with two datasets: yeast and genbase. The yeast [29] dataset concerns the problem of assigning functional classes to genes of the Saccharomyces cerevisiae genome. The genbase [30] dataset represents the problem of assigning classes to proteins based on detected motifs that serve as input features.

5.2. Experiment Design

Using 12 benchmark datasets evaluated with five performance measures, we compare eight approaches to label space partitioning for multi-label classification:
  • five methods that divide the label space based on structure inferred from the training data via label co-occurrence graphs, in both unweighted and weighted versions of the graphs;
  • two methods that take an a priori assumption about the nature of the label space: binary relevance and label powerset;
  • one random label space partitioning approach that draws partitions with equal probability: RAkELd.
In the random baseline (RAkELd), we perform 250 samplings of random label space partitions into k-label subsets for each evaluated value of k. If a dataset had more than 10 labels, we took values of k ranging from 10% to 90% of the label count, with a step of 10%, rounding to the closest integer when necessary. In the case of the two datasets with a smaller number of labels, i.e., scene and emotions, we evaluated RAkELd for all possible label space partitions, due to their low number. The number of label space division samples per dataset can be found in Appendix A (Table A1).
As no one knows the true distribution of classification quality over label space partitions, we have decided to use a large number of samples, 250 per each of the groups, 2500 altogether, to get as close to a representative sample of the population as was possible with our infrastructure limitations.
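For illustration, drawing a single random partition of the label set into subsets of k labels (as in the RAkELd baseline) can be sketched as follows; the function name is hypothetical.

# Sketch: drawing one random RAkELd-style partition of the label set into
# disjoint subsets of (at most) k labels.
import random

def random_label_partition(n_labels, k, rng=random):
    labels = list(range(n_labels))
    rng.shuffle(labels)
    # Consecutive chunks of the shuffled label list form the disjoint subsets.
    return [labels[i:i + k] for i in range(0, n_labels, k)]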
As the base classifier, we use CART decision trees. While we recognize that the majority of studies prefer to use SVMs, we note that it is intractable to evaluate nearly 32,500 samples of the random label space partitions using SVMs. We have thus decided to use a classifier that presents a reasonable trade-off between quality and computational speed.
We perform statistical evaluation of our approaches by comparing them to average performance of the random baseline of RAkELd. We average RAkELd results per dataset, which is justified by the fact that this is the expected result one would get without performing extensive parameter optimization. Following Derrac et al.’s [31] de facto standard modus operandi, we use the Friedman test with Iman–Davenport modifications to detect differences between methods, and we check whether a given method is statistically better than the average random baseline using Rom’s post-hoc pairwise test. We use these tests’ results to confirm or reject RH1.
We do not perform statistical evaluation per group (i.e., isolating each value of k from 10% to 90%) due to the lack of non-parametric repeated measure tests, as noted by Demsar in the classic paper [32].
Instead, to account for variation, we consider the probability that a given data-driven approach to label space division is better than random partitioning. These probabilities were calculated per dataset, as the fraction of random outputs that yielded worse results than a given method. Thus, for example, if infomap has a 96.5% probability of having a higher subset accuracy (SA) than the random approach on Corel5k, this means that on this dataset, infomap's SA score was better than the scores achieved by 96.5% of all RAkELd experiments. We check the median, the mean and the minimal (i.e., worst-case) likelihoods. We use these results to confirm or reject RH2, RH3 and RH4.
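This likelihood computation is simple enough to sketch directly; the function name is hypothetical, and the comparison direction would be reversed for Hamming loss, where lower is better.

# Sketch: per-dataset likelihood that a method beats RAkELd, i.e., the fraction
# of random samplings with a strictly worse score (for measures where higher is better).
def likelihood_better_than_random(method_score, rakeld_scores):
    worse = sum(1 for score in rakeld_scores if score < method_score)
    return worse / len(rakeld_scores)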

5.3. Environment

We used scikit-multilearn (Version 0.0.1) [33], a scikit-learn API compatible library for multi-label classification in python that provides its own implementation of several classifiers and uses scikit-learn [34] multi-class classification methods. All of the datasets come from the MULAN [35] dataset library [17] and follow MULAN’s division into the train and test subsets.
We use CART decision trees from the scikit-learn package (Version 0.15), with the Gini index as the impurity function. We employ community detection methods from the Python version of the igraph library [36] for both weighted and unweighted graphs. The performance measures’ implementation comes from the scikit-learn metrics package.

5.4. Evaluation Methods

Following Madjarov et al.’s [7] taxonomy of multi-label classification evaluation measures, we use three example-based measures: Hamming loss, subset accuracy and Jaccard similarity, as well as a label-based measure, F1, as evaluated by two averaging schemes: micro and macro. The following definitions are used:
  • X is the set of objects used in the testing scenario for evaluation;
  • L is the set of labels that spans the output space Y;
  • $\bar{x}$ denotes an example object undergoing classification;
  • $h(\bar{x})$ denotes the label set assigned to the object $\bar{x}$ by the evaluated classifier h;
  • y denotes the set of true labels for the observation $\bar{x}$;
  • $tp_j$, $fp_j$, $fn_j$, $tn_j$ are, respectively, the true positives, false positives, false negatives and true negatives of label $L_j$, counted per label over the output of classifier h on the set of testing objects $\bar{x} \in X$, i.e., $h(X)$;
  • the operator $[[p]]$ converts a logical value to a number, i.e., it yields 1 if p is true and 0 if p is false.

5.4.1. Example-Based Evaluation Methods

Hamming loss is a label-wise decomposable function counting the fraction of labels that were misclassified. ⊗ is the logical exclusive or.
$\mathrm{HammingLoss}(h) = \frac{1}{|X|} \sum_{\bar{x} \in X} \frac{1}{|L|} \sum_{L_j \in L} [[\, (L_j \in h(\bar{x})) \otimes (L_j \in y) \,]]$
Subset accuracy (the complement of the subset 0/1 loss) is an instance-wise measure that counts the fraction of input observations that have been classified exactly the same as in the ground truth.
$\mathrm{SubsetAccuracy}(h) = \frac{1}{|X|} \sum_{\bar{x} \in X} [[\, h(\bar{x}) = y \,]]$
Jaccard similarity measures the similarity between the prediction and the ground truth as the ratio of the cardinality of their intersection to the cardinality of their union. In other words, it is the fraction of all labels appearing in either the prediction or the ground truth that were assigned to the observation in both of them.
$\mathrm{Jaccard}(h) = \frac{1}{|X|} \sum_{\bar{x} \in X} \frac{|h(\bar{x}) \cap y|}{|h(\bar{x}) \cup y|}$
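As a practical note, these three example-based measures can be computed on binary indicator matrices with the scikit-learn metrics package mentioned in Section 5.3; the sketch below assumes recent scikit-learn function names (older releases, such as the 0.15 version used in our environment, exposed the Jaccard measure as jaccard_similarity_score).

# Sketch: the three example-based measures on binary indicator matrices.
from sklearn.metrics import accuracy_score, hamming_loss, jaccard_score

def example_based_measures(Y_true, Y_pred):
    return {
        "hamming_loss": hamming_loss(Y_true, Y_pred),
        "subset_accuracy": accuracy_score(Y_true, Y_pred),  # exact-match ratio
        "jaccard": jaccard_score(Y_true, Y_pred, average="samples"),
    }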

5.4.2. Label-Based Evaluation Methods

The F1 measure is the harmonic mean of precision and recall, where neither of the two is preferred over the other. Precision is the measure of how immune the method is to Type I error, i.e., falsely classifying negative cases as positives (false positives, FP). It is the fraction of correctly positively-classified cases (i.e., true positives) among all positively-classified cases. It can be interpreted as the probability that an object labeled as having a given label actually has it. Recall is the measure of how immune the method is to Type II error, i.e., falsely classifying positive cases as negatives (false negatives, FN). It is the fraction of correctly positively-classified cases (i.e., true positives) among all cases that truly have the label. It can be interpreted as the probability that an object with a given label will be labeled as such.
These measures can be averaged from two perspectives that are not equivalent in practice due to a natural non-uniformity of the distribution of labels among input objects in any testing set. Two averaging techniques are well-established, as noted by [37].
Micro-averaging gives equal weight to every input object and performs a global aggregation of true/false positives/negatives, averaging over all objects first. Thus:
$\mathrm{precision}_{micro}(h) = \frac{\sum_{j=1}^{|L|} tp_j}{\sum_{j=1}^{|L|} (tp_j + fp_j)}$
$\mathrm{recall}_{micro}(h) = \frac{\sum_{j=1}^{|L|} tp_j}{\sum_{j=1}^{|L|} (tp_j + fn_j)}$
$F1_{micro}(h) = \frac{2 \cdot \mathrm{precision}_{micro}(h) \cdot \mathrm{recall}_{micro}(h)}{\mathrm{precision}_{micro}(h) + \mathrm{recall}_{micro}(h)}$
In macro-averaging, the measure is first calculated per label, then averaged over the number of labels. Macro averaging thus gives equal weight to each label, regardless of how often the label appears.
$\mathrm{precision}_{macro}(h, j) = \frac{tp_j}{tp_j + fp_j}$
$\mathrm{recall}_{macro}(h, j) = \frac{tp_j}{tp_j + fn_j}$
$F1_{macro}(h, j) = \frac{2 \cdot \mathrm{precision}_{macro}(h, j) \cdot \mathrm{recall}_{macro}(h, j)}{\mathrm{precision}_{macro}(h, j) + \mathrm{recall}_{macro}(h, j)}$
$F1_{macro}(h) = \frac{1}{|L|} \sum_{j=1}^{|L|} F1_{macro}(h, j)$
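Both averaging schemes are available directly in scikit-learn; a minimal sketch equivalent to the aggregation formulas above, assuming binary indicator matrices, is:

# Sketch: micro- and macro-averaged F1 via scikit-learn.
from sklearn.metrics import f1_score

def label_based_measures(Y_true, Y_pred):
    return {
        "f1_micro": f1_score(Y_true, Y_pred, average="micro"),
        "f1_macro": f1_score(Y_true, Y_pred, average="macro"),
    }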

6. Results and Discussion

We describe the performance per measure first and then look at how methods behave across measures. We evaluate each of the research hypotheses, RH1 to RH4, for each of the measures. We then look at how these methods performed across datasets. We compare the median and the mean of the achieved probabilities to assess the average advantage over randomness; the higher the better. We compare the median and the means, as in some cases, the methods admit a single worst-performing outlier, while in general providing a great advantage over random approaches. We also check how each method performs in the worst case, i.e., what is the minimum probability of it being better than randomness in label space division?

6.1. Micro-Averaged F1 Score

In the ranking of how well methods performed in micro-averaged F1, the fast greedy and walktrap approaches used on a weighted label co-occurrence graph performed best, followed by BR, leading eigenvector and the unweighted walktrap/modularity-maximization methods. Furthermore, weighted label propagation and infomap were statistically significantly better than the average random performance. We confirm RH1, with evidence presented in Figure 1 and Figure 2 and Table 1.
In terms of the micro-averaged F1, the weighted fast greedy approach has both the highest mean (86%) and median (92%) likelihood of scoring better than the random baseline. Binary relevance and the weighted variants of walktrap and leading eigenvector also performed well, with a mean likelihood of 83% to 85% and a lower, but still satisfactory, median of 85% to 88%. We confirm RH2.
Modularity-based approaches also turn out to be the most resilient. The weighted variant of walktrap was the most resilient, with a 69.5% likelihood in the worst case, followed closely by the weighted fast greedy approach with 67% and unweighted walktrap with 66.7%. We note that, apart from a single outlying dataset, all methods (apart from label powerset) had a better than 50% likelihood of performing better than RAkELd. Binary relevance's worst-case likelihood was exactly 0.5. We thus confirm both RH3 and RH4.
The fast greedy and walktrap weighted approaches yielded the best advantage over RAkELd, both in the average and worst cases. Binary relevance also provided a strong advantage over random label space division, while achieving just 0.5 in the worst-case scenario. Thus, when it comes to micro-averaged F1 scores, RAkELd random approaches to label space partitioning should be dropped in favor of the weighted fast greedy and walktrap methods or binary relevance. All of these methods are also statistically significantly better than the average random baseline. We therefore confirm RH1, RH2, RH3 and RH4 for micro-averaged F1 scores.
We also note that, in the worst case, RAkELd was better than label powerset on micro-averaged F1 in 57% of the cases, while Tsoumakas et al.'s original paper [3] argues for micro-F1 improvements of RAkELd over LP, using SVMs. Our observation is not contrary: LP failed to produce significantly different results than the average random baseline in our setting. Instead, our results are complementary, as we use a different base classifier, but the intuition can be used to comment on Tsoumakas et al.'s results. While in some cases RAkELd provides an improvement over LP in the F1 score, on average, the probability of drawing a random partition that performs better than LP is only 30%. We still note that it is much better to use one of the recommended community detection-based approaches instead of a method based on a priori assumptions.

6.2. Macro-Averaged F1 Score

All methods, apart from unweighted label propagation and infomap, performed significantly better than the average random baseline. The highest ranks were achieved by weighted fast greedy, binary relevance and unweighted fast greedy. We confirm RH1, with evidence presented in Figure 3 and Figure 4 and Table 2.
The fast greedy and walktrap approaches used on the weighted label co-occurrence graph were most likely to perform better than RAkELd samples, followed by BR, leading eigenvector and the unweighted walktrap/modularity-maximization methods. Furthermore, weighted label propagation and infomap were statistically significantly better than the average random performance.
Binary relevance and weighted fast greedy were the two approaches that surpassed the 90% likelihood of being better than random label space divisions in both the median (98.5% and 97%, respectively) and mean (92% and 90%) cases. Weighted walktrap and leading eigenvector followed closely with both the median and the mean likelihood of 87% to 89%. We thus reject RH2, as binary relevance achieved greater likelihoods than the best data-driven approach.
When it comes to resilience, all modularity-based methods (apart from unweighted fast greedy) achieve the same high worst-case probability of 70% of performing better than RAkELd. Binary relevance underperformed in the worst case, being better in exactly 50% of the cases. All methods on all datasets, apart from the outlier case of infomap's and label propagation's performance on the scene dataset, are likely to yield a better macro-averaged F1 score than the random approaches. We confirm RH3 and RH4.
We recommend using binary relevance or weighted fast greedy approaches when generalizing to achieve the best macro-averaged F1 score, as they are both significantly better than average random performance and more likely to perform better than RAkELd samplings, and this likelihood is high even in the worst case. Thus, for macro-averaged F1, we confirm hypotheses RH1, RH3 and RH4. Binary relevance had a slightly better likelihood of beating RAkELd than data-driven approaches, and thus, we reject RH2.

6.3. Subset Accuracy

All methods apart from the weighted leading eigenvector modularity maximization approach were statistically significantly better than the average random baseline.
Label propagation, infomap, label powerset, and the weighted variants of infomap, label propagation and walktrap are the methods that performed statistically significantly better than the average random baseline, ordered by rank. We confirm RH1, with evidence provided in Figure 5 and Figure 6 and Table 3.
Furthermore, unweighted infomap and label propagation are the most likely to yield results of higher subset accuracy than random label space divisions, both regarding the median (96%) and the mean (90% to 91%) likelihood. Label powerset follows with a 95.8% median and an 89% mean. The weighted versions of infomap and label propagation are fourth and fifth, with five to six percentage points less. We confirm RH2.
Concerning the resiliency of the advantage, only the infomap variants proved to be better than RAkELd more than half of the time in the worst case: the unweighted version in 58% of cases, the weighted one in 52%. All other methods were below the 50% threshold in the worst case, with label powerset and both variants of label propagation at a likelihood of 33%. If the one or two worst outliers were discarded, all methods would be more than 50% likely to be better than random label space partitioning. We confirm RH3 and RH4.
We thus recommend using unweighted infomap as the data-driven alternative to RAkELd, as it is both significantly better than the random baseline, very likely to perform better than RAkELd and most resilient among the evaluated methods in the worst case. We confirm RH1, RH2, RH3 and RH4 for subset accuracy.

6.4. Jaccard Score

All methods apart from the weighted leading eigenvector modularity maximization approach were statistically significantly better than the average random baseline. Unweighted label propagation, label powerset and infomap were the highest ranked methods. We confirm RH1, with evidence provided in Figure 7 and Figure 8 and Table 4.
The Jaccard score is similar to subset accuracy in rewarding exact label set matches. In effect, it is not surprising to see that unweighted infomap and label propagation are the most likely to yield a higher Jaccard score than RAkELd, both in terms of median (94.5% and 92.9%, respectively) and mean likelihoods (88.9% and 87.9%, respectively). Of the two, infomap provides the more resilient advantage, with a 65% probability of performing better than random approaches in the worst case. Label propagation is, in the worst case, only 34% to 35% likely to be better than random space partitions.
We recommend using the unweighted infomap approach over RAkELd when Jaccard similarity is of importance and confirm RH1, RH2, RH3 and RH4 for this measure.

6.5. Hamming Loss

Hamming loss is certainly a fascinating case in our experiments. As the measure is evaluated for each label separately, we can expect it to be the most stable over different label space partitions.
The first surprise comes with the Friedman–Iman–Davenport test result, where the test practically fails to find a difference in performance between random approaches and data-driven methods, yielding a p-value of 0.049. While the p-value is lower than α = 0.05, the difference cannot be taken as significant given the characteristics of the test. Lack of significance is confirmed by pairwise tests against the random baseline (all hypotheses of difference are strongly rejected). We reject RH1, with evidence provided in Figure 9 and Table 5.
Weighted fast greedy was the only approach more likely, on average, to yield a lower Hamming loss than RAkELd, in both median and mean (55%) likelihoods. The unweighted version was better than RAkELd in slightly over 50% of the cases on average, with a median likelihood of 46%. Binary relevance and label powerset achieved likelihoods lower by close to 10 percentage points. We thus confirm RH2.
When it comes to worst-case observations, binary relevance, label powerset and both variants of infomap were never better than RAkELd. The methods with the most resilient advantage in likelihood (9%) in the worst case were the weighted versions of fast greedy and walktrap. We confirm RH3 and reject RH4.
We conclude that the fast greedy approach can be recommended over RAkELd, as even given such a large standard deviation (std) of likelihoods, it still yields lower Hamming loss than random label space divisions on more than half of the datasets. Yet, we reject RH1, RH2 and RH4 for Hamming loss. We confirm RH3, as a priori methods do not provide better performance than RAkELd in the worst case.

6.6. Efficiency

The efficiency comparison between the data-driven and random methods for subspace division is of a special type, because the data-driven approach requires a single run, whereas the random one requires a number of repetitions to obtain stable enough results. Thus, the comparison of the efficiency of both approaches must be based on the methods' overall efficiency and not just the computation time of a single run/sampling.
The efficiency of label space partitioning approaches depends on the number of classifiers trained and the complexity of the partitioning procedure. In the case of the flat classification scheme, which we evaluate in this paper, the number of classifiers is equal to the number of subspaces into which the label space is partitioned. As all objects are classified by all sub-classifiers, the classification time depends on a single varying parameter: the label subspace size. RAkELd partitioning is performed in $O(1)$, as it is a procedure of randomly drawing a partition. The number of classifiers in the data-driven approach depends on the number of detected communities. Table 6 provides information about the number of detected communities in our experiments.
That is why we evaluate the mean ratio of the running time of the data-driven approach to the time of a single RAkELd run, averaged per value of k. We compare this ratio to the average number of RAkELd runs required to match the average likelihood of data-driven approaches yielding better results than RAkELd per value of k, as only that number of RAkELd runs brings a high enough probability that enough samples were taken to yield comparable results. We call that point the efficiency threshold. If, for example, a data-driven method was better than 60% of RAkELd samplings for a given k in a given measure and the total number of samplings for that value of k was 250, then the efficiency threshold for that value of k and that measure would be 60% × 250 = 150.
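A minimal sketch of this bookkeeping, with hypothetical function names, is:

# Sketch: efficiency threshold and run-time ratio used in the comparison.
def efficiency_threshold(likelihood_better, n_samplings):
    # e.g., efficiency_threshold(0.60, 250) -> 150.0
    return likelihood_better * n_samplings

def runtime_ratio(data_driven_seconds, single_rakeld_seconds):
    # How many single RAkELd runs one data-driven run is worth, time-wise.
    return data_driven_seconds / single_rakeld_seconds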
The efficiency figures in Appendix B (Figure B1, Figure B2, Figure B3, Figure B4 and Figure B5) are presented per method as plots of the ratios averaged per k and the thresholds per k per measure, with a logarithmic y axis. We had to use a logarithmic y axis because the number of RAkELd runs equivalent to a single data-driven run was so far below the efficiency thresholds that the charts would not otherwise have been readable. Efficiency results do not vary much across methods; thus, we describe them collectively for all methods.
Points below the baseline y = 1 are cases when even a single iteration of RAkELd was slower than a single data-driven run. This happens when the number of classifiers in RAkELd is equal to 10, i.e., with k = 10% of labels. The smaller the value of k becomes, the greater the efficiency of the data-driven methods becomes. As k increases, the number of classifiers trained by RAkELd decreases, and the single random run becomes faster. For most datasets, the data-driven approach running times become equivalent to six to eight RAkELd runs. The worst case of data-driven running time, reaching 15 RAkELd runs, happens for large k on the datasets genbase and medical. For these datasets, with a low level of label co-occurrence, the obtained co-occurrence graph is mostly disconnected, which yields many singletons, and that causes a large increase in the advantage of RAkELd over data-driven methods, as we never allow k = 1 in the RAkELd scenario. What happens is that RAkELd trains few classifiers, while data-driven methods train many single-class classifiers representing singletons in the graph. We plan to improve this in the future by joining singletons into one subspace.
The slowest performance of data-driven methods was equivalent to 15 RAkELd runs. The average efficiency threshold of Hamming loss spanned between 50 and 70 runs of RAkELd, while other measures’ thresholds ranged between 80 and 110 runs. We thus note that RAkELd is far less efficient as a method than data-driven approaches, for every evaluated value of k.
We also note that, before measuring efficiency per k, one needs to perform parameter estimation of the number of runs for RAkELd; in our case, it would take at least ten runs of RAkELd, one for each value of k = 10%, …, 90% of labels. One would usually want to repeat the procedure for each parameter value at least a few times before selecting the value, as the variance in RAkELd performance is large. With data-driven methods, we gain a high likelihood of being better than RAkELd while running only one iteration.
We thus conclude that data-driven approaches are more efficient than RAkELd for every measure evaluated and for every value of k. We therefore accept RH5.

7. Conclusions

We have compared seven approaches as alternatives to random label space partitioning. RAkELd served as the random baseline, for which we have drawn at most 250 distinct label space partitions for at most ten different values of the parameter k of label subset sizes. Out of the seven methods, five inferred the label space partitioning from the training data in the datasets, while the two others were based on an a priori assumption on how to divide the label space. We evaluated these methods on 12 well-established benchmark datasets.
We conclude that in four of the five measures, micro-/macro-averaged F1 score, subset accuracy and Jaccard similarity, all of our proposed methods were more likely to yield better scores than RAkELd, apart from single outlying datasets; a data-driven approach was better than the average random baseline with statistical significance at α = 0.05. The data-driven approach was also better than RAkELd in worst-case scenarios. Thus, hypotheses RH1, RH3 and RH4 have been successfully confirmed for these measures.
When it comes to Research Hypothesis 2 (RH2), we have confirmed that with micro-averaged F1, subset accuracy, Hamming loss and Jaccard similarity, the data-driven approaches have a higher likelihood of outperforming RAkELd than a priori methods do. The only exception to this is the case of macro-averaged F1, where binary relevance was most likely to beat random approaches, followed closely by a data-driven approach: weighted fast greedy.
Hamming loss forms a separate case for discussion, as this measure is largely unrelated to label groups: it is calculated per label only. With this measure, most data-driven methods performed much worse than they did on other measures. Our study failed to observe a statistical difference between data-driven methods and the random baseline; thus, we reject hypothesis RH1. For the best-performing data-driven methods, the worst-case likelihood of yielding a lower Hamming loss than RAkELd was close to 10%, which is far from a resilient score; thus, we also reject RH4. We confirm RH2 and RH3, as there existed a data-driven approach that performed better than the a priori approaches.
All in all, the statistical significance of a data-driven approach performing better than the averaged random baseline (RH1) has been confirmed for all measures except Hamming loss. We have confirmed that the data-driven approach was more likely than binary relevance/label powerset to perform better than RAkELd (RH2) in all measures apart from the macro-averaged F1 score, where it followed the best-performing binary relevance closely. Data-driven approaches were always more likely to outperform RAkELd in the worst case than binary relevance/label powerset, confirming RH3 for all measures. Finally, for all measures apart from Hamming loss, data-driven approaches were more likely to outperform RAkELd in the worst case than otherwise. RH4 is thus confirmed for all measures except Hamming loss.
In the case of measures that are label-decomposable, the fast greedy community detection approach computed on a weighted label co-occurrence graph yielded the best results among the data-driven approaches and is the recommended choice for the F1 measures and Hamming loss. When the measure is instance-decomposable and not label-decomposable, such as subset accuracy or Jaccard similarity, the infomap algorithm should be used on an unweighted label co-occurrence graph.
Data-driven methods also prove to be more time-efficient than RAkELd. For small values of k, i.e., when RAkELd trains a large number of models, data-driven methods are faster than a single iteration of RAkELd on most datasets. As k grows and the number of models decreases, the speed advantage gained by RAkELd is not matched by a comparable likelihood of producing a result better than what a data-driven approach provides in a single run.
We conclude that community detection methods offer a viable alternative to both random and a priori assumption-based label space partitioning approaches. We summarize our findings in Table 7, answering the title question of how the data-driven approach to label space partitioning performs better than random choice.

Acknowledgments

The work was partially supported by the fellowship co-financed by the European Union within the European Social Fund; The European Commission under the 7th Framework Programme, Coordination and Support Action, Grant Agreement Number 316097 (ENGINE); the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Grant Agreement No. 691152 (RENOIR); The National Science Centre research project 2014–2017 Decision No. DEC-2013/09/B/ST6/02317. This work was partially supported by the Faculty of Computer Science and Management, Wrocław University of Science and Technology statutory funds. Kristian Kersting acknowledges the support by the DFG Collaborative Research Center SFB 876 Project A6 “Resource Efficient Analysis of Graphs”. The authors wish to thank Łukasz Augustyniak for helping with proofreading the article.

Author Contributions

Piotr Szymański conceived the concept, implemented and ran the experimental code and wrote most of the paper. Tomasz Kajdanowicz helped design the experiments, supported the interpretation of the results, supervised the work, wrote parts of the paper and corrected and proofread it. Kristian Kersting helped reorient the study towards the data-driven perspective and helped reorganize and proofread the paper. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Result Tables

Table A1. Number of random samplings from the universum of RAkELd label space partitions for cases different from 250 samples.
Set Name | k | Number of Samplings
birds | 17 | 163
emotions | 2 | 15
emotions | 3 | 10
emotions | 4 | 15
emotions | 5 | 6
scene | 2 | 15
scene | 3 | 10
scene | 4 | 15
scene | 5 | 6
tmc2007-500 | 21 | 22
yeast | 12 | 91
Table A2. Likelihood of performing better than RAkELd in micro-averaged F1 score aggregated over datasets. Bold numbers signify the best likelihoods of a method performing better than RAkELd in the worst-case (Minimum) and average cases (Mean and Median).
Method | Minimum | Median | Mean | Std
BR | 0.500000 | 0.885556 | 0.840028 | 0.152530
LP | 0.438280 | 0.640237 | 0.704891 | 0.171726
fast greedy | 0.565217 | 0.806757 | 0.820198 | 0.127463
fast greedy-weighted | 0.673913 | 0.922643 | 0.863821 | 0.106477
infomap | 0.478261 | 0.713565 | 0.720074 | 0.153453
infomap-weighted | 0.433657 | 0.792426 | 0.748072 | 0.170349
label_propagation | 0.364309 | 0.734100 | 0.717083 | 0.175682
label_propagation-weighted | 0.478964 | 0.815100 | 0.750085 | 0.174347
leading_eigenvector | 0.630606 | 0.783227 | 0.803901 | 0.116216
leading_eigenvector-weighted | 0.630435 | 0.846506 | 0.834201 | 0.107420
walktrap | 0.667500 | 0.742232 | 0.781920 | 0.102776
walktrap-weighted | 0.695652 | 0.856861 | 0.852037 | 0.091232
Table A3. Likelihood of performing better than RAkELd in macro-averaged F1 score aggregated over datasets. Bold numbers signify the best likelihoods of a method performing better than RAkELd in the worst-case (Minimum) and average cases (Mean and Median).
Method | Minimum | Median | Mean | Std
BR | 0.500000 | 0.985683 | 0.919245 | 0.160273
LP | 0.543478 | 0.829283 | 0.779482 | 0.163611
fast greedy | 0.543478 | 0.883312 | 0.866478 | 0.139572
fast greedy-weighted | 0.695652 | 0.969111 | 0.900483 | 0.116022
infomap | 0.478261 | 0.820006 | 0.793086 | 0.147108
infomap-weighted | 0.500000 | 0.889556 | 0.818476 | 0.164492
label_propagation | 0.449376 | 0.801890 | 0.754205 | 0.182183
label_propagation-weighted | 0.521739 | 0.855195 | 0.821663 | 0.148561
leading_eigenvector | 0.695652 | 0.863565 | 0.851696 | 0.108860
leading_eigenvector-weighted | 0.695652 | 0.889778 | 0.872119 | 0.118911
walktrap | 0.695652 | 0.846889 | 0.844807 | 0.105977
walktrap-weighted | 0.695652 | 0.894335 | 0.885253 | 0.095436
Table A4. Likelihood of performing better than RAkELd in subset accuracy aggregated over datasets. Bold numbers signify the best likelihoods of a method performing better than RAkELd in the worst-case (Minimum) and average cases (Mean and Median).
Method | Minimum | Median | Mean | Std
BR | 0.000000 | 0.498000 | 0.558057 | 0.323240
LP | 0.336570 | 0.958889 | 0.885476 | 0.190809
fast greedy | 0.213130 | 0.827043 | 0.791811 | 0.214756
fast greedy-weighted | 0.061951 | 0.843413 | 0.747624 | 0.288200
infomap | 0.580500 | 0.968667 | 0.910039 | 0.143954
infomap-weighted | 0.525659 | 0.899430 | 0.859231 | 0.152637
label_propagation | 0.380028 | 0.964444 | 0.900555 | 0.174854
label_propagation-weighted | 0.336570 | 0.913652 | 0.861642 | 0.189690
leading_eigenvector | 0.000000 | 0.826087 | 0.734935 | 0.340537
leading_eigenvector-weighted | 0.000000 | 0.843778 | 0.712555 | 0.345277
walktrap | 0.078132 | 0.834338 | 0.739359 | 0.288072
walktrap-weighted | 0.429958 | 0.812667 | 0.810542 | 0.175723
Table A5. Likelihood of performing better than RAkELd in Hamming loss aggregated over datasets. Bold numbers signify the best likelihoods of a method performing better than RAkELd in the worst-case (Minimum) and average cases (Mean and Median).
Method | Minimum | Median | Mean | Std
BR | 0.000000 | 0.369565 | 0.408252 | 0.375954
LP | 0.000000 | 0.434653 | 0.380021 | 0.338184
fast greedy | 0.022222 | 0.469130 | 0.507034 | 0.266736
fast greedy-weighted | 0.093333 | 0.554208 | 0.558218 | 0.280209
infomap | 0.000000 | 0.306667 | 0.396299 | 0.352963
infomap-weighted | 0.000000 | 0.332902 | 0.397576 | 0.345339
label_propagation | 0.000000 | 0.457130 | 0.415966 | 0.361819
label_propagation-weighted | 0.000000 | 0.489511 | 0.441813 | 0.363469
leading_eigenvector | 0.044383 | 0.465511 | 0.456441 | 0.330251
leading_eigenvector-weighted | 0.000444 | 0.451556 | 0.457361 | 0.328774
walktrap | 0.070667 | 0.438943 | 0.444424 | 0.312845
walktrap-weighted | 0.089333 | 0.379150 | 0.484974 | 0.331186
Table A6. Likelihood of performing better than RAkELd in Jaccard similarity aggregated over datasets. Bold numbers signify the best likelihoods of a method performing better than RAkELd in the worst-case (Minimum) and average cases (Mean and Median).
Method | Minimum | Median | Mean | Std
BR | 0.456522 | 0.778060 | 0.759178 | 0.202349
LP | 0.355987 | 0.902632 | 0.854545 | 0.189086
fast greedy | 0.542302 | 0.876667 | 0.837570 | 0.135707
fast greedy-weighted | 0.298197 | 0.875222 | 0.792386 | 0.205646
infomap | 0.650000 | 0.945261 | 0.889663 | 0.125510
infomap-weighted | 0.510865 | 0.889488 | 0.855242 | 0.148578
label_propagation | 0.345816 | 0.928789 | 0.878622 | 0.181165
label_propagation-weighted | 0.440592 | 0.920261 | 0.853821 | 0.181186
leading_eigenvector | 0.163199 | 0.889874 | 0.821436 | 0.222219
leading_eigenvector-weighted | 0.464170 | 0.812444 | 0.794091 | 0.154102
walktrap | 0.238558 | 0.891304 | 0.804866 | 0.227916
walktrap-weighted | 0.644889 | 0.863722 | 0.852369 | 0.122837
Table A7. The p-values of the assessment of the performance of the multi-label learning approaches compared against random baseline by the Iman–Davenport–Friedman multiple comparison, per measure.
Measure | Iman–Davenport p-Value
Subset Accuracy | 0.0000000004
F1-macro | 0.0000000000
F1-micro | 0.0000000177
Hamming Loss | 0.0491215784
Jaccard | 0.0000124790
Table A8. The post-hoc pairwise comparison p-values of the assessment of the performance of the multi-label learning approaches compared against random baseline by the Iman–Davenport–Friedman test with Rom post hoc procedure, per measure. Bold numbers signify methods that performed statistically significantly better than RAkELd with a p-value < 0.05.
Method | Accuracy | F1-macro | F1-micro | Hamming Loss | Jaccard
BR | 0.3590121 | 0.0000003 | 0.0000500 | 1.0000000 | 0.0064229
LP | 0.0000641 | 0.0280673 | 0.0205862 | 1.0000000 | 0.0000705
fast greedy | 0.0515844 | 0.0000234 | 0.0000656 | 1.0000000 | 0.0038623
fast greedy-weighted | 0.1089704 | 0.0000001 | 0.0000001 | 0.3472374 | 0.0085647
infomap | 0.0000257 | 0.0484159 | 0.0205862 | 1.0000000 | 0.0000705
infomap-weighted | 0.0002717 | 0.0010778 | 0.0024092 | 1.0000000 | 0.0001198
label_propagation | 0.0000112 | 0.0484159 | 0.0205862 | 1.0000000 | 0.0000098
label_propagation-weighted | 0.0005315 | 0.0015858 | 0.0024092 | 1.0000000 | 0.0001196
leading_eigenvector | 0.1860319 | 0.0001282 | 0.0002372 | 1.0000000 | 0.0056673
leading_eigenvector-weighted | 0.2570154 | 0.0000274 | 0.0000653 | 1.0000000 | 0.0259068
walktrap | 0.0780264 | 0.0000397 | 0.0004457 | 1.0000000 | 0.0056673
walktrap-weighted | 0.0192676 | 0.0000239 | 0.0000040 | 1.0000000 | 0.0018482

Appendix B. Efficiency Figures

Figure B1. Efficiency of fast greedy modularity maximization data-driven approach against RAkELd.
Figure B2. Efficiency of the infomap greedy data-driven approach against RAkELd.
Figure B3. Efficiency of the label propagation data-driven approach against RAkELd.
Figure B4. Efficiency of the leading eigenvector modularity maximization data-driven approach against RAkELd.
Figure B5. Efficiency of the walktrap data-driven approach against RAkELd.

References

1. Tsoumakas, G.; Katakis, I. Multi-label classification: An overview. Int. J. Data Warehous. Min. 2007, 3, 1–13.
2. Dembczyński, K.; Waegeman, W.; Cheng, W.; Hüllermeier, E. On label dependence and loss minimization in multi-label classification. Mach. Learn. 2012, 88, 5–45.
3. Tsoumakas, G.; Katakis, I.; Vlahavas, I. Random k-Labelsets for Multilabel Classification. IEEE Trans. Knowl. Data Eng. 2011, 23, 1079–1089.
4. Read, J.; Pfahringer, B.; Holmes, G.; Frank, E. Classifier chains for multi-label classification. Mach. Learn. 2011, 85, 333–359.
5. Dembczynski, K.; Cheng, W.; Hüllermeier, E. Bayes Optimal Multilabel Classification via Probabilistic Classifier Chains. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, 21–24 June 2010; pp. 279–286.
6. Tsoumakas, G.; Katakis, I.; Vlahavas, I. Effective and efficient multilabel classification in domains with large number of labels. In Proceedings of the ECML/PKDD 2008 Workshop on Mining Multidimensional Data (MMD ‘08), Antwerp, Belgium, 19 September 2008; pp. 30–44.
7. Madjarov, G.; Kocev, D.; Gjorgjevikj, D.; Džeroski, S. An extensive experimental comparison of methods for multi-label learning. Pattern Recognit. 2012, 45, 3084–3104.
8. Zhang, M.L.; Zhou, Z.H. A review on multi-label learning algorithms. IEEE Trans. Knowl. Data Eng. 2014, 26, 1819–1837.
9. Clauset, A.; Newman, M.E.J.; Moore, C. Finding community structure in very large networks. Phys. Rev. E 2004, 70, 066111.
10. Newman, M.E.J.; Girvan, M. Finding and evaluating community structure in networks. Phys. Rev. E 2004, 69, 026113.
11. Newman, M.E. Analysis of weighted networks. Phys. Rev. E 2004, 70, 056131.
12. Brandes, U.; Delling, D.; Gaertler, M.; Görke, R.; Hoefer, M.; Nikoloski, Z.; Wagner, D. On Modularity Clustering. IEEE Trans. Knowl. Data Eng. 2008, 20, 172–188.
13. Newman, M.E.J. Finding community structure in networks using the eigenvectors of matrices. Phys. Rev. E 2006, 74, 036104.
14. Rosvall, M.; Axelsson, D.; Bergstrom, C.T. The map equation. Eur. Phys. J. Spec. Top. 2009, 178, 13–23.
15. Raghavan, U.N.; Albert, R.; Kumara, S. Near linear time algorithm to detect community structures in large-scale networks. Phys. Rev. E 2007, 76, 036106.
16. Pons, P.; Latapy, M. Computing communities in large networks using random walks (long version). 2005; arXiv:physics/0512106.
17. MULAN. Available online: http://mulan.sourceforge.net/datasets-mlc.html (accessed on 21 July 2016).
18. Katakis, I.; Tsoumakas, G.; Vlahavas, I. Multilabel text classification for automated tag suggestion. Available online: http://www.kde.cs.uni-kassel.de/ws/rsdc08/pdf/all_rsdc_v2.pdf#page=83 (accessed on 21 July 2016).
19. Klimt, B.; Yang, Y. The enron corpus: A new dataset for email classification research. In Machine Learning: ECML 2004; Springer: Berlin/Heidelberg, Germany, 2004; pp. 217–226.
20. UC Berkeley Enron Email Analysis. Available online: http://bailando.sims.berkeley.edu/enron_email.html (accessed on 21 July 2016).
21. Computationalmedicine.org. Available online: http://www.computationalmedicine.org/challenge/ (accessed on 21 July 2016).
22. Boutell, M.R.; Luo, J.; Shen, X.; Brown, C.M. Learning multi-label scene classification. Pattern Recognit. 2004, 37, 1757–1771.
23. Briggs, F.; Lakshminarayanan, B.; Neal, L.; Fern, X.Z.; Raich, R.; Hadley, S.J.K.; Hadley, A.S.; Betts, M.G. Acoustic classification of multiple simultaneous bird species: A multi-instance multi-label approach. J. Acoust. Soc. Am. 2012, 131, 4640–4650.
24. Briggs, F.; Huang, Y.; Raich, R.; Eftaxias, K.; Lei, Z.; Cukierski, W.; Hadley, S.; Hadley, A.; Betts, M.; Fern, X.; et al. The 9th annual MLSP competition: New methods for acoustic classification of multiple simultaneous bird species in a noisy environment. In Proceedings of the 2013 IEEE International Workshop on Machine Learning for Signal Processing (MLSP ‘13), Southampton, UK, 22–25 September 2013; pp. 1–8.
25. Duygulu, P.; Barnard, K.; Freitas, J.F.G.D.; Forsyth, D.A. Object Recognition as Machine Translation: Learning a Lexicon for a Fixed Image Vocabulary. In Proceedings of the 7th European Conference on Computer Vision-Part IV (ECCV ‘02), Copenhagen, Denmark, 28–31 May 2012; pp. 97–112.
26. Snoek, C.G.M.; Worring, M.; Gemert, J.C.V.; Geusebroek, J.M.; Smeulders, A.W.M. The challenge problem for automated detection of 101 semantic concepts in multimedia. In Proceedings of the ACM International Conference on Multimedia, Santa Barbara, CA, USA, 23–27 October 2006; pp. 421–430.
27. MediaMill, Research on Visual Search. Available online: http://www.science.uva.nl/research/mediamill/challenge/ (accessed on 21 July 2016).
28. Trohidis, K.; Tsoumakas, G.; Kalliris, G.; Vlahavas, I.P. Multi-Label Classification of Music into Emotions. In Proceedings of the Ninth International Conference on Music Information Retrieval (ISMIR 2008), Philadelphia, PA, USA, 14–18 September 2008; Volume 8, pp. 325–330.
29. Elisseeff, A.; Weston, J. A Kernel Method for Multi-Labelled Classification. In Advances in Neural Information Processing Systems 14; MIT Press: Cambridge, MA, USA, 2001; pp. 681–687.
30. Diplaris, S.; Tsoumakas, G.; Mitkas, P.A.; Vlahavas, I. Protein Classification with Multiple Algorithms. In Advances in Informatics; Springer: Berlin/Heidelberg, Germany, 2005; pp. 448–456.
31. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18.
32. Demšar, J. Statistical Comparisons of Classifiers over Multiple Data Sets. J. Mach. Learn. Res. 2006, 7, 1–30.
33. Scikit-Multilearn. Available online: http://scikit-multilearn.github.io/ (accessed on 21 July 2016).
34. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
35. Tsoumakas, G.; Spyromitros-Xioufis, E.; Vilcek, J.; Vlahavas, I. Mulan: A Java Library for Multi-Label Learning. J. Mach. Learn. Res. 2011, 12, 2411–2414.
36. Csardi, G.; Nepusz, T. The igraph software package for complex network research. Inter. J. Complex Syst. 2006, 1695, 1–9.
37. Yang, Y. An Evaluation of Statistical Approaches to Text Categorization. Inf. Retr. 1999, 1, 69–90.
Figure 1. Statistical evaluation of the method’s performance in terms of micro-averaged F1 score. Gray, baseline; white, statistically identical to the baseline; otherwise, the p-value of the hypothesis that a method performs better than the baseline.
Figure 2. Histogram of the methods’ likelihood of performing better than RAkELd in the micro-averaged F1 score aggregated over datasets.
Figure 3. Statistical evaluation of the method’s performance in terms of macro-averaged F1 score. Gray, baseline; white, statistically identical to baseline; otherwise, the p-value of the hypothesis that a method performs better than the baseline.
Figure 4. Histogram of the methods’ likelihood of performing better than RAkELd in the macro-averaged F1 score aggregated over datasets.
Figure 5. Statistical evaluation of the method’s performance in terms of Jaccard similarity score. Gray, baseline; white, statistically identical to baseline; otherwise, the p-value of the hypothesis that a method performs better than the baseline.
Figure 6. Histogram of the methods’ likelihood of performing better than RAkELd in subset accuracy aggregated over datasets.
Figure 7. Statistical evaluation of the method’s performance in terms of micro-averaged F1 score. Gray, baseline; white, statistically identical to baseline; otherwise, the p-value of the hypothesis that a method performs better than the baseline.
Figure 8. Histogram of the methods’ likelihood of performing better than RAkELd in Jaccard similarity aggregated over datasets.
Figure 9. Histogram of the methods’ likelihood of performing better than RAkELd in Hamming loss aggregated over datasets.
Table 1. Likelihood of performing better than RAkELd in the micro-averaged F1 score of every method for each dataset.
Dataset | BR | LP | Fast Greedy | Fast Greedy-Weighted | Infomap | Infomap-Weighted | Label_Propagation | Label_Propagation-Weighted | Leading_Eigenvector | Leading_Eigenvector-Weighted | Walktrap | Walktrap-Weighted
Corel5k | 0.856444 | 0.608000 | 0.961333 | 0.804000 | 0.524000 | 0.881778 | 0.601333 | 0.524000 | 0.949778 | 0.818222 | 0.745333 | 0.799111
bibtex | 0.997778 | 0.782667 | 0.756444 | 0.794222 | 0.664889 | 0.816889 | 0.749333 | 0.882222 | 0.812000 | 0.800889 | 0.835111 | 0.833333
birds | 0.968562 | 0.438280 | 0.843736 | 0.946833 | 0.591771 | 0.433657 | 0.364309 | 0.478964 | 0.630606 | 0.830791 | 0.694868 | 0.836338
delicious | 0.914667 | 0.869333 | 0.941778 | 0.936444 | 0.864000 | 0.874222 | 0.892889 | 0.868889 | 0.934667 | 0.918667 | 0.912889 | 0.916000
emotions | 0.500000 | 0.521739 | 0.565217 | 0.673913 | 0.739130 | 0.586957 | 0.673913 | 0.913043 | 0.717391 | 0.630435 | 0.739130 | 0.891304
enron | 0.802000 | 0.873000 | 0.934500 | 0.938000 | 0.786000 | 0.776500 | 0.815500 | 0.839000 | 0.776000 | 0.945500 | 0.761000 | 0.859500
genbase | 0.941778 | 0.880000 | 0.864444 | 0.919111 | 0.862222 | 0.913333 | 0.880000 | 0.882667 | 0.882667 | 0.862222 | 0.882667 | 0.880000
mediamill | 0.740889 | 0.609333 | 0.769778 | 0.932000 | 0.627556 | 0.562222 | 0.615111 | 0.589333 | 0.709333 | 0.886667 | 0.715111 | 0.854222
medical | 0.938500 | 0.596500 | 0.769000 | 0.799500 | 0.688000 | 0.736000 | 0.772000 | 0.623000 | 0.770000 | 0.729500 | 0.667500 | 0.698500
scene | 0.673913 | 0.608696 | 0.695652 | 0.695652 | 0.478261 | 0.586957 | 0.521739 | 0.608696 | 0.673913 | 0.695652 | 0.695652 | 0.695652
tmc2007-500 | 0.999343 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000
yeast | 0.746458 | 0.671141 | 0.740492 | 0.926174 | 0.815063 | 0.808352 | 0.718867 | 0.791201 | 0.790455 | 0.891872 | 0.733781 | 0.960477
Table 2. Likelihood of performing better than RAkELd in the macro-averaged F1 score of every method for each dataset.
Dataset | BR | LP | Fast Greedy | Fast Greedy-Weighted | Infomap | Infomap-Weighted | Label_Propagation | Label_Propagation-Weighted | Leading_Eigenvector | Leading_Eigenvector-Weighted | Walktrap | Walktrap-Weighted
Corel5k | 1.000000 | 0.665778 | 0.980889 | 0.968444 | 0.615111 | 0.996889 | 0.596444 | 0.712444 | 0.836444 | 0.901333 | 0.845333 | 0.969778
bibtex | 1.000000 | 0.871111 | 0.839111 | 0.869333 | 0.803111 | 0.887111 | 0.849333 | 0.945778 | 0.880000 | 0.878222 | 0.881333 | 0.876000
birds | 0.992603 | 0.559408 | 0.883957 | 0.999075 | 0.804901 | 0.673601 | 0.449376 | 0.671290 | 0.736477 | 0.847896 | 0.786408 | 0.897365
delicious | 0.997778 | 0.937778 | 1.000000 | 1.000000 | 0.929333 | 0.956444 | 0.996889 | 0.974222 | 1.000000 | 1.000000 | 1.000000 | 1.000000
emotions | 0.500000 | 0.543478 | 0.543478 | 0.717391 | 0.717391 | 0.608696 | 0.630435 | 0.913043 | 0.695652 | 0.695652 | 0.717391 | 0.891304
enron | 0.991500 | 0.966500 | 1.000000 | 0.973000 | 0.943500 | 0.948000 | 0.875500 | 0.954000 | 0.981500 | 0.992000 | 0.949000 | 0.830000
genbase | 0.953333 | 0.829333 | 0.881778 | 0.892444 | 0.836000 | 0.892000 | 0.840444 | 0.840889 | 0.882667 | 0.836000 | 0.848444 | 0.840444
mediamill | 0.964444 | 0.860444 | 0.882667 | 0.969778 | 0.835111 | 0.743111 | 0.792444 | 0.759556 | 0.881333 | 0.964889 | 0.840889 | 0.943111
medical | 0.977500 | 0.725500 | 0.768500 | 0.750500 | 0.696000 | 0.722500 | 0.730000 | 0.697500 | 0.783500 | 0.703000 | 0.701500 | 0.745000
scene | 0.673913 | 0.565217 | 0.695652 | 0.695652 | 0.478261 | 0.500000 | 0.478261 | 0.521739 | 0.695652 | 0.695652 | 0.695652 | 0.695652
tmc2007-500 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000
yeast | 0.979866 | 0.829232 | 0.921700 | 0.970172 | 0.858315 | 0.893363 | 0.811335 | 0.869500 | 0.847129 | 0.950783 | 0.871738 | 0.934377
Table 3. Likelihood of performing better than RAkELd in the subset accuracy of every method for each dataset.
Dataset | BR | LP | Fast Greedy | Fast Greedy-Weighted | Infomap | Infomap-Weighted | Label_Propagation | Label_Propagation-Weighted | Leading_Eigenvector | Leading_Eigenvector-Weighted | Walktrap | Walktrap-Weighted
Corel5k | 0.000000 | 0.953778 | 0.652000 | 0.301778 | 0.965778 | 0.652000 | 0.953778 | 0.826667 | 0.000000 | 0.000000 | 0.301778 | 0.780889
bibtex | 0.492000 | 0.975111 | 0.828000 | 0.723111 | 0.971556 | 0.761778 | 0.975111 | 0.761778 | 0.800000 | 0.788000 | 0.799111 | 0.761778
birds | 0.380028 | 0.336570 | 0.213130 | 0.061951 | 0.651872 | 0.525659 | 0.380028 | 0.336570 | 0.051780 | 0.039297 | 0.078132 | 0.429958
delicious | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000
emotions | 0.326087 | 0.717391 | 0.826087 | 0.847826 | 0.956522 | 0.891304 | 0.847826 | 0.891304 | 0.826087 | 0.891304 | 0.760870 | 1.000000
enron | 0.504000 | 0.964000 | 0.775500 | 0.817000 | 0.959000 | 0.941000 | 0.990500 | 0.986500 | 0.865500 | 0.795000 | 0.659500 | 0.775500
genbase | 0.907556 | 0.871556 | 0.851111 | 0.907556 | 0.851111 | 0.907556 | 0.871556 | 0.871556 | 0.871556 | 0.851111 | 0.871556 | 0.871556
mediamill | 0.318222 | 0.924000 | 0.801333 | 0.856889 | 0.987111 | 0.858667 | 0.953333 | 0.936000 | 0.776889 | 0.836444 | 0.914222 | 0.844444
medical | 0.866500 | 0.893000 | 0.686500 | 0.839000 | 0.580500 | 0.782500 | 0.839000 | 0.743500 | 0.814000 | 0.853000 | 0.631000 | 0.665500
scene | 0.282609 | 1.000000 | 0.869565 | 0.652174 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 0.826087 | 0.543478 | 0.869565 | 0.630435
tmc2007-500 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000
yeast | 0.619687 | 0.990306 | 0.998509 | 0.964206 | 0.997017 | 0.990306 | 0.995526 | 0.985831 | 0.987323 | 0.953020 | 0.986577 | 0.966443
Table 4. Likelihood of performing better than RAkELd in Jaccard similarity of every method for each dataset.
Dataset | BR | LP | Fast Greedy | Fast Greedy-Weighted | Infomap | Infomap-Weighted | Label_Propagation | Label_Propagation-Weighted | Leading_Eigenvector | Leading_Eigenvector-Weighted | Walktrap | Walktrap-Weighted
Corel5k | 0.675111 | 0.828444 | 0.888889 | 0.562667 | 0.788000 | 0.787111 | 0.803111 | 0.597333 | 0.888444 | 0.644444 | 0.504889 | 0.644889
bibtex | 0.984444 | 0.998667 | 0.780889 | 0.720889 | 0.975556 | 0.758222 | 0.995111 | 0.866222 | 0.867556 | 0.796000 | 0.938222 | 0.824000
birds | 0.820620 | 0.355987 | 0.542302 | 0.298197 | 0.653722 | 0.510865 | 0.345816 | 0.440592 | 0.163199 | 0.464170 | 0.238558 | 0.725381
delicious | 0.976000 | 0.973333 | 0.943556 | 0.900444 | 0.964444 | 0.968444 | 0.992444 | 0.984000 | 0.938222 | 0.828889 | 0.988889 | 0.963111
emotions | 0.456522 | 0.652174 | 0.760870 | 0.652174 | 0.956522 | 0.826087 | 0.891304 | 0.956522 | 0.891304 | 0.652174 | 0.891304 | 1.000000
enron | 0.735500 | 0.984500 | 0.951000 | 0.904500 | 0.934000 | 0.960500 | 0.976000 | 0.994000 | 0.784500 | 0.957000 | 0.760000 | 0.827000
genbase | 0.911111 | 0.904444 | 0.864444 | 0.911111 | 0.869333 | 0.952889 | 0.909778 | 0.884000 | 0.904000 | 0.869333 | 0.884000 | 0.909778
mediamill | 0.537333 | 0.866667 | 0.682222 | 0.976000 | 0.921778 | 0.774667 | 0.893333 | 0.823556 | 0.706222 | 0.895556 | 0.914222 | 0.900444
medical | 0.897500 | 0.789500 | 0.765500 | 0.850000 | 0.650000 | 0.745000 | 0.810500 | 0.722000 | 0.827500 | 0.751000 | 0.691000 | 0.688500
scene | 0.456522 | 1.000000 | 0.891304 | 0.782609 | 0.978261 | 1.000000 | 0.978261 | 1.000000 | 0.913043 | 0.739130 | 0.891304 | 0.782609
tmc2007-500 | 0.998029 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 0.999343 | 1.000000 | 1.000000 | 1.000000
yeast | 0.661447 | 0.900820 | 0.979866 | 0.950037 | 0.984340 | 0.979120 | 0.947800 | 0.977629 | 0.973900 | 0.931394 | 0.956003 | 0.962714
Table 5. Likelihood of performing better than RAkELd in Hamming loss of every method for each dataset.
Dataset | BR | LP | Fast Greedy | Fast Greedy-Weighted | Infomap | Infomap-Weighted | Label_Propagation | Label_Propagation-Weighted | Leading_Eigenvector | Leading_Eigenvector-Weighted | Walktrap | Walktrap-Weighted
Corel5k | 0.000000 | 0.004000 | 0.400000 | 0.148000 | 0.003556 | 0.115556 | 0.004889 | 0.009778 | 0.069333 | 0.000444 | 0.350667 | 0.243556
bibtex | 0.097778 | 0.023111 | 0.022222 | 0.093333 | 0.011556 | 0.044444 | 0.012444 | 0.116000 | 0.059111 | 0.059111 | 0.070667 | 0.089333
birds | 0.474341 | 0.008322 | 0.459085 | 0.540915 | 0.055941 | 0.019880 | 0.002774 | 0.006472 | 0.044383 | 0.013870 | 0.104022 | 0.154415
delicious | 0.000000 | 0.000000 | 0.244444 | 0.350667 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.249778 | 0.387111 | 0.081778 | 0.120444
emotions | 0.369565 | 0.391304 | 0.478261 | 0.695652 | 0.695652 | 0.521739 | 0.586957 | 0.891304 | 0.673913 | 0.521739 | 0.608696 | 0.847826
enron | 0.044000 | 0.483000 | 0.460000 | 0.567500 | 0.284000 | 0.274500 | 0.436000 | 0.522500 | 0.474500 | 0.313000 | 0.151000 | 0.277000
genbase | 0.911111 | 0.877778 | 0.856889 | 0.911111 | 0.856889 | 0.911111 | 0.877778 | 0.877778 | 0.877778 | 0.856889 | 0.877778 | 0.877778
mediamill | 0.138222 | 0.256000 | 0.314667 | 0.403111 | 0.329333 | 0.238667 | 0.260889 | 0.276444 | 0.222222 | 0.403111 | 0.345333 | 0.301778
medical | 0.947000 | 0.517000 | 0.735000 | 0.762000 | 0.636500 | 0.725000 | 0.783500 | 0.529000 | 0.740500 | 0.747000 | 0.585500 | 0.675500
scene | 0.369565 | 0.521739 | 0.521739 | 0.500000 | 0.282609 | 0.391304 | 0.478261 | 0.456522 | 0.456522 | 0.500000 | 0.630435 | 0.456522
tmc2007-500 | 0.999343 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000
yeast | 0.548098 | 0.478001 | 0.592095 | 0.726324 | 0.599553 | 0.528710 | 0.548098 | 0.615958 | 0.609247 | 0.686055 | 0.527218 | 0.775541
Table 6. Number of communities (number of label subspaces) detected in training sets by each method per dataset.
Fast GreedyFast Greedy-WeightedInfomapInfomap-WeightedLabel_PropagationLabel_Propagation-WeightedLeading_EigenvectorLeading_Eigenvector-WeightedWalktrapWalktrap-Weighted
Corel5k1013825477152225
bibtex3528174598
birds3311112327
delicious3613122654
emotions1211121212
enron44222233137
genbase10111011101110111211
mediamill3312113234
medical20192020181820182020
scene3322223333
tmc2007-5002311112312
yeast1411111414
Table 7. The summary of the evaluated hypotheses and proposed recommendations of this paper.
Measure | Micro-Averaged F1 | Macro-Averaged F1 | Subset Accuracy | Jaccard Similarity | Hamming Loss
RH1: The data-driven approach is significantly better than random (α = 0.05) | Yes | Yes | Yes | Yes | No
RH2: The data-driven approach is more likely to outperform RAkELd than a priori methods | Yes | No | Yes | Yes | Yes
RH3: The data-driven approach is more likely to outperform RAkELd than a priori methods in the worst case | Yes | Yes | Yes | Yes | Yes
RH4: The data-driven approach is more likely to perform better than RAkELd in the worst case than otherwise | Yes | Yes | Yes | Yes | No
RH5: The data-driven approach is more time efficient than RAkELd | Yes | Yes | Yes | Yes | Yes
Recommended data-driven approach | Weighted fast greedy and weighted walktrap | Weighted fast greedy | Unweighted infomap | Unweighted infomap | Weighted fast greedy
