Article

Assessing Information Transmission in Data Transformations with the Channel Multivariate Entropy Triangle

by Francisco J. Valverde-Albacete and Carmen Peláez-Moreno *,†
Department of Signal Theory and Communications, Universidad Carlos III de Madrid, Leganés 28911, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Entropy 2018, 20(7), 498; https://doi.org/10.3390/e20070498
Submission received: 3 May 2018 / Revised: 11 June 2018 / Accepted: 20 June 2018 / Published: 27 June 2018

Abstract
Data transformation, e.g., feature transformation and selection, is an integral part of any machine learning procedure. In this paper, we introduce an information-theoretic model and tools to assess the quality of data transformations in machine learning tasks. In an unsupervised fashion, we analyze the transformation of a discrete, multivariate source of information $\bar{X}$ into a discrete, multivariate sink of information $\bar{Y}$ related by a distribution $P_{\bar{X}\bar{Y}}$. The first contribution is a decomposition of the maximal potential entropy of $(\bar{X}, \bar{Y})$, which we call a balance equation, into its (a) non-transferable, (b) transferable, but not transferred, and (c) transferred parts. Such balance equations can be represented in (de Finetti) entropy diagrams, our second set of contributions. The most important of these, the aggregate channel multivariate entropy triangle, is a visual exploratory tool to assess the effectiveness of multivariate data transformations in transferring information from input to output variables. We also show how these decomposition and balance equations apply to the entropies of $\bar{X}$ and $\bar{Y}$, respectively, and generate entropy triangles for them. As an example, we present the application of these tools to the assessment of information transfer efficiency for Principal Component Analysis and Independent Component Analysis as unsupervised feature transformation and selection procedures in supervised classification tasks.

Graphical Abstract

1. Introduction

Information-related considerations are often cursorily invoked in many machine learning applications, sometimes to suggest why a system or procedure is seemingly better than another at a particular task. In this paper, we set out to ground phrases such as “this transformation retains more information from the data” or “this learning method uses the information in the data better than this other” on measurable evidence.
This has become particularly relevant with the increase in complexity of machine learning methods, such as deep neural architectures [1], which prevents straightforward interpretation. Nowadays, these learning schemes almost always become black boxes, where researchers try to optimize a prescribed performance metric without looking inside. However, there is a need to assess what the deep layers are actually accomplishing. Although some answers have started to appear [2,3], the issue is by no means settled.
In this paper, we put forward that framing the previous problem in a generic information-theoretical model can shed light on it by exploiting the versatility of information theory. For instance, a classical end-to-end example of an information-based model evaluation can be observed in Figure 1b. In this supervised scheme, introduced in [4], the evaluation of the performance of the classifier involves only the comparison of the true labels K vs. the predicted labels $\hat K$. This means that all the complexity enclosed in the classifier box cannot be accessed, measured or interpreted.
In this paper, we want to expand the previous model into the scheme of Figure 1a, which provides a more detailed picture of the contents of the black-box where:
  • A random source of classification labels K is subjected to a measurement process that returns random observations $\bar X$. The n instances of pairs $(k_i, \bar x_i)$, $1 \leq i \leq n$, are often called the (task) dataset.
  • Then, a generic data transformation block may transform the available data, e.g., the observations in the dataset X ¯ , into other data with “better” characteristics, the transformed feature vectors Y ¯ . These characteristics may be representational power, independence among individual dimensions, reduction of complexity offered to a classifier, etc. The process is normally called feature transformation and selection.
  • Finally, the Y ¯ are the inputs to an actual classifier of choice that obtains the predicted labels K ^ .
This would allow us to better understand the flow of information in the classification process with a view toward assessing and improving it.
Note the similarity between the classical setting of Figure 1b and the transformation block of Figure 1a, reproduced in Figure 1c for convenience. Despite this, the former represents a Single-Input Single-Output (SISO) block with $(K, \hat K) \sim P_{K\hat K}$, whereas the latter represents a multivariate Multiple-Input Multiple-Output (MIMO) block described by the joint distribution of random vectors $(\bar X, \bar Y) \sim P_{\bar X \bar Y}$.
This MIMO kind of block may represent an unsupervised transformation method—for instance, a Principal Component Analysis (PCA) or Independent Component Analysis (ICA)—in which case, the “effectiveness” of the transformation is supplied by a heuristic principle, e.g., least reconstruction error on some test data, maximum mutual information, etc. However, it may also represent a supervised transformation method—for instance, X ¯ are the feature instances, and Y ¯ are the (multi-)labels or classes in a classification task, or Y ¯ may be the activation signals of a convolutional neural network trained using an implicit target signal— in which case, the “effectiveness” should measure the conformance to the supervisory signal.
In [4], we argued for carrying out the evaluation of classification tasks that can be modeled by Figure 1b with the new framework of entropy balance equations and their related entropy triangles [4,5,6]. This has provided a means of quantifying and visualizing the end-to-end information transfer for SISO architectures. The gist of this framework is explained in Section 2.1: if a classifier working on a certain dataset obtained a confusion matrix P K K ^ , then we can information-theoretically assess the classifier by analyzing the entropies and information in the related distribution P K K ^ with the help of a balance equation [6]. However, looking inside the black-box poses a challenge since X ¯ and Y ¯   are random vectors and most information-theoretic quantities are not readily available in their multivariate version.
If we want to extend the same framework of evaluation to random vectors in general, we need the multivariate generalizations of the information-theoretic measures involved in the balance equations, an issue that is not free of contention. With this purpose in mind, we review the best-known multivariate generalizations of mutual information in Section 2.2.
Finally, we present our contributions in Section 3. As a first result, we develop a balance equation for the joint distribution $P_{\bar X \bar Y}$ and a related representation in Section 3.1 and Section 3.2, respectively. However, we are also able to obtain split equations for the input and output multivariate sources, tied only by one multivariate extension of mutual information, much as in the SISO case. As an instance of use, in Section 3.3, we analyze the transfer of information in PCA and ICA transformations applied to some well-known UCI datasets. We conclude with a discussion of the tools in light of this application in Section 3.4.

2. Methods

In Section 3, we will build a solution to our problem by finding the least common multiple, so to speak, of our previous solutions to the SISO block, described in Section 2.1, and to the multivariate source case, described in Section 2.2.

2.1. The Channel Bivariate Entropy Balance Equation and Triangle

A solution to conceptualizing and visualizing the transmission of information through a channel where input and output are reduced to a single variable, that is with $|\bar X| = 1$ and $|\bar Y| = 1$, was presented in [6] and later extended in [4]. For this case, we use simply X and Y to describe the random variables. Notice that in the Introduction, and later in the example application, these are called K and $\hat K$, but here we want to present this case as a simpler version of the one we set out to solve in this paper. Figure 2a, then, depicts a classical information diagram (i-diagram) [7,8] of an entropy decomposition around $P_{XY}$ in which we have included the exterior boundaries arising from the entropy balance equation, as we will show later. Three crucial regions can be observed:
  • The (normalized) redundancy ([9], Section 2.4), or divergence with respect to uniformity (yellow area), $\Delta H_{P_X \cdot P_Y}$, between the joint distribution where $P_X$ and $P_Y$ are independent and the uniform distributions with the same cardinality of events as $P_X$ and $P_Y$,
    $\Delta H_{P_X \cdot P_Y} = H_{U_X \cdot U_Y} - H_{P_X \cdot P_Y}.$
  • The mutual information, $MI_{P_{XY}}$ [10] (each of the green areas), quantifies the strength of the stochastic binding between $P_X$ and $P_Y$, “towards the outside” in Figure 2a,
    $MI_{P_{XY}} = H_{P_X \cdot P_Y} - H_{P_{XY}}$
    but also “towards the inside”,
    $MI_{P_{XY}} = H_{P_X} - H_{P_{X|Y}} = H_{P_Y} - H_{P_{Y|X}}.$
  • The variation of information (the sum of the red areas), $VI_{P_{XY}}$ [11], embodies the residual entropy, not used in binding the variables,
    $VI_{P_{XY}} = H_{P_{X|Y}} + H_{P_{Y|X}}.$
Then, we may write the following entropy balance equation between the entropies of X and Y:
$H_{U_X \cdot U_Y} = \Delta H_{P_X \cdot P_Y} + 2 \cdot MI_{P_{XY}} + VI_{P_{XY}}, \qquad 0 \leq \Delta H_{P_X \cdot P_Y}, MI_{P_{XY}}, VI_{P_{XY}} \leq H_{U_X \cdot U_Y}$
where the bounds are easily obtained from distributional considerations [6]. If we normalize (5) by the overall entropy $H_{U_X \cdot U_Y}$, we obtain:
$1 = \Delta H'_{P_X \cdot P_Y} + 2 \cdot MI'_{P_{XY}} + VI'_{P_{XY}}, \qquad 0 \leq \Delta H'_{P_X \cdot P_Y}, MI'_{P_{XY}}, VI'_{P_{XY}} \leq 1$
Equation (6) is the 2-simplex in normalized $\Delta H'_{P_X \cdot P_Y} \times 2 MI'_{P_{XY}} \times VI'_{P_{XY}}$ space. Each joint distribution $P_{XY}$ can be characterized by its joint entropy fractions, $F(P_{XY}) = [\Delta H'_{P_{XY}}, 2 \cdot MI'_{P_{XY}}, VI'_{P_{XY}}]$, whose projection onto the plane with director vector $(1, 1, 1)$ is its de Finetti or compositional diagram [12]. This diagram of the 2-simplex is an equilateral triangle whose coordinates are given by $F(P_{XY})$, so every bivariate distribution is shown as a point in the triangle, and each zone in the triangle is indicative of the characteristics of the distributions whose coordinates fall in it. This is what we call the Channel Bivariate Entropy Triangle (CBET), whose schematic is shown in Figure 3.
We can actually decompose (5) and the quantities in it into two split balance equations,
$H_{U_X} = \Delta H_{P_X} + MI_{P_{XY}} + H_{P_{X|Y}}, \qquad H_{U_Y} = \Delta H_{P_Y} + MI_{P_{XY}} + H_{P_{Y|X}},$
with the obvious limits. These can each be normalized by $H_{U_X}$ and $H_{U_Y}$, respectively, leading to the 2-simplex equations:
$1 = \Delta H'_{P_X} + MI'_{P_{XY}} + H'_{P_{X|Y}}, \qquad 1 = \Delta H'_{P_Y} + MI'_{P_{XY}} + H'_{P_{Y|X}}.$
Since these are also equations on a 2-simplex, we can actually represent the coordinates $F_X(P_{XY}) = [\Delta H'_{P_X}, MI'_{P_{XY}}, H'_{P_{X|Y}}]$ and $F_Y(P_{XY}) = [\Delta H'_{P_Y}, MI'_{P_{XY}}, H'_{P_{Y|X}}]$ in the same triangle, side by side with the original $F(P_{XY})$, whereby the representation seems to split in two.

Application: The Evaluation of Multiclass Classification

The CBET can be used to visualize the performance of supervised classifiers in a straightforward manner, as announced in the Introduction: consider the confusion matrix $N_{K\hat K}$ of a classifier chain on a supervised classification task, given the random variable of true class labels $K \sim P_K$ and that of predicted labels $\hat K \sim P_{\hat K}$ as depicted in Figure 1a, which now play the roles of $P_X$ and $P_Y$. From this confusion matrix, we can estimate the joint distribution $P_{K\hat K}$ between the random variables, so that the entropy triangle for $P_{K\hat K}$ produces valuable information about the actual classifier used to solve the task [6,13] and even about the theoretical limits of the task; for instance, whether it can be solved in a trustworthy manner by classification technology and with what effectiveness.
The CBET acts, in this case, as an exploratory data analysis tool for visual assessment, as shown in Figure 3.
The success of this approach in the bivariate, supervised classification case is a strong hint that the multivariate extension will likewise be useful for other machine learning tasks. See [4] for a thorough explanation of this procedure.
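To make the mechanics concrete, here is a minimal base-R sketch (our own function and variable names, not the authors' entropies package) that estimates $P_{K\hat K}$ from a confusion matrix and returns the three normalized CBET coordinates of (6):

```r
# Minimal sketch: CBET coordinates from a confusion matrix, in bits (base R).
H <- function(p) { p <- p[p > 0]; -sum(p * log2(p)) }  # plug-in entropy

cbet_coords <- function(N) {
  Pxy <- N / sum(N)                            # estimate of the joint P_KK^
  Px  <- rowSums(Pxy); Py <- colSums(Pxy)      # marginals
  Hux <- log2(nrow(N)); Huy <- log2(ncol(N))   # entropies of the uniform marginals
  MI  <- H(Px) + H(Py) - H(Pxy)                # mutual information
  VI  <- H(Pxy) - MI                           # H(X|Y) + H(Y|X)
  DH  <- (Hux + Huy) - (H(Px) + H(Py))         # divergence from uniformity
  c(DeltaH = DH, twoMI = 2 * MI, VI = VI) / (Hux + Huy)
}

# Toy 3-class confusion matrix: the three coordinates add up to 1.
N <- matrix(c(50, 3, 2,  4, 45, 6,  1, 5, 44), nrow = 3, byrow = TRUE)
cbet_coords(N)
```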

2.2. Quantities around the Multivariate Mutual Information

The main hurdle for a multivariate extension of the balance Equation (5) and the CBET is the multivariate generalization of binary mutual information, since it quantifies the information transport from input to output in the bivariate case and is also crucial for the decoupling of (5) into the split balance Equation (7). For this reason, we next review the different “flavors” of information measures describing sets of more than two variables looking for these two properties. We start from very basic definitions both in the interest of self-containment and to provide a script of the process of developing future analogues for other information measures.
To fix notation, let $\bar X = \{X_i \mid 1 \leq i \leq m\}$ be a set of discrete random variables with joint multivariate distribution $P_{\bar X} = P_{X_1 \cdots X_m}$ and the corresponding marginals $P_{X_i}(x_i) = \sum_{x_j,\, j \neq i} P_{\bar X}(\bar x)$, where $\bar x = x_1 \cdots x_m$ is a tuple of m elements; likewise for $\bar Y = \{Y_j \mid 1 \leq j \leq l\}$, with $P_{\bar Y} = P_{Y_1 \cdots Y_l}$ and the marginals $P_{Y_j}$. Furthermore, let $P_{\bar X \bar Y}$ be the joint distribution of the $(m + l)$-length tuples $\bar X \bar Y$. Note that two different situations can be clearly distinguished:
Situation 1:
All the random variables form part of the same set X ¯ , and we are looking at information transfer within this set, or
Situation 2:
They are partitioned into two different sets X ¯ and Y ¯ , and we are looking at information transfer between these sets.
An up-to-date review of multivariate information measures in both situations is [14], which follows the interesting methodological point from [15] of calling information those measures that involve amounts of entropy shared by multiple variables and entropies those that do not—although this poses a conundrum for the entropy written as the self-information $H_{P_X} = MI_{P_{XX}}$.
Since i-diagrams are a powerful tool to visualize the interaction of distributions in the bivariate case, we will also try to use them for sets of random variables. For multivariate generalizations of mutual information as seen in the i-diagrams, the following caveats apply:
  • Their multivariate generalization is only warranted when signed measures of probability are considered, since it is well known that some of these “areas” can be negative, contrary to the geometric intuitions in this respect.
  • We should retain the bounding rectangles that appear when considering the most entropic distributions with similar support to the ones being graphed [6]. This is the sense of the bounding rectangles in Figure 4a,b.
With great insight, the authors of [15] point out that some of the multivariate information measures stem from focusing on a particular property of the bivariate mutual information and generalizing it to the multivariate setting. The properties in question—including the already stated (2) and (3)—are:
$MI_{P_{XY}} = H_{P_X} + H_{P_Y} - H_{P_{XY}}$
$MI_{P_{XY}} = H_{P_X} - H_{P_{X|Y}} = H_{P_Y} - H_{P_{Y|X}}$
$MI_{P_{XY}} = \sum_{x,y} P_{XY}(x, y) \log \frac{P_{XY}(x, y)}{P_X(x) P_Y(y)}$
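These three expressions are numerically interchangeable in the bivariate case; a small base-R check on an arbitrary joint distribution (all names are ours) makes this concrete:

```r
# Sketch: the three expressions for the bivariate MI coincide on any joint P_XY.
H <- function(p) { p <- p[p > 0]; -sum(p * log2(p)) }

Pxy <- matrix(c(0.30, 0.10, 0.05, 0.05, 0.25, 0.25), nrow = 2, byrow = TRUE)
Px  <- rowSums(Pxy); Py <- colSums(Pxy)

mi_joint <- H(Px) + H(Py) - H(Pxy)                # inclusion-exclusion form
mi_cond  <- H(Px) - (H(Pxy) - H(Py))              # H(X) - H(X|Y)
mi_kl    <- sum(Pxy * log2(Pxy / outer(Px, Py)))  # Kullback-Leibler form
c(mi_joint, mi_cond, mi_kl)                       # all three values are equal
```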
Regarding the first situation of a vector of random variables $\bar X \sim P_{\bar X}$, let $\Pi_{\bar X} = \prod_{i=1}^{n} P_{X_i}$ be the (jointly) independent distribution with similar marginals to $P_{\bar X}$. To picture this (virtual) distribution, consider Figure 4a, depicting an i-diagram for $\bar X = [X_1, X_2, X_3]$. Then, $\Pi_{\bar X} = P_{X_1} \cdot P_{X_2} \cdot P_{X_3}$ is the inner rectangle containing both green areas. The different extensions of mutual information that concentrate on different properties are:
  • the total correlation [16], integration [17] or multi-information [18], which is a generalization of (2), represented by the green area outside $H_{P_{\bar X}}$:
    $C_{P_{\bar X}} = H_{\Pi_{\bar X}} - H_{P_{\bar X}}$
  • the dual total correlation [19,20] or interaction complexity [21], which is a generalization of (3), represented by the green area inside $H_{P_{\bar X}}$:
    $D_{P_{\bar X}} = H_{P_{\bar X}} - VI_{P_{\bar X}}$
  • the interaction information [22], multivariate mutual information [23] or co-information [24], which is the generalization of (9), the total amount of information to which all variables contribute:
    $MI_{P_{\bar X}} = \sum_{\bar x} P_{\bar X}(\bar x) \log \frac{P_{\bar X}(\bar x)}{\Pi_{\bar X}(\bar x)}$
    It is represented by the inner convex green area (within the dual total correlation), but note that it may in fact be negative for $n > 2$ [25].
  • the local exogenous information [15] or the bound information [26], which is the addition of the total correlation and the dual total correlation:
    $M_{P_{\bar X}} = C_{P_{\bar X}} + D_{P_{\bar X}}.$
Some of these generalizations to the multivariate case were used in [5,26] to develop a technique similar to the CBET, but applied to analyzing the information content of data sources. For this purpose, it was necessary to define for every random variable a residual entropy $H_{P_{X_i \mid X_i^c}}$, where $X_i^c = \bar X \setminus \{X_i\}$, which is not explained by the information provided by the other variables. We call residual information [15] or (multivariate) variation of information [11,26] the generalization of the same quantity in the bivariate case, i.e., the sum of these quantities across the set of random variables:
$VI_{P_{\bar X}} = \sum_{i=1}^{n} H_{P_{X_i \mid X_i^c}}.$
Then, the variation of information can easily be seen to consist of the sum of the red areas in Figure 4a and amounts to information particular to each variable.
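As a concrete illustration of these source measures (a plug-in sketch on toy discrete data with our own function names, not the interface of the entropies package), total correlation, dual total correlation and the multivariate variation of information can be estimated as follows:

```r
# Sketch: plug-in estimates of total correlation C, dual total correlation D and
# multivariate variation of information VI for a set of discrete variables.
joint_H <- function(df) {                       # joint entropy of the columns of df
  p <- as.vector(table(df)) / nrow(df)
  p <- p[p > 0]; -sum(p * log2(p))
}

source_measures <- function(df) {
  Hi <- sapply(df, function(v) joint_H(data.frame(v)))  # marginal entropies
  Hj <- joint_H(df)                                     # joint entropy H(P_X)
  # residual entropies H(X_i | X_i^c) = H(X) - H(X_i^c), summed into VI
  VI <- sum(sapply(seq_along(df), function(i) Hj - joint_H(df[-i])))
  c(C = sum(Hi) - Hj, D = Hj - VI, VI = VI)
}

# Three ternary variables: x2 partially copies x1, x3 is independent.
set.seed(1)
x1 <- sample(1:3, 1000, replace = TRUE)
x2 <- ifelse(runif(1000) < 0.7, x1, sample(1:3, 1000, replace = TRUE))
x3 <- sample(1:3, 1000, replace = TRUE)
source_measures(data.frame(x1, x2, x3))
```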
The main question regarding this issue is which, if any, of these generalizations of bivariate mutual information are adequate for an analogue of the entropy balance equations and triangles. Note that all of these generalizations consider X ¯ as a homogeneous set of variables, that is Situation 1 described at the beginning of this section, and none consider the partitioning of the variables in X ¯ into two subsets (Situation 2), for instance to distinguish between input and output ones, so the answer cannot be straightforward. This issue is clarified in Section 3.1.

3. Results

Our goal is now to find a decomposition of the entropies characterizing a joint distribution $P_{\bar X \bar Y}$ between random vectors $\bar X$ and $\bar Y$ in ways analogous to those of (5), but considering multivariate input and output.
Note that there is no advantage in trying to do this on continuous distributions, as the entropic measures used are basic. Rather, what we actually capitalize on is the existence of a balance equation between these apparently simple entropic concepts, and what their intuitive meanings afford to the problem of measuring the transfer of information in data processing tasks. As we set out to demonstrate in this section, our main results are in complete analogy to those of the bivariate case, but with the flavour of the multivariate case.

3.1. The Aggregate and Split Channel Multivariate Balance Equation

Consider the modified information diagram of Figure 4b highlighting entropies for some distributions around $P_{\bar X \bar Y}$. When we distinguish two random vectors in the set of variables, $\bar X$ and $\bar Y$, a proper multivariate generalization of the variation of information in (4) is
$VI_{P_{\bar X \bar Y}} = H_{P_{\bar X | \bar Y}} + H_{P_{\bar Y | \bar X}},$
and we will also call it the variation of information. It represents the addition of the information in $\bar X$ not shared with $\bar Y$ and vice versa, as captured by the red area in Figure 4b. Note that this is a non-negative quantity, since it is the addition of two entropies.
Next, consider
  • $U_{\bar X \bar Y}$, the uniform distribution over the supports of $\bar X$ and $\bar Y$, and
  • $P_{\bar X} \times P_{\bar Y}$, the distribution created with the marginals of $P_{\bar X \bar Y}$ considered independent.
Then, we may define a multivariate divergence with respect to uniformity—in analogy to (1)—as
$\Delta H_{P_{\bar X} \times P_{\bar Y}} = H_{U_{\bar X \bar Y}} - H_{P_{\bar X} \times P_{\bar Y}}.$
This is the yellow area in Figure 4b, representing the divergence of the virtual distribution $P_{\bar X} \times P_{\bar Y}$ with respect to uniformity. The virtuality comes from the fact that this distribution does not properly exist in the context being studied. Rather, it only appears in the extreme situation that the marginals of $P_{\bar X \bar Y}$ are independent.
Furthermore, recall that both the total entropy of the uniform distribution and the divergence from uniformity split into individual summands, $H_{U_{\bar X} U_{\bar Y}} = H_{U_{\bar X}} + H_{U_{\bar Y}}$—since uniform joint distributions always have independent marginals—and $H_{P_{\bar X} \times P_{\bar Y}} = H_{P_{\bar X}} + H_{P_{\bar Y}}$. Therefore, (16) admits the splitting $\Delta H_{P_{\bar X} \times P_{\bar Y}} = \Delta H_{P_{\bar X}} + \Delta H_{P_{\bar Y}}$, where
$\Delta H_{P_{\bar X}} = H_{U_{\bar X}} - H_{P_{\bar X}}, \qquad \Delta H_{P_{\bar Y}} = H_{U_{\bar Y}} - H_{P_{\bar Y}}.$
Now, both $U_{\bar X}$ and $U_{\bar Y}$ are the most entropic distributions definable on the supports of $\bar X$ and $\bar Y$, whence both $\Delta H_{P_{\bar X}}$ and $\Delta H_{P_{\bar Y}}$ are non-negative, as is their sum. These generalizations are straightforward, so we expect them to agree with the intuitions developed for the CBET, which is an important usability concern.
The problem is finding a quantity that fulfills the same role as the (bivariate) mutual information. The first property that we would like to have is for this quantity to be a “transmitted information” after conditioning away any of the entropy of either partition, so we propose the following as a definition:
$I_{P_{\bar X \bar Y}} = H_{P_{\bar X \bar Y}} - VI_{P_{\bar X \bar Y}},$
represented by the inner green area in the i-diagram of Figure 4b. This can easily be “refocused” on each of the subsets of the partition:
Lemma 1.
Let P X ¯ Y ¯ be a discrete joint distribution. Then
$H_{P_{\bar X}} - H_{P_{\bar X | \bar Y}} = H_{P_{\bar Y}} - H_{P_{\bar Y | \bar X}} = I_{P_{\bar X \bar Y}}.$
Proof. 
Recalling that the conditional entropies are easily related to the joint entropy by the chain rule, $H_{P_{\bar X \bar Y}} = H_{P_{\bar X}} + H_{P_{\bar Y | \bar X}} = H_{P_{\bar Y}} + H_{P_{\bar X | \bar Y}}$, simply subtract $VI_{P_{\bar X \bar Y}}$. ☐
This property introduces the notion that this information is within each of $\bar X$ and $\bar Y$ independently, but mutually induced. It is easy to see that this quantity appears once again in the i-diagram:
Lemma 2.
Let P X ¯ Y ¯ be a discrete joint distribution. Then
$I_{P_{\bar X \bar Y}} = H_{P_{\bar X} \times P_{\bar Y}} - H_{P_{\bar X \bar Y}}.$
Proof. 
Considering the entropy decomposition of $P_{\bar X} \times P_{\bar Y}$:
$H_{P_{\bar X} \times P_{\bar Y}} - H_{P_{\bar X \bar Y}} = H_{P_{\bar X}} + H_{P_{\bar Y}} - (H_{P_{\bar Y}} + H_{P_{\bar X | \bar Y}}) = H_{P_{\bar X}} - H_{P_{\bar X | \bar Y}} = I_{P_{\bar X \bar Y}}$
 ☐
In other words, this is the quantity of information required to bind $P_{\bar X}$ and $P_{\bar Y}$; equivalently, it is the amount of information lost from $P_{\bar X} \times P_{\bar Y}$ to achieve the binding in $P_{\bar X \bar Y}$. Pictorially, this is the outermost green area in Figure 4b, and it must be non-negative, since $P_{\bar X} \times P_{\bar Y}$ is more entropic than $P_{\bar X \bar Y}$. Notice that (18) and (19) are the analogues of (10) and (11), respectively, but with the flavor of (2) and (3). Therefore, this quantity must be the multivariate mutual information of $P_{\bar X \bar Y}$ as per the Kullback-Leibler divergence definition:
Lemma 3.
Let P X ¯ Y ¯ be a discrete joint distribution. Then
$I_{P_{\bar X \bar Y}} = \sum_{i,j} P_{\bar X \bar Y}(x_i, y_j) \log \frac{P_{\bar X \bar Y}(x_i, y_j)}{P_{\bar X}(x_i) P_{\bar Y}(y_j)}$
Proof. 
This is an easy manipulation:
$\sum_{i,j} P_{\bar X \bar Y}(x_i, y_j) \log \frac{P_{\bar X \bar Y}(x_i, y_j)}{P_{\bar X}(x_i) P_{\bar Y}(y_j)} = \sum_{i,j} P_{\bar X \bar Y}(x_i, y_j) \log \frac{P_{\bar X | \bar Y = y_j}(x_i)}{P_{\bar X}(x_i)} = \sum_i P_{\bar X}(x_i) \log \frac{1}{P_{\bar X}(x_i)} - \sum_j P_{\bar Y}(y_j) \sum_i P_{\bar X | \bar Y = y_j}(x_i) \log \frac{1}{P_{\bar X | \bar Y = y_j}(x_i)} = H_{P_{\bar X}} - H_{P_{\bar X | \bar Y}} = I_{P_{\bar X \bar Y}},$
after a step of marginalization and considering (3). ☐
With these relations we can state our first theorem:
Theorem 1.
Let P X ¯ Y ¯ be a discrete joint distribution. Then the following decomposition holds:
$H_{U_{\bar X} \times U_{\bar Y}} = \Delta H_{P_{\bar X} \times P_{\bar Y}} + 2 \cdot I_{P_{\bar X \bar Y}} + VI_{P_{\bar X \bar Y}}, \qquad 0 \leq \Delta H_{P_{\bar X} \times P_{\bar Y}}, I_{P_{\bar X \bar Y}}, VI_{P_{\bar X \bar Y}} \leq H_{U_{\bar X} \times U_{\bar Y}}$
Proof. 
From (16) we have $H_{U_{\bar X} \times U_{\bar Y}} = \Delta H_{P_{\bar X} \times P_{\bar Y}} + H_{P_{\bar X} \times P_{\bar Y}}$, whence by introducing (18) and (20) we obtain:
$H_{U_{\bar X} \times U_{\bar Y}} = \Delta H_{P_{\bar X} \times P_{\bar Y}} + I_{P_{\bar X \bar Y}} + H_{P_{\bar X \bar Y}} = \Delta H_{P_{\bar X} \times P_{\bar Y}} + I_{P_{\bar X \bar Y}} + I_{P_{\bar X \bar Y}} + VI_{P_{\bar X \bar Y}}.$
Recall that each quantity is non-negative by (15), (16) and (21), so the only things left to be proven are the limits for each quantity in the decomposition. For that purpose, consider the following clarifying conditions:
  • $\bar X$ marginal uniformity, when $H_{P_{\bar X}} = H_{U_{\bar X}}$; $\bar Y$ marginal uniformity, when $H_{P_{\bar Y}} = H_{U_{\bar Y}}$; and marginal uniformity, when both conditions co-occur.
  • Marginal independence, when $P_{\bar X \bar Y} = P_{\bar X} \times P_{\bar Y}$.
  • $\bar Y$ determines $\bar X$, when $H_{P_{\bar X | \bar Y}} = 0$; $\bar X$ determines $\bar Y$, when $H_{P_{\bar Y | \bar X}} = 0$; and mutual determination, when both conditions hold.
Notice that these conditions are independent of each other and that each fixes the value of one of the quantities in the balance:
  • For instance, in case $H_{P_{\bar X}} = H_{U_{\bar X}}$, then $\Delta H_{P_{\bar X}} = 0$ after (17). Similarly, if $H_{P_{\bar Y}} = H_{U_{\bar Y}}$, then $\Delta H_{P_{\bar Y}} = 0$. Hence, when marginal uniformity holds, we have $\Delta H_{P_{\bar X} \times P_{\bar Y}} = 0$.
  • Similarly, when marginal independence holds, we see that $I_{P_{\bar X \bar Y}} = 0$ from (20). Otherwise stated, $H_{P_{\bar X | \bar Y}} = H_{P_{\bar X}}$ and $H_{P_{\bar Y | \bar X}} = H_{P_{\bar Y}}$.
  • Finally, if mutual determination holds—that is to say, the variables in either set are deterministic functions of those of the other set—by the definition of the multivariate variation of information, we have $VI_{P_{\bar X \bar Y}} = 0$.
Therefore, these three conditions fix the lower bounds for their respectively related quantities. Likewise, the upper bounds hold when two of the conditions hold at the same time. This is easily seen invoking the previously found balance Equation (23):
  • For instance, if marginal uniformity holds, then $\Delta H_{P_{\bar X} \times P_{\bar Y}} = 0$. But if marginal independence also holds, then $I_{P_{\bar X \bar Y}} = 0$, whence by (23) $VI_{P_{\bar X \bar Y}} = H_{U_{\bar X} \times U_{\bar Y}}$.
  • But if both marginal uniformity and mutual determination hold, then we have $\Delta H_{P_{\bar X} \times P_{\bar Y}} = 0$ and $VI_{P_{\bar X \bar Y}} = 0$, so that $I_{P_{\bar X \bar Y}} = H_{U_{\bar X} \times U_{\bar Y}}$.
  • Finally, if both mutual determination and marginal independence hold, then a fortiori $\Delta H_{P_{\bar X} \times P_{\bar Y}} = H_{U_{\bar X} \times U_{\bar Y}}$.
This concludes the proof. ☐
Notice how the bounds also allow an interpretation similar to that of (5). In particular, the interpretation of the conditions for actual joint distributions will be taken up again in Section 3.2.
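The decomposition can also be checked numerically on discretized data. A minimal base-R sketch, under the assumption that the uniform entropies are computed over the observed supports (all names are ours):

```r
# Sketch: numerical check of the aggregate channel multivariate balance equation
# H_{U_X x U_Y} = DeltaH_{P_X x P_Y} + 2 I_{P_XY} + VI_{P_XY} on toy discrete data.
joint_H <- function(df) {
  p <- as.vector(table(df)) / nrow(df)
  p <- p[p > 0]; -sum(p * log2(p))
}

set.seed(2)
X <- data.frame(x1 = sample(1:3, 500, replace = TRUE),
                x2 = sample(1:2, 500, replace = TRUE))
Y <- data.frame(y1 = X$x1,                              # y1 copies x1 ...
                y2 = sample(1:2, 500, replace = TRUE))  # ... y2 is pure noise

HX  <- joint_H(X); HY <- joint_H(Y); HXY <- joint_H(cbind(X, Y))
HUX <- sum(log2(sapply(X, function(v) length(unique(v)))))  # uniform entropy on observed support
HUY <- sum(log2(sapply(Y, function(v) length(unique(v)))))

I  <- HX + HY - HXY           # transferred information (Lemma 2)
VI <- HXY - I                 # H(P_X|Y) + H(P_Y|X)
DH <- (HUX + HUY) - (HX + HY) # divergence from uniformity

c(lhs = HUX + HUY, rhs = DH + 2 * I + VI)   # both sides coincide
```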
The next question is whether the balance equation also admits splitting.
Theorem 2.
Let P X ¯ Y ¯ be a discrete joint distribution. Then the Channel Multivariate Entropy Balance equation can be split as:
$H_{U_{\bar X}} = \Delta H_{P_{\bar X}} + I_{P_{\bar X \bar Y}} + H_{P_{\bar X | \bar Y}}, \qquad 0 \leq \Delta H_{P_{\bar X}}, I_{P_{\bar X \bar Y}}, H_{P_{\bar X | \bar Y}} \leq H_{U_{\bar X}}$
$H_{U_{\bar Y}} = \Delta H_{P_{\bar Y}} + I_{P_{\bar X \bar Y}} + H_{P_{\bar Y | \bar X}}, \qquad 0 \leq \Delta H_{P_{\bar Y}}, I_{P_{\bar X \bar Y}}, H_{P_{\bar Y | \bar X}} \leq H_{U_{\bar Y}}$
Proof. 
We prove (24): the proof of (25) is similar mutatis mutandis.
In a similar way as for (22), we have that $H_{U_{\bar X}} = \Delta H_{P_{\bar X}} + H_{P_{\bar X}}$. By introducing the value of $H_{P_{\bar X}}$ from (19), we obtain the decomposition of $H_{U_{\bar X}}$ in (24).
These quantities are non-negative, as mentioned. Next, consider the $\bar X$ marginal uniformity condition applied to the input vector, introduced in the proof of Theorem 1. Clearly, $\Delta H_{P_{\bar X}} = 0$. Marginal independence, again, is the condition so that $I_{P_{\bar X \bar Y}} = 0$. Finally, if $\bar Y$ determines $\bar X$, then $H_{P_{\bar X | \bar Y}} = 0$. These conditions individually provide the lower bounds on each quantity.
On the other hand, when we put together any two of these conditions, we obtain the upper bound for the unspecified quantity: if $\Delta H_{P_{\bar X}} = 0$ and $I_{P_{\bar X \bar Y}} = 0$, then $H_{P_{\bar X | \bar Y}} = H_{P_{\bar X}} = H_{U_{\bar X}}$. Also, if $I_{P_{\bar X \bar Y}} = 0$ and $H_{P_{\bar X | \bar Y}} = 0$, then $H_{P_{\bar X}} = H_{P_{\bar X | \bar Y}} = 0$ and $\Delta H_{P_{\bar X}} = H_{U_{\bar X}} - 0 = H_{U_{\bar X}}$. Finally, if $H_{P_{\bar X | \bar Y}} = 0$ and $\Delta H_{P_{\bar X}} = 0$, then $I_{P_{\bar X \bar Y}} = H_{P_{\bar X}} - H_{P_{\bar X | \bar Y}} = H_{U_{\bar X}} - 0 = H_{U_{\bar X}}$. ☐

3.2. Visualizations: From i-Diagrams to Entropy Triangles

3.2.1. The Channel Multivariate Entropy Triangle

Our next goal is to develop an exploratory analysis tool similar to the CBET introduced in Section 2.1. As in that case, we need the equation of a simplex to represent the information balance of a multivariate transformation. For that purpose, as in (6), we may normalize by the overall entropy $H_{U_{\bar X} \times U_{\bar Y}}$ to obtain the equation of the 2-simplex in multivariate entropic space,
$1 = \Delta H'_{P_{\bar X} \times P_{\bar Y}} + 2 \cdot I'_{P_{\bar X \bar Y}} + VI'_{P_{\bar X \bar Y}}, \qquad 0 \leq \Delta H'_{P_{\bar X} \times P_{\bar Y}}, I'_{P_{\bar X \bar Y}}, VI'_{P_{\bar X \bar Y}} \leq 1.$
The de Finetti diagram of this equation then provides the aggregated Channel Multivariate Entropy Triangle, CMET.
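A sketch of how the aggregate coordinates could be computed and placed on a de Finetti diagram follows. It assumes the ggtern package for the ternary plot and plug-in entropy estimates on discrete data; it is not the authors' entropies implementation:

```r
# Sketch: normalized CMET coordinates plus a ternary (de Finetti) plot via ggtern.
library(ggtern)   # assumed third-party package for ternary diagrams

joint_H <- function(df) {
  p <- as.vector(table(df)) / nrow(df)
  p <- p[p > 0]; -sum(p * log2(p))
}

cmet_coords <- function(X, Y) {
  HX <- joint_H(X); HY <- joint_H(Y); HXY <- joint_H(cbind(X, Y))
  HU <- sum(log2(sapply(cbind(X, Y), function(v) length(unique(v)))))  # H_{U_X x U_Y}
  I  <- HX + HY - HXY
  data.frame(DeltaH = (HU - HX - HY) / HU, twoI = 2 * I / HU, VI = (HXY - I) / HU)
}

set.seed(3)
X <- data.frame(x1 = sample(1:4, 300, replace = TRUE),
                x2 = sample(1:3, 300, replace = TRUE))
Y <- data.frame(y1 = X$x1)           # the "transformation" keeps only one input
coords <- cmet_coords(X, Y)          # the three coordinates add up to 1
ggtern(coords, aes(x = VI, y = twoI, z = DeltaH)) + geom_point(size = 3)
```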
A formal graphical assessment of a multivariate joint distribution with the CMET is fairly simple using the schematic in Figure 5a and the conditions of Theorem 1:
  • The lower side of the triangle, with $I_{P_{\bar X \bar Y}} = 0$, characterized by marginal independence $P_{\bar X \bar Y} = P_{\bar X} \times P_{\bar Y}$, is the locus of partitioned joint distributions that do not share information between the two blocks $\bar X$ and $\bar Y$.
  • The right side of the triangle, with $VI_{P_{\bar X \bar Y}} = 0$, described by mutual determination $H_{P_{\bar X | \bar Y}} = 0 = H_{P_{\bar Y | \bar X}}$, is the locus of partitioned joint distributions whose blocks carry no information supplementary to that provided by the other block.
  • The left side, with $\Delta H_{P_{\bar X} \times P_{\bar Y}} = 0$, describing distributions with uniform marginals $P_{\bar X} = U_{\bar X}$ and $P_{\bar Y} = U_{\bar Y}$, is the locus of partitioned joint distributions that offer as much potential information for transformations as possible.
Based on these characterizations we can attach interpretations to other regions of the CMET:
  • If we want a transformation from $\bar X$ to $\bar Y$ to be faithful, then we want to maximize the information used for mutual determination, $I'_{P_{\bar X \bar Y}} \to 1$, or, equivalently, to minimize at the same time the divergence from uniformity, $\Delta H'_{P_{\bar X} \times P_{\bar Y}} \to 0$, and the information that only pertains to each of the blocks in the partition, $VI'_{P_{\bar X \bar Y}} \to 0$. So the coordinates of a faithful partitioned joint distribution will lie close to the apex of the triangle.
  • However, if the coordinates of a distribution lie close to the left vertex, $VI'_{P_{\bar X \bar Y}} \to 1$, then it shows marginal uniformity, $\Delta H'_{P_{\bar X} \times P_{\bar Y}} \to 0$, but shares little or no information between the blocks, $I'_{P_{\bar X \bar Y}} \to 0$; hence, it must be a randomizing transformation.
  • Distributions whose coordinates lie close to the right vertex, $\Delta H'_{P_{\bar X} \times P_{\bar Y}} \to 1$, are essentially deterministic and in that sense carry no information, $I'_{P_{\bar X \bar Y}} \to 0$, $VI'_{P_{\bar X \bar Y}} \to 0$. Indeed, in this instance there does not seem to exist a transformation, whence we call them rigid.
These qualities are annotated on the vertices of the schematic CMET of Figure 5a. Note that different applications may call for partitioned distributions with different qualities; the one used above is pertinent when the partitioned joint distribution models a transformation of $\bar X$ into $\bar Y$ or vice versa.

3.2.2. Normalized Split Channel Multivariate Balance Equations

With a normalization similar to that leading from (7) to (8), (24) and (25) naturally lead to 2-simplex equations, normalizing by $H_{U_{\bar X}}$ and $H_{U_{\bar Y}}$, respectively:
$1 = \Delta H'_{P_{\bar X}} + I'_{P_{\bar X \bar Y}} + H'_{P_{\bar X | \bar Y}}, \qquad 0 \leq \Delta H'_{P_{\bar X}}, I'_{P_{\bar X \bar Y}}, H'_{P_{\bar X | \bar Y}} \leq 1$
$1 = \Delta H'_{P_{\bar Y}} + I'_{P_{\bar X \bar Y}} + H'_{P_{\bar Y | \bar X}}, \qquad 0 \leq \Delta H'_{P_{\bar Y}}, I'_{P_{\bar X \bar Y}}, H'_{P_{\bar Y | \bar X}} \leq 1$
Note that the quantities $\Delta H_{P_{\bar X}}$ and $\Delta H_{P_{\bar Y}}$ have been independently motivated and named redundancies ([9], Section 2.4).
These are actually two different representations, one for each of the two blocks in the partitioned joint distribution. Using the fact that they share one coordinate—$I_{P_{\bar X \bar Y}}$—and that the rest are analogues—$\Delta H_{P_{\bar X}}$ and $\Delta H_{P_{\bar Y}}$ on one side, and $H_{P_{\bar X | \bar Y}}$ and $H_{P_{\bar Y | \bar X}}$ on the other—we can represent both equations at the same time in a single de Finetti diagram. We call this representation the split Channel Multivariate Entropy Triangle, a schema of which can be seen in Figure 5b. The qualifier “split” then refers to the fact that each partitioned joint distribution appears as two points in the diagram. Note the double annotation on the left and bottom coordinates, implying that there are two different diagrams overlapping.
Conventionally, the point referring to the X ¯ block described by (27) is represented with a cross, while the point referring to the Y ¯ block described by (28) is represented with a circle as will be noted in Figure 6.
The formal interpretation of this split diagram with the conditions of Theorem 1 follows that of the aggregated CMET but considering only one block at a time, for instance, for X ¯ :
  • The lower side of the triangle is interpreted as before.
  • The right side of the triangle is the locus of the partitioned joint distributions whose $\bar X$ block is completely determined by the $\bar Y$ block, that is, $H_{P_{\bar X | \bar Y}} = 0$.
  • The left side of the triangle, $\Delta H_{P_{\bar X}} = 0$, is the locus of those partitioned joint distributions whose $\bar X$ marginal is uniform, $P_{\bar X} = U_{\bar X}$.
The interpretation is analogous for $\bar Y$, mutatis mutandis.
The purpose of this representation is to investigate the formal conditions separately on each block. However, for this split representation we have to take into consideration that the normalizations may not be the same, that is, $H_{U_{\bar X}}$ and $H_{U_{\bar Y}}$ are, in general, different.
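For completeness, a sketch of the split coordinates, with each block normalized by its own uniform entropy (again with hypothetical names and plug-in estimates):

```r
# Sketch: split CMET coordinates, one row per block of the partition,
# normalized by H_{U_X} and H_{U_Y} respectively.
joint_H <- function(df) {
  p <- as.vector(table(df)) / nrow(df)
  p <- p[p > 0]; -sum(p * log2(p))
}

split_cmet_coords <- function(X, Y) {
  HX  <- joint_H(X); HY <- joint_H(Y); HXY <- joint_H(cbind(X, Y))
  HUX <- sum(log2(sapply(X, function(v) length(unique(v)))))
  HUY <- sum(log2(sapply(Y, function(v) length(unique(v)))))
  I   <- HX + HY - HXY
  rbind(X = c(DeltaH = (HUX - HX) / HUX, I = I / HUX, Hcond = (HXY - HY) / HUX),
        Y = c(DeltaH = (HUY - HY) / HUY, I = I / HUY, Hcond = (HXY - HX) / HUY))
}

set.seed(3)
X <- data.frame(x1 = sample(1:4, 300, replace = TRUE),
                x2 = sample(1:3, 300, replace = TRUE))
Y <- data.frame(y1 = X$x1)
split_cmet_coords(X, Y)   # each block would appear as one glyph (cross/circle)
```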
A full example of the interpretation of both types of diagrams, the CMET and the split CMET is provided in the next Section in the context of feature transformation and selection.

3.3. Example Application: The Analysis of Feature Transformation and Selection with Entropy Triangles

In this Section we present an application of the results obtained above to a machine learning subtask: the transformation and selection of features for supervised classification.
The task. An extended practice in supervised classification is to explore different transformations of the observations and then evaluate such different approaches on different classifiers for a particular task [27]. Instead of this “in the loop” evaluation—which conflates the evaluation of the transformation and the classification—we will use the CMET to evaluate only the transformation block, using the information transferred from the original to the transformed features as a heuristic. As specific instances of transformations, we will evaluate the use of Principal Component Analysis (PCA) [28] and Independent Component Analysis (ICA) [29], which are often employed for dimensionality reduction.
Note that we may evaluate feature transformation and dimensionality reduction at the same time with the techniques developed above: the transformation procedure in the case of PCA and ICA may provide the Y ¯ as a ranking of features, so that we may carry out feature selection afterwards by selecting subsets Y ¯ j spanning from the first-ranked to the j-th feature.
The tools. PCA is a staple technique in statistical data analysis and machine learning based on the Singular Value Decomposition of the data matrix to obtain projections along the singular vectors that account for its variance in decreasing amounts, so PCA ranks the transformed features in this order. The implementation used in our examples is that of the publicly available R package stats (v. 3.3.3) (https://stat.ethz.ch/R-manual/R-devel/library/stats/html/00Index.html, accessed on 11 June 2018).
While PCA aims at the orthogonalization of the projections, ICA finds the projections, also known as factors, by maximizing their statistical independence, in our example by minimizing a cost term related to their mutual information [30]. However, this does not result in a ranking of the transformed features; hence, we have created a pseudo-ranking by carrying out an ICA transformation obtaining j transformed features for all sensible values of $1 \leq j \leq l$, using independent runs of the ICA algorithm. The implementation used in our examples is that of fastICA [30] as implemented in the R package fastICA (v. 1.2-1) (https://cran.r-project.org/package=fastICA, accessed on 11 June 2018), with standard parameter values (alg.typ=“parallel”, fun=“logcosh”, alpha=1, method=“C”, row.norm=FALSE, maxit=200, tol=0.0001).
The entropy diagrams and calculations were carried out with the open-source entropies experimental R package that provides an implementation of the present framework (available at https://github.com/FJValverde/entropies.git, accessed on 11 June 2018). The analysis carried out in this section is part of an illustrative vignette for the package and will remain so in future releases.
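As an illustration of the transformation step itself, the following sketch assumes the fastICA package and an ad hoc equal-width discretization into five bins prior to plug-in entropy estimation; the binning choice is ours and is not prescribed by the entropies package:

```r
# Sketch: PCA and ICA transformations of the log-features of Iris, plus a simple
# equal-width discretization before entropy estimation (bin count is our choice).
library(fastICA)

data(iris)
X <- log(iris[, 1:4])                          # log of Anderson's Iris features

# PCA: the components come already ranked by explained variance.
Y_pca <- prcomp(X, center = TRUE, scale. = FALSE)$x

# ICA produces no ranking, so run fastICA once per number of components j,
# with the standard parameter values quoted above.
Y_ica <- lapply(1:4, function(j)
  fastICA(as.matrix(X), n.comp = j, alg.typ = "parallel", fun = "logcosh",
          alpha = 1, method = "C", row.norm = FALSE, maxit = 200, tol = 1e-4)$S)

# Equal-width discretization of every column, returning integer bin codes.
discretize <- function(m, bins = 5)
  as.data.frame(lapply(as.data.frame(m), function(v) cut(v, breaks = bins, labels = FALSE)))

Xd <- discretize(X)   # discretized inputs, ready for plug-in entropy estimation
```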
Analysis of results. We analyzed in this way some UCI classification datasets [31], whose number of classes k, features m, and observations n are listed in Table 1.
For simplicity, we decided to illustrate our new techniques on three datasets: Iris, Glass and Arthritis. Ionosphere, BreastCancer, Sonar and Wine have a pattern similar to Glass, but less interesting, as commented below. Besides, both Ionosphere and Wine have too many features for the kind of neat visualization we are trying to use in this paper. We have also used slightly modified entropy triangles in which the colors of the axes are related to those of the information diagrams of Figure 4b.
For instance, Figure 6a presents the results of the PCA transformation on the logarithm of the features of Anderson’s Iris. Crosses represent the information decomposition of the input features $\bar X$ using (27), while circles represent the information decomposition of the transformed features $\bar Y_j$ using (28), and filled circles the aggregate decomposition of (26). We represent several possible feature sets $\bar Y_j$ as output, where each is obtained by selecting the first j features in the ranking provided by PCA. For example, since Iris has four features, we can make four different nested feature sets, named in the Figure “1_j”, that is, “1_1” to “1_4”. The figure then explores how the information in the whole database $\bar X$ is transported to different, nested candidate feature sets $\bar Y_j$ as per the PCA recipe: choose as many ranked features as required to increase the transmitted information.
We first notice that all the points for $\bar X$ lie on a line parallel to the left side of the triangle, and their average transmitted information is increasing, parallel to a decrease in remanent information. Indeed, the redundancy $\Delta H'_{P_{\bar X}} = \Delta H_{P_{\bar X}} / H_{U_{\bar X}}$ is the same regardless of the choice of $\bar Y_j$. The monotonic increase with the number of selected features j of the average transmitted information $I'_{P_{\bar X \bar Y_j}} = I_{P_{\bar X \bar Y_j}} / H_{U_{\bar X}}$ in (27) corresponds to the monotonic increase in absolute transmitted information $I_{P_{\bar X \bar Y_j}}$: for a given input set of features $\bar X$, the more output features are selected, the higher the mutual information between input and output. This is the basis of the effectiveness of the feature-selection procedure.
Regarding the points for $\bar Y_j$, note that the absolute transmitted information also appears in the average transmitted information (with respect to $\bar Y_j$) as $I'_{P_{\bar X \bar Y_j}} = I_{P_{\bar X \bar Y_j}} / H_{U_{\bar Y_j}}$ in (28). While $I_{P_{\bar X \bar Y_j}}$ increases with j, as mentioned, we actually see a monotonic decrease in this average transmitted information. The reason for this is the rapidly increasing value of the denominator $H_{U_{\bar Y_j}}$ as we select more and more features.
Finally, notice how these two tendencies are conflated in the aggregate plot for $\bar X \bar Y_j$ in Figure 6a, which shows a lopsided, inverted-U pattern, peaking before j reaches its maximum. This suggests that if we balance aggregated transmitted information against the number of features selected—the complexity of the representation—in the search for a faithful representation, the average transmitted information is the quantity to optimize, that is, the mutual determination between the two feature sets.
Figure 6b presents similar results for the ICA transformation on the logarithm of the features of Anderson’s Iris with the same glyph convention as before, but with a ranking resulting from carrying out the ICA method in full for each value of j. That is, we first work out $\bar Y_1$, which is a single component, then we calculate $\bar Y_2$, which comprises the two best ICA components, and so on. The reason for this is that ICA does not rank the features it produces, so we have to create this ranking by carrying out the ICA algorithm for all values of j to obtain each $\bar Y_j$. Note that the transformed features produced by PCA and ICA are, in principle, very different, but the phenomena described for PCA are also apparent here: an increase in aggregate transmitted information, checked by the increase of the denominator $H_{U_{\bar Y_j}}$, which implies a decreasing transmitted information per feature for $\bar Y_j$.
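Putting the previous sketches together, the nested feature sets $\bar Y_j$ and their aggregate coordinates could be generated along the following lines (a self-contained sketch with our own names and binning; it is not the code behind Figure 6):

```r
# Sketch: aggregate CMET coordinates for the nested PCA feature sets Y_1..Y_4 of Iris.
joint_H <- function(df) {
  p <- as.vector(table(df)) / nrow(df); p <- p[p > 0]; -sum(p * log2(p))
}
discretize <- function(m, bins = 5)
  as.data.frame(lapply(as.data.frame(m), function(v) cut(v, breaks = bins, labels = FALSE)))

data(iris)
X  <- log(iris[, 1:4])
Xd <- discretize(X)
Yp <- prcomp(X, center = TRUE)$x                 # PCA-ranked transformed features

coords <- t(sapply(1:4, function(j) {
  Yd  <- discretize(Yp[, 1:j, drop = FALSE])     # nested feature set "1_j"
  HX  <- joint_H(Xd); HY <- joint_H(Yd); HXY <- joint_H(cbind(Xd, Yd))
  HU  <- sum(log2(sapply(cbind(Xd, Yd), function(v) length(unique(v)))))
  I   <- HX + HY - HXY
  c(j = j, DeltaH = (HU - HX - HY) / HU, twoI = 2 * I / HU, VI = (HXY - I) / HU)
}))
coords   # one row of aggregate coordinates per nested set "1_1" ... "1_4"
```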
With the present framework the question of which transformation is “better” for this dataset can be given content and rephrased as which transformation transmits more information on average on this dataset, and also, importantly, whether the aggregate information available in the dataset is being transmitted by either of these methods. This is explored in Figure 7 for Iris, Glass and Arthritis, where, for reference, we have included a point for the (deterministic) transformation of the logarithm, the cross, giving an idea of what a lossless information transformation can achieve.
Consider Figure 7a for Iris. The first interesting observation is that neither technique transmits all of the information in the database, which can be gleaned from the fact that both feature sets “1_4”—when all the available features have been selected—are below the cross. This is in agreement with the data processing inequality, but is still surprising, since transformations like ICA and PCA are extensively used and considered to work well in practice. In this instance, it can only be justified by the advantages of the achieved dimensionality reduction. Actually, the observation in the CMET suggests that we can improve on the average transmitted information per feature by retaining the first three features for both PCA and ICA.
The analysis of Iris turns out to be an intermediate case between those of Arthritis and Glass, the latter being the most typical in our analysis. This is the case of datasets with many original features $\bar X$ that transmit very little private, distinctive information per feature. The typical behavior, both for PCA and ICA, is to select at first features that carry very little average information ($\bar Y_1$). As we select more and more transformed features, information accumulates, but at a very slow pace, as shown in Figure 6c,d. Typically, the transformed features chosen last are very redundant. In the case of Glass, specifically, there is no point in retaining features beyond the sixth (out of 9) for either PCA or ICA, as shown in Figure 7b. As to comparing the techniques, in some similarly-behaving datasets PCA is better, while in others ICA is. In the case of Glass, it is better to use ICA when retaining up to two transformed features, but it is better to use PCA when retaining between 2 and 6.
The case of Arthritis is quite different, perhaps due to the small number of original features, m = 3. Our analyses show that just choosing the first ICA component $\bar Y_1$—perhaps the first two—provides an excellent characterization of the dataset, being extremely efficient as regards information transmission. This phenomenon is also seen in the first PCA component, but is lost as we aggregate more PCA components. Crucially, taking the 3 ICA components amounts to taking all of the original information in the dataset, while taking the 3 components in the case of PCA is rather inefficient, as confirmed by Figure 7c.
All in all, our analyses show that the unsupervised transformation and selection of features in datasets can be assessed using an information-theoretical heuristic: maximize the average mutual information accumulated by the transformed features. And we have also shown how to carry out this assessment with entropic balance equations and entropy triangles.

3.4. Discussion

The development of the multivariate case is quite parallel to that of the bivariate case. An important point to realize is that the multivariate transmitted information between two different random vectors, $I_{P_{\bar X \bar Y}}$, is the proper generalization for the usual mutual information $MI_{P_{XY}}$ in the bivariate case, rather than the more complex alternatives used for multivariate sources (see Section 2.2 and [5,14]). Indeed, properties (18) and (20) are crucial in transporting the structure and intuitions built from the bivariate channel entropy triangle to the multivariate one, of which the former is a proper instance. This was not the case with balance equations and entropy triangles for stochastic sources of information [5].
The crucial quantities in the balance equation and the triangle have been independently motivated in other works. First, multivariate mutual information is fundamental in Information Theory, and we have already mentioned the redundancy $\Delta H_{P_X}$ [9]. We also mentioned the input-entropy-normalized $I_{P_{\bar X \bar Y}}$ used as a standalone assessment measure in intrusion detection [32]. Perhaps the least known quantity in the paper is the variation of information. Despite being inspired by the concept proposed by Meila [11], to the best of our knowledge it is completely new in the multivariate setting. However, the underlying concepts of conditional or remanent entropies have proven their usefulness time and again. All of the above is indirect proof that the quantities studied in this paper are significant, and that the existence of a balance equation binding them together is important.
The paragraph above notwithstanding, there are researchers who claim that Shannon-type relations cannot capture all the dependencies inside multivariate random vectors [33]. Due to the novelty of that work, it is not clear how much the “standard” theory of Shannon measures would have to change to accommodate the objections raised in that respect. But this question seems to be off the mark for our purposes: the framework of channel balance equations and entropy triangles has not been developed to look into the question of dependency, but into that of aggregate information transfer, wherever that information comes from. It may be relevant to source balance equations and triangles [5]—which have a different purpose—but that still has to be researched.
The normalizations involved in (6) and (26)—respectively, (8), (27) and (28)—are conceptually similar: to divide by the logarithm of the total size of the domains involved, whether it is the size of $X \times Y$ or that of $\bar X \times \bar Y$. Notice, first, that this is the same as taking logarithms to the base of these sizes in the non-normalized equations. The resulting units would not be bits for the multivariate case proper, since the size of $\bar X$ or $\bar Y$ is at least $2 \times 2 = 4$. But since the entropy triangles represent compositions [12], which are inherently dimensionless, this allows us to represent many different, and otherwise incomparable, systems, e.g., univariate and multivariate ones, with the same kind of diagram. Second, this type of normalization allows for an interpretation of the extension of these measures to the continuous case as a limit in the process of equipartitioning a compact support, as done, for instance, for the Rényi entropy in ([34], Section 3), which is known to be a generalization of Shannon’s. There is hope, then, for a continuous version of the balance equations for Rényi’s entropy.
Finally, note that the application presented in Section 3.3 above, although principled in the framework presented here, is not conclusive on the quality of the analyzed transformations in general but only as applied to the particular dataset. For that, a wider selection of data transformation approaches, and many more datasets should be assessed. Furthermore, the feature selection process used the “filter” approach which for supervised tasks seems suboptimal. Future work will address this issue as well as how the technique developed here relates to the end-to-end assessment presented in [4] and the source characterization technique of [5].

4. Conclusions

In this paper, we have introduced a new way to assess, quantitatively and visually, the transfer of information from a multivariate source $\bar X$ to a multivariate sink of information $\bar Y$, using a heretofore unknown decomposition of the entropies around the joint distribution $P_{\bar X \bar Y}$. For that purpose, we have generalized a similar previous theory and visualization tools for bivariate sources, greatly extending the applicability of the results:
  • We have been able to decompose the information of a random multivariate source into three components: (a) the non-transferable divergence from uniformity $\Delta H_{P_{\bar X \bar Y}}$, which is an entropy “missing” from $P_{\bar X \bar Y}$; (b) a transferable, but not transferred, part, the variation of information $VI_{P_{\bar X \bar Y}}$; and (c) the transferable and transferred information $I_{P_{\bar X \bar Y}}$, which is a known, but never considered in this context, generalization of bivariate mutual information.
  • Using the same principles as in previous developments, we have been able to obtain a new type of visualization diagram for this balance of information using de Finetti’s ternary diagrams, which is actually an exploratory data analysis tool.
We have also shown how to apply these new theoretical developments and the visualization tools to the analysis of information transfer in unsupervised feature transformation and selection, a ubiquitous step in data analysis, and specifically, to apply it to the analysis of PCA and ICA. We believe this is a fruitful approach, e.g., for the assessment of learning systems, and foresee a bevy of applications to come. Further conclusions on this issue are left for a more thorough later investigation.
Conflicts of Interest

The authors declare no conflict of interest.

Author Contributions

Conceptualization, F.J.V.-A. and C.P.-M.; Formal analysis, F.J.V.-A. and C.P.-M.; Funding acquisition, C.P.-M.; Investigation, F.J.V.-A. and C.P.-M.; Methodology, F.J.V.-A. and C.P.-M.; Software, F.J.V.-A.; Supervision, C.P.-M.; Validation, F.J.V.-A. and C.P.-M.; Visualization, F.J.V.-A. and C.P.-M.; Writing—original draft, F.J.V.-A. and C.P.-M.; Writing—review & editing, F.J.V.-A. and C.P.-M.

Funding

This research was funded by the Spanish Government-MinECo projects TEC2014-53390-P and TEC2017-84395-P.

Abbreviations

PCA   Principal Component Analysis
ICA   Independent Component Analysis
CMET  Channel Multivariate Entropy Triangle
CBET  Channel Bivariate Entropy Triangle
SMET  Source Multivariate Entropy Triangle

References

  1. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016.
  2. Shwartz-Ziv, R.; Tishby, N. Opening the Black Box of Deep Neural Networks via Information. arXiv 2017, arXiv:1703.00810v3.
  3. Tishby, N.; Zaslavsky, N. Deep Learning and the Information Bottleneck Principle. In Proceedings of the IEEE 2015 Information Theory Workshop, San Diego, CA, USA, 1–6 February 2015.
  4. Valverde-Albacete, F.J.; Peláez-Moreno, C. 100% classification accuracy considered harmful: The normalized information transfer factor explains the accuracy paradox. PLOS ONE 2014.
  5. Valverde-Albacete, F.J.; Peláez-Moreno, C. The Evaluation of Data Sources using Multivariate Entropy Tools. Expert Syst. Appl. 2017, 78, 145–157.
  6. Valverde-Albacete, F.J.; Peláez-Moreno, C. Two information-theoretic tools to assess the performance of multi-class classifiers. Pattern Recognit. Lett. 2010, 31, 1665–1671.
  7. Yeung, R. A new outlook on Shannon’s information measures. IEEE Trans. Inf. Theory 1991, 37, 466–474.
  8. Reza, F.M. An Introduction to Information Theory; McGraw-Hill Electrical and Electronic Engineering Series; McGraw-Hill Book Co., Inc.: New York, NY, USA; Toronto, ON, Canada; London, UK, 1961.
  9. MacKay, D.J.C. Information Theory, Inference and Learning Algorithms; Cambridge University Press: Cambridge, UK, 2003.
  10. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, XXVII, 379–423, 623–656.
  11. Meila, M. Comparing clusterings—An information based distance. J. Multivar. Anal. 2007, 28, 875–893.
  12. Pawlowsky-Glahn, V.; Egozcue, J.J.; Tolosana-Delgado, R. Modeling and Analysis of Compositional Data; John Wiley & Sons: Chichester, UK, 2015.
  13. Valverde-Albacete, F.J.; de Albornoz, J.C.; Peláez-Moreno, C. A Proposal for New Evaluation Metrics and Result Visualization Technique for Sentiment Analysis Tasks. In CLEF 2013: Information Access Evaluation. Multilinguality, Multimodality and Visualization; Forner, P., Müller, H., Paredes, R., Rosso, P., Stein, B., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8138, pp. 41–52.
  14. Timme, N.; Alford, W.; Flecker, B.; Beggs, J.M. Synergy, redundancy, and multivariate information measures: An experimentalist’s perspective. J. Comput. Neurosci. 2014, 36, 119–140.
  15. James, R.G.; Ellison, C.J.; Crutchfield, J.P. Anatomy of a bit: Information in a time series observation. Chaos 2011, 21, 037109.
  16. Watanabe, S. Information theoretical analysis of multivariate correlation. J. Res. Dev. 1960, 4, 66–82.
  17. Tononi, G.; Sporns, O.; Edelman, G.M. A measure for brain complexity: Relating functional segregation and integration in the nervous system. Proc. Natl. Acad. Sci. USA 1994, 91, 5033–5037.
  18. Studený, M.; Vejnarová, J. The Multiinformation Function as a Tool for Measuring Stochastic Dependence. In Learning in Graphical Models; Springer: Dordrecht, The Netherlands, 1998; pp. 261–297.
  19. Han, T.S. Nonnegative entropy measures of multivariate symmetric correlations. Inf. Control 1978, 36, 133–156.
  20. Abdallah, S.A.; Plumbley, M.D. A measure of statistical complexity based on predictive information with application to finite spin systems. Phys. Lett. A 2012, 376, 275–281.
  21. Tononi, G. Complexity and coherency: Integrating information in the brain. Trends Cognit. Sci. 1998, 2, 474–484.
  22. McGill, W.J. Multivariate information transmission. Psychometrika 1954, 19, 97–116.
  23. Sun Han, T. Multiple mutual informations and multiple interactions in frequency data. Inf. Control 1980, 46, 26–45.
  24. Bell, A. The co-information lattice. In Proceedings of the Fifth International Workshop on Independent Component Analysis and Blind Signal Separation, Nara, Japan, 1–4 April 2003.
  25. Abdallah, S.A.; Plumbley, M.D. Predictive Information, Multiinformation and Binding Information; Technical Report C4DM-TR10-10; Queen Mary, University of London: London, UK, 2010.
  26. Valverde Albacete, F.J.; Peláez-Moreno, C. The Multivariate Entropy Triangle and Applications. In Hybrid Artificial Intelligence Systems (HAIS 2016); Springer: Seville, Spain, 2016; pp. 647–658.
  27. Witten, I.H.; Eibe, F.; Hall, M.A. Data Mining. Practical Machine Learning Tools and Techniques, 3rd ed.; Morgan Kaufmann: Burlington, MA, USA, 2011.
  28. Pearson, K. On Lines and Planes of Closest Fit to Systems of Points in Space. Philos. Mag. 1901, 559–572.
  29. Bell, A.J.; Sejnowski, T.J. An Information-Maximization Approach to Blind Separation and Blind Deconvolution. Neural Comput. 1995, 7, 1129–1159.
  30. Hyvärinen, A.; Oja, E. Independent component analysis: Algorithms and applications. IEEE Trans. Neural Netw. 2000, 13, 411–430.
  31. Bache, K.; Lichman, M. UCI Machine Learning Repository; 2013.
  32. Gu, G.; Fogla, P.; Dagon, D.; Lee, W.; Skorić, B. Measuring Intrusion Detection Capability: An Information-theoretic Approach. In Proceedings of the 2006 ACM Symposium on Information, Computer and Communications Security (ASIACCS ’06), Taipei, Taiwan, 21–23 March 2006; ACM: New York, NY, USA, 2006; pp. 90–101.
  33. James, G.R.; Crutchfield, P.J. Multivariate Dependence beyond Shannon Information. Entropy 2017, 19, 531–545.
  34. Jizba, P.; Arimitsu, T. The world according to Rényi: Thermodynamics of multifractal systems. Ann. Phys. 2004, 312, 17–59.
Figure 1. Different views of a supervised classification task as an information channel: (a) as individualized blocks; (b) for end-to-end evaluation; and (c) focused on the transformation.
Figure 2. Extended entropy diagram related to a bivariate distribution, from [4].
Figure 3. Schematic CBET as applied to supervised classifier assessment. An actual triangle shows dots for each classifier (or its split coordinates see Figure 6 for example) and none of the callouts for specific types of classifiers (from [4]). The callouts situated in the center of the sides of the triangle apply to the whole side.
Figure 4. (Color online) Extended entropy diagram of multivariate distributions for (a) a trivariate distribution (from [5]) as an instance of Situation 1; and (b) a joint distribution where a partitioning of the variables is made evident (Situation 2). The color scheme follows that of Figure 2, to be explained in the text.
Figure 5. Schematic Channel Multivariate Entropy Triangles (CMET) showing interpretable zones and extreme cases using formal conditions. The annotations on the center of each side are meant to hold for that whole side, those for the vertices are meant to hold in their immediate neighborhood too.
Figure 6. (Color online) Split CMET exploration of feature transformation and selection with PCA (left) and ICA (right) on Iris, Glass and Arthritis when selecting the first n ranked features as obtained for each method. The colors of the axes have been selected to match those of Figure 4.
Figure 7. (Color online) Comparison of PCA and ICA as data transformations using the CMET on Iris, Glass and Arthritis. Note that these are the same positions represented as inverted triangles in Figure 6a,b.
Table 1. Datasets analyzed.
     Name          k   m    n
  1  Ionosphere    2  34  351
  2  Iris          3   4  150
  3  Glass         7   9  214
  4  Arthritis     3   3   84
  5  BreastCancer  2   9  699
  6  Sonar         2  60  208
  7  Wine          3  13  178
