Article

The Stochastic Complexity of Spin Models: Are Pairwise Models Really Simple?

by Alberto Beretta 1, Claudia Battistin 2,*, Clélia De Mulatier 1,3, Iacopo Mastromatteo 4 and Matteo Marsili 1,5

1 The Abdus Salam International Centre for Theoretical Physics (ICTP), Strada Costiera 11, I-34014 Trieste, Italy
2 Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norges Teknisk-Naturvitenskapelige Universitet (NTNU), Olav Kyrres Gate 9, 7030 Trondheim, Norway
3 Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104-6396, USA
4 Capital Fund Management, 23 rue de l’Université, 75007 Paris, France
5 Istituto Nazionale di Fisica Nucleare (INFN) Sezione di Trieste, 34100 Trieste, Italy
* Author to whom correspondence should be addressed.
Entropy 2018, 20(10), 739; https://doi.org/10.3390/e20100739
Submission received: 10 August 2018 / Revised: 18 September 2018 / Accepted: 18 September 2018 / Published: 27 September 2018
(This article belongs to the Section Complexity)

Abstract: Models can be simple for different reasons: because they yield a simple and computationally efficient interpretation of a generic dataset (e.g., in terms of pairwise dependencies), as in statistical learning, or because they capture the laws of a specific phenomenon, as in physics, leading to non-trivial falsifiable predictions. In information theory, the simplicity of a model is quantified by the stochastic complexity, which measures the number of bits needed to encode its parameters. In order to understand what simple models look like, we study the stochastic complexity of spin models with interactions of arbitrary order. We show that bijections within the space of possible interactions preserve the stochastic complexity, which allows us to partition the space of all models into equivalence classes. We find that the simplicity of a model is determined not by the order of its interactions, but by their mutual arrangement. Models in which statistical dependencies are localized on non-overlapping groups of few variables are simple, affording predictions on independencies that are easy to falsify. On the contrary, fully connected pairwise models, which are often used in statistical learning, appear to be highly complex because of their extended set of interactions, and they are hard to falsify.

1. Introduction

Science, as the endeavour of reducing complex phenomena to simple principles and models, has been instrumental in solving practical problems. Yet, problems such as image or speech recognition and language translation have shown that Big Data can solve problems without necessarily understanding them [1,2,3]. A statistical model trained on a sufficiently large number of instances can learn to mimic the performance of the human brain on these tasks [4,5]. These models are simple in the sense that they are easy to evaluate, train, and infer. They offer simple interpretations in terms of low order (typically pairwise) dependencies, which in turn afford an explicit graph-theoretical representation [6]. Their aim is not to uncover fundamental laws but to “generalize well”, i.e., to describe yet-unseen data well. For this reason, machine learning relies on “universal” models that are apt to describe any possible data on which they can be trained [7], using suitable “regularization” schemes to tame parameter fluctuations (overfitting) and achieve a small generalization error [8]. Scientific models, instead, are the simplest possible descriptions of experimental results. A physical model is a representation of a real system, and its structure reflects the laws and symmetries of Nature. It predicts well not because it generalizes well, but because it captures essential features of the specific phenomena it describes. It should depend on few parameters and is designed to provide predictions that are easy to falsify [9]. For example, Newton’s laws of motion are consistent with momentum conservation, a fact that can be checked in scattering experiments.
The intuitive notion of a “simple model” hints at a succinct description, one that requires few bits [10]. The stochastic complexity [11], derived within Minimum Description Length (MDL) [12,13], provides a quantitative measure for “counting” the complexity of models in bits. The question this paper addresses is: what are the features of simple models according to MDL, and are they simple in the sense surmised in statistical learning or in physics? In particular, are models with up to pairwise interactions, which are frequently used in statistical learning, simple?
We address this issue in the context of spin models, which describe the statistical dependence among $n$ binary variables. There has been a surge of recent interest in the inference of spin models [14] from high dimensional data, most of which has been limited to pairwise models. This is partly because pairwise models allow for an intuitive graph representation of statistical dependencies. Most importantly, since the number of $k$-variable interactions grows as $\binom{n}{k}$, the number of samples is hardly ever sufficient to go beyond $k = 2$. For this reason, efforts to go beyond pairwise interactions have mostly focused on low order interactions (e.g., $k = 3$; see [15] and references therein). Reference [16] recently suggested that even for data generated by models with higher order interactions, pairwise models may provide a sufficiently accurate description of the data. Within the class of pairwise models, $L_1$ regularization [17] has proven to be a remarkably efficient heuristic for model selection (but see also [18]).
Here we focus on the exponential family of spin models with interactions of arbitrary order. This class of models assumes a sharp separation between relevant observables, the expected values of which are reproduced by the model, and irrelevant ones. In this setting, the stochastic complexity [11] computed within MDL coincides with the penalty that, in Bayesian model selection, accounts for the model’s complexity under non-informative (Jeffreys’) priors [19].

1.1. The Exponential Family of Spin Models (With Interactions of Arbitrary Order)

Consider $n$ spin variables, $\mathbf{s} = (s_1, \ldots, s_n)$, taking values $s_i = \pm 1$, and the set of all product spin operators, $\phi^\mu(\mathbf{s}) = \prod_{i \in \mu} s_i$, where $\mu$ is any subset of the indices $\{1, \ldots, n\}$. Each operator $\phi^\mu(\mathbf{s})$ models the interaction that involves all the spins in the subset $\mu$.
Definition 1.
The probability distribution of $\mathbf{s}$ under spin model $\mathcal{M}$ is defined as:

$$P(\mathbf{s} \mid \mathbf{g}, \mathcal{M}) = \frac{1}{Z_{\mathcal{M}}(\mathbf{g})} \, e^{\sum_{\mu \in \mathcal{M}} g^\mu \phi^\mu(\mathbf{s})}, \qquad (1)$$

with

$$Z_{\mathcal{M}}(\mathbf{g}) = \sum_{\mathbf{s}} e^{\sum_{\mu \in \mathcal{M}} g^\mu \phi^\mu(\mathbf{s})} \qquad (2)$$

being the partition function, which ensures normalisation. The model $\mathcal{M}$ is identified by the set $\{\phi^\mu(\mathbf{s}), \ \mu \in \mathcal{M}\}$ of product spin operators $\phi^\mu(\mathbf{s})$ that it contains.
Note that, under this definition, we consider interactions of arbitrary order (see Section SM-0 of the Supplementary Material). For instance, for pairwise interaction models, the operators $\phi^\mu(\mathbf{s})$ are single spins $s_i$ or products of two spins $s_i s_j$, with $i, j \in \{1, \ldots, n\}$. The $g^\mu$ are the conjugate parameters [20] that modulate the strength of the interaction associated with $\phi^\mu$.
We remark that the model defined in Equation (1) belongs to the exponential family of spin models. In other words, it can be derived as the maximum entropy distribution consistent with the requirement that the model reproduces the empirical averages of the operators $\phi^\mu(\mathbf{s})$, for all $\mu \in \mathcal{M}$, on a given dataset [21,22]. The empirical averages of the $\phi^\mu(\mathbf{s})$ are therefore sufficient statistics, i.e., their values are enough to compute the maximum likelihood parameters $\hat{\mathbf{g}}$. The choice of the operators $\phi^\mu$ in $\mathcal{M}$ thus inherently entails a sharp separation between relevant variables (the sufficient statistics) and irrelevant ones, which may have important consequences in the inference process. For example, if statistical inference assumes pairwise interactions, it might be blind to relevant patterns in the data resulting from higher order interactions. Without prior knowledge, all models $\mathcal{M}$ should be compared. According to MDL and Bayesian model selection (see Section SM-0 of the Supplementary Material), models should be compared on the basis of their maximum (log)likelihood penalized by their complexity. In other words, simple models should be preferred a priori.
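For concreteness, the following sketch (our illustration, not code from the paper) evaluates Equation (1) for a small model by brute-force enumeration of the $2^n$ states; the helper names are ours.

```python
# A minimal sketch of Equation (1) for small n (brute-force enumeration).
# An interaction mu is a frozenset of spin indices; phi^mu(s) is the
# product of the spins it contains.
import itertools
import numpy as np

def phi(mu, s):
    """Product spin operator phi^mu(s) = prod_{i in mu} s_i."""
    p = 1
    for i in mu:
        p *= s[i]
    return p

def distribution(model, g, n):
    """P(s | g, M) over all 2^n states, together with Z_M(g)."""
    states = list(itertools.product([-1, 1], repeat=n))
    weights = np.array([np.exp(sum(g[mu] * phi(mu, s) for mu in model))
                        for s in states])
    Z = weights.sum()                      # partition function, Equation (2)
    return states, weights / Z

# Example: n = 3 spins, two fields and one pairwise coupling.
model = [frozenset({0}), frozenset({1}), frozenset({0, 1})]
g = {model[0]: 0.5, model[1]: -0.3, model[2]: 1.0}
states, P = distribution(model, g, n=3)
# The sufficient statistics are the model averages <phi^mu>:
for mu in model:
    print(sorted(mu), round(sum(p * phi(mu, s) for s, p in zip(states, P)), 4))
```

For larger $n$ the enumeration is of course infeasible; the sketch is only meant to make the notation concrete.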

1.2. Stochastic Complexity

The complexity of a model can be defined unambiguously within MDL as the number of bits needed to specify a priori the parameters $\hat{\mathbf{g}}$ that best describe a dataset $\hat{\mathbf{s}} = (\mathbf{s}^{(1)}, \ldots, \mathbf{s}^{(N)})$ consisting of $N$ samples independently drawn from the distribution $P(\mathbf{s} \mid \mathbf{g}, \mathcal{M})$ for some unknown $\mathbf{g}$ (see Section SM-0 of the Supplementary Material). Asymptotically for $N \to \infty$, for systems of discrete variables, the MDL complexity is given by [23,24]:

$$\log \sum_{\hat{\mathbf{s}}} P(\hat{\mathbf{s}} \mid \hat{\mathbf{g}}, \mathcal{M}) \simeq \frac{|\mathcal{M}|}{2} \log \frac{N}{2\pi} + c_{\mathcal{M}}. \qquad (3)$$
The first term on the right-hand side, which is the basis of the Bayesian Information Criterion (BIC) [25,26], captures the increase of the complexity with the number $|\mathcal{M}|$ of model parameters and with the number $N$ of data points. This accounts for the fact that the uncertainty in each parameter $\hat{g}^\mu$ decreases with $N$ as $N^{-1/2}$, so its description requires $\frac{1}{2} \log N$ bits. The second term, $c_{\mathcal{M}}$, quantifies the statistical dependencies between the parameters and encodes the intrinsic notion of simplicity we are interested in. The sum of these two terms, on the right-hand side of (3), is generally referred to as the stochastic complexity [11,26]. However, to distinguish the two, we will refer to $c_{\mathcal{M}}$ as the stochastic complexity and to the first term as the BIC term.
Definition 2.
For models of the exponential family, the stochastic complexity $c_{\mathcal{M}}$ in Equation (3) is given by [23,24]

$$c_{\mathcal{M}} = \log \int d\mathbf{g} \, \sqrt{\det J(\mathbf{g})}, \qquad (4)$$

where $J(\mathbf{g})$ is the Fisher Information matrix with entries

$$J_{\mu\nu}(\mathbf{g}) = \frac{\partial^2}{\partial g^\mu \, \partial g^\nu} \log Z_{\mathcal{M}}(\mathbf{g}). \qquad (5)$$
For the exponential family of models, the MDL criterion (3) coincides with the Bayesian model selection approach, assuming Jeffreys’ prior over the parameters $\mathbf{g}$ [26,27,28] (see Section SM-0 of the Supplementary Material). Notice, however, that we take $c_{\mathcal{M}}$ as an information theoretic measure of model complexity, and abstain from entering into the debate on whether Jeffreys’ prior is an adequate choice in Bayesian inference (see, e.g., [29]).
Within a fully Bayesian approach, the model that maximises its posterior $P(\mathcal{M} \mid \hat{\mathbf{s}})$ given the data $\hat{\mathbf{s}}$ is the one to be selected. Therefore, if two models have the same number of parameters (the same BIC term), the simpler one, i.e., the one with the lower stochastic complexity $c_{\mathcal{M}}$, should be chosen a priori. However, the number of possible interactions $\phi^\mu$ among $n$ spins is $2^n - 1$, and therefore the number of spin models is $2^{2^n - 1}$. The super-exponential growth of the number of models with the number of spins $n$ makes selecting the best model unfeasible even for moderate $n$. Our aim is then to understand how the stochastic complexity depends on the structure of the model $\mathcal{M}$, and eventually to provide guidelines for the search for simpler models in such a huge space.
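To make Equations (4) and (5) concrete, here is a small numerical sketch (ours, assuming exact enumeration is feasible). The Fisher matrix is the covariance matrix of the operators under $P(\mathbf{s} \mid \mathbf{g}, \mathcal{M})$, and the integral in (4) is estimated on a truncated grid, which is legitimate because $\sqrt{\det J}$ decays exponentially at large $\|\mathbf{g}\|$.

```python
# A sketch of (4) and (5) by exact enumeration (small n only).
import itertools
import numpy as np

def operator_matrix(model, n):
    """Rows: states s; columns: phi^mu(s) for mu in model."""
    states = np.array(list(itertools.product([-1, 1], repeat=n)))
    return np.array([[np.prod(s[list(mu)]) for mu in model] for s in states])

def fisher(Phi, g):
    """J_{mu,nu}(g) = <phi^mu phi^nu> - <phi^mu><phi^nu>, Equation (5)."""
    w = np.exp(Phi @ g)
    p = w / w.sum()
    mean = p @ Phi
    return (Phi * p[:, None]).T @ Phi - np.outer(mean, mean)

# Loop-free model {s_0, s_0 s_1} on n = 2 spins.
model = [(0,), (0, 1)]
Phi = operator_matrix(model, n=2)
gs = np.linspace(-10, 10, 401)            # truncation box, step 0.05
dg = gs[1] - gs[0]
total = sum(np.sqrt(np.linalg.det(fisher(Phi, np.array([g1, g2]))))
            for g1 in gs for g2 in gs) * dg**2
print(np.log(total), 2 * np.log(np.pi))   # both ~ 2.289
```

For the loop-free model used here, Proposition 5 below predicts $c_{\mathcal{M}} = 2 \log \pi \approx 2.289$, which the grid estimate reproduces.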

2. Results

2.1. Gauge Transformations

Let us start by showing that low order interactions do not have a privileged status and are not necessarily related to low complexity $c_{\mathcal{M}}$, with the following argument: Alice is interested in finding which model $\mathcal{M}$ best describes a dataset $\hat{\mathbf{s}}$; Bob is interested in the same problem, but his dataset $\hat{\boldsymbol{\sigma}} = (\boldsymbol{\sigma}^{(1)}, \ldots, \boldsymbol{\sigma}^{(N)})$ is related to Alice’s dataset by a gauge transformation.
Definition 3.
We define a gauge transformation as a bijective transformation between $n$ spin variables $\mathbf{s} = (s_1, \ldots, s_n) \in \{\pm 1\}^n$ and $n$ spin variables $\boldsymbol{\sigma} = (\sigma_1, \ldots, \sigma_n) \in \{\pm 1\}^n$ that corresponds to a bijection of the set of all operators onto itself, $\phi^\mu(\mathbf{s}) \to \phi^{\mu'}(\boldsymbol{\sigma})$ (see Section SM-1 of the Supplementary Material). Gauge transformations preserve the structure of the set of all operators. For examples of gauge transformations, see Figure 1.
This gauge transformation induces a bijective transformation between Alice’s models and Bob’s, as shown in Figure 1, which preserves the number of interactions $|\mathcal{M}|$. Whatever conclusion Bob draws on the relative likelihood of models can be translated into Alice’s world, where it has to coincide with Alice’s result. It follows that two models, $\mathcal{M}$ and $\mathcal{M}'$, related by a gauge transformation must also have the same complexity, $c_{\mathcal{M}} = c_{\mathcal{M}'}$. In particular, pairwise interactions can be mapped to interactions of any order (see Figure 1); consequently, low order interactions are not necessarily simpler than higher order ones.
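As an illustration (our sketch, based on the change-of-basis picture detailed in Section SM-1): encode each operator $\phi^\mu$ by its exponent vector $v \in \{0,1\}^n$; a gauge transformation then acts as an invertible binary matrix $A$, mapping $v$ to $Av \bmod 2$. The particular $A$ below happens to map model (a) of Figure 1 onto model (d).

```python
# A sketch of a gauge transformation (our illustration): an operator is
# encoded by its exponent vector v in {0,1}^n, and a change of basis acts
# as an invertible matrix A over GF(2), sending v to A v (mod 2).
import numpy as np

def transform(model, A):
    return {tuple(A @ np.array(v) % 2) for v in model}

# Model (a) of Figure 1: all 7 operators supported on spins 1, 2, 3.
model_a = {(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0),     # s1, s2, s3
           (1, 1, 0, 0), (1, 0, 1, 0), (0, 1, 1, 0),     # s1s2, s1s3, s2s3
           (1, 1, 1, 0)}                                  # s1s2s3
A = np.array([[1, 0, 0, 0],      # unit lower-triangular, hence
              [1, 1, 0, 0],      # invertible over GF(2)
              [1, 0, 1, 0],
              [1, 1, 1, 1]])
image = transform(model_a, A)
print(len(image))                          # 7: the bijection preserves |M|
print(sorted(sum(v) for v in image))       # [2,2,2,2,2,2,4]: model (d)
```

The image consists of the six pairwise interactions plus the 4-spin interaction, i.e., the set of operators of model (d) in Figure 1, even though model (a) contains no interaction of order higher than three.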
Proposition 1.
Two models related by a gauge transformation have the same complexity.
Proposition 2.
The stochastic complexity $c_{\mathcal{M}}$ of a model is not determined by the order of its interactions. Models with low order interactions do not necessarily have a low complexity and, reciprocally, high order interactions do not necessarily imply a large complexity.
Observe that models connected by gauge transformations can have remarkably different structures. In Figure 1, model (a) has all possible interactions concentrated on 3 spins and has the structure of a simplicial complex [30,31]; however, its gauge-transformed counterparts are not simplicial complexes. Model (d) is invariant under any permutation of the four spins, whereas the other models have a lower degree of symmetry under permutations (see the different multiplicities in Figure 1).
Gauge transformations are discussed in more detail in Section SM-1 of the Supplementary Material. One can also see them as a change of the basis $\mathbf{s} \to \boldsymbol{\sigma}$ in which the operators are expressed. Counting the number of possible bases then gives the number of gauge transformations (see Section SM-1 of the Supplementary Material).
Proposition 3.
The total number of gauge transformations for a system of $n$ spin variables is:

$$N_{GT}(n) = 2^{n^2} \prod_{k=1}^{n} \left(1 - 2^{-k}\right). \qquad (6)$$
Notice that the number of gauge transformations (6) is much smaller than the number $(2^n)!$ of possible bijections of the set of $2^n$ states onto itself. Indeed, a generic bijection between the state spaces of $\mathbf{s}$ and $\boldsymbol{\sigma}$ maps each product operator to one of the binary functions $f: \boldsymbol{\sigma} \to \{+1, -1\}$, which does not necessarily correspond to a product operator $\phi^\mu(\boldsymbol{\sigma})$.
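A brute-force check of Proposition 3 (our sketch): in the change-of-basis picture, $N_{GT}(n)$ counts the invertible $n \times n$ matrices over GF(2), and a binary matrix is invertible over GF(2) exactly when its integer determinant is odd.

```python
# Brute-force count of invertible n x n binary matrices vs. Equation (6).
import itertools
import numpy as np

def count_gauge_transformations(n):
    count = 0
    for bits in itertools.product([0, 1], repeat=n * n):
        M = np.array(bits, dtype=np.int64).reshape(n, n)
        if round(np.linalg.det(M)) % 2 == 1:   # invertible over GF(2)
            count += 1
    return count

def formula(n):
    """Equation (6): N_GT(n) = 2^(n^2) * prod_{k=1..n} (1 - 2^-k)."""
    return round(2 ** (n * n) * np.prod([1 - 2.0 ** (-k) for k in range(1, n + 1)]))

for n in [1, 2, 3]:
    print(n, count_gauge_transformations(n), formula(n))
# n=1: 1, n=2: 6, n=3: 168 -- brute force matches Equation (6).
```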

2.2. Complexity Classes

Gauge transformations define equivalence relations, which partition the set of all models into equivalence classes. Models belonging to the same class are related to each other by a gauge transformation and thus, by Proposition 1, have the same complexity $c_{\mathcal{M}}$. This leads us to introduce the notion of complexity classes.
Definition 4.
A complexity class is an equivalence class of models defined by gauge transformations.
This classification suggests the presence of “quantum numbers” (invariants), in terms of which models can be classified. These invariants emerge explicitly when writing the cluster expansion of the partition function [32,33,34] (see Section SM-2 of the Supplementary Material):
$$Z_{\mathcal{M}}(\mathbf{g}) = 2^n \prod_{\mu \in \mathcal{M}} \cosh(g^\mu) \sum_{\ell \in \mathcal{L}} \prod_{\mu \in \ell} \tanh(g^\mu). \qquad (7)$$

The sum runs over the set $\mathcal{L}$ of all possible loops $\ell$ that can be formed with the operators $\mu \in \mathcal{M}$, including the empty loop $\ell = \emptyset$.
Definition 5.
A loop $\ell$ is any subset $\ell \subseteq \mathcal{M}$ such that $\prod_{\mu \in \ell} \phi^\mu(\mathbf{s}) = 1$ for any value of $\mathbf{s}$, i.e., such that each spin $s_i$ occurs zero times or an even number of times in this product.
The structure of $Z_{\mathcal{M}}(\mathbf{g})$ in (7) depends on a few characteristics of the model $\mathcal{M}$: the number $|\mathcal{M}|$ of operators (or, equivalently, of parameters) and the structure of its set of loops $\mathcal{L}$ (which operator is involved in which loop). The invariance of the complexity (4) under gauge transformations reveals itself in the fact that the partition functions of models related by a gauge transformation have the same functional dependence on their parameters, up to relabeling.
Let us focus on the loop structure of models belonging to the same class. The set $\mathcal{L}$ of loops of any model $\mathcal{M}$ has the structure of a finite Abelian group: if $\ell_1, \ell_2 \in \mathcal{L}$, then $\ell_1 \oplus \ell_2$ is also a loop of $\mathcal{M}$, where $\oplus$ is the symmetric difference [35] of two sets (see Section SM-3 of the Supplementary Material). As a consequence, for each model $\mathcal{M}$ one can identify a minimal generating set of $\lambda$ loops, such that any loop in $\mathcal{L}$ can be uniquely expressed as the symmetric difference of loops in the minimal generating set. Note that the choice of the generating set is not unique, though all choices have the same cardinality $\lambda$; Figure 2 gives examples of this decomposition for the models of Figure 1. Note also that $\ell \oplus \ell = \emptyset$ for each loop $\ell \in \mathcal{L}$. As a consequence, the cardinality of the loop group is $|\mathcal{L}| = 2^\lambda$ (including the empty loop $\emptyset$). We find that $\lambda$ is related to the number $|\mathcal{M}|$ of operators of the model by $\lambda = |\mathcal{M}| - n_{\mathcal{M}}$ (see Section SM-3 of the Supplementary Material), where $n_{\mathcal{M}}$ is the number of independent operators of a model $\mathcal{M}$, i.e., the maximal number of operators that can be taken in $\mathcal{M}$ without forming any loop. This implies that $\lambda$ attains its minimal value, $\lambda = 0$, for models with only independent operators ($|\mathcal{M}| = n_{\mathcal{M}}$), and its maximal value, $\lambda = 2^n - 1 - n$, for the complete model $\overline{\mathcal{M}}$, which contains all the $|\overline{\mathcal{M}}| = 2^n - 1$ possible operators. The number of independent operators $n_{\mathcal{M}}$ is preserved by gauge transformations, and, since the total number of operators $|\mathcal{M}|$ is also an invariant of the class, so is the cardinality $\lambda$ of the minimal generating set. For example, all models in Figure 1 have $n_{\mathcal{M}} = 3$ independent operators and $\lambda = 4$ (see Figure 2). It can also be shown that gauge transformations imply a duality relation that associates to each class of models with $|\mathcal{M}|$ operators a class of models with the $2^n - 1 - |\mathcal{M}|$ complementary operators (see Section SM-3 of the Supplementary Material). The sketch below checks these relations on model (a) of Figure 1.
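These group-theoretic relations are easy to verify numerically (our illustration): operators are bitmasks over the spins, a subset of $\mathcal{M}$ is a loop iff the XOR of its masks vanishes, $n_{\mathcal{M}}$ is the GF(2) rank of the masks, and the loop count matches $|\mathcal{L}| = 2^{\lambda}$ with $\lambda = |\mathcal{M}| - n_{\mathcal{M}}$.

```python
# Verifying lambda = |M| - n_M and |L| = 2^lambda on model (a) of Figure 1.
import itertools
from functools import reduce

def gf2_rank(masks):
    """Number of independent operators: rank over GF(2) (XOR elimination)."""
    basis = {}                       # leading bit -> reduced mask
    for m in masks:
        while m:
            lead = m.bit_length() - 1
            if lead not in basis:
                basis[lead] = m
                break
            m ^= basis[lead]
    return len(basis)

M = [0b0001, 0b0010, 0b0100,         # s1, s2, s3
     0b0011, 0b0101, 0b0110,         # s1s2, s1s3, s2s3
     0b0111]                         # s1s2s3
n_M = gf2_rank(M)
lam = len(M) - n_M
loops = [sub for r in range(len(M) + 1)
         for sub in itertools.combinations(M, r)
         if reduce(lambda a, b: a ^ b, sub, 0) == 0]
print(n_M, lam, len(loops), 2 ** lam)   # 3 4 16 16, as stated in the text
```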
To summarize, the distinctive features of a complexity class are given in the following Proposition.
Proposition 4.
The number of operators, $|\mathcal{M}|$, the number of independent operators, $n_{\mathcal{M}}$, and the structure of the set of loops, $\mathcal{L}$ (through its generators), fully characterize a complexity class.

3. Discussion: What Do Simple Models Look Like?

3.1. Fewer Independent Operators, Shorter Loops

Coming to the quantitative estimate of the complexity: $c_{\mathcal{M}}$ generally depends on the extent to which the ensemble averages of the operators $\phi^\mu(\mathbf{s})$, $\mu \in \mathcal{M}$, constrain each other. This appears explicitly when rewriting (4) as an integral over the ensemble averages of the operators, $\boldsymbol{\varphi} = \{\varphi^\mu = \langle \phi^\mu \rangle, \ \mu \in \mathcal{M}\}$, exploiting the bijection between the parameters $\mathbf{g}$ and their dual parameters $\boldsymbol{\varphi}$ and re-parameterization invariance [28,36]:

$$c_{\mathcal{M}} = \log \int_{F} d\boldsymbol{\varphi} \, \sqrt{\det J(\boldsymbol{\varphi})}, \qquad (8)$$
where $J(\boldsymbol{\varphi})$ is the Fisher Information Matrix in the $\boldsymbol{\varphi}$-coordinates. The new domain of integration, $F$, is the set of values of $\boldsymbol{\varphi}$ that can be realised in an empirical sample drawn from the model $\mathcal{M}$ (known in this context as the marginal polytope [37,38]) and reflects the mutual constraints between the ensemble averages $\varphi^\mu$ (see Section SM-4 of the Supplementary Material for more details).
Proposition 5.
The complexity of a model without loops, i.e., with $\mathcal{L} = \{\emptyset\}$ and $n_{\mathcal{M}} = |\mathcal{M}|$ independent operators, is $c_{\mathcal{M}} = |\mathcal{M}| \log \pi$.
Indeed, if the model contains no loop, then $J_{\mu\nu}(\boldsymbol{\varphi}) = [1 - (\varphi^\mu)^2]^{-1} \delta_{\mu\nu}$ is diagonal: the integral in (8) factorises and Proposition 5 follows. In this case, the variables $\varphi^\mu$ are not constrained at all and the domain of integration is $F = [-1, 1]^{|\mathcal{M}|}$. If instead the model contains loops, the variables $\varphi^\mu$ become constrained and the marginal polytope $F$ shrinks. For example, for a model with a single loop of length three (e.g., $\phi^1 = s_1$, $\phi^2 = s_2$, and $\phi^3 = s_1 s_2$), not all values of $\boldsymbol{\varphi}$ in $[-1, 1]^3$ are attainable: the domain reduces to $F = \{\boldsymbol{\varphi} \in [-1, 1]^3 : |\varphi^1 + \varphi^2| - 1 \leq \varphi^3 \leq 1 - |\varphi^1 - \varphi^2|\}$, which decreases the complexity.
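Both effects can be checked numerically (our sketch below): each unconstrained operator contributes a factor $\pi$ to the integral in (8), and a Monte Carlo estimate shows how much the length-3 loop shrinks the marginal polytope; by our own calculation the exact fraction is $1/3$ of the cube.

```python
# Checking Proposition 5 and the polytope reduction (our sketch).
import numpy as np
from scipy import integrate

# Each unconstrained operator contributes log(pi) to c_M:
factor, _ = integrate.quad(lambda x: 1.0 / np.sqrt(1.0 - x * x), -1, 1)
print(factor, np.pi)                       # ~3.14159 both

# Monte Carlo volume of F relative to the cube [-1,1]^3 for the
# length-3 loop (phi[:,0] = <s1>, phi[:,1] = <s2>, phi[:,2] = <s1 s2>):
rng = np.random.default_rng(0)
phi = rng.uniform(-1, 1, size=(10**6, 3))
inside = ((np.abs(phi[:, 0] + phi[:, 1]) - 1 <= phi[:, 2]) &
          (phi[:, 2] <= 1 - np.abs(phi[:, 0] - phi[:, 1])))
print(inside.mean())                       # ~0.333: the loop shrinks F
```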
Proposition 6.
The complexity of models with a fixed number of operators and a single (non-empty) loop increases with the length of the loop.
The complexity $c_{\mathcal{M}}(k)$ of models with a fixed number $|\mathcal{M}|$ of parameters and a single (non-empty) loop of length $k$ is shown in Figure 3 (see Section SM-6 of the Supplementary Material): $c_{\mathcal{M}}(k)$ increases with $k$ and saturates at $|\mathcal{M}| \log \pi$, the value one would expect if all operators were unconstrained. This is consistent with the expectation that longer loops induce weaker constraints among the operators. Note that the number of independent operators is kept constant here, equal to $n_{\mathcal{M}} = |\mathcal{M}| - 1$.
Proposition 7.
At fixed number of operators, the complexity of a model increases with the number of independent operators.
The single loop calculation allows computing the complexity of models with non-overlapping loops ($\ell \cap \ell' = \emptyset$ for all $\ell, \ell' \in \mathcal{L}$), for which $c_{\mathcal{M}} = \sum_{\ell \in \mathcal{L}} c_\ell$ is the sum of the complexities $c_\ell$ associated with each loop. In the general case of models with more complex loop structures, the explicit calculation of $c_{\mathcal{M}}$ is non-trivial. Yet, the argument above suggests that, at a fixed number of parameters $|\mathcal{M}|$, $c_{\mathcal{M}}$ should increase with the number $n_{\mathcal{M}}$ of independent operators. Figure 4 summarises the results for all models with $n = 4$ spins and supports this conclusion: for a given value of $|\mathcal{M}|$, classes with lower values of $n_{\mathcal{M}}$ (i.e., with fewer independent operators) are less complex.
Proposition 8.
At fixed number of operators, complete models on a subset of spins and their equivalents are the simplest models. We refer to these models as sub-complete models. Classes of sub-complete models are the classes of minimal complexity.
A surprising result of Figure 4 is that $c_{\mathcal{M}}$ is not monotonic in the number $|\mathcal{M}|$ of operators of the model, first increasing with $|\mathcal{M}|$ and then decreasing. Complete models $\overline{\mathcal{M}}$ turn out to be the simplest (see the dashed curve in Figure 4). As a consequence, for a given $|\mathcal{M}|$, models that contain a complete model on a subset of spins are generally simpler than models where the operators have support on all the spins. For instance, the complexity class displayed in Figure 1 is the class of models with $|\mathcal{M}| = 7$ operators that has the lowest complexity (see the green triangle on the dashed curve in Figure 4).
Figure 4 also confirms that pairwise models are not simpler than models with higher order interactions. For instance, for $|\mathcal{M}| = 7$, $c_{\mathcal{M}}$ increases drastically when model (a) of Figure 1 is changed into a pairwise model by turning the 3-spin interaction into an external field acting on $s_4$. Likewise, the model with all six pairwise interactions for $|\mathcal{M}| = 10$ is more complex than the one where one of them is turned into a 3-spin interaction.

3.2. Complete and Sub-Complete Models

It is possible to compute explicitly the complexity of a complete model $\overline{\mathcal{M}}$ with $n$ spins, using the mapping $g^\mu = 2^{-n} \sum_{\mathbf{s}} \phi^\mu(\mathbf{s}) \log p(\mathbf{s})$ between the $2^n - 1$ parameters $g^\mu$ of $\overline{\mathcal{M}}$ and the $2^n$ probabilities $p(\mathbf{s})$, constrained by their normalisation [39]. The complexity in (4) being invariant under reparametrisation [36], one can easily re-write the integral in terms of the variables $p(\mathbf{s})$. Finally, using $\det J(p) = \prod_{\mathbf{s}} 1/p(\mathbf{s})$, we obtain the following proposition (see Section SM-5 of the Supplementary Material).
Proposition 9.
The complexity of a complete model $\overline{\mathcal{M}}$ with $n$ spins is:

$$c_{\overline{\mathcal{M}}} = \log \int_0^1 d\mathbf{p} \, \delta\!\left(\sum_{\mathbf{s}} p(\mathbf{s}) - 1\right) \prod_{\mathbf{s}} \frac{1}{\sqrt{p(\mathbf{s})}} = 2^{n-1} \log \pi - \log \Gamma\!\left(2^{n-1}\right). \qquad (9)$$
Note that, for $n > 4$, $c_{\overline{\mathcal{M}}}$ becomes negative (for $n = 6$, $c_{\overline{\mathcal{M}}} \simeq -41.5$). This suggests that the class of least complex models with $|\mathcal{M}|$ interactions is the one containing the model where the maximal number of loops is concentrated on the smallest number of spins. This agrees with our previous observations on single loop models and sub-complete models. On the contrary, models where the interactions are distributed uniformly across the variables (e.g., models with only single spin operators, for $n \geq |\mathcal{M}|$, or with non-overlapping sets of loops) have a higher complexity.
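A direct numerical evaluation of the closed form in (9) (our sketch) reproduces the value quoted above for $n = 6$:

```python
# Evaluating Equation (9): complexity of the complete model, in nats.
import numpy as np
from scipy.special import gammaln

def c_complete(n):
    """c = 2^(n-1) log(pi) - log Gamma(2^(n-1))."""
    half = 2 ** (n - 1)
    return half * np.log(np.pi) - gammaln(half)

for n in range(2, 8):
    print(n, round(c_complete(n), 2))
# The complexity turns negative for n > 4; n = 6 gives ~ -41.5.
```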
We finally note that complete models are also extremely simple to infer from empirical data. Indeed, the maximum likelihood estimates of the parameters are given in closed form by $\hat{g}^\mu = 2^{-n} \sum_{\mathbf{s}} \phi^\mu(\mathbf{s}) \log \hat{p}(\mathbf{s})$, where $\hat{p}(\mathbf{s})$ is the empirical distribution. By contrast, learning the parameters of pairwise interacting models can be a daunting task [14].
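As an illustration of this closed-form fit (our sketch; the pseudocount that regularises unseen states is our own addition, needed to keep $\log \hat{p}(\mathbf{s})$ finite for finite samples):

```python
# Closed-form maximum likelihood fit of a complete model:
# g_mu = 2^(-n) sum_s phi^mu(s) log p_hat(s).
import itertools
import numpy as np

def fit_complete(samples, n, pseudocount=0.5):
    states = list(itertools.product([-1, 1], repeat=n))
    counts = {s: pseudocount for s in states}    # regularise unseen states
    for s in samples:
        counts[tuple(s)] += 1
    total = sum(counts.values())
    logp = {s: np.log(c / total) for s, c in counts.items()}
    g = {}
    for mu in itertools.chain.from_iterable(     # all 2^n - 1 operators
            itertools.combinations(range(n), r) for r in range(1, n + 1)):
        g[mu] = 2 ** (-n) * sum(np.prod([s[i] for i in mu]) * logp[s]
                                for s in states)
    return g

# Usage: two spins, strongly correlated samples -> large positive g_(0,1).
samples = [(1, 1)] * 40 + [(-1, -1)] * 40 + [(1, -1)] * 10 + [(-1, 1)] * 10
print({mu: round(v, 3) for mu, v in fit_complete(samples, 2).items()})
```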

3.3. Maximally Overlapping Loops

This finally leads us to conjecture that the stochastic complexity is related to the localization properties of the set of loops $\mathcal{L}$ (i.e., to its group structure) rather than to the order of the interactions: models whose loops $\ell, \ell' \in \mathcal{L}$ have a “large” overlap $\ell \cap \ell'$ are simple, whereas models with an extended homogeneous network of interactions (e.g., fully connected Ising models with up to pairwise interactions) have many non-overlapping loops, $\ell \cap \ell' = \emptyset$, and are therefore rather complex. It is interesting to note that the former (simple models) lend themselves to predictions on the independence of different groups of spins. These predictions suggest “fundamental” properties of the system under study (e.g., invariance properties or spin permutation symmetry breaking) and are easy to falsify (i.e., it is clear how to devise a statistical test for these hypotheses at any given confidence level). On the contrary, complex models (e.g., fully connected pairwise Ising models) are harder to falsify, as their parameters can be adjusted to fit reasonably well any sample, irrespective of the system under study.

3.4. Summary

We find that, at a fixed number $|\mathcal{M}|$ of operators, simpler models are those with fewer independent operators (i.e., smaller $n_{\mathcal{M}}$). For the same value of $n_{\mathcal{M}}$, models can still have different complexities: the simpler ones are then those with a loop structure that imposes the most constraints between the operators of the model. More generally, we show that the complexity of a model is not determined by the order of the interactions involved but is, instead, intimately connected to its internal geometry, i.e., to how interactions are arranged in the model. The geometry of this arrangement implies mutual dependencies between interactions that constrain the states accessible to the system. More complex models are those that implement fewer constraints and can thus account for broader types of data. This result is consistent with the information geometric approach of Reference [26], which studies model complexity in terms of the geometry of the space of probability distributions [40]. The contribution of this paper is to clarify the relation between the information geometric point of view and the specific structure of the model, i.e., the actual arrangement of its interactions. We remark that these results apply to non-degenerate spin models. In the broader class of degenerate models [20], the arrangement of the operators and the degeneracy of the parameters may interact non-trivially in terms of complexity. Our preliminary numerical investigation of single loop models (see Section SM-7 of the Supplementary Material) indicates that degeneracy decreases complexity by constraining the parameters and therefore the statistics of the operators. Reference [41] also discusses model selection on mixture models of spin variables and shows that they can be cast in the form of Equation (1), with the $g^\mu$ subject to linear constraints. For these models, Bayesian model selection can be performed exactly, without resorting to the approach discussed here.
A rough estimate of the number $N$ of data samples beyond which the complexity term becomes negligible in Bayesian inference can be obtained with the following argument. An upper bound for the complexity of models with $n$ spins and $m$ parameters is given by $m \log \pi$, i.e., the value attained when all operators are independent. As a lower bound, we take Equation (9) with $m = 2^n - 1$. This implies that an upper bound for the variation of the complexity is given by $\Delta c = \frac{m-1}{2} \log \pi + \log \Gamma\!\left(\frac{m+1}{2}\right)$. When this is much smaller than the BIC term, the stochastic complexity can be neglected. For large $m$ this implies $N \gg m$, which may be relevant for the applicability of fully connected pairwise models ($m \simeq n^2/2$) in typical cases, for instance when samples cannot be considered independent observations from a stationary distribution (see [18]).
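The sketch below (ours; the choice $n = 50$ and the sample sizes are arbitrary illustrations) puts numbers on this bound for a fully connected pairwise model:

```python
# Comparing the complexity spread Delta_c with the BIC term.
import numpy as np
from scipy.special import gammaln

def delta_c(m):
    """Upper bound (m-1)/2 log(pi) + log Gamma((m+1)/2) on the spread of c_M."""
    return (m - 1) / 2 * np.log(np.pi) + gammaln((m + 1) / 2)

def bic_term(m, N):
    return m / 2 * np.log(N / (2 * np.pi))

n = 50                       # e.g., 50 spins
m = n * (n + 1) // 2         # fields plus pairwise couplings
for N in [10**3, 10**4, 10**5]:
    print(N, round(delta_c(m)), round(bic_term(m, N)))
# Delta_c remains comparable to the BIC term until N >> m, so the
# stochastic complexity cannot be neglected in the undersampled regime.
```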

4. Conclusions

As pointed out by Wigner [42] long ago, the unreasonable effectiveness of mathematical models relies on isolating phenomena that depend on few variables, whose mutual variation is described by simple models and is independent of everything else. Remarkably, we find that, for a fixed number of spin variables and parameters, simple models according to MDL are precisely of this form: statistical dependencies are concentrated on the smallest subset of variables, and these are independent of all the rest. Such simple models are not optimal for generalizing, i.e., for describing generic statistical dependencies; rather, they are easy to falsify. They are designed for spotting independencies that may hint at deeper principles (e.g., symmetries or conservation laws) and may “take us beyond the data” [43], in the sense that they suggest hypotheses that can be tested in future experiments. On the contrary, fully connected pairwise models, which provide a simple interpretation of statistical dependencies in terms of direct interactions, appear to be rather complex. This, we conjecture, is the origin of the pairwise sufficiency [16] that makes them so successful in describing a wide variety of data, from neural tissues [44] to voting behaviour [45].
Our results, however, show that any model that can be obtained from a pairwise model via a gauge transformation has the same complexity and hence the same generalisation power, but has higher order interactions. Hence, gauge transformations can be used to compare pairwise models with models in the same complexity class, in order to quantitatively assess when a dataset is genuinely described by pairwise interactions. Notice that this comparison can be done directly on the basis of their maximum likelihood alone.
This is only one of the possible applications of gauge transformations, which are one of the main results of this paper. In loose words, these transformations preserve the topology of models, i.e., the manner in which interactions are mutually arranged, but change the “basis” of the operators that embody these interactions. Besides model selection within the same complexity class, as in the example of pairwise models above, we can think of selecting the appropriate complexity class according to the availability of data. One particularly interesting avenue of future research is to perform model selection among models of minimal complexity (i.e., sub-complete models). Model comparison between and within these classes may be relevant given the high degree of clustering and modularity in neural, social, metabolic, and regulatory networks [46]. These models offer the simplest possible explanation of a dataset, not necessarily the most accurate one (e.g., in terms of generalisation error), but the one that can potentially reveal regularities and symmetries in the data. Interestingly, parameter inference for these models is also remarkably simple.
In conclusion, when data are scarce and high dimensional, Bayesian inference should privilege simple models, i.e., those with a small stochastic complexity, over more complex ones, such as the fully connected pairwise models that are often used [14,44,45]. A full Bayesian model selection approach is hampered by the calculation of the stochastic complexity, which is a daunting task. Developing approximate heuristics for this task is a challenging avenue of future research.

Supplementary Materials

The Supplementary Material is available online at https://www.mdpi.com/1099-4300/20/10/739/s1.

Author Contributions

Conceptualization, I.M. and M.M.; Formal analysis, A.B., C.B. and C.d.M.; Methodology, A.B., C.B. and C.d.M.; Software, A.B.; Writing-original draft, C.B., C.d.M. and M.M.

Funding

C.B. acknowledges financial support from the Kavli Foundation and the Norwegian Research Council’s Centre of Excellence scheme (Centre for Neural Computation, grant number 223262). A.B. acknowledges financial support from International School for Advanced Studies (SISSA).

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References and Notes

  1. Mayer-Schonberger, V.; Cukier, K. Big Data: A Revolution That Will Transform How We Live, Work and Think; John Murray Publishers: London, UK, 2013. [Google Scholar]
  2. Anderson, C. The End of Theory: The Data Deluge Makes the Scientific Method Obsolete, 2008. Wired. Available online: https://www.wired.com/2008/06/pb-theory/ (accessed on 20 September 2018).
  3. Cristianini, N. Are we there yet? Neural Netw. 2010, 23, 466–470. [Google Scholar] [CrossRef] [PubMed]
  4. LeCun, Y.; Kavukcuoglu, K.; Farabet, C. Convolutional networks and applications in vision. In Proceedings of the 2010 IEEE International Symposium on Circuits and Systems, Paris, France, 30 May–2 June 2010; pp. 253–256. [Google Scholar]
  5. Hannun, A.; Case, C.; Casper, J.; Catanzaro, B.; Diamos, G.; Elsen, E.; Prenge, R.; Satheesh, S.; Sengupta, S.; Coates, A.; Ng, A. Deep Speech: Scaling up end-to-end speech recognition. arXiv, 2014; arXiv:cs.CL/1412.5567. [Google Scholar]
  6. Bishop, C. Pattern Recognition and Machine Learning; (Information Science and Statistics); Springer-Verlag: New York, NY, USA, 2006. [Google Scholar]
  7. Wu, X.; Kumar, V.; Quinlan, J.R.; Ghosh, J.; Yang, Q.; Motoda, H.; McLachlan, G.J.; Ng, A.; Liu, B.; Yu, P.S.; et al. Top 10 algorithms in data mining. Knowl. Inf. Syst. 2008, 14, 1–37. [Google Scholar] [CrossRef]
  8. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; Available online: http://www.deeplearningbook.org (accessed on 20 September 2018).
  9. Popper, K. The Logic of Scientific Discovery (Routledge Classics); Taylor & Francis: London, UK, 2002. [Google Scholar]
  10. Chater, N.; Vitányi, P. Simplicity: A unifying principle in cognitive science? Trends Cogn. Sci. 2003, 7, 19–22. [Google Scholar] [CrossRef]
  11. Rissanen, J. Stochastic complexity in learning. J. Comput. Syst. Sci. 1997, 55, 89–95. [Google Scholar] [CrossRef]
  12. Rissanen, J. Modeling by shortest data description. Automatica 1978, 14, 465–471. [Google Scholar] [CrossRef]
  13. Grünwald, P. The Minimum Description Length Principle; (Adaptive Computation and Machine Learning); MIT Press: Cambridge, MA, USA, 2007. [Google Scholar]
  14. Chau Nguyen, H.; Zecchina, R.; Berg, J. Inverse statistical problems: From the inverse Ising problem to data science. arXiv, 2017; arXiv:cond-mat.dis-nn/1702.01522. [Google Scholar]
  15. Margolin, A.; Wang, K.; Califano, A.; Nemenman, I. Multivariate dependence and genetic networks inference. IET Syst. Biol. 2010, 4, 428–440. [Google Scholar] [CrossRef] [PubMed]
  16. Merchan, L.; Nemenman, I. On the Sufficiency of Pairwise Interactions in Maximum Entropy Models of Networks. J. Stat. Phys. 2016, 162, 1294–1308. [Google Scholar] [CrossRef] [Green Version]
  17. Ravikumar, P.; Wainwright, M.J.; Lafferty, J.D. High-dimensional Ising model selection using ℓ1-regularized logistic regression. Ann. Stat. 2010, 38, 1287–1319. [Google Scholar] [CrossRef]
  18. Bulso, N.; Marsili, M.; Roudi, Y. Sparse model selection in the highly under-sampled regime. J. Stat. Mech. Theor. Exp. 2016, 2016, 093404. [Google Scholar] [CrossRef] [Green Version]
  19. Balasubramanian, V. Statistical inference, Occam’s razor, and statistical mechanics on the space of probability distributions. Neural Comput. 1997, 9, 349–368. [Google Scholar] [CrossRef]
  20. There is a broader class of models, where subsets 𝒱 of operators share the same parameter, i.e., gμ = g𝒱 for all μ ∈ 𝒱, or where the gμ are subject to linear constraints. These degenerate models are rarely considered in the inference literature. Here we confine our discussion to non-degenerate models and refer the reader to Section SM-7 of the Supplementary Material for further discussion.
  21. Jaynes, E.T. Information Theory and Statistical Mechanics. Phys. Rev. 1957, 106, 620–630. [Google Scholar] [CrossRef]
  22. Tikochinsky, Y.; Tishby, N.Z.; Levine, R.D. Alternative approach to maximum-entropy inference. Phys. Rev. A 1984, 30, 2638–2644. [Google Scholar] [CrossRef]
  23. Rissanen, J.J. Fisher information and stochastic complexity. IEEE Trans. Inf. Theory 1996, 42, 40–47. [Google Scholar] [CrossRef]
  24. Rissanen, J. Strong optimality of the normalized ML models as universal codes and information in data. IEEE Trans. Inf. Theory 2001, 47, 1712–1717. [Google Scholar] [CrossRef]
  25. Schwarz, G. Estimating the dimension of a model. Ann. Stat. 1978, 6, 461–464. [Google Scholar] [CrossRef]
  26. Myung, I.J.; Balasubramanian, V.; Pitt, M.A. Counting probability distributions: Differential geometry and model selection. Proc. Natl. Acad. Sci. USA 2000, 97, 11170–11175. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Jeffreys, H. An Invariant Form for the Prior Probability in Estimation Problems. Proc. R. Soc. Lond. A Math. Phys. Eng. Sci. 1946, 186, 453–461. [Google Scholar] [CrossRef]
  28. Amari, S. Information Geometry and Its Applications; (Applied Mathematical Sciences); Springer: Tokyo, Japan, 2016. [Google Scholar]
  29. Kass, R.E.; Wasserman, L. The selection of prior distributions by formal rules. J. Am. Stat. Assoc. 1996, 91, 1343–1370. [Google Scholar] [CrossRef]
  30. A simplicial complex [31], in our notation, is a model ℳ such that, for any interaction μ ∈ ℳ, any interaction ν that involves a subset ν ⊆ μ of its spins is also contained in the model (i.e., ν ∈ ℳ).
  31. Courtney, O.T.; Bianconi, G. Generalized network structures: The configuration model and the canonical ensemble of simplicial complexes. Phys. Rev. E 2016, 93, 062311. [Google Scholar] [CrossRef] [PubMed]
  32. Landau, L.; Lifshitz, E. Statistical Physics, 3rd ed.; Elsevier Science: London, UK, 2013. [Google Scholar]
  33. Kramers, H.A.; Wannier, G.H. Statistics of the Two-Dimensional Ferromagnet. Part II. Phys. Rev. 1941, 60, 263–276. [Google Scholar] [CrossRef]
  34. Pelizzola, A. Cluster variation method in statistical Physics and probabilistic graphical models. J. Phys. A Math. Gen. 2005, 38, R309–R339. [Google Scholar] [CrossRef]
  35. The symmetric difference of two sets ℓ1 and ℓ2 is defined as the set containing the elements that occur in ℓ1 but not in ℓ2 and vice versa: ℓ1 ⊕ ℓ2 = (ℓ1 ∪ ℓ2) \ (ℓ1 ∩ ℓ2). It corresponds to the XOR operation between the operators of the two loops.
  36. Amari, S.; Nagaoka, H. Methods of Information Geometry; (Translations of mathematical monographs); American Mathematical Society: Providence, RI, USA, 2007. [Google Scholar]
  37. Wainwright, M.J.; Jordan, M.I. Graphical Models, Exponential Families, and Variational Inference. Found. Trends® Mach. Learn. 2008, 1, 1–305. [Google Scholar] [CrossRef]
  38. Wainwright, M.J.; Jordan, M.I. Variational inference in graphical models: The view from the marginal polytope. In Proceedings of the Forty-First Annual Allerton Conference on Communication, Control, and Computing, Monticello, NY, USA, 1–3 October 2003; Volume 41, pp. 961–971. [Google Scholar]
  39. Mastromatteo, I. On the typical properties of inverse problems in statistical mechanics. arXiv, 2013; arXiv:cond-mat.stat-mech/1311.0190. [Google Scholar]
  40. In information geometry [28,36], a model defines a manifold in the space of probability distributions. For exponential models (1), the natural metric, in the coordinates gμ, is given by the Fisher Information (5), and the stochastic complexity (4) is the volume of the manifold [26].
  41. Gresele, L.; Marsili, M. On maximum entropy and inference. Entropy 2017, 19, 642. [Google Scholar] [CrossRef]
  42. Wigner, E.P. The unreasonable effectiveness of mathematics in the natural sciences. Commun. Pure Appl. Math. 1960, 13, 1–14. [Google Scholar] [CrossRef]
  43. In his response to Reference [2] on edge.org, W.D. Willis observes that “Models are interesting precisely because they can take us beyond the data”.
  44. Schneidman, E.; Berry, M.J., II; Bialek, W. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature 2006, 440, 1007–1012. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  45. Lee, E.; Broedersz, C.; Bialek, W. Statistical mechanics of the US Supreme Court. J. Stat. Phys. 2015, 160, 275–301. [Google Scholar] [CrossRef]
  46. Albert, R.; Barabási, A.L. Statistical mechanics of complex networks. Rev. Mod. Phys. 2002, 74, 47–97. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Example of gauge transformations between models with $n = 4$ spins. Models are represented by diagrams (color online): spins are full dots • in the presence of a local field, empty dots ∘ otherwise; blue lines are pairwise interactions ($\phi^\mu = s_i s_j$); orange triangles denote 3-spin interactions ($\phi^\mu = s_i s_j s_k$); and the 4-spin interaction ($s_1 s_2 s_3 s_4$) is a filled blue triangle. Note that model (a) has all its interactions grouped on 3 spins; the gauge transformations leading to this model are shown along the arrows. All the models belong to the same complexity class, with $|\mathcal{M}| = 7$, $\lambda = 4$, and $n_{\mathcal{M}} = 3$ independent operators (e.g., $s_1$, $s_2$, and $s_3$ in model (a)); see the tables in Section SM-6 of the Supplementary Material. The class contains 15 models in total, which are grouped, with respect to permutations of the spins, behind the 4 representatives shown here with their multiplicity ($\times m$). Models (a), (b), (c), and (d) are respectively identified by the following sets of operators: (a) $\mathcal{M} = \{s_1, s_2, s_3, s_1 s_2, s_1 s_3, s_2 s_3, s_1 s_2 s_3\}$; (b) $\mathcal{M} = \{s_1, s_2 s_3, s_3 s_4, s_4 s_2, s_1 s_2 s_3, s_1 s_3 s_4, s_1 s_2 s_4\}$; (c) $\mathcal{M} = \{s_2, s_4, s_2 s_4, s_1 s_3, s_1 s_2 s_3, s_1 s_3 s_4, s_1 s_2 s_3 s_4\}$; (d) $\mathcal{M} = \{s_1 s_2, s_1 s_3, s_1 s_4, s_2 s_3, s_2 s_4, s_3 s_4, s_1 s_2 s_3 s_4\}$.
Figure 2. Example of a minimal generating set of loops for each model of Figure 1. As these models belong to the same class, their respective sets of loops have the same cardinality, $2^\lambda$, where $\lambda = 4$ is the number of generators (as shown here). For model (a), one can easily check that the 4 loops of the set are independent, as each of them contains at least one operator that does not appear in the other 3 loops (see Section SM-3 of the Supplementary Material). Within each column on the r.h.s., loops are related by the same gauge transformation that morphs the models into one another on the l.h.s. (i.e., the transformations displayed in Figure 1). This shows that the loops of these 4 generating sets have the same structure, which implies that the loop structure of the 4 models is the same. Any loop of a model can finally be obtained by combining a subset of its generating loops. Note that the choice of the generating set is not unique. (a–d) refer to the same models as in Figure 1.
Figure 3. Complexity $c_{\mathcal{M}}(k)$ of models with a single loop of length $k$ and $|\mathcal{M}| - k$ free operators, i.e., operators not involved in any loop. For $k = 3$, $c_{\mathcal{M}}(3) = (|\mathcal{M}| - 1) \log \pi$ can be computed analytically from (4). Values of $c_{\mathcal{M}}$ are averaged over $10^3$ numerical estimates of the integral in (4), using $10^6$ Monte Carlo samples each. Error bars correspond to their standard deviation.
Figure 4. (color online) Complexity of models for $n = 4$ as a function of the number $|\mathcal{M}|$ of operators: each triangle represents a complexity class, which contains one or more models (see Section SM-6 of the Supplementary Material). For each class, the value of $c_{\mathcal{M}}$ was obtained from a representative of the class; some of them are shown here with their corresponding diagram (same notation as in Figure 1). The triangle colors indicate the values of $n_{\mathcal{M}}$: violet for $n_{\mathcal{M}} = 4$, green for 3, yellow for 2, pink for 1, and red for 0 (the model with no operators). Models on the black line have only independent operators ($|\mathcal{M}| = n_{\mathcal{M}}$) and complexity $c_{\mathcal{M}} = |\mathcal{M}| \log \pi$; models on the dashed curve are complete models, whose complexities are given in (9). Complexity classes with the same values of $|\mathcal{M}|$ and $n_{\mathcal{M}}$ have the same value of $\lambda = |\mathcal{M}| - n_{\mathcal{M}}$, i.e., the same number of loops $|\mathcal{L}|$, but a different loop structure.
