Article

Intrinsic Computation of a Monod-Wyman-Changeux Molecule

Physics of Living Systems Group, Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Entropy 2018, 20(8), 599; https://doi.org/10.3390/e20080599
Submission received: 12 July 2018 / Revised: 6 August 2018 / Accepted: 10 August 2018 / Published: 11 August 2018
(This article belongs to the Special Issue Information Theory in Complex Systems)

Abstract

Causal states are minimal sufficient statistics of prediction of a stochastic process, their coding cost is called the statistical complexity, and the implied causal structure yields a sense of the process' "intrinsic computation". We discuss how statistical complexity changes with slight changes to the underlying model, in this case a biologically motivated dynamical model: that of a Monod-Wyman-Changeux molecule. Perturbations to kinetic rates cause the statistical complexity to jump from finite to infinite. The same is not true for the excess entropy, the mutual information between past and future, or for the molecule's transfer function. We discuss the implications of this for the relationship between the intrinsic and functional computation of biological sensory systems.

1. Introduction

Intrinsic computation [1] is a theory of how a dynamical system "intrinsically" computes. In short, one builds a minimal maximally predictive model (or ϵ-machine) of the process generated by a dynamical system. States of the ϵ-machine are called "causal states", although these states are normally not causal in the sense of Ref. [2]. Certain words are forbidden, in that they can never be seen. The words that are seen, and thus accepted by the ϵ-machine, constitute the ϵ-machine's language, in a nod to the computations performable by finite and infinite automata. The "memory stored by the process", the statistical complexity, is taken to mean the coding cost of the ϵ-machine's states.
One interesting hypothesis is that the ϵ-machine's structure provides a guide to the "functional" computation of the corresponding dynamical system. Functional computation (biologically relevant computation, e.g., transformation of information with fitness consequences) might include everything from estimating past input [3,4] to predicting future input [5] to performing logical computations on input [6]. A more rigorous definition of "functional computation" remains an open problem; here, we merely list examples of quantities that can be identified with functional computations. As of yet, no link between intrinsic and functional computation has been found.
Here, we investigate the intrinsic computation and two functional computations (ligand concentration transduction and low-pass filtering) of a Monod-Wyman-Changeux (MWC) molecule, a widely used model of a biological sensor [7,8,9]. This is the first time that the intrinsic computation of an MWC molecule, which here is limited to the ϵ-machine structure, the statistical complexity, and the excess entropy, has been calculated. The calculational techniques used here can be applied to study the intrinsic computation of a more general class of biological sensors than previously studied.
We find that certain arbitrarily small perturbations to the underlying MWC molecule can lead to arbitrarily large perturbations in the process’ intrinsic structure but lead to arbitrarily small changes in the stated functional computations. To the author’s knowledge, this is the first such example in the literature. These results therefore suggest that causal structure and functional computation are orthogonal characterizations of a process, at least for oft-considered functional computations.
However, intrinsic computation could be taken to include several newer structure-related information-theoretic measures of a process, including the excess entropy [10,11,12]. These newer measures do not suffer the same sensitivity as the ϵ-machine and statistical complexity, suggesting that they might help characterize functional computation.
Section 2 reviews the definitions of the ϵ-machine and of MWC molecules. Section 3 explores how variations in kinetic rates change the molecule's intrinsic and functional computation. Section 4 discusses future research directions for intrinsic computation.

2. Background

The subject of interest here is a continuous-time, discrete-event process. However, for reasons explained later, we will characterize statistical complexity by time-binning the process and treating it as a discrete-time, discrete-event process. Hence, we are also interested in discrete-time, discrete-event processes.
First, we discuss discrete-time, discrete-event processes. We code these processes as $\ldots, x_{-1}, x_0, x_1, \ldots$, where $x_i$ is the $i$-th symbol appearing. The past $\overleftarrow{x}$ (with corresponding random variable $\overleftarrow{X}$) is taken to be $\ldots, x_{-2}, x_{-1}$, while the future $\overrightarrow{x}$ (with corresponding random variable $\overrightarrow{X}$) is taken to be $x_0, x_1, x_2, \ldots$.
Next, we discuss continuous-time, discrete-event processes. We code these processes as $\ldots, (x_{-1}, \tau_{-1}), (x_0, \tau_0), (x_1, \tau_1), \ldots$, where $x_i$ is the $i$-th symbol, appearing for a total duration $\tau_i$. We enforce $x_i \neq x_{i+1}$ so as to ensure a unique coding. The present is said to occur sometime during the presentation of $x_0$, and so we denote the past $(\overleftarrow{x}, \overleftarrow{\tau})$ (with corresponding random variable $(\overleftarrow{X}, \overleftarrow{T})$) as $\ldots, (x_{-1}, \tau_{-1}), (x_0, \tau^+)$ and the future $(\overrightarrow{x}, \overrightarrow{\tau})$ (with corresponding random variable $(\overrightarrow{X}, \overrightarrow{T})$) as $(x_0, \tau^-), (x_1, \tau_1), \ldots$, where $\tau^+ + \tau^- = \tau_0$.
Section 2.1 reviews the definitions of causal states, statistical complexity, the continuous-time ϵ-machine, and the mixed-state simplex. Section 2.2 reviews the dynamical models of Monod-Wyman-Changeux molecules used here.
We assume knowledge of information theory at the level of Ref. [13], but we briefly review definitions here. When $X$ is a discrete random variable with probability distribution $p(x)$, its entropy is $H[X] = -\sum_x p(x) \log p(x)$; when $X$ is a continuous random variable with probability density function $\rho(x)$, the differential entropy is $H[X] = -\int \rho(x) \log \rho(x)\, dx$; and when $X$ is a mixed random variable (as is the case here), the entropy $H[X]$ is given by Ref. [14]. Entropy can be thought of as a measure of uncertainty. The conditional entropy of $X$ given a random variable $Y$ is $H[X|Y] = \langle H[X|Y=y] \rangle_y$, and the mutual or shared information between $X$ and $Y$ is merely $I[X;Y] = H[X] - H[X|Y]$.
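As a concrete illustration of these definitions, the following sketch (in Python) computes the entropy and mutual information of two binary random variables; the joint distribution here is invented purely for illustration.

```python
# A minimal numerical illustration of the information-theoretic quantities
# reviewed above, for discrete random variables. The joint distribution is
# made up for illustration; entropies are in nats.
import numpy as np

def entropy(p):
    """Shannon entropy H[X] = -sum_x p(x) log p(x)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# A hypothetical joint distribution p(x, y) over two binary variables.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x = p_xy.sum(axis=1)   # marginal over x
p_y = p_xy.sum(axis=0)   # marginal over y

# Mutual information I[X;Y] = H[X] + H[Y] - H[X,Y] = H[X] - H[X|Y].
I_xy = entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())
print(I_xy)              # positive: the two variables share information
```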

2.1. Causal States S, Statistical Complexity C_μ, the ϵ-Machine, and the Mixed-State Simplex

Consider the equivalence relation ∼_ϵ that clusters two semi-infinite pasts $\overleftarrow{x}$ and $\overleftarrow{x}'$ together if $\Pr(\overrightarrow{X} \mid \overleftarrow{X} = \overleftarrow{x}) = \Pr(\overrightarrow{X} \mid \overleftarrow{X} = \overleftarrow{x}')$, that is, if the two pasts are equivalent from the standpoint of prediction. The corresponding clusters are causal states σ, which are realizations of the causal-state random variable S. The statistical complexity C_μ is simply their coding cost, C_μ = H[S]. In short, causal states S are minimal sufficient statistics of prediction; the statistical complexity C_μ = H[S] is the coding cost of those causal states [15]; and the ϵ-machine is the minimal maximally predictive model constructed from those causal states [16].
The same constructions apply when considering continuous-time, discrete-event processes. In that case, the equivalence relation ∼_ϵ clusters two semi-infinite pasts $(\overleftarrow{x}, \overleftarrow{\tau})$ and $(\overleftarrow{x}', \overleftarrow{\tau}')$ together if $\Pr((\overrightarrow{X}, \overrightarrow{T}) \mid (\overleftarrow{X}, \overleftarrow{T}) = (\overleftarrow{x}, \overleftarrow{\tau})) = \Pr((\overrightarrow{X}, \overrightarrow{T}) \mid (\overleftarrow{X}, \overleftarrow{T}) = (\overleftarrow{x}', \overleftarrow{\tau}'))$. Again, the clusters are causal states σ, realizations of the causal-state random variable S.
For what follows, we must define two terms: the mixed-state simplex and unifilarity. Consider any hidden Markov model whose hidden state at time t is a random variable; now consider the probability distribution over hidden states given the observations $\overleftarrow{x}$. Each one of these conditional probability distributions is a mixed state, and it lies in the mixed-state simplex, the set of all possible probability distributions over hidden states. The hidden Markov model is unifilar when, given the hidden state at time t and the observation at time t, one knows exactly which hidden state comes next at time t + 1. There is a connection between unifilarity and the mixed-state simplex: when the hidden Markov model under study is unifilar, the mixed states lie at the edge of the simplex. (This is not true for nonunifilar hidden Markov models.) The causal states are just the mixed states of the minimal (potentially nonunifilar) generative model.
The causal states of discrete-time processes are usually uncountably infinite. When this is the case, the box-counting dimension of the mixed-state presentation in the mixed-state simplex is nonzero. Let us unpack this statement. Suppose that a (potentially nonunifilar) hidden Markov model with states g generates the observed discrete-time process. Then we use $p(g \mid \overleftarrow{x})$ to denote the probability distribution over hidden states of the generative model given past output. Typically, $p(g \mid \overleftarrow{x})$ lies in the interior of the mixed-state simplex, the space of probability distributions over hidden states. The box-counting dimension of the mixed-state presentation is obtained by gridding the mixed-state simplex with cubes of side length ϵ, counting the number of non-empty cubes $N_\epsilon$ (those containing coarse-grainings of histories) [17], and then calculating the scaling of $N_\epsilon$ with ϵ, so that the box-counting dimension is $\lim_{\epsilon \to 0} \frac{\log N_\epsilon}{\log(1/\epsilon)}$. A cube is considered non-empty when at least one history leads to a mixed state in that cube. When a discrete-time process has countably many causal states, the box-counting dimension is 0.
The causal states inherit a dynamic, and the ϵ-machine of a process is the pairing of the causal states with that dynamic. For discrete-time, discrete-event processes, tractable ϵ-machines are merely countable unifilar hidden Markov models [16], where unifilarity implies that the next hidden state is determined uniquely by the previous hidden state and the presently emitted symbol. For continuous-time, discrete-event processes, tractable ϵ-machines can (for instance) take the form of joined conveyer belts [18]. Continuous-time causal states are then usually accompanied by labeled transition operators $O^{(x)}$, and the list of labeled transition operators specifies the continuous-time ϵ-machine. A "tractable ϵ-machine" is one for which C_μ is finite, with the exception of the ϵ-machines of continuous-time periodic processes (which are tractable but correspond to infinite C_μ).

2.2. Monod-Wyman-Changeux Molecules

A Monod-Wyman-Changeux (MWC) molecule has two configurations, active (A) and inactive (I), and n binding sites. Each configuration can bind any number of ligand molecules, from 0 to n. With distinguishable binding sites, this gives a total of $2^{n+1}$ possible states. If binding sites are indistinguishable, then a simplified model can be made based on the symmetry in binding sites, so that there are only $2(n+1)$ distinguishable states: the molecule is either active or inactive, with any number of binding sites occupied by ligand molecules. As our argument holds for any n, we focus on the case n = 1. The four states of the corresponding MWC molecule, $\{A_0, A_1, I_0, I_1\}$, standing for active/inactive (A/I) with either 0 or 1 ligands bound as written in the subscript, are shown in Figure 1, along with the allowed transitions.
We denote the probability distribution of being in various states as
$$\mathbf{p} = \begin{pmatrix} p(A_0) \\ p(A_1) \\ p(I_0) \\ p(I_1) \end{pmatrix}.$$
This probability distribution evolves via the master equation
$$\frac{d\mathbf{p}}{dt} = M(c(t))\,\mathbf{p},$$
where
$$M(c) = \begin{pmatrix} -(f_T + f_A c) & b_A & b_T & 0 \\ f_A c & -(f_T' + b_A) & 0 & b_T' \\ f_T & 0 & -(b_T + f_I c) & b_I \\ 0 & f_T' & f_I c & -(b_T' + b_I) \end{pmatrix},$$
where $f_T$, $b_T$, $f_T'$, $b_T'$, $f_A$, $b_A$, $f_I$, and $b_I$ are kinetic parameters; the primed rates govern the transitions between the singly-bound conformations ($A_1 \leftrightarrow I_1$), while the unprimed $f_T$, $b_T$ govern those between the unbound conformations ($A_0 \leftrightarrow I_0$). At fixed ligand concentration c, Equation (2) is solved as
$$\mathbf{p}(t) = e^{M(c) t}\, \mathbf{p}(0),$$
where p ( 0 ) is the initial probability distribution over the MWC molecule’s states.
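To make the dynamics concrete, the following sketch builds M(c) under the state ordering (A_0, A_1, I_0, I_1) used above and evolves p(t) = e^{M(c)t} p(0) numerically. The helper name rate_matrix and all rate values are illustrative assumptions, not values from the text.

```python
# A sketch of the master-equation dynamics above; columns of the generator sum
# to zero, so probability is conserved. Primed rates appear as fTp, bTp.
import numpy as np
from scipy.linalg import expm

def rate_matrix(c, fT, bT, fTp, bTp, fA, bA, fI, bI):
    """The generator M(c) for state ordering (A0, A1, I0, I1)."""
    return np.array([
        [-(fT + fA * c), bA,           bT,             0.0],
        [fA * c,         -(fTp + bA),  0.0,            bTp],
        [fT,             0.0,          -(bT + fI * c), bI],
        [0.0,            fTp,          fI * c,         -(bTp + bI)],
    ])

M = rate_matrix(c=1.0, fT=1.0, bT=3.0, fTp=0.5, bTp=0.5,
                fA=2.9, bA=3.4, fI=4.0, bI=2.0)
p0 = np.array([1.0, 0.0, 0.0, 0.0])    # start in A0
p_t = expm(M * 2.0) @ p0               # p(t) = e^{M(c) t} p(0) at t = 2
print(p_t, p_t.sum())                  # a probability vector summing to 1
```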

3. Results

We suppose that we are only allowed to see whether the MWC molecule is active or inactive, as would be true for most experimental observations of ligand-gated ion channels. This is a key constraint: otherwise, the minimal generative model would be the minimal maximally predictive model, and none of the discrepancies described here would arise. In what follows, we explore the effects of the kinetic rates on intrinsic and functional computation.
Intrinsic computation, as originally defined, included the ϵ-machine and the statistical complexity; today it also includes other information measures, such as the excess entropy [10,11,12]. The number of functional computation-related quantities is unbounded, but we focus on two here due to their presence in the literature: the binding curve, i.e., the probability that the MWC molecule is active as a function of ligand concentration; and the transfer function, i.e., how the MWC molecule responds to sinusoidal perturbations of the ligand concentration.
Our argument will essentially be a proof by contradiction. We will start by assuming that there is some relationship between at least one aspect of intrinsic computation and at least one aspect of functional computation. If there were a relationship between these two quantities, then we should not be able to change kinetic rates so that one quantity changes by an arbitrarily small amount and the other by an arbitrarily large amount. (If we could, these kinetic rates would be of incredible importance to the process' causal architecture, say, but of vanishingly small importance to the so-called functional computations.) We will then show in the following analysis that arbitrarily small perturbations in the kinetic rates f_T', b_T' induce arbitrarily large perturbations in the ϵ-machine and statistical complexity, but arbitrarily small perturbations in the excess entropy and the functional computations considered here. We therefore conclude that if there is a relationship between intrinsic computation and functional computation for these kinds of molecules, it will more likely come from the excess entropy (or other more recently studied information measures of time series [19]) than from the statistical complexity or the ϵ-machine. We discuss the possibility of finding functional computations that are sensitive to arbitrarily small increases in f_T', b_T' in Section 4.

3.1. Intrinsic Computation

If b_T', f_T' > 0, then there is no "sync word", that is, no string of observed past symbols that uniquely determines the underlying present state of the MWC molecule. (Note again that this would not be the case if we were allowed to observe the full state, rather than just whether the MWC molecule is active or inactive.) This has important consequences for C_μ and for the process' ϵ-machine.
To analyze C_μ, we move to the discrete-time domain, so as to avoid the interpretational difficulties with differential entropy [18]. By observing the process every Δt, for Δt much smaller than any inherent time constant in the problem, the process is turned into a discrete-time process. The transition probabilities of this new process have corresponding labeled transition matrices $T^{(x)} = e^{M^{(x)} \Delta t}$, which are approximated to lowest order in Δt by $T^{(x)} = I^{(x)} + M^{(x)} \Delta t$, where x is an emitted symbol:
$$M^{(A)}(c) = \begin{pmatrix} -(f_T + f_A c) & b_A & 0 & 0 \\ f_A c & -(b_A + f_T') & 0 & 0 \\ f_T & 0 & 0 & 0 \\ 0 & f_T' & 0 & 0 \end{pmatrix} \quad\text{and}\quad M^{(I)}(c) = \begin{pmatrix} 0 & 0 & b_T & 0 \\ 0 & 0 & 0 & b_T' \\ 0 & 0 & -(b_T + f_I c) & b_I \\ 0 & 0 & f_I c & -(b_I + b_T') \end{pmatrix}$$
and
$$I^{(A)} = \begin{pmatrix} I_{2\times 2} & 0_{2\times 2} \\ 0_{2\times 2} & 0_{2\times 2} \end{pmatrix}, \qquad I^{(I)} = \begin{pmatrix} 0_{2\times 2} & 0_{2\times 2} \\ 0_{2\times 2} & I_{2\times 2} \end{pmatrix}.$$
The causal states correspond to the mixed states $\Pr(\mathcal{S} \mid \overleftarrow{X} = \overleftarrow{x})$, which take the form $(a, 1-a, 0, 0)$ or $(0, 0, 1-b, b)$. When b_T', f_T' > 0, all values of a and b are allowed; there is an uncountable infinity of these causal states because there is no sync word. As such, the ϵ-machine is uncountable and intractable, and the statistical complexity is very likely infinite. However, when b_T' = f_T' = 0, only a countable set of a's and b's is allowed, according to the discrete-time analogue of Theorem 1 of Ref. [18]. In particular, the causal states are identified by the present configuration (active or inactive) and the number of time steps since the last configuration switch. As a result, the ϵ-machine is countably infinite and tractable, and the statistical complexity is finite, though it increases as log(1/Δt) [20].
This can be seen more directly by considering the mixed-state presentation's box-counting dimension h_0 when the process is turned into a discrete-time process with small time resolution Δt = 0.01. We coarse-grain the mixed-state simplex into cubes of side length ϵ and count the number of non-empty cubes $N_\epsilon$, as described in Refs. [17,21]. The scaling of $N_\epsilon$ with ϵ reveals the box-counting dimension h_0 of the mixed-state presentation, via $N_\epsilon \sim (1/\epsilon)^{h_0}$. Figure 2 shows the scaling of $N_\epsilon$ with ϵ for an MWC molecule with and without f_T' = b_T' = 0. When f_T' = b_T' = 0, h_0 = 0; when f_T', b_T' > 0, the box-counting dimension h_0 > 0. A numerical sketch of this procedure is given below.
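The following sketch outlines one way to carry out the box-counting procedure numerically: time-bin the process, filter the hidden state given only the observed A/I activity, and count occupied cubes at several resolutions. It reuses the rate_matrix helper sketched in Section 2.2; the rates, seed, and trajectory length are illustrative assumptions.

```python
# A numerical sketch of the box-counting estimate for the mixed-state
# presentation. Setting fTp = bTp = 0 instead should collapse the scaling of
# the cube count toward the h_0 = 0 behavior described in the text.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rate_matrix(c=1.0, fT=1.0, bT=3.0, fTp=0.5, bTp=0.5,
                fA=2.9, bA=3.4, fI=4.0, bI=2.0)
dt = 0.01
T = expm(M * dt)                 # T[i, j] = Pr(state i at t + dt | state j at t)

# Simulate a hidden trajectory; we only get to see activity (states 0, 1 = A).
n_steps = 100_000
state, states = 0, np.empty(n_steps, dtype=int)
for t in range(n_steps):
    col = T[:, state]
    state = rng.choice(4, p=col / col.sum())
    states[t] = state
obs_active = states < 2

# Filter: the mixed state is Pr(hidden state | activity observed so far).
TA, TI = T.copy(), T.copy()
TA[2:, :] = 0.0                  # condition on landing in an active state
TI[:2, :] = 0.0                  # condition on landing in an inactive state
eta = np.full(4, 0.25)
mixed = np.empty((n_steps, 4))
for t in range(n_steps):
    eta = (TA if obs_active[t] else TI) @ eta
    eta /= eta.sum()
    mixed[t] = eta

# Count non-empty cubes at each resolution eps; the slope of log N_eps versus
# log(1/eps) estimates the box-counting dimension h_0.
for eps in [0.1, 0.05, 0.02, 0.01]:
    n_cubes = len({tuple(np.floor(m / eps).astype(int)) for m in mixed})
    print(eps, n_cubes)
```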
The reason for the former fact lies in Theorem 1 of Ref. [18]. When f_T' = b_T' = 0, the dynamical MWC molecule of Figure 1 generates a semi-Markov process, a restricted version of the unifilar hidden semi-Markov processes analyzed in Ref. [18]. This is true even when there is more than one ligand binding site. Causal states are characterized by x, whether the MWC molecule is presently active, and τ⁺, the time since the MWC molecule last switched between activities. The now-tractable ϵ-machine takes the form shown in Figure 3.
In the continuous-time limit, which one can derive by considering the limit of the discrete-time process considered above with appropriate renormalization (e.g., compare Ref. [20] to Ref. [22]), all probability distributions over mixed states become probability density functions. Continuous-time statistical complexity can be defined using the entropy of mixed random variables [18,22], though differential entropy does have some troubling properties mentioned in those references. Given the analysis above, there is likely a singular limit in the continuous-time statistical complexity as the kinetic rates b_T', f_T' tend to 0.
Not all structure-based characterizations of a process lack robustness in this way, as different structure-based metrics pick up on different kinds of structure. To show this, we now compare the statistical complexity C_μ and the excess entropy $E = I[(\overleftarrow{X}, \overleftarrow{T}); (\overrightarrow{X}, \overrightarrow{T})]$ [10,11,12] when f_T' = b_T' = 0. (We calculate E of the continuous-time process, as the excess entropy of the discrete-time process converges to that of the continuous-time process in the Δt → 0 limit [20].) The latter can be calculated via $E = I[\mathcal{S}^+; \mathcal{S}^-]$ [23,24], while $C_\mu = H[\mathcal{S}^+]$, and so calculation of both merely requires the joint distribution $p(\sigma^+, \sigma^-)$. For that, we need $\phi_{A/I}(t)$, the dwell-time distributions of activity and inactivity. Note that emission of an A implies that one has just landed in $A_0$, and similarly, emission of an I implies that one has just landed in $I_0$. Hence, $\phi_A(t)$ is the first-passage time distribution to state $I_0$ when one starts in $A_0$; similarly, $\phi_I(t)$ is the first-passage time distribution to state $A_0$ when one starts in $I_0$. To aid with the calculation, we recall the labeled transition matrices of Equation (5) when f_T' = b_T' = 0. The matrix $M^{(A)}$ includes only the transitions between the various active conformations and the only transition from active to inactive, $A_0 \to I_0$. Therefore, the probability of not having stayed in the active states after a time t, given that one started in $A_0$, is given by
$$1 - \Phi_A(t) = (\hat{e}_3 + \hat{e}_4)^\top \mathbf{p}(t), \qquad \frac{d\mathbf{p}}{dt} = M^{(A)}(c)\,\mathbf{p}(t), \qquad \mathbf{p}(0) = \hat{e}_1,$$
where $\hat{e}_k$ is the vector with elements $\delta_{i,k}$. Hence, the survival function $\Phi_A(\tau) = \int_\tau^\infty \phi_A(\tau')\, d\tau'$, the probability that one stays in the active conformation (one of $A_0$, $A_1$) after time t given that one started in $A_0$, can be calculated via
$$\Phi_A(t) = 1 - (\hat{e}_3 + \hat{e}_4)^\top e^{M^{(A)}(c) t}\, \hat{e}_1.$$
Similarly,
$$\Phi_I(t) = 1 - (\hat{e}_1 + \hat{e}_2)^\top e^{M^{(I)}(c) t}\, \hat{e}_3.$$
After differentiation, we find that
$$\phi_A(t) = -\frac{d\Phi_A(t)}{dt} = (\hat{e}_3 + \hat{e}_4)^\top M^{(A)}(c)\, e^{M^{(A)}(c) t}\, \hat{e}_1,$$
$$\phi_I(t) = -\frac{d\Phi_I(t)}{dt} = (\hat{e}_1 + \hat{e}_2)^\top M^{(I)}(c)\, e^{M^{(I)}(c) t}\, \hat{e}_3.$$
Examples of ϕ A ( t ) for various ligand concentrations c and kinetic rates are shown in Figure 4.
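A short numerical sketch of these dwell-time formulas follows, with illustrative rates and the primed rates f_T', b_T' (fTp, bTp below) set to zero; the functions implement the survival functions and densities above via the matrix exponential.

```python
# A sketch of the dwell-time calculation: Phi_A(t) and Phi_I(t) come from
# matrix exponentials of M^(A) and M^(I) (the opposite conformation's states
# are absorbing), and phi = -dPhi/dt. All rates are illustrative.
import numpy as np
from scipy.linalg import expm

fT, bT, fTp, bTp = 1.0, 1.0, 0.0, 0.0
fA_c, bA, fI_c, bI = 1.0, 1.0, 1.0, 1.0
MA = np.array([[-(fT + fA_c),  bA,          0.0, 0.0],
               [fA_c,         -(bA + fTp),  0.0, 0.0],
               [fT,            0.0,         0.0, 0.0],
               [0.0,           fTp,         0.0, 0.0]])
MI = np.array([[0.0, 0.0,  bT,           0.0],
               [0.0, 0.0,  0.0,          bTp],
               [0.0, 0.0, -(bT + fI_c),  bI],
               [0.0, 0.0,  fI_c,        -(bI + bTp)]])
e = np.eye(4)

def Phi_A(t):   # probability of remaining active through time t, started in A0
    return 1.0 - (e[2] + e[3]) @ expm(MA * t) @ e[0]

def phi_A(t):   # dwell-time density for the active conformation
    return (e[2] + e[3]) @ MA @ expm(MA * t) @ e[0]

def Phi_I(t):   # probability of remaining inactive through time t, started in I0
    return 1.0 - (e[0] + e[1]) @ expm(MI * t) @ e[2]

def phi_I(t):   # dwell-time density for the inactive conformation
    return (e[0] + e[1]) @ MI @ expm(MI * t) @ e[2]

print(Phi_A(1.0), phi_A(1.0), Phi_I(1.0), phi_I(1.0))
```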
From Lemma 1 of Ref. [18], we find that the statistical complexity of this semi-Markov process is given by
$$C_\mu = H_b(p(A)) - \sum_{x \in \{A, I\}} p(x) \int_0^\infty \big(\mu_x \Phi_x(\tau)\big) \log\big(\mu_x \Phi_x(\tau)\big)\, d\tau,$$
where $p(A) = \frac{\mu_I}{\mu_A + \mu_I}$, $\mu_x = 1 / \int_0^\infty \Phi_x(\tau)\, d\tau$, and $H_b(x) := -x \log x - (1-x) \log(1-x)$. Figure 5 shows how C_μ varies smoothly with changes in f_T, b_T for f_T, b_T > 0. Statistical complexity is maximized at small kinetic rates, f_T, b_T → 0: when those kinetic rates are small, the dwell-time distributions have longer tails, and the memory required to predict losslessly increases. Interestingly, if either f_T or b_T is exactly 0, then the generated process emits only A or only I (see Figure 1) and thus has C_μ = 0. In other words, the limits f_T, b_T → 0 are singular, as were the limits f_T', b_T' → 0. A numerical sketch of this calculation follows.
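The following sketch evaluates the C_μ expression by quadrature, reusing Phi_A and Phi_I from the previous sketch; rates are illustrative and entropies are in nats.

```python
# A quadrature sketch of the C_mu formula above. The integrand v log v is
# guarded against the vanishing tail of the survival function.
import numpy as np
from scipy.integrate import quad

def dwell_stats(Phi):
    mean, _ = quad(Phi, 0, np.inf)       # mean dwell time = integral of Phi
    mu = 1.0 / mean
    def integrand(t):
        v = mu * Phi(t)
        return v * np.log(v) if v > 1e-300 else 0.0
    val, _ = quad(integrand, 0, np.inf)
    return mu, val

mu_A, int_A = dwell_stats(Phi_A)
mu_I, int_I = dwell_stats(Phi_I)
pA = mu_I / (mu_A + mu_I)                # long-run probability of symbol A
Hb = -pA * np.log(pA) - (1.0 - pA) * np.log(1.0 - pA)
C_mu = Hb - pA * int_A - (1.0 - pA) * int_I
print(C_mu)
```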
We now wish to calculate $E = I[(\overleftarrow{X}, \overleftarrow{T}); (\overrightarrow{X}, \overrightarrow{T})]$, which is $E = I[\mathcal{S}^+; \mathcal{S}^-]$ [23], and so
$$E = H[\mathcal{S}^-] - H[\mathcal{S}^- \mid \mathcal{S}^+].$$
As a semi-Markov process is causally reversible, we have
$$H[\mathcal{S}^-] = H[\mathcal{S}^+] = C_\mu,$$
as given in Equation (10). Furthermore, the reverse-time causal states are the pair $(x^-, \tau^-)$ (the present symbol and the time to the next symbol), while the forward-time causal states are still the pair $(x^+, \tau^+)$ (the present symbol and the time since the last symbol) [18], so that $x^+ = x^-$ almost surely, implying that $H[X^- \mid X^+] = 0$. Hence,
$$H[\mathcal{S}^- \mid \mathcal{S}^+] = H[X^-, T^- \mid X^+, T^+] = H[X^- \mid X^+, T^+] + H[T^- \mid X^-, X^+, T^+] = H[T^- \mid X_0, T^+],$$
where $x_0$ is just the present symbol. We then note that
$$p(\tau^- \mid x_0, \tau^+) = \frac{\phi_{x_0}(\tau^+ + \tau^-)}{\Phi_{x_0}(\tau^+)},$$
as was derived in Ref. [22] for a continuous-time renewal process, but the same derivation holds for the semi-Markov process. It is then straightforward to show that excess entropy is
$$E = H[X_0] + \sum_x p(x)\, E[\phi_x(t)],$$
where
$$E[\phi_x(t)] = \int_0^\infty \int_0^\infty \mu_x\, \phi_x(t + t') \log \frac{\phi_x(t + t')}{\mu_x\, \Phi_x(t)\, \Phi_x(t')}\, dt\, dt'.$$
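A numerical sketch of this double integral follows. The infinite upper limits are truncated at an illustrative t_max, and phi_A, Phi_A, phi_I, Phi_I, pA, mu_A, and mu_I are assumed defined as in the earlier sketches.

```python
# A sketch of the excess-entropy formula above, via dblquad. Tolerances are
# deliberately coarse, since each integrand evaluation calls expm.
import numpy as np
from scipy.integrate import dblquad

def E_term(phi, Phi, mu, t_max=20.0):
    def integrand(t2, t1):
        num = phi(t1 + t2)
        den = mu * Phi(t1) * Phi(t2)
        if num <= 0.0 or den <= 0.0:
            return 0.0
        return mu * num * np.log(num / den)
    val, _ = dblquad(integrand, 0.0, t_max, lambda t: 0.0, lambda t: t_max,
                     epsabs=1e-4, epsrel=1e-4)
    return val

H_X0 = -pA * np.log(pA) - (1.0 - pA) * np.log(1.0 - pA)   # H[X_0] = H_b(p(A))
E = H_X0 + pA * E_term(phi_A, Phi_A, mu_A) + (1.0 - pA) * E_term(phi_I, Phi_I, mu_I)
print(E)
```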
Figure 5b shows how the excess entropy E varies with f_T, b_T. Interestingly, E varies in opposition to C_μ, attaining its lowest values at low values of f_T and b_T. Hence, the singular limits f_T, b_T → 0 that plague C_μ are not singular limits for E. Nor are f_T', b_T' → 0 singular limits for E, as arbitrarily small values of f_T', b_T' lead to arbitrarily small perturbations to the trajectory distribution, and thus arbitrarily small perturbations to the mutual information between past and future.
How different would the results be for n > 1, i.e., when the number of binding sites of the MWC molecule exceeds 1? In short, we would expect the same qualitative trends and singular limits. In this more general case, we allow an active MWC molecule with k ≥ 1 ligands bound to transition to an inactive MWC molecule with k ligands bound, and vice versa, with rates f_T' and b_T', as for n = 1. Meanwhile, also as for n = 1, the active MWC molecule with no ligands bound can transition to the inactive MWC molecule with no ligands bound with rate f_T, and the reverse transition can occur with rate b_T. The observed process at fixed ligand concentration would still be semi-Markov when f_T' = b_T' = 0, as was true for n = 1. Then, decreases in f_T, b_T would lead to longer dwell times in the active and inactive states, thereby increasing the statistical complexity C_μ; and the dwell-time distributions would become closer to exponential, decreasing the excess entropy E. When f_T', b_T' become small but nonzero, all pasts are causal states, and so C_μ shoots to infinity, while E (because it is a function of trajectory distributions) barely changes.
There are some well-known examples of how arbitrarily large ϵ-machines can still have arbitrarily small excess entropies, e.g., the almost-fair coin. Indeed, Ref. [23] defined crypticity as the difference between the statistical complexity C_μ and the excess entropy E. The dynamical MWC molecule described above adds another such example to the literature, showing not only that a familiar process can have arbitrarily large crypticity, but also that C_μ and E can be anti-correlated with respect to the underlying kinetic rates, as is true for the process generated by the parametrized Simple Nonunifilar Source [25]. There are also examples in the literature of processes with uncountable ϵ-machines and nonzero box-counting dimensions of their mixed-state presentations, e.g., the Cantor process in Ref. [26].
However, the dynamical MWC molecule is more than just an example of a process with potentially arbitrarily large crypticity or an uncountable ϵ-machine; it is also an example of how arbitrarily small changes to a generative model can lead to arbitrarily large changes in the causal structure of a process. Of course, it may be obvious to those familiar with intrinsic computation that arbitrarily small perturbations in the transition probabilities of a generative model can sometimes lead to arbitrarily large perturbations in ϵ-machine structure. However, to the author's knowledge, the above MWC molecule is the first such example in the literature.

3.2. Functional Computation

Monod-Wyman-Changeux (MWC) molecules have been used to model everything from ligand-gated ion channels to gene regulation [8]. The functional computations that an MWC molecule is thought to perform include transduction of ligand concentration and low-pass filtering of input [7].
Let $\mathrm{eig}_0(M(c))$ be the eigenvector of eigenvalue 0 of matrix M(c), normalized so that $\vec{1}^\top \mathrm{eig}_0(M(c)) = 1$; and let $p_{eq,A}(c)$ be the equilibrium probability of being in state A. The MWC molecule's ability to convey the ligand concentration via its activity is a static property, relying only on how the equilibrium distribution
$$p_{eq,A}(c) = p_{eq,A_0}(c) + p_{eq,A_1}(c) = (\hat{e}_1 + \hat{e}_2)^\top \mathrm{eig}_0(M(c))$$
varies with kinetic rates. An observer that can only see whether the MWC molecule is active can discern, to some extent, the external ligand concentration c. Such a situation might occur, for instance, for the nicotinic acetylcholine receptors at the neuromuscular junction that transduce information about whether or not a muscle fiber should seize, based on acetylcholine concentration. Though $\mathrm{eig}_0(M(c))$ might in principle be a non-smoothly varying function of f_T', b_T', a Mathematica calculation finds that $\mathrm{eig}_0(M(c))$, and thus $p_{eq,A}(c)$, varies smoothly with the kinetic rates f_T', b_T':
$$\mathrm{eig}_0(M(c)) \propto \begin{pmatrix} b_A b_I b_T + b_A b_T b_T' + b_A b_T' f_I c + b_I b_T f_T' \\ (b_I b_T f_A + b_T b_T' f_A + b_T' f_A f_I c + b_T' f_T f_I)\, c \\ b_A b_I f_T + b_A f_T b_T' + f_T' b_I f_A c + b_I f_T f_T' \\ (b_A f_T f_I + b_T f_A f_T' + f_T' f_I f_A c + f_T f_T' f_I)\, c \end{pmatrix},$$
where the normalization constant is chosen so that $\vec{1}^\top \mathrm{eig}_0(M(c)) = 1$. The smooth variation of $p_{eq,A}(c)$ with respect to f_T', b_T' is depicted for a random choice of kinetic rates in Figure 6.
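The binding curve is straightforward to evaluate numerically: the following sketch extracts the null eigenvector of M(c) and normalizes it, summing its two active components. It uses the rate_matrix helper sketched in Section 2.2 and the Figure 6 rates, with the primed rates set to zero.

```python
# A sketch of the binding curve p_eq,A(c) via the null eigenvector of M(c).
import numpy as np

def p_eq_A(c):
    M = rate_matrix(c, fT=1.0, bT=1.0, fTp=0.0, bTp=0.0,
                    fA=100.0, bA=0.1, fI=1.0, bI=1.0)
    w, v = np.linalg.eig(M)
    p = np.real(v[:, np.argmin(np.abs(w))])  # eigenvector of eigenvalue ~ 0
    p /= p.sum()                             # normalize so 1^T p = 1
    return p[0] + p[1]                       # active states A0 and A1

for c in [0.01, 0.1, 1.0, 10.0]:
    print(c, p_eq_A(c))                      # a sigmoidal binding curve in c
```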
The MWC molecule is a low-pass filter of the ligand concentration c. Suppose that $c(t) = c_0 + \delta c \sin \omega t$, where δc is small. Then $p_{eq,A}(t)$ will also take the form $p_{eq,A}(t) = p_{eq,A}(c_0) + G(\omega)\, \delta c \sin \omega t + O(\delta c^2)$, where G(ω) is the transfer function. This transfer function therefore characterizes the dynamical response of the MWC molecule to fluctuations in the ligand concentration. From Equation (49) of Ref. [8], we find that the transfer function G(ω) is
$$G(\omega) = (\hat{e}_1 + \hat{e}_2)^\top (i \omega I - M_0)^{-1} M_1\, \mathrm{eig}_0(M(c_0)),$$
where
$$M_0 = \begin{pmatrix} -f_T & b_A & b_T & 0 \\ 0 & -(f_T' + b_A) & 0 & b_T' \\ f_T & 0 & -b_T & b_I \\ 0 & f_T' & 0 & -(b_T' + b_I) \end{pmatrix} \quad\text{and}\quad M_1 = \begin{pmatrix} -f_A & 0 & 0 & 0 \\ f_A & 0 & 0 & 0 \\ 0 & 0 & -f_I & 0 \\ 0 & 0 & f_I & 0 \end{pmatrix},$$
so that $M(c) = M_0 + c\, M_1$.
A series expansion (not shown here) confirms that G(ω) varies smoothly with the kinetic rates f_T', b_T', as would be expected from the observation that all expressions in Equation (18) vary smoothly with f_T', b_T'. To illustrate this, the magnitude of the transfer function, |G(ω)|, is plotted in Figure 7 for a randomly chosen initial concentration of c_0 = 1.
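The following sketch evaluates |G(ω)| exactly as Equation (18) is written, using the M_0/M_1 split above; the rates follow Figure 7 with the primed rates set to zero and c_0 = 1. The decay of |G(ω)| at large ω displays the low-pass behavior.

```python
# A sketch of the transfer-function magnitude |G(omega)|, as in Equation (18).
import numpy as np

fT, bT, fTp, bTp = 1.0, 1.0, 0.0, 0.0
fA, bA, fI, bI = 100.0, 0.1, 1.0, 1.0
M0 = np.array([[-fT,  bA,           bT,   0.0],
               [0.0, -(fTp + bA),   0.0,  bTp],
               [fT,   0.0,         -bT,   bI],
               [0.0,  fTp,          0.0, -(bTp + bI)]])
M1 = np.array([[-fA, 0.0,  0.0, 0.0],
               [fA,  0.0,  0.0, 0.0],
               [0.0, 0.0, -fI,  0.0],
               [0.0, 0.0,  fI,  0.0]])
c0 = 1.0
w, v = np.linalg.eig(M0 + c0 * M1)           # M(c0) = M0 + c0*M1
p_eq = np.real(v[:, np.argmin(np.abs(w))])
p_eq /= p_eq.sum()                           # eig_0(M(c0)), normalized

def G(omega):
    resolvent = np.linalg.inv(1j * omega * np.eye(4) - M0)
    return np.array([1.0, 1.0, 0.0, 0.0]) @ resolvent @ M1 @ p_eq

for omega in [0.1, 1.0, 10.0, 100.0]:
    print(omega, abs(G(omega)))              # magnitude falls off: low-pass
```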
Again, it is worth commenting on how these results would vary with larger n, i.e., a larger number of ligands potentially bound to the MWC molecule. We consider the dynamical model for this more complex MWC molecule as specified in Section 3.1. Just as for the case n = 1, the eigenvector of eigenvalue 0 of this larger MWC molecule's rate matrix is a continuous function of the kinetic rates f_T, b_T and f_T', b_T'; as a result, both the binding curve and the transfer function vary smoothly with these rates.

4. Discussion

Our overarching aim here was to study the link between intrinsic computation and functional computation by focusing on a popular model of a biological sensor, the Monod-Wyman-Changeux (MWC) molecule. While studying its intrinsic computation, we found interesting singular limits for C_μ. In particular, we found that the statistical complexity was infinite and that all pasts were causal states when two of the kinetic rates were nonzero, f_T', b_T' ≠ 0, no matter how small f_T', b_T'; and we found that the statistical complexity was zero when f_T = b_T = 0 but nonzero, and arbitrarily large, for arbitrarily small f_T, b_T > 0. While studying the MWC molecule's functional computation and its process' excess entropy E [10,11,12], we found no such singular limits with respect to these kinetic rates.
The reason for this is that the studied functional computations and the excess entropy are continuous functions of the trajectory distribution alone, while the statistical complexity must be written in terms of a distribution over causal states, which can vary in a non-continuous manner with the trajectory distribution. As a result, the statistical complexity (and the mixed-state presentation's box-counting dimension h_0) and the ϵ-machine are incredibly sensitive to a particular type of process structure, which includes but is not limited to forbidden words. On the other hand, the studied functional computations and the excess entropy are smoothly varying functions of the generative model's kinetic rates.
From the study of the MWC molecule alone, we can conclude that a restrictive definition of intrinsic computation, e.g., only causal structure, does not necessarily provide a guide to the functional computation of a dynamical system, at least for the functional computations considered here. Accordingly, high statistical complexity does not imply biological function.
It is well worth emphasizing that the functional computations listed here are far from an exhaustive list of all possible functional computations, and so future research might uncover a functional computation that depends sensitively on the process' causal structure. Also, even if no such functional computation is identified, the sensitivity of causal structure to certain changes in the generative model might be considered by some to be an interesting feature rather than a bug, perhaps as a case study in how limited computational resources yield innovation [1]. However, for those wishing to study functional computation only, the extreme sensitivity of statistical complexity to particular types of process structure might prove to be a bug rather than a feature.
However, even then, the ϵ-machine finds use. In more recent years, intrinsic computation has been expansively defined to include the study of other structure-related information-theoretic statistics of a process besides statistical complexity [21]. This list includes, but is not limited to, the excess entropy [10,11,12] studied here, the bound information rate [27], and predictive rate-distortion functions [28,29]. The last is particularly notable here in that predictive rate-distortion includes statistical complexity and excess entropy as limiting cases. On the whole, these quantities enjoy the label "information anatomy" [19] or the more broadly construed "informational architecture". Most of these quantities are smoothly varying functions of the transition probabilities of the minimal generative model, and thus of the trajectory distribution, and so are not as sensitive to the underlying structure of the process as is the statistical complexity. (The statistical complexity can vary discontinuously with the transition probabilities of the minimal generative model, but varies smoothly with the transition probabilities of the minimal maximally predictive model; the structure of the minimal maximally predictive model, the ϵ-machine, can change by an arbitrarily large amount under arbitrarily small changes to the minimal generative model.) Those quantities that are not so sensitive to the process' structure are often easily calculable from the process' ϵ-machine [29,30]. In the future, these quantities might provide interesting statistics with which to interpret the functional computation performed by biological or social systems, e.g., as in Ref. [31].

Funding

The author acknowledges generous funding from the U.C. Berkeley Chancellor’s Fellowship, the National Science Foundation Graduate Research Fellowship, and the MIT Physics of Living Systems Fellowship.

Acknowledgments

The author would like to thank James P. Crutchfield for clarifying discussions, as well as two anonymous referees.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Crutchfield, J.P. The calculi of emergence: Computation, dynamics, and induction. Phys. D Nonlinear Phenom. 1994, 75, 11–54.
2. Pearl, J. Causality; Cambridge University Press: Cambridge, UK, 2009.
3. White, O.L.; Lee, D.D.; Sompolinsky, H. Short-term memory in orthogonal neural networks. Phys. Rev. Lett. 2004, 92, 148102.
4. Ganguli, S.; Huh, D.; Sompolinsky, H. Memory traces in dynamical systems. Proc. Natl. Acad. Sci. USA 2008, 105, 18970–18975.
5. Creutzig, F.; Globerson, A.; Tishby, N. Past-future information bottleneck in dynamical systems. Phys. Rev. E 2009, 79, 041925.
6. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133.
7. Martins, B.M.; Swain, P.S. Trade-offs and constraints in allosteric sensing. PLoS Comput. Biol. 2011, 7, e1002261.
8. Marzen, S.; Garcia, H.G.; Phillips, R. Statistical mechanics of Monod–Wyman–Changeux (MWC) models. J. Mol. Biol. 2013, 425, 1433–1460.
9. Changeux, J.P. 50 years of allosteric interactions: The twists and turns of the models. Nat. Rev. Mol. Cell Biol. 2013, 14, 819–829.
10. Grassberger, P. Toward a quantitative theory of self-generated complexity. Int. J. Theor. Phys. 1986, 25, 907–938.
11. Bialek, W.; Nemenman, I.; Tishby, N. Predictability, complexity, and learning. Neural Comput. 2001, 13, 2409–2463.
12. Bialek, W.; Nemenman, I.; Tishby, N. Complexity through nonextensivity. Phys. A Stat. Mech. Appl. 2001, 302, 89–99.
13. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley-Interscience: New York, NY, USA, 2006.
14. Nair, C.; Prabhakar, B.; Shah, D. On entropy for mixtures of discrete and continuous variables. arXiv 2006, arXiv:cs/0607075.
15. Crutchfield, J.P.; Young, K. Inferring statistical complexity. Phys. Rev. Lett. 1989, 63, 105–108.
16. Shalizi, C.R.; Crutchfield, J.P. Computational mechanics: Pattern and prediction, structure and simplicity. J. Stat. Phys. 2001, 104, 817–879.
17. Marzen, S.E.; Crutchfield, J.P. Nearly maximally predictive features and their dimensions. Phys. Rev. E 2017, 95, 051301.
18. Marzen, S.E.; Crutchfield, J.P. Structure and randomness of continuous-time, discrete-event processes. J. Stat. Phys. 2017, 169, 303–315.
19. James, R.G.; Ellison, C.J.; Crutchfield, J.P. Anatomy of a bit: Information in a time series observation. Chaos Interdiscip. J. Nonlinear Sci. 2011, 21, 037109.
20. Marzen, S.; DeWeese, M.R.; Crutchfield, J.P. Time resolution dependence of information measures for spiking neurons: Scaling and universality. Front. Comput. Neurosci. 2015, 9, 105.
21. Crutchfield, J.P. Between order and chaos. Nat. Phys. 2012, 8, 17–24.
22. Marzen, S.; Crutchfield, J.P. Informational and causal architecture of continuous-time renewal processes. J. Stat. Phys. 2017, 168, 109–127.
23. Crutchfield, J.P.; Ellison, C.J.; Mahoney, J.R. Time's barbed arrow: Irreversibility, crypticity, and stored information. Phys. Rev. Lett. 2009, 103, 094101.
24. Ellison, C.J.; Mahoney, J.R.; Crutchfield, J.P. Prediction, retrodiction, and the amount of information stored in the present. J. Stat. Phys. 2009, 136, 1005–1034.
25. Marzen, S.; Crutchfield, J.P. Informational and causal architecture of discrete-time renewal processes. Entropy 2015, 17, 4891–4917.
26. Upper, D.R. Theory and Algorithms for Hidden Markov Models and Generalized Hidden Markov Models. Ph.D. Thesis, University of California, Berkeley, CA, USA, 1997.
27. Abdallah, S.A.; Plumbley, M.D. A measure of statistical complexity based on predictive information. arXiv 2010, arXiv:1012.1890.
28. Still, S.; Crutchfield, J.P.; Ellison, C.J. Optimal causal inference: Estimating stored information and approximating causal architecture. Chaos Interdiscip. J. Nonlinear Sci. 2010, 20, 037111.
29. Marzen, S.E.; Crutchfield, J.P. Predictive rate-distortion for infinite-order Markov processes. J. Stat. Phys. 2016, 163, 1312–1338.
30. Crutchfield, J.P.; Ellison, C.J.; Riechers, P.M. Exact complexity: The spectral decomposition of intrinsic computation. Phys. Lett. A 2016, 380, 998–1002.
31. Palmer, S.E.; Marre, O.; Berry, M.J., II; Bialek, W. Predictive information in a sensory population. arXiv 2013, arXiv:1307.0225.
Figure 1. A dynamical single-site Monod-Wyman-Changeux molecule, with kinetic rates as shown. States marked $A_i$ are active with i bound ligand molecules, while states marked $I_i$ are inactive with i bound ligand molecules. When transitioning from states $A_0$, $A_1$, the symbol A is emitted, while when transitioning from states $I_0$, $I_1$, the symbol I is emitted.
Figure 2. The box-counting dimension of the mixed-state presentation changes drastically with f_T', b_T'. For both processes, we have: f_T = 1.0, f_A c = 2.9, b_A = 3.4, b_T = 3, f_I c = 4, b_I = 2. The process with nonzero f_T', b_T' has a scaling of $\log N_\epsilon \sim \log(1/\epsilon)$ and thus a nonzero box-counting dimension h_0 > 0, whereas the process with f_T' = b_T' = 0 has a scaling of $\log N_\epsilon \sim \log\log(1/\epsilon)$ and thus a box-counting dimension h_0 = 0.
Figure 3. At left, a generative model of the process generated by the MWC molecule of Figure 1 in fixed ligand concentration c with f_T' = b_T' = 0. The dwell-time distributions $\phi_A(t)$ and $\phi_I(t)$ are given in Equations (8) and (9). At right, the corresponding topological ϵ-machine. While emitting A, one moves along the "conveyer belt" starting with state A to the left; while emitting I, one moves along the conveyer belt starting with state I to the right. To switch the letter that one is emitting, one jumps to the other conveyer belt. The states along the conveyer belt to the left correspond to the time that one has been inactive, and the states along the conveyer belt to the right correspond to the time that one has been active.
Figure 4. $\phi_A(t)$ for f_A = b_A = 1.0 and f_T = 1.0 (blue), f_T = 2.0 (orange), and f_T = 3.0 (green), calculated using Equation (8).
Figure 5. Contour plot of C_μ (a) and E (b) as a function of f_T, b_T when f_T' = b_T' = 0 and f_A c = f_I c = b_A = b_I = 1.
Figure 6. Probability of being in the active state, $p_{eq,A}(c)$, as a function of ligand concentration c, for f_T = 1, f_A = 100, b_A = 0.1, b_T = 1, f_I = 1, b_I = 1, and: f_T' = b_T' = 0 (blue); f_T' = 0.01 and b_T' = 0 (orange); and f_T' = 0 and b_T' = 10 (green), almost overlaying the blue.
Figure 7. Transfer function magnitude |G(ω)| as a function of input frequency ω at randomly chosen initial concentration c_0 = 1, for f_T = 1, f_A = 100, b_A = 0.1, b_T = 1, f_I = 1, b_I = 1, and: f_T' = b_T' = 0 (blue); f_T' = 0.01 and b_T' = 0 (orange); and f_T' = 0 and b_T' = 10 (green), almost overlaying the blue.
