Article

Stochastic Thermodynamics of Multiple Co-Evolving Systems—Beyond Multipartite Processes

by Farita Tasnim 1 and David H. Wolpert 2,3,4,5,*
1 Massachusetts Institute of Technology, Cambridge, MA 02139, USA
2 Santa Fe Institute, Santa Fe, NM 87501, USA
3 Complexity Science Hub, Josefstädter Straße 39, 1080 Vienna, Austria
4 Center for Bio-Social Complex Systems, Arizona State University, Tempe, AZ 85287, USA
5 International Center for Theoretical Physics, 34151 Trieste, Italy
* Author to whom correspondence should be addressed.
Entropy 2023, 25(7), 1078; https://doi.org/10.3390/e25071078
Submission received: 16 May 2023 / Revised: 14 July 2023 / Accepted: 14 July 2023 / Published: 17 July 2023
(This article belongs to the Special Issue Thermodynamic Uncertainty Relations)

Abstract
Many dynamical systems consist of multiple, co-evolving subsystems (i.e., they have multiple degrees of freedom). Often, the dynamics of one or more of these subsystems will not directly depend on the state of some other subsystems, resulting in a network of dependencies governing the dynamics. How does this dependency network affect the full system’s thermodynamics? Prior studies on the stochastic thermodynamics of multipartite processes have addressed this question by assuming that, in addition to the constraints of the dependency network, only one subsystem is allowed to change state at a time. However, in many real systems, such as chemical reaction networks or electronic circuits, multiple subsystems can—or must—change state together. Here, we investigate the thermodynamics of such composite processes, in which multiple subsystems are allowed to change state simultaneously. We first present new, strictly positive lower bounds on entropy production in composite processes. We then present thermodynamic uncertainty relations for information flows in composite processes. We end with strengthened speed limits for composite processes.

1. Introduction

Many dynamical systems can be decomposed into a set of multiple co-evolving degrees of freedom. A canonical example is where the separate degrees of freedom are physically distinct variables. As a particularly important example, information-processing systems such as computers and brains consist of many separate components that evolve together and affect each other’s dynamics—these components are the subsystems of the overall system. For brevity, we will refer to the different degrees of freedom co-evolving in an overall system as different subsystems.
Often, the dynamics of one of the subsystems will not directly depend on the joint state of all the other subsystems, but only on the state of a subset of the other subsystems. Examples range from spin networks to chemical reaction networks (CRNs) to biological systems to electronic circuits to full, digital computers. How does the thermodynamics of such systems depend on the network of dependencies in the dynamics of its constituent subsystems?
Thus far, in stochastic thermodynamics, this question has primarily been analyzed by imposing the additional requirement that only one subsystem can change state at any given time. Such systems are known as multipartite processes (MPPs) [1,2,3]. Physically, for a system to be an MPP, it must be that each subsystem is (effectively) coupled to a different set of external reservoirs from the other subsystems.
However, there are many situations where multiple subsystems can—or, in fact, must—change state at the same time. As a canonical example, in the finite size limit of CRNs (Figure 1a), multiple species counts must change state concurrently. As another example, the voltages on different conductors in a circuit (Figure 1b) must change state at the same time. Physically, whenever multiple subsystems can change state simultaneously, these subsystems are (effectively) coupled to some of the same external reservoirs. In the language of stochastic thermodynamics, they share “mechanisms”.
Systems with these more general types of coupled subsystems are called composite systems, and their overall dynamics is called a composite process. There has been some preliminary work extending the stochastic thermodynamics of MPPs to consider composite processes [4]. (It should also be noted that [5] considered scenarios where different subsystems share mechanisms, restricted to the specific issue of how to extend the concept of the “learning rate” to biochemical diffusion processes that have this characteristic.) Here, we extend the preliminary work in [4] and obtain new results on the stochastic thermodynamics of composite processes.
We begin by reviewing some of this earlier work on the stochastic thermodynamics of composite processes. A central feature in the stochastic thermodynamics of a composite process is the set of “units” of the process. As formalized below, these are subsets of all of the subsystems whose joint dynamics does not depend on the state of the rest of the system. (In general, though, the dynamics of external subsystems will depend on the dynamics of some of the subsystems within a given unit.) Our first contribution is to illustrate how to apply previous bounds on entropy production (EP) from the literature (e.g., thermodynamic uncertainty relations (TURs [6,7,8,9,10]), kinetic uncertainty relations (KURs [11,12]), mismatch cost bounds on EP [13,14,15,16], speed limit theorems (SLTs [17,18,19]), etc.) to individual units in a composite process, in order to derive strictly positive lower bounds on the EP of the overall system. Crucially, as we show, this can sometimes be done when none of the previous bounds apply to the entire system as a whole, since the conditions for these bounds only apply to individual units, and not to the overall system.
Next, we show how to decompose key quantities of an overall composite process (including probability flows, EP, and dynamical activity) into the contributions of each mechanism. We then combine this decomposition of key quantities with the definition of units to derive a set of new TURs. Finally, we derive a strengthened SLT for composite processes. This speed limit provides a tighter restriction on how much the probability distribution over system states can change during a fixed time interval, using the contributions from each mechanism to EP and dynamical activity. (Note that these results also apply to MPPs, since MPPs are a special case of composite processes.) We conclude by discussing our results in the broader contexts of the thermodynamics of constraints and the thermodynamics of computation and by suggesting avenues for future work.

2. Stochastic Thermodynamics of Composite Processes

2.1. Background on Composite Processes

A composite process is a generalization of MPPs, describing the Markovian co-evolution of a finite set of subsystems $\mathcal{N} = \{1, 2, \ldots, N\}$. Each subsystem $i$ has a discrete state space $X_i$ [4]. $x$ indicates a state vector in $X = \times_{i \in \mathcal{N}} X_i$, the joint state space of the full system; $x_A$ indicates a state vector in $X_A = \times_{i \in A} X_i$, the joint state space of a subset $A \subseteq \mathcal{N}$. The probability that the entire system is in state $x$ at time $t$ evolves according to a master equation:
$$\frac{d}{dt}\, p_x(t) \;=\; \sum_{x'} K_{x x'}(t)\, p_{x'}(t) \tag{1}$$
Physically, this stochastic dynamics arises due to couplings of the system with a set of mechanisms $\mathcal{V} = \{v_1, v_2, \ldots, v_M\}$ [20,21]. In general, each such mechanism $v$ couples to only a subset of the subsystems. We refer to the set of subsystems to which a mechanism $v$ couples as its puppet set and write it as $P(v) \subseteq \mathcal{N}$.
As an example, an MPP is a composite process where each mechanism couples to only one subsystem (although a single subsystem might be coupled to multiple mechanisms [1]). Thus, in an MPP, the cardinality of every puppet set is 1.
At any given time, a composite system changes state due to its interaction with, at most, one mechanism, as with MPPs. Accordingly, the rate matrix of the overall system is a sum of mechanism-specific rate matrices:
$$K_{x x'}(t) \;=\; \sum_{v \in \mathcal{V}}\; \delta_{x_{\mathcal{N}\setminus P(v)},\, x'_{\mathcal{N}\setminus P(v)}}\; K_{\big(x_{P(v)},\, x_{\mathcal{N}\setminus P(v)}\big)\,\big(x'_{P(v)},\, x'_{\mathcal{N}\setminus P(v)}\big)}(t) \tag{2}$$
$$=:\; \sum_{v \in \mathcal{V}} K_{x x'}(v;t) \tag{3}$$
(Here, and throughout, for any two variables $z$, $z'$ contained in the same space, $\delta_{z z'}$ is the Kronecker delta function, which equals 1 when $z = z'$ and equals 0 otherwise.)
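To make Equations (2) and (3) concrete, here is a minimal numerical sketch (ours, not the paper's; the two-subsystem example, rates, and the helper name mech_rate are all illustrative assumptions) that assembles a composite rate matrix as a sum of mechanism-specific rate matrices, one of which changes two subsystems in a single jump:

```python
# Sketch of Equations (2)-(3): a composite rate matrix assembled from
# mechanism-specific rate matrices. Subsystems are binary; one mechanism
# flips subsystem 0 alone, another flips subsystems 0 and 1 together
# (a shared mechanism, so this system is composite but not an MPP).
import itertools
import numpy as np

STATE_SPACES = [2, 2]
states = list(itertools.product(*[range(n) for n in STATE_SPACES]))
idx = {s: i for i, s in enumerate(states)}

def mech_rate(puppet, rate):
    """Rate matrix K(v): each jump flips every subsystem in `puppet` at once."""
    K = np.zeros((len(states), len(states)))
    for s in states:                       # s = source state x'
        d = list(s)
        for i in puppet:                   # the jump changes exactly P(v)
            d[i] = 1 - d[i]
        K[idx[tuple(d)], idx[s]] += rate
    K -= np.diag(K.sum(axis=0))            # make columns sum to zero
    return K

# Equation (3): K(t) = sum over mechanisms of K(v; t)
K = mech_rate(puppet=[0], rate=1.0) + mech_rate(puppet=[0, 1], rate=0.3)
p = np.full(len(states), 0.25)             # uniform joint distribution
print(K @ p)   # d/dt p from Equation (1): all zeros, uniform is stationary here
```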
An example of a composite process was presented in [4]. This example involved a particle performing a random walk across a two-dimensional grid. The different subsystems corresponded to different degrees of freedom in a coordinate system for the particle.
As another example, consider a toy stochastic CRN (Figure 1a) [22,23,24]. This network involves four co-evolving species { X 1 , X 2 , X 3 , X 4 } that change state according to three chemical reactions { A , B , C } (top panel in figure, LHS). The system state is a four-component vector specifying the number of each type of molecule (species) in the system. We identify each of these four separate degrees of freedom as its own subsystem.
Only one reaction can occur at a time, but when a reaction does occur, multiple subsystems all change their states. For example, in the forward reaction A, species $X_1$, $X_2$, and $X_3$ must change state at the same time, by counts of $\{-2, -1, +1\}$, respectively. Accordingly, this reaction network is not an MPP. However, it is a composite process.
We can illustrate this composite process in terms of the associated puppet sets. There are a total of three such puppet sets, one for each of the possible chemical reactions. We can denote the mechanisms of the three puppet sets as r A , r B , and r C , with the puppet set of mechanism r denoted as P ( r ) . These three puppet sets are indicated by translucent bubbles in the top panel in the figure, RHS.
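The following sketch (again ours; the stoichiometries of reactions B and C and all rate constants are assumptions made purely for illustration, while reaction A uses the $\{-2,-1,+1\}$ counts mentioned above) simulates this toy CRN with a Gillespie-style algorithm. Note that each reaction event updates every species in that reaction's puppet set in a single jump:

```python
# Gillespie-style simulation of a toy stochastic CRN viewed as a composite
# process: one mechanism (reaction) fires per jump, but each jump moves
# several subsystems (species counts) at once.
import numpy as np

rng = np.random.default_rng(0)
x = np.array([20, 20, 5, 5])                 # counts of species X1..X4

STOICH = np.array([[-2, -1, +1,  0],         # reaction A: puppet set {X1, X2, X3}
                   [ 0, +1, -1, +1],         # reaction B (assumed stoichiometry)
                   [+1,  0,  0, -1]])        # reaction C (assumed stoichiometry)
RATES = np.array([0.3, 0.2, 0.1])            # rate constants (assumed)

t = 0.0
for _ in range(1000):
    # Crude constant propensities (an assumption; a real CRN would use
    # mass-action combinatorics), zeroed out when a reaction would drive
    # some species count negative.
    feasible = (x + STOICH >= 0).all(axis=1)
    a = RATES * feasible
    if a.sum() == 0:
        break
    t += rng.exponential(1.0 / a.sum())
    r = rng.choice(len(a), p=a / a.sum())    # choose one mechanism...
    x += STOICH[r]                           # ...which updates its whole puppet set
print(t, x)
```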
As a final example, consider a toy electronic circuit [25] consisting of four conductors (the four circles on the left-hand side of Figure 1b) and three devices (the three bidirectional arrows in the figure). The state of the system is a vector consisting of the voltage on each conductor. Two of the conductors (1 and 4) are “regulated”, since they are tied directly to fixed voltage sources ( V 1 and V 4 ). The other two conductors (2 and 3) are “free” to stochastically change state via the effect of devices A, B, and C.
The composite process capturing the dynamics of the state of this circuit is illustrated on the right-hand side of the figure. There are three puppet sets (each a translucent bubble), each corresponding to a mechanism associated with one of the devices in the system. The mechanisms are denoted as r A , r B , and r C , and the puppet set of mechanism r is denoted as P ( r ) .
In an MPP, although the mechanisms that affect the dynamics of any subsystem i do not affect the dynamics of any other subsystem, in general, the dynamics of i will depend on the states of some set of other subsystems. For example, in a bipartite process [1,26], both of the subsystems can be modeled as having their own set of mechanisms, but each subsystem’s dynamics is governed by the state of the other subsystem as well as its own state.
Similarly, in a composite process, the dynamics of each subsystem $i$ can depend on the states of other subsystems in addition to its own state. Each such dependency can be represented as an edge in a directed graph. In the resulting dependency network, each edge $j \to i$ means that the state of subsystem $j$ affects the rate of state transitions in subsystem $i$. (We do not assign the self-dependency of a subsystem’s dynamics its own edge.) We refer to the set of subsystems whose states affect the dynamics of $i$ as the leaders of $i$: in any dependency network, the leaders of subsystem $i$ are $i$ itself together with its parents in the dependency graph, $\mathrm{pa}(i)$. Thus, $j \to i$ means that $j$ is a leader of $i$, in addition to $i$ itself.
The leader set for a mechanism $v$ is defined to be the union of the leaders of each subsystem in the puppet set of $v$: $\ell(v) = \bigcup_{i \in P(v)} \big(\{i\} \cup \mathrm{pa}(i)\big)$, where $\mathrm{pa}(i)$ represents the parents of $i$ in the dependency network. As an example, although the puppet set of mechanism $v_2$ in Figure 2 is $\{A, C, D\}$, the leader set of $v_2$ is $\{A, B, C, D\}$.
The leader set of any mechanism is a (perhaps proper) superset of its puppet set. Accordingly, we can write
$$K_{x x'}(v;t) \;=\; K_{\big(x_{P(v)},\, x_{\ell(v)\setminus P(v)},\, x_{\mathcal{N}\setminus\ell(v)}\big)\,\big(x'_{\ell(v)},\, x'_{\mathcal{N}\setminus\ell(v)}\big)}(v;t) \tag{4}$$
With abuse of notation, we can rewrite this in a way that explicitly embodies the fact that the instantaneous dynamics of the puppet set $P(v)$ depends at most on the state of the leader set $\ell(v)$, and not on the state of any of the subsystems in $\mathcal{N}\setminus\ell(v)$:
$$K_{\big(x_{P(v)},\, x_{\ell(v)\setminus P(v)}\big)\, x'_{\ell(v)}}(v;t) \;=:\; K_{x_{P(v)}\, x'_{\ell(v)}}(v;t) \tag{5}$$

2.2. Background on Units

A unit  ω N is a collection of subsystems such that, as the full system’s state evolves via a master equation according to K ( t ) , the marginal distribution over the states of the unit also evolves according to its own CTMC,
$$\frac{d}{dt}\, p_{x_\omega}(t) \;=\; \sum_{x'_\omega} K_{x_\omega x'_\omega}(\omega;t)\, p_{x'_\omega}(t) \tag{6}$$
for some associated rate matrix $K(\omega;t)$. (See [4] for some additional, technical considerations.) Intuitively, a unit is any set of subsystems whose evolution is independent of the states of the subsystems outside the unit. Typically, a unit is a union of leader sets. In such cases, no subsystem in the unit has parents outside of the unit. Importantly, though, this does not prevent there being a subsystem in the unit that is a leader for some subsystem outside of the unit. Thus, there can be mechanisms whose puppet set contains both a subsystem in a unit and a subsystem outside of the unit. Informally speaking, the boundary of a unit in a dependency network can have outgoing edges, even though it cannot have any incoming edges. However, without loss of generality, we can assume that this is not the case. (See Prop. 2.1 in [4].)
Any union of units is a unit, and any non-empty intersection of units is a unit [4]. For simplicity, we restrict our attention to systems whose units do not change in time (even though their rate matrices may). Note that the entire system $\mathcal{N}$ itself is a unit. We denote the set of all units as $\boldsymbol{N}$. Thus, the units of a composite process provide a topology over the subsystems in the process, whose open sets are the members of $\boldsymbol{N}$ together with the empty set. (The dependency network of a composite process should not be confused with the “dependency graph” discussed in [4]. Dependency graphs relate the different units in a composite process to one another, not the different subsystems.) We also write $\nu(\omega)$ for the set of reservoirs governing the stochastic dynamics of the subsystems in $\omega$, and $K_{x_\omega x'_\omega}(v;t)$ for the (additive) contribution of reservoir $v \in \nu(\omega)$ to $K_{x_\omega x'_\omega}(\omega;t)$, the rate matrix governing the joint dynamics of the subsystems in $\omega$. (In other words, $K_{x_\omega x'_\omega}(v;t)$ is a re-expression of the term on the RHS of Equation (7).)
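As a quick numerical illustration of the definition of a unit (a sketch with assumed rates, not taken from [4]): in the system below, subsystem 0 evolves autonomously while subsystem 1's rates depend on subsystem 0, so $\omega = \{0\}$ is a unit, and its marginal evolves under its own rate matrix, per Equation (6):

```python
# Check that the marginal distribution of the unit {0} obeys its own
# master equation, even though the joint dynamics involves subsystem 1.
import itertools
import numpy as np
from scipy.linalg import expm

states = list(itertools.product(range(2), range(2)))    # (x0, x1)
idx = {s: i for i, s in enumerate(states)}

K = np.zeros((4, 4))
for (a, b) in states:
    K[idx[(1 - a, b)], idx[(a, b)]] += 0.7               # mechanism flipping x0
    K[idx[(a, 1 - b)], idx[(a, b)]] += 0.2 + 0.9 * a     # flips x1 at an x0-dependent rate
K -= np.diag(K.sum(axis=0))

K0 = np.array([[-0.7, 0.7], [0.7, -0.7]])                # the unit's own rate matrix

p0 = np.array([0.5, 0.2, 0.2, 0.1])                      # joint over (x0, x1)
tau = 0.8
marg_joint = (expm(tau * K) @ p0).reshape(2, 2).sum(axis=1)   # x0 marginal at tau
marg_unit = expm(tau * K0) @ p0.reshape(2, 2).sum(axis=1)     # unit CTMC prediction
print(np.allclose(marg_joint, marg_unit))                # True
```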
A set of units N * is called a unit structure if it obeys the following properties [4].
  • The union of the units in the set equals $\mathcal{N}$, i.e., they cover $\mathcal{N}$:
$$\mathcal{N}^* = \{\omega_1, \omega_2, \ldots\} \;:\; \bigcup_{\omega \in \mathcal{N}^*} \omega = \mathcal{N}$$
  • The set is closed under intersections of its units:
$$\forall\, (\omega_1, \omega_2) \in (\mathcal{N}^*)^2:\quad \omega_1 \cap \omega_2 \in \mathcal{N}^*$$
From now on, for simplicity, we will restrict our attention to unit structures $\mathcal{N}^*$ where $\mathcal{N} \in \mathcal{N}^*$.
Since each separate unit in a unit structure evolves according to its own CTMC, all the usual theorems of stochastic thermodynamics apply to each unit separately, with the expected EP rate of a unit ω at time t defined exactly as it is in conventional stochastic thermodynamics [20,21]:
$$\dot\sigma_\omega(t) \;=\; \sum_{x_\omega,\, x'_\omega}\; \sum_{v \in \nu(\omega)} K_{x_\omega x'_\omega}(v;t)\, p_{x'_\omega}(t)\, \ln\frac{K_{x_\omega x'_\omega}(v;t)\, p_{x'_\omega}(t)}{K_{x'_\omega x_\omega}(v;t)\, p_{x_\omega}(t)} \tag{7}$$
In particular, the Second Law applies to any unit, as do the mismatch cost theorems. Similarly, when their conditions hold for a particular unit, the associated TURs hold for this unit, and the same is true for the SLTs, the fluctuation theorems [27], and thermodynamic bounds based on first-passage times and stopping times [28,29,30].
In addition to these previous results of stochastic thermodynamics, the relationships among the units in a unit structure provide some new stochastic thermodynamics theorems as well. As an example, for any pair of nested units $\omega$ and $\alpha \subseteq \omega$,
$$\dot\sigma_\omega(t) \;\ge\; \dot\sigma_\alpha(t) \tag{8}$$
Loosely speaking, this can be viewed as an extension of the well-known result that ignoring the degrees of freedom in a system cannot increase the associated EP, only decrease it [20]. (See [3,4] for the derivation and discussion of this and related results.) In the sequel, we follow conventional notation, dropping the dot over σ to indicate the integrated value of the EP rate across some time interval.
Let $\mathcal{N}^* = \{\omega_j : j = 1, 2, \ldots, n\}$ be a unit structure. In particular, for all $i, j \in \{1, 2, \ldots, n\}$, $\omega_j \cap \omega_i = \omega_k$ for some $\omega_k \in \mathcal{N}^*$. Suppose that we have a set of real numbers, $f$, which are indexed (using superscripts) by the units in $\mathcal{N}^*$. It will be convenient to use the associated shorthand,
$$\widehat{\sum_{\omega \in \mathcal{N}^*}} f^\omega \;:=\; \sum_{j=1}^{n} f^{\omega_j} \;-\; \sum_{1 \le j < j' \le n} f^{\omega_j \cap \omega_{j'}} \;+\; \sum_{1 \le j < j' < j'' \le n} f^{\omega_j \cap \omega_{j'} \cap \omega_{j''}} \;-\; \cdots \tag{9}$$
which is well defined because the set of units in a unit structure is closed under intersection. (Note that the precise assignment of integer indices to the units in N * does not affect the value of this sum.) This quantity is called the inclusion–exclusion sum (or simply “in–ex sum” for short) of f for the unit structure N * . (See [31] for background on the inclusion–exclusion principle.)
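A small helper (our own sketch; the function name inex_sum is an assumption) makes the definition in Equation (9) concrete for a unit structure represented as a list of frozensets:

```python
# Inclusion-exclusion ("in-ex") sum of Equation (9): alternate the sign of f
# over all k-fold intersections of the units in the structure.
from itertools import combinations

def inex_sum(units, f):
    total = 0.0
    for k in range(1, len(units) + 1):
        for combo in combinations(units, k):
            inter = frozenset.intersection(*combo)
            total += (-1) ** (k + 1) * f(inter)
    return total

# Toy check with f = |unit|: for units {1,2} and {2,3}, each subsystem is
# counted exactly once: 2 + 2 - 1 = 3.
print(inex_sum([frozenset({1, 2}), frozenset({2, 3})], len))   # 3.0
```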
Next, define the time-t in–ex information as
$$\mathcal{I}^{\mathcal{N}^*} \;:=\; \left(\widehat{\sum_{\omega \in \mathcal{N}^*}} S^\omega\right) - S^{\mathcal{N}} \;=\; -S^{\mathcal{N}} + \sum_{j=1}^{n} S^{\omega_j} - \sum_{1 \le j < j' \le n} S^{\omega_j \cap \omega_{j'}} + \cdots \tag{10}$$
where all the terms in the sums on the RHS are marginal entropies over the (distributions over the coordinates in the) indicated units. As an example, if N * consists of two units, ω 1 , ω 2 , with no intersection, then the expected in–ex information at time t is simply the mutual information between these units at that time. More generally, if there is an arbitrary number of units in N * but none of them overlap, then the expected in–ex information is what is called the “multi-information”, or “total correlation”, among these units [32].
In App. E in [4], it is proven that the global EP incurred during a time period [ 0 , τ ] can be decomposed as
$$\sigma^{\mathcal{N}} \;=\; \widehat{\sum_{\omega \in \mathcal{N}^*}} \sigma^\omega \;-\; \Delta\mathcal{I}^{\mathcal{N}^*} \tag{11}$$
Intuitively, the negative-valued terms in the in–ex sums correct for the overcounting of the thermodynamic quantities arising due to the fact that units can overlap with one another. (The formal proof of Equation (11) is somewhat elaborate, using Rota’s extension of the inclusion–exclusion principle to negative-valued measures and the fact—proven in App. M in [4]—that the heat flow into the unit structure decomposes into an in–ex sum.)

3. Strictly Positive Lower Bounds on EP from Its In–Ex Decomposition

In some situations, some of the many thermodynamic bounds (TURs, SLTs, mismatch cost, KURs, etc.) will apply separately to some of the units within a system, but no such bound will apply to the overall system. The in–ex sum decomposition of EP can sometimes be used to “knit together” those bounds on the EP generated by the individual units, to give a bound on the EP generated by the overall system.

3.1. Mismatch Cost

We can illustrate this by exploiting a recent result concerning the minimal EP generated in any dynamic process [13,14,15,16,33]. Suppose that we have a (perhaps stochastic) dynamic process, taking any initial distribution $p(x)$ to an ending distribution $p'(x') = \sum_x P(x'|x)\, p(x)$, which we express in shorthand as $p' = Pp$. (Thus, $P$ is the conditional distribution of the ending state given the initial state.) Write $\sigma(p)$ for the EP generated by running this process on the distribution $p$. Let $q$ be the initial distribution minimizing $\sigma(q)$ for this fixed process. Then, it is always (exactly) true that for any initial distribution $p$,
$$\sigma(p) - \sigma(q) \;=\; D(p\,\|\,q) - D(Pp\,\|\,Pq) \;\ge\; 0 \tag{12}$$
where D ( . | | . ) is the relative entropy (Kullback–Leibler divergence).
In the literature, q is called the prior, due to its Bayes optimality when running a full thermodynamic cycle [33], and the drop in KL divergence in Equation (12) is called the mismatch cost. (It is important to note that Equation (12) applies for an extremely wide range of dynamic processes implementing P, for many types of state space, and for many other thermodynamic costs generated during the process, in addition to EP; see [15,34].)
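As a numerical sanity check of Equation (12) (a sketch; the channel and distributions are arbitrary choices, and q below is merely a stand-in for the optimal prior), the drop in KL divergence under a stochastic map is always non-negative:

```python
# Mismatch cost, Equation (12): D(p||q) - D(Pp||Pq) >= 0 for any initial
# distribution p and prior q, by the data-processing inequality.
import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

P = np.array([[0.9, 0.2],          # column-stochastic conditional distribution
              [0.1, 0.8]])
p = np.array([0.5, 0.5])
q = np.array([0.85, 0.15])         # stand-in for the EP-minimizing prior

mismatch = kl(p, q) - kl(P @ p, P @ q)
print(mismatch, mismatch >= 0)     # non-negative
```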
Now, suppose that there are multiple possible initial distributions $\{p_\theta\}$, indexed by a (countable) variable $\theta$. Suppose as well that $\theta \sim R$ for some distribution $R$. Thus, the expected mismatch cost for the fixed process $P$ is
$$\mathbb{E}_R\!\left[ D(p_\theta\,\|\,q) - D(Pp_\theta\,\|\,Pq) \right] \tag{13}$$
In contrast, the mismatch cost for the expected initial distribution is
$$D\big(\mathbb{E}_R(p_\theta)\,\|\,q\big) - D\big(P\,\mathbb{E}_R(p_\theta)\,\|\,Pq\big) \tag{14}$$
The difference between these two quantities is
$$\Big[ S\big(\mathbb{E}_R(p_\theta)\big) - \mathbb{E}_R\big(S(p_\theta)\big) \Big] \;-\; \Big[ S\big(P\,\mathbb{E}_R(p_\theta)\big) - \mathbb{E}_R\big(S(Pp_\theta)\big) \Big] \tag{15}$$
where $S(\cdot)$ is Shannon entropy as usual. This difference equals $\Delta JS_{R(\theta)}(\{p_\theta\})$, the drop during the process $P$ in the value of the Jensen–Shannon (JS) divergence for the set of distributions $p_\theta$ where $\theta$ is distributed according to $R(\theta)$ [35]. (Each bracketed term in Equation (15) is a JS divergence, so the difference is the initial JS divergence minus the final one; by the data-processing inequality, this drop is non-negative.)
Note that $\Delta JS_{R(\theta)}(\{p_\theta\})$ is independent of $q$. Accordingly, if we subtract and add the expression in Equation (14) from the expression in Equation (13), and then minimize over priors $q$, we obtain
$$\min_q \mathbb{E}_R\!\left[ D(p_\theta\,\|\,q) - D(Pp_\theta\,\|\,Pq) \right] \;=\; \Delta JS_{R(\theta)}(\{p_\theta\}) + \min_q \Big[ D\big(\mathbb{E}_R(p_\theta)\,\|\,q\big) - D\big(P\,\mathbb{E}_R(p_\theta)\,\|\,Pq\big) \Big] \;=\; \Delta JS_{R(\theta)}(\{p_\theta\}) \tag{16}$$
where the last equality uses the fact that the expression in Equation (14) is minimized (to 0) by taking $q = \mathbb{E}_R(p_\theta)$. (A variant of this result was first derived in [33], where JS divergence was called “entropic variance”. See also [36,37].)
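The identity in Equation (16) is easy to check numerically. In this sketch (all values arbitrary), the expected mismatch cost evaluated at the prior $q = \mathbb{E}_R(p_\theta)$ coincides with the drop in JS divergence:

```python
# Verify Equation (16): the minimum over q of the expected mismatch cost
# equals the drop in Jensen-Shannon divergence, attained at q = E_R[p_theta].
import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def js(ps, w):
    """JS divergence: entropy of the mixture minus the mixture of entropies."""
    mean = sum(wi * pi for wi, pi in zip(w, ps))
    ent = lambda p: -float(np.sum(p * np.log(p)))
    return ent(mean) - sum(wi * ent(pi) for wi, pi in zip(w, ps))

P = np.array([[0.7, 0.4], [0.3, 0.6]])                   # column-stochastic map
ps = [np.array([0.9, 0.1]), np.array([0.2, 0.8])]        # the p_theta
w = [0.5, 0.5]                                           # the distribution R

q = w[0] * ps[0] + w[1] * ps[1]                          # optimal prior
expected_mc = sum(wi * (kl(p, q) - kl(P @ p, P @ q)) for wi, p in zip(w, ps))
js_drop = js(ps, w) - js([P @ p for p in ps], w)
print(np.isclose(expected_mc, js_drop))                  # True
```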

3.2. Periodic Processes

Next, suppose that we have a process over a space X that is periodic, in the sense that, for some real number λ > 0 ,
$$P\big(x(n\lambda)\,\big|\,x((n-1)\lambda)\big) \;=\; P\big(x(\lambda)\,\big|\,x(0)\big) \tag{17}$$
is the same for all integers $n > 1$. Then, the EP generated during $N$ periods starting at $t = 0$, $\sigma(N\lambda)$, is lower-bounded by summing the mismatch cost over all $N$ periods [34,37]:
$$\sigma(N\lambda) \;\ge\; \inf_{q \in \Delta_X} \sum_{t=0}^{N-1} \Big[ D\big(P^t p_0\,\|\,q\big) - D\big(P^{t+1} p_0\,\|\,Pq\big) \Big] \tag{18}$$
where $P^t$ is the conditional distribution $P(x(\lambda)|x(0))$ iterated $t$ times, and $\Delta_X$ is the unit simplex over $X$.
Each term in the sum on the RHS of Equation (18) is non-negative, by the data-processing inequality for KL divergence. Moreover, so long as the conditional distribution $P(x(\lambda)|x(0))$ is non-degenerate (i.e., not simply a deterministic permutation over the states of $X$) and the initial distribution $p_0$ is not the steady state of $P$, at most one of the terms in the sum can equal zero; all the other terms must be strictly positive. In such a situation, Equation (18) provides a strictly positive lower bound on the EP, $\sigma(N\lambda)$.
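The following sketch evaluates the RHS of Equation (18) for a two-state example (our own construction; the map, initial distribution, and the grid-search approximation of the infimum are all illustrative):

```python
# Periodicity mismatch-cost bound, Equation (18), for a two-state map P
# iterated N times; the infimum over priors q is approximated by a grid
# search over the 1-simplex.
import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

P = np.array([[0.8, 0.3], [0.2, 0.7]])    # non-degenerate column-stochastic map
p0 = np.array([0.99, 0.01])               # far from the steady state of P

N = 5
traj = [p0]
for _ in range(N):
    traj.append(P @ traj[-1])

def bound(q):
    return sum(kl(traj[t], q) - kl(traj[t + 1], P @ q) for t in range(N))

qs = [np.array([a, 1 - a]) for a in np.linspace(0.01, 0.99, 197)]
print(min(bound(q) for q in qs))          # strictly positive, as the text predicts
```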
Indeed, we can re-express the sum on the RHS of Equation (18) as $N$ times the uniform average, over the set of $t$-parameterized distributions $\{P^t p_0\}$, of the mismatch cost between such a distribution and $q$. Inserting this into Equation (16) establishes that
$$\sigma(N\lambda) \;\ge\; N\, \Delta JS_{U_N(t)}\big(\{P^t p_0\}\big) \tag{19}$$
where $U_N(t)$ is the uniform distribution over the values $t = 0, \ldots, N-1$.
This lower bound on EP in Equation (19) is defined entirely in terms of the conditional distribution $P$ and the initial distribution $p_0$. None of the particular physical details of how the periodic process is physically implemented are involved in this “periodicity mismatch cost” lower bound on EP. In this generality, it is similar to the generalized Landauer bound. The TURs and SLTs are also lower bounds on EP that depend on the initial distribution over states and the discrete-time conditional distribution of the dynamics. However, unlike the periodicity mismatch cost lower bound, those bounds also depend on other properties of the process, beyond the initial distribution and the conditional distribution giving the dynamics. (For example, they depend on factors such as current precision or expected activities.) In this sense, the periodicity mismatch cost bound is more powerful than the other lower bounds on EP.

3.3. Example Where the In–Ex Decomposition Is Necessary to Lower-Bound EP

In this subsection, we show how to insert Equation (18) into the in–ex decomposition of EP, Equation (11), and thereby derive a strictly positive lower bound on the EP of the full system—a lower bound that we could not derive without using this decomposition.
Suppose that we have three subsystems, $A$, $B$, and $C$, of an overall system co-evolving with state spaces $X_A$, $X_B$, and $X_C$, respectively. Further, suppose that $A$, $AC$, and $AB$ are the units in a unit structure. Suppose as well that $A$ is in a meta-stable state (e.g., due to high energy barriers) throughout the co-evolution.
Consider the case where B undergoes an exactly periodic process parameterized by x A , i.e., given that x A does not change in time, there is some time interval λ B such that
$$P\big(x_B(n\lambda_B)\,\big|\,x_B((n-1)\lambda_B),\, x_A(0)\big) \;=\; P\big(x_B(\lambda_B)\,\big|\,x_B(0),\, x_A(0)\big) \tag{20}$$
is the same conditional distribution for all integers $n > 1$. Thus, the dynamics of $B$ is a time-homogeneous discrete-time Markov chain over timesteps given by the value of $n$, with the $n$-independent Markov kernel $P_{AB}\big(x_B(n+1)\,\big|\,x_B(n),\, x_A(0)\big)$. Similarly, $P\big(x_C(n\lambda_C)\,\big|\,x_C((n-1)\lambda_C),\, x_A(0)\big)$ is the same conditional distribution for all integers $n > 1$. Thus, the dynamics of $C$ is a time-homogeneous discrete-time Markov chain over timesteps given by the value of $n$, with an $n$-independent Markov kernel, namely $P_{AC}\big(x_C(n+1)\,\big|\,x_C(n),\, x_A(0)\big)$.
Choose $\lambda_B$ and $\lambda_C$ so that there are two coprime positive integers $m$ and $n$, with $n > m > 2$, such that $\lambda_B = m\lambda_C/n$. Thus, whatever the value of $x_A$, the dynamics of the overall system is periodic with period $\tau := n\lambda_B = m\lambda_C$.
Suppose that we are interested in the EP generated by the overall system in the time interval $[0, \tau]$. The RHS of Equation (18) is zero for the overall system for this time interval, since $[0, \tau]$ spans merely a single period of the overall system, and so the infimum equals zero regardless of the dynamics or the initial distribution (simply take $q = p_0$). Thus, Equation (18) cannot be used directly to bound the EP of the overall system.
However, we can apply Equation (11) to the overall system and then use Equation (18) to lower-bound the EP generated by the units A B and A C . (Note that since A is in a meta-stable state, it generates no EP.)
To do this, first extend the definition given below Equation (10) to write the conditional entropy of the variables in a unit $\alpha$ conditioned on the variables in a unit $\gamma$ as $S^{\alpha|\gamma}$. With this notation, by exploiting the fact that the entropy over the states of $A$ does not change during the process, we can expand Equation (10) for our unit structure as
$$\Delta\mathcal{I}^{\mathcal{N}^*} \;=\; -\Delta S^{ABC} + \Delta S^{AC} + \Delta S^{AB} \tag{21}$$
$$=\; -\Delta S^{BC|A} + \Delta S^{C|A} + \Delta S^{B|A} \tag{22}$$
$$=\; \Delta I(X_B ; X_C | X_A) \tag{23}$$
where the $\Delta S$ indicate the changes in a quantity from $t = 0$ to $t = \tau$, and $I(X_B; X_C | X_A)$ is the usual conditional mutual information.
Next, to evaluate $\Delta I(X_B; X_C | X_A)$, note that since $x_A(t)$ does not change in time, the dynamics of $x_B(t)$ is a function of $x_B(t)$ and $x_A(0)$, conditionally independent of the state $x_C(t)$; similarly, the dynamics of $x_C(t)$ is conditionally independent of the state $x_B(t)$. In other words, for each value $x_A(0) \in X_A$,
$$\big(x_B(\tau),\, x_A(0)\big) \;\leftrightarrow\; \big(x_B(0),\, x_A(0)\big) \;\leftrightarrow\; \big(x_C(0),\, x_A(0)\big) \;\leftrightarrow\; \big(x_C(\tau),\, x_A(0)\big)$$
is a Markov chain. Therefore, by the data-processing inequality [38], for each value of x A ( 0 ) , the mutual information between ( x B ( τ ) , x A ( 0 ) ) and ( x C ( τ ) , x A ( 0 ) ) is not greater than the mutual information between ( x B ( 0 ) , x A ( 0 ) ) and ( x C ( 0 ) , x A ( 0 ) ) . This establishes that Δ I ( X B ; X C | X A ) 0 , and so, by Equation (23),
$$\Delta\mathcal{I}^{\mathcal{N}^*} \;\le\; 0 \tag{24}$$
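This data-processing argument is easy to verify numerically. In the sketch below (random initial joint distribution, arbitrary $x_A$-dependent channels), evolving $B$ and $C$ by separate channels never increases the conditional mutual information:

```python
# Numeric check behind Equation (24): applying independent x_A-dependent
# channels to B and C cannot increase I(X_B; X_C | X_A).
import numpy as np

rng = np.random.default_rng(1)
p = rng.random((2, 2, 2)); p /= p.sum()          # joint over (x_A, x_B, x_C)

PB = {a: np.array([[0.9, 0.4], [0.1, 0.6]]) for a in (0, 1)}     # channel on B
PC = {0: np.array([[0.7, 0.2], [0.3, 0.8]]),
      1: np.array([[0.6, 0.5], [0.4, 0.5]])}                     # channel on C

def cond_mi(p):
    """I(X_B; X_C | X_A) for a joint array indexed as p[a, b, c]."""
    total = 0.0
    for a in (0, 1):
        pa = p[a].sum()
        pbc = p[a] / pa                           # p(b, c | a)
        pb, pc = pbc.sum(axis=1), pbc.sum(axis=0)
        for b in (0, 1):
            for c in (0, 1):
                if pbc[b, c] > 0:
                    total += pa * pbc[b, c] * np.log(pbc[b, c] / (pb[b] * pc[c]))
    return total

q = np.stack([np.einsum('ib,jc,bc->ij', PB[a], PC[a], p[a]) for a in (0, 1)])
print(cond_mi(q) <= cond_mi(p) + 1e-12)          # True: Delta I <= 0
```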
Inserting Equation (23) into Equation (11) and then using Equation (19) twice, once for the unit A C and once for the unit A B , we can lower-bound the EP generated by the full system in the time interval [ 0 , τ ] :
$$\sigma^{\mathcal{N}} \;=\; \widehat{\sum_{\omega \in \mathcal{N}^*}} \sigma^\omega \;-\; \Delta\mathcal{I}^{\mathcal{N}^*} \tag{25}$$
$$=\; \sigma^{AB} + \sigma^{AC} - \Delta\mathcal{I}^{\mathcal{N}^*} \tag{26}$$
$$\ge\; -\Delta\mathcal{I}^{\mathcal{N}^*} \;+\; \inf_{q \in \Delta_{X_A \times X_B}} \sum_{t=0}^{n-1} \Big[ D\big(P_{AB}^t\, p_0(X_A, X_B)\,\|\,q\big) - D\big(P_{AB}^{t+1}\, p_0(X_A, X_B)\,\|\,P_{AB}\, q\big) \Big] \tag{27}$$
$$\qquad\quad +\; \inf_{q' \in \Delta_{X_A \times X_C}} \sum_{t=0}^{m-1} \Big[ D\big(P_{AC}^t\, p_0(X_A, X_C)\,\|\,q'\big) - D\big(P_{AC}^{t+1}\, p_0(X_A, X_C)\,\|\,P_{AC}\, q'\big) \Big] \tag{28}$$
$$=\; -\Delta\mathcal{I}^{\mathcal{N}^*} \;+\; n\,\Delta JS_{U_n(t)}\big(\{P_{AB}^t\, p_0(X_A, X_B)\}\big) \;+\; m\,\Delta JS_{U_m(t)}\big(\{P_{AC}^t\, p_0(X_A, X_C)\}\big) \tag{29}$$
Every term in this lower bound on the EP of the full system is non-negative (including all the terms in the two sums), and if $p_0$ is not a steady state of the dynamics, while both $P_{AB}$ and $P_{AC}$ are non-degenerate, at most one term in each of the two sums can be zero. In such a case, Equation (29) provides a strictly positive lower bound on the EP of the overall system. Moreover, this bound can be evaluated knowing only the values of $\lambda_B$, $\lambda_C$, $\tau$, $p_0(x_A, x_B, x_C)$, and the two conditional distributions $P_{AB}$ and $P_{AC}$ giving the periodic dynamics of units $AB$ and $AC$, respectively. All other details of the physical process implementing the dynamics are irrelevant.
Ultimately, this bound depends on the insertion of Equation (19) into the in–ex decomposition, Equation (11). In contrast, as mentioned above, we cannot establish a strictly positive lower bound on the EP of the full system by using Equation (19) alone, without also using the in–ex decomposition.
In this example, a single lower bound on EP from the literature was combined with the in–ex decomposition to provide a strictly positive lower bound on EP, where no such bound could be derived without using this decomposition. The same method of exploiting the in–ex decomposition of EP can also be used to “mix and match” different thermodynamic bounds (TURs, SLTs, mismatch costs, KURs, etc.), each of which applies to a different unit, knitting those bounds together to give a bound on the EP of the overall system. See [2] for some examples for the special case of MPPs.

4. Thermodynamics Due to Multiplicity of Mechanisms

None of the results presented thus far rely on the overall system obeying local detailed balance. In fact, they do not depend on there being a Hamiltonian function for the overall system. In the remainder of this paper, we will implicitly restrict our attention to systems that do obey local detailed balance, so that our results have direct thermodynamic significance. In addition, since the entire system is itself a unit, we will write all our results in terms of arbitrary units $\omega$ for the rest of the paper; they apply even if the entire system $\mathcal{N}$ is the only unit.

4.1. Additional Decompositions of Thermodynamic and Dynamical Quantities in Composite Processes

The rate matrix of each unit ω in a composite process decomposes into rate matrices from each mechanism whose leader set is a subset of ω :
$$K_{x_\omega x'_\omega}(\omega;t) \;=\; \sum_{v:\,\ell(v)\subseteq\omega}\; \delta_{x_{\omega\setminus P(v)},\, x'_{\omega\setminus P(v)}}\; K_{x_{P(v)}\, x'_{\ell(v)}}(v;t) \tag{30}$$
$$=\; \sum_{v:\,\ell(v)\subseteq\omega} K_{x_\omega x'_\omega}(v;t) \tag{31}$$
Similarly, we can decompose the EP rate of any unit $\omega$ into contributions $\dot\zeta_\omega^v(t)$ from each mechanism whose leader set is a subset of $\omega$:
$$\dot\sigma_\omega(t) \;=\; \sum_{v:\,\ell(v)\subseteq\omega}\; \sum_{x_\omega,\, x'_\omega \ne x_\omega} K_{x_\omega x'_\omega}(v;t)\, p_{x'_\omega}(t)\, \ln\frac{K_{x_\omega x'_\omega}(v;t)\, p_{x'_\omega}(t)}{K_{x'_\omega x_\omega}(v;t)\, p_{x_\omega}(t)} \tag{32}$$
$$=:\; \sum_{v:\,\ell(v)\subseteq\omega} \dot\zeta_\omega^v(t) \tag{33}$$
In particular, since the entire system is a unit whose state transitions are mediated by every mechanism $v \in \mathcal{V}$, the global EP rate decomposes as $\dot\sigma_{\mathcal{N}}(t) = \sum_v \dot\zeta_{\mathcal{N}}^v(t)$.
A unit’s dynamical activity also decomposes:
$$\mathcal{A}_\omega(t) \;=\; \sum_{v:\,\ell(v)\subseteq\omega}\; \sum_{x_\omega,\, x'_\omega \ne x_\omega} K_{x_\omega x'_\omega}(v;t)\, p_{x'_\omega}(t) \;=\; \sum_{v:\,\ell(v)\subseteq\omega} \mathcal{A}(v;t) \tag{34}$$
Similarly, the entire system’s dynamical activity can be decomposed as $\mathcal{A}_{\mathcal{N}}(t) = \sum_v \mathcal{A}(v;t)$. Note that the dynamics of every pair of nested units $\omega$ and $\alpha \subseteq \omega$ must be consistent [4], which means that $\mathcal{A}_\alpha(v;t) = \mathcal{A}_\omega(v;t) = \mathcal{A}(v;t)$ for all such $\alpha$ and $\omega$.
We denote the probability flow into state $x_\omega$ from state $x'_\omega$ due to mechanism $v$ as $A_{x_\omega x'_\omega}(v;t) = K_{x_\omega x'_\omega}(v;t)\, p_{x'_\omega}(t)$. We write the net probability current from $x'_\omega$ to $x_\omega$ due to mechanism $v$ as $J_{x_\omega x'_\omega}(v;t) = A_{x_\omega x'_\omega}(v;t) - A_{x'_\omega x_\omega}(v;t)$. (Note that this is not defined for $x_\omega = x'_\omega$.) The total net probability current equals the sum of the probability currents due to each mechanism whose leader set is a subset of the unit $\omega$:
$$J_{x_\omega x'_\omega}(t) \;=\; \sum_{v:\,\ell(v)\subseteq\omega} J_{x_\omega x'_\omega}(v;t) \tag{35}$$
Accordingly, we can decompose the master equation for the unit ω into the probability currents induced by each mechanism:
$$\frac{d}{dt}\, p_{x_\omega}(t) \;=\; \sum_{v:\,\ell(v)\subseteq\omega}\; \sum_{x'_\omega} K_{x_\omega x'_\omega}(v;t)\, p_{x'_\omega}(t) \;=\; \sum_{v:\,\ell(v)\subseteq\omega}\; \sum_{x'_\omega \ne x_\omega} J_{x_\omega x'_\omega}(v;t) \tag{36}$$
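To illustrate Equations (32)–(34), this sketch (with arbitrary, assumed rates) computes the per-mechanism EP and activity contributions for a single two-state unit coupled to two mechanisms at its nonequilibrium steady state. The two mechanisms' probability currents cancel at the NESS, yet each EP contribution remains strictly positive:

```python
# Per-mechanism decomposition of EP rate and dynamical activity for a
# two-state unit driven by two mechanisms.
import numpy as np

# K[v][x, x'] is mechanism v's rate for the jump x' -> x (off-diagonal only).
K = {0: np.array([[0.0, 2.0], [1.0, 0.0]]),
     1: np.array([[0.0, 0.5], [3.0, 0.0]])}

Ktot = K[0] + K[1]
Ktot -= np.diag(Ktot.sum(axis=0))
w, V = np.linalg.eig(Ktot)                       # steady state: eigenvalue ~ 0
p = np.real(V[:, np.argmin(np.abs(w))]); p /= p.sum()

ep, act = {}, {}
for v, Kv in K.items():
    ep[v] = sum(Kv[x, y] * p[y] * np.log((Kv[x, y] * p[y]) / (Kv[y, x] * p[x]))
                for x in (0, 1) for y in (0, 1) if x != y)      # Eq. (32) terms
    act[v] = sum(Kv[x, y] * p[y] for x in (0, 1) for y in (0, 1) if x != y)

print(ep)    # both contributions are strictly positive at the NESS
print(act)   # activities sum to the unit's total dynamical activity, Eq. (34)
```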

4.2. Thermodynamic Uncertainty Relations for Composite Processes

For any unit $\omega$ that is in an NESS, any linear function $\mathcal{C}_\omega$ of the probability currents is a current. It can be divided into the contributions of each mechanism:
$$\dot{\mathcal{C}}_\omega \;=\; \sum_{x_\omega,\, x'_\omega > x_\omega} J_{x_\omega x'_\omega}\, C_{x_\omega x'_\omega} \tag{37}$$
$$=\; \sum_{v:\,\ell(v)\subseteq\omega}\; \sum_{x_\omega,\, x'_\omega > x_\omega} J_{x_\omega x'_\omega}(v)\, C_{x_\omega x'_\omega} \tag{38}$$
$$=:\; \sum_{v:\,\ell(v)\subseteq\omega} \dot{\mathcal{C}}_\omega(v) \tag{39}$$
where $C_{x_\omega x'_\omega} = -C_{x'_\omega x_\omega}$ is some anti-symmetric function of state transitions, and we have dropped the time dependence in the steady state.
Importantly, the current contribution from each mechanism, $\dot{\mathcal{C}}_\omega(v)$, is itself a current. Thus, all of the thermodynamic uncertainty relations (TURs) hold for the time-integrated version of any such mechanism-specific current. In an NESS running for a time period of length $\tau$, this mechanism-specific time-integrated current is $\mathcal{C}_\omega(v) = \tau\, \dot{\mathcal{C}}_\omega(v)$. Additionally, since every unit evolves according to its own CTMC, the TURs hold for each unit.
For example, the finite-time TUR bounds the precision of any current in a CTMC in terms of its EP [8,39]. For a composite process, this holds for any unit and any time-integrated current:
$$\sigma_\omega \;\ge\; \frac{2\,\langle \mathcal{C}_\omega \rangle^2}{\mathrm{Var}(\mathcal{C}_\omega)} \tag{40}$$
Additionally, for any mechanism $v$ with $\ell(v) \subseteq \omega$ and any associated current $\mathcal{C}_\omega(v)$,
$$\sigma_\omega \;\ge\; \frac{2\,\langle \mathcal{C}_\omega(v) \rangle^2}{\mathrm{Var}(\mathcal{C}_\omega(v))} \tag{41}$$
The vector-valued TUR following [9] holds for a vector $\dot{\boldsymbol{\mathcal{C}}}_\omega$ of any set of (potentially mechanism-specific) currents that are not linearly dependent:
$$\dot{\boldsymbol{\mathcal{C}}}_\omega^{\mathsf{T}}\; \Xi_\omega^{-1}\; \dot{\boldsymbol{\mathcal{C}}}_\omega \;\le\; \frac{\dot\sigma_\omega}{2\tau} \tag{42}$$
where $\Xi_\omega^{-1}$ is the inverse of the covariance matrix of the associated time-integrated currents $\{\mathcal{C}_\omega\}$.
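Here is a Monte Carlo sketch of the mechanism-specific TUR in Equation (41), for the same kind of toy two-state, two-mechanism unit (all rates assumed). The current counts net jumps mediated by mechanism 0 alone, yet its precision still lower-bounds the unit's total EP:

```python
# Mechanism-specific TUR check: sigma >= 2 <C>^2 / Var(C) for the net
# time-integrated current through mechanism 0 only.
import numpy as np

rng = np.random.default_rng(2)
RATE = {(0, 1, 0): 2.0, (0, 0, 1): 1.0,          # (mechanism, from, to): rate
        (1, 1, 0): 0.5, (1, 0, 1): 3.0}
p = np.array([2.5, 4.0]) / 6.5                   # NESS of the summed generator
TAU, RUNS = 50.0, 2000

# Exact EP over [0, TAU]: Equation (32), summed over both mechanisms.
sigma = TAU * sum(r * p[a] * np.log(r * p[a] / (RATE[(m, b, a)] * p[b]))
                  for (m, a, b), r in RATE.items())

currents = []
for _ in range(RUNS):
    t, x, c = 0.0, int(rng.random() < p[1]), 0
    while True:
        moves = [mv for mv in RATE if mv[1] == x]
        rates = np.array([RATE[mv] for mv in moves])
        t += rng.exponential(1.0 / rates.sum())
        if t > TAU:
            break
        m, _, b = moves[rng.choice(len(moves), p=rates / rates.sum())]
        if m == 0:
            c += 1 if b == 0 else -1             # net mechanism-0 current
        x = b
    currents.append(c)

c = np.array(currents, dtype=float)
print(sigma >= 2 * c.mean() ** 2 / c.var())      # Equation (41) holds
```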
Any of these TURs can be useful for bounding the EP when one has limited access to the system, in the sense that one can measure only state transitions (i) due to some subset of the mechanisms influencing the system, or (ii) involving some subset of the units in the system.

4.3. Information Flow TURs

One important quantity in an MPP is information flow [1,26,40]. Here, we extend the concept of information flow to composite processes. For any unit $\omega$ in an NESS, a set of subsystems $A \subseteq \omega$, and a set of subsystems $B \subseteq \omega$ (for which $A \cap B = \emptyset$), the information flow is the rate of decrease in the conditional entropy of the state of $B$ given the state of $A$, due to state transitions in $A$:
$$\dot{I}^{A \to B} \;=\; \sum_{x_\omega,\, x'_\omega > x_\omega} J_{x_\omega x'_\omega}\; \delta_{x_{\omega\setminus A},\, x'_{\omega\setminus A}}\; \ln\frac{p_{x_B | x_A}}{p_{x_B | x'_A}} \tag{43}$$
Thus, when $\omega$ is in an NESS, the information flow is a current with coefficients $C_{x_\omega x'_\omega} = \delta_{x_{\omega\setminus A},\, x'_{\omega\setminus A}} \ln\big(p_{x_B|x_A}/p_{x_B|x'_A}\big)$. The contribution to the information flow that is due to interactions of the unit with a reservoir $v$ with $\ell(v) \subseteq \omega$ is itself an information flow:
$$\dot{I}^{A \to B}(v) \;=\; \sum_{x_\omega,\, x'_\omega > x_\omega} J_{x_\omega x'_\omega}(v)\; \delta_{x_{\omega\setminus A},\, x'_{\omega\setminus A}}\; \ln\frac{p_{x_B | x_A}}{p_{x_B | x'_A}} \tag{44}$$
Since these information flows are currents, the TURs will apply to them. This observation, in combination with Equation (8), suggests that the precision of an information flow is (best) bounded by the reciprocal of the EP of the smallest unit that contains $A \cup B$.
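As a concrete illustration (a sketch with assumed rates, not a calculation from the paper), the following computes the information flow of Equation (44) for a bipartite NESS in which subsystem A is flipped by mechanism $v_A$ and subsystem B by mechanism $v_B$, with B's rates depending on A's state:

```python
# Information flow I_dot^{A->B}(v_A) in a bipartite NESS; only transitions
# that change x_A contribute, per Equations (43)-(44).
import itertools
import numpy as np

states = list(itertools.product((0, 1), (0, 1)))          # (x_A, x_B)
idx = {s: i for i, s in enumerate(states)}
K = np.zeros((4, 4))
for a, b in states:
    K[idx[(1 - a, b)], idx[(a, b)]] += 1.0                # v_A flips A
    K[idx[(a, 1 - b)], idx[(a, b)]] += 0.2 + 2.0 * (a == b)   # v_B flips B
K -= np.diag(K.sum(axis=0))

w, V = np.linalg.eig(K)
p = np.real(V[:, np.argmin(np.abs(w))]); p /= p.sum()     # the NESS
pA = np.array([p[idx[(0, 0)]] + p[idx[(0, 1)]],
               p[idx[(1, 0)]] + p[idx[(1, 1)]]])

flow = 0.0
for a, b in states:
    src, dst = (a, b), (1 - a, b)
    J = 1.0 * p[idx[src]] - 1.0 * p[idx[dst]]             # net v_A current src -> dst
    lr = np.log((p[idx[dst]] / pA[1 - a]) / (p[idx[src]] / pA[a]))
    flow += 0.5 * J * lr            # 0.5: each unordered pair appears twice
print(flow)                          # nonzero: A's jumps change p(x_B | x_A)
```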

5. Strengthened Thermodynamic Speed Limits for Composite Processes

Here, we derive a speed limit similar to the one in [19], but for composite processes; it is tighter than the bound presented there. Our analysis holds for an arbitrary unit $\omega$ (which could be the entire system $\mathcal{N}$ itself):
$$l_\omega \;\le\; \sum_{v:\,\ell(v)\subseteq\omega} \mathcal{A}_\omega^{\mathrm{tot}}(v;\tau)\; g\!\left(\frac{\zeta_\omega^v(\tau)}{\mathcal{A}_\omega^{\mathrm{tot}}(v;\tau)}\right) \tag{45}$$
where the dynamics occurs during the time period $[0, \tau]$; $l_\omega$ is the total variation distance between the initial (time-0) and final (time-$\tau$) probability distributions over the states of the unit $\omega$; $\mathcal{A}_\omega^{\mathrm{tot}}(v;\tau)$ is the total time-integrated dynamical activity due to mechanism $v$; $\zeta_\omega^v(\tau)$ is the total contribution to the EP of unit $\omega$ due to interactions of $\omega$ with mechanism $v$; and $g$ is a concave function defined below (Equation (56)).
We start by bounding the total variation distance between the initial and final (time- τ ) probability distributions over states of the unit ω :
$$l_\omega \;:=\; \mathbb{L}\big(p_{x_\omega}(0),\, p_{x_\omega}(\tau)\big) \;=\; \frac{1}{2}\sum_{x_\omega}\big|p_{x_\omega}(\tau) - p_{x_\omega}(0)\big| \tag{46}$$
$$=\; \frac{1}{2}\sum_{x_\omega}\left|\int_0^\tau dt\; \frac{d}{dt}p_{x_\omega}(t)\right| \tag{47}$$
$$\le\; \frac{1}{2}\int_0^\tau dt \sum_{x_\omega}\left|\frac{d}{dt}p_{x_\omega}(t)\right| \tag{48}$$
In a composite process, we can further bound the integrand:
$$\sum_{x_\omega}\left|\frac{d}{dt}p_{x_\omega}(t)\right| \;=\; \sum_{x_\omega}\Bigg|\sum_{v:\,\ell(v)\subseteq\omega}\;\sum_{x'_\omega \ne x_\omega} J_{x_\omega x'_\omega}(v;t)\Bigg| \tag{49}$$
$$\le\; \sum_{v:\,\ell(v)\subseteq\omega}\;\sum_{x_\omega,\, x'_\omega \ne x_\omega}\big|J_{x_\omega x'_\omega}(v;t)\big| \tag{50}$$
We write the time-$t$ “conditional probability distribution” of the forward process, under the counterfactual scenario whereby the process evolves with coupling only to a mechanism $v$ with $\ell(v) \subseteq \omega$, as
$$W_{x_\omega x'_\omega}(v;t) \;=\; \big(1-\delta_{x_\omega x'_\omega}\big)\, \frac{K_{x_\omega x'_\omega}(v;t)\, p_{x'_\omega}(t)}{\mathcal{A}_\omega(v;t)} \tag{51}$$
Intuitively, this can be interpreted as the conditional probability that, if a jump occurs at time $t$ due to reservoir $v$ (with $\ell(v) \subseteq \omega$), the state before the jump was $x'_\omega$ and the state afterwards was $x_\omega$. We write the same quantity for the reverse process as
$$\tilde{W}_{x_\omega x'_\omega}(v;t) \;=\; \big(1-\delta_{x_\omega x'_\omega}\big)\, \frac{K_{x'_\omega x_\omega}(v;t)\, p_{x_\omega}(t)}{\mathcal{A}_\omega(v;t)} \tag{52}$$
The total variation distance between these matrices, $d_{\mathrm{TV}}\big(W_\omega(v;t),\, \tilde{W}_\omega(v;t)\big)$, represents how irreversible this counterfactual process (the one driven only by mechanism $v$) is at time $t$. Using these definitions, we can rewrite Equation (50) as
$$\sum_{x_\omega}\left|\frac{d}{dt}p_{x_\omega}(t)\right| \;\le\; 2\sum_{v:\,\ell(v)\subseteq\omega}\mathcal{A}_\omega(v;t)\; d_{\mathrm{TV}}\big(W_\omega(v;t),\, \tilde{W}_\omega(v;t)\big) \tag{53}$$
Inserting this into Equation (48), we obtain
$$l_\omega \;\le\; \int_0^\tau dt\; \sum_{v:\,\ell(v)\subseteq\omega} \mathcal{A}_\omega(v;t)\; d_{\mathrm{TV}}\big(W_\omega(v;t),\, \tilde{W}_\omega(v;t)\big) \tag{54}$$
We next make use of the fact that mechanism v’s contribution to the EP rate of unit ω (Equation (33)) can be written in terms of the Kullback–Leibler (KL) divergence between the conditional distributions of the forward and backward processes as
$$\dot\zeta_\omega^v(t) \;=\; \mathcal{A}_\omega(v;t)\; D_{\mathrm{KL}}\big(W_\omega(v;t)\,\|\,\tilde{W}_\omega(v;t)\big) \tag{55}$$
Some positive, monotonic, concave functions $g(\cdot)$ relate the total variation distance to the KL divergence [19] according to
$$d_{\mathrm{TV}}(p\,;q) \;\le\; g\big(D_{\mathrm{KL}}(p\,;q)\big) \tag{56}$$
We can use this relationship to relate Equation (55) to $l_\omega$. Combining Equations (54)–(56),
$$l_\omega \;\le\; \int_0^\tau dt\; \sum_{v:\,\ell(v)\subseteq\omega} \mathcal{A}_\omega(v;t)\; g\!\left(\frac{\dot\zeta_\omega^v(t)}{\mathcal{A}_\omega(v;t)}\right) \tag{57}$$
Next, define $\zeta_\omega^v(\tau) = \int_0^\tau dt\; \dot\zeta_\omega^v(t)$ as the total (ensemble-average) contribution to the EP of unit $\omega$ caused by interactions of the system with mechanism $v$ during the time period $[0, \tau]$. Moreover, define $\mathcal{A}_\omega^{\mathrm{tot}}(v;\tau) = \int_0^\tau dt\; \mathcal{A}_\omega(v;t)$ as the total (ensemble-average) number of state transitions in the unit $\omega$ that are caused by interactions of the system with mechanism $v$. Then, using the positivity of the dynamical activity and of the EP, together with the concavity of $g$, we can further bound the right-hand side to obtain a general speed limit for composite processes:
$$l_\omega \;\le\; \sum_{v:\,\ell(v)\subseteq\omega} \mathcal{A}_\omega^{\mathrm{tot}}(v;\tau)\; g\!\left(\frac{\zeta_\omega^v(\tau)}{\mathcal{A}_\omega^{\mathrm{tot}}(v;\tau)}\right) \tag{58}$$
This result provides an upper bound on $l_\omega$, i.e., on how much the probability distribution over the states of $\omega$ can change during the time interval $[0, \tau]$, in terms of the associated activity of $\omega$ and the per-mechanism contributions to its EP. Thus, Equation (58) is a thermodynamic speed limit theorem involving only the subsystems in the unit $\omega$.
By comparison, the speed limit in [19] applied to a unit ω reads
$$l_\omega \;\le\; \mathcal{A}_\omega^{\mathrm{tot}}(\tau)\; g\!\left(\frac{\sigma_\omega(\tau)}{\mathcal{A}_\omega^{\mathrm{tot}}(\tau)}\right) \tag{59}$$
For a composite process, the right-hand side of this “global” bound expands to
$$l_\omega \;\le\; \left(\sum_{v:\,\ell(v)\subseteq\omega} \mathcal{A}_\omega^{\mathrm{tot}}(v;\tau)\right) g\!\left(\frac{\sum_{v:\,\ell(v)\subseteq\omega} \zeta_\omega^v(\tau)}{\sum_{v:\,\ell(v)\subseteq\omega} \mathcal{A}_\omega^{\mathrm{tot}}(v;\tau)}\right) \tag{60}$$
By Jensen’s inequality, the speed limit for composite processes (Equation (58)) is always tighter than the speed limit provided by [19] (Equation (59)). For a concave function g, a set of numbers { x v } in its domain, and positive weights a v , Jensen’s inequality states that
$$\left(\sum_v a_v\right) g\!\left(\frac{\sum_v a_v x_v}{\sum_v a_v}\right) \;\ge\; \sum_v a_v\, g(x_v) \tag{61}$$
Setting $a_v = \mathcal{A}_\omega^{\mathrm{tot}}(v;\tau)$ and $x_v = \zeta_\omega^v(\tau)/\mathcal{A}_\omega^{\mathrm{tot}}(v;\tau)$ proves that Equation (58) is always tighter than Equation (59). Intuitively, this occurs because we are able to define the mechanism-specific contributions to the EP and activity in a composite process.
Ref. [19] provides some examples of acceptable functions $g$. For example, if we follow Pinsker’s inequality and choose $g(x) = \sqrt{x/2}$, then the speed limit provided by [19] collapses to the speed limit derived in [17]. If we insert this choice of $g$ into Equation (58), extract the parameter $\tau$ using the average frequency of state transitions $\langle \mathcal{A}_\omega^v \rangle_\tau = \mathcal{A}_\omega^{\mathrm{tot}}(v;\tau)/\tau$, and rearrange the terms, we obtain the bounds
$$\forall\, \omega \in \boldsymbol{N}:\qquad \tau \;\ge\; \frac{\Big(\mathbb{L}\big(p_{x_\omega}(0),\, p_{x_\omega}(\tau)\big)\Big)^{2}}{\left(\sum_{v:\,\ell(v)\subseteq\omega} \sqrt{\zeta_\omega^v(\tau)\,\langle \mathcal{A}_\omega^v \rangle_\tau / 2}\right)^{2}} \tag{62}$$
the tightest of which is given by
$$\tau \;\ge\; \max_{\omega \in \boldsymbol{N}}\; \frac{\Big(\mathbb{L}\big(p_{x_\omega}(0),\, p_{x_\omega}(\tau)\big)\Big)^{2}}{\left(\sum_{v:\,\ell(v)\subseteq\omega} \sqrt{\zeta_\omega^v(\tau)\,\langle \mathcal{A}_\omega^v \rangle_\tau / 2}\right)^{2}} \tag{63}$$
This particular speed limit tells us that the time needed for the system’s probability distribution to evolve is set by the “slowest-evolving” unit: the evolution of the full distribution can be no faster than the evolution of the marginal distribution over the coordinates of that unit.
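To see the gap between Equations (58) and (59) numerically, the following sketch (our own; the rates, initial distribution, and the Pinsker choice $g(x) = \sqrt{x/2}$ are illustrative) integrates a two-mechanism, two-state unit over $[0, \tau]$ and checks that $l_\omega \le$ (58) $\le$ (59):

```python
# Compare the composite speed limit (58) with the "global" bound (59)
# using g(x) = sqrt(x/2), for a two-state unit with two mechanisms.
import numpy as np
from scipy.linalg import expm

RATE = {(0, 1, 0): 2.0, (0, 0, 1): 1.0,       # (mechanism, from, to): rate
        (1, 1, 0): 0.5, (1, 0, 1): 3.0}
K = {m: np.zeros((2, 2)) for m in (0, 1)}
for (m, a, b), r in RATE.items():
    K[m][b, a] = r                             # column = source state
Ktot = K[0] + K[1]
Ktot -= np.diag(Ktot.sum(axis=0))

p0, tau, steps = np.array([0.95, 0.05]), 2.0, 4000
dt = tau / steps
g = lambda x: np.sqrt(x / 2)
U = expm(dt * Ktot)

A = {0: 0.0, 1: 0.0}                           # time-integrated activity per mechanism
Z = {0: 0.0, 1: 0.0}                           # time-integrated EP per mechanism
p = p0.copy()
for _ in range(steps):
    for m in (0, 1):
        for a, b in [(0, 1), (1, 0)]:          # jumps b -> a
            fwd, rev = K[m][a, b] * p[b], K[m][b, a] * p[a]
            A[m] += dt * fwd
            Z[m] += dt * fwd * np.log(fwd / rev)
    p = U @ p

l = 0.5 * np.abs(expm(tau * Ktot) @ p0 - p0).sum()
composite = sum(A[m] * g(Z[m] / A[m]) for m in (0, 1))       # Equation (58)
glob = (A[0] + A[1]) * g((Z[0] + Z[1]) / (A[0] + A[1]))      # Equation (59)
print(l <= composite <= glob)                  # True
```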

6. Discussion

In this paper, we extend previous work on the stochastic thermodynamics of composite processes [4]. A central feature in the stochastic thermodynamics of a composite process is represented by the units of the process. These are subsets of the subsystems of the overall system whose joint dynamics does not depend on the state of the rest of the system. Our first contribution is to extend previous work that showed how the overlap among the units in a system allows us to “mix and match” previous bounds on EP from the literature (e.g., TURs, KURs, mismatch cost, SLTs, etc.) to derive strictly positive lower bounds on the EP by applying these bounds to the units within the system [2]. Crucially, as we show, this can sometimes be done when none of the previous bounds apply to the entire system as a whole.
We then present a preliminary analysis of how information flows in a composite process are constrained by the EPs of units. In the analysis, we demonstrate that bounds on the speed of transformation of a system’s probability distribution over states can be tightened with knowledge of the contributions to the EP and dynamical activity from each mechanism with which the system interacts.
This paper fits into a growing branch of research on the stochastic thermodynamics of constraints. One example of research in this area investigates the effect of constraints on the control protocol (the time sequence of rate matrices evolving the probability distribution) [41]. There has also been some important work where the “constraint” on such a many-degree-of-freedom classical system is simply that it is some very narrowly defined type of system, whose dynamics is specified by many different types of parameters. For example, there has been analysis of the stochastic thermodynamics of chemical reaction networks [23,24], of electronic circuits [25,42,43], and of biological copying mechanisms [44]. This paper analyzes the consequences of a major class of dynamical constraints that arises because many of these systems are most naturally modeled as a set of multiple co-evolving subsystems [1,2,3,4,26,45,46,47]. In particular, the main constraints on such systems are that only certain subsets of subsystems can simultaneously change state at a given time, and that the dependencies between subsystems impose restrictions on their joint dynamics.
There remain many avenues of potential future work, especially in the thermodynamics of computation. Many computational processes consist of multiple, co-evolving systems with a broad set of constraints that allow them to be easily modeled as a composite process. Moreover, the current physical implementations of (almost) all digital computations are periodic processes. Thus, the periodicity mismatch cost lower bound on EP in Equation (19) applies directly. Research in this direction would first require formalizing the notion of computation in a composite process. One such computation, which equates to the identity map, is simply communication (information transmission). One could extend the recent study on the fundamental thermodynamic costs of communication [48] to tie Shannon information theory to the stochastic thermodynamics of composite processes. More generally, for any given computation, one could analyze the trade-offs between the energy cost required to implement that computation and the performance (accuracy, time, etc.) of a composite process. In particular, there could be a rich structure in how the properties of the dependency network in a composite process affect these trade-offs.
We emphasize, though, that there are also issues for future work that do not involve physical implementations of computational systems. In particular, it is important to try to extend the composite systems’ TURs derived above for systems at an NESS to TURs with time-dependent driving [49] or discrete-time TURs [50]. (We thank an anonymous referee for this particular suggestion.)

Author Contributions

Conceptualization, D.H.W.; methodology, D.H.W. and F.T.; writing—original draft preparation, D.H.W. and F.T.; writing—review and editing, D.H.W. and F.T.; supervision, D.H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by US NSF EAGER Grant CCF-2221345, the Santa Fe Institute, and the MIT Media Lab Consortium.

Data Availability Statement

Not applicable.

Acknowledgments

F.T. and D.H.W. thank Tarek Tohme for the initial discussions regarding TURs for information flows in multipartite processes. F.T. thanks Nahuel Freitas for the discussions regarding how circuits can be modeled as composite processes. D.H.W. thanks Artemy Kolchinsky for another interesting discussion.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Horowitz, J.M. Multipartite information flow for multiple Maxwell demons. J. Stat. Mech. Theory Exp. 2015, 2015, P03006.
  2. Wolpert, D.H. Combining lower bounds on entropy production in complex systems with multiple interacting components. In Frontiers in Entropy across the Disciplines: Panorama of Entropy: Theory, Computation, and Applications; World Scientific: Singapore, 2023; pp. 405–453.
  3. Wolpert, D.H. Minimal entropy production rate of interacting systems. New J. Phys. 2020, 22, 113013.
  4. Wolpert, D.H. Strengthened second law for multi-dimensional systems coupled to multiple thermodynamic reservoirs. Philos. Trans. R. Soc. A 2022, 380, 20200428.
  5. Chetrite, R.; Rosinberg, M.; Sagawa, T.; Tarjus, G. Information thermodynamics for interacting stochastic systems without bipartite structure. J. Stat. Mech. Theory Exp. 2019, 2019, 114002.
  6. Barato, A.C.; Seifert, U. Thermodynamic uncertainty relation for biomolecular processes. Phys. Rev. Lett. 2015, 114, 158101.
  7. Aurell, E.; Mejía-Monasterio, C.; Muratore-Ginanneschi, P. Optimal protocols and optimal transport in stochastic thermodynamics. Phys. Rev. Lett. 2011, 106, 250601.
  8. Horowitz, J.M.; Gingrich, T.R. Proof of the finite-time thermodynamic uncertainty relation for steady-state currents. Phys. Rev. E 2017, 96, 020103.
  9. Dechant, A. Multidimensional thermodynamic uncertainty relations. J. Phys. A Math. Theor. 2018, 52, 035001.
  10. Hasegawa, Y.; Van Vu, T. Fluctuation theorem uncertainty relation. Phys. Rev. Lett. 2019, 123, 110602.
  11. Di Terlizzi, I.; Baiesi, M. Kinetic uncertainty relation. J. Phys. A Math. Theor. 2018, 52, 02LT03.
  12. Vo, V.T.; Van Vu, T.; Hasegawa, Y. Unified thermodynamic–kinetic uncertainty relation. J. Phys. A Math. Theor. 2022, 55, 405004.
  13. Kolchinsky, A.; Wolpert, D.H. Dependence of dissipation on the initial distribution over states. J. Stat. Mech. Theory Exp. 2017, 2017, 083202.
  14. Wolpert, D.H. The stochastic thermodynamics of computation. J. Phys. A Math. Theor. 2019, 52, 193001.
  15. Kolchinsky, A.; Wolpert, D.H. Dependence of integrated, instantaneous, and fluctuating entropy production on the initial state in quantum and classical processes. Phys. Rev. E 2021, 104, 054107.
  16. Riechers, P.M.; Gu, M. Initial-state dependence of thermodynamic dissipation for any quantum process. Phys. Rev. E 2021, 103, 042145.
  17. Shiraishi, N.; Funo, K.; Saito, K. Speed Limit for Classical Stochastic Processes. Phys. Rev. Lett. 2018, 121, 070601.
  18. Shiraishi, N.; Saito, K. Speed limit for open systems coupled to general environments. Phys. Rev. Res. 2021, 3, 023074.
  19. Lee, J.S.; Lee, S.; Kwon, H.; Park, H. Speed limit for a highly irreversible process and tight finite-time Landauer’s bound. Phys. Rev. Lett. 2022, 129, 120603.
  20. Esposito, M.; Van den Broeck, C. Three faces of the second law. I. Master equation formulation. Phys. Rev. E 2010, 82, 011143.
  21. Van den Broeck, C.; Esposito, M. Ensemble and trajectory thermodynamics: A brief introduction. Phys. A Stat. Mech. Its Appl. 2015, 418, 6–16.
  22. Wachtel, A.; Rao, R.; Esposito, M. Thermodynamically consistent coarse graining of biocatalysts beyond Michaelis–Menten. New J. Phys. 2018, 20, 042002.
  23. Rao, R.; Esposito, M. Nonequilibrium thermodynamics of chemical reaction networks: Wisdom from stochastic thermodynamics. Phys. Rev. X 2016, 6, 041064.
  24. Rao, R.; Esposito, M. Conservation laws and work fluctuation relations in chemical reaction networks. J. Chem. Phys. 2018, 149, 245101.
  25. Freitas, N.; Delvenne, J.C.; Esposito, M. Stochastic Thermodynamics of Non-Linear Electronic Circuits: A Realistic Framework for Thermodynamics of Computation. arXiv 2020, arXiv:2008.10578.
  26. Horowitz, J.M.; Esposito, M. Thermodynamics with continuous information flow. Phys. Rev. X 2014, 4, 031015.
  27. Rao, R.; Esposito, M. Detailed fluctuation theorems: A unifying perspective. Entropy 2018, 20, 635.
  28. Gingrich, T.R.; Horowitz, J.M. Fundamental bounds on first passage time fluctuations for currents. Phys. Rev. Lett. 2017, 119, 170601.
  29. Neri, I.; Roldán, É.; Jülicher, F. Statistics of infima and stopping times of entropy production and applications to active molecular processes. Phys. Rev. X 2017, 7, 011019.
  30. Neri, I.; Roldán, É.; Pigolotti, S.; Jülicher, F. Integral fluctuation relations for entropy production at stopping times. J. Stat. Mech. Theory Exp. 2019, 2019, 104006.
  31. Andreescu, T.; Feng, Z. Inclusion-exclusion principle. In A Path to Combinatorics for Undergraduates: Counting Strategies; Birkhäuser: Boston, MA, USA, 2004; pp. 117–141.
  32. Ting, H.K. On the amount of information. Theory Probab. Its Appl. 1962, 7, 439–447.
  33. Wolpert, D.H. Extending Landauer’s bound from bit erasure to arbitrary computation. arXiv 2015, arXiv:1508.05319.
  34. Manzano, G.; Kardes, G.; Roldan, E.; Wolpert, D. Thermodynamics of Computations with Absolute Irreversibility, Unidirectional Transitions, and Stochastic Computation Times. arXiv 2023, arXiv:2307.05713.
  35. Wikipedia Contributors. Jensen–Shannon Divergence—Wikipedia, The Free Encyclopedia. 2023. Available online: https://en.wikipedia.org/wiki/Jensen%E2%80%93Shannon_divergence (accessed on 8 July 2023).
  36. Korbel, J.; Wolpert, D.H. Nonequilibrium thermodynamics of uncertain stochastic processes. arXiv 2022, arXiv:2210.05249.
  37. Ouldridge, T.E.; Wolpert, D.H. Thermodynamics of deterministic finite automata operating locally and periodically. arXiv 2022, arXiv:2208.06895.
  38. Cover, T.; Thomas, J. Elements of Information Theory; Wiley-Interscience: New York, NY, USA, 1991.
  39. Pietzonka, P.; Ritort, F.; Seifert, U. Finite-time generalization of the thermodynamic uncertainty relation. Phys. Rev. E 2017, 96, 012101.
  40. Hartich, D.; Barato, A.C.; Seifert, U. Sensory capacity: An information theoretical measure of the performance of a sensor. Phys. Rev. E 2016, 93, 022116.
  41. Kolchinsky, A.; Wolpert, D.H. Entropy production and thermodynamics of information under protocol constraints. arXiv 2020, arXiv:2008.10764.
  42. Gao, C.Y.; Limmer, D.T. Principles of low dissipation computing from a stochastic circuit model. arXiv 2021, arXiv:2102.13067.
  43. Wolpert, D.H.; Kolchinsky, A. Thermodynamics of computing with circuits. New J. Phys. 2020, 22, 063047.
  44. Poulton, J.M.; Ten Wolde, P.R.; Ouldridge, T.E. Nonequilibrium correlations in minimal dynamical models of polymer copying. Proc. Natl. Acad. Sci. USA 2019, 116, 1946–1951.
  45. D’Souza, R.M. Structure comes to random graphs. Nat. Phys. 2009, 5, 627–628.
  46. Ito, S.; Sagawa, T. Information thermodynamics on causal networks. Phys. Rev. Lett. 2013, 111, 180603.
  47. Wolpert, D.H. Uncertainty Relations and Fluctuation Theorems for Bayes Nets. Phys. Rev. Lett. 2020, 125, 200602.
  48. Tasnim, F.; Freitas, N.; Wolpert, D.H. The fundamental thermodynamic costs of communication. arXiv 2023, arXiv:2302.04320.
  49. Koyuk, T.; Seifert, U. Thermodynamic uncertainty relation for time-dependent driving. Phys. Rev. Lett. 2020, 125, 260604.
  50. Proesmans, K.; Van den Broeck, C. Discrete-time thermodynamic uncertainty relation. Europhys. Lett. 2017, 119, 20001.
Figure 1. Examples of systems whose dynamics can be modeled as composite processes. Each system consists of multiple subsystems (blue circles). Mechanisms are denoted as r, and their puppet sets P ( r ) are indicated by translucent white bubbles. (a) An example stochastic CRN consists of four co-evolving species { X 1 , X 2 , X 3 , X 4 } that change state according to three chemical reactions { A , B , C } . (b) An example toy circuit consists of four conductors { 1 , 2 , 3 , 4 } that change state via interactions with three devices { A , B , C } .
Figure 2. The dependency network specifies how the dynamics of each subsystem is governed by the states of other subsystems. This network defines the leader sets in a composite process.
