Article

Open Markov Processes: A Compositional Perspective on Non-Equilibrium Steady States in Biology

Department of Physics and Astronomy, University of California, Riverside, CA 92521, USA
Entropy 2016, 18(4), 140; https://doi.org/10.3390/e18040140
Submission received: 5 January 2016 / Revised: 16 February 2016 / Accepted: 6 April 2016 / Published: 15 April 2016
(This article belongs to the Special Issue Information and Entropy in Biological Systems)

Abstract:
In recent work, Baez, Fong and the author introduced a framework for describing Markov processes equipped with a detailed balanced equilibrium as open systems of a certain type. These “open Markov processes” serve as the building blocks for more complicated processes. In this paper, we describe the potential application of this framework in the modeling of biological systems as open systems maintained away from equilibrium. We show that non-equilibrium steady states emerge in open systems of this type, even when the rates of the underlying process are such that a detailed balanced equilibrium is permitted. It is shown that these non-equilibrium steady states minimize a quadratic form which we call “dissipation”. In some circumstances, the dissipation is approximately equal to the rate of change of relative entropy plus a correction term. On the other hand, Prigogine’s principle of minimum entropy production generally fails for non-equilibrium steady states. We use a simple model of membrane transport to illustrate these concepts.

1. Introduction

Life exists away from equilibrium. Left isolated, systems will tend toward thermodynamic equilibrium. Open systems can be maintained away from equilibrium via the exchange of energy and matter with the environment. In addition, biological systems typically consist of a large number of interacting parts. This paper presents a way of describing these “parts” as morphisms in a category. A category consists of a collection of objects along with morphisms or arrows between objects, obeying certain conditions. We consider time-homogeneous Markov processes as a general framework for modeling various biological and biochemical systems whose dynamical equations are linear. Viewed as morphisms in a category, the “open Markov processes” discussed in this paper provide a framework for describing open systems which can be combined to build larger systems.
Intuitively, one can think of a Markov process as specifying the dynamics of a probability or “population” distribution that is spread across a finite set of states. A population distribution is a non-normalized probability distribution, see for example [1]. The population of a particular state can be any non-negative real number. The total population in an open Markov process is not constant in time as population can flow in and out through certain boundary states. Part of the utility of Markov processes as models of physical or biological systems stems from the flexibility in choosing the correspondence between the states of the Markov process and the actual system it is to model. For instance, the states of a Markov process could correspond to different internal states of a particular molecule or chemical species. In this case, the transition rates describe the rates at which the molecule transitions among these states. Or, the states of a Markov process could correspond to a molecule’s physical location. In this case, the transition rates encode the rates at which that molecule moves from place to place.
This paper is structured as follows. In Section 2, we give some preliminary definitions from the theory of Markov processes and explain the concept of an open Markov process. In Section 3, we introduce a model of membrane transport as a simple example of an open Markov process. In Section 4, we introduce the category DetBalMark. The objects in DetBalMark are finite sets of “states” whose elements are labeled by non-negative real numbers which we call “populations”. The morphisms in DetBalMark are Markov processes equipped with a detailed balanced equilibrium distribution as well as maps specifying input and output states. If the outputs of one process match the inputs of another process, the two can be composed, yielding a new open Markov process. We refer to the union of the input and output states as the “boundary” of an open Markov process.
In Section 5, we show that if the populations at the boundary of an open detailed balanced Markov process are held fixed, then the non-equilibrium steady states which emerge minimize a quadratic form, which we call the “dissipation”, subject to the constraint on the boundary populations. Depending on the values of the boundary populations, these non-equilibrium steady states can exist arbitrarily far from the detailed balanced equilibrium of the underlying Markov process. In Section 6, we show that, for fixed boundary populations, this principle of minimum dissipation approximates Prigogine’s principle of minimum entropy production in the neighborhood of equilibrium, up to a correction term involving only the flow of relative entropy through the boundary of the open Markov process.

2. Open Markov Processes

In this section, we define open Markov processes, describe the detailed balanced condition for equilibria and define non-equilibrium steady states for Markov processes.
An open Markov process, or open, continuous time, discrete state Markov chain, is a triple $(V, B, H)$, where $V$ is a finite set of states, $B \subseteq V$ is the subset of boundary states and $H : \mathbb{R}^V \to \mathbb{R}^V$ is an infinitesimal stochastic Hamiltonian:
$$H_{ij} \geq 0 \quad \text{for } i \neq j,$$
$$\sum_i H_{ij} = 0.$$
For each $i \in V$, the dynamical variable $p_i \in [0, \infty)$ is the population of the $i$th state. We call the resulting function $p : V \to [0, \infty)$ the population distribution. Populations evolve in time according to the open master equation
$$\frac{dp_i}{dt} = \sum_j H_{ij} p_j, \quad i \in V \setminus B,$$
$$p_i(t) = b_i(t), \quad i \in B,$$
where the boundary populations $b_i(t)$ are specified functions of time.
The off-diagonal entries $H_{ij}$, $i \neq j$, are the rates at which population transitions from the $j$th to the $i$th state. A steady state distribution is a population distribution which is constant in time:
$$\frac{dp_i}{dt} = 0 \quad \text{for all } i \in V.$$
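To make the open master equation concrete, the following minimal numerical sketch (not part of the original paper) integrates it for a hypothetical three-state chain, clamping the boundary populations to prescribed values $b_i(t)$; the rate matrix and boundary choice below are illustrative assumptions.

```python
import numpy as np

# A minimal sketch (not from the paper): integrate the open master equation
# dp_i/dt = sum_j H_ij p_j for internal states, with boundary populations
# held at externally prescribed values b_i(t).

# Toy three-state chain A -- B -- C with hypothetical rates.
H = np.array([
    [-1.0,  2.0,  0.0],   # H[i, j] is the rate from state j to state i
    [ 1.0, -4.0,  1.0],
    [ 0.0,  2.0, -1.0],
])
assert np.allclose(H.sum(axis=0), 0.0)           # infinitesimal stochastic: columns sum to 0
boundary = [0, 2]                                # states A and C are boundary states
b = {0: lambda t: 3.0, 2: lambda t: 1.0}         # fixed boundary populations

p = np.array([3.0, 0.0, 1.0])                    # initial population distribution
dt, T = 1e-3, 20.0
for step in range(int(T / dt)):
    t = step * dt
    p = p + dt * (H @ p)                         # forward Euler step of the master equation
    for i in boundary:                           # clamp boundary states to b_i(t)
        p[i] = b[i](t)

print("steady state:", p)                        # the internal population relaxes to a steady value
```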
A closed Markov process, or continuous time, discrete state Markov chain, is an open Markov process whose boundary is empty. For a closed Markov process, the open master equation becomes the usual master equation
$$\frac{dp}{dt} = Hp.$$
In a closed Markov process, the total population is conserved:
$$\sum_i \frac{dp_i}{dt} = \sum_{i,j} H_{ij} p_j = 0,$$
enabling one to talk about the relative probabilities of being in particular states. A steady state distribution in a closed Markov process is typically called an equilibrium. We say an equilibrium $q \in [0, \infty)^V$ of a Markov process is detailed balanced if
$$H_{ij} q_j = H_{ji} q_i \quad \text{for all } i, j \in V.$$
An open detailed balanced Markov process is an open Markov process $(V, B, H)$ together with a detailed balanced equilibrium $q : V \to (0, \infty)$ on $V$. In Section 5, we define the “dissipation”, which depends on the detailed balanced equilibrium populations; hence, we equip an open Markov process with a specific detailed balanced equilibrium of the underlying closed Markov process. Thus, if a Markov process admits multiple detailed balanced equilibria, we choose a specific one. Note that we consider only detailed balanced equilibria such that the populations of all states are non-zero. Later, it will become clear why this is important.
For a pair of distinct states $i, j \in V$, the term $H_{ij} p_j$ is the flow of population from $j$ to $i$. The net flow of population from the $j$th state to the $i$th is
$$J_{ij}(p) = H_{ij} p_j - H_{ji} p_i.$$
Summing the net flows into a particular state, we can define the net inflow $J_i(p) \in \mathbb{R}$ of a particular state to be
$$J_i(p) = \sum_j J_{ij}(p) = \sum_j \left( H_{ij} p_j - H_{ji} p_i \right).$$
Since $\sum_j H_{ji} p_i = 0$, the right side of this equation is the time derivative of the population at the $i$th state. Writing the master equation in terms of $J_{ij}(p)$ or $J_i(p)$, we have
$$\frac{dp_i}{dt} = \sum_j J_{ij}(p) = J_i(p).$$
The net flow between each pair of states vanishes identically in a detailed balanced equilibrium $q$:
$$J_{ij}(q) = 0.$$
For a closed Markov process, the existence of a detailed balanced equilibrium is equivalent to a condition on the rates of the Markov process known as Kolmogorov’s criterion [2], namely that
$$H_{i_1 i_2} H_{i_2 i_3} \cdots H_{i_{n-1} i_n} H_{i_n i_1} = H_{i_1 i_n} H_{i_n i_{n-1}} \cdots H_{i_3 i_2} H_{i_2 i_1}$$
for any finite sequence of states $i_1, i_2, \ldots, i_n$ of any length. This condition says that the product of the rates along any cycle is equal to the product of the rates along the same cycle in the reverse direction.
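For small systems, the cycle condition can be checked directly. The following sketch (our own illustration, not from the paper) tests Kolmogorov’s criterion for a hypothetical rate matrix by comparing the forward and reverse rate products around every simple cycle; the brute-force enumeration is only feasible for a handful of states.

```python
import itertools
import numpy as np

def kolmogorov_holds(H, tol=1e-12):
    """Brute-force check of Kolmogorov's criterion for a small rate matrix H:
    for every cycle i1 -> i2 -> ... -> ik -> i1 of distinct states, the product
    of forward rates equals the product of reverse rates. Cycles of length 2
    satisfy the criterion trivially, so they are skipped."""
    n = H.shape[0]
    for k in range(3, n + 1):
        for cycle in itertools.permutations(range(n), k):
            forward = np.prod([H[cycle[(m + 1) % k], cycle[m]] for m in range(k)])
            reverse = np.prod([H[cycle[m], cycle[(m + 1) % k]] for m in range(k)])
            if abs(forward - reverse) > tol:
                return False
    return True

# Hypothetical example: a 3-state chain with no direct A--C edge satisfies the criterion.
H = np.array([[-1.0, 2.0, 0.0],
              [ 1.0,-4.0, 1.0],
              [ 0.0, 2.0,-1.0]])
print(kolmogorov_holds(H))   # True: every 3-cycle uses the missing edge, so both products vanish
```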
A non-equilibrium steady state is a steady state in which the net flow between at least one pair of states is non-zero. Thus, there could be population flowing between pairs of states, but in such a way that these flows still yield constant populations at all states. In a closed Markov process, the existence of non-equilibrium steady states requires that the rates of the Markov process violate Kolmogorov’s criterion. We show that open Markov processes with constant boundary populations admit non-equilibrium steady states even when the rates of the process satisfy Kolmogorov’s criterion. Throughout this paper, we use the term equilibrium to mean detailed balanced equilibrium.

3. Membrane Diffusion as an Open Markov Process

To illustrate these ideas, we consider a simple model of the diffusion of neutral particles across a membrane as an open detailed balanced Markov process $(V, B, H, q)$ with three states $V = \{A, B, C\}$, input $A$ and output $C$. The states $A$ and $C$ correspond to the two sides of the membrane, while $B$ corresponds to the interior of the membrane itself, see Figure 1.
In this model, $p_A$ is the number of particles on one side of the membrane, $p_B$ the number of particles within the membrane and $p_C$ the number of particles on the other side of the membrane. The off-diagonal entries in the Hamiltonian $H_{ij}$, $i \neq j$, are the rates at which population hops from $j$ to $i$. For example, $H_{AB}$ is the rate at which population moves from $B$ to $A$, i.e., from the interior of the membrane to the side labeled $A$. Let us assume that the membrane is symmetric in the sense that the rate at which particles hop from outside of the membrane to the interior is the same on either side, i.e., $H_{BA} = H_{BC} = H_{\mathrm{in}}$ and $H_{AB} = H_{CB} = H_{\mathrm{out}}$. We can draw such an open Markov process as a labeled graph, see for instance Figure 2.
The labels on the edges are the corresponding transition rates. The states are labeled by their detailed balanced equilibrium populations, which, up to an overall scaling, are given by $q_A = q_C = H_{\mathrm{in}} H_{\mathrm{out}}$ and $q_B = H_{\mathrm{in}}^2$. Suppose the populations $p_A$ and $p_C$ are externally maintained at constant values, i.e., whenever a particle diffuses from outside the cell into the membrane, the environment around the cell provides another particle, and similarly when particles move from inside the membrane to the outside. We call $(p_A, p_C)$ the boundary populations. Given the values of $p_A$ and $p_C$, the steady state population $p_B$ compatible with these values is
$$p_B = \frac{H_{\mathrm{in}} p_A + H_{\mathrm{in}} p_C}{-H_{BB}} = \frac{H_{\mathrm{in}}}{H_{\mathrm{out}}} \, \frac{p_A + p_C}{2}.$$
In Section 5, we show that this steady state population minimizes the dissipation, subject to the constraints on p A and p C .
We thus have a non-equilibrium steady state $p = (p_A, p_B, p_C)$ with $p_B$ given in terms of the boundary populations above. From these values, we can compute the boundary flows $J_A$ and $J_C$ as
$$J_A = \sum_j J_{Aj}(p) = H_{\mathrm{out}} p_B - H_{\mathrm{in}} p_A$$
and
$$J_C = \sum_j J_{Cj}(p) = H_{\mathrm{out}} p_B - H_{\mathrm{in}} p_C.$$
Written in terms of the boundary populations, this gives
$$J_A = \frac{H_{\mathrm{in}} (p_C - p_A)}{2}$$
and
$$J_C = \frac{H_{\mathrm{in}} (p_A - p_C)}{2}.$$
Note that $J_A = -J_C$, implying that there is a constant net flow through the open Markov process. As one would expect, if $p_A > p_C$ there is a positive flow from $A$ to $C$, and vice versa. Of course, in actual membranes there exist much more complex transport mechanisms than the simple diffusion model presented here. A number of authors have modeled more complicated transport phenomena using the framework of networked master equation systems [3,4].
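These steady state formulas are easy to verify numerically. The sketch below (not from the paper) builds the membrane Hamiltonian for hypothetical values of $H_{\mathrm{in}}$ and $H_{\mathrm{out}}$, checks that the $p_B$ given above makes state $B$ steady, and confirms that the boundary flows satisfy $J_A = -J_C$.

```python
import numpy as np

# A sketch (not from the paper) checking the membrane steady state and boundary
# flows for hypothetical rates H_in, H_out.
H_in, H_out = 2.0, 0.5
# States ordered (A, B, C); H[i, j] is the rate from j to i.
H = np.array([[-H_in,   H_out,  0.0 ],
              [ H_in, -2*H_out, H_in],
              [ 0.0,    H_out, -H_in]])

p_A, p_C = 5.0, 1.0                              # externally fixed boundary populations
p_B = (H_in / H_out) * (p_A + p_C) / 2           # steady state claimed in the text
p = np.array([p_A, p_B, p_C])

assert np.isclose((H @ p)[1], 0.0)               # dp_B/dt = 0: state B really is steady
J_A = H_out * p_B - H_in * p_A                   # net inflow at A from inside the process
J_C = H_out * p_B - H_in * p_C
assert np.isclose(J_A, H_in * (p_C - p_A) / 2)
assert np.isclose(J_C, H_in * (p_A - p_C) / 2)
assert np.isclose(J_A, -J_C)                     # constant throughput across the membrane
print(J_A, J_C)
```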
In our framework, we call the collection of all boundary population-flow pairs the steady state “behavior” of the open Markov process. In recent work [5], Baez, Fong and the author construct a functor from the category DetBalMark of open detailed balanced Markov processes to the category LinRel of linear relations. Applied to an open detailed balanced Markov process, this functor yields the set of allowed steady state boundary population-flow pairs. One can imagine a situation in which only the populations and flows of boundary states are observable, thus characterizing a process in terms of its behavior. This provides an effective “black-boxing” of open detailed balanced Markov processes.
As morphisms in a category, open detailed balanced Markov processes can be composed, thereby building up more complex processes from these open building blocks. The fact that “black-boxing” is accomplished via a functor means that the behavior of a composite Markov process can be computed as the composite of the behaviors of the open Markov processes from which it is built. In this paper, we illustrate how this framework can be utilized to study linear master equation systems far from equilibrium, with a particular emphasis on the modeling of biological phenomena.
Markovian or master equation systems have a long history of being used to model and understand biological systems. We make no attempt to provide a complete review of this line of work. Schnakenberg, in his paper on networked master equation systems, defines the entropy production in a Markov process and shows that a quantity related to entropy serves as a Lyapunov function for master equation systems [6]. His book [4] provides a number of biochemical applications of networked master equation systems. Oster, Perelson and Katchalsky developed a theory of “networked thermodynamics” [7], which they went on to apply to the study of biological systems [3]. Perelson and Oster went on to extend this work into the realm of chemical reactions [8].
Starting in the 1970s, T. L. Hill spearheaded a line of research focused on what he called “free energy transduction” in biology. A shortened and updated form of his 1977 text on the subject [9] was republished in 2005 [10]. Hill applied various techniques, such as the use of the cycle basis, in the analysis of biological systems. His model of muscle contraction provides one example [11].
One quantity central to the study of non-equilibrium systems is the rate of entropy production [12,13,14,15]. Prigogine’s principle of minimum entropy production [16] asserts that for non-equilibrium steady states that are near equilibrium, entropy production is minimized. This is an approximate principle that is obtained by linearizing the relevant equations about an equilibrium state. In fact, for open detailed balanced Markov processes, non-equilibrium steady states are governed by a different minimum principle that holds exactly, arbitrarily far from equilibrium. We show that for fixed boundary conditions, non-equilibrium steady states minimize a quantity we call “dissipation”. If the populations of the non-equilibrium steady state are close to the population of the underlying detailed balanced equilibrium, one can show that dissipation is close to the rate of change of relative entropy plus a boundary term. Dissipation is in fact related to the Glansdorff–Prigogine criterion, which states that a non-equilibrium steady state is stable if the second order variation of the entropy production is non-negative [6,12].
Many of the mathematical results underlying the theory of non-equilibrium steady states can be found in the book by D. Jiang, M. Qian and M.P. Qian [17]. More recently, results concerning fluctuations have been extended to master equation systems [18]. In the past two decades, H. Qian of the University of Washington and collaborators have published numerous results on non-equilibrium thermodynamics, biology and related topics [19,20,21].
This paper is part of a larger project which uses category theory to unify a variety of diagrammatic approaches found across the sciences including, but not limited to, electrical circuits, control theory and bond graphs [22,23]. We hope that the categorical approach will shed new light on each of these subjects as well as their interrelation, particularly as we generalize the results presented in this and recent papers to the more general, non-linear, setting of open chemical reaction networks.

4. The Category of Open Detailed Balanced Markov Processes

In this section, we describe how open detailed balanced Markov processes are the morphisms in a certain type of symmetric, monoidal, dagger-compact category. In previous work, Baez, Fong and the author [5] used the framework of decorated cospans [24] to construct the category DetBalMark . Here, we give an intuitive description of this category and refer to those papers for the mathematical details.
An object in DetBalMark is a finite set with populations, i.e., a finite set $X$ together with a map $p_X : X \to [0, \infty)$ assigning a population $p_i \in [0, \infty)$ to each element $i \in X$. A morphism $M : (X, p_X) \to (Y, p_Y)$ consists of an open detailed balanced Markov process $(V, B, H, q)$ together with input and output maps $i : X \to V$ and $o : Y \to V$ which preserve population, i.e., $p_X = q \circ i$ and $p_Y = q \circ o$. The union of the images of the input and output maps forms the boundary of the open Markov process, $B = i(X) \cup o(Y)$.
One can draw an open detailed balanced Markov process as a labeled directed graph whose vertices are labeled by their equilibrium populations and with specified subsets of the vertices as the input and the output states. Recall our simple model of membrane diffusion as an open detailed balanced Markov process, which we draw in Figure 3 as a morphism from the input $X = \{A\}$ to the output $Y = \{C\}$.
This is a morphism in DetBalMark from X to Y where X and Y are finite sets with populations. In this simple example, X and Y both contain a single element, namely A and C respectively. Suppose we had another such membrane as depicted in Figure 4.
This is a morphism in DetBalMark with input $Y' = \{C'\}$ and output $Z = \{E\}$. Two open detailed balanced Markov processes can be composed if the detailed balanced equilibrium populations at the outputs of one match the detailed balanced equilibrium populations at the inputs of the other. This requirement guarantees that the composite of two open detailed balanced Markov processes still admits a detailed balanced equilibrium, see Figure 5.
If $q_{C'} = q_C$ in our two membrane models, we can compose them by identifying $C'$ with $C$ to yield an open detailed balanced Markov process modeling the diffusion of neutral particles across membranes arranged in series, see Figure 6.
Notice that the states corresponding to $C$ and $C'$ in each process have been identified and become internal states in the composite, which is a morphism from $X = \{A\}$ to $Z = \{E\}$. This open Markov process can be thought of as modeling the diffusion across two membranes in series, see Figure 7.
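One way to realize this composition concretely at the level of Hamiltonians is sketched below: each membrane’s Hamiltonian is zero-padded to the composite state set $\{A, B, C, D, E\}$ and the two are added, so that the outflow rates from the glued state $C \cong C'$ accumulate on its diagonal entry. This is our own illustrative reading of the graph-gluing composition, not code from the paper; the rates are hypothetical and, since both membranes carry the same rates, the equilibrium populations at $C$ and $C'$ automatically match.

```python
import numpy as np

# A sketch (not from the paper) of assembling the composite Hamiltonian when two
# membrane models are glued along the identified state C ~ C'.
states = ["A", "B", "C", "D", "E"]
H_in, H_out = 2.0, 0.5

def embed(local_H, local_states):
    """Zero-pad a Hamiltonian on a subset of states into the full state set."""
    n = len(states)
    full = np.zeros((n, n))
    idx = [states.index(s) for s in local_states]
    for a, i in enumerate(idx):
        for b, j in enumerate(idx):
            full[i, j] = local_H[a, b]
    return full

membrane = np.array([[-H_in,   H_out,  0.0 ],     # exterior -> interior -> exterior chain
                     [ H_in, -2*H_out, H_in],
                     [ 0.0,    H_out, -H_in]])

H_left  = embed(membrane, ["A", "B", "C"])        # first membrane, boundary {A, C}
H_right = embed(membrane, ["C", "D", "E"])        # second membrane, boundary {C, E}
H_composite = H_left + H_right                    # glued process with boundary {A, E}

assert np.allclose(H_composite.sum(axis=0), 0.0)  # still infinitesimal stochastic
print(H_composite)
```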
One can “black-box” an open detailed balanced Markov process by converting it into an electrical circuit, applying the already known black-boxing functor for electrical circuits [23] and translating the result back into the language of open Markov processes [5]. The key step in this process is the construction of a quadratic form which we call “dissipation”, analogous to power in electrical circuits, which is minimized when the populations of an open Markov process are in a steady state.

5. Principle of Minimum Dissipation

Here, we show that by externally fixing the populations at boundary states, one induces steady states which minimize a quadratic form which we call “dissipation”.
Definition 1. 
Given an open detailed balanced Markov process, we define the dissipation functional of a population distribution $p$ to be
$$D(p) = \frac{1}{2} \sum_{i,j} H_{ij} q_j \left( \frac{p_j}{q_j} - \frac{p_i}{q_i} \right)^2.$$
Given boundary populations $b \in [0, \infty)^B$, we can minimize this functional over all $p$ which agree with $b$ on the boundary. Differentiating the dissipation functional with respect to an internal population, we get
$$\frac{\partial D(p)}{\partial p_n} = -\frac{2}{q_n} \sum_j H_{nj} p_j.$$
Multiplying by $-\frac{q_n}{2}$ yields
$$-\frac{q_n}{2} \frac{\partial D(p)}{\partial p_n} = \sum_j H_{nj} p_j,$$
where we recognize the right-hand side from the open master equation for internal states. We see that, for fixed boundary populations, the condition for $p$ to be a steady state, namely that
$$\frac{dp_i}{dt} = 0 \quad \text{for all } i \in V,$$
is equivalent to the condition that
$$\frac{\partial D(p)}{\partial p_n} = 0 \quad \text{for all } n \in V \setminus B.$$
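The following sketch (illustrative only, not from the paper) checks this principle numerically for the membrane model: it minimizes $D(p)$ over the internal population with the boundary populations held fixed, using a generic optimizer, and compares the minimizer with the steady state found in Section 3. The specific rates and boundary values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# A sketch (not from the paper): minimize the dissipation D(p) over the internal
# population with the boundary populations fixed, and compare with the steady state.
H_in, H_out = 2.0, 0.5
H = np.array([[-H_in,   H_out,  0.0 ],
              [ H_in, -2*H_out, H_in],
              [ 0.0,    H_out, -H_in]])
q = np.array([H_in * H_out, H_in**2, H_in * H_out])   # detailed balanced equilibrium
boundary, internal = [0, 2], [1]
b = np.array([5.0, 1.0])                               # fixed populations at A and C

def dissipation(p):
    r = p / q
    return 0.5 * sum(H[i, j] * q[j] * (r[j] - r[i])**2
                     for i in range(3) for j in range(3))

def D_of_internal(x):
    p = np.empty(3)
    p[boundary] = b
    p[internal] = x
    return dissipation(p)

res = minimize(D_of_internal, np.array([1.0]))         # unconstrained over the internal state
p_B_min = res.x[0]
p_B_steady = (H_in / H_out) * (b[0] + b[1]) / 2        # steady state from Section 3
print(p_B_min, p_B_steady)                             # the two should agree
assert np.isclose(p_B_min, p_B_steady, rtol=1e-4)
```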
Definition 2. 
We say a population distribution $p$ obeys the principle of minimum dissipation with boundary population $b$ if $p$ minimizes $D(p)$ subject to the constraint that $p|_B = b$.
With this, we can state the following theorem:
Theorem 3. 
A population distribution $p \in \mathbb{R}^V$ is a steady state with boundary population $b \in \mathbb{R}^B$ if and only if $p$ obeys the principle of minimum dissipation with boundary population $b$.
Proof. 
This follows from Theorem 28 in [5].  ☐
Given specified boundary populations, one can compute the steady state boundary flows by minimizing the dissipation subject to the boundary conditions.
Definition 4. 
We call a population-flow pair a steady state population-flow pair if the flows arise from a population distribution which obeys the principle of minimum dissipation.
Definition 5. 
The behavior of an open detailed balanced Markov process with boundary $B$ is the set of all steady state population-flow pairs $(p_B, J_B)$ along the boundary.
Indeed, there is a functor from DetBalMark to LinRel which maps open detailed balanced Markov processes to their steady state behaviors. This is the main result of our previous paper [5]. The fact that this is a functor means that the behavior of a composite open detailed balanced Markov process can be computed as the composite of the behaviors.

6. Dissipation and Entropy Production

In the last section, we saw that non-equilibrium steady states with fixed boundary populations minimize the dissipation. In this section, we relate the dissipation to a divergence between population distributions known in various circles as the relative entropy, relative information or the Kullback–Leibler divergence. The relative entropy is not symmetric and violates the triangle inequality, which is why it is called a “divergence” rather than a metric, or distance function. We show that for population distributions near a detailed balanced equilibrium, the rate of change of the relative entropy is approximately equal to the dissipation plus a “boundary term”.
The relative entropy of two distributions $p, q$ is given by
$$I(p, q) = \sum_i p_i \ln\left( \frac{p_i}{q_i} \right).$$
It is well known that, for a closed Markov process admitting a detailed balanced equilibrium, the relative entropy with respect to this detailed balanced equilibrium distribution is monotonically decreasing with time, see for instance [2]. There is an unfortunate sign convention in the definition of relative entropy: while entropy is typically increasing, relative entropy typically decreases. More generally, the relative entropy between any two population distributions is non-increasing in a closed Markov process.
In an open Markov process, the sign of the rate of change of relative entropy is indeterminate. Consider an open Markov process $(V, B, H)$. For any two population distributions $p(t)$ and $q(t)$ which obey the open master equation, let us introduce the quantities
$$\frac{Dp_i}{Dt} = \frac{dp_i}{dt} - \sum_{j \in V} H_{ij} p_j$$
and
$$\frac{Dq_i}{Dt} = \frac{dq_i}{dt} - \sum_{j \in V} H_{ij} q_j,$$
which measure the rate at which population flows into the $i$th state from outside the system. These quantities are sometimes referred to as boundary fluxes. Notice that $\frac{Dp_i}{Dt} = 0$ for $i \in V \setminus B$, as the populations of internal states evolve according to the master equation. In terms of these quantities, the rate of change of relative entropy for an open Markov process can be written as
$$\frac{d}{dt} I(p(t), q(t)) = \sum_{i,j \in V} H_{ij} p_j \left( \ln\left( \frac{p_i}{q_i} \right) - \frac{p_i q_j}{q_i p_j} \right) + \sum_{i \in B} \left( \frac{Dp_i}{Dt} \frac{\partial I}{\partial p_i} + \frac{Dq_i}{Dt} \frac{\partial I}{\partial q_i} \right).$$
The first term is the rate of change of relative entropy for a closed Markov process. This is less than or equal to zero [25,26]. Thus, the rate of change of relative entropy in an open Markov process satisfies
$$\frac{d}{dt} I(p(t), q(t)) \leq \sum_{i \in B} \left( \frac{Dp_i}{Dt} \frac{\partial I}{\partial p_i} + \frac{Dq_i}{Dt} \frac{\partial I}{\partial q_i} \right).$$
This inequality tells us that the rate of change of relative entropy in an open Markov process is bounded by the rate at which relative entropy flows through its boundary. If $q$ is an equilibrium solution of the master equation,
$$\frac{dq}{dt} = Hq = 0,$$
then the rate of change of relative entropy can be written as
$$\frac{d}{dt} I(p(t), q) = \frac{1}{2} \sum_{i,j \in V} \left( H_{ij} p_j - H_{ji} p_i \right) \ln\left( \frac{p_i q_j}{q_i p_j} \right) + \sum_{i \in B} \frac{Dp_i}{Dt} \frac{\partial I}{\partial p_i}.$$
Furthermore, if $q$ satisfies detailed balance, we can write this as
$$\frac{d}{dt} I(p(t), q) = -\frac{1}{2} \sum_{i,j \in V} J_{ij} A_{ij} + \sum_{i \in B} \frac{Dp_i}{Dt} \frac{\partial I}{\partial p_i},$$
where
$$J_{ij}(p) = H_{ij} p_j - H_{ji} p_i$$
is the thermodynamic flux from $j$ to $i$ and
$$A_{ij}(p) = \ln\left( \frac{H_{ij} p_j}{H_{ji} p_i} \right)$$
is the conjugate thermodynamic force. The quantity
$$\frac{1}{2} \sum_{i,j \in V} J_{ij} A_{ij}$$
is what Schnakenberg calls “the rate of entropy production” [6]. This is always non-negative. Note that due to the sign convention in the definition of relative entropy, in the absence of the boundary term, a positive rate of entropy production corresponds to a decreasing relative entropy.
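For a single open detailed balanced Markov process with fixed boundary populations and constant $q$, this decomposition can be checked numerically. The sketch below (our own illustration, with hypothetical rates and populations) computes $\frac{d}{dt}I$ directly from the open master equation and compares it with minus the entropy production plus the boundary term.

```python
import numpy as np

# A sketch (not from the paper): check dI/dt = -(1/2) sum_ij J_ij A_ij + boundary term
# for the membrane model, with boundary populations fixed and q the detailed balanced
# equilibrium.
H_in, H_out = 2.0, 0.5
H = np.array([[-H_in,   H_out,  0.0 ],
              [ H_in, -2*H_out, H_in],
              [ 0.0,    H_out, -H_in]])
q = np.array([H_in * H_out, H_in**2, H_in * H_out])
boundary = [0, 2]
p = np.array([5.0, 7.0, 1.0])            # some population distribution, boundary held fixed
flow = H @ p                             # what the master equation would give for dp_i/dt

# Left-hand side: dI/dt = sum over internal states of (dp_i/dt)(ln(p_i/q_i) + 1),
# since the boundary populations and q are constant in time.
lhs = sum(flow[i] * (np.log(p[i] / q[i]) + 1.0) for i in range(3) if i not in boundary)

# Right-hand side: -(1/2) sum_ij J_ij A_ij plus the boundary flow of relative entropy,
# with Dp_i/Dt = 0 - (H p)_i at the clamped boundary states.
entropy_production = 0.0
for i in range(3):
    for j in range(3):
        if i != j and H[i, j] > 0 and H[j, i] > 0:
            J = H[i, j] * p[j] - H[j, i] * p[i]
            A = np.log((H[i, j] * p[j]) / (H[j, i] * p[i]))
            entropy_production += 0.5 * J * A
boundary_term = sum(-flow[i] * (np.log(p[i] / q[i]) + 1.0) for i in boundary)
rhs = -entropy_production + boundary_term

print(lhs, rhs)
assert np.isclose(lhs, rhs)
```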
We shall shortly relate the rate of change of relative entropy to the dissipation for open detailed balanced Markov processes, but first let us consider the quantity $A_{ij}(p)$. It is the entropy production per unit flow from $j$ to $i$. If $J_{ij}(p) > 0$, i.e., if there is a positive net flow of population from $j$ to $i$, then $A_{ij}(p) > 0$. In addition, $J_{ij}(p) = 0$ implies that $A_{ij}(p) = 0$. Thus, we see that this form of entropy production is, by definition, non-negative.
We can understand $A_{ij}(p)$ as the force resulting from a difference in chemical potential. Let us elaborate on this point to clarify the relation of our framework to the language of chemical potentials used in non-equilibrium thermodynamics. Markov processes are special cases of chemical reactions obeying mass action kinetics in which each reaction is unimolecular. Let us assume that we are dealing with only unimolecular reactions and that our system is an ideal mixture, so that the chemical potential $\mu_i$ associated to the $i$th state or species is given by
$$\mu_i = \mu_i^o + T \ln(x_i),$$
where $T$ is the temperature of the system in units where Boltzmann’s constant is equal to one, $\mu_i^o$ is some reference chemical potential of the $i$th species and $x_i = \frac{n_i}{\sum_i n_i}$ is the molar fraction of the $i$th species, with $n_i$ giving the number of moles of the $i$th species [13]. Note that this is equal to the fraction of the population in the $i$th state, $x_i = \frac{n_i}{\sum_i n_i} = \frac{p_i}{\sum_i p_i}$. The difference in chemical potential between two states gives the force associated with the flow which seeks to reduce this difference in chemical potential:
$$\mu_j - \mu_i = \mu_j^o - \mu_i^o + T \ln\left( \frac{p_j}{p_i} \right).$$
This potential difference vanishes when $p_i$ and $p_j$ are in equilibrium, and we have
$$0 = \mu_j^o - \mu_i^o + T \ln\left( \frac{q_j}{q_i} \right),$$
or that
$$\frac{q_j}{q_i} = e^{-\frac{\mu_j^o - \mu_i^o}{T}}.$$
If the equilibrium distribution $q$ satisfies detailed balance, then this also gives an expression for the ratio of the transition rates $\frac{H_{ji}}{H_{ij}}$ in terms of the standard chemical potentials. Thus, we can translate between differences in chemical potential and ratios of populations via the relation
$$\mu_j - \mu_i = T \ln\left( \frac{p_j q_i}{q_j p_i} \right),$$
which, if $q$ satisfies detailed balance, gives
$$\mu_j - \mu_i = T \ln\left( \frac{H_{ij} p_j}{H_{ji} p_i} \right).$$
We recognize the right hand side as the force $A_{ij}(p)$ times the temperature of the system $T$:
$$\frac{\mu_j - \mu_i}{T} = A_{ij}(p).$$
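As a small consistency check (our own illustration, not from the paper), one can pick reference potentials $\mu_i^o = -T \ln q_i$, which reproduces the equilibrium ratios above up to an irrelevant additive constant, and verify numerically that $(\mu_j - \mu_i)/T$ agrees with $A_{ij}(p)$; the rates and populations below are hypothetical.

```python
import numpy as np

# A sketch (not from the paper): check A_ij = (mu_j - mu_i)/T with T = 1 and
# reference potentials mu_i^o = -ln(q_i). Only differences of mu matter, so the
# overall normalization of populations and the common constant in mu_i^o drop out.
H_in, H_out = 2.0, 0.5
H = np.array([[-H_in,   H_out,  0.0 ],
              [ H_in, -2*H_out, H_in],
              [ 0.0,    H_out, -H_in]])
q = np.array([H_in * H_out, H_in**2, H_in * H_out])
p = np.array([5.0, 7.0, 1.0])

T = 1.0
mu = -np.log(q) + T * np.log(p)                  # mu_i = mu_i^o + T ln p_i, with mu_i^o = -ln q_i

i, j = 0, 1                                      # the pair (A, B)
A_ij = np.log((H[i, j] * p[j]) / (H[j, i] * p[i]))
assert np.isclose((mu[j] - mu[i]) / T, A_ij)
print(A_ij)
```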
Let us return to our expression for $\frac{d}{dt} I(p(t), q)$ where $q$ is an equilibrium distribution:
$$\frac{d}{dt} I(p(t), q) = -\frac{1}{2} \sum_{i,j \in V} \left( H_{ij} p_j - H_{ji} p_i \right) \ln\left( \frac{q_i p_j}{q_j p_i} \right) + \sum_{i \in B} \frac{Dp_i}{Dt} \frac{\partial I}{\partial p_i}.$$
Consider the situation in which $p$ is near to the equilibrium distribution $q$, and let $\epsilon_i$ denote the deviation of the ratio $\frac{p_i}{q_i}$ from unity, so that
$$\frac{p_i}{q_i} = 1 + \epsilon_i.$$
We collect these deviations in a vector denoted by $\epsilon$. Expanding the logarithm to first order in $\epsilon$, we have
$$\frac{d}{dt} I(p(t), q) = -\frac{1}{2} \sum_{i,j \in V} \left( H_{ij} p_j - H_{ji} p_i \right) \left( \epsilon_j - \epsilon_i \right) + \sum_{i \in B} \frac{Dp_i}{Dt} \frac{\partial I}{\partial p_i} + O(\epsilon^2),$$
which gives
$$\frac{d}{dt} I(p(t), q) = -\frac{1}{2} \sum_{i,j \in V} \left( H_{ij} p_j - H_{ji} p_i \right) \left( \frac{p_j}{q_j} - \frac{p_i}{q_i} \right) + \sum_{i \in B} \frac{Dp_i}{Dt} \frac{\partial I}{\partial p_i} + O(\epsilon^2).$$
By $O(\epsilon^2)$, we mean a sum of terms of order $\epsilon_i^2$. When $q$ is a detailed balanced equilibrium, we can rewrite this quantity as
$$\frac{d}{dt} I(p(t), q) = -\frac{1}{2} \sum_{i,j} H_{ij} q_j \left( \frac{p_j}{q_j} - \frac{p_i}{q_i} \right)^2 + \sum_{i \in B} \frac{Dp_i}{Dt} \frac{\partial I}{\partial p_i} + O(\epsilon^2).$$
We recognize the first term as the negative of the dissipation $D(p)$, which yields
$$\frac{d}{dt} I(p(t), q) = -D(p) + \sum_{i \in B} \frac{Dp_i}{Dt} \frac{\partial I}{\partial p_i} + O(\epsilon^2).$$
We see that for open Markov processes, minimizing the dissipation approximately minimizes the rate of decrease of relative entropy plus a term which depends on the boundary populations. In the case that the boundary populations are held fixed, so that $\frac{dp_i}{dt} = 0$ for $i \in B$, we have
$$\frac{Dp_i}{Dt} = -\sum_{j \in V} H_{ij} p_j, \quad i \in B.$$
In this case, the rate of change of relative entropy can be written as
$$\frac{d}{dt} I(p(t), q) = \sum_{i \in V \setminus B} \frac{dp_i}{dt} \, \frac{p_i}{q_i} + O(\epsilon^2).$$
Summarizing the results of this section, we have that for $p$ arbitrarily far from the detailed balanced equilibrium $q$, the rate of change of relative entropy can be written as
$$\frac{d I(p(t), q)}{dt} = -\frac{1}{2} \sum_{i,j} J_{ij}(p) A_{ij}(p) + \sum_{i \in B} \frac{Dp_i}{Dt} \frac{\partial I}{\partial p_i}.$$
For $p$ in the vicinity of a detailed balanced equilibrium, we have
$$\frac{d I(p(t), q)}{dt} = -D(p) + \sum_{i \in B} \frac{Dp_i}{Dt} \frac{\partial I}{\partial p_i} + O(\epsilon^2),$$
where $D(p)$ is the dissipation and $\epsilon_i = \frac{p_i}{q_i} - 1$ measures the deviation of the population $p_i$ from its equilibrium value. We have seen that in a non-equilibrium steady state with fixed boundary populations, dissipation is minimized. We showed that for steady states near equilibrium, the rate of change of relative entropy is approximately equal to minus the dissipation plus a boundary term. Minimum dissipation coincides with minimum entropy production only in the limit $\epsilon \to 0$.
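The near-equilibrium statement can be probed numerically. In the sketch below (illustrative only, not from the paper), we place $p$ at a distance $\epsilon$ from the detailed balanced equilibrium of the membrane model along an arbitrary direction and watch the residual $\frac{d}{dt}I - \left(-D(p) + \text{boundary term}\right)$ shrink with $\epsilon$; it decays at least as fast as $\epsilon^2$, consistent with the error term above.

```python
import numpy as np

# A sketch (not from the paper): check that, near the detailed balanced equilibrium,
# dI/dt differs from -D(p) plus the boundary term by terms that vanish at least as
# fast as epsilon^2.
H_in, H_out = 2.0, 0.5
H = np.array([[-H_in,   H_out,  0.0 ],
              [ H_in, -2*H_out, H_in],
              [ 0.0,    H_out, -H_in]])
q = np.array([H_in * H_out, H_in**2, H_in * H_out])
boundary = [0, 2]
direction = np.array([1.0, -0.5, 0.3])          # an arbitrary direction for the deviation

def residual(eps):
    p = q * (1.0 + eps * direction)             # p_i/q_i = 1 + eps_i
    flow = H @ p
    dIdt = sum(flow[i] * (np.log(p[i] / q[i]) + 1.0)
               for i in range(3) if i not in boundary)
    r = p / q
    D = 0.5 * sum(H[i, j] * q[j] * (r[j] - r[i])**2
                  for i in range(3) for j in range(3))
    boundary_term = sum(-flow[i] * (np.log(p[i] / q[i]) + 1.0) for i in boundary)
    return dIdt - (-D + boundary_term)

for eps in [1e-1, 1e-2, 1e-3]:
    print(eps, residual(eps))                   # residual shrinks at least as fast as eps**2
```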

7. Minimum Dissipation versus Minimum Entropy Production

We return to our simple three-state example of membrane transport to illustrate the difference between populations which minimize dissipation and those which minimize entropy production, depicted in Figure 8.
For simplicity, we have set all transition rates equal to one. In this case, the detailed balanced equilibrium distribution is uniform: we take $q_A = q_B = q_C = 1$. If the populations $p_A$ and $p_C$ are externally fixed, then the population $p_B$ which minimizes the dissipation is simply the arithmetic mean of the boundary populations:
$$p_B = \frac{p_A + p_C}{2}.$$
The rate of change of the relative entropy $I(p(t), q)$, where $q$ is the uniform detailed balanced equilibrium, is given by
$$\frac{d}{dt} I(p(t), q) = \underbrace{-(p_A - p_B) \ln\left( \frac{p_A}{p_B} \right) - (p_B - p_C) \ln\left( \frac{p_B}{p_C} \right)}_{-\frac{1}{2} \sum_{i,j \in V} J_{ij} A_{ij}} + \underbrace{(p_A - p_B)\left( \ln(p_A) + 1 \right) + (p_C - p_B)\left( \ln(p_C) + 1 \right)}_{\sum_{i \in B} \frac{Dp_i}{Dt} \frac{\partial I}{\partial p_i}}.$$
Differentiating this quantity with respect to $p_B$ for fixed $p_A$ and $p_C$ yields the condition
$$\frac{p_A + p_C}{2 p_B} - \ln(p_B) - 2 = 0.$$
The solution of this equation gives the population $p_B$ which extremizes the rate of change of relative entropy, namely
$$p_B = \frac{p_A + p_C}{2 \, W\!\left( \frac{(p_A + p_C) e^2}{2} \right)},$$
where $W(x)$ is the Lambert W-function, or omega function, which satisfies the relation
$$W(x) e^{W(x)} = x.$$
The Lambert W-function is defined for $x \geq -\frac{1}{e}$ and double valued for $x \in \left[ -\frac{1}{e}, 0 \right)$. This simple example illustrates the difference between distributions which minimize dissipation subject to boundary constraints and those which extremize the rate of change of relative entropy. For fixed boundary populations, dissipation is minimized in steady states arbitrarily far from equilibrium. For steady states in the neighborhood of the detailed balanced equilibrium, the rate of change of relative entropy is approximately equal to minus the dissipation plus a boundary term.
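For concreteness, the following sketch (not from the paper) evaluates both prescriptions for hypothetical boundary populations, using the Lambert W-function from SciPy; the two values of $p_B$ differ unless the boundary populations are close to the uniform equilibrium.

```python
import numpy as np
from scipy.special import lambertw

# A sketch (not from the paper): compare the internal population p_B that minimizes
# the dissipation with the one that extremizes the rate of change of relative
# entropy, for the uniform-rate model (all rates 1, q_A = q_B = q_C = 1).
p_A, p_C = 5.0, 1.0

p_B_dissipation = (p_A + p_C) / 2                               # arithmetic mean
p_B_entropy = (p_A + p_C) / (2 * np.real(lambertw((p_A + p_C) * np.e**2 / 2)))

# Check that p_B_entropy really solves (p_A + p_C)/(2 p_B) - ln(p_B) - 2 = 0.
assert np.isclose((p_A + p_C) / (2 * p_B_entropy) - np.log(p_B_entropy) - 2, 0.0, atol=1e-10)
print(p_B_dissipation, p_B_entropy)   # the two prescriptions generally disagree
```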

8. Discussion

Treating Markov processes as morphisms in a category leads naturally to open systems which admit non-equilibrium steady states, even when the transition rates of the underlying process satisfy Kolmogorov’s criterion. Microscopically, all reactions should be reversible with perhaps a large disparity between the forward and reverse rates. Nonetheless, it is clear that biological organisms are capable, at least locally, of storing free energy. This is typically accomplished via the interaction with other systems or the environment. In this paper, the environment served as a reservoir maintaining boundary populations at constant values. Since open Markov processes are morphisms in the category DetBalMark , one can compose these open systems, thereby building up complicated systems in a systematic way. We saw that the non-equilibrium steady states which emerge minimize a quadratic form which depends on the deviation of the steady state populations from the populations of the underlying detailed balanced equilibrium. For steady states in the neighborhood of equilibrium, we saw that the dissipation is in fact the linear approximation of the rate of change of relative entropy with respect to a detailed balanced equilibrium plus a boundary term. In our framework, dissipation appears to be the fundamental quantity as it is minimized for non-equilibrium steady states arbitrarily far from equilibrium. There has been much work examining the regime of validity of Prigogine’s principle of minimum entropy production [27,28,29]. In future work, we aim to generalize our framework for composing Markov processes to the non-linear regime of chemical reaction networks with an eye towards incorporating recent interesting results in the area [30]. We anticipate that the perspective achieved by viewing interacting systems as morphisms in a category will bring new insight to the study of living systems far from equilibrium.

Acknowledgments

The author would like to thank John C. Baez for his help developing the ideas presented in this paper and improving the quality and clarity of their exposition. The author also thanks Brendan Fong for many useful discussions as well as Daniel Cicala for his comments on the draft of this article. The author is grateful to the organizers of the Workshop on Information and Entropy in Biology held at the National Institute for Mathematical and Biological Synthesis (NIMBIOS) in Knoxville, TN, USA as well as to NIMBIOS for its support in attending the workshop. Part of this project was completed during the author’s visit to the Centre for Quantum Technologies (CQT) at the National University of Singapore (NUS) which was supported by the NSF’s East Asia and Pacific Summer Institutes Program (EAPSI) in partnership with the National Research Foundation of Singapore (NRF).

Conflicts of Interest

The author declares no conflicts of interest.

References

1. Kingman, J.F.C. Markov population processes. J. Appl. Probab. 1969, 6, 1–18.
2. Kelly, F.P. Reversibility and Stochastic Networks; Cambridge University Press: Cambridge, UK, 2011.
3. Oster, G.; Perelson, A.; Katchalsky, A. Network thermodynamics: Dynamic modeling of biophysical systems. Q. Rev. Biophys. 1973, 1, 1–134.
4. Schnakenberg, J. Thermodynamic Network Analysis of Biological Systems; Springer: Berlin, Germany, 1981.
5. Baez, J.C.; Fong, B.; Pollard, B. A compositional framework for open Markov processes. J. Math. Phys. 2016, 57, 033301.
6. Schnakenberg, J. Network theory of microscopic and macroscopic behavior of master equation systems. Rev. Mod. Phys. 1976, 48, 571–585.
7. Oster, G.; Perelson, A.; Katchalsky, A. Network thermodynamics. Nature 1971, 234, 393–399.
8. Perelson, A.; Oster, G. Chemical reaction networks. IEEE Trans. Circuits Syst. 1974, 21, 709–721.
9. Hill, T.L. Free Energy Transduction in Biology: The Steady-State Kinetic and Thermodynamic Formalism; Academic Press: New York, NY, USA, 1977.
10. Hill, T.L. Free Energy Transduction and Biochemical Cycle Kinetics; Springer-Verlag: New York, NY, USA, 1989.
11. Hill, T.L.; Eisenberg, E. Muscle contraction and free energy transduction in biological systems. Science 1985, 227, 999–1006.
12. Glansdorff, P.; Prigogine, I. Thermodynamic Theory of Structure, Stability and Fluctuations; Wiley-Interscience: New York, NY, USA, 1971.
13. De Groot, S.R.; Mazur, P. Non-Equilibrium Thermodynamics; North-Holland Publishing Company: Amsterdam, The Netherlands, 1962.
14. Lindblad, G. Non-Equilibrium Entropy and Irreversibility; D. Reidel Publishing Company: Dordrecht, The Netherlands, 1983.
15. Prigogine, I. Non-Equilibrium Statistical Mechanics; Interscience Publishers: New York, NY, USA, 1962.
16. Prigogine, I. Étude Thermodynamique des Phénomènes Irréversibles; Dunod: Paris, France, 1947. (In French)
17. Jiang, D.; Qian, M.; Qian, M.P. Mathematical Theory of Nonequilibrium Steady States; Springer: Berlin, Germany, 2004.
18. Andrieux, D.; Gaspard, P. Fluctuation theorem for currents and Schnakenberg network theory. J. Stat. Mech. Theory Exp. 2006, 127, 107–131.
19. Qian, H. Open-system nonequilibrium steady state: Statistical thermodynamics, fluctuations, and chemical oscillations. J. Phys. Chem. B 2006, 31, 15063–15074.
20. Qian, H.; Beard, D.A. Thermodynamics of stoichiometric biochemical networks in living systems far from equilibrium. Biophys. Chem. 2005, 114, 213–220.
21. Qian, H.; Bishop, L. The chemical master equation approach to nonequilibrium steady-state of open biochemical systems: Linear single-molecule enzyme kinetics and nonlinear biochemical reaction networks. Int. J. Mol. Sci. 2010, 11, 3472–3500.
22. Baez, J.C.; Erbele, J. Categories in control. Theory Appl. Categ. 2015, 30, 836–881.
23. Baez, J.C.; Fong, B. A compositional framework for passive linear networks. 2015; arxiv.org/abs/1504.05625.
24. Fong, B. Decorated cospans. Theory Appl. Categ. 2015, 30, 1096–1120.
25. Baez, J.C.; Pollard, B. Relative entropy in biological systems. Entropy 2016, 18, 46.
26. Pollard, B. A Second Law for open Markov processes. Open Syst. Inf. Dyn. 2015, 23, 1650006.
27. Bruers, S.; Maes, C.; Netočný, K. On the validity of entropy production principles for linear electrical circuits. J. Stat. Phys. 2007, 129, 725–740.
28. Landauer, R. Inadequacy of entropy and entropy derivatives in characterizing the steady state. Phys. Rev. A 1975, 12, 636–638.
29. Landauer, R. Stability and entropy production in electrical circuits. J. Stat. Phys. 1975, 13, 1–16.
30. Polettini, M.; Esposito, M. Irreversible thermodynamics of open chemical networks I: Emergent cycles and broken conservation laws. J. Chem. Phys. 2014, 141, 024117.
Figure 1. A simple model for passive diffusion across a membrane.
Figure 2. A depiction of an open Markov process as a labeled, directed graph.
Figure 3. An open detailed balanced Markov process modeling membrane transport.
Figure 4. Another layer of membrane whose interior population is labeled by D and whose exterior populations are labeled by C′ and E.
Figure 5. Membranes arranged in series modeled as an open detailed balanced Markov process.
Figure 6. Composition of open detailed balanced Markov processes results in an open detailed balanced Markov process.
Figure 7. A depiction of two membranes arranged in series.
Figure 8. A model of passive transport across a membrane where all transition rates are set equal.
