Article

Maximizing Free Energy Gain

1 Department of Medicine and Life Sciences, Universitat Pompeu Fabra, 08003 Barcelona, Spain
2 Physics and Electrical Engineering, Duke University, Durham, NC 27708, USA
3 School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
4 Yau Mathematical Sciences Center, Tsinghua University, Beijing 100084, China
5 Department of Mathematics, Center for Theoretical Physics and CSAIL, MIT, Cambridge, MA 02139, USA
6 IBM Quantum Almaden, San Jose, CA 95120, USA
7 Department of Physics, MIT, Cambridge, MA 02139, USA
8 Sandia National Laboratory, Albuquerque, NM 87123, USA
9 Santa Fe Institute, Santa Fe, NM 87501, USA
10 Center for Bio-Social Complex Systems, Arizona State University, Tempe, AZ 85287, USA
11 Department of Mechanical Engineering, MIT, Cambridge, MA 02139, USA
* Author to whom correspondence should be addressed.
Entropy 2025, 27(1), 91; https://doi.org/10.3390/e27010091
Submission received: 6 November 2024 / Revised: 26 December 2024 / Accepted: 30 December 2024 / Published: 20 January 2025
(This article belongs to the Section Statistical Physics)

Abstract
Maximizing the amount of work harvested from an environment is important for a wide variety of biological and technological processes, from energy-harvesting processes such as photosynthesis to energy storage systems such as fuels and batteries. Here, we consider the maximization of free energy—and by extension, the maximum extractable work—that can be gained by a classical or quantum system that undergoes driving by its environment. We consider how the free energy gain depends on the initial state of the system while also accounting for the cost of preparing the system. We provide simple necessary and sufficient conditions for increasing the gain of free energy by varying the initial state. We also derive simple formulae that relate the free energy gained using the optimal initial state to that gained using a suboptimal initial state. Finally, we demonstrate that the problem of finding the optimal initial state may have two distinct regimes, one easy and one difficult, depending on the temperatures used for preparation and work extraction. We illustrate our results on a simple model of an information engine.

1. Introduction

The last few decades have seen a revolution in non-equilibrium statistical mechanics [1,2,3,4,5,6] with the realization that many thermodynamic processes are governed by exact relations such as the Jarzynski equality [1] and the Crooks fluctuation theorem [2]. An example of such a result can be found in ref. [7], which derives expressions governing the work dissipated by a system undergoing some driven process. Specifically, a simple formula is derived that relates the actual amount of dissipated work to the minimum possible amount, as a function of the initial statistical state of the process.
In this paper, we consider a physical system that undergoes an interaction with its environment as described by a driven classical or quantum–mechanical process. By extending the result mentioned above [7], we calculate how much non-equilibrium free energy the system gains during this interaction. We also consider how the gain of free energy can be optimized by a judicious choice of the initial state. Optimizing the gain of free energy is physically meaningful, because the free energy gain sets a bound on the amount of work that the system can extract and store by interacting with a thermal environment: the maximum amount of work that can be extracted is equal to the non-equilibrium free energy minus the free energy at thermal equilibrium.
As a motivating example, consider a photosynthetic organism: before the sun rises in the morning, the organism must invest resources in order to prepare its photosynthetic machinery for harvesting free energy from the sun. When the sun sets in the evening, it stops photosynthesizing and uses the harvested free energy to survive the night, reproduce, etc. All else being equal, the organism should prepare its photosynthetic machinery in the state that maximizes the gain of free energy, since this will typically translate into higher fitness.
In the next section, we formulate our general setup and use it to calculate the free energy gain as a function of the initial state. Importantly, our calculation takes into account the preparation of the initial state as well as the extraction of free energy into a work reservoir. State preparation and work extraction may utilize external heat baths, possibly at two different temperatures.
In our first result, we derive simple necessary and sufficient conditions to guarantee that for a given interaction with the environment, free energy gain can be optimized by varying the initial state. We then derive a simple information–theoretic formula that describes the dependence of the free energy gain on the initial state. Using this formula, we relate the free energy gained when the process begins in an optimal initial state to that gained when the process begins in a suboptimal initial state. Finally, we show that the problem of identifying the optimal initial state exhibits two distinct regimes, depending on the bath temperatures involved in state preparation versus work extraction. When work extraction happens at a lower temperature than preparation, the problem involves the maximization of a concave function, and it can be easily solved by gradient ascent. In this regime, a biological species in which each successive generation harvests more free energy is headed for the global maximum. On the other hand, when work extraction happens at a higher temperature than preparation, the objective may become nonconcave, and gradient ascent may become stuck in a suboptimal solution. At the end of this paper, we illustrate our results on an information engine.
Our results complement existing research on work extraction and free energy harvesting in classical and quantum thermodynamics [6,8,9,10,11,12,13,14,15,16]. Such research typically considers how extractable work depends on properties of the physical process—such as its speed of evolution [17,18], constrained control [19,20], or stochastic fluctuations [21,22]—given some fixed initial state. Here, we consider the complementary question of how extractable work depends on the initial state, given a fixed physical process. See also refs. [23,24,25,26,27] for related results concerning the dependence of entropy production on the initial state.

2. Preliminaries

We consider a physical system that harvests free energy from its environment and extracts it as work. The system may be classical or quantum, although for maximum generality, we usually employ quantum mechanical notation. For simplicity, we assume that the system is finite-dimensional, although most results can be extended to the infinite-dimensional case [24]. We will use the notation $S(\rho) = -\mathrm{tr}\{\rho \ln \rho\}$ for the von Neumann entropy and $S(\rho \| \sigma) = \mathrm{tr}\{\rho(\ln \rho - \ln \sigma)\}$ for the quantum relative entropy.
Our analysis will use the relationship between work and free energy for isothermal processes. Consider a process that transforms some initial state and Hamiltonian $\rho, H$ to final state and Hamiltonian $\rho', H'$ while coupled to a heat bath at temperature $T$. According to the Second Law of Thermodynamics, the work that can be extracted during this transformation is bounded by the drop of nonequilibrium free energy [6,8,9,10,11]:
$$W \le F_{H,T}(\rho) - F_{H',T}(\rho'),$$
where non-equilibrium free energy is defined as
$$F_{H,T}(\rho) = \mathrm{tr}\{\rho H\} - T\, S(\rho).$$
Throughout this paper, we choose energy units so that Boltzmann’s constant is $k_B = 1$. We also use the convention that $W > 0$ indicates work extraction while $W < 0$ indicates work investment.
The bound (1) can be achieved in a classical system using a slow (quasistatic) driving protocol that remains close to equilibrium throughout and thus achieves thermodynamic reversibility [8,9,10]. For quantum systems, this bound is achievable by a quasistatic protocol when the two states $\rho$ and $\rho'$ are diagonal in their respective energy bases (as defined by $H$ and $H'$, respectively) [28]. The bound is also achievable if the quasistatic protocol operates on a large number of identical copies of the quantum system, in which case Equation (1) refers to the work per copy. In the most general case, where $\rho$ and/or $\rho'$ are non-diagonal and the protocol operates on a single copy of the system, the achievability of the bound remains an open question in quantum thermodynamics, possibly depending on available catalytic resources [29].
With some rearrangement, the non-equilibrium free energy can also be expressed as
$$F_{H,T}(\rho) = T\, S(\rho \| \rho^{\mathrm{eq}}) + F^{\mathrm{eq}}_{H,T},$$
where $F^{\mathrm{eq}}_{H,T} = F_{H,T}(\rho^{\mathrm{eq}})$ is the equilibrium free energy, which is defined using the Gibbs state $\rho^{\mathrm{eq}} = e^{-H/T}/\mathrm{tr}\{e^{-H/T}\}$. The first term,
$$T\, S(\rho \| \rho^{\mathrm{eq}}) = F_{H,T}(\rho) - F^{\mathrm{eq}}_{H,T},$$
is called the availability. It quantifies the maximum work that can be extracted from the state $\rho$ by bringing it to the equilibrium state $\rho^{\mathrm{eq}}$.
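As a sanity check on these definitions, the identity between the availability and the drop of non-equilibrium free energy, Equation (4), can be verified numerically. The following sketch uses assumed example values (a two-level system with unit gap at temperature 1) and is only an illustration of Equations (2)–(4):

```python
# Illustrative check of Eqs. (2)-(4) for a classical two-level system.
import numpy as np

def shannon(p):
    """Entropy S(p) = -sum p ln p, in nats."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def rel_entropy(p, q):
    """Relative entropy S(p||q) = sum p (ln p - ln q)."""
    m = p > 0
    return np.sum(p[m] * (np.log(p[m]) - np.log(q[m])))

def gibbs(E, T):
    """Gibbs state for energy levels E at temperature T (k_B = 1)."""
    w = np.exp(-E / T)
    return w / w.sum()

def free_energy(p, E, T):
    """Non-equilibrium free energy F_{H,T}(p) = tr{p H} - T S(p)."""
    return p @ E - T * shannon(p)

E = np.array([0.0, 1.0])          # assumed example: gap epsilon = 1
T = 1.0
p = np.array([0.9, 0.1])          # some non-equilibrium state
p_eq = gibbs(E, T)

availability = T * rel_entropy(p, p_eq)                 # left side of Eq. (4)
drop = free_energy(p, E, T) - free_energy(p_eq, E, T)   # right side of Eq. (4)
print(np.isclose(availability, drop))                   # True
```

The check succeeds for any state and temperature, since Equation (3) is an algebraic identity.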

3. Free Energy Harvesting

Suppose that the system has access to an internal work reservoir (e.g., a battery), which is used for state preparation and work extraction. Suppose also that the system has access to two heat baths at temperatures $T_0$ and $T_1$. The system undergoes the following four-stage procedure, which is also illustrated in Figure 1:
  • (A) Preparation: The system begins in an unprepared state $\omega$ with Hamiltonian $H$. It is then driven to the prepared state $\rho$ and Hamiltonian $H_0$ while coupled to the internal work reservoir and the heat bath at temperature $T_0$. Given Equation (1), the work extracted during this transformation is bounded by
    $$W_A \le F_{H,T_0}(\omega) - F_{H_0,T_0}(\rho).$$
  • (B) Interaction/Free energy harvesting: The system is disconnected from the work reservoir. It then undergoes a fixed interaction with the environment, which may contain any number of thermodynamic reservoirs, free energy sources, and external work reservoirs (e.g., the sun). At the end of this stage, the system has Hamiltonian H 1 and state Φ ( ρ ) . Here, Φ is the quantum channel (completely positive and trace-preserving map) that describes the system’s evolution due to the interaction with the environment.
  • (C) Work Extraction: The system is coupled to the work reservoir and the heat bath at temperature $T_1$. It is then driven from state $\Phi(\rho)$ and Hamiltonian $H_1$ to final state $\omega'$ and Hamiltonian $H'$. According to Equation (1), the work that can be extracted during this transformation is bounded by
    $$W_C \le F_{H_1,T_1}(\Phi(\rho)) - F_{H',T_1}(\omega').$$
  • (D) Reset: The system is disconnected from the internal work reservoir and then undergoes another interaction with the environment. As a result of this interaction—which, in some cases, may be a simple relaxation—the system ends in state $\omega$ and Hamiltonian $H$. This completes the cycle, thereby preparing the system for Stage A. In the special case where $\omega' = \omega$ and $H' = H$, the Reset stage is not necessary.
Figure 1. Four-stage protocol used to harvest free energy from the environment. During the Preparation stage, the system is coupled to the internal work reservoir and a heat bath at temperature T 0 . During Interaction, the system harvests free energy from the external environment. During Work Extraction, the system is coupled to the internal work reservoir and a heat bath at temperature T 1 . During Reset, the system is again coupled to the external environment.
As a concrete—though still highly idealized—example of our setup, one might imagine a simple photosynthetic system, such as Bacteriorhodopsin in archaea [20,30]. During Preparation, the organism spends free energy (by hydrolyzing ATP) to synthesize the Bacteriorhodopsin protein from free-floating amino acids. During Interaction, the protein uses solar energy to pump protons across the cellular membrane, thereby increasing the membrane potential. During Work Extraction, the membrane potential is used by ATPase to synthesize ATP. Reset may occur by consumption of any additional ATP and degradation of the Bacteriorhodopsin protein back into amino acids. We note that in this system, Preparation and Work Extraction steps may occur at different temperatures.
We now calculate a bound on the work that can be extracted using this four-stage process. Since the system only interacts with the work reservoir during Stages A and C, the total amount of extracted work is
$$W = W_C + W_A.$$
Equations (5) and (6) then imply the upper bound $W \le G(\rho)$, where we define
$$G(\rho) = \big[F_{H_1,T_1}(\Phi(\rho)) - F_{H_0,T_0}(\rho)\big] - \big[F_{H',T_1}(\omega') - F_{H,T_0}(\omega)\big].$$
Observe that G ( ρ ) is a function of the initial state and that it consists of two terms. The first term is the gain of non-equilibrium free energy during the Interaction with the environment (Stage B). The second term is the loss of non-equilibrium free energy during the Reset (Stage D).
We may also rewrite G ( ρ ) in the following form:
$$G(\rho) = T_1\, S(\Phi(\rho) \| \pi_1) - T_0\, S(\rho \| \pi_0) + G_{\mathrm{base}},$$
where $\pi_0 = e^{-H_0/T_0}/\mathrm{tr}\{e^{-H_0/T_0}\}$ and $\pi_1 = e^{-H_1/T_1}/\mathrm{tr}\{e^{-H_1/T_1}\}$ refer to the Gibbs states corresponding to $(H_0, T_0)$ and $(H_1, T_1)$, respectively.
This expression follows by combining Equations (3) and (8) and rearranging, while also defining the “baseline” term
$$G_{\mathrm{base}} = \big[F_{H,T_0}(\omega) - F^{\mathrm{eq}}_{H_0,T_0}\big] - \big[F_{H',T_1}(\omega') - F^{\mathrm{eq}}_{H_1,T_1}\big].$$
Observe that Equation (9) expresses $G(\rho)$ as the gain of availability, Equation (4), during the transition from $\rho$ to $\Phi(\rho)$, plus a constant offset ($G_{\mathrm{base}}$).
In the following, we will generally be interested in the dependence of $G(\rho)$ on the initial state $\rho$. In Equation (8), this dependence is captured by the first term, the gain of non-equilibrium free energy, since the second term does not depend on $\rho$ (nor on the channel $\Phi$). In Equation (9), this dependence is captured entirely by the gain of availability, since $G_{\mathrm{base}}$ again does not depend on $\rho$ (nor on the channel $\Phi$).
For convenience, we will often refer to G ( ρ ) simply as the “free energy gain”.
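The equivalence of the two expressions (8) and (9) for $G(\rho)$ can be checked numerically in the classical (diagonal) case. All Hamiltonians, temperatures, states, and the channel below are assumed example values, not taken from the article:

```python
import numpy as np

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def rel_ent(p, q):
    m = p > 0
    return np.sum(p[m] * (np.log(p[m]) - np.log(q[m])))

def gibbs(E, T):
    w = np.exp(-E / T)
    return w / w.sum()

def F(p, E, T):
    """Non-equilibrium free energy, Eq. (2)."""
    return p @ E - T * shannon(p)

def F_eq(E, T):
    """Equilibrium free energy -T ln Z."""
    return -T * np.log(np.sum(np.exp(-E / T)))

# Assumed example data for the four-stage protocol.
H  = np.array([0.0, 0.5])   # unprepared Hamiltonian
H0 = np.array([0.0, 1.0])   # prepared Hamiltonian
H1 = np.array([0.0, 2.0])   # post-interaction Hamiltonian
Hp = np.array([0.0, 0.3])   # final Hamiltonian H'
T0, T1 = 1.0, 0.7
Phi = np.array([[0.8, 0.3],           # column-stochastic channel
                [0.2, 0.7]])
omega  = np.array([0.6, 0.4])         # unprepared state (Stage A input)
omegap = np.array([0.5, 0.5])         # final state omega' (Stage C output)
rho    = np.array([0.75, 0.25])       # prepared initial state

pi0, pi1 = gibbs(H0, T0), gibbs(H1, T1)

# Eq. (8): free energy gained in Interaction minus free energy lost in Reset.
G_direct = (F(Phi @ rho, H1, T1) - F(rho, H0, T0)) \
         - (F(omegap, Hp, T1) - F(omega, H, T0))

# Eqs. (9)-(10): gain of availability plus the baseline term.
G_base = (F(omega, H, T0) - F_eq(H0, T0)) - (F(omegap, Hp, T1) - F_eq(H1, T1))
G_info = T1 * rel_ent(Phi @ rho, pi1) - T0 * rel_ent(rho, pi0) + G_base

print(np.isclose(G_direct, G_info))   # True
```

The two expressions agree exactly, since Equation (9) follows from Equation (8) by the algebraic identity (3).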

4. Increasing Free Energy Gain

We now consider the problem of maximizing the free energy gain $G(\rho)$ with respect to the initial state $\rho$. Before proceeding, we note that maximizing free energy gain is not always the same as maximizing extracted work $W$. The two optimization problems are equivalent when the Preparation and Work Extraction stages are thermodynamically reversible, so that the bounds (5) and (6) are saturated. Moreover, because $G(\rho)$ always sets an upper bound on extractable work, maximizing $G(\rho)$ is also relevant when the precise details of Preparation and Work Extraction are unknown, varying, or simply undefined.
To study the problem of optimizing $G(\rho)$, we first consider a few special cases. First, suppose that $\Phi$ is a (generalized) Gibbs-preserving map that transforms the initial Gibbs state $\pi_0$ to the final Gibbs state $\pi_1$, i.e., $\Phi(\pi_0) = \pi_1$. This situation applies if the driving by the environment is sufficiently slow so that the system remains in equilibrium throughout (quasistatic driving), or if the system is allowed to equilibrate at the end of its interaction with the environment. Then, by monotonicity of relative entropy [31],
$$S(\Phi(\rho) \| \pi_1) = S(\Phi(\rho) \| \Phi(\pi_0)) \le S(\rho \| \pi_0).$$
Combining with Equation (9) implies that $G(\rho) \le G_{\mathrm{base}}$ for all $\rho$ whenever $T_1 \le T_0$. That is, if $\Phi$ maps $\pi_0$ to $\pi_1$ and the temperature of Work Extraction is less than or equal to the temperature of Preparation, it is impossible to extract more work than $G_{\mathrm{base}}$ regardless of the initial state. Moreover, since $G(\pi_0) = G_{\mathrm{base}}$, one cannot do better than the naive strategy of setting the initial state to $\pi_0$, i.e., letting the system relax fully to equilibrium for Hamiltonian $H_0$ and temperature $T_0$.
On the other hand, suppose that $\Phi$ is not Gibbs preserving, so $\Phi(\pi_0) \ne \pi_1$. Suppose we still choose the initial state as $\pi_0$. Equation (9) then gives
$$G(\pi_0) = T_1\, S(\Phi(\pi_0) \| \pi_1) + G_{\mathrm{base}} > G_{\mathrm{base}},$$
where the last inequality follows from the positivity of the relative entropy between different states. Thus, if Φ is not Gibbs preserving, there is always at least one initial state for which G is strictly greater than G base . Moreover, generally, G can be increased even further by optimizing the choice of the initial state.
These conditions are formalized by the following statement.
Theorem 1. 
  • (1a) If $\Phi(\pi_0) = \pi_1$, there exists some $\rho$ with $G(\rho) > G_{\mathrm{base}}$ only if $T_1 > T_0$.
  • (1b) If $\Phi(\pi_0) \ne \pi_1$, there always exists some $\rho$ with $G(\rho) > G_{\mathrm{base}}$.
As a special case, consider the situation where the quantum channel is the identity ($\Phi(\rho) = \rho$ for all inputs) and the Hamiltonians $H_0, H_1$ are equal. In that case, $\Phi(\pi_0) = \pi_1$ only if $T_0 = T_1$. Then, Theorem 1 implies that free energy can be gained beyond $G_{\mathrm{base}}$ if and only if $T_0 \ne T_1$. In this special case, the interaction with the environment provides no free energy, so free energy can only be gained using a temperature difference between the baths, as in a heat engine.
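Theorem 1b can be illustrated numerically: for a channel that is not Gibbs preserving, Equation (12) guarantees a strictly positive excess $G(\pi_0) - G_{\mathrm{base}} = T_1 S(\Phi(\pi_0) \| \pi_1)$. The channel and parameters below are assumed example values:

```python
import numpy as np

def rel_ent(p, q):
    m = p > 0
    return np.sum(p[m] * (np.log(p[m]) - np.log(q[m])))

def gibbs(E, T):
    w = np.exp(-E / T)
    return w / w.sum()

# Assumed example: equal temperatures, so Theorem 1a would forbid any excess
# gain if the channel were Gibbs preserving.
T0 = T1 = 1.0
H0 = H1 = np.array([0.0, 1.0])
pi0, pi1 = gibbs(H0, T0), gibbs(H1, T1)
Phi = np.array([[0.95, 0.40],     # column-stochastic, NOT Gibbs preserving
                [0.05, 0.60]])

assert not np.allclose(Phi @ pi0, pi1)   # confirm Phi(pi0) != pi1

# Eq. (12): the excess of G(pi0) over G_base is a relative entropy, hence > 0.
excess = T1 * rel_ent(Phi @ pi0, pi1)
print(excess > 0)   # True: some initial state beats G_base (Theorem 1b)
```

The excess is strictly positive by positivity of relative entropy between distinct states, exactly the argument behind Equation (12).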

5. Dependence on the Initial State

We now consider how the free energy gain depends on the choice of the initial state. Before proceeding, we provide a useful expression for the (one-sided) directional derivative of G . Recall that the directional derivative at state σ toward state ρ is defined as
$$D_{\rho-\sigma}\, G(\sigma) = \lim_{\lambda \to 0^+} \frac{G[\sigma + \lambda(\rho - \sigma)] - G(\sigma)}{\lambda}.$$
In Appendix A, we show that this directional derivative can be expressed as
$$D_{\rho-\sigma}\, G(\sigma) = G(\rho) - G(\sigma) + T_0\, S(\rho \| \sigma) - T_1\, S[\Phi(\rho) \| \Phi(\sigma)].$$
This expression is particularly useful when considering $\sigma$ for which the directional derivative vanishes. This is shown in the following result, which is proved in Appendix B.
Theorem 2. 
Let $\sigma$ be a (local or global) minimum, maximum, or saddle point of $G$. Then, for any $\rho$ with $S(\rho \| \sigma) < \infty$,
$$G(\sigma) - G(\rho) = T_0\, S(\rho \| \sigma) - T_1\, S[\Phi(\rho) \| \Phi(\sigma)].$$
Theorem 2 means that the increase in free energy gain when using initial state $\sigma$ versus $\rho$ has a universal information-theoretic expression. While the left-hand side of Equation (15) contains thermodynamic terms, the right-hand side of this equality consists purely of information-theoretic quantities, that is, relative entropies scaled by the temperatures. The change in the relative entropies can be understood as the loss of distinguishability between $\rho$ and $\sigma$ during the process, and it does not explicitly depend on the energy functions. Indeed, Theorem 2 provides a simple example of a relationship between information-theoretic and physical quantities. Such relationships have been found to be very useful in the resource theory of thermodynamics [12,14,32,33].
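A quick numerical illustration of Theorem 2: locate the maximizer of $G$ for a small classical example and check that Equation (15) holds there. The two-level channel and temperatures below are assumed example values (with $T_0 > T_1$, so the maximizer is interior and unique by Theorems 3 and 4); scipy is used for the one-dimensional optimization:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rel_ent(p, q):
    m = p > 0
    return np.sum(p[m] * (np.log(p[m]) - np.log(q[m])))

def gibbs(E, T):
    w = np.exp(-E / T)
    return w / w.sum()

# Assumed two-level example with T0 > T1.
T0, T1 = 1.0, 0.5
E = np.array([0.0, 1.0])
pi0, pi1 = gibbs(E, T0), gibbs(E, T1)
Phi = np.array([[0.9, 0.2],
                [0.1, 0.8]])

def G(p0):
    """G(p) up to the constant G_base, Eq. (9), for p = (p0, 1 - p0)."""
    p = np.array([p0, 1.0 - p0])
    return T1 * rel_ent(Phi @ p, pi1) - T0 * rel_ent(p, pi0)

# Find the maximizer sigma (here called q) by bounded scalar optimization.
res = minimize_scalar(lambda p0: -G(p0), bounds=(1e-9, 1 - 1e-9),
                      method="bounded", options={"xatol": 1e-12})
q = np.array([res.x, 1.0 - res.x])

# Theorem 2 / Eq. (15): G(q) - G(p) = T0 S(p||q) - T1 S(Phi p||Phi q).
p = np.array([0.3, 0.7])
lhs = G(q[0]) - G(p[0])
rhs = T0 * rel_ent(p, q) - T1 * rel_ent(Phi @ p, Phi @ q)
print(np.isclose(lhs, rhs, atol=1e-6))   # True at the critical point
```

The identity holds exactly at the true critical point; the small tolerance only absorbs the finite precision of the numerical optimizer.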

6. Optimal Initial State

We now consider the problem of optimizing the initial state so as to maximize free energy gain. Consider an initial state $\sigma$ that is a local or global maximizer of $G$. Then, according to Theorem 2, for any other initial state $\rho$ with $S(\rho \| \sigma) < \infty$,
$$G(\sigma) - G(\rho) = T_0\, S(\rho \| \sigma) - T_1\, S[\Phi(\rho) \| \Phi(\sigma)].$$
Equation (16) provides a simple formula for computing the free energy gain that is lost when the system is prepared in the “wrong” initial state: it is given by comparing the relative entropy between the “wrong” and “right” (i.e., optimal) states at the beginning of the process with the same relative entropy at the end of the process, each relative entropy scaled by the corresponding temperature ($T_0$ or $T_1$).
Next, we consider the difficulty of finding the optimal state σ . As we now show, G is a concave function of the initial state if the temperature of Preparation is no cooler than the temperature of Work Extraction. The proof is found in Appendix B.
Theorem 3. 
$G(\rho)$ is a concave function of $\rho$ if $T_0 \ge T_1$.
Any local maximizer of a concave function must also be a global maximizer. Thus, Theorem 3 implies that as long as $T_0 \ge T_1$, the global optimization of free energy harvesting can be accomplished by a simple procedure, e.g., gradient ascent in the space of density matrices [25]. Consider an adaptive system that undergoes the same free energy harvesting process many times. Each time the system goes through the free energy harvesting cycle, it has the opportunity to vary its initial state $\rho$ to try to increase the free energy gain. The concavity of the free energy gain implies that if the adaptive process is able to alter the initial state to improve the amount of free energy harvested in each round, then the adaptive system is headed for the global optimum, and it will not become stuck in a local optimum.
A population of photosynthetic bacteria, for example, exhibits genetic variation in the individuals’ molecular mechanisms for performing the Preparation, Interaction, and Work Extraction stages, all of which impact the viability of the individual organisms and their ability to reproduce. In general, we expect that the more efficient an individual bacterium is at harvesting free energy, the more viable it will be, resulting in its offspring forming a larger fraction of the population in subsequent generations. Focusing only on the free energy harvesting stage, we see that genetic variations which increase the amount of free energy harvested—e.g., a small change in the structure of a photo-harvesting chromophore which provides greater overlap with the absorptive spectrum of the chromophore and ambient light conditions—will guide the population as a whole to adapt its composition to become more efficient at energy harvesting.
The concavity of free energy gain as a function of the initial state of the bacteria implies that if the free energy harvesting is suboptimal, then there is always a nearby initial state that improves the free energy gain. The only way for the adaptive process to become stuck in a local optimum is if the genetic variation in the population is unable to explore fully the space of initial probabilistic states.
If Work Extraction occurs at a warmer temperature than Preparation, however, in general, there is no guarantee that G is concave. In this case, finding the global optimum may be a much harder problem, and gradient ascent may become trapped in suboptimal local maxima.
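The two regimes can be seen in a small classical sketch: with Work Extraction hotter than Preparation ($T_1 > T_0$), even a two-level system can exhibit several local maxima of $G$, so gradient ascent started in the wrong basin stalls at a suboptimal state. The channel and parameters below are assumed example values:

```python
import numpy as np

def rel_ent(p, q):
    m = p > 0
    return np.sum(p[m] * (np.log(p[m]) - np.log(q[m])))

def gibbs(E, T):
    w = np.exp(-E / T)
    return w / w.sum()

# Assumed example with Work Extraction hotter than Preparation (T1 > T0),
# where Theorem 3 no longer guarantees concavity of G.
E = np.array([0.0, 1.0])
T0, T1 = 1.0, 3.0
pi0, pi1 = gibbs(E, T0), gibbs(E, T1)
Phi = np.array([[0.785, 0.146],       # column-stochastic example channel
                [0.215, 0.854]])

def G(p0):
    p = np.array([p0, 1.0 - p0])
    return T1 * rel_ent(Phi @ p, pi1) - T0 * rel_ent(p, pi0)

# Grid scan of G over the interior of the simplex; count local maxima.
grid = np.linspace(0.001, 0.999, 999)
vals = np.array([G(x) for x in grid])
interior_maxima = (vals[1:-1] > vals[:-2]) & (vals[1:-1] > vals[2:])
print(int(interior_maxima.sum()))   # 2 for these parameter values
```

With two basins of attraction, a local ascent rule converges to whichever maximum is nearest, which may be the lower one.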
Finally, we note that Equation (16) only holds for those $\rho$ that obey $S(\rho \| \sigma) < \infty$. This condition is equivalent to the requirement that the support of $\rho$ falls within the support of $\sigma$. For this reason, the applicability of Equation (16) is most general in those cases where the optimizer $\sigma$ has full support. In our final result, we provide some simple sufficient conditions for $\sigma$ to have full support. The proof is found in Appendix B.
Theorem 4. 
Any (local or global) maximizer $\sigma$ of $G$ has full support if $T_0 > 0$ and $\Phi(\rho)$ has full support for all $\rho$, or if $T_0 > T_1$.

7. Example

We illustrate our results using a simple example of a two-level system. The system can be considered as a model of an “information engine” which gains free energy by interacting with a heat bath and a low-entropy environment [34,35].
For simplicity, we begin by focusing on the classical case, where all Hamiltonians, states, and channels are diagonal in the same basis. We use classical notation $p$ (instead of $\rho$) to indicate the actual initial probability distribution of the system, $q$ (instead of $\sigma$) to indicate the optimal initial distribution, and $P$ (instead of $\Phi$) to indicate the classical transition matrix (conditional probability of outputs given inputs). We also use the notation $D(p \| q) = \sum_x p(x) \ln[p(x)/q(x)]$ to indicate the classical relative entropy (also known as Kullback–Leibler divergence) and $H(p) = -\sum_x p(x) \ln p(x)$ for the Shannon entropy.
The engine is modeled as an overdamped two-level system $X \in \{0,1\}$ with energy gap $\epsilon \ge 0$, with energy function $H(0) = 0$, $H(1) = \epsilon$. The engine is coupled to the environment, another two-level system $Y \in \{0,1\}$ with a uniform energy function, $H_{\mathrm{env}}(0) = H_{\mathrm{env}}(1) = 0$. The engine and environment are weakly coupled, so their joint energy function can be decomposed as $H_{\mathrm{tot}}(x,y) = H(x) + H_{\mathrm{env}}(y)$. Also, initially at time $t = 0$, the engine and the environment are statistically independent: $p_{\mathrm{tot}}(x,y) = p(x)\, p_{\mathrm{env}}(y)$. Then, over the time interval $t \in [0, \tau]$, the two systems relax freely while connected to a heat bath at temperature $T = 1$. The environment and the engine have coupled transitions: the $0 \to 1$ transition in the engine occurs only when the environment simultaneously undergoes a $0 \to 1$ transition, and vice versa for the $1 \to 0$ transition. No transitions occur in/out of microstates where the engine and environment occupy different levels, $(x,y) \in \{(0,1), (1,0)\}$.
Suppose the engine is used to extract work using the four-stage protocol shown in Figure 1. We assume that Stage A (Preparation) starts and ends with the same Hamiltonian, equal to the engine Hamiltonian defined above ($H = H_0$), and that the unprepared state is the Gibbs state $\pi_0$ at some temperature $T_0$ ($\omega = \pi_0$). During Stage B (Interaction), the engine evolves according to a transition matrix $P$, which is defined below. Finally, we assume Stage C (Work Extraction) starts and ends with the same Hamiltonian ($H_1 = H'$, again equal to the engine Hamiltonian) and that the final state is the Gibbs state $\pi_1$ at some temperature $T_1$ ($\omega' = \pi_1$).
The net amount of extracted work is bounded, $W \le G(p)$, by the gain of availability:
$$G(p) = T_1\, D(P p \| \pi_1) - T_0\, D(p \| \pi_0).$$
This result follows from Equation (9) and the fact that G base = 0 (given our assumptions). At the same time, our results imply that G can be expressed as
$$G(p) = G(q) - \big(T_0\, D(p \| q) - T_1\, D(P p \| P q)\big),$$
where $q \in \arg\max_p G(p)$ is a maximizer of $G$, as given in Equation (16), which is valid whenever $q$ has full support. A simple sufficient condition for $q$ to have full support is for the environment distribution $p_{\mathrm{env}}$ to have full support; this follows from Theorem 4 and because then $P p$ has full support for all $p$ (see Equation (21) below). The function $G$ and the optimizer $q$ will depend on the parameters of the problem: the energy gap $\epsilon$, the initial environment distribution $p_{\mathrm{env}}$, and the temperatures of Preparation ($T_0$), Interaction ($T$), and Work Extraction ($T_1$).
To construct P, we assume that the engine and environment undergo continuous-time Markovian dynamics, which are represented by the rate matrix
$$R = \begin{pmatrix} -1 & 0 & 0 & e^{\epsilon} \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & -e^{\epsilon} \end{pmatrix},$$
where $R_{ij}$ is the transition rate from state $j$ to state $i$, with the four states ordered as $(x,y) = (0,0), (0,1), (1,0), (1,1)$. $R$ obeys local detailed balance for the energy function $H_{\mathrm{tot}}$ and interaction temperature $T = 1$. We consider the limit of a long relaxation, corresponding to the following joint transition matrix:
$$P_{\mathrm{tot}} = \lim_{\tau \to \infty} e^{\tau R} = \begin{pmatrix} \frac{1}{1+e^{-\epsilon}} & 0 & 0 & \frac{1}{1+e^{-\epsilon}} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ \frac{e^{-\epsilon}}{1+e^{-\epsilon}} & 0 & 0 & \frac{e^{-\epsilon}}{1+e^{-\epsilon}} \end{pmatrix}.$$
The transition matrix of the engine subsystem X is computed by marginalizing
$$P(x' | x) = \sum_{y,y'} P_{\mathrm{tot}}(x', y' | x, y)\, p_{\mathrm{tot}}(y | x) = \sum_{y,y'} P_{\mathrm{tot}}(x', y' | x, y)\, p_{\mathrm{env}}(y),$$
and it can be written explicitly in matrix notation as
$$P = \frac{1}{1+e^{\epsilon}} \begin{pmatrix} 1 + e^{\epsilon} - p_{\mathrm{env}}(0) & e^{\epsilon}\,(1 - p_{\mathrm{env}}(0)) \\ p_{\mathrm{env}}(0) & 1 + e^{\epsilon}\, p_{\mathrm{env}}(0) \end{pmatrix}.$$
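The closed-form expression for $P$ can be cross-checked by exponentiating the rate matrix for a long time and marginalizing over the environment, as in the construction above. The values of $\epsilon$ and $p_{\mathrm{env}}$ below are assumed examples; scipy provides the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

eps = 1.0
p_env = np.array([0.7, 0.3])   # assumed example environment distribution

# Joint states ordered (x, y) = (0,0), (0,1), (1,0), (1,1); R[i, j] is the
# rate from state j to state i. Only (0,0) <-> (1,1) transitions are allowed.
R = np.array([[-1.0, 0.0, 0.0,  np.exp(eps)],
              [ 0.0, 0.0, 0.0,  0.0        ],
              [ 0.0, 0.0, 0.0,  0.0        ],
              [ 1.0, 0.0, 0.0, -np.exp(eps)]])

P_tot = expm(100.0 * R)   # numerically converged long-time limit of exp(tau R)

# Marginalize: P[x', x] = sum_{y, y'} P_tot[(x', y'), (x, y)] * p_env[y].
idx = lambda x, y: 2 * x + y
P = np.zeros((2, 2))
for x in (0, 1):
    for y in (0, 1):
        for xp in (0, 1):
            for yp in (0, 1):
                P[xp, x] += P_tot[idx(xp, yp), idx(x, y)] * p_env[y]

# Closed-form P from the text.
z = 1.0 + np.exp(eps)
P_exact = np.array([[1 + np.exp(eps) - p_env[0], np.exp(eps) * (1 - p_env[0])],
                    [p_env[0],                   1 + np.exp(eps) * p_env[0]]]) / z
print(np.allclose(P, P_exact))   # True
```

The agreement also confirms that $P$ is column-stochastic for any $p_{\mathrm{env}}$.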
We now illustrate our results with some numerical experiments. In Figure 2, we show the gain of availability $G(p)$ as a function of the engine’s initial distribution $p$, which is computed using Equation (17). Since the engine only has two microstates, $p$ is fixed by the probability $p(0)$ of microstate $X = 0$, as shown on the horizontal axis. The different subplots correspond to different initial distributions of the environment $p_{\mathrm{env}}$. The other parameters are set to $\epsilon = 1$ and $T_0 = T_1 = T = 1$. We show the location of the optimal initial distribution $q$ found by numerical optimization, and the value of $G(p)$ computed using our information-theoretic expression (18). We also show the location of the initial equilibrium distribution $\pi_0 = \left(\frac{1}{1+e^{-\epsilon}}, \frac{e^{-\epsilon}}{1+e^{-\epsilon}}\right)$.
We comment on some interesting aspects of the results in Figure 2. First, $G$ is a concave function of the initial distribution, in accordance with Theorem 3. We also verify our main result, showing that the thermodynamic (17) and information-theoretic (18) expressions for $G$ are equivalent. Note that the maximum availability gain is non-monotonic in $p_{\mathrm{env}}(0)$. In Figure 2b, the environment starts from the maximum entropy state $p_{\mathrm{env}} = (0.5, 0.5)$; in this case, the optimal initial distribution is the equilibrium one ($q = \pi_0$), and it is not possible to harvest strictly positive availability ($G(q) = 0$). On the other hand, strictly positive availability can be harvested when the environment is biased to microstate $Y = 0$, see Figure 2a, or $Y = 1$, see Figure 2c,d, but the optimal strategy differs in these two cases. When the environment is biased to $Y = 1$ ($p_{\mathrm{env}}(0) < 0.5$), the optimal initial distribution is biased to $X = 0$ relative to equilibrium ($q(0) > \pi_0(0)$). Conversely, when the environment is biased to $Y = 0$ ($p_{\mathrm{env}}(0) > 0.5$), the optimal initial distribution is biased toward $X = 1$ relative to equilibrium ($q(0) < \pi_0(0)$). This reflects the balance between two effects. On one hand, there is an advantage to biasing the engine’s initial distribution toward $X = 0$, because the transition $0 \to 1$ harvests $\epsilon$ energy from the heat bath. On the other hand, availability can also be harvested by decreasing the Shannon entropy of the engine, i.e., by increasing
$$\Delta H := T_0\, H(p) - T_1\, H(P p).$$
This quantity is shown as the dashed red line in Figure 2. It can be seen that this second effect shifts the optimal distribution toward X = 1 relative to equilibrium, and it becomes stronger when the environment is more concentrated on state Y = 1 .
Figure 2d shows that q has full support even though p env = ( 1 , 0 ) does not have full support (so P p does not have full support for all p; see Equation (21)). This demonstrates that Theorem 4 is only a sufficient, but not necessary, condition for the optimizer q to have full support.
In Figure 3, we consider the same system but now setting the temperature of Work Extraction higher than that of Preparation and Interaction, $T_0 = T = 1$, $T_1 = 3$. As above, different subplots correspond to different initial distributions of the environment. To emphasize interesting features, we make several changes with respect to Figure 2: we explore different initial environment distributions, the scales of the y-axes differ, and for simplicity, we do not show the change of Shannon entropy $\Delta H$.
When $T_1 > T_0$, Theorem 3 no longer applies and the function $G(p)$ may become non-concave, as seen in Figure 3b–d. Moreover, Figure 3b,c show that $G$ may even have multiple local maxima. Note that in these two plots, the shape of $G$ changes and the identity of the higher local maximum switches, even though $p_{\mathrm{env}}$ undergoes a very small change. In Figure 3a–c, we verify that the thermodynamic (17) and information-theoretic (18) expressions for $G$ are equivalent. For systems with several local maxima/minima, we verified that this equivalence holds regardless of which critical point is chosen as $q$. Note that $T_1 > T_0$ is necessary but not sufficient for non-concavity, since $G$ remains concave in Figure 3a. Also, in Figure 3d, $p_{\mathrm{env}}$ does not have full support and Theorem 4 no longer applies. In this case, the optimal initial distribution $q = (0, 1)$ does not have full support, so our information-theoretic expression no longer applies.
In our last numerical experiment, we consider this system in the quantum regime. We define a quantum channel Φ that dephases any input state ρ in the reference basis {|0⟩, |1⟩} and then applies the transition matrix P in this basis:
\Phi(\rho) = \sum_{x, x' \in \{0,1\}} P_{x x'} \, |x\rangle\langle x| \, \langle x' | \rho | x' \rangle .
The Hamiltonians used for Preparation and Work Extraction are allowed to be coherent (non-diagonal) with respect to the reference basis. Specifically, we consider the same Hamiltonian with energy gap ϵ but rotated by angle θ with respect to the reference basis:
H_0 = H_1 = H = \epsilon \, U_\theta |1\rangle\langle 1| U_\theta^\dagger , \qquad U_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} .
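As a concrete sketch (assuming NumPy, and with a hypothetical transition matrix P in place of the environment-induced one), the dephasing channel Φ and the rotated Hamiltonian can be constructed as follows:

```python
import numpy as np

# Reference basis vectors |0> and |1>.
ket = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# Hypothetical column-stochastic matrix standing in for the engine's P.
P = np.array([[0.9, 0.3],
              [0.1, 0.7]])

def channel(rho):
    """Phi(rho): dephase rho in the reference basis, then apply P to the populations."""
    out = np.zeros((2, 2), dtype=complex)
    for x in range(2):
        for xp in range(2):
            out += P[x, xp] * np.outer(ket[x], ket[x]) * rho[xp, xp]
    return out

def hamiltonian(theta, eps=1.0):
    """H0 = H1 = H = eps * U_theta |1><1| U_theta^dagger (U_theta is real, so dagger = transpose)."""
    U = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return eps * U @ np.outer(ket[1], ket[1]) @ U.T

# The channel is trace-preserving and its output is diagonal in the reference basis.
rho = np.array([[0.6, 0.2 + 0.1j], [0.2 - 0.1j, 0.4]])
out = channel(rho)
print(np.trace(out).real, np.allclose(out, np.diag(np.diag(out))))
```

Note that the off-diagonal elements of ρ never enter the output, which is exactly the dephasing described in the text.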
The net amount of extracted work is bounded by the gain of availability:
G(\rho) = T_1 S(\Phi(\rho) \| \pi_1) - T_0 S(\rho \| \pi_0) ,
where π 0 and π 1 indicate Gibbs states for ( H 0 , T 0 ) and ( H 1 , T 1 ) , respectively. As above, our results imply that G can also be expressed as
G(\rho) = G(\sigma) - T_0 S(\rho \| \sigma) + T_1 S(\Phi(\rho) \| \Phi(\sigma)) ,
where σ ∈ arg max_ρ G(ρ) is a maximizer of G. Equations (25) and (26) are simply the quantum versions of Equations (17) and (18).
Figure 4 shows the results for four values of θ, which controls the amount of coherence. The other parameters are chosen as in Figure 3b (T_0 = T = 1, T_1 = 3, ϵ = 1, p_env = (0.8, 0.2)). For each θ, we use Equation (25) to calculate the value of G for two sets of states ρ, always indexed by the (lower-energy) eigenvalue λ_0. First, solid lines indicate G for states ρ diagonal in the reference basis, ρ = λ_0 |0⟩⟨0| + (1 − λ_0) |1⟩⟨1|. Second, dashed lines indicate G for states ρ* diagonal in the same basis as the optimal initial state σ. Vertical lines indicate the location of σ in this optimal basis. We also plot the values of G calculated using our information-theoretic expression, Equation (26), verifying that they match those calculated using Equation (25) for both sets of states.
There is no coherence in Figure 4a, so we effectively recover the classical case shown in Figure 3b. For θ > 0, Figure 4b–d demonstrate that the availability gain can be increased by preparing the initial state in the correct basis. Moreover, we verified that the optimal state σ is diagonal in neither the reference basis nor the basis of the rotated Hamiltonian H_0 = H_1. The optimal basis arises from the balance of two effects: the cost of preparation (which favors σ being diagonal in the same basis as H) versus the free energy dissipated when σ is dephased by Φ (which favors σ being diagonal in the reference basis). Under the optimal strategy, the engine gains availability not only by increasing energy or decreasing entropy but also by harvesting coherence with respect to the basis of H_0 = H_1.
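As a sanity check on these expressions, the following sketch (again assuming NumPy and a hypothetical transition matrix P) evaluates Equation (25) using the quantum relative entropy and confirms that, at θ = 0 and for states diagonal in the reference basis, it reduces to the classical expression of Equation (17). Full support is assumed so that the matrix logarithms exist.

```python
import numpy as np

T0, T1, eps, theta = 1.0, 3.0, 1.0, 0.0   # theta = 0: the coherence-free case

# Hypothetical transition matrix acting on populations.
P = np.array([[0.9, 0.3],
              [0.1, 0.7]])

def mlog(A):
    """Matrix logarithm of a positive-definite Hermitian matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.conj().T

def S(rho, sigma):
    """Quantum relative entropy S(rho||sigma); both states assumed full support."""
    return np.trace(rho @ (mlog(rho) - mlog(sigma))).real

def channel(rho):
    """Dephase in the reference basis, then apply P to the populations."""
    return np.diag(P @ np.real(np.diag(rho)))

# H0 = H1 = H: gap-eps Hamiltonian rotated by theta.
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
H = eps * U @ np.diag([0.0, 1.0]) @ U.T

def gibbs(T):
    w, V = np.linalg.eigh(H)
    p = np.exp(-w / T)
    p /= p.sum()
    return (V * p) @ V.conj().T

pi0, pi1 = gibbs(T0), gibbs(T1)

def G(rho):
    """Availability gain, Equation (25): T1 S(Phi(rho)||pi1) - T0 S(rho||pi0)."""
    return T1 * S(channel(rho), pi1) - T0 * S(rho, pi0)

# Compare against the classical formula for a diagonal state.
lam = 0.3
rho = np.diag([lam, 1 - lam]).astype(complex)
def crel(a, b):
    return float(sum(x * np.log(x / y) for x, y in zip(a, b) if x > 0))
pvec = np.array([lam, 1 - lam])
classical = (T1 * crel(P @ pvec, np.diag(pi1).real)
             - T0 * crel(pvec, np.diag(pi0).real))
print(abs(G(rho) - classical))  # near zero at theta = 0
```

For θ > 0, the same G can be evaluated on states diagonal in any rotated basis, which is how the solid and dashed curves in Figure 4 can be reproduced.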

8. Discussion

In this paper, we investigated how the gain of free energy depends on the initial state for a broad class of classical and quantum processes. We first showed that the initial state can be varied to increase the gain of free energy if and only if the process fails to map equilibrium distributions to equilibrium distributions. We also derived information-theoretic formulae for the gain of free energy as a function of the initial state, and we then used these to quantify the difference in free energy gain between the optimal initial state and any suboptimal initial state. This difference was shown to equal the drop in relative entropy between the initial and final states, scaled by the corresponding temperatures.
For macroscopic systems, the deficit in free energy harvested by a suboptimal initial state may itself be a macroscopic quantity. Moreover, for a living system that requires free energy to survive and reproduce, there is considerable evolutionary pressure to increase the amount of free energy harvested. The difficulty of maximizing this objective depends on whether it is concave or not. We derived conditions for the free energy gain to be a concave function of the initial state. When these conditions hold, the objective can be maximized using a simple strategy like gradient ascent; for example, a species where each generation does slightly better at harvesting free energy will eventually approach the global maximum. In cases where the conditions do not hold, the free energy gain may become nonconcave. In such cases, the maximization of the objective becomes qualitatively more difficult, and simple strategies like gradient ascent may become trapped in local optima.
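As a minimal illustration of this point (a sketch under the same assumptions as before: a hypothetical two-state transition matrix, energy gap 1, and the concave regime T_1 ≤ T_0), plain gradient ascent with a finite-difference gradient approaches the global maximum of G:

```python
import math

def rel_ent(a, b):
    return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)

T0, T1, eps = 1.0, 1.0, 1.0           # T1 <= T0: G is concave (Theorem 3)
z = 1.0 + math.exp(-eps)
pi = (1.0 / z, math.exp(-eps) / z)    # shared Gibbs distribution since T0 = T1

P = ((0.9, 0.3),                      # hypothetical transition matrix
     (0.1, 0.7))

def G(x):
    """Availability gain as a function of the initial probability x = p(0)."""
    p = (x, 1 - x)
    Pp = (P[0][0] * p[0] + P[0][1] * p[1],
          P[1][0] * p[0] + P[1][1] * p[1])
    return T1 * rel_ent(Pp, pi) - T0 * rel_ent(p, pi)

# Gradient ascent with a small fixed step: concavity guarantees the iterates
# approach the global maximum rather than getting stuck at a local one.
x, step, h = 0.5, 0.05, 1e-7
for _ in range(5000):
    grad = (G(x + h) - G(x - h)) / (2 * h)
    x = min(max(x + step * grad, 1e-9), 1 - 1e-9)
print(x, G(x))  # x converges to a stationary point of G
```

In the non-concave regime T_1 > T_0, the same loop may instead converge to whichever local maximum lies in the basin of the starting point, which is the qualitative difficulty described above.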
As mentioned in Section 4 above, optimizing the free energy gain is not necessarily the same as optimizing the extracted work W, although the two problems become equivalent when the Preparation and Work Extraction stages are thermodynamically reversible. When these stages are not reversible, the state that maximizes extracted work (for example, as found by varying some control parameters of the Preparation stage, all else held fixed) is not necessarily the same as the state that maximizes free energy gain. An interesting direction for future research would be to consider the optimization of extracted work under realistic constraints on the Preparation and/or Work Extraction protocols, e.g., constraints on the preparable initial states or finite-time constraints.

Author Contributions

Conceptualization, D.W.; Formal analysis, A.K., I.M., Z.-W.L., P.S., O.S., D.W. and S.L.; Investigation, C.G. and K.T.; Methodology, A.K., I.M., P.S., K.T., D.W. and S.L.; Validation, C.G., Z.-W.L. and O.S.; Writing—original draft, S.L.; Writing—review & editing, A.K. and S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by NSF under an INSPIRE program. S.L. was supported by ARO and AFOSR. AK was partly supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 101068029, and by Grant 62828 from the John Templeton Foundation. This paper was also made possible through the support of Grant No. TWCF0079/AB47 from the Templeton World Charity Foundation, Grant No. FQXi-RFP-1622 from the FQXi foundation, and Grant No. CHE-1648973 from the U.S. National Science Foundation. The opinions expressed in this paper are those of the authors and do not necessarily reflect the view of Templeton World Charity Foundation or the John Templeton Foundation.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

A.K. and D.W. would like to thank the Santa Fe Institute for helping to support this research.

Conflicts of Interest

O.S. states that this research was conducted at MIT Physics, prior to their current position at IBM Quantum. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Appendix A. Derivation of Directional Derivative, Equation (14)

Using Equations (9) and (13), we may write the directional derivative of G as
D_{\rho-\sigma} G(\sigma) = T_1 \, D_{\Phi(\rho)-\Phi(\sigma)} S(\Phi(\sigma) \| \pi_1) - T_0 \, D_{\rho-\sigma} S(\sigma \| \pi_0) , \tag{A1}
where the directional derivatives of the relative entropy terms are with respect to the first argument. The directional derivative of the relative entropy S ( σ π 0 ) is [36] (Lemma 1)
D_{\rho-\sigma} S(\sigma \| \pi_0) = \operatorname{tr}\{ (\rho - \sigma)(\ln \sigma - \ln \pi_0) \} .
Rearranging the right-hand side gives
D_{\rho-\sigma} S(\sigma \| \pi_0) = S(\rho \| \pi_0) - S(\rho \| \sigma) - S(\sigma \| \pi_0) .
Similarly, the directional derivative of the relative entropy S ( · π 1 ) obeys
D_{\Phi(\rho)-\Phi(\sigma)} S(\Phi(\sigma) \| \pi_1) = S[\Phi(\rho) \| \pi_1] - S[\Phi(\rho) \| \Phi(\sigma)] - S[\Phi(\sigma) \| \pi_1] .
Plugging these expressions into Equation (A1) gives
D_{\rho-\sigma} G(\sigma) = T_1 \big( S[\Phi(\rho) \| \pi_1] - S[\Phi(\rho) \| \Phi(\sigma)] - S[\Phi(\sigma) \| \pi_1] \big) - T_0 \big( S(\rho \| \pi_0) - S(\rho \| \sigma) - S(\sigma \| \pi_0) \big) .
Finally, we plug in the definitions of G ( σ ) and G ( ρ ) from Equation (9) and rearrange to give Equation (14).
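These identities can be spot-checked numerically. The sketch below works in the classical (commuting) special case, where the relative entropies reduce to sums over probabilities; it compares the closed form, the trace form, and a finite-difference directional derivative.

```python
import math
import random

random.seed(0)

def rand_dist(n):
    """Random full-support probability distribution on n outcomes."""
    w = [random.random() + 0.1 for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

def D(a, b):
    """Classical relative entropy; all arguments have full support here."""
    return sum(x * math.log(x / y) for x, y in zip(a, b))

n = 4
rho, sigma, pi0 = rand_dist(n), rand_dist(n), rand_dist(n)

# Closed form: D_{rho-sigma} S(sigma||pi0) = S(rho||pi0) - S(rho||sigma) - S(sigma||pi0).
closed = D(rho, pi0) - D(rho, sigma) - D(sigma, pi0)

# Trace form: tr{(rho - sigma)(ln sigma - ln pi0)}.
trace_form = sum((r - s) * (math.log(s) - math.log(p))
                 for r, s, p in zip(rho, sigma, pi0))

# Finite-difference derivative of lambda -> S(sigma + lambda (rho - sigma) || pi0) at 0.
h = 1e-6
def mix(lam):
    return [s + lam * (r - s) for r, s in zip(rho, sigma)]
fd = (D(mix(h), pi0) - D(mix(-h), pi0)) / (2 * h)

print(closed, trace_form, fd)  # all three agree
```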

Appendix B. Proofs

Proof of Theorem 2. 
If S(ρ‖σ) < ∞, then the support of ρ falls within the support of σ, which in turn implies that σ − αρ is positive semidefinite for some α > 0 [37] (p. 15). Therefore, it is possible to move along the line σ + λ(ρ − σ) both toward ρ (λ > 0) and away from ρ (λ < 0). If σ is a minimum (or maximum), the directional derivative D_{ρ−σ} G(σ) must vanish, since otherwise, G could be decreased (or increased) by a small perturbation toward or away from ρ. On the other hand, if σ is a saddle point, the directional derivative vanishes by definition. The result follows by plugging D_{ρ−σ} G(σ) = 0 into Equation (14) and rearranging. □
Proof of Theorem 3. 
For any pair of states ρ and σ, let σ_λ := (1 − λ)σ + λρ refer to the convex mixture with weight λ ∈ [0, 1]. Next, we define the following quantity:
\chi = (1 - \lambda) G(\sigma) + \lambda G(\rho) - G(\sigma_\lambda) ,
which measures the degree of convexity of G over the states {σ_λ : λ ∈ [0, 1]}. It is positive when G is convex along this segment, negative when concave, and zero when linear. Using Equation (9), we may express χ as
\chi = T_1 \big[ (1-\lambda) S(\Phi(\sigma) \| \pi_1) + \lambda S(\Phi(\rho) \| \pi_1) - S(\Phi(\sigma_\lambda) \| \pi_1) \big] - T_0 \big[ (1-\lambda) S(\sigma \| \pi_0) + \lambda S(\rho \| \pi_0) - S(\sigma_\lambda \| \pi_0) \big] . \tag{A2}
The bracketed term in the second line can be rearranged as
(1-\lambda) S(\sigma \| \pi_0) + \lambda S(\rho \| \pi_0) - S(\sigma_\lambda \| \pi_0) = (1-\lambda) S(\sigma \| \sigma_\lambda) + \lambda S(\rho \| \sigma_\lambda) .
In a similar way, the bracketed term in the first line can be rearranged as
(1-\lambda) S(\Phi(\sigma) \| \Phi(\sigma_\lambda)) + \lambda S(\Phi(\rho) \| \Phi(\sigma_\lambda)) .
Plugging back into Equation (A2) gives
\begin{aligned} \chi &= (1-\lambda) \big[ T_1 S(\Phi(\sigma) \| \Phi(\sigma_\lambda)) - T_0 S(\sigma \| \sigma_\lambda) \big] + \lambda \big[ T_1 S(\Phi(\rho) \| \Phi(\sigma_\lambda)) - T_0 S(\rho \| \sigma_\lambda) \big] \\ &\le (1-\lambda)(T_1 - T_0) S(\sigma \| \sigma_\lambda) + \lambda (T_1 - T_0) S(\rho \| \sigma_\lambda) \le 0 . \end{aligned}
The first inequality uses the monotonicity of relative entropy [31], and the second inequality uses the assumption that T_1 ≤ T_0. Since our derivation holds for all ρ, σ, and λ, the function G is concave everywhere. □
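The rearrangement identity and the final bound on χ can be spot-checked numerically in the classical case, with a column-stochastic matrix standing in for Φ and T_1 ≤ T_0 (a sketch; the matrix and distributions are arbitrary choices):

```python
import math
import random

random.seed(1)

def rand_dist(n):
    """Random full-support probability distribution on n outcomes."""
    w = [random.random() + 0.1 for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

def D(a, b):
    return sum(x * math.log(x / y) for x, y in zip(a, b))

def apply(M, p):
    """Apply a column-stochastic matrix to a distribution."""
    return [sum(M[i][j] * p[j] for j in range(len(p))) for i in range(len(M))]

n, lam = 3, 0.35
rho, sigma, pi = rand_dist(n), rand_dist(n), rand_dist(n)
mix = [(1 - lam) * s + lam * r for r, s in zip(rho, sigma)]  # sigma_lambda

# Rearrangement: (1-l) D(sigma||pi) + l D(rho||pi) - D(mix||pi)
#              = (1-l) D(sigma||mix) + l D(rho||mix).
lhs = (1 - lam) * D(sigma, pi) + lam * D(rho, pi) - D(mix, pi)
rhs = (1 - lam) * D(sigma, mix) + lam * D(rho, mix)

# Final bound: chi <= 0 when T1 <= T0 (data processing under a stochastic map).
T0, T1 = 2.0, 1.0
M = [[0.8, 0.4, 0.2],
     [0.1, 0.5, 0.3],
     [0.1, 0.1, 0.5]]   # columns sum to 1
chi = ((1 - lam) * (T1 * D(apply(M, sigma), apply(M, mix)) - T0 * D(sigma, mix))
       + lam * (T1 * D(apply(M, rho), apply(M, mix)) - T0 * D(rho, mix)))
print(abs(lhs - rhs), chi)  # identity holds; chi is non-positive
```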
Proof of Theorem 4. 
Suppose that σ is a maximizer that does not have full support, and let ρ be any state with full support. If Φ(·) has full support for all inputs, then S[Φ(ρ)‖Φ(σ)] < ∞. Then, observe that the directional derivative diverges, D_{ρ−σ} G(σ) = ∞, since T_0 S(ρ‖σ) = ∞ while all other terms in Equation (14) are finite. The strict positivity (in fact, divergence) of the directional derivative contradicts the assumption that σ is a maximizer. Next, consider the case when T_0 > T_1. Then,
T_0 S(\rho \| \sigma) - T_1 S[\Phi(\rho) \| \Phi(\sigma)] \ge (T_0 - T_1) S(\rho \| \sigma) = \infty ,
where we used the monotonicity of relative entropy [31]. We again have D_{ρ−σ} G(σ) = ∞ from Equation (14), contradicting the assumption that σ is a maximizer. □

References

  1. Jarzynski, C. Nonequilibrium equality for free energy differences. Phys. Rev. Lett. 1997, 78, 2690. [Google Scholar] [CrossRef]
  2. Crooks, G.E. Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences. Phys. Rev. E 1999, 60, 2721. [Google Scholar] [CrossRef] [PubMed]
  3. Crooks, G.E. Nonequilibrium measurements of free energy differences for microscopically reversible Markovian systems. J. Stat. Phys. 1998, 90, 1481–1487. [Google Scholar] [CrossRef]
  4. Touchette, H.; Lloyd, S. Information-theoretic approach to the study of control systems. Phys. A Stat. Mech. Its Appl. 2004, 331, 140–172. [Google Scholar] [CrossRef]
  5. Seifert, U. Stochastic thermodynamics, fluctuation theorems and molecular machines. Rep. Prog. Phys. 2012, 75, 126001. [Google Scholar] [CrossRef] [PubMed]
  6. Parrondo, J.M.; Horowitz, J.M.; Sagawa, T. Thermodynamics of information. Nat. Phys. 2015, 11, 131–139. [Google Scholar] [CrossRef]
  7. Kolchinsky, A.; Wolpert, D.H. Dependence of dissipation on the initial distribution over states. J. Stat. Mech. Theory Exp. 2017, 2017, 083202. [Google Scholar] [CrossRef]
  8. Procaccia, I.; Levine, R.D. Potential work: A statistical-mechanical approach for systems in disequilibrium. J. Chem. Phys. 1976, 65, 3357–3364. [Google Scholar] [CrossRef]
  9. Esposito, M.; Van den Broeck, C. Second law and Landauer principle far from equilibrium. EPL (Europhysics Lett.) 2011, 95, 40004. [Google Scholar] [CrossRef]
  10. Takara, K.; Hasegawa, H.H.; Driebe, D. Generalization of the second law for a transition between nonequilibrium states. Phys. Lett. A 2010, 375, 88–92. [Google Scholar] [CrossRef]
  11. Shiraishi, N.; Sagawa, T. Quantum Thermodynamics of Correlated-Catalytic State Conversion at Small Scale. Phys. Rev. Lett. 2021, 126, 150502. [Google Scholar] [CrossRef] [PubMed]
  12. Horodecki, M.; Oppenheim, J. Fundamental limitations for quantum and nanoscale thermodynamics. Nat. Commun. 2013, 4, 2059. [Google Scholar] [CrossRef] [PubMed]
  13. Brandão, F.G.; Horodecki, M.; Oppenheim, J.; Renes, J.M.; Spekkens, R.W. Resource theory of quantum states out of thermal equilibrium. Phys. Rev. Lett. 2013, 111, 250404. [Google Scholar] [CrossRef]
  14. Sparaciari, C.; Oppenheim, J.; Fritz, T. Resource theory for work and heat. Phys. Rev. A 2017, 96, 052112. [Google Scholar] [CrossRef]
  15. Allahverdyan, A.E.; Balian, R.; Nieuwenhuizen, T.M. Maximal work extraction from finite quantum systems. Europhys. Lett. 2004, 67, 565. [Google Scholar] [CrossRef]
  16. Skrzypczyk, P.; Short, A.J.; Popescu, S. Work extraction and thermodynamics for individual quantum systems. Nat. Commun. 2014, 5, 4185. [Google Scholar] [CrossRef] [PubMed]
  17. Schmiedl, T.; Seifert, U. Optimal finite-time processes in stochastic thermodynamics. Phys. Rev. Lett. 2007, 98, 108301. [Google Scholar] [CrossRef] [PubMed]
  18. Nakazato, M.; Ito, S. Geometrical aspects of entropy production in stochastic thermodynamics based on Wasserstein distance. Phys. Rev. Res. 2021, 3, 043093. [Google Scholar] [CrossRef]
  19. Kolchinsky, A.; Wolpert, D.H. Work, entropy production, and thermodynamics of information under protocol constraints. Phys. Rev. X 2021, 11, 041024. [Google Scholar] [CrossRef]
  20. Piñero, J.; Solé, R.; Kolchinsky, A. Optimization of nonequilibrium free energy harvesting illustrated on bacteriorhodopsin. Phys. Rev. Res. 2024, 6, 013275. [Google Scholar] [CrossRef]
  21. Solon, A.P.; Horowitz, J.M. Phase transition in protocols minimizing work fluctuations. Phys. Rev. Lett. 2018, 120, 180605. [Google Scholar] [CrossRef]
  22. Richens, J.G.; Masanes, L. Work extraction from quantum systems with bounded fluctuations in work. Nat. Commun. 2016, 7, 13511. [Google Scholar] [CrossRef]
  23. Riechers, P.M.; Gu, M. Initial-state dependence of thermodynamic dissipation for any quantum process. Phys. Rev. E 2021, 103, 042145. [Google Scholar] [CrossRef] [PubMed]
  24. Kolchinsky, A.; Wolpert, D.H. Dependence of integrated, instantaneous, and fluctuating entropy production on the initial state in quantum and classical processes. Phys. Rev. E 2021, 104, 054107. [Google Scholar] [CrossRef] [PubMed]
  25. Riechers, P.M.; Gupta, C.; Kolchinsky, A.; Gu, M. Thermodynamically Ideal Quantum State Inputs to Any Device. PRX Quantum 2024, 5, 030318. [Google Scholar] [CrossRef]
  26. Manzano, G.; Kardeş, G.; Roldán, É.; Wolpert, D.H. Thermodynamics of computations with absolute irreversibility, unidirectional transitions, and stochastic computation times. Phys. Rev. X 2024, 14, 021026. [Google Scholar] [CrossRef]
  27. Wolpert, D.; Korbel, J.; Lynn, C.; Tasnim, F.; Grochow, J.; Kardeş, G.; Aimone, J.; Balasubramanian, V.; De Giuli, E.; Doty, D.; et al. Is stochastic thermodynamics the key to understanding the energy costs of computation? Proc. Natl. Acad. Sci. USA 2024, 121, e2321112121. [Google Scholar] [CrossRef]
  28. Müller, M.P. Correlating Thermal Machines and the Second Law at the Nanoscale. Phys. Rev. X 2018, 8, 041051. [Google Scholar] [CrossRef]
  29. Lipka-Bartosik, P.; Wilming, H.; Ng, N.H. Catalysis in quantum information theory. Rev. Mod. Phys. 2024, 96, 025005. [Google Scholar] [CrossRef]
  30. Lanyi, J.K. Bacteriorhodopsin. Annu. Rev. Physiol. 2004, 66, 665–688. [Google Scholar] [CrossRef]
  31. Müller-Hermes, A.; Reeb, D. Monotonicity of the Quantum Relative Entropy Under Positive Maps. Ann. Henri Poincaré 2017, 18, 1777–1788. [Google Scholar] [CrossRef]
  32. Brandão, F.G.; Gour, G. The general structure of quantum resource theories. arXiv 2015, arXiv:1502.03149. [Google Scholar]
  33. Guryanova, Y.; Popescu, S.; Short, A.J.; Silva, R.; Skrzypczyk, P. Thermodynamics of quantum systems with multiple conserved quantities. Nat. Commun. 2016, 7, 12049. [Google Scholar] [CrossRef] [PubMed]
  34. Mandal, D.; Jarzynski, C. Work and information processing in a solvable model of Maxwell’s demon. Proc. Natl. Acad. Sci. USA 2012, 109, 11641–11645. [Google Scholar] [CrossRef] [PubMed]
  35. Barato, A.C.; Seifert, U. An autonomous and reversible Maxwell’s demon. EPL (Europhysics Lett.) 2013, 101, 60001. [Google Scholar] [CrossRef]
  36. Audenaert, K.M.R.; Eisert, J. Continuity bounds on the quantum relative entropy. J. Math. Phys. 2005, 46, 102104. [Google Scholar] [CrossRef]
  37. Watrous, J. Advanced Topics in Quantum Information Theory. Lecture Notes. 2020. Available online: https://cs.uwaterloo.ca/~watrous/QIT-notes (accessed on 29 December 2024).
Figure 2. Availability gain G(p) as a function of the engine initial distribution (p(0), 1 − p(0)). (a–d) correspond to four different environment initial distributions p_env. Black lines show G(p) computed using Equation (17); green dots indicate predictions made using our information-theoretic expression (18) (using shorthand G(q) + ΔD in legend). Optimal initial distribution q and equilibrium initial distribution π are indicated using vertical lines. Dashed curve indicates reduction in the engine's Shannon entropy as a function of initial distribution, ΔH from Equation (22). Vertical axes have the same scale. Other parameters: T_0 = T_1 = T = 1, ϵ = 1.
Figure 3. Same as in Figure 2, but where the temperature of Work Extraction is higher than that of Preparation, T_1 = 3 > T_0 = 1. Black lines show G(p) computed using Equation (17); green dots indicate predictions made using information-theoretic expression (18). (a–d) correspond to different initial states of the environment. Observe that in some cases, the function G is non-concave and may have multiple local maxima. In (d), the optimal distribution q does not have full support, so the equivalence between Equations (17) and (18) does not hold.
Figure 4. Gain of availability G ( ρ ) in a quantum system for different amounts of coherence (parameterized by θ ). Solid black line shows G for states diagonal in the reference basis, dashed black line shows G for states diagonal in the basis of the optimizer σ , both calculated using Equation (25). Markers indicate predicted values of G from information-theoretic expression (26). (a) For θ = 0 (no coherence), we recover the classical result shown in Figure 3b. (bd) Advantage of selecting initial state in the optimal basis increases with increased coherence. Vertical axes have the same scale. See text for details.

Share and Cite

MDPI and ACS Style

Kolchinsky, A.; Marvian, I.; Gokler, C.; Liu, Z.-W.; Shor, P.; Shtanko, O.; Thompson, K.; Wolpert, D.; Lloyd, S. Maximizing Free Energy Gain. Entropy 2025, 27, 91. https://doi.org/10.3390/e27010091
