1. Introduction
Since the early 1970s [1], contact geometry has been recognized as underlying macroscopic thermodynamics, starting from Gibbs’ fundamental thermodynamic relation. This has spurred a series of papers on the geometry of thermodynamics, including [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27]; see [28] for an introduction and survey. Nevertheless, this literature points to major differences with, for example, the geometric theory of classical mechanics (using symplectic geometry), and hints at aspects which have not yet been addressed. First, the thermodynamic phase space (which is formulated as a contact manifold) comprises the extensive and intensive variables, and thus its dimension is more than twice the minimal number of variables needed to describe the thermodynamic system at any moment of time. Second, most of the theory is about thermostatics, and the proper geometric formulation of the dynamics is much less clear. Third, the contact geometric approach to thermodynamics is usually based on the energy representation of thermodynamic systems and its corresponding Gibbs one-form. On the other hand, there is also the entropy formulation, which corresponds to another (although conformally equivalent) one-form. This led [29] to the use of homogeneous coordinates for the intensive variables, and thus to an extension of the thermodynamic phase space by one more degree of freedom. This was followed up in [25,26] by emphasizing the formulation of the thermodynamic phase space as the projection of the cotangent bundle over the space of extensive variables. Thus contact geometry is approached from the vantage point of the geometry of cotangent bundles with their Liouville one-form. Fourth, until now, not much work has been performed regarding the geometry of irreversible thermodynamics, based on the factorization of the irreversible entropy production. Fifth, how to use these geometric frameworks for the control of thermodynamic systems has not yet been addressed.
The present paper continues the investigation of all of these aspects. In Section 2, a systems and control perspective on macroscopic thermodynamics is emphasized by primarily regarding thermodynamic systems as systems interacting with their surroundings via heat, mechanical work, exchange of chemical species, etc. A classical example is, of course, the heat engine. A summary is provided of how dissipativity theory offers a natural framework for interpreting and formulating the first and second laws of thermodynamics, Clausius’ inequality, and eventually entropy. Indeed, energy and entropy reveal themselves to be the storage functions corresponding to two supply rates involving the thermal and mechanical ports of the thermodynamic system. Finally, this leads to Gibbs’ fundamental relation and to the definition of the thermodynamic phase space.
Section 3 focuses on geometric descriptions of irreversible thermodynamic systems. It is discussed how the classical factorization of the irreversible entropy production suggests quasi-Hamiltonian formulations (somewhat resembling GENERIC [30]), based on energy conservation and the increase of entropy of the autonomous part of the dynamics. This paper also indicates how such formulations may be used for stability analysis. Section 4 starts with the geometry of the thermodynamic phase space from the point of view of the Liouville geometry of the cotangent bundle over the space of extensive variables. Identifying the constitutive relations (‘thermostatics’) of the thermodynamic system with a Liouville submanifold, and the dynamics with homogeneous Hamiltonian dynamics, leads to the definition of a port-thermodynamic system. Such systems interact with their environment via power ports and/or entropy flow ports. In Section 5, this is used for ‘control by interconnection’ of port-thermodynamic systems, where the dynamics of the system are sought to be controlled by interconnection with a suitable controller port-thermodynamic system. Finally, Section 6 contains conclusions and a discussion of avenues for further research.
The sections are illustrated by three running examples: the gas-piston-damper system, chemical reaction networks, and the heat exchanger. Overall, the paper heavily builds upon the previous papers [25,26,31,32,33,34], in which further details and background can be found.
2. The First and Second Law from the Point of View of Dissipativity Theory
The first law of thermodynamics expresses two fundamental properties: (1) the different types of interaction of a thermodynamic system with its surroundings (e.g., heat flow, mechanical work, flow of chemical species, etc.) all result in an exchange of a common quantity called energy, (2) there exists a function of the macroscopic thermodynamic variables that represents the energy stored in the system, and the increase of this function during any time interval is equal to the sum of the energies supplied to the system during this time interval by its surroundings (conservation of energy). Thus, energy manifests itself in different physical forms, which are equivalent and to a certain extent exchangeable. ‘To a certain extent’ because, as expressed by the second law of thermodynamics, there are limitations to the conversion of heat to other forms of energy.
The first law can be mathematically formulated through the use of dissipativity theory as formulated in [35]; see also [31,36,37]. Consider a simple thermodynamic system such as a gas, described by three variables: volume V, pressure P, and temperature T. Then, the mechanical power (rate of mechanical work) provided by the surroundings to the thermodynamic system is given by $-P\dot{V}$, where $\dot{V}$ is the rate of volume change. (In the physics convention for the pressure P, $P\dot{V}$ is the rate of mechanical work exerted by the system on the surroundings.) The second type of interaction with the surroundings comes from heat delivered to the system (for instance, from a heat source). Let us denote by q the heat flow (heat per second) from the heat source into the system. Then the first law is expressed by the existence of a function $E(x)$ of the thermodynamic state x (e.g., $x = (V,T)$, satisfying the equation of state), expressing the energy of the system and satisfying, at all times,
$$\frac{d}{dt}E = q - P\dot{V}.$$
That is, the increase of the total energy E of the thermodynamic system is equal to the incoming heat flow (through the thermal port) minus the mechanical work performed by the system on its surroundings (through the mechanical port). Equivalently, in the terminology of dissipativity theory, the first law amounts to the system being cyclo-lossless for the supply rate $q - P\dot{V}$, with storage function E. This directly extends to more involved thermodynamic systems. For example, suppose that apart from mechanical and thermal interaction with the surroundings, there is an additional mass inflow of chemical species. Then, the supply rate is extended to $q - P\dot{V} + \mu^{\top}\dot{N}$. Here, $\mu^{\top}\dot{N} = \sum_{k}\mu_{k}\dot{N}_{k}$, with $N_{k}$ the mole number of the k-th chemical species, and $\mu_{k}$ its chemical potential.
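This energy bookkeeping is easy to check numerically. The sketch below (an illustration, not part of the formal development) integrates the first law for one mole of a monatomic ideal gas, under the assumed caloric and state relations $E = \frac{3}{2}RT$ and $PV = RT$, with a constant heat flow q and a constant rate of volume change:

```python
# First-law bookkeeping for 1 mol of a monatomic ideal gas
# (assumed model: E = 1.5*R*T, P*V = R*T).
R = 8.314  # universal gas constant, J/(mol K)

def simulate(T0=300.0, V0=1e-3, q=50.0, Vdot=1e-5, dt=1e-4, steps=10000):
    T, V = T0, V0
    E = 1.5 * R * T
    for _ in range(steps):
        P = R * T / V              # equation of state
        E += (q - P * Vdot) * dt   # first law: dE/dt = q - P*dV/dt
        V += Vdot * dt
        T = E / (1.5 * R)          # recover T from the caloric relation
    return E, V, T

E1, V1, T1 = simulate()
```

Here the supplied heat (50 W) exceeds the mechanical power $P\dot{V}$ delivered to the surroundings, so the stored energy increases.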
The first law emphasizes the role of thermodynamic systems as devices for energy conversion; energy from one physical domain is converted into energy in another domain. ‘Optimal’ conversion of heat into mechanical work, motivated by the design of steam engines in the beginning of the 19th century, was one of the starting points of thermodynamic theory. Electro-chemical devices such as batteries, and electro-mechanical systems including electrical motors and generators, are among the many other classical examples of energy-converting devices [38]. On the other hand, almost from the very start of thermodynamic theory, it was realized that there are intrinsic limitations to energy conversion. In particular, heat cannot simply be converted into mechanical work. This is the origin of the second law of thermodynamics. The second law also admits a dissipativity interpretation, albeit one more involved than that of the first law. Let us start with the formulation of the second law as given by Lord Kelvin (see [39]):
A transformation of a thermodynamic system whose only final result is to transform into work heat extracted from a source which is at the same temperature throughout is impossible.
Since the work done during a time interval $[t_0, t_1]$ is equal to $\int_{t_0}^{t_1} P(t)\dot{V}(t)\,dt$, Kelvin’s formulation immediately implies that for each constant temperature T, any thermodynamic system is cyclo-passive with respect to the supply rate q. However, the second law is stronger than that. Namely, Kelvin’s formulation also forbids the conversion into work of heat from a source at constant temperature for all transformations in which the system interacts as well with a second heat source at another temperature, as long as the net heat taken from this second heat source is zero. As demonstrated by Carnot, the interaction with heat sources at different temperatures is crucial for the conversion of heat into mechanical energy. This led to the famous Carnot cycle, which can be described as follows: consider a simple thermodynamic system, in particular, a fluid or gas in a confined space of a certain volume. Control of the system functions in two ways: (1) via isothermal transformations, where heat is supplied to, or taken from, the system at a constant temperature (classically described as the interconnection of the thermodynamic system with an infinite heat reservoir at the temperature of the isothermal process), (2) via adiabatic transformations, where the only interaction with the surroundings is via work supplied to, or taken from, the system (classically described by the movement of a piston that changes the volume of the system, with a pressure equal to the pressure of the gas). A cycle consists of two isothermal transformations and two adiabatic transformations: first, an isothermal transformation at temperature $T_h$ (‘hot’) takes the system from an initial state to another state; secondly, an adiabatic transformation lowers the temperature of the system to $T_c$ (‘cold’); thirdly, an isothermal transformation at temperature $T_c$ takes the system to a state from which, fourthly, an adiabatic transformation takes the system back to the original initial state; see Figure 1.
The cycle is called a Carnot cycle if it is reversible, i.e., if it can be traversed in the opposite direction as well.
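For one mole of an ideal gas the heats exchanged along such a cycle can be computed in closed form; the sketch below (with freely chosen temperatures and volumes, under the ideal-gas assumption) recovers the Carnot equality $\frac{Q_h}{T_h} + \frac{Q_c}{T_c} = 0$ and the efficiency $1 - \frac{T_c}{T_h}$:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def carnot(Th, Tc, V1, V2, gamma=5/3):
    """Heat and work quantities for a Carnot cycle of 1 mol ideal gas."""
    Qh = R * Th * math.log(V2 / V1)   # heat absorbed on the hot isothermal
    # adiabatics satisfy T * V**(gamma-1) = const, fixing V3 and V4:
    r = (Th / Tc) ** (1.0 / (gamma - 1.0))
    V3, V4 = V2 * r, V1 * r
    Qc = R * Tc * math.log(V4 / V3)   # heat absorbed on the cold isothermal (< 0)
    W = Qh + Qc                       # net work delivered, by the first law
    return Qh, Qc, W

Qh, Qc, W = carnot(Th=500.0, Tc=300.0, V1=1.0, V2=2.0)
```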
Remark 1. In the exposition of the Carnot cycle, terminologies such as ‘infinitesimally slow’, ‘quasi-reversible’, ‘quasi-static’, etc., are often used. This is largely with regard to the interaction of a system with its surroundings as being implemented by actual physical devices. For example, an isothermal transformation is viewed as the result of the ‘real’ physical action of a force exerted by a piston on the gas (implying that the pressure delivered by the piston could be different from the pressure of the gas). Furthermore, the system is considered to be in ‘real’ physical contact with a heat reservoir at a certain temperature (which could differ from that of the gas). In contrast, in, e.g., electrical network theory and control theory, the concept of an ‘ideal’ control action is employed, where, for instance, the pressure and the temperature are directly controlled.
The heat delivered to the system during the first isothermal at temperature $T_h$ is denoted by $Q_h$, and during the second isothermal at $T_c$ by $Q_c$ (generally $Q_c$ is negative). Then, by the first law, since the final state is equal to the initial state, $W = Q_h + Q_c$, where W is the mechanical work that is done by the thermodynamic system on its surroundings.
By an intricate reasoning from [39], see also [31], Kelvin’s formulation of the second law yields for any cycle the fundamental inequality
$$\frac{Q_h}{T_h} + \frac{Q_c}{T_c} \leq 0,$$
with equality in the case of a Carnot cycle. Furthermore, the reasoning can be extended to complex cycles, consisting of n isothermals at temperatures $T_1, \ldots, T_n$ and absorbed heat quantities $Q_1, \ldots, Q_n$, interlaced by n adiabatics, leading to
$$\sum_{i=1}^{n}\frac{Q_i}{T_i} \leq 0,$$
with equality in the case of reversibility. Finally, a slight extension (approximating continuous heat flow time-functions $q(\cdot)$ by step functions with step values $Q_i$) yields the celebrated Clausius inequality
$$\oint \frac{q(t)}{T(t)}\,dt \leq 0 \tag{4}$$
for all cyclic processes (where q is the heat flow into the thermodynamic system, and T is the temperature of the system), with equality holding for all reversible cyclic processes (see [31] for details and refinements).
From the point of view of dissipativity theory [31,35], the Clausius inequality (4) is the same as cyclo-dissipativity of the thermodynamic system with respect to the supply rate $-\frac{q}{T}$. Thus, assuming reachability from and controllability to some ground state, this means, see [40], that there exists a storage function F for the supply rate $-\frac{q}{T}$; that is,
$$F(x(t_1)) - F(x(t_0)) \leq -\int_{t_0}^{t_1}\frac{q(t)}{T(t)}\,dt.$$
Hence $S := -F$ satisfies
$$S(x(t_1)) - S(x(t_0)) \geq \int_{t_0}^{t_1}\frac{q(t)}{T(t)}\,dt. \tag{6}$$
The function S was called ‘entropy’ by Clausius, from the Greek word for ‘transformation’.
From the point of view of dissipativity theory, the storage function F need not be unique. In order to guarantee the uniqueness of F (modulo a constant), and therefore of the entropy S, we additionally assume [31,40] that, given some ground state, for every thermodynamic state there exists a reversible cyclic transformation through this state and the ground state, satisfying
$$\oint \frac{q(t)}{T(t)}\,dt = 0.$$
This uniqueness of S is, explicitly or implicitly, always assumed in expositions of macroscopic thermodynamics, and also in this paper.
Remark 2. The dissipativity theory formulation of the second law already appears in [35], but under the additional assumption that F is nonnegative. In fact, by [35,37] there exists a nonnegative storage function for the supply rate $-\frac{q}{T}$ (and thus the system is dissipative instead of merely cyclo-dissipative) if and only if for all initial conditions x
$$S_a(x) := \sup \int_{0}^{t_1}\frac{q(t)}{T(t)}\,dt < \infty, \tag{8}$$
where the supremum is taken over all $t_1 \geq 0$ and all heat flow functions $q(\cdot)$ on the time interval $[0, t_1]$, together with the corresponding temperature profiles $T(\cdot)$, resulting from $x(0) = x$. Furthermore, if (8) holds, then $S_a$ is minimal among all nonnegative storage functions. It follows that S given by $S := -S_a$ is maximal among all nonpositive storage functions. Since an arbitrary constant may be added to S while still satisfying (6), the assumption that S is nonpositive is equivalent to S being bounded from above. However, in many thermodynamic systems the entropy is not bounded from above. Thus thermodynamic systems are generally only cyclo-dissipative with respect to the supply rate $-\frac{q}{T}$, and not dissipative.
The Thermodynamic Phase Space and Gibbs’ Relation
The next step is now to add the energy and entropy as extra extensive variables to the description of the thermodynamic system. In order to illustrate this, consider a simple thermodynamic system, with extensive variable V (volume) and intensive variables P, T (pressure and temperature). The equation of state is an equation
$$f(V, P, T) = 0$$
for some scalar function f. (For example, for one mole of an ideal gas $f(V,P,T) = PV - RT$, with R the universal gas constant.) Any $(V, P, T)$ satisfying $f(V,P,T) = 0$ is called a state of the thermodynamic system. Hence, under regularity conditions, the set of states of the thermodynamic system is a 2-dimensional submanifold M of $\mathbb{R}^3$. Then, consider the functions $E(V,P,T)$ (energy) and $S(V,P,T)$ (entropy) as obtained from dissipativity theory. Then, we may equally well represent the set of states by the 2-dimensional submanifold L comprising the total set of extensive and intensive variables $(S, V, E, P, T)$:
$$L = \{(S, V, E, P, T) \mid f(V,P,T) = 0,\ E = E(V,P,T),\ S = S(V,P,T)\}.$$
(With some abuse of notation, the extra variables E, S are denoted by the same letters as used for the functions defined before.) The space $\mathbb{R}^5$ of all extensive and intensive variables is called the thermodynamic phase space.
Furthermore, by the first law $\dot{E} = q - P\dot{V}$, while for any state there exists a path through this state and the ground state such that $\dot{S} = \frac{q}{T}$. Taken together, this implies that the Gibbs one-form on the thermodynamic phase space defined as
$$\theta := dE - TdS + PdV \tag{11}$$
is zero restricted to L. This is called Gibbs’ fundamental thermodynamic relation. The thermodynamic phase space, together with the Gibbs one-form, defines a contact manifold. Furthermore, a submanifold of the thermodynamic phase space restricted to which the Gibbs one-form (11) is zero, and which moreover has maximal dimension (in this case 2), is called a Legendre submanifold. Gibbs’ fundamental relation implies that any Legendre submanifold L is actually given as
$$L = \left\{(S, V, E, P, T) \;\middle|\; E = E(S,V),\ T = \frac{\partial E}{\partial S}(S,V),\ P = -\frac{\partial E}{\partial V}(S,V)\right\}$$
for some energy function $E(S,V)$. Thus, L is completely described by expressing the energy E as a function $E(S,V)$ of the other two extensive variables S, V; hence the name energy representation. Instead of relying on such an energy function (or its partial Legendre transforms), there is still an alternative way of describing L. This is to start, not with $E = E(S,V)$, but instead with the expression of the entropy as a function $S = S(E,V)$. For a simple thermodynamic system this leads to the entropy representation of the submanifold L given as
$$L = \left\{(S, V, E, P, T) \;\middle|\; S = S(E,V),\ \frac{1}{T} = \frac{\partial S}{\partial E}(E,V),\ \frac{P}{T} = \frac{\partial S}{\partial V}(E,V)\right\}.$$
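The two representations can be cross-checked numerically. The sketch below assumes the standard entropy function of one mole of a monatomic ideal gas (additive constants dropped, a textbook model choice), recovers T and P from the gradient of S by finite differences, and confirms consistency with $E = \frac{3}{2}RT$ and $PV = RT$:

```python
import math

R = 8.314  # gas constant, J/(mol K)

# Entropy of 1 mol of a monatomic ideal gas, constants dropped (assumed model):
def S(E, V):
    return 1.5 * R * math.log(E) + R * math.log(V)

def grad(f, a, b, h=1e-4):
    """Central finite-difference gradient of f(a, b)."""
    da = (f(a + h, b) - f(a - h, b)) / (2 * h)
    db = (f(a, b + h) - f(a, b - h)) / (2 * h)
    return da, db

E, V = 3741.3, 2.0          # an arbitrary state (E in J, V in m^3)
dSdE, dSdV = grad(S, E, V)
T = 1.0 / dSdE              # 1/T = dS/dE (entropy representation)
P = T * dSdV                # P/T = dS/dV
```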
3. Irreversible Thermodynamics
Clausius interpreted the term $\frac{q}{T}$ in the inequality (6) as the part of the infinitesimal transformation $\dot{S}$ that is compensated by the opposite rate of change $-\frac{q}{T}$ of the entropy of the surroundings; that is, of the reservoir supplying the heat to the thermodynamic system. The remaining part
$$\sigma := \dot{S} - \frac{q}{T} \geq 0$$
was called the ‘uncompensated transformation’ by Clausius, and later the irreversible entropy production [38]. Irreversible thermodynamics is concerned with thermodynamics where σ is different from zero, implying an autonomous (independent from external heat flow) increase of the entropy S. Sometimes it is also referred to as non-equilibrium thermodynamics, because the entropy increase results from (internal) non-equilibrium conditions.
The standard postulate of irreversible thermodynamics (see e.g., [38]) is that σ can be factorized as
$$\sigma = F^{\top}J = \sum_{k=1}^{r} F_k J_k, \tag{15}$$
where $F_1, \ldots, F_r$ are called the thermodynamic forces and $J_1, \ldots, J_r$ are the thermodynamic flows (or fluxes), in such a way that
$$F^{\top}J \geq 0.$$
In linear irreversible thermodynamics [38] it is furthermore assumed that the vectors F and J with components $F_k$ and $J_k$ are related by a symmetric linear map
$$J = LF, \qquad L = L^{\top} \geq 0.$$
These are the celebrated Onsager reciprocity relations [38], corresponding to the symmetric factorization $\sigma = F^{\top}LF$.
Example 1 (The heat exchanger). Perhaps the simplest example of irreversible dynamics and irreversible entropy production is offered by the heat exchanger. Consider two heat compartments, having temperatures $T_h$ and $T_c$ (‘hot’ and ‘cold’), connected by a heat-conducting wall. In the absence of the conducting wall (and thus, without irreversible entropy production), these are two separate systems with entropies $S_h$ and $S_c$, each satisfying
$$\dot{S}_h = \frac{q_h}{T_h}, \qquad \dot{S}_c = \frac{q_c}{T_c}.$$
Due to the conducting wall, there is a heat flow q from the hot to the cold compartment, which is given by Fourier’s law for heat conduction as
$$q = \lambda (T_h - T_c)$$
for some positive constant λ. Furthermore, in view of the first law, $\dot{E}_h = -q$ and $\dot{E}_c = q$. Hence, the total entropy $S = S_h + S_c$ satisfies
$$\dot{S} = -\frac{q}{T_h} + \frac{q}{T_c}.$$
This yields the following expression for the irreversible entropy production σ due to heat conduction (non-equilibrium conditions)
$$\sigma = \lambda\frac{(T_h - T_c)^2}{T_h T_c} \geq 0.$$
In this example the thermodynamic force is $F = \frac{1}{T_c} - \frac{1}{T_h}$, while the thermodynamic flow is $J = q = \lambda(T_h - T_c)$. Indeed, $\sigma = FJ \geq 0$, with $\sigma = 0$ if, and only if, $T_h = T_c$. Despite its simplicity, this is an example of nonlinear irreversible thermodynamics, since the thermodynamic flow J cannot be expressed as a linear function of the thermodynamic force F.
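The heat exchanger is easily simulated. The sketch below adds the hypothetical assumption of a constant, equal heat capacity C for both compartments, and checks that energy is conserved, that the entropy production stays nonnegative along the whole trajectory, and that the temperatures relax towards each other:

```python
def heat_exchanger(Th=400.0, Tc=300.0, C=10.0, lam=2.0, dt=1e-3, steps=5000):
    """Euler simulation; C is an assumed constant heat capacity per compartment."""
    min_sigma = float("inf")
    for _ in range(steps):
        q = lam * (Th - Tc)                  # Fourier's law
        sigma = q * (1.0 / Tc - 1.0 / Th)    # irreversible entropy production F*J
        min_sigma = min(min_sigma, sigma)
        Th -= q / C * dt                     # first law for each compartment
        Tc += q / C * dt
    return Th, Tc, min_sigma

Th1, Tc1, min_sigma = heat_exchanger()
```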
Example 2 (The gas-piston-damper system). Another simple example is the gas-piston-damper system. Consider a cylinder containing a gas, whose volume can be controlled by a piston actuated by an external force u, and which is subject to linear damping. The total energy E of the system can be expressed as a function of the other extensive variables as
$$E(S, V, \pi) = U(S, V) + \frac{\pi^2}{2m},$$
with S representing entropy, V volume, π the momentum of the piston with mass m, and $U(S,V)$ representing the internal energy of the gas. Assuming that the heat produced by the damping of the piston is fully absorbed by the gas in the cylinder, the dynamics are given as
$$\dot{V} = Av, \qquad \dot{\pi} = PA - dv + u, \qquad \dot{S} = \frac{dv^2}{T}, \tag{22}$$
with $v = \frac{\pi}{m}$ the velocity of the piston, A its area, d the damping constant, $T = \frac{\partial U}{\partial S}$ the temperature, $P = -\frac{\partial U}{\partial V}$ the pressure, and u the external force on the piston. The thermodynamic force F is identified as v and the thermodynamic flow as $J = \frac{dv}{T}$. Clearly, the Onsager relations are satisfied with $L = \frac{d}{T}$.
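A numerical sanity check of these dynamics is sketched below, under a hypothetical ideal-gas-like internal energy $U(S,V)$ (the constants 1000 J, $C_v$, and all parameters are freely chosen for illustration). The check is the energy balance $\dot{E} = uv$, together with the monotone growth of S:

```python
import math

Cv, R = 12.471, 8.314  # assumed heat capacity and gas constant (1 mol)

def U(S, V):
    """Hypothetical ideal-gas internal energy with U(0, 1) = 1000 J."""
    return 1000.0 * math.exp(S / Cv) * V ** (-R / Cv)

m, A, d, u, h = 1.0, 0.01, 0.5, -0.5, 1e-5
V, pi, S = 1.0, 0.2, 0.0
E0 = U(S, V) + pi ** 2 / (2 * m)
work = 0.0
for _ in range(20000):
    T = U(S, V) / Cv               # T = dU/dS
    P = (R / Cv) * U(S, V) / V     # P = -dU/dV
    v = pi / m
    work += u * v * h              # external power u*v, integrated
    V, pi, S = (V + h * A * v,
                pi + h * (P * A - d * v + u),
                S + h * d * v * v / T)
E1 = U(S, V) + pi ** 2 / (2 * m)
# E1 - E0 should match the supplied work, while S only increases.
```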
Example 3 (Chemical reaction network). A third, more involved, example of irreversible thermodynamics is given by the dynamics of chemical reaction networks [34,41]. Consider an isolated (no incoming or outgoing chemical species, and no external heat flow) reaction network, with m chemical species and r reactions. Disregarding volume and pressure, consider the vector $x \in \mathbb{R}^{m}_{+}$ of concentrations of the chemical species. The dynamics take the form
$$\dot{x} = NJ(x),$$
where $J(x) \in \mathbb{R}^{r}$ is the vector of reaction fluxes. The stoichiometric matrix N, which consists of positive and negative integer elements, captures the structural balance laws of the reactions. Chemical reaction network theory, as originating from [42,43,44], identifies the edges of the underlying directed graph with the r reactions, and the nodes with the c complexes of the chemical reactions, i.e., the different left- and right-hand sides of the reactions in the network. This means that the stoichiometric matrix N is factorized as $N = ZB$, with B denoting the incidence matrix of the graph of complexes, and Z denoting the complex composition matrix (a matrix of nonnegative integers), whose columns capture the expression of the complexes in the m chemical species. It is shown in [45] that the dynamics of a large class of chemical reaction networks (including detailed-balanced mass action kinetics networks) can be written into the compact form
$$\dot{x} = -Z\mathcal{L}\,\mathrm{Exp}\left(\frac{Z^{\top}\mu(x)}{RT}\right), \tag{24}$$
where $\mathrm{Exp}$ is the vector exponential mapping $(\mathrm{Exp}(\gamma))_i = e^{\gamma_i}$, R is the gas constant, T is the temperature, and $\mu(x)$ is the m-dimensional vector of chemical potentials of the chemical species (for which, e.g., in the case of detailed-balanced mass action kinetics, explicit expressions are available). Furthermore, the matrix $\mathcal{L} = B\mathcal{K}B^{\top}$ in (24) defines a weighted Laplacian matrix for the graph of complexes, with the diagonal elements of the diagonal matrix $\mathcal{K}$ depending on the temperature T and the reference state. We have the following fundamental property [45]: for every $\gamma \in \mathbb{R}^{c}$,
$$\gamma^{\top}\mathcal{L}\,\mathrm{Exp}(\gamma) \geq 0, \quad \text{with equality if and only if } B^{\top}\gamma = 0. \tag{25}$$
Expressing the entropy S as a function of x and the total energy E, Gibbs’ fundamental relation yields $\frac{\partial S}{\partial x} = -\frac{\mu}{T}$. This implies
$$\dot{S} = -\frac{\mu^{\top}}{T}\dot{x} = R\,\gamma^{\top}\mathcal{L}\,\mathrm{Exp}(\gamma) \geq 0, \qquad \gamma = \frac{Z^{\top}\mu}{RT},$$
with equality if, and only if, $N^{\top}\mu = 0$, i.e., if and only if the chemical affinities $-N^{\top}\mu$ of the reactions are all zero. Hence the equilibria of the system correspond to states of minimal (i.e., zero) entropy production, in accordance with the theory of irreversible thermodynamics [38].
The vectors of thermodynamic forces F and thermodynamic flows J are given as
$$F = -\frac{N^{\top}\mu}{T}, \qquad J = J(x), \tag{27}$$
and indeed, by (25), $F^{\top}J \geq 0$, with $F^{\top}J = 0$ if and only if $N^{\top}\mu = 0$.
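For a single hypothetical reversible reaction A ⇌ B with mass action kinetics (rate constants chosen freely), the force F and flow J always have the same sign, so σ = FJ ≥ 0 along any trajectory; the sketch below checks this, together with conservation of the total concentration:

```python
import math

kf, kr = 2.0, 1.0          # assumed forward/backward rate constants
xA_eq, xB_eq = 1.0, 2.0    # detailed balance: kf * xA_eq == kr * xB_eq
Rgas = 8.314

xA, xB, h = 3.0, 0.5, 1e-4
min_sigma = float("inf")
for _ in range(50000):
    J = kf * xA - kr * xB                              # reaction flux
    F = Rgas * math.log((xA * xB_eq) / (xB * xA_eq))   # affinity divided by T
    min_sigma = min(min_sigma, F * J)                  # sigma = F * J
    xA, xB = xA - h * J, xB + h * J                    # xdot = N*J, N = (-1, 1)^T
```

Note that J is not a linear function of F here, in line with the remark below.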
Note that J cannot be expressed as a linear function of F and thus, in general, chemical reaction networks define nonlinear irreversible thermodynamics.
3.1. Quasi-Hamiltonian Formulation of Irreversible Thermodynamic Systems
Conservative mechanical systems are well-known to admit a Hamiltonian formulation. The same holds for many other physical systems. The Hamiltonian formulation of the dynamics of thermodynamic systems is, however, much more elusive. This has already been studied and elaborated upon in, e.g., [16,24,41,46]. The present formulation emphasizes the factorization (15) of the irreversible entropy production.
Consider an isolated thermodynamic system with entropy S and energy E. Collect all other extensive variables in a vector denoted by z. The energy E can be expressed as a function $E(z,S)$ of z and S. Now consider the irreversible entropy production $\sigma = F^{\top}J$, with J the vector of thermodynamic flows and F the vector of thermodynamic forces. Often (as illustrated by the examples to be discussed), the thermodynamic force F can be expressed as
$$F = C^{\top}\frac{\partial E}{\partial z}(z,S)$$
for some matrix C, whose elements are possibly depending on $z, S$, as well as on $\frac{\partial E}{\partial z}, \frac{\partial E}{\partial S}$. Note that $\frac{\partial E}{\partial z}$ equals the vector of intensive variables associated with the extensive variables z, while the intensive variable $\frac{\partial E}{\partial S}$ equals the temperature T.
Energy conservation $\dot{E} = 0$ together with $\dot{S} = \sigma = F^{\top}J$ suggests writing the dynamics of z and S into the form
$$\begin{bmatrix}\dot{z}\\ \dot{S}\end{bmatrix} = \begin{bmatrix} J_0 & -CJ\\ (CJ)^{\top} & 0 \end{bmatrix}\begin{bmatrix}\frac{\partial E}{\partial z}(z,S)\\ \frac{\partial E}{\partial S}(z,S)\end{bmatrix} \tag{28}$$
for some skew-symmetric matrix $J_0$, possibly depending on $z, S$ and $\frac{\partial E}{\partial z}, \frac{\partial E}{\partial S}$. This implies that the extended matrix in (28) is also skew-symmetric, and thus indeed $\dot{E} = 0$. Note, however, that since the matrix may depend on the intensive variables $\frac{\partial E}{\partial z}, \frac{\partial E}{\partial S}$, it does not define a Poisson bracket on the state space with coordinates $(z, S)$. Therefore, (28) will be called a quasi-Hamiltonian formulation.
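The mechanism behind energy conservation here is purely algebraic: for any skew-symmetric matrix, the quadratic form of the gradient vanishes, regardless of how the matrix depends on the state. A minimal numerical check, with a random skew-symmetric matrix standing in for the structure matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
J_ext = M - M.T                   # an arbitrary skew-symmetric structure matrix
grad_E = rng.standard_normal(4)   # stand-in for (dE/dz, dE/dS) at some state
x_dot = J_ext @ grad_E            # quasi-Hamiltonian drift
dE_dt = grad_E @ x_dot            # = grad_E^T J_ext grad_E = 0 by skew-symmetry
```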
This is illustrated by the previously discussed examples of the gas-piston-damper system, chemical reaction network, and heat exchanger as follows.
Example 4 (Gas-piston-damper system continued). The dynamics of the gas-piston-damper system (22) can be written into the quasi-Hamiltonian form (see also [41])
$$\begin{bmatrix}\dot{V}\\ \dot{\pi}\\ \dot{S}\end{bmatrix} = \begin{bmatrix} 0 & A & 0\\ -A & 0 & -\frac{dv}{T}\\ 0 & \frac{dv}{T} & 0 \end{bmatrix}\begin{bmatrix}\frac{\partial E}{\partial V}\\ \frac{\partial E}{\partial \pi}\\ \frac{\partial E}{\partial S}\end{bmatrix} + \begin{bmatrix}0\\ 1\\ 0\end{bmatrix}u, \tag{29}$$
with $v = \frac{\partial E}{\partial \pi}$ the velocity of the piston and $T = \frac{\partial E}{\partial S}$ the temperature. The thermodynamic flow and force are $J = \frac{dv}{T}$ and $F = v$, respectively. Hence, the matrix is of the form as given in (28) with $C = \begin{bmatrix}0\\ 1\end{bmatrix}$. Since the matrix depends on the intensive variables T and v, it does not define a Poisson bracket.
Example 5 (Chemical reaction network continued). In the case of chemical reaction networks, the vector of thermodynamic forces is given as $F = -\frac{N^{\top}\mu}{T}$ with $N = ZB$, and μ the vector of chemical potentials. Furthermore, according to (27), the vector of thermodynamic flows is given as the vector of reaction fluxes $J(x)$. This leads to the quasi-Hamiltonian representation
$$\begin{bmatrix}\dot{x}\\ \dot{S}\end{bmatrix} = \begin{bmatrix} 0 & \frac{NJ(x)}{T}\\ -\frac{(NJ(x))^{\top}}{T} & 0 \end{bmatrix}\begin{bmatrix}\mu\\ T\end{bmatrix}. \tag{30}$$
Example 6 (Heat exchanger continued)
. The quasi-Hamiltonian formulation of the heat exchanger is slightly different. This is caused by the fact that, in this example, we have two entropies, $S_h$ and $S_c$, corresponding to the two compartments (and not a total entropy as in the previous two examples). In fact, the quasi-Hamiltonian formulation of the heat exchanger is given as (see [41])
$$\begin{bmatrix}\dot{S}_h\\ \dot{S}_c\end{bmatrix} = \lambda\left(\frac{1}{T_c} - \frac{1}{T_h}\right)\begin{bmatrix}0 & -1\\ 1 & 0\end{bmatrix}\begin{bmatrix}\frac{\partial E}{\partial S_h}\\ \frac{\partial E}{\partial S_c}\end{bmatrix},$$
since $T_h = \frac{\partial E}{\partial S_h}$, $T_c = \frac{\partial E}{\partial S_c}$, and the heat flow from the hot to the cold compartment is given by $q = \lambda(T_h - T_c)$. Here, we recognize $\frac{1}{T_c} - \frac{1}{T_h}$ as the thermodynamic force.
A further structured form of quasi-Hamiltonian modeling of irreversible thermodynamic systems, called irreversible port-Hamiltonian systems, was introduced in [24]; see [41,46,47] for more developments and references.
A special case occurs if the total energy $E(z,S)$ splits as
$$E(z, S) = E_m(z) + U(S)$$
for some thermal energy $U(S)$ and remaining energy $E_m(z)$. In this case, one obtains the equations
$$\dot{z} = J_0\frac{\partial E_m}{\partial z}(z) - CJT, \qquad \dot{S} = J^{\top}C^{\top}\frac{\partial E_m}{\partial z}(z), \qquad T = \frac{dU}{dS}(S).$$
If, furthermore, $J = \frac{1}{T}DC^{\top}\frac{\partial E_m}{\partial z}(z)$ with $D = D^{\top} \geq 0$ (Onsager’s reciprocity relations), then the dynamical equations for the extensive variables z can be combined into
$$\dot{z} = \left(J_0 - CDC^{\top}\right)\frac{\partial E_m}{\partial z}(z).$$
This is the standard internal dynamics of a port-Hamiltonian system with state vector z; see e.g., [37,48,49]. In this case, irreversibility means that, even though the total energy $E = E_m + U$ is preserved, the part of the energy given by $E_m(z)$ is continuously transformed (by the resistive power flow $\frac{\partial E_m}{\partial z}^{\top}(z)\,CDC^{\top}\frac{\partial E_m}{\partial z}(z)$) into the thermal energy $U(S)$. Conversely, one can show [31] that any port-Hamiltonian system can be embedded into an energy-conserving thermodynamic system.
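This energy bookkeeping, mechanical energy turned into thermal energy by the resistive power flow, can be checked numerically. The sketch below uses a mass-spring-damper with hypothetical parameters and verifies that the dissipated energy accounts for the loss of $E_m$:

```python
k, m, d, h = 2.0, 1.0, 0.3, 1e-4       # assumed spring, mass, damping, step size
x, pi = 1.0, 0.0                       # initial elongation, zero momentum
Em0 = 0.5 * k * x ** 2 + pi ** 2 / (2 * m)
dissipated = 0.0                       # energy transformed into thermal energy U(S)
for _ in range(100000):                # t = 10 s of explicit Euler
    v = pi / m
    dissipated += d * v * v * h        # resistive power flow d*v^2
    x, pi = x + h * v, pi + h * (-k * x - d * v)
Em1 = 0.5 * k * x ** 2 + pi ** 2 / (2 * m)
# Em0 - Em1 should match the dissipated energy (up to integration error).
```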
Example 7 (Mass-spring-damper system). A simple example is the ubiquitous mass-spring-damper system. Its dynamics are very similar to those of the gas-piston-damper system, the difference being that the internal energy $U(S,V)$ of the gas is replaced by the sum $\frac{1}{2}kx^2 + U(S)$, where $\frac{1}{2}kx^2$ is the potential energy of the spring (with x denoting the elongation of the spring, and k the spring constant), and $U(S)$ is the thermal energy of the system. This leads to the dynamics (compare with (29))
$$\begin{bmatrix}\dot{x}\\ \dot{\pi}\\ \dot{S}\end{bmatrix} = \begin{bmatrix} 0 & 1 & 0\\ -1 & 0 & -\frac{dv}{T}\\ 0 & \frac{dv}{T} & 0 \end{bmatrix}\begin{bmatrix}kx\\ v\\ T\end{bmatrix} + \begin{bmatrix}0\\ 1\\ 0\end{bmatrix}u,$$
as well as the following port-Hamiltonian formulation of the mass-spring-damper system
$$\begin{bmatrix}\dot{x}\\ \dot{\pi}\end{bmatrix} = \begin{bmatrix} 0 & 1\\ -1 & -d \end{bmatrix}\begin{bmatrix}kx\\ v\end{bmatrix} + \begin{bmatrix}0\\ 1\end{bmatrix}u.$$
3.2. Stability Analysis
The quasi-Hamiltonian formulation can be readily used for stability analysis. Note, however, that the conditions $\frac{\partial E}{\partial z} = 0$, $\frac{\partial E}{\partial S} = 0$ for E having a minimum often do not correspond to equilibria of interest. This is illustrated by the gas-piston-damper system, where these conditions correspond to pressure, velocity, and temperature all being equal to zero. Instead, in such cases it is of much more interest to consider the stability of steady states corresponding to a non-zero force $u^*$ delivered by the piston. In view of (29) and the energy expression $E(S,V,\pi) = U(S,V) + \frac{\pi^2}{2m}$, this corresponds to the steady state condition
$$P^*A + u^* = 0, \qquad v^* = 0. \tag{37}$$
(Note that $\dot{S} = 0$ is ensured by $v^* = 0$, implying that $\frac{dv^*}{T^*} = 0$, and thus corresponds to a singularity in the skew-symmetric matrix, instead of a vanishing of all the partial derivatives of E. In particular, the temperature $T^*$ at steady state will not be zero.) Instead of E itself, one uses as a candidate Lyapunov function the availability function [50] (also called Bregman divergence or shifted Hamiltonian [37])
$$E_a(S,V,\pi) := E(S,V,\pi) - E(S^*,V^*,\pi^*) + P^*(V - V^*) - T^*(S - S^*),$$
where $P^*$ and $T^*$ are the pressure and temperature at steady state, for some arbitrary value $S^*$. Indeed, using the steady state condition (37), a direct computation yields
$$\frac{d}{dt}E_a = -\frac{T^*}{T}dv^2 \leq 0$$
for all values of the temperature T and the steady state temperature $T^*$. Furthermore, given that for thermodynamic systems the internal energy $U(S,V)$ (and therefore E) is a convex function, $E_a$ is also convex with minimum at $(S^*, V^*, \pi^* = 0)$. Hence, if $E_a$ is strictly convex (which is often the case), then this proves the asymptotic stability of the steady state. The use of the availability function for stability analysis and stabilization was already advocated for in [51]; see also, e.g., [47,52] for related work using the availability function in the context of passivity-based control of irreversible port-Hamiltonian systems.
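The key convexity property used here, nonnegativity of the availability function (a Bregman divergence) of a convex energy around the steady state, is easy to test numerically; the energy below is an arbitrary convex stand-in, not a physical model:

```python
import numpy as np

def availability(E, grad_E, x_star):
    """Bregman divergence of E around x_star: E(x) - E(x*) - grad_E(x*).(x - x*)."""
    E_star, g_star = E(x_star), grad_E(x_star)
    return lambda x: E(x) - E_star - g_star @ (x - x_star)

E = lambda x: 0.5 * x @ x + np.exp(x[0])              # convex stand-in energy
grad_E = lambda x: x + np.array([np.exp(x[0]), 0.0])  # its gradient

x_star = np.array([0.3, -1.0])
E_a = availability(E, grad_E, x_star)
rng = np.random.default_rng(1)
values = [E_a(x_star + rng.uniform(-2, 2, size=2)) for _ in range(1000)]
```

By convexity, all sampled values are nonnegative, with the minimum value zero attained at the steady state itself.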
This is extended to general quasi-Hamiltonian systems
$$\begin{bmatrix}\dot{z}\\ \dot{S}\end{bmatrix} = \mathcal{J}\begin{bmatrix}\frac{\partial E}{\partial z}(z,S)\\ \frac{\partial E}{\partial S}(z,S)\end{bmatrix} + Gu,$$
where the skew-symmetric matrix $\mathcal{J}$ and the input matrix G may both depend on the extensive variables $z, S$ and the intensive variables $\frac{\partial E}{\partial z}, \frac{\partial E}{\partial S}$. The steady state condition for $u = u^*$ is given as
$$\mathcal{J}^*\begin{bmatrix}\frac{\partial E}{\partial z}(z^*,S^*)\\ \frac{\partial E}{\partial S}(z^*,S^*)\end{bmatrix} + G^*u^* = 0, \tag{41}$$
where $\mathcal{J}^*$ and $G^*$ denote the values of $\mathcal{J}$ and G at the steady state $(z^*, S^*)$. Assuming the energy function $E(z,S)$ to be convex (which is normally the case in thermodynamic systems), the availability function is given as the convex function
$$E_a(z,S) := E(z,S) - E(z^*,S^*) - \frac{\partial E}{\partial z}^{\top}(z^*,S^*)(z - z^*) - \frac{\partial E}{\partial S}(z^*,S^*)(S - S^*),$$
having a minimum at $(z^*, S^*)$. A key property of the availability function $E_a$ is that
$$\frac{\partial E_a}{\partial (z,S)}(z,S) = \frac{\partial E}{\partial (z,S)}(z,S) - \frac{\partial E}{\partial (z,S)}(z^*,S^*),$$
where $\frac{\partial E}{\partial (z,S)}$ denotes the gradient vector of E (written as a column vector). The computation of $\frac{d}{dt}E_a$ yields, exploiting the skew-symmetry of $\mathcal{J}$,
$$\frac{d}{dt}E_a = \frac{\partial E}{\partial (z,S)}^{\top}(z,S)\,Gu^* - \frac{\partial E}{\partial (z,S)}^{\top}(z^*,S^*)\left(\mathcal{J}\,\frac{\partial E}{\partial (z,S)}(z,S) + Gu^*\right).$$
It follows that $\frac{d}{dt}E_a \leq 0$ if, and only if,
$$\frac{\partial E}{\partial (z,S)}^{\top}(z,S)\,Gu^* \leq \frac{\partial E}{\partial (z,S)}^{\top}(z^*,S^*)\left(\mathcal{J}\,\frac{\partial E}{\partial (z,S)}(z,S) + Gu^*\right). \tag{46}$$
(This condition is similar to the condition for asymptotic stability of steady states of port-Hamiltonian systems as derived in [53]; see also [37].) Hence, if (46) is satisfied and $E_a$ is not only convex but even strictly convex, then $E_a$ serves as a Lyapunov function for assessing the (asymptotic) stability of the steady state $(z^*, S^*)$.
Instead of expressing the energy E as a function $E(z,S)$ of the entropy and the remaining extensive variables, and writing the dynamics as a quasi-Hamiltonian system with Hamiltonian given by E, one may also write the entropy S as a function $S(z,E)$ and try to express the dynamics as being generated by the gradient of this entropy function. However, since (in the isolated case) $\dot{E} = 0$ while $\dot{S} \geq 0$, this constitutes quite a different scenario. An example where it is possible is a chemical reaction network as mentioned before. Instead of the quasi-Hamiltonian formulation (30), one rewrites the dynamics as (with z replaced by the vector of concentrations x)
$$\dot{x} = \frac{1}{R}\,Z\hat{\mathcal{L}}(\gamma)Z^{\top}\frac{\partial S}{\partial x}(x,E), \tag{47}$$
where $\gamma = \frac{Z^{\top}\mu}{RT}$ and $\hat{\mathcal{L}}(\gamma)$ is a modified weighted Laplacian matrix satisfying $\hat{\mathcal{L}}(\gamma)\gamma = \mathcal{L}\,\mathrm{Exp}(\gamma)$. Consequently,
$$\dot{S} = \frac{1}{R}\frac{\partial S}{\partial x}^{\top}(x,E)\,Z\hat{\mathcal{L}}(\gamma)Z^{\top}\frac{\partial S}{\partial x}(x,E) \geq 0,$$
by the same argument underlying (25). Since $\dot{E} = 0$, the availability function corresponding to $S(x, E_0)$, with $E_0$ the constant total energy of the system, can be used as a Lyapunov function for stability analysis; cf. [34] for details. Note that the matrix $\frac{1}{R}Z\hat{\mathcal{L}}(\gamma)Z^{\top}$, in the right-hand side of (47), is not a skew-symmetric matrix anymore. In fact, the formulation (47) resembles the formulation of thermodynamic systems as used in the GENERIC formalism; see, e.g., [30].
Another interesting case is that of isothermal chemical reaction networks. In this case [45], one considers the Gibbs free energy (the Legendre transform of $E(x,S)$ with respect to S)
$$G = E - TS$$
for constant T. By the properties of the Legendre transform, $\frac{\partial G}{\partial x} = \mu$ (the vector of chemical potentials). Hence, in view of (47), one obtains for constant T
$$\frac{d}{dt}G = \mu^{\top}\dot{x} = -T\sigma \leq 0,$$
with σ the irreversible entropy production. Alternatively expressed, whenever the temperature T is constant, $\frac{d}{dt}G = \frac{d}{dt}E - T\frac{d}{dt}S$, while $\frac{d}{dt}E = q$ (with q the heat flow needed to keep the temperature constant) and $\frac{d}{dt}S = \frac{q}{T} + \sigma$. Taken together, this indeed yields $\frac{d}{dt}G = -T\sigma \leq 0$.
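This monotone decrease of G can be illustrated for a single hypothetical reversible reaction A ⇌ B with mass action kinetics (rate constants chosen freely), using the free energy of an ideal dilute mixture, $G(x) = RT\sum_k x_k(\ln(x_k/x_k^{eq}) - 1)$, as an assumed model:

```python
import math

Rgas, T = 8.314, 300.0
kf, kr = 2.0, 1.0          # assumed rate constants
xA_eq, xB_eq = 1.0, 2.0    # detailed-balanced equilibrium: kf*xA_eq == kr*xB_eq

def G(xA, xB):
    """Gibbs free energy (up to a constant) of an ideal dilute mixture."""
    return Rgas * T * sum(x * (math.log(x / xe) - 1.0)
                          for x, xe in ((xA, xA_eq), (xB, xB_eq)))

xA, xB, h = 3.0, 0.5, 1e-4
G_prev, monotone = G(xA, xB), True
for _ in range(20000):
    J = kf * xA - kr * xB            # reaction flux
    xA, xB = xA - h * J, xB + h * J  # isothermal dynamics
    G_now = G(xA, xB)
    monotone = monotone and G_now <= G_prev + 1e-9
    G_prev = G_now
```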
5. Control by Interconnection
Control by interconnection is the paradigm of controlling a system by interconnecting it (through its inputs and outputs) to an additional controller system. The aim is to influence the dynamics of the original system by shaping the dynamics of the interconnected system by a proper choice of the controller system. Applied to port-thermodynamic systems, this means that, given a plant port-thermodynamic system, we interconnect it to a controller port-thermodynamic system such that in the closed-loop port-thermodynamic system the plant states converge to the desired set-point values. Port-thermodynamic systems can be interconnected either by their power ports or by their entropy flow ports; cf. [26] for details. For example, the power port interconnection of two systems is defined as follows. With the homogeneity assumption in p in mind, impose the following constraint
This leads to the summation of the Liouville one-forms
and
given by
on the
composed space defined as
Let the constitutive relations of the two port-thermodynamic systems be defined by the Liouville submanifolds
. Then, the constitutive relations of the interconnected system are defined by the composition
Furthermore, consider the dynamics on
defined by Hamiltonians
, where
is the row vector of control Hamiltonians of system
. Additionally
assume that the functions
do
not depend on the energy variables
. Then
is well-defined on
for all
. Next, consider the power conjugate outputs
. By imposing
interconnection constraints on the power port variables
satisfying the
power preservation property
one obtains an
interconnected port-thermodynamic system with constitutive relations described by
. Similarly, interconnecting the inputs
to the entropy flow outputs
in such a way that
leads again to a port-thermodynamic system.
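As a minimal numerical sketch of such a power-preserving coupling, consider the standard negative-feedback interconnection u1 = −y2, u2 = y1 (the code and names below are illustrative, not taken from [26]):

```python
import numpy as np

def interconnect(y1, y2):
    """Standard negative-feedback interconnection of two power ports.

    Setting u1 = -y2 and u2 = y1 is power preserving:
    y1.u1 + y2.u2 = -(y1.y2) + (y2.y1) = 0,
    so the coupling itself neither generates nor dissipates power.
    """
    return -y2, y1

rng = np.random.default_rng(0)
y1, y2 = rng.standard_normal(3), rng.standard_normal(3)
u1, u2 = interconnect(y1, y2)
print(abs(y1 @ u1 + y2 @ u2) < 1e-12)  # total port power vanishes
```

More generally, any interconnection that is skew-symmetric in the stacked port variables has this property; the negative-feedback choice is simply the most common instance.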
A basic control problem concerns the
stabilization of a system to a desired set-point value (
regulation). How can we use control by interconnection to address this problem? Suppose we want to stabilize the system at some set-point value
. If
already has a strict minimum at
then one may asymptotically stabilize
by the interconnection with a
damper system [
32]. In fact, assume for simplicity of exposition that
(scalar output
). Then, consider an additional linear damper system (cf. [
26]), whose Liouville submanifold is given as
with entropy as
, internal energy as
, and
its temperature. The dynamics of this damper system are generated by the Hamiltonian (see [
26])
(note the
quadratic dependence on the input
), with power conjugate output
(damping force). Then interconnect the plant port-thermodynamic system
to this damper system by setting
This results (after setting
) in the interconnected port-thermodynamic system with total Hamiltonian given as
with total energy
. This implies
Hence, by an application of LaSalle’s invariance principle, the system converges to the largest invariant set within the set where the power conjugate output
is zero. Note that
corresponds to zero entropy production
, in accordance with irreversible thermodynamics. If the largest invariant set where
y is zero equals the singleton
then asymptotic stability of
results, for some limiting value of the entropy of the damper system; see also [32].
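The energy and entropy bookkeeping behind this LaSalle argument can be illustrated on a finite-dimensional analogue: a mass-spring system interconnected with a linear damper held at constant temperature. This is a hedged sketch; the model, names, and parameter values below are hypothetical and are not the paper's gas-piston/damper equations.

```python
# Mass-spring system dissipating into a damper at constant temperature T_c.
# All parameter values are illustrative.
m, k, d, T_c = 1.0, 2.0, 0.5, 300.0

def simulate(q, v, dt=1e-3, steps=40_000):
    """Explicit-Euler simulation; returns final state, total energy, entropy."""
    U_c, S_c = 0.0, 0.0                  # damper internal energy and entropy
    for _ in range(steps):
        heat = d * v * v * dt            # dissipated power d*v^2 enters as heat
        U_c += heat
        S_c += heat / T_c                # dS_c/dt = d*v^2 / T_c >= 0
        q, v = q + v * dt, v + (-k * q - d * v) / m * dt
    H = 0.5 * m * v * v + 0.5 * k * q * q   # remaining mechanical energy
    return q, v, H + U_c, S_c

q, v, E_tot, S_c = simulate(1.0, 0.0)
print(E_tot, S_c, v)
```

The simulation exhibits the three qualitative facts used in the text: total energy (mechanical plus damper) is conserved up to discretization error, the damper entropy only increases, and the power conjugate output y = v tends to zero, as LaSalle's principle predicts.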
What can be done if
does not have a strict minimum at
? This can be approached via the (generalized)
Energy-Casimir method, similar to the theory of control by interconnection for port-Hamiltonian systems; see, e.g., [37,61].
with the generating function (in the energy representation)
. A classical tool in the stability analysis of ordinary Hamiltonian dynamics is to consider additional
conserved quantities; see e.g., [
56,
57,
58]. In order to extend this idea to the present case, let us strengthen our assumption on
by requiring that
everywhere on
; i.e., not just on
. Next, consider
additional conserved quantities for the dynamics
that depend only on the extensive variables
; i.e., functions
such that
, where
is the standard Poisson bracket on
. Hence, also
. Subsequently, note that [
32]
Hence, the transformation
is a point transformation (that is, one leaving the Liouville one-form invariant). Note that in the new coordinates the intensive variables
are transformed into the
new intensive variables
In these new coordinates the generating function for
in entropy representation is given by
. Furthermore, since
, the transformed Hamiltonian
satisfies
. Hence, in the new coordinates we are back to the situation considered before: if
has a strict minimum at
, then
is a Lyapunov function for the dynamics restricted to
, and the equilibrium
with
, is stable with respect to the dynamics on
.
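For comparison, the classical energy-Casimir recipe from [56,57,58] that this construction extends can be summarized as follows (standard material; the notation is illustrative):

```latex
% Energy-Casimir sketch. Suppose C_1, \dots, C_k satisfy \{H, C_j\} = 0.
% Then for every smooth \Phi the candidate function
V \;:=\; H + \Phi(C_1, \dots, C_k)
% is conserved along the dynamics, and one seeks \Phi such that
\mathrm{d}V(x^{*}) = 0, \qquad \mathrm{d}^{2}V(x^{*}) > 0 ,
% so that V is a Lyapunov function certifying stability of x^{*},
% even when H itself has no strict minimum there.
```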
Finally, note that the row vector
in the new coordinates transforms to
, leading to the
transformed power conjugate outputs
and the
transformed entropy flow conjugate outputs
All this is illustrated by the stabilization of the gas-piston system in the following example.
Example 11 (Regulation of gas-piston system). Consider the gas-piston system (without a damper) with extensive variables , as before. The constitutive properties of the system are given by the Liouville submanifold as in (65), with energy expression (‘p’ for plant). Without damping, the dynamics are generated by the Hamiltonian
with power conjugate output (velocity of the piston). A scalar controller system with extensive variables is given by the port-thermodynamic system , with energy , and dynamics
with output . The function is a design parameter, specifying the controller system.
The closed-loop system is obtained by the negative feedback (with v a new external input)
together with
This leads to the closed-loop Hamiltonian
It is immediately seen that, for any function , is a conserved quantity. This motivates the consideration of new canonical coordinates , where while . In the new coordinates we compute
as
leading to the same power conjugate output (velocity of the piston). For any set-point , the functions and should be chosen in such a way that the function has a strict minimum at for some value of and state of the controller system. As discussed before, this can be turned into asymptotic stabilization by additionally interconnecting the resulting closed-loop system with a damper system through the power port .
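The shaping step in this example can be mimicked in a simple finite-dimensional setting (a hedged sketch: the potential, the controller term, and all values below are hypothetical, not the paper's gas-piston expressions). A plant potential with no critical point at the set-point is augmented by a controller term that cancels the gradient and adds convexity, and the strict minimum is then verified numerically:

```python
import math

# Hypothetical plant potential V_p(q) = -cos(q); it has no critical point
# at the set-point q* = 1.0. The controller adds a quadratic term plus a
# gradient-cancelling linear term (V_p'(q) = sin(q)).
q_star, k_c = 1.0, 5.0

def V_p(q):
    return -math.cos(q)

def V_shaped(q):
    # shaped potential: plant potential plus controller contribution
    return V_p(q) + 0.5 * k_c * (q - q_star) ** 2 - math.sin(q_star) * (q - q_star)

# Verify a strict local minimum at q* by central differences:
h = 1e-5
grad = (V_shaped(q_star + h) - V_shaped(q_star - h)) / (2 * h)
hess = (V_shaped(q_star + h) - 2 * V_shaped(q_star) + V_shaped(q_star - h)) / h ** 2
print(grad, hess)
```

By construction the shaped gradient at q* is sin(q*) − sin(q*) = 0 and the shaped curvature is cos(q*) + k_c > 0, so the shaped potential has the strict minimum that the interconnection with a damper then turns into asymptotic stability.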