Article

Non-Additive Entropy Composition Rules Connected with Finite Heat-Bath Effects

by Tamás Sándor Biró 1,2,†
1 Wigner Research Centre for Physics, 1121 Budapest, Hungary
2 Institute for Physics, University Babeş-Bolyai, 400294 Cluj, Romania
† External Faculty Member at Complexity Science Hub, 1080 Vienna, Austria.
Entropy 2022, 24(12), 1769; https://doi.org/10.3390/e24121769
Submission received: 27 October 2022 / Revised: 28 November 2022 / Accepted: 29 November 2022 / Published: 3 December 2022
(This article belongs to the Special Issue Non-additive Entropy Formulas: Motivation and Derivations)

Abstract: Mathematical generalizations of the additive Boltzmann–Gibbs–Shannon entropy formula have been numerous since the 1960s. In this paper we seek an interpretation of the single parameter of the Rényi and Tsallis q-entropy formulas in terms of physical properties of a finite-capacity heat bath and of temperature fluctuations. Ideal gases of non-interacting particles serve as a demonstrating example.

1. Introduction

Entropy is a great tool in thermodynamics and statistical physics. Conceptualized originally by Clausius [1] as a state descriptor that distinguishes it from heat, it became a basic principle for statistical and information-theoretic calculations. Its classical form, the Boltzmann entropy, is in wide use [2,3,4,5]. Nevertheless, mostly in mathematical approaches to information theory, generalizations have appeared: altered, non-logarithmic relations between the probability of a given state and the total entropy of the system [6,7,8,9,10].
Still, the classical logarithmic formula remains the most widely used, possessing a number of properties that destine it to be a convex measure of information and probability. The best-known generalization is due to Alfréd Rényi [6], who constructed a formula that is also additive for factorizing joint probabilities, thereby abandoning the logarithm as the sole function with this property. A parameter occurs as a power of the probability, denoted by q or α. The classical formula emerges, formally, in the q → 1 limit.
As interesting as the Rényi entropy is, its form is not an expectation value. The q-entropy in the form of an expectation value was suggested by C. Tsallis [11,12,13], although that form is not additive for factorizing joint probabilities (or, equivalently, it is additive only for correlated, non-factorizing probabilities). Other and further generalizations, featuring more parameters, were also suggested, either formulated in terms of leading-order corrections to the Boltzmann formula in the thermodynamical limit or simply utilizing more parameters for possible deformations of the original formula [9,14,15,16,17,18,19]. Properties of generalized entropy formulas were intensely studied; for two selected examples with respect to the Tsallis entropy see [20,21].
In the present paper another viewpoint is presented: (i) first we identify deviations from the classical logarithmic formula as derived from deviations from additivity; (ii) then we demonstrate how phase-space finiteness effects cause corrections to the additivity of entropy, coinciding with the factorization of probability in a microcanonical approach; (iii) and finally we ask which modified entropy can be the most additive one in this respect. More closely, a general group entropy [22] following non-addition rules is considered, and the limit of its infinite repetition on small amounts is taken as an asymptotic rule of composition [23]. Associative rules form group operations; therefore a logarithm of the formal group can be derived from a general composition law, which is then additive. This will be the content of the next section.
There were decade-long discussions about the physical (or statistical) meaning of the parameter q, the first non-universal parameter occurring in generalized entropy formulas. It may be bound to the type of system under discussion, to its material properties, but at the same time it occurs generically in a given class of statistical systems, including informatics, statistical mechanics, dynamics at the edge of chaos, and complex random networks. A few approaches in the quest of uncovering the physical mechanisms determining the value of q in particular cases, in which the present author was involved, are found in Refs. [23,24,25,26,27]. Similar studies by others are copiously cited in review books, cf. [12,24].
Following this, a physical interpretation of the parameter q will be established, connected to the finite heat capacity of the environment [28] and to possible fluctuations in the phase-space dimensionality. The latter is akin to the superstatistical approach [29,30,31,32,33]. A balance between the physical factors reducing and increasing the value of q may ensure the classical q = 1 case; in a general setting, however, this is not guaranteed. One is then tempted to restore additivity as far as possible, since classical thermodynamics is based on this property. The attempt is made by using a logarithm of the formal group of entropy composition instead of the original entropy, S → K(S), and deriving again the associated q_K parameter. Then q_K = 1 generates a differential equation for the K(S) function and, due to that, a new composition rule.
Finally, some examples will be discussed and families of K(S) forms will be established. The Boltzmann, Rényi, and Tsallis entropies are all special cases, and they represent physical extremes in terms of the heat-container capacity and the relative size of the superstatistical fluctuations.

2. Logarithm of the Formal Group

It is worth starting our mathematical considerations with the composition law of entropy, or of any other real-valued physical quantity, when contacting and combining two systems into a bigger, unified one. Abandoning additivity, which ensures the co-extensivity of entropy and other extensive quantities, also constructed as expectation values, further composition rules are considered in non-extensive statistical mechanics [12]. The Abe–Tsallis composition law,
S_{12} = S_1 + S_2 + (q-1)\, S_1 S_2,    (1)
is a particular case of a more general one, described by a two-variable function, x ⊕ y = h(x, y). In such composition rules the entropies S_i (i = 1, 2, or 12) can be any state functions, S(E, N, X_1, …), and the rules are assumed to be valid in general, both for equilibrium and non-equilibrium entropies. Here we do not address such questions; we just look for the consequences of adopting non-additive rules for the entropy. The value of the parameter q is usually in the open interval (0, 2), in measured cases very often close but not equal to q = 1. Formally, however, it can be any real number, even a negative one. Certainly, for q < 1 the entropies cannot be too large, in order to avoid a negative composite result. In thermodynamics we deal mostly with large systems; thus the requirement of associativity is natural:
h( h(x,y), z ) = h( x, h(y,z) ).    (2)
Having the third law in mind, on the other hand, zero entropy is a valid value, and its addition must be trivial:
h(x, 0) = x,    (3)
and similarly h(0, y) = y. These two requirements already frame the composition as a group operation; only the question of constructing inverse elements remains nontrivial.
Here the logarithm of the formal group is helpful. It can be shown that from the associativity, Equation (2), the existence of a monotonic, and hence invertible, mapping follows, which maps the general rule onto addition:
K( h(x,y) ) = K(x) + K(y).    (4)
The function K(z) is the formal logarithm; it can be constructed asymptotically from the rule h(x, y) as follows. Imagine we compose from x to x ⊕ y = h(x, y) by adding small amounts, Δy, a number of times, N, so that NΔy = y. Then, in a general step, one proceeds as
x_{n+1} = x_n \oplus \Delta y = h(x_n, \Delta y) = h(x_n, 0) + \Delta y \, \frac{\partial h}{\partial y}(x_n, 0) + \ldots    (5)
The index n in this chain is additive, while x and y are not. Seeking a continuous limit in the variable t = n/N between zero and one, we arrive at
\int_{x(0)}^{x(1)} \frac{dx}{\partial h/\partial y \, (x, 0)} = \int_0^1 y \, dt = y.    (6)
Here x(0) = x and x(1) = x ⊕ y. Denoting the primitive function of the above integrand by K(z), our result reads
K(x \oplus y) - K(x) = y.    (7)
This form breaks the symmetry between x and y. We remedy this problem by re-defining Δy = K(y)/N, i.e., taking steps by the additive quantity K(y). By this we obtain the K-additivity
K(x \oplus y) = K(x) + K(y).    (8)
This is the sought mapping to additivity; therefore, K(x) is called the formal logarithm. We note in passing that, due to the continuous limit in the above derivation, the new rule is only asymptotic:
\hat{h}(x,y) = K^{-1}\big( K(x) + K(y) \big)    (9)
does not always coincide with the starting rule, h(x, y). Such asymptotic rules, however, act as attractors among all composition rules. The result Equation (6) can also be obtained by taking the partial derivative of Equation (4) at y = 0, using h(x, 0) = x, and integrating. The asymptotic rule is thus a reconstruction of the composition rule from its first derivative at a vanishing second argument.
Here we mention examples. The Tsallis–Abe rule, h(x, y) = x + y + a x y with a = q − 1, leads to the formal logarithm K(x) = (1/a) ln(1 + a x) and does not change in the asymptotics: \hat{h}(x, y) = x + y + a x y. A more general rule, using a general function of the product of the composables, h(x, y) = x + y + f(x y), on the other hand leads back to the Tsallis–Abe asymptotic rule with a = f′(0). Triviality requires f(0) = 0, of course. The properties K(0) = 0 and K′(0) = 1 also hold.
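As a numerical side-check (not part of the original text; the parameter value a = 0.1 and the operands are arbitrary), the following Python sketch verifies that the formal logarithm of the Tsallis–Abe rule maps composition onto ordinary addition, and that composing many small pieces carrying K(y)/N each reproduces the rule, as in the derivation above:

```python
import math

A = 0.1  # illustrative non-additivity parameter, a = q - 1

def h(x, y, a=A):
    """Tsallis-Abe composition rule: x (+) y = x + y + a*x*y."""
    return x + y + a * x * y

def K(x, a=A):
    """Formal logarithm of the Tsallis-Abe rule: (1/a) ln(1 + a x)."""
    return math.log1p(a * x) / a

def Kinv(z, a=A):
    """Inverse of the formal logarithm: (e^{a z} - 1)/a."""
    return math.expm1(a * z) / a

x, y = 0.7, 1.3

# K-additivity: K(x (+) y) = K(x) + K(y)
lhs, rhs = K(h(x, y)), K(x) + K(y)
print(abs(lhs - rhs) < 1e-12)  # True

# asymptotic reconstruction: N small steps, each carrying K(y)/N
N = 1000
dz = K(y) / N
z = x
for _ in range(N):
    z = h(z, Kinv(dz))
print(abs(z - h(x, y)) < 1e-9)  # True
```

Because the Tsallis–Abe rule coincides with its own formal-group reconstruction, the stepwise composition lands on h(x, y) up to rounding.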
Lesser-known, more complex rules can also be investigated. For example, h ( x , y ) = ( x + y ) A ( x y ) + B ( x y ) results in a logarithm of a rational function for K ( x ) . Instead of listing more and more examples, however, let us close this section with a more general comment. Once we change the simple additivity to K-additivity, equivalent to the use of an associative composition rule, the entropy formula in terms of the probability also changes.
Considering an ensemble in the Gibbs sense, the state i is realized W_i times while altogether W = \sum_i W_i instances are investigated. The probability of being in state i approaches the ratio p_i = W_i/W. Since the individual contribution to the entropy would be −ln p_i in the classical approach, a composition of W such instances, from which the i-th is repeated W_i times, shall be constructed by K-additivity. The logarithm of the probability being additive for factorizing joint probabilities, a non-additive entropy can be constructed by the inverse function of the formal logarithm:
S_{\mathrm{tot,non\text{-}add}} = \sum_i W_i \, K^{-1}(-\ln p_i).    (10)
In this way the generalized entropy formula belongs to an ensemble average value (or expectation value):
\langle K^{-1}(S) \rangle = \sum_i p_i \, K^{-1}(-\ln p_i).    (11)
This formula may need a little more explanation. Since we replaced the original additivity assumption with K-additivity, the additive quantities, reflected in the ln ( 1 / p i ) formula which is additive for factorizing probabilities, must be a K-function of the non-additive ones. The above Equations (10) and (11) are for the non-additive quantities; therefore, the inverse function, K 1 is used on the additive log.
With the example of the Tsallis–Abe composition law we have K(S) = (1/a) ln(1 + a S) and K^{-1}(z) = (e^{a z} − 1)/a. This delivers the Tsallis entropy,
S_T = \langle K^{-1}(S) \rangle = \frac{1}{a} \sum_i \left( p_i^{1-a} - p_i \right),    (12)
with q = 1 − a, as the non-additive but expectation-value-like construction, with the corresponding Rényi entropy,
S_R = K(S_T) = \frac{1}{a} \ln \sum_i p_i^{1-a},    (13)
as the version additive for factorizing probabilities, but not being an expectation value (ensemble average).
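The identity S_R = K(S_T) can also be verified numerically; a minimal Python sketch with an arbitrary three-state distribution (illustrative values, not taken from the paper):

```python
import math

a = 0.25              # deformation parameter; q = 1 - a in the text's convention
p = [0.5, 0.3, 0.2]   # an arbitrary normalized probability distribution

S_T = sum(pi ** (1 - a) - pi for pi in p) / a       # Tsallis entropy
S_R = math.log(sum(pi ** (1 - a) for pi in p)) / a  # Renyi entropy

def K(s):
    """Formal logarithm of the Tsallis-Abe rule."""
    return math.log1p(a * s) / a

print(abs(S_R - K(S_T)) < 1e-12)  # True
```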
Consequently, equilibrium distributions, when maximizing the entropy or its monotonic function K(S), deliver corresponding solutions of
\frac{\partial S}{\partial p_i} = \alpha + \beta \epsilon_i,    (14)
while keeping an average energy fixed. Both for the Rényi and Tsallis q-entropy the resulting canonical distribution becomes a cut power law in the individual energy, ϵ i :
p_i = \frac{1}{Z} \left( 1 + a \, \frac{\epsilon_i}{T} \right)^{-1/a}.    (15)
In the a → 0 (q → 1) limit the Boltzmann–Gibbs exponential emerges. The factor Z ensures the normalization \sum_i p_i = 1.
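The limit can also be seen numerically; a short Python sketch (T and ϵ are arbitrary example values) shows the cut power law approaching the Boltzmann–Gibbs exponential as a shrinks:

```python
import math

def cut_power_law(eps, T, a):
    """Unnormalized canonical weight (1 + a*eps/T)^(-1/a)."""
    return (1.0 + a * eps / T) ** (-1.0 / a)

T, eps = 1.0, 2.0
boltzmann = math.exp(-eps / T)

# the deviation shrinks roughly linearly with a
gaps = [abs(cut_power_law(eps, T, a) - boltzmann) for a in (0.1, 0.01, 0.001)]
print(gaps[0] > gaps[1] > gaps[2])  # True
print(gaps[2] < 1e-3)               # True
```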

3. q Parameter in the Boltzmannian Approach

At first glance it seems arbitrary which composition rule, and consequently which formal logarithm, is to be used in our models. However, the parameters occurring in a modification of the entropy addition law need some connection to physical reality. In this section we first extend the familiar textbook derivation of the exponential canonical distribution by going a step further in the thermodynamical-limit expansion, and then compare the result with the cut power-law canonical distribution. This gives rise to a possible physical interpretation of the parameter q.
Following the classical argumentation, there is a factor in the probability for a subsystem having energy ϵ out of the total E, occurring as a ratio of the corresponding phase-space volumes:
\rho(\epsilon) = \frac{\Omega(E - \epsilon)}{\Omega(E)}.    (16)
Here the averaging is over parallel ensemble copies of the same system, allowing for microscopical fluctuations in parameters beyond the total energy E, like particle numbers, charges, etc. The occupied phase-space volume is connected to the entropy, Ω(E) = e^{S(E)}. Expanding the expression Equation (16) up to first order in ϵ/E, one arrives at the well-known canonical factor
\rho_1(\epsilon) = e^{-\epsilon S'(E)} = \frac{1}{Z} \, e^{-\epsilon/T}.    (17)
Here Z is a normalization factor ensuring \int_0^E \rho(\epsilon)\, d\epsilon = 1. The temperature is interpreted via 1/T = S′(E). Traditionally, the information about the environment (heat bath) is compressed into this single parameter.
Now we look at the consequences of going one step further, i.e., performing an O(ϵ²) expansion:
\rho_2(\epsilon) = e^{-\epsilon S'(E) + \epsilon^2 S''(E)/2} \approx 1 - \epsilon S'(E) + \frac{\epsilon^2}{2} \left( S''(E) + S'(E)^2 \right).    (18)
This result is to be compared with the canonical distribution following from the Rényi and Tsallis entropy, the Tsallis–Pareto distribution [12,34], to the same order:
\left( 1 + (q-1) \frac{\epsilon}{T} \right)^{-\frac{1}{q-1}} \approx 1 - \frac{\epsilon}{T} + \frac{q}{2} \, \frac{\epsilon^2}{T^2}.    (19)
Term by term comparison between Equations (18) and (19) interprets the parameters T and q in the Tsallis–Pareto distribution as being
\frac{1}{T} = S'(E), \qquad \frac{q}{T^2} = S''(E) + S'(E)^2.    (20)
Denoting the fluctuating quantity S′(E) = β, one uses its variance in the above interpretation of q, besides the derivative of the temperature, dT/dE = 1/C, with C being the total capacity of the heat bath, in the formula S''(E) = \frac{d}{dE}\frac{1}{T} = -\frac{1}{CT^2}:
q = 1 - \frac{1}{C} + T^2 \Delta\beta^2.    (21)
This result allows for q values both smaller and larger than one, q = 1 remaining a special choice. Indeed, textbooks [35] suggest that q = 1 is the only possible choice and then conclude that T\Delta\beta = 1/\sqrt{C}, arguing for the "one over square root" law of energy fluctuations. This argumentation is, however, misleading: the physical situation and the size of the heat bath actually present must determine the value of the parameter q.
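A minimal numerical sketch in the superstatistical spirit of [29,30,31,32,33] (an illustration with assumed parameters, not a derivation from the text): for an infinite heat bath, 1/C → 0, the result above reduces to q = 1 + T²Δβ², and averaging the Boltzmann factor over a Gamma-distributed β reproduces exactly such a Tsallis–Pareto weight:

```python
import math

# Assumed illustration: beta ~ Gamma(k, theta), infinite heat bath (1/C = 0),
# so q = 1 + T^2 * Var(beta) = 1 + 1/k.
k, theta = 4.0, 0.25
T = 1.0 / (k * theta)                 # <beta> = k*theta = 1/T
q = 1.0 + T ** 2 * (k * theta ** 2)   # = 1 + 1/k

def gamma_pdf(b):
    return b ** (k - 1) * math.exp(-b / theta) / (math.gamma(k) * theta ** k)

eps = 1.5
db, avg = 1e-3, 0.0
b = db / 2
while b < 40.0:                       # <exp(-beta*eps)> by midpoint quadrature
    avg += gamma_pdf(b) * math.exp(-b * eps) * db
    b += db

tp_weight = (1.0 + (q - 1.0) * eps / T) ** (-1.0 / (q - 1.0))
print(abs(avg - tp_weight) < 1e-4)  # True
```

The agreement is exact analytically: the Laplace transform of the Gamma density is itself a cut power law.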

4. Optimal Restoration of Additivity

In the present section we shall optimize the choice of K(S), used instead of S in estimating the phase-space volumes above, in order to achieve q_K = 1. This requirement leads to a differential equation restricting the function K(S). Since, according to the Tsallis–Abe composition law, q = 1 is the additive case, we call seeking q_K = 1 a restoration of additivity. This is the best result possible, keeping in mind that the Tsallis–Pareto distribution is also an approximation in the case of finite heat baths, although improved by one term beyond the traditional Boltzmann–Gibbs exponential.
The canonical statistical factor in this case transmutes to
\rho_K = e^{K(S(E-\epsilon)) - K(S(E))}.    (22)
Expanding up to O(ϵ²) terms, as in the previous section, with the assumption that the K(S) function is universal, i.e., independent of the energy stored in the heat bath, we obtain
\rho_K \approx 1 - \epsilon \, S' K' + \frac{\epsilon^2}{2} \left[ S'' K' + S'^2 \left( K'^2 + K'' \right) \right] + \ldots    (23)
Comparing this with a Tsallis–Pareto distribution in the same approximation, we obtain another temperature and a different variance parameter, T_K and q_K:
\frac{1}{T_K} = S' K', \qquad q_K = S'' T_K^2 K' + S'^2 T_K^2 K'^2 \left( 1 + \frac{K''}{K'^2} \right).    (24)
The requirement q_K = 1 singles out a formal logarithm K(S) for the entropy composition rule which optimizes additivity up to the subleading order, O(ϵ²), in general. This results in a simple, solvable differential equation for F = 1/K′:
q_K = -\frac{F}{C} + \left( 1 + T^2 \Delta\beta^2 \right) \left( 1 - F' \right) = 1.    (25)
Here we again used the facts that S′ = 1/T and S″ = −1/(CT²). Ordering this equation for F, one obtains
F' + \frac{1/C}{1 + T^2 \Delta\beta^2} \, F = \frac{T^2 \Delta\beta^2}{1 + T^2 \Delta\beta^2}.    (26)
Such an equation can be solved by quadrature even for complicated functions of the entropy.
The simplest physical system is an ideal gas: in this case, the heat capacity is independent of the total entropy, so 1 / C is a constant. On the other hand, we may also assume that the relative variance in temperature, comprised in the term T Δ β = Δ β / β , is also a constant with respect to the total entropy. In this simplest case, it is straightforward to obtain the optimal formal logarithm, K ( S ) from Equation (26).
Here we present the solution, which contains two parameters:
K(S) = \frac{1}{\mu} \ln \left( 1 + \frac{\mu}{\lambda} \left( e^{\lambda S} - 1 \right) \right).    (27)
This ansatz is a "there and back" construction of the form K(S) = h_μ(h_λ^{-1}(S)), with h_μ(x) = \frac{1}{\mu}\ln(1 + \mu x) being the formal logarithm associated with the Tsallis–Abe composition rule. The reciprocal of the first derivative of K(S) is obtained as
F(S) = \frac{1}{K'(S)} = \frac{\mu}{\lambda} + \left( 1 - \frac{\mu}{\lambda} \right) e^{-\lambda S},    (28)
satisfying the F(0) = 1 condition. Substituting this function and its first derivative into Equation (26), we conclude that F′ + λF = μ, with
\mu = \frac{T^2 \Delta\beta^2}{1 + T^2 \Delta\beta^2}, \qquad \lambda = \frac{1/C}{1 + T^2 \Delta\beta^2}.    (29)
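The solution can be cross-checked numerically; a Python sketch with arbitrary μ and λ values verifies F = 1/K′ and the differential equation F′ + λF = μ by finite differences:

```python
import math

mu, lam = 0.03, 0.08   # illustrative parameter values

def K(S):
    """Two-parameter formal logarithm (1/mu) ln(1 + (mu/lam)(e^{lam S} - 1))."""
    return math.log1p((mu / lam) * math.expm1(lam * S)) / mu

def F(S):
    """Reciprocal of K'(S): mu/lam + (1 - mu/lam) e^{-lam S}."""
    return mu / lam + (1.0 - mu / lam) * math.exp(-lam * S)

S, dS = 1.7, 1e-6
Kp = (K(S + dS) - K(S - dS)) / (2 * dS)   # numerical K'
Fp = (F(S + dS) - F(S - dS)) / (2 * dS)   # numerical F'

print(abs(F(S) - 1.0 / Kp) < 1e-6)        # F = 1/K'
print(abs(Fp + lam * F(S) - mu) < 1e-8)   # F' + lam*F = mu
print(abs(F(0.0) - 1.0) < 1e-15)          # boundary condition F(0) = 1
```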
It is interesting to check some limits of this expression. For μ ≪ λ, or in the μ = 0 case, corresponding to zero fluctuations of the thermodynamical temperature, one obtains
K(S) = \frac{1}{\lambda} \left( e^{\lambda S} - 1 \right) = C \left( e^{S/C} - 1 \right).    (30)
This generates a non-additive entropy formula
\langle K^{-1}(S) \rangle = C \sum_i p_i \ln \left( 1 - \frac{1}{C} \ln p_i \right).    (31)
The corresponding canonical distribution is a complicated expression involving Lambert’s function.
In the other extreme, λ = 0 , meaning the presence of an infinite heat capacity (ideal) heat bath, we arrive at
K(S) = \frac{1}{\mu} \ln (1 + \mu S).    (32)
In this case the non-additive entropy formula delivers a Tsallis entropy, cf. Equations (11) and (12), and the canonical distribution is a Tsallis–Pareto distribution.
It is most intriguing that for the choice μ = λ, i.e., stating that the temperature fluctuations exactly follow the inverse square root law, T\Delta\beta = 1/\sqrt{C}, and therefore μ = λ = 1/(C + 1), always the Boltzmann–Gibbs formula emerges:
K(S) = S,    (33)
and S itself is additive.
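The limiting behaviour of the two-parameter formal logarithm can be collected into one numerical check; a Python sketch with illustrative parameter values:

```python
import math

def K(S, mu, lam):
    """Two-parameter formal logarithm with its analytic limiting branches."""
    if mu == 0.0:                         # zero temperature fluctuations
        return math.expm1(lam * S) / lam  # = C (e^{S/C} - 1) with lam = 1/C
    if lam == 0.0:                        # infinite heat bath
        return math.log1p(mu * S) / mu    # Tsallis-Abe formal logarithm
    return math.log1p((mu / lam) * math.expm1(lam * S)) / mu

S = 2.0
print(abs(K(S, 1e-9, 0.1) - K(S, 0.0, 0.1)) < 1e-6)  # mu -> 0 branch
print(abs(K(S, 0.1, 1e-9) - K(S, 0.1, 0.0)) < 1e-6)  # lam -> 0 branch
print(abs(K(S, 0.1, 0.1) - S) < 1e-12)               # mu = lam: K(S) = S
```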
One realizes that the balance between ensemble fluctuations and the finiteness of the heat bath determines which entropy formula is the optimally additive one. It is convenient to use the formal logarithm K(S) instead of S as an additive quantity. Still, the traditional balance derived from q = 1 (λ = μ) is not always given in physical situations. Some might find it strange that the parameter q depends on the heat capacity of the controlling reservoir; here, however, the relation is with the same system's heat capacity. Analogously, a much simpler correspondence holds for a fixed volume of photon gas, where C = 3S, since the equation of state is S ∝ E^{3/4} V^{1/4}, stemming from E/V ∝ T^4 and S/V ∝ T^3.

5. Fluctuations in Phase Space Dimension

One of the most prominent cases when fluctuations are "external", occurring due to the way of collecting data and not stemming from the finiteness of the heat bath, is the study of single-particle energy spectra in high-energy collisions. Hadronization makes n particles, a number differing event by event, while the total energy shared by them is approximately constant. This situation is the opposite of energy fluctuations with a fixed number of particles (atoms).
In this case the phase space to be filled by individual energies has a fluctuating dimension, while the total energy determining the microcanonical hypersurface is fixed. Then, depending on the actual probability P_n of making n hadrons in a single collision event, the summed distribution of single-particle energy, frequently measured via the transverse momentum for very energetic particles, will differ from the traditionally expected exponential or Gaussian. In order to dwell on this problem, let us first review how the microcanonical phase space is calculated at a given total energy, E, for n particles moving in some spatial dimensions.
Phase space is over momenta. Individual energies are functions of momenta according to the corresponding dispersion relation. A number of such relations look like a power of the absolute value, so they can be comprised into an L p -norm:
\left( \sum_{i=1}^{n} |p_i|^p \right)^{1/p} \le R(E),    (34)
with p_i the individual momentum components, spanning altogether n dimensions in phase space, and E the total energy. The function R(E) also reflects the dispersion relation between energy and momenta. For a one-dimensional jet, E = |p|; here simply R(E) = E, and the L^1 norm is to be used. For nonrelativistic ideal gases, E = |p|^2/2m; therefore R(E) = \sqrt{2mE}, and the L^2 norm is used.
For extreme relativistic particles p = 1, and R(E) = E; the formula measures the volume satisfying
\sum_{i=1}^{n} |p_i| \le E.    (35)
The general formula reads as
\Omega_n^{(p)}(R) = \frac{\Gamma(1/p + 1)^n}{\Gamma(n/p + 1)} \, (2R)^n.    (36)
Dirichlet originally obtained this formula in a French publication [36]. More contemporary popularizations are due to Smith and Vamanamurthy [37] from 1989 and to Xianfu Wang [38] from 2005. Wikipedia also has an entry on this formula [39], and a recursive proof of a few lines can be obtained from [40]. The microcanonically constrained hypersurface is the derivative of the above volume formula with respect to the total energy,
G_n^{(p)}(E) = \frac{d}{dE} \, \Omega_n^{(p)}(R(E)).    (37)
Meanwhile the geometric surface is the derivative with respect to R.
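Dirichlet's volume formula is easy to validate against familiar special cases (the Euclidean ball for p = 2 and the cross-polytope for p = 1); a short Python sketch with arbitrary n and R:

```python
import math

def omega(n, p, R):
    """Dirichlet's volume of the n-dimensional L^p ball of radius R."""
    return math.gamma(1 / p + 1) ** n / math.gamma(n / p + 1) * (2 * R) ** n

n, R = 5, 1.3
# p = 2: the Euclidean ball, pi^{n/2} R^n / Gamma(n/2 + 1)
euclid = math.pi ** (n / 2) * R ** n / math.gamma(n / 2 + 1)
# p = 1: the cross-polytope ("diamond"), (2R)^n / n!
cross = (2 * R) ** n / math.factorial(n)

print(abs(omega(n, 2, R) - euclid) < 1e-9)  # True
print(abs(omega(n, 1, R) - cross) < 1e-9)   # True
```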
We define a ratio of volumes, and consider the p = 1 case, related to relativistic particles in a 1-dimensional jet:
r_n^{(1)} = \frac{\Omega_1^{(1)}(\epsilon) \, \Omega_{n-1}^{(1)}(E - \epsilon)}{\Omega_n^{(1)}(E)} = n \, \frac{\epsilon}{E} \left( 1 - \frac{\epsilon}{E} \right)^{n-1}.    (38)
For normalization we use the pure environmental factor, the above ratio without the single particle factor:
\rho_n^{(1)} = \frac{\Omega_{n-1}^{(1)}(E - \epsilon)}{\Omega_n^{(1)}(E)} = \frac{n}{2E} \left( 1 - \frac{\epsilon}{E} \right)^{n-1}.    (39)
This ρ_n^{(1)}(ϵ, E) is normalized by an integral over the single-particle phase space. This means an integral over p between −E and +E, with ϵ = |p|:
\int_{-E}^{+E} \rho_n^{(1)} \, dp = \frac{n}{2E} \int_{-E}^{+E} \left( 1 - \frac{|p|}{E} \right)^{n-1} dp = 1.    (40)
Due to the absolute value |p|, this integral is twice the one taken between 0 and E, whence the factor 2 in the denominator cancels.
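The normalization over the single-particle momentum can be confirmed by direct quadrature; a Python sketch with arbitrary n and E:

```python
def rho(p, n, E):
    """Environmental phase-space ratio (n/2E)(1 - |p|/E)^{n-1} as a density over p."""
    return (n / (2.0 * E)) * (1.0 - abs(p) / E) ** (n - 1)

n, E = 7, 10.0
dp = 1e-4
total, p = 0.0, -E + dp / 2
while p < E:                  # midpoint quadrature over [-E, +E]
    total += rho(p, n, E) * dp
    p += dp

print(abs(total - 1.0) < 1e-6)  # True
```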
Once its integral is normalized to 1, so is the mixture
\rho^{(1)}(\epsilon; E) = \sum_{n=0}^{\infty} P_n \, \rho_n^{(1)}(\epsilon)    (41)
normalized to 1, provided that the multiplicity distribution is also normalized, \sum_n P_n = 1.
Finally, note that the ratio of volumes and energy shells in the 1-dimensional, relativistic case are simply related:
G_n^{(p)}(E) = \frac{d}{dE} \, \Omega_n^{(p)}(R(E)) = \Omega_n^{(p)}(R(E)) \, \frac{d}{dE} \ln \left( R^n \right),    (42)
which in the p = 1 (relativistic) L^1-norm case, due to R(E) = E, delivers
G_n^{(1)}(E) = \frac{n}{E} \, \Omega_n^{(1)}(E).    (43)
So we have G_1^{(1)}(\epsilon) = \frac{1}{\epsilon} \Omega_1^{(1)}(\epsilon) = 2 and
g_n^{(1)}(\epsilon; E) = \frac{G_1^{(1)}(\epsilon) \, G_{n-1}^{(1)}(E - \epsilon)}{G_n^{(1)}(E)} = \frac{n-1}{E} \left( 1 - \frac{\epsilon}{E} \right)^{n-2}.    (44)
Finally, we have
r_n^{(1)}(\epsilon; E) = \epsilon \, g_{n+1}^{(1)}(\epsilon; E).    (45)
In this case the microcanonical ratio g_{n+1}^{(1)} is normalized to 1 by an integral over ϵ between 0 and E.
For ideal gases, one considers S = ln c n + n ln E , delivering
\frac{1}{T} = \frac{\langle n \rangle}{E}, \qquad \frac{q}{T^2} = \frac{\langle n(n-1) \rangle}{E^2}.    (46)
Here, q is actually the measure of non-Poissonity,
q = 1 - \frac{1}{\langle n \rangle} + \frac{\Delta n^2}{\langle n \rangle^2}.    (47)
For the negative binomial distribution (NBD), q = 1 + 1/(k + 1); for the Poissonian, exactly q = 1. In hadronization statistics the Tsallis–Pareto distribution extracted from transverse-momentum spectra and the event-by-event multiplicity fluctuations go hand in hand [41,42,43]. Such distributions may also be a consequence of dynamical random processes, as investigated recently in the framework of the Local Growth Global Reset (LGGR) model on the master-equation level [44], or earlier in the framework of the generalization of Boltzmann's kinetic approach and the related H-theorem [45,46,47].
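The moment formula for q can be checked against concrete multiplicity distributions. The sketch below uses the NBD parameterization P_n ∝ C(n+k, n) p^n (1−p)^{k+1}; that this is the convention matching q = 1 + 1/(k+1) is an assumption about notation, not stated in the text:

```python
import math

def q_from_pn(pn):
    """q = <n(n-1)>/<n>^2, equivalent to 1 - 1/<n> + (Delta n)^2/<n>^2."""
    mean = sum(n * p for n, p in enumerate(pn))
    fact2 = sum(n * (n - 1) * p for n, p in enumerate(pn))
    return fact2 / mean ** 2

# Poisson multiplicity: Delta n^2 = <n>, hence q = 1 exactly
m = 8.0
poisson = [math.exp(-m) * m ** n / math.factorial(n) for n in range(120)]
q_poisson = q_from_pn(poisson)

# NBD in the convention P_n = C(n+k, n) p^n (1-p)^{k+1}  (assumed convention)
k, prob = 5, 0.4
nbd = [math.comb(n + k, n) * prob ** n * (1 - prob) ** (k + 1) for n in range(400)]
q_nbd = q_from_pn(nbd)

print(abs(q_poisson - 1.0) < 1e-9)            # True
print(abs(q_nbd - (1 + 1 / (k + 1))) < 1e-9)  # True
```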

6. Conclusions

In conclusion, we investigated the physical background for applying non-additive entropy in three steps: (i) we reviewed associative composition rules and the derivation of their asymptotic version, valid in the thermodynamical limit; (ii) we optimized which formal logarithm of the entropy, K(S), is to be used in phase-space occupation probability arguments, including but not restricted to the case of the Tsallis entropy; and finally, (iii) we reviewed phase-space ratios in high-energy jets as an application of superstatistical fluctuations in the dimensionality of phase-space volumes. The coupling of these three aspects in a particular chain concludes that even ideal gases in a finite heat-bath environment and away from thermal equilibrium can show a certain ambiguity, best removed by using non-additive entropy formulas.
To show an example, a certain ambiguity in estimating the heat capacity from maximizing the mutual entropy between the observed subsystem and the heat bath is discussed in detail for ideal gases in Appendix A. We demonstrate in Appendix B that using K(S) instead of S indeed clears the mismatch between statistical informatics (postulating a mutual entropy maximum in equilibrium) and thermodynamics, obtaining the heat capacity of a subsystem independently of the heat bath, i.e., using the Universal Thermostat Independence (UTI) principle.

Funding

This research was funded by NKFIH grant number K123815.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. Ideal Gas with Finite Heat-Bath

First, we describe how a finite heat capacity heat bath influences the heat capacity of an observed subsystem. The effect stems from maximizing the mutual entropy instead of the subsystem’s entropy alone.
We consider ideal gases, occupying phase-space volumes according to the N-ball in the L^p-norm picture. N is the number of degrees of freedom, or particles, while the dimensionality factorizes. For one-dimensional extreme relativistic particles the radius is R(E) = E, and one uses the diamond-shaped L^1 norm. For traditional, nonrelativistic particles the radius is R(E) = \sqrt{2mE}, and one uses the spherical L^2 norm.
In both cases the entropy is given as
S = \ln a(N) + kN \ln E.    (A1)
We have k = 1 for one-dimensional jets and k = 3/2 for traditional ideal gases in three dimensions. The first derivative with respect to the energy defines the β = 1/T variable:
\beta \equiv S'(E) = \frac{kN}{E} = \frac{1}{T}.    (A2)
The second derivative defines implicitly the heat capacity:
C \equiv \left( \frac{dQ}{dT} \right)_V = \frac{dE}{dT} = \frac{dE}{d(1/\beta)} = kN,    (A3)
with
S''(E) = -\frac{1}{CT^2} = -\frac{\beta^2}{kN}.    (A4)
For a subsystem connected with another system (which may be called a reservoir if large enough), not the individual but the mutual entropy maximum describes the most probable energy of the subsystem. We have I_{12} = S(E_1, N_1) + S(E_2, N_2) − S(E_1 + E_2, N_1 + N_2); fixing E_2 and all the N-s, the first derivative is
\frac{\partial I_{12}}{\partial E_1} = \frac{kN_1}{E_1} - \frac{k(N_1+N_2)}{E_1+E_2},    (A5)
and the second derivative with respect to E_1 is given by
\frac{\partial^2 I_{12}}{\partial E_1^2} = -\frac{kN_1}{E_1^2} + \frac{k(N_1+N_2)}{(E_1+E_2)^2}.    (A6)
Due to ρ(E_1) ∝ e^{I_{12}}, the most probable energy value for the subsystem is obtained from the vanishing of the first derivative of I_{12}. This occurs at β_1 = β_{12}, according to the definition of β and Equation (A5). At this point, from the second derivative, an effective, intercorrelated heat capacity for the subsystem appears. We have
\left. \frac{\partial^2 I_{12}}{\partial E_1^2} \right|_{\max} = -\frac{\beta^2}{k} \, \frac{N_2}{N_1(N_1+N_2)},    (A7)
and from that the effective heat capacity
C_{1,\, I_{12}=\max} = \frac{C_1}{C_2} \left( C_1 + C_2 \right) = C_1 + \frac{C_1^2}{C_2}.    (A8)
Exactly the correction C_1^2/C_2 is the effect of a finite heat-bath reservoir; it vanishes in the thermodynamical limit, C_2 → ∞.
A similar result can be derived when fixing the total energy, E = E 1 + E 2 , and then looking for the most probable subsystem energy, E 1 . We get
C_{1,\, I_{12}=\max} = C_1 - \frac{C_1^2}{C}.    (A9)
In this case the total system has fixed parameters (energy, heat capacity). Again, in the thermodynamical limit the correction vanishes.
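The effective heat capacity C_1 + C_1²/C_2 can be reproduced numerically from the mutual entropy itself; a Python sketch with arbitrary ideal-gas parameters:

```python
import math

k, N1, N2 = 1.5, 10.0, 40.0   # illustrative ideal-gas parameters
E2 = 60.0                     # fixed reservoir energy

def I12(E1):
    """Mutual entropy I_12 up to E1-independent constants."""
    return k * N1 * math.log(E1) + k * N2 * math.log(E2) \
         - k * (N1 + N2) * math.log(E1 + E2)

E1 = N1 * E2 / N2             # most probable point: beta_1 = beta_12
beta = k * N1 / E1

d = 1e-3
second = (I12(E1 + d) - 2 * I12(E1) + I12(E1 - d)) / d ** 2
C_eff = -beta ** 2 / second   # from  d^2 I_12/dE_1^2 |max = -beta^2 / C_eff

C1, C2 = k * N1, k * N2
print(abs(C_eff - (C1 + C1 ** 2 / C2)) < 1e-3)  # True
```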

Appendix B. Universal Thermostat Independence

The above corrections, positive in one case and negative in the other, make the concept of heat capacity ambiguous. In order to avoid this discrepancy we can follow two strategies: (i) ignore the problem and restrict ourselves to the infinite-capacity reservoir limit, or (ii) compensate for this leading effect near the maximal probability. Choosing the second strategy is equivalent to admitting that the original additive concept of mutual entropy has to be generalized.
Following the second option, we consider the maximum of the mutual K-entropy instead of the original entropy, and the additive quantity associated to a possibly non-additive entropy, K ( S ) , is constructed in a way that the corresponding heat capacity appears infinite.
This is easy to achieve by choosing K(S) accordingly. All the usual quantities, temperature and heat capacity, acquire a K index by doing so, and we obtain
\frac{1}{T_K} = \frac{\partial K(S)}{\partial E} = K'(S) \, \frac{\partial S}{\partial E} = \frac{K'}{T}, \qquad -\frac{1}{C_K T_K^2} = \frac{\partial}{\partial E} \frac{1}{T_K} = \frac{1}{T^2} \left( K'' - \frac{(K')^2}{C} \right).    (A10)
Solving it for the K-capacity of heat, we have
\frac{1}{C_K} = -\frac{K''}{(K')^2} + \frac{1}{C}.    (A11)
The universal thermostat independence requires that 1/C_K = 0, which is a second-order differential equation for K(S), quoted as the UTI principle in [28,48]:
\frac{K''(S)}{(K'(S))^2} = \frac{1}{C}.    (A12)
With the boundary conditions K(0) = 0 (for the sake of keeping the third law of thermodynamics) and K′(0) = 1 (keeping the Boltzmann constant at its original value k_B = 1), we arrive at the solution for a constant C, independent of S, as being
K(S) = -C \ln \left( 1 - S/C \right).    (A13)
Applying now K-additivity with this formula, we require zero mutual K-entropy, reflecting total independence of the energy states of reservoir and subsystem,
K(S_1) + K(S_2) - K(S_{12}) = 0,    (A14)
we derive the Tsallis–Abe composition law. From Equations (A13) and (A14), one writes
-C \ln \left( 1 - S_1/C \right) - C \ln \left( 1 - S_2/C \right) = -C \ln \left( 1 - S_{12}/C \right).    (A15)
From here, a product rule follows,
\left( 1 - \frac{S_1}{C} \right) \left( 1 - \frac{S_2}{C} \right) = 1 - \frac{S_{12}}{C},    (A16)
which in turn simplifies to
S_{12} = S_1 + S_2 - \frac{1}{C} \, S_1 S_2.    (A17)
Comparing this with the Tsallis–Abe rule, Equation (1), one obtains q = 1 − 1/C. This is already part of the result presented in Equation (21). For the total result, the previously known temperature fluctuations can also be accounted for, which may under- or overcompensate this effect.
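As a final numerical check (illustrative values of C, S_1, S_2), K-additivity with the UTI solution K(S) = −C ln(1 − S/C) indeed composes entropies by the Tsallis–Abe rule with q = 1 − 1/C:

```python
import math

C = 25.0   # illustrative heat capacity

def K(S):
    """UTI-optimal formal logarithm: -C ln(1 - S/C)."""
    return -C * math.log(1.0 - S / C)

def Kinv(z):
    return C * (1.0 - math.exp(-z / C))

def compose(S1, S2):
    """K-additive composition: S12 = K^{-1}(K(S1) + K(S2))."""
    return Kinv(K(S1) + K(S2))

S1, S2 = 2.0, 3.0
q = 1.0 - 1.0 / C
print(abs(compose(S1, S2) - (S1 + S2 + (q - 1.0) * S1 * S2)) < 1e-12)  # True
```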

References

  1. Clausius, R. Théorie Mécanique de la Chaleur; Librairie Scientifique, Industrielle et Agricole, Série B, No. 2; Eugène Lacroix: Paris, France, 1868.
  2. Boltzmann, L. Weitere Studien über das Wärmegleichgewicht unter Gasmolekülen. Wien. Ber. 1872, 66, 275.
  3. Boltzmann, L. Über die Beziehung eines allgemeinen mechanischen Satzes zum zweiten Hauptsatze der Wärmetheorie. Sitzungsberichte K. Akad. Wiss. Wien 1877, 75, 67.
  4. Gibbs, J.W. Elementary Principles in Statistical Mechanics; C. Scribner's Sons: New York, NY, USA, 1902.
  5. Shannon, C. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423; ibid. 623–656.
  6. Rényi, A. On measures of information and entropy. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 20 June–30 July 1960; Volume 1, pp. 547–561.
  7. Havrda, J.; Charvát, F. Quantification Method of Classification Processes. Concept of Structural Entropy. Kybernetika 1967, 3, 30–35.
  8. Daróczy, Z. Generalized Information Functions. Inf. Control 1970, 16, 36.
  9. Sharma, B.D.; Mittal, D.P. New nonadditive measures of inaccuracy. J. Math. Sci. 1975, 10, 122.
  10. Nielsen, F.; Nock, R. A closed-form expression for the Sharma–Mittal entropy of exponential families. J. Phys. A 2011, 45, 032003.
  11. Tsallis, C. Possible generalization of Boltzmann–Gibbs statistics. J. Stat. Phys. 1988, 52, 479.
  12. Tsallis, C. Introduction to Non-Extensive Statistical Mechanics: Approaching a Complex World; Springer Science+Business Media LLC: New York, NY, USA, 2009.
  13. Tsallis, C. Nonadditive entropy: The concept and its use. EPJ A 2009, 40, 257–266.
  14. Hanel, R.; Thurner, S. A comprehensive classification of complex statistical systems and an axiomatic derivation of their entropy and distribution functions. EPL 2011, 93, 20006.
  15. Hanel, R.; Thurner, S. When do generalized entropies apply? How phase space volume determines entropy. EPL 2011, 96, 50003.
  16. Landsberg, P.T. Is equilibrium always an entropy maximum? J. Stat. Phys. 1984, 35, 159.
  17. Tisza, L. Generalized Thermodynamics; MIT Press: Cambridge, MA, USA, 1961.
  18. Maddox, J. When entropy does not seem extensive. Nature 1993, 365, 103.
  19. Zapirov, R.G. Novije Meri i Metodi v Teorii Informacii (New Measures and Methods in Information Theory); Kazan State Tech University: Kazan, Russia, 2005; ISBN 5-7579-0815-7.
  20. Abe, S. Axioms and uniqueness theorem for Tsallis entropies. Phys. Lett. A 2000, 271, 74.
  21. Santos, R.J.V. Generalization of Shannon's theorem for Tsallis entropy. J. Math. Phys. 1997, 38, 4104.
  22. Jensen, H.J.; Tempesta, P. Group Entropies: From Phase Space Geometry to Entropy Functionals via Group Theory. Entropy 2018, 20, 804.
  23. Biro, T.S. Abstract composition rule for relativistic kinetic energy in the thermodynamical limit. EPL 2008, 84, 56003.
  24. Biro, T.S. Is There a Temperature? Conceptual Challenges at High Energy, Acceleration and Complexity; Springer Series on Fundamental Theories of Physics 1014; Springer Science+Business Media LLC: New York, NY, USA, 2011.
  25. Biro, T.S.; Jakovac, A. Power-law tails from multiplicative noise. Phys. Rev. Lett. 2005, 94, 132302.
  26. Biro, T.S.; Purcsel, G. Non-extensive Boltzmann-equation and hadronization. Phys. Rev. Lett. 2005, 95, 062302.
  27. Biro, T.S.; Purcsel, G.; Urmossy, K. Non-extensive approach to quark matter. EPJ A 2009, 40, 325–340.
  28. Biro, T.S.; Van, P.; Barnafoldi, G.G.; Urmossy, K. Statistical Power Law due to Reservoir Fluctuations and the Universal Thermostat Independence Principle. Entropy 2014, 16, 6497–6514.
  29. Beck, C.; Cohen, E.G.D. Superstatistics. Physica A 2003, 322, 267–275.
  30. Cohen, E.G.D. Superstatistics. Physica D 2004, 193, 35.
  31. Beck, C. Recent developments in superstatistics. Braz. J. Phys. 2009, 38, 357.
  32. Beck, C. Superstatistics in high-energy physics: Application to cosmic ray energy spectra and e+e- annihilation. EPJ A 2009, 40, 267–273.
  33. Beck, C. Dynamical foundations of nonextensive statistical mechanics. Phys. Rev. Lett. 2001, 87, 180601.
  34. Available online: https://en.wikipedia.org/wiki/Pareto_distribution#Generalized_Pareto_distributions (accessed on 26 October 2022).
  35. Landau, L.D.; Lifshitz, E.M. Course of Theoretical Physics; Statistical Physics; Elsevier Science: Amsterdam, The Netherlands, 2013; Volume 5, ISBN 9781483103372.
  36. Dirichlet, L.P.G. Sur une nouvelle méthode pour la détermination des intégrales multiples. J. Math. Pures Appl. 1839, 4, 164–168.
  37. Smith, D.J.; Vamanamurthy, M.K. How Small Is a Unit Ball? Math. Mag. 1989, 62, 101–167.
  38. Wang, X. Volumes of Generalized Unit Balls. Math. Mag. 2005, 78, 390–395.
  39. Available online: https://en.wikipedia.org/wiki/Volume_of_an_n-ball (accessed on 26 October 2022).
  40. Available online: https://math.stackexchange.com/questions/301506/hypervolume-of-a-n-dimensional-ball-in-p-norm (accessed on 26 October 2022).
  41. Wilk, G.; Wlodarczyk, Z. On the interpretation of nonextensive parameter q in Tsallis statistics and Levy distributions. Phys. Rev. Lett. 2000, 84, 2770.
  42. Wilk, G. Fluctuations, correlations and non-extensivity. Braz. J. Phys. 2007, 37, 714.
  43. Wilk, G.; Wlodarczyk, Z. Power laws in elementary and heavy-ion collisions. A story of fluctuations and nonextensivity? EPJ A 2009, 40, 299–312.
  44. Biro, T.S.; Neda, Z. Unidirectional random growth with resetting. Physica A 2018, 499, 335–361.
  45. Kaniadakis, G. Non-linear kinetics underlying generalized statistics. Physica A 2001, 296, 405.
  46. Kaniadakis, G. H-theorem and generalized entropies within the framework of nonlinear kinetics. Phys. Lett. A 2001, 288, 283.
  47. Kaniadakis, G. Relativistic entropy and related Boltzmann kinetics. EPJ A 2009, 40, 275–287.
  48. Biró, T.S.; Barnaföldi, G.G.; Ván, P. New entropy formula with fluctuating reservoir. Phys. A 2015, 417, 215–220.