Article

Entropic Forms and Related Algebras

by
Antonio Maria Scarfone
Istituto Sistemi Complessi (ISC–CNR) c/o Dipartimento di Scienza Applicata e Tecnologia-Politecnico di Torino, Corso Duca degli Abruzzi 24, Torino, I-10129, Italy
Entropy 2013, 15(2), 624-649; https://doi.org/10.3390/e15020624
Submission received: 16 November 2012 / Revised: 16 January 2013 / Accepted: 30 January 2013 / Published: 7 February 2013

Abstract
Starting from a very general trace-form entropy, we introduce a pair of algebraic structures endowed with a generalized sum and a generalized product. These algebras form, respectively, two Abelian fields in the realm of the complex numbers, isomorphic to each other. We specialize our results to several entropic forms related to distributions recurrently observed in social, economic, biological and physical systems, including the stretched exponential, the power-law and the interpolating Bosons–Fermions distributions. Some potential applications in the study of complex systems are advanced.

1. Introduction

Complex systems have statistical properties that differ greatly from those of classical systems governed by the Boltzmann–Gibbs (BG) entropy [1,2,3,4,5,6]. Often, the probability distribution observed in complex systems deviates from the Gibbs exponential one, showing asymptotic behavior characterized by stretched exponential [7], power-law [8] or log-oscillating [9] tails. To take into account these phenomena, typically observed in social, economic, biological and physical systems [10,11,12,13,14,15], several entropic forms [16,17,18,19,20,21] have been introduced on physical grounds. They are obtained by replacing the logarithmic function appearing in the BG entropy with a generalized version. In this way, the corresponding equilibrium distribution, obtained by maximizing the entropic form under appropriate constraints, assumes a generalized Gibbs expression with a deformed exponential in place of the standard exponential.
In [18,22], within the framework of the κ-deformed statistical mechanics, the idea of a generalized sum and product was advanced with the purpose of extending some properties of the standard logarithm and exponential to the corresponding κ-deformed functions. Shortly afterwards, generalized algebras and the related calculus were proposed in the framework of the q-statistics based on the Tsallis entropy [23,24]. These algebras have found several applications, ranging from number theory [25,26,27], Laplace transforms [28], Fourier transforms [29,30,31], the central limit theorem [32,33], combinatorial analysis [34,35,36,37,38] and the Gauss law of errors [39,40,41]. On physical grounds, generalized algebras have been applied in statistical mechanics [42] and statistical field theory [43]. The concept of generalized algebras has been employed constructively in [44,45], jointly with the introduction of the most general trace-form entropy compatible with the first three Khinchin axioms. In particular, [45] discussed the usefulness of a deformed product in studying the extensivity of generalized entropies, a question that will be further analyzed in the present work. Finally, in [46,47], generalized algebras have been used to account for statistical correlations originating from the fractal structures emerging in the phase space of interacting systems.
A question related to generalized algebras derived in the framework of statistical mechanics concerns the distributive property between the deformed sum and the deformed product. In the formalism of κ-statistics, it has been shown in [22] that a distributive κ-sum and κ-product exist. These operations form an Abelian field in the domain of the real numbers. This means that the κ-sum is associative and commutative, and admits an opposite and a neutral element in the field of the real numbers. In the same way, the κ-product is associative and commutative, and its inverse and identity exist and belong to the real field.
Further, [18] proposed a second Abelian algebra formed by a κ-sum and a κ-product different from the former. This algebra is not closed on the real field since the opposite element of the κ-sum is always a complex number. Thus, it forms an Abelian field only in the realm of the complex numbers.
The situation appears more complicated in the framework of the q-statistics, where a q-sum and a q-product were introduced almost at the same time, but independently, in [23,24]. These operations form separately two Abelian groups in $\mathbb{R}$ and in $\mathbb{R}^+$, respectively, although the q-product is not distributive with respect to the q-sum. Until today, the question of the existence of a distributive algebra in the framework of q-statistics has remained open, as remarked in [48]. As will be shown, a possible solution to this problem is proposed in the following.
In this work we show that, starting from a well-defined complex function $g(z)$, a pair of Abelian fields defined in the complex plane and isomorphic to each other can be introduced. To make contact with statistical mechanics, we require that $g(z)$ is the analytical extension of a real and monotonic function $g(x)$, with $x \in U \supseteq (0,1)$, used in the definition of a generalized trace-form entropy. This is discussed in Section 2, where we present the formalism and the notation used in this work. In Section 3, we introduce a systematic method to derive the algebraic structures related to a given generator $g(z)$. In Section 4, we specialize our results to some relevant distributions emerging in the study of complex systems, like the stretched exponential distribution, several power-law distributions and the interpolating Bose–Fermi quantum distribution. Potential applications concerning the Gauss law of errors and the study of the phase-space volume in an interacting system are investigated in Section 5. Final comments are reported in Section 6, and the proofs of the theorems stated in this paper are presented in the appendices.

2. Preliminary

Let us introduce a real function $g(x): U \to V$, with $U \equiv (x_{\min},\,x_{\max}) \subseteq \mathbb{R}$ and $V \equiv (y_{\min},\,y_{\max}) \subseteq \mathbb{R}$, where $x_{\min} \leq 0$, $1 \leq x_{\max}$ and $y_{\min} \leq 0 \leq y_{\max}$. The function $g(x)$ is monotonically increasing in $U$:
$$ \frac{d g(x)}{d x} > 0\,, \qquad \forall\, x \in U \tag{1} $$
Without loss of generality, $g(x)$ can be normalized so that $g(1) = 0$. We require that any possible divergence of $g(x)$, as $x$ goes to zero, is mild enough to satisfy the condition
$$ \int_0^1 g(x)\,dx > -\infty \tag{2} $$
Equation (1) assures the existence of a function $f(x): V \to U$, the inverse of $g(x)$, with
$$ f\!\left(g(x)\right) = g\!\left(f(x)\right) = x \tag{3} $$
Clearly, Equation (1) induces the following conditions on $f(x)$:
$$ \frac{d f(x)}{d x} > 0 \tag{4} $$
with $f(0) = 1$, and
$$ \int_{y_{\min}}^{0} f(x)\,dx < +\infty \tag{5} $$
From $g(x)$ we introduce a function $\Lambda(x)$ defined as
$$ \Lambda(x) = \frac{1}{x}\int_0^x g(y)\,dy - \Lambda_0 \tag{6} $$
with $\Lambda_0 = \int_0^1 g(x)\,dx$, which is strictly increasing, normalized so that $\Lambda(1) = 0$, and fulfills the relation
$$ \frac{d}{dx}\left[x\,\Lambda(x)\right] = g(x) - \Lambda_0 \tag{7} $$
We define a generalized trace-form entropy according to
$$ S[p] = -\sum_{i=1}^{W} p_i\,\Lambda(p_i) \tag{8} $$
where $p = \{p_i \in [0,1]\}_{i=1,\dots,W}$ is a set of normalized probabilities.
By integrating Equation (7) on the interval $(0,\,p_i)$ and summing over the index $i$, we can rewrite the generalized entropy (8) in the form
$$ S[p] = -\sum_{i=1}^{W}\int_0^{p_i} g(x)\,dx + \Lambda_0\sum_{i=1}^{W} p_i \tag{9} $$
Accounting for the normalization of $p_i$, this expression coincides with the entropy-generating algorithm proposed in [49].
Equation (8) introduces a class of positive definite, symmetric, expandable, decisive, maximal, concave entropic forms. It includes the BG entropy as a particular case for $g(x) = \ln(x)$. Furthermore, as shown in [50], Equation (8) fulfills the Lesche stability condition, a fundamental property that should be satisfied by any well-established entropic function. Finally, the statistical mechanics theory based on these entropic forms satisfies the H-theorem when combined with the principle of microscopic reversibility [49].
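The construction above is easily checked numerically. The following sketch (illustrative only; the helper name make_entropy is not from the paper) builds $\Lambda(x)$ of Equation (6) and the entropy (8) from a given generator $g(x)$ by quadrature, and verifies that the choice $g(x) = \ln x$ reproduces the BG entropy $-\sum_i p_i\ln p_i$.

```python
import numpy as np
from scipy.integrate import quad

def make_entropy(g):
    """Build Lambda(x) of Equation (6) and S[p] of Equation (8) from a generator g."""
    Lambda0 = quad(g, 0.0, 1.0)[0]                       # Lambda_0 = int_0^1 g(x) dx
    def Lam(x):
        return quad(g, 0.0, x)[0] / x - Lambda0          # Lambda(x), Equation (6)
    def S(p):
        return -sum(pi * Lam(pi) for pi in p)            # S[p] = -sum_i p_i Lambda(p_i)
    return Lam, S

# BG case: g(x) = ln(x) should give S[p] = -sum_i p_i ln p_i
Lam, S = make_entropy(np.log)
p = np.array([0.1, 0.2, 0.3, 0.4])
print(S(p), -(p * np.log(p)).sum())   # the two values agree within quadrature error
```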
Within the expression (8), the search for the maximum of $S[p]$, constrained by $M+1$ linear relations
$$ \sum_{i=1}^{W}\phi_j(x_i)\,p_i = O_j \tag{10} $$
with $j = 0,1,\dots,M$, is realized through the following variational problem
$$ \frac{\delta}{\delta p_i}\left\{ S[p] - \sum_{j=0}^{M}\mu_j\sum_{i=1}^{W}\phi_j(x_i)\,p_i \right\} = 0 \tag{11} $$
Quite often, the constraints $\phi_j(x)$ are given by means of moments of a set of real numbers $x_i$ representing the possible outcomes of a certain physical observable $X$ occurring with probability $p_i$. For instance: $O_0 = 1$, for $\phi_0(x_i) = 1$, fixes the normalization; $O_1 \equiv \mu$, for $\phi_1(x_i) = x_i$, fixes the mean value of $X$; $O_2 \equiv \sigma^2$, for $\phi_2(x_i) = x_i^2$, fixes the variance of $X$, and so on.
Accounting for Equation (7), the result of the problem (11) reads
$$ p_i \equiv p(x_i) = f(\tilde{x}_i) \tag{12} $$
with
$$ \tilde{x}_i = \Lambda_0 - \sum_{j=0}^{M}\mu_j\,\phi_j(x_i) \tag{13} $$
Therefore, the distribution for the system described by the entropy (8) is Gibbs-like, where the function $f(x)$ plays the role of a generalized exponential. The BG distribution is recovered for $f(x) = \exp(x)$.
It is worth noting that generalized trace-form entropies and the related distributions obtained by means of a variational principle à la Jaynes have been discussed in [45,51,52]. In particular, [51,52] derived the most general trace-form entropy compatible with the first three Khinchin axioms, which turns out to be a sub-family of the most general class (9).
Finally, we remark that Equation (1)–Equation (5) assure the existence of a well-defined entropic form and of its distribution, obtained according to the maximum entropy principle. Nevertheless, these conditions are not strictly relevant to the definition of the algebraic structures discussed in the next section. Actually, some of these conditions can be relaxed if one is not interested in the entropic aspect of the problem but only in the link between the algebras and the pertinent distribution.

3. Generalized Algebras

In general, the function $g(x)$ and its inverse $f(x)$ can be extended analytically to the whole complex plane with the exception of a limited set of points $\{z_i\}$ corresponding to the branch points of $g(z)$. In this way, let us consider the complex function $g(z)$, with $z \in \mathbb{C}_0 \equiv \mathbb{C}-\{z_i\}$, and its inverse function $f(z)$, which is univocally defined in the Riemann cut plane $\mathbb{C}_0$. They assume real values when $z \in U$ (resp. $z \in V$).
Starting from $g(z)$ and $f(z)$, we define a generalized sum and product forming a commutative and associative algebra in $\mathbb{C}_0$. They are given by
$$ z \oplus z' = f\!\left(\ln\left[\exp g(z) + \exp g(z')\right]\right) \tag{14} $$
$$ z \otimes z' = f\!\left(g(z) + g(z')\right) \tag{15} $$
for any $z, z' \in \mathbb{C}_0$.
Theorem 1. The algebraic structure $\mathcal{A} \equiv \left(\mathbb{C}_0\times\mathbb{C}_0\to\mathbb{C}_0,\,\oplus,\,\otimes\right)$ forms an Abelian field in $\mathbb{C}_0$.
This means that the generalized sum ⊕ fulfills:
(1). Commutativity: $z \oplus z' = z' \oplus z$;
(2). Associativity: $z \oplus (z' \oplus z'') = (z \oplus z') \oplus z''$;
(3). Neutral element: $z \oplus O = O \oplus z = z$, where $O = \lim_{z\to-\infty} f(z)$;
(4). Opposite: $z \oplus [-z] = [-z] \oplus z = O$, where $[-z] = f\!\left(\ln\left(-\exp g(z)\right)\right)$;
the generalized product ⊗ fulfills:
(5). Commutativity: $z \otimes z' = z' \otimes z$;
(6). Associativity: $z \otimes (z' \otimes z'') = (z \otimes z') \otimes z''$;
(7). Identity element: $z \otimes 1 = 1 \otimes z = z$;
(8). Inverse: $z \otimes [1/z] = [1/z] \otimes z = 1$, where $[1/z] = f\!\left(-g(z)\right)$;
and the product ⊗ is distributive w.r.t. the sum ⊕:
(9). Distributivity: $z \otimes (z' \oplus z'') = (z \otimes z') \oplus (z \otimes z'')$.
(See Appendix A for a proof).
Remark that, in the domain of the real numbers, the sum (14) is surely defined only if $V \equiv \mathbb{R}$; the opposite is always in $\mathbb{C}$, whilst the inverse is in $\mathbb{R}$ only if $y_{\min} = -y_{\max}$. Thus, restricting to the real field, the structure $\left(U\times U\to U,\,\oplus\right)$ forms at least an Abelian semigroup.
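As a concrete illustration (a minimal sketch, not part of the paper; the helper make_algebra and the choice of test generator are mine), the operations (14)–(15) can be implemented for any pair $g$, $f$ and the field properties of Theorem 1 checked numerically. Here the generator (48) of Section 4.2 is used on real arguments larger than one, where all quantities stay real.

```python
import math

def make_algebra(g, f):
    """Deformed sum (14) and product (15) built from a generator g and its inverse f."""
    def osum(z1, z2):                       # z1 (+) z2 = f( ln( e^{g(z1)} + e^{g(z2)} ) )
        return f(math.log(math.exp(g(z1)) + math.exp(g(z2))))
    def oprod(z1, z2):                      # z1 (x) z2 = f( g(z1) + g(z2) )
        return f(g(z1) + g(z2))
    return osum, oprod

# test generator: g(x) = (x^{1-q} - 1)/a_q of Section 4.2, with 0 < q < 1
q = 0.5
a = (1.0 - q) / (2.0 - q)
g = lambda x: (x**(1.0 - q) - 1.0) / a
f = lambda y: (1.0 + a * y)**(1.0 / (1.0 - q))   # inverse of g

osum, oprod = make_algebra(g, f)
z1, z2, z3 = 1.3, 2.0, 4.5                        # arguments kept > 1 so everything is real

assert math.isclose(osum(z1, z2), osum(z2, z1))                              # commutativity
assert math.isclose(osum(z1, osum(z2, z3)), osum(osum(z1, z2), z3))          # associativity
assert math.isclose(oprod(z1, oprod(z2, z3)), oprod(oprod(z1, z2), z3))      # associativity
assert math.isclose(oprod(z1, osum(z2, z3)),
                    osum(oprod(z1, z2), oprod(z1, z3)))                      # distributivity
print("Theorem 1 properties verified numerically for the test generator.")
```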
The product ⊗ satisfies the relations
$$ f(z) \otimes f(z') = f(z + z') \tag{16} $$
$$ g(z \otimes z') = g(z) + g(z') \tag{17} $$
which reflect the well-known properties of the standard exponential and logarithm: $\exp(z)\cdot\exp(z') = \exp(z+z')$ and $\ln(z\cdot z') = \ln(z)+\ln(z')$.
In the same way, posing $G(z) = \exp g(z)$ and $F(z) = f\!\left(\ln z\right)$, with $G\!\left(F(z)\right) = F\!\left(G(z)\right) = z$, the sum ⊕ fulfills the relations
$$ F(z) \oplus F(z') = F(z + z') \tag{18} $$
$$ G(z \oplus z') = G(z) + G(z') \tag{19} $$
as dual of Equation (16) and Equation (17), which now read
$$ F(z) \otimes F(z') = F(z \cdot z') \tag{20} $$
$$ G(z \otimes z') = G(z) \cdot G(z') \tag{21} $$
A second pair of operations forming a commutative and associative algebra different from the previous one can be introduced by
$$ z \,\tilde{\oplus}\, z' = g\!\left(f(z)\cdot f(z')\right) \tag{22} $$
$$ z \,\tilde{\otimes}\, z' = g\!\left(\exp\left[\ln f(z)\cdot\ln f(z')\right]\right) \tag{23} $$
for any $z, z' \in \mathbb{C}_0$.
Theorem 2. The algebraic structure $\tilde{\mathcal{A}} \equiv \left(\mathbb{C}_0\times\mathbb{C}_0\to\mathbb{C}_0,\,\tilde{\oplus},\,\tilde{\otimes}\right)$ forms an Abelian field in $\mathbb{C}_0$.
This means that the generalized sum $\tilde{\oplus}$ fulfills:
(1). Commutativity: $z \,\tilde{\oplus}\, z' = z' \,\tilde{\oplus}\, z$;
(2). Associativity: $z \,\tilde{\oplus}\, (z' \,\tilde{\oplus}\, z'') = (z \,\tilde{\oplus}\, z') \,\tilde{\oplus}\, z''$;
(3). Neutral element: $z \,\tilde{\oplus}\, 0 = 0 \,\tilde{\oplus}\, z = z$;
(4). Opposite: $z \,\tilde{\oplus}\, [-z] = [-z] \,\tilde{\oplus}\, z = 0$, where $[-z] = g\!\left(1/f(z)\right)$;
the generalized product $\tilde{\otimes}$ fulfills:
(5). Commutativity: $z \,\tilde{\otimes}\, z' = z' \,\tilde{\otimes}\, z$;
(6). Associativity: $z \,\tilde{\otimes}\, (z' \,\tilde{\otimes}\, z'') = (z \,\tilde{\otimes}\, z') \,\tilde{\otimes}\, z''$;
(7). Identity element: $z \,\tilde{\otimes}\, I = I \,\tilde{\otimes}\, z = z$, where $I = g(e)$;
(8). Inverse: $z \,\tilde{\otimes}\, [1/z] = [1/z] \,\tilde{\otimes}\, z = I$, where $[1/z] = g\!\left(\exp\left(1/\ln f(z)\right)\right)$;
and the product $\tilde{\otimes}$ is distributive w.r.t. the sum $\tilde{\oplus}$:
(9). Distributivity: $z \,\tilde{\otimes}\, (z' \,\tilde{\oplus}\, z'') = (z \,\tilde{\otimes}\, z') \,\tilde{\oplus}\, (z \,\tilde{\otimes}\, z'')$.
(See Appendix B for a proof).
This algebra is certainly defined in $\mathbb{R}$ whenever $U \equiv \mathbb{R}^+$ and $V \equiv \mathbb{R}$.
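A companion sketch (again illustrative, with a hypothetical helper make_dual_algebra) implements the dual operations (22)–(23) and checks the neutral element, the identity $I = g(e)$ and the distributivity stated in Theorem 2, using the same test generator as before.

```python
import math

def make_dual_algebra(g, f):
    """Deformed sum (22) and product (23) of the dual structure, built from g and its inverse f."""
    def tsum(z1, z2):                       # z1 (~+) z2 = g( f(z1) * f(z2) )
        return g(f(z1) * f(z2))
    def tprod(z1, z2):                      # z1 (~x) z2 = g( exp( ln f(z1) * ln f(z2) ) )
        return g(math.exp(math.log(f(z1)) * math.log(f(z2))))
    return tsum, tprod

# same test generator as before (Section 4.2, 0 < q < 1)
q = 0.5
a = (1.0 - q) / (2.0 - q)
g = lambda x: (x**(1.0 - q) - 1.0) / a
f = lambda y: (1.0 + a * y)**(1.0 / (1.0 - q))

tsum, tprod = make_dual_algebra(g, f)
z1, z2, z3 = 0.7, 1.5, 2.2                  # positive arguments keep f(z) > 1 and all logs real

assert math.isclose(tsum(z1, 0.0), z1)                                        # neutral element 0
assert math.isclose(tprod(z1, g(math.e)), z1)                                 # identity I = g(e)
assert math.isclose(tprod(z1, tsum(z2, z3)),
                    tsum(tprod(z1, z2), tprod(z1, z3)))                       # distributivity
print("Theorem 2 properties verified numerically for the test generator.")
```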
The sum $\tilde{\oplus}$ satisfies the relations
$$ f(z \,\tilde{\oplus}\, z') = f(z)\cdot f(z') \tag{24} $$
$$ g(z) \,\tilde{\oplus}\, g(z') = g(z\cdot z') \tag{25} $$
which, again, mimic the properties of the standard exponential and logarithm. Introducing the functions $\tilde{G}(z) = g\!\left(\exp z\right)$ and $\tilde{F}(z) = \ln f(z)$, with $\tilde{G}\!\left(\tilde{F}(z)\right) = \tilde{F}\!\left(\tilde{G}(z)\right) = z$, the product $\tilde{\otimes}$ fulfills the relations
$$ \tilde{F}(z \,\tilde{\otimes}\, z') = \tilde{F}(z)\cdot\tilde{F}(z') \tag{26} $$
$$ \tilde{G}(z) \,\tilde{\otimes}\, \tilde{G}(z') = \tilde{G}(z\cdot z') \tag{27} $$
as dual of Equation (24) and Equation (25), which now read
$$ \tilde{F}(z \,\tilde{\oplus}\, z') = \tilde{F}(z)+\tilde{F}(z') \tag{28} $$
$$ \tilde{G}(z) \,\tilde{\oplus}\, \tilde{G}(z') = \tilde{G}(z+z') \tag{29} $$
We observe that Equation (24)–Equation (27) follow from Equation (16)–Equation (19), respectively, by posing
$$ \otimes \;\rightarrow\; \cdot\,, \qquad + \;\rightarrow\; \tilde{\oplus} \tag{30} $$
$$ \oplus \;\rightarrow\; \cdot\,, \qquad + \;\rightarrow\; \tilde{\otimes} \tag{31} $$
and vice versa.
This correspondence can be summarized in the following
Theorem 3. The algebraic structures A and A ˜ are homomorphic.
The homomorphism is established by the relations:
$$ (z \oplus z')_g = (w \,\tilde{\oplus}\, w')_f \tag{32} $$
$$ (z \otimes z')_g = (w \,\tilde{\otimes}\, w')_f \tag{33} $$
with $z_g = \exp g(z)$ and $z_f = \ln f(z)$, where $w$ and $w'$ are solutions of the equation $w_f = z_g$. (See Appendix C for a proof).

4. Examples

4.1. Stretched-Exponential Distribution

As a first example, let us consider the following entropic form [21]
$$ S_\gamma[p] = \sum_{i=1}^{W}\Gamma\!\left(\frac{1}{\gamma}+1,\,-\ln p_i\right) - \Gamma\!\left(\frac{1}{\gamma}+1\right)\sum_{i=1}^{W}p_i \tag{34} $$
where $\Gamma(u,x) = \int_x^{\infty} t^{u-1}\exp(-t)\,dt$ is the incomplete gamma function of the second kind and $\Gamma(u) \equiv \Gamma(u,0)$ is the gamma function [53]. Equation (34) follows from Equation (9) with
$$ g(x) = -\left[\ln\frac{1}{x}\right]^{1/\gamma} \tag{35} $$
with $\gamma > 0$. This function assumes real values for $x \in U \equiv (0,1)$, with $y \in V \equiv (-\infty,0)$, and the distribution takes the form of a stretched exponential
$$ p_i = \exp\left(-\tilde{x}_i^{\,\gamma}\right) \tag{36} $$
The first algebraic structure related to entropy (34) is generated by the operations
$$ z \oplus z' = \exp\left\{-\left[\ln\frac{E_\gamma(z)\,E_\gamma(z')}{E_\gamma(z)+E_\gamma(z')}\right]^{\gamma}\right\} \tag{37} $$
$$ z \otimes z' = \exp\left\{-\left[\ln\left(E_\gamma(z)\,E_\gamma(z')\right)\right]^{\gamma}\right\} \tag{38} $$
where $E_\gamma(z) = \exp\left[(-\ln z)^{1/\gamma}\right]$, the opposite $[-z] = 1/E_{1/\gamma}\!\left(-E_\gamma(z)\right) \in \mathbb{C}$ and the inverse $[1/z] = z^{I}$, with $I = (-1)^{\gamma}$, in general belonging to $\mathbb{C}$.
The second algebraic structure $\tilde{\mathcal{A}}$ is given by
$$ z \,\tilde{\oplus}\, z' = \left(z^{\gamma}+z'^{\gamma}\right)^{1/\gamma} \tag{39} $$
$$ z \,\tilde{\otimes}\, z' = I\,z\,z' \tag{40} $$
where $I = (-1)^{1+1/\gamma}$ and $[-z] = -I\,z$ are, in general, in $\mathbb{C}$.
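As a quick numerical check (an illustrative sketch, not from the paper), the deformed product (15) built from the generator (35) reproduces property (16), $f(z)\otimes f(z') = f(z+z')$, on real negative arguments, which is the characteristic composition law of the stretched exponential.

```python
import math

gamma = 1.7                                   # deformation parameter, gamma > 0

def g(x):                                     # generator (35): g(x) = -[ln(1/x)]^{1/gamma}
    return -((-math.log(x))**(1.0 / gamma))

def f(y):                                     # its inverse: f(y) = exp(-(-y)^gamma)
    return math.exp(-((-y)**gamma))

def oprod(p1, p2):                            # deformed product (15): f(g(p1) + g(p2))
    return f(g(p1) + g(p2))

z1, z2 = -0.8, -1.9                           # arguments in V = (-inf, 0)
lhs = oprod(f(z1), f(z2))                     # f(z1) (x) f(z2)
rhs = f(z1 + z2)                              # f(z1 + z2)
print(lhs, rhs)                               # the two values coincide, cf. Equation (16)
```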

4.2. Power-Law Like Distributions

The next example is given by the Sharma–Taneja–Mittal (STM) entropy [54,55,56]
$$ S_{\kappa,r}[p] = -\sum_{i=1}^{W}\frac{p_i^{\,1+r+\kappa}-p_i^{\,1+r-\kappa}}{2\kappa} \tag{41} $$
with $-|\kappa| \leq r \leq |\kappa|$ if $0 \leq |\kappa| < 1/2$, and $|\kappa|-1 \leq r \leq 1-|\kappa|$ if $1/2 \leq |\kappa| < 1$.
This entropic form, reconsidered on physical grounds in [20,22], can be obtained from Equation (9) by posing
$$ g(x) = \lambda\,\ln_{\{\kappa,r\}}\!\left(\frac{x}{\alpha}\right) - 1 \tag{42} $$
where
$$ \ln_{\{\kappa,r\}}(x) = \frac{x^{\,r+\kappa}-x^{\,r-\kappa}}{2\kappa} \tag{43} $$
is the deformed logarithm. The existence of $\exp_{\{\kappa,r\}}(x)$, the inverse function of $\ln_{\{\kappa,r\}}(x)$, follows from its monotonicity, although an explicit expression cannot, in general, be given. The two constants $\alpha$ and $\lambda$ are given by
$$ \alpha = \left(\frac{1+r-\kappa}{1+r+\kappa}\right)^{1/2\kappa}\,, \qquad \lambda = \frac{(1+r-\kappa)^{(r+\kappa)/2\kappa}}{(1+r+\kappa)^{(r-\kappa)/2\kappa}} \tag{44} $$
and are related by $(1+r\pm\kappa)\,\alpha^{\,r\pm\kappa} = \lambda$ and $1/\lambda = \ln_{\{\kappa,r\}}(1/\alpha)$. Therefore, function (42) is correctly normalized. The equilibrium distribution related to entropy (41) is
$$ p(\tilde{x}_i) = \alpha\,\exp_{\{\kappa,r\}}\!\left(-\frac{\tilde{x}_i}{\lambda}\right) \tag{45} $$
and the algebras can be obtained from Equation (14)–Equation (15) and Equation (22)–Equation (23).
Hereinafter, we consider some particular cases:

q-Distribution

We look at the following entropic form
$$ S_{2-q}[p] = \sum_{i=1}^{W}\frac{p_i^{\,2-q}-p_i}{q-1} \tag{46} $$
with $0 < q < 2$, known as the Tsallis entropy [16] in the $(2-q)$-formalism [57]. It belongs to the STM family with $r = |\kappa|$ and $q = 1-2|\kappa|$, and its equilibrium distribution reads
$$ p_i = \alpha_q\left[1-(1-q)\,\tilde{x}_i\right]^{\frac{1}{1-q}} \tag{47} $$
with $\alpha_q = (2-q)^{1/(q-1)}$ and $\lambda_q = 1$.
Entropy (46) follows from Equation (9) by posing
$$ g(x) = \frac{x^{1-q}-1}{a_q} \tag{48} $$
with $a_q = (1-q)/(2-q)$. This function takes real values for $U \equiv \mathbb{R}^+$, with $V \equiv (-1/a_q,\,+\infty)$ for $0 < q < 1$ and $V \equiv (-\infty,\,-1/a_q)$ for $1 < q < 2$.
The first algebraic structure related to the q-distribution is generated by the operations
$$ z \oplus z' = \left[\frac{z^{1-q}+z'^{1-q}}{2} + a_q\,\ln\!\left(2\cosh\frac{z^{1-q}-z'^{1-q}}{2\,a_q}\right)\right]^{\frac{1}{1-q}} \tag{49} $$
$$ z \otimes z' = \left[z^{1-q}+z'^{1-q}-1\right]^{\frac{1}{1-q}} \tag{50} $$
where $O \in \mathbb{C}$ for $0 < q < 1$, $O \equiv 0$ for $1 < q < 2$, and $[1/z] = \left(2-z^{1-q}\right)^{1/(1-q)}$.
The second algebraic structure follows from
$$ z \,\tilde{\oplus}\, z' = z + z' + a_q\,z\,z' \tag{51} $$
$$ z \,\tilde{\otimes}\, z' = \frac{1}{a_q}\left\{\exp\!\left[\frac{1}{1-q}\,\ln(1+a_q z)\,\ln(1+a_q z')\right]-1\right\} \tag{52} $$
with $[-z] = -z/(1+a_q z)$, $I = (e^{1-q}-1)/a_q$ and $[1/z] = \left(e^{(1-q)^2/\ln(1+a_q z)}-1\right)/a_q$.
As already stated, the q-product (50) and the q-sum (51) were introduced in the literature in [23,24] and have recently been widely employed in the framework of q-statistics [28,29,32,33]. It is now clear why operations (50) and (51) do not fulfil the distributive property, $z \otimes (z' \,\tilde{\oplus}\, z'') \neq (z \otimes z') \,\tilde{\oplus}\, (z \otimes z'')$: they belong to two different algebraic structures.
Often, the q-algebra is applied directly to the functions
$$ \ln_q(z) = \frac{z^{1-q}-1}{1-q} \tag{53} $$
and
$$ \exp_q(z) = \left[1+(1-q)\,z\right]^{\frac{1}{1-q}} \tag{54} $$
They fulfill the relations
$$ \ln_q(z \otimes_q z') = \ln_q(z) + \ln_q(z') \tag{55} $$
$$ \exp_q(z + z') = \exp_q(z) \otimes_q \exp_q(z') \tag{56} $$
and
$$ \ln_q(z\cdot z') = \ln_q(z) \,\tilde{\oplus}_q\, \ln_q(z') \tag{57} $$
$$ \exp_q(z \,\tilde{\oplus}_q\, z') = \exp_q(z)\cdot\exp_q(z') \tag{58} $$
Then, it may be useful to rewrite Equation (49)–Equation (52) in terms of $\ln_q(z)$ and $\exp_q(z)$, to make contact with the existing literature. They are
$$ z \oplus_q z' = \exp_q\!\left\{\ln\!\left[\exp\left(\ln_q(z)\right)+\exp\left(\ln_q(z')\right)\right]\right\} \tag{59} $$
$$ z \otimes_q z' = \left[z^{1-q}+z'^{1-q}-1\right]^{\frac{1}{1-q}} \tag{60} $$
and
$$ z \,\tilde{\oplus}_q\, z' = z + z' + (1-q)\,z\,z' \tag{61} $$
$$ z \,\tilde{\otimes}_q\, z' = \ln_q\!\left\{\exp\!\left[\ln\left(\exp_q(z)\right)\cdot\ln\left(\exp_q(z')\right)\right]\right\} \tag{62} $$
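The relations (55)–(58), as well as the non-distributivity of the q-product (60) with respect to the q-sum (61) discussed above, are easily verified numerically. The following sketch is purely illustrative and uses function names of my own choosing.

```python
import math

q = 0.5                                                  # 0 < q < 2, q != 1

def ln_q(z):                                             # Equation (53)
    return (z**(1.0 - q) - 1.0) / (1.0 - q)

def exp_q(z):                                            # Equation (54)
    return (1.0 + (1.0 - q) * z)**(1.0 / (1.0 - q))

def q_prod(z1, z2):                                      # q-product, Equation (60)
    return (z1**(1.0 - q) + z2**(1.0 - q) - 1.0)**(1.0 / (1.0 - q))

def q_sum(z1, z2):                                       # q-sum, Equation (61)
    return z1 + z2 + (1.0 - q) * z1 * z2

x, y, w = 1.4, 2.3, 0.6

# relations (55)-(58)
assert math.isclose(ln_q(q_prod(x, y)), ln_q(x) + ln_q(y))
assert math.isclose(exp_q(x + y), q_prod(exp_q(x), exp_q(y)))
assert math.isclose(ln_q(x * y), q_sum(ln_q(x), ln_q(y)))
assert math.isclose(exp_q(q_sum(x, y)), exp_q(x) * exp_q(y))

# the q-product does not distribute over the q-sum: they belong to different structures
lhs = q_prod(x, q_sum(y, w))
rhs = q_sum(q_prod(x, y), q_prod(x, w))
print(lhs, rhs)        # the two values differ
```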

Student’s t-Distribution

In statistics, the q-distribution recovers, for special values of q, the Student's t-distribution. In fact, passing to the continuum, entropy (46) can be written as
$$ S_\nu[p] = \frac{1}{\nu}\int_{-\infty}^{+\infty}\left[p(x)^{1-\nu}-p(x)\right]dx \tag{63} $$
with $\nu = q-1$.
Maximization of Equation (63), under the constraints of normalization and variance $\sigma^2 = (2-\nu)/(2-3\nu)$, with $0 < \nu < 2/3$, gives the Student's t-distribution
$$ p(x) = \sqrt{\frac{\nu}{\pi\,(2-\nu)}}\;\frac{\Gamma\!\left(\frac{1}{\nu}\right)}{\Gamma\!\left(\frac{1}{\nu}-\frac{1}{2}\right)}\left(1+\frac{\nu}{2-\nu}\,x^2\right)^{-1/\nu} \tag{64} $$
which can be rewritten in the form
$$ p(\tilde{x}) = \left(1+\nu\,\tilde{x}\right)^{-1/\nu} \tag{65} $$
with $\tilde{x} = \left(1+\mu_0+\mu_2\,x^2\right)/(1-\nu)$. In this case, the related algebras follow readily from Equation (49)–Equation (52).

Cauchy–Lorentz distribution

In the $\nu \to 1$ limit, Equation (64) reduces to the Cauchy–Lorentz distribution
$$ p(x) = \frac{1}{\pi}\,\frac{1}{1+x^2} \tag{66} $$
here rewritten as
$$ p(\tilde{x}) = \frac{1}{1-\tilde{x}} \tag{67} $$
with $\tilde{x} = 1-\pi\,(1+x^2) < 0$. Although Equation (66) cannot be obtained from the variational problem (11) since, as is known, the second moment of the Cauchy distribution is not defined, we can still derive the corresponding algebra starting from the distribution, by posing
$$ g(z) = 1-\frac{1}{z} \tag{68} $$
in the spirit of Equation (12).
We obtain
$$ z \oplus z' = \left[\frac{z+z'}{2\,z\,z'} - \ln\!\left(2\cosh\frac{z-z'}{2\,z\,z'}\right)\right]^{-1} \tag{69} $$
$$ z \otimes z' = \frac{z\,z'}{z+z'-z\,z'} \tag{70} $$
and
$$ z \,\tilde{\oplus}\, z' = z + z' - z\,z' \tag{71} $$
$$ z \,\tilde{\otimes}\, z' = 1-\exp\left[-\ln(1-z)\,\ln(1-z')\right] \tag{72} $$
We observe that Equation (68) does not fulfill condition (2). However, in this case, since an entropic form related to the Cauchy distribution is not known, at least in the sense of a variational principle, the generator of the algebras is related just to the distribution.
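The closed forms (69)–(72) can be checked against the general definitions (14)–(15) and (22)–(23) with the generator (68); the sketch below (illustrative only) does so for real arguments in (0, 1).

```python
import math

def g(z):                                        # generator (68): g(z) = 1 - 1/z
    return 1.0 - 1.0 / z

def f(y):                                        # its inverse: f(y) = 1/(1 - y)
    return 1.0 / (1.0 - y)

z1, z2 = 0.5, 0.8

# operations built from the general definitions (14)-(15) and (22)-(23)
osum  = f(math.log(math.exp(g(z1)) + math.exp(g(z2))))
oprod = f(g(z1) + g(z2))
tsum  = g(f(z1) * f(z2))
tprod = g(math.exp(math.log(f(z1)) * math.log(f(z2))))

# closed forms (69)-(72)
assert math.isclose(osum, 1.0 / ((z1 + z2) / (2*z1*z2)
                                 - math.log(2 * math.cosh((z1 - z2) / (2*z1*z2)))))
assert math.isclose(oprod, z1 * z2 / (z1 + z2 - z1 * z2))
assert math.isclose(tsum, z1 + z2 - z1 * z2)
assert math.isclose(tprod, 1.0 - math.exp(-math.log(1.0 - z1) * math.log(1.0 - z2)))
print(osum, oprod, tsum, tprod)
```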

Zipf–Pareto Distribution

Starting from Equation (43) we can derive the asymptotic behavior of the distribution functions, since
$$ \exp_{\{\kappa,r\}}(-x) = 1/\exp_{\{\kappa,-r\}}(x) \sim x^{-1/(\kappa-r)} \tag{73} $$
which holds for $x \gg 1$. Therefore, Equation (45) asymptotically behaves like the Zipf–Pareto distribution
$$ p(x) \sim x^{\,s} \tag{74} $$
with $s = 1/(r-\kappa) < 0$.
Formally, we can obtain the corresponding algebras, in the limit of large values of $z$ and $z'$, starting from the algebras related to the STM entropy. The corresponding operations are:
$$ z \oplus z' \simeq \left[\ln\left(\exp z^{1/s}+\exp z'^{1/s}\right)\right]^{s} \tag{75} $$
$$ z \otimes z' \simeq \left(z^{1/s}+z'^{1/s}\right)^{s} \tag{76} $$
and
$$ z \,\tilde{\oplus}\, z' \simeq z\,z' \tag{77} $$
$$ z \,\tilde{\otimes}\, z' \simeq \exp\left(s\,\ln z\,\ln z'\right) \tag{78} $$
respectively.
Remark that these operations are obtained in the limit of large values starting from the operations derived in the STM formalism. Consequently, Equation (75)–Equation (78) cannot be considered as a continuous deformation of the standard operations of sum and product, although the algebras formed by these operations are still commutative, associative and distributive. (The standard algebra is recovered in the $\gamma \to 1$ limit for the stretched exponential and in the $(\kappa,r) \to (0,0)$ limit for the STM family.)

4.3. Interpolating Fermi–Dirac and Bose–Einstein Distribution

As a final example, let us consider the interpolating Bosons–Fermions distribution. Following [58], we start from the equilibrium distribution
$$ p(\tilde{x}_i) = \frac{1}{\exp(\tilde{x}_i)-\kappa} \tag{79} $$
where $\kappa \in (-1,1)$ is the interpolating parameter. In the $\kappa \to \pm 1$ limits we obtain the well-known Bose–Einstein and Fermi–Dirac statistics. The Boltzmann–Gibbs distribution is also included, for $\kappa = 0$.
Equation (79) suggests considering the following generating function
$$ g(x) = \ln\frac{1+\kappa\,x}{x} \tag{80} $$
Unfortunately, this function does not meet the correct normalization $f(0) = 1$. This happens because in quantum many-body statistics the normalization factor, the fugacity, is accounted for inside the exponential term and not as an overall factor, as in the formalism presented in these pages.
To obtain the right normalization we suitably shift the origin of the axes. In this way, the generator becomes
$$ g(x) = \ln\frac{(1+\kappa)\,x}{1+\kappa\,x} \tag{81} $$
and the corresponding entropic form reads
$$ S_\kappa[p] = -\sum_{i=1}^{W}\left[p_i\ln p_i - \frac{1+\kappa\,p_i}{\kappa}\,\ln(1+\kappa\,p_i)\right] - \frac{1+\kappa}{\kappa}\,\ln(1+\kappa) \tag{82} $$
which coincides with the one proposed in [58], modulo the additive constant factor.
Function (81) assumes real values for $U \equiv (-\infty,\,-1/\kappa)\cup(0,\,+\infty)$ when $\kappa > 0$ and $U \equiv (0,\,-1/\kappa)$ when $\kappa < 0$, with $V \equiv (-\infty,\,+\infty)$.
The first set of operations is given by
$$ z \oplus z' = \frac{z+z'+2\,\kappa\,z\,z'}{1-\kappa^2\,z\,z'} \tag{83} $$
$$ z \otimes z' = \frac{(1+\kappa)\,z\,z'}{1+\kappa\,(z+z'-z\,z')} \tag{84} $$
with $[-z] = -z/(1+2\kappa z)$ and $[1/z] = (1+\kappa z)/\left[(1+2\kappa)z-\kappa\right]$, while the second set is
$$ z \,\tilde{\oplus}\, z' = -\ln\left[\kappa - \kappa\,\exp(-z) - \kappa\,\exp(-z') + (1+\kappa)\,\exp(-z-z')\right] \tag{85} $$
$$ z \,\tilde{\otimes}\, z' = \ln(1+\kappa) - \ln\left[\kappa + \exp\left(-h(z)\,h(z')\right)\right] \tag{86} $$
where $h(z) = \ln\left[1+\kappa-\kappa\exp(z)\right] - z$, $[-z] = -z + \ln\left(1+\kappa-\kappa\exp(z)\right) - \ln\left(1-\kappa+\kappa\exp(-z)\right)$, $I = \ln\left[(1+\kappa)\,e/(1+\kappa\,e)\right]$ and $[1/z] = \ln(1+\kappa) - \ln\left[\kappa+\exp\left(1/h(z)\right)\right]$.
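The operations (83)–(86) follow from the general definitions with the generator (81); as an illustration (a sketch with parameter values chosen arbitrarily), the first pair can be checked numerically against its closed forms.

```python
import math

kappa = 0.35                                        # interpolating parameter, |kappa| < 1

def g(x):                                           # generator (81)
    return math.log((1.0 + kappa) * x / (1.0 + kappa * x))

def f(y):                                           # inverse of (81)
    return math.exp(y) / (1.0 + kappa - kappa * math.exp(y))

def osum(z1, z2):                                   # general definition (14)
    return f(math.log(math.exp(g(z1)) + math.exp(g(z2))))

def oprod(z1, z2):                                  # general definition (15)
    return f(g(z1) + g(z2))

z1, z2 = 0.4, 0.9

# closed forms (83)-(84)
closed_sum = (z1 + z2 + 2.0 * kappa * z1 * z2) / (1.0 - kappa**2 * z1 * z2)
closed_prod = (1.0 + kappa) * z1 * z2 / (1.0 + kappa * (z1 + z2 - z1 * z2))

assert math.isclose(osum(z1, z2), closed_sum)
assert math.isclose(oprod(z1, z2), closed_prod)
print(osum(z1, z2), oprod(z1, z2))
```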

5. Applications

5.1. Gauss Law of Error

One of the best-known and most ubiquitous distributions in nature is surely the Gauss law of error, also known as the normal distribution. Its universality is well explained by the central limit theorem, which establishes its role as an "attractor" in the space of distributions in the limit of a large number of statistically independent and identically distributed events.
Recently [29,32], a possible generalization of the central limit theorem has been proposed in order to justify the recurrence of non-Gaussian distributions [39,40,41] in the limit of a large number of statistically dependent events.
In the following, we reconsider the derivation of Gauss law for generalized correlated measurements, where correlations are accounted for by means of a deformed product.
Let us consider $n$ replicate measurements of a given observable. Let $x_i$, with $i = 1,\dots,n$, be the experimental values obtained. These are distributed around the "exact", although unknown, value $x_e$, so that $\epsilon_i = |x_i - x_e|$ represents the "error" or uncertainty of each measurement. The quantities $\epsilon_i$ can be treated as the outcomes of a random variable with a discrete probability distribution function $p_i = p(\epsilon_i)$.
From the experimental measurement, one obtains the mean value
$$ x^{*} = \frac{1}{n}\left(x_1+x_2+\dots+x_n\right) \tag{87} $$
and, according to the law of large numbers [59],
$$ x_e = \lim_{n\to\infty} x^{*} \tag{88} $$
We suppose that in a series of n subsequent measurements, each event is not necessarily statistically independent and correlations among events can be taken into account by means of a generalized product.
Following the maximum likelihood principle, we introduce the quantity
$$ L(x_e) = p(\epsilon_1)\otimes p(\epsilon_2)\otimes\dots\otimes p(\epsilon_n) \tag{89} $$
as a function of $x_e$, since
$$ \epsilon_i = |x_i - x_e| \tag{90} $$
and $p(\epsilon_i)$ can be obtained by maximizing Equation (89) under the condition $x^{*} = x_e$.
However, it is more convenient to evaluate the extremum of $g\!\left(L(x)\right)$; since $g(x)$ is a monotonically increasing function, its maximum coincides with that of $L(x)$. Thus, from Equation (17), we have
$$ g\!\left(L(x_e)\right) = \sum_{i=1}^{n} g\!\left(p(\epsilon_i)\right) \tag{91} $$
and the extremal points are obtained from the equation
$$ \sum_{i=1}^{n}\left.\frac{d}{dy_i}\,g\!\left(p(y_i)\right)\right|_{y_i = x_i - x^{*}} = 0 \tag{92} $$
On the other hand, the algebraic equation $\sum_i\phi(y_i) = 0$, constrained by the condition $\sum_i y_i = 0$, admits the solution $\phi(y) = a\,y$ for a given real constant $a$ [41], so that, accounting for
$$ \sum_{i=1}^{n} y_i \equiv \sum_{i=1}^{n}\left(x_i-x^{*}\right) = 0 \tag{93} $$
we get
$$ \frac{d}{dy}\,g\!\left(p(y)\right) = -2\,\beta\,y \tag{94} $$
with $a \equiv -2\beta$, a constant.
After integration, the distribution assumes the final form
$$ p(x_i) = f\!\left(-\mu-\beta\,(x_i-x)^2\right) \tag{95} $$
where the integration constant μ is fixed by the normalization.
This function is a generalized Gaussian distribution provided that β is positive definite.
Furthermore, the positivity of $\beta$ is required to guarantee that $L(x)$ has a maximum (and not a minimum) at $x \equiv x_e$. In fact, using Equation (91) and Equation (95), we obtain
$$ L(x_e) = f\!\left(-n\,\mu-\beta\sum_{i=1}^{n}\left(x_i-x_e\right)^2\right) \tag{96} $$
and a maximum at $x \equiv x_e$ requires
$$ \left.\frac{d^2 L(x)}{dx^2}\right|_{x=x^{*}} = \left.\left[\frac{d^2 f(y)}{dy^2}\left(\frac{dy}{dx}\right)^{2}+\frac{d f(y)}{dy}\,\frac{d^2 y}{dx^2}\right]\right|_{x=x^{*}} < 0 \tag{97} $$
with $y = -n\,\mu-\beta\sum_i(x_i-x)^2$.
The positivity of $\beta$ then follows from $d f(x)/dx > 0$ and from observing that
$$ \left.\frac{dy}{dx}\right|_{x=x^{*}} = 0 \tag{98} $$
and
$$ \left.\frac{d^2 y}{dx^2}\right|_{x=x^{*}} = -2\,n\,\beta \tag{99} $$
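As an illustration of the generalized error law (95) (a sketch, not taken from the paper), one may take the generalized exponential $f$ to be the q-exponential of Equation (54); the resulting q-Gaussian can be normalized numerically, here through an overall factor computed by quadrature rather than through the constant $\mu$, and reduces to the ordinary normal density for $q \to 1$.

```python
import numpy as np
from scipy.integrate import quad

def exp_q(z, q):
    """q-exponential of Equation (54); reduces to exp(z) for q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(z)
    base = 1.0 + (1.0 - q) * z
    return base**(1.0 / (1.0 - q)) if base > 0 else 0.0   # cut-off where the base is negative

def generalized_gaussian(q, beta):
    """Error law p(y) ~ exp_q(-beta y^2), normalized by an overall factor (illustrative)."""
    Z = quad(lambda y: exp_q(-beta * y * y, q), -np.inf, np.inf)[0]
    return lambda y: exp_q(-beta * y * y, q) / Z

beta = 0.5
p_gauss = generalized_gaussian(1.0, beta)       # q -> 1: ordinary Gaussian
p_q = generalized_gaussian(1.3, beta)           # q = 1.3: fat-tailed generalized Gaussian

print(p_gauss(0.0), np.sqrt(beta / np.pi))      # q = 1 case matches the normal density at 0
print(p_q(3.0) / p_gauss(3.0))                  # the q > 1 law has a heavier tail
```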

5.2. Extensivity of Microcanonical Entropy

In the microcanonical picture, the macrostate "visited" by a single particle is formed by a collection of $W$ microstates, all having the same probability $p = 1/W$ of being visited. In this simple case, the Γ-space and the μ-space coincide and the BG entropy reduces to the Boltzmann–Planck entropy
$$ S_{\rm B} = \ln W \tag{100} $$
for a set of $W$ identically distributed elements.
As is known, Equation (100) becomes an extensive quantity when the system is composed of $N$ identically distributed and uncorrelated particles, such that
$$ p_N = \frac{1}{W_N} \equiv \frac{1}{W^{N}} = (p)^{N} \tag{101} $$
and the probability in the Γ-space factorizes into the product of the respective marginal probabilities of the corresponding μ-space. In this situation, entropy (100) scales according to $S_{\rm B}(N) = N\,S_{\rm B}(1)$.
Clearly, the validity of Equation (101) is related to the assumption of statistical independence between the particles of the system. Otherwise, if the system is globally correlated, this property may no longer hold, and other entropic forms may become extensive depending on the nature of the correlations among the particles. In [46,47], it has been suggested that, in order to restore the extensivity of the entropy, the q-product introduced in [23,24] can be used to describe the possible correlations in a system governed by the q-entropy.
Actually, this is a general statement in the microcanonical picture. In fact, let us consider a collection of $W$ identically distributed events with equal probability $p_i = 1/W$ ($i = 1,\dots,W$). Equation (8) becomes
$$ S[W] = -\sum_{i=1}^{W}\frac{1}{W}\,\Lambda\!\left(\frac{1}{W}\right) = -\Lambda\!\left(\frac{1}{W}\right) \tag{102} $$
for a given monotonically increasing function $\Lambda(x)$; this is a generalized version of the Boltzmann–Planck entropy.
By identifying the function $\Lambda(x)$ and its inverse $E(x)$ with the algebraic generators $g(x)$ and $f(x)$, respectively, we can introduce the product ⊗ according to
$$ \Lambda(x\otimes y) = \Lambda(x)+\Lambda(y) \tag{103} $$
We postulate the following composition rule in the Γ-space
$$ W_{\rm AB} = \left(\frac{1}{W_{\rm A}}\otimes\frac{1}{W_{\rm B}}\right)^{-1} \tag{104} $$
Thus, entropy (102) becomes additive
$$ S[W_{\rm AB}] = -\Lambda\!\left(\frac{1}{W_{\rm AB}}\right) = -\Lambda\!\left(\frac{1}{W_{\rm A}}\otimes\frac{1}{W_{\rm B}}\right) = -\Lambda\!\left(\frac{1}{W_{\rm A}}\right)-\Lambda\!\left(\frac{1}{W_{\rm B}}\right) = S[W_{\rm A}]+S[W_{\rm B}] \tag{105} $$
and, more generally, extensive
$$ S[W_N] = -\Lambda\!\left(\frac{1}{W_N}\right) = -\Lambda\!\left(\left(\frac{1}{W}\right)^{\otimes N}\right) = -N\,\Lambda\!\left(\frac{1}{W}\right) = N\,S[W] \tag{106} $$
where $x^{\otimes N} = x\otimes x\otimes\dots\otimes x$, $N$ times.
As an explicit example, let us consider the microcanonical version of entropy (46), that is
$$ S_{2-q}[W] = \ln_{2-q}(W) = \frac{W^{\,q-1}-1}{q-1} \tag{107} $$
From the corresponding algebra we obtain
$$ W_{\rm AB} = \left[W_{\rm A}^{\,q-1}+W_{\rm B}^{\,q-1}-1\right]^{\frac{1}{q-1}} \tag{108} $$
and
$$ W_N = \left[N\,W^{\,q-1}-N+1\right]^{\frac{1}{q-1}} \tag{109} $$
making entropy (107) additive and extensive. Analogous relations have already been derived in [47].
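The additivity and extensivity expressed by Equation (105)–(106) are easily verified for the composition rule (109); the sketch below (illustrative; the function names are mine) uses Equation (46) evaluated at $p_i = 1/W$ for the microcanonical entropy.

```python
def S_2q(W, q):
    """Equation (46) evaluated at p_i = 1/W: S = (W^{q-1} - 1)/(q - 1)."""
    return (W**(q - 1.0) - 1.0) / (q - 1.0)

def W_N(W, N, q):
    """Composition rule (109) for N identically distributed, correlated subsystems."""
    return (N * W**(q - 1.0) - N + 1.0)**(1.0 / (q - 1.0))

q, W, N = 1.5, 50.0, 20
print(S_2q(W_N(W, N, q), q), N * S_2q(W, q))    # both sides of S[W_N] = N S[W] coincide
```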
Another instructive example is given by the κ-entropy [18]
$$ S_{\kappa}[p] = -\sum_{i=1}^{W}\frac{p_i^{\,1+\kappa}-p_i^{\,1-\kappa}}{2\kappa} \tag{110} $$
with $|\kappa| < 1$, which is a one-parameter subfamily of the STM entropy (41), with $r = 0$.
In the microcanonical picture, entropy (110) becomes [20]
$$ S[W] = \ln_{\{\kappa\}}(W) \tag{111} $$
where
$$ \ln_{\{\kappa\}}(x) = \frac{x^{\kappa}-x^{-\kappa}}{2\kappa} \tag{112} $$
is the κ-logarithm [18].
The algebraic operations obtained from the function $\ln_{\{\kappa\}}(z)$ are, respectively,
$$ z \oplus z' = \exp\!\left\{\frac{1}{\kappa}\,\mathrm{arcsinh}\!\left[h_+(z,z')+\kappa\,\ln\!\left(2\cosh\frac{h_-(z,z')}{\kappa}\right)\right]\right\} \tag{113} $$
$$ z \otimes z' = \exp\!\left[\frac{1}{\kappa}\,\mathrm{arcsinh}\!\left(2\,h_+(z,z')\right)\right] \tag{114} $$
with
$$ h_{\pm}(z,z') = \frac{1}{4}\left[z^{\kappa}\pm z'^{\kappa}-\left(z^{-\kappa}\pm z'^{-\kappa}\right)\right] \tag{115} $$
and
$$ z \,\tilde{\oplus}\, z' = z\,\sqrt{1+\kappa^2\,z'^2}+z'\,\sqrt{1+\kappa^2\,z^2} \tag{116} $$
$$ z \,\tilde{\otimes}\, z' = \frac{1}{\kappa}\,\sinh\!\left[\frac{1}{\kappa}\,\mathrm{arcsinh}(\kappa\,z)\,\mathrm{arcsinh}(\kappa\,z')\right] \tag{117} $$
Remark that the algebra $\left(\mathbb{R}^+\times\mathbb{R}^+\to\mathbb{R}^+,\,\oplus,\,\otimes\right)$ is a unitary semi-ring. Differently, the algebra $\left(\mathbb{R}\times\mathbb{R}\to\mathbb{R},\,\tilde{\oplus},\,\tilde{\otimes}\right)$ forms an Abelian field.
Using Equation (114) we obtain
$$ W_{\rm AB} = \left[2\,h_+(W_{\rm A},W_{\rm B})+\sqrt{1+4\,h_+(W_{\rm A},W_{\rm B})^2}\right]^{1/\kappa} \tag{118} $$
and
$$ W_N = \left[N\,\kappa\,\Lambda_{\kappa}(W)+\sqrt{1+\left(N\,\kappa\,\Lambda_{\kappa}(W)\right)^2}\right]^{1/\kappa} \tag{119} $$
In the $W\to\infty$ limit, we get
$$ W_N \underset{W\to\infty}{\sim} N^{1/\kappa}\,W \tag{120} $$
Thus, the κ-entropy is asymptotically extensive whenever the volume of the phase space scales according to $N^{1/\kappa}$.
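A similar numerical check (illustrative only, assuming $\Lambda_\kappa(W)$ in Equation (119) denotes the κ-logarithm (112) of $W$) confirms that the composition rule (119) makes the κ-entropy additive, $\ln_{\{\kappa\}}(W_N) = N\,\ln_{\{\kappa\}}(W)$, and that the ratio $W_N/(N^{1/\kappa}W)$ approaches one for large $W$, in agreement with the scaling (120).

```python
import math

kappa = 0.4

def ln_k(x):
    """kappa-logarithm (112)."""
    return (x**kappa - x**(-kappa)) / (2.0 * kappa)

def W_N(W, N):
    """Composition rule (119), with Lambda_kappa(W) identified with ln_k(W)."""
    u = N * kappa * ln_k(W)
    return (u + math.sqrt(1.0 + u * u))**(1.0 / kappa)

W, N = 1.0e6, 30
print(ln_k(W_N(W, N)), N * ln_k(W))          # additivity: ln_k(W_N) = N ln_k(W)
print(W_N(W, N) / (N**(1.0 / kappa) * W))    # tends to 1 for large W, cf. the scaling (120)
```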
We remark that the operations (113)–(117) coincide with the ones introduced in [18]. There, the author introduces the κ-algebras in the real field starting from the relations
$$ (x \oplus y)^{\{\kappa\}} = x^{\{\kappa\}}+y^{\{\kappa\}} \tag{121} $$
$$ (x \otimes y)^{\{\kappa\}} = x^{\{\kappa\}}\cdot y^{\{\kappa\}} \tag{122} $$
and
$$ (x \oplus y)_{\{\kappa\}} = x_{\{\kappa\}}+y_{\{\kappa\}} \tag{123} $$
$$ (x \otimes y)_{\{\kappa\}} = x_{\{\kappa\}}\cdot y_{\{\kappa\}} \tag{124} $$
where the κ-numbers $x^{\{\kappa\}}$ and $x_{\{\kappa\}}$ are defined by
$$ x^{\{\kappa\}} = \frac{1}{\kappa}\,\mathrm{arcsinh}\!\left(h(\kappa\,x)\right) $$
$$ x_{\{\kappa\}} = \frac{1}{\kappa}\,h^{-1}\!\left(\sinh(\kappa\,x)\right) $$
for a given generator $h(x)$. It is easy to verify that Equation (121)–Equation (124) are substantially equivalent to Equation (26), Equation (28) and Equation (18), Equation (20), respectively, where the correspondence arises from
$$ g(z) = \left(\ln z\right)_{\{\kappa\}} \tag{125} $$
$$ f(z) = \exp\left(z^{\{\kappa\}}\right) \tag{126} $$
Finally, it is worth noting that a scaling relation similar to Equation (120) has been considered in [60], where a possible application of κ-statistics to the Bekenstein–Hawking area law has been discussed in the framework of black-hole thermostatistics.

6. Conclusions

In the picture of a statistical mechanics based on a generalized trace-form entropy, we derived two algebraic structures whose operations of sum and product generalize the well-known properties of the sum and product of the ordinary logarithm and exponential. These deformed functions are often used to study the statistical mechanics properties of systems characterized by non-Gibbsian distributions. We established a link between the formalism of generalized algebras and the one used in the formulation of generalized statistical mechanics since, as shown, the former are derived starting from a very general expression of a trace-form entropy. In this way, one can introduce a pair of algebraic structures useful for the mathematical manipulation of the emerging statistical-mechanics theory.
We presented several examples to illustrate how the method works. For each of them, we have listed the main quantities, like the neutral element of the sum and the unity of the product. Often, these quantities take complex values, i.e., the algebras form Abelian fields in $\mathbb{C}$, since, in general, they are not closed in $\mathbb{R}$. However, the complex algebra turns out to be relevant in several respects, such as in the definition of a generalized q-Fourier transform [29,30], given by
$$ F_q[f](\xi) = \int_{\mathrm{supp}\,f}\exp_q\!\left(i\,\chi\,\xi\right)\otimes_q f(\chi)\,d\chi \tag{127} $$
and others [31,32,33]. Furthermore, in applications of statistical mechanics to the study of non-equilibrium phenomena, like the damped harmonic oscillator or nuclear decay, the oscillation frequencies or the nuclear energy levels are assumed to be complex quantities, with the imaginary part related to the damping effect or to the decay width.
The algebras introduced in this paper are isomorphic to the standard algebra of complex numbers. This isomorphism has been discussed previously in [18,22] for the κ-case and, more generally, in [26,61]. In the present formalism, the isomorphism is realized according to Equation (18) and Equation (26), here rewritten as
$$ x_G + y_G = (x \oplus y)_G \tag{128} $$
$$ x_F \cdot y_F = (x \,\tilde{\otimes}\, y)_F \tag{129} $$
On physical grounds, generalized algebras may be related to the interactions present in the system. This idea has been developed in the past by several authors ([42,62,63,64,65,66,67,68], to cite a few) and is founded on the possibility of interchanging the role of statistical correlations with that of interactions. In fact, it is known that when a system is weakly interacting, the BG distribution describes well its statistical mechanical properties. It turns out that, if interactions can be neglected, i.e., the energy of the whole system factorizes into the sum of the energies of its single parts, $U_{\rm A+B} = U_{\rm A}+U_{\rm B}$, the distribution factorizes too: $p_{\rm A+B} = p_{\rm A}\cdot p_{\rm B}$. This is a consequence of the exponential form of the BG distribution, being $\exp(x+y) = \exp(x)\,\exp(y)$. On the other hand, if interactions are not negligible, $U_{\rm A+B} = U_{\rm A}+U_{\rm B}+U_{\rm int}$, the BG distribution cannot be factorized into the product distribution, and correlations among the parts of the system can occur. A possibility to treat these systems is to introduce a different entropic form whose related distribution is able to capture the salient statistical features, characterized by the underlying algebraic structure.
Such an interpretation can be summarized by means of Equation (16) and Equation (24), which, suitably relabeled, become
$$ p(\tilde{x}_i+\tilde{y}_j) = p(\tilde{x}_i)\otimes p(\tilde{y}_j) \tag{130} $$
$$ p(\tilde{x}_i \,\tilde{\oplus}\, \tilde{y}_j) = p(\tilde{x}_i)\cdot p(\tilde{y}_j) \tag{131} $$
where suitable deformed sums and products can approximately describe the interactions and the correlations between the parts of the system.
Finally, let us stress once more the generality of the method proposed in this paper. The algebraic structures arise from the existence of a complex function $g(z)$ and its inverse function, univocally defined in the complex plane. Applications to statistical mechanics require that the generator $g(z)$ takes real values in a subset of the real axis including the interval $[0,1] \subseteq U$, where probabilities make sense. Conditions on $g(x)$, for $x \in U$, can be imposed for physical reasons. In this paper we have discussed the existence of an entropic form related to these algebras, although other conditions may be imposed on $g(x)$ if necessary. For instance, if one is interested in studying thermodynamical equilibrium, a concavity condition on $g(x)$ may be imposed. However, these conditions are not pertinent to the construction of the algebraic structures. In fact, the proposed method can be successfully applied to introduce algebras starting from generalized distributions with or without a related entropic form. For instance, algebraic structures of the kind discussed in this paper can be derived for the three-parameter distribution introduced in [69] or for the one-parameter distribution related to the generalized exponential $\exp_{\kappa}(x) = \exp\!\left(\arctan(\kappa x)/\kappa\right)$ introduced in [70].

A. Proof of Theorem 1

For the sum ⊕ defined in Equation (14) we have
Commutativity:
$$ z \oplus z' = f\!\left(\ln\left[\exp g(z)+\exp g(z')\right]\right) = f\!\left(\ln\left[\exp g(z')+\exp g(z)\right]\right) = z' \oplus z $$
Associativity:
$$ z \oplus (z' \oplus z'') = z \oplus f\!\left(\ln\left[\exp g(z')+\exp g(z'')\right]\right) = f\!\left(\ln\left[\exp g(z)+\exp g(z')+\exp g(z'')\right]\right) = f\!\left(\ln\left[\exp g(z)+\exp g(z')\right]\right) \oplus z'' = (z \oplus z') \oplus z'' $$
Neutral element:
$$ z \oplus O = f\!\left(\ln\left[\exp g(z)+\exp g(O)\right]\right) = f\!\left(\ln\exp g(z)\right) = z $$
where $O = \lim_{z\to-\infty} f(z)$,
Opposite:
$$ z \oplus [-z] = f\!\left(\ln\left[\exp g(z)+\exp g([-z])\right]\right) = f\!\left(\ln\left[\exp g(z)-\exp g(z)\right]\right) = f(-\infty) = O \tag{A3} $$
where $[-z] = f\!\left(\ln\left(-\exp g(z)\right)\right)$.
For the product ⊗ defined in Equation (15) we have
Commutativity:
$$ z \otimes z' = f\!\left(g(z)+g(z')\right) = f\!\left(g(z')+g(z)\right) = z' \otimes z $$
Associativity:
$$ z \otimes (z' \otimes z'') = z \otimes f\!\left(g(z')+g(z'')\right) = f\!\left(g(z)+g(z')+g(z'')\right) = f\!\left(g(z)+g(z')\right) \otimes z'' = (z \otimes z') \otimes z'' \tag{A5} $$
Identity element:
$$ z \otimes 1 = f\!\left(g(z)+g(1)\right) = f\!\left(g(z)\right) = z $$
Inverse:
$$ z \otimes [1/z] = f\!\left(g(z)+g([1/z])\right) = f\!\left(g(z)-g(z)\right) = f(0) = 1 $$
where $[1/z] = f\!\left(-g(z)\right)$.
Finally,
Distributivity:
$$ z \otimes (z' \oplus z'') = z \otimes f\!\left(\ln\left[\exp g(z')+\exp g(z'')\right]\right) = f\!\left(g(z)+\ln\left[\exp g(z')+\exp g(z'')\right]\right) = f\!\left(\ln\left[\exp g(z)\cdot\left(\exp g(z')+\exp g(z'')\right)\right]\right) = f\!\left(\ln\left[\exp\left(g(z)+g(z')\right)+\exp\left(g(z)+g(z'')\right)\right]\right) = f\!\left(\ln\left[\exp g(z\otimes z')+\exp g(z\otimes z'')\right]\right) = (z \otimes z') \oplus (z \otimes z'') \tag{A8} $$

B. Proof of Theorem 2

For the sum ˜ defined in Equation (22) we have
Commutativity:
$$ z \,\tilde{\oplus}\, z' = g\!\left(f(z)\cdot f(z')\right) = g\!\left(f(z')\cdot f(z)\right) = z' \,\tilde{\oplus}\, z $$
Associativity:
$$ z \,\tilde{\oplus}\, (z' \,\tilde{\oplus}\, z'') = z \,\tilde{\oplus}\, g\!\left(f(z')\cdot f(z'')\right) = g\!\left(f(z)\cdot f(z')\cdot f(z'')\right) = g\!\left(f(z)\cdot f(z')\right) \,\tilde{\oplus}\, z'' = (z \,\tilde{\oplus}\, z') \,\tilde{\oplus}\, z'' \tag{B2} $$
Neutral element:
$$ z \,\tilde{\oplus}\, 0 = g\!\left(f(z)\cdot f(0)\right) = g\!\left(f(z)\right) = z $$
Opposite:
$$ z \,\tilde{\oplus}\, [-z] = g\!\left(f(z)\cdot f([-z])\right) = g(1) = 0 $$
where $[-z] = g\!\left(1/f(z)\right)$.
For the product ˜ defined in Equation (23) we have
Commutativity:
$$ z \,\tilde{\otimes}\, z' = g\!\left(\exp\left[\ln f(z)\cdot\ln f(z')\right]\right) = g\!\left(\exp\left[\ln f(z')\cdot\ln f(z)\right]\right) = z' \,\tilde{\otimes}\, z $$
Associativity:
$$ z \,\tilde{\otimes}\, (z' \,\tilde{\otimes}\, z'') = z \,\tilde{\otimes}\, g\!\left(\exp\left[\ln f(z')\cdot\ln f(z'')\right]\right) = g\!\left(\exp\left[\ln f(z)\cdot\ln f(z')\cdot\ln f(z'')\right]\right) = g\!\left(\exp\left[\ln f(z)\cdot\ln f(z')\right]\right) \,\tilde{\otimes}\, z'' = (z \,\tilde{\otimes}\, z') \,\tilde{\otimes}\, z'' \tag{B6} $$
Identity element:
$$ z \,\tilde{\otimes}\, I = g\!\left(\exp\left[\ln f(z)\cdot\ln f(I)\right]\right) = g\!\left(f(z)\right) = z $$
where $I = g(e)$,
Inverse:
$$ z \,\tilde{\otimes}\, [1/z] = g\!\left(\exp\left[\ln f(z)\cdot\ln f([1/z])\right]\right) = g(e) = I $$
where $[1/z] = g\!\left(\exp\left(1/\ln f(z)\right)\right)$.
Finally,
Distributivity:
$$ z \,\tilde{\otimes}\, (z' \,\tilde{\oplus}\, z'') = z \,\tilde{\otimes}\, g\!\left(f(z')\cdot f(z'')\right) = g\!\left(\exp\left[\ln f(z)\cdot\ln\left(f(z')\,f(z'')\right)\right]\right) = g\!\left(\exp\left[\ln f(z)\ln f(z')+\ln f(z)\ln f(z'')\right]\right) = g\!\left(\exp\left[\ln f(z)\ln f(z')\right]\cdot\exp\left[\ln f(z)\ln f(z'')\right]\right) = g\!\left(f(z\,\tilde{\otimes}\,z')\cdot f(z\,\tilde{\otimes}\,z'')\right) = (z \,\tilde{\otimes}\, z') \,\tilde{\oplus}\, (z \,\tilde{\otimes}\, z'') \tag{B9} $$

C. Proof of Theorem 3

Starting from the functions $g(z)$ and $f(z)$ we define the g-numbers
$$ z_g = \exp g(z) \tag{C1} $$
and the f-numbers
$$ z_f = \ln f(z) \tag{C2} $$
Then, relations (14)–(15) and (22)–(23) can be rewritten as
$$ z_g + z'_g = (z \oplus z')_g \tag{C3} $$
$$ z_g \cdot z'_g = (z \otimes z')_g \tag{C4} $$
and
$$ z_f + z'_f = (z \,\tilde{\oplus}\, z')_f \tag{C5} $$
$$ z_f \cdot z'_f = (z \,\tilde{\otimes}\, z')_f \tag{C6} $$
The correspondence between g-numbers and f-numbers can be established through the solution of the equation
$$ z_g = w_f \tag{C7} $$
Accounting for Equation (C3)–(C6) we have
$$ (z \oplus z')_g = (w \,\tilde{\oplus}\, w')_f \tag{C8} $$
$$ (z \otimes z')_g = (w \,\tilde{\otimes}\, w')_f \tag{C9} $$
for $z_g = w_f$ and $z'_g = w'_f$.
In the case of $f(z) \equiv \exp(z)$ and $g(z) \equiv \ln(z)$, the two algebras collapse onto each other.

References

  1. Kaniadakis, G.; Scarfone, A.M.; Seno, F. New trends in modern statistical physics. Cent. Eur. J. Phys. 2012, 10, 539. [Google Scholar] [CrossRef]
  2. Kaniadakis, G.; Scarfone, A.M. Advances in modern condensed matter physics. Int. J. Mod. Phys. B 2012, 26, 1202001–1. [Google Scholar] [CrossRef]
  3. Kaniadakis, G.; Scarfone, A.M. Advances in modern statistical mechanics. Mod. Phys. Lett. B 2012, 26, 1202001–1. [Google Scholar] [CrossRef]
  4. Caldarelli, G.; Kaniadakis, G.; Scarfone, A.M. Progress in the physics of complex networks. Eur. J. Phys. Spec. Top. 2012, 212, 1–3. [Google Scholar] [CrossRef]
  5. Tsallis, C.; Carbone, A.; Kaniadakis, G.; Scarfone, A.M.; Malartz, K. Advances in statistical physics. Centr. Eur. J. Phys. 2009, 7, 385–386. [Google Scholar] [CrossRef]
  6. Abe, S.; Herrmann, H.; Quarati, P.; Rapisarda, A.; Tsallis, C. Complexity, metastability and nonextensivity. AIP Conf. Proc. 2007, 965, 9. [Google Scholar]
  7. Laherrère, J.; Sornette, D. Stretched exponential distributions in nature and economy: “Fat tails” with characteristic scales. Eur. Phys. J. B 1998, 2, 525–539. [Google Scholar] [CrossRef]
  8. Clauset, A.; Shalizi, C.R.; Newman, M.E.J. Power-law distributions in empirical data. SIAM Rev. 2009, 51, 661–703. [Google Scholar]
  9. Zaslavsky, G.M. Chaos, fractional kinetics, and anomalous transport. Phys. Rep. 2002, 371, 461–580. [Google Scholar] [CrossRef]
  10. Albert, R.; Barabási, A.-L. Statistical mechanics of complex networks. Rev. Mod. Phys. 2002, 74, 47–97. [Google Scholar] [CrossRef]
  11. Carbone, A.; Kaniadakis, G.; Scarfone, A.M. Where do we stand on econophysics? Physica A 2007, 382, 11–14. [Google Scholar] [CrossRef]
  12. Carbone, A.; Kaniadakis, G.; Scarfone, A.M. Tails and Ties. Eur. Phys. J. B 2007, 57, 121–125. [Google Scholar] [CrossRef]
  13. Sornette, D. Discrete-scale invariance and complex dimensions. Phys. Rep. 1998, 297, 239–270. [Google Scholar] [CrossRef]
  14. Blossey, R. Computational Biology—A Statistical Mechanics Prospective; Chapman & Hall Press: London, UK, 2006. [Google Scholar]
  15. Tsallis, C. Introduction to Nonextensive Statistical Mechanics; Springer: New York, NY, USA, 2009. [Google Scholar]
  16. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  17. Abe, S. A note on the q-deformation-theoretic aspect of the generalized entropies in nonextensive physics. Phys. Lett. A 1997, 224, 326–330. [Google Scholar] [CrossRef]
  18. Kaniadakis, G. Statistical mechanics in the context of special relativity. Phys. Rev. E 2002, 66, 056125. [Google Scholar] [CrossRef] [PubMed]
  19. Kaniadakis, G.; Lissia, M.; Scarfone, A.M. Two-parameter deformations of logarithm, exponential, and entropy: A consistent framework for generalized statistical mechanics. Phys. Rev. E 2005, 71, 046128. [Google Scholar] [CrossRef] [PubMed]
  20. Scarfone, A.M.; Wada, T. Thermodynamic equilibrium and its stability for microcanonical systems described by the Sharma-Taneja-Mittal entropy. Phys. Rev. E 2005, 72, 026123. [Google Scholar] [CrossRef] [PubMed]
  21. Anteneodo, C.; Plastino, A.R. Maximum entropy approach to stretched exponential probability distributions. J. Phys. A: Math. Gen. 1999, 32, 1089–1097. [Google Scholar] [CrossRef]
  22. Kaniadakis, G.; Scarfone, A.M. A new one-parameter deformation of the exponential function. Physica A 2002, 305, 69–75. [Google Scholar] [CrossRef]
  23. Nivanen, L.; Le Mehaute, A.; Wang, Q.A. Generalized algebra within a nonextensive statistics. Rep. Math. Phys. 2003, 52, 437–444. [Google Scholar] [CrossRef]
  24. Borges, E.P. A possible deformed algebra and calculus inspired in nonextensive thermostatistics. Physica A 2004, 340, 95–101. [Google Scholar] [CrossRef]
  25. Cardoso, P.G.S.; Borges, E.P.; Lobão, T.C.P.; Pinho, S.T.R. Nondistributive algebraic structures derived from nonextensive statistical mechanics. J. Math. Phys. 2008, 49, 093509. [Google Scholar] [CrossRef]
  26. Lobão, T.C.P.; Cardoso, P.G.S.; Pinho, S.T.R.; Borges, E.P. Some properties of deformed q-numbers. Braz. J. Phys. 2009, 39, 402–407. [Google Scholar] [CrossRef]
  27. El Kaabouchi, A.; Nivanen, L.; Wang, Q.A.; Badiali, J.P.; Le Méhauté, A. A mathematical structure for the generalization of conventional algebra. Centr. Eur. J. Phys. 2009, 7, 549–554. [Google Scholar] [CrossRef]
  28. Lenzi, E.K.; Borges, E.P.; Mendes, R.S. A q-generalization of Laplace transforms. J. Phys. A: Math. Gen. 1999, 32, 8551–8561. [Google Scholar] [CrossRef]
  29. Umarov, S.; Tsallis, C. On a representation of the inverse Fq-transform. Phys. Lett. A 2008, 372, 4874–4876. [Google Scholar] [CrossRef]
  30. Jauregui, M.; Tsallis, C. q-Generalization of the inverse Fourier transform. Phys. Lett. A 2011, 375, 2085–2088. [Google Scholar] [CrossRef]
  31. Jauregui, M.; Tsallis, C. New representations of π and Dirac delta using the nonextensive-statistical-mechanics q-exponential function. J. Math. Phys. 2010, 51, 063304. [Google Scholar] [CrossRef]
  32. Umarov, S.; Tsallis, C.; Steinberg, S. On a q-central limit theorem consistent with nonextensive statistical mechanics. Milan J. Math. 2008, 76, 307–328. [Google Scholar] [CrossRef]
  33. Umarov, S.; Duarte Queirós, S.M. Functional differential equations for the q-Fourier transform of q-Gaussians. J. Phys. A 2010, 43, 095202. [Google Scholar] [CrossRef]
  34. Niven, R.K.; Suyari, H. The q-gamma and (q,q)-polygamma functions of Tsallis statistics. Physica A 2009, 388, 4045–4060. [Google Scholar] [CrossRef]
  35. Niven, R.K.; Suyari, H. Combinatorial basis and non-asymptotic form of the Tsallis entropy function. Eur. Phys. J. B 2008, 61, 75–82. [Google Scholar] [CrossRef]
  36. Suyari, H.; Wada, T. Multiplicative duality, q-triplet and (μ,ν,q)-relation derived from the one-to-one correspondence between the (μ,ν)-multinomial coefficient and Tsallis entropy Sq. Physica A 2008, 387, 71–83. [Google Scholar] [CrossRef]
  37. Suyari, H. Mathematical structures derived from the q-multinomial coefficient in Tsallis statistics. Physica A 2006, 368, 63–82. [Google Scholar] [CrossRef]
  38. Oikonomou, Th. Tsallis, Renyi and nonextensive Gaussian entropy derived from the respective multinomial coefficients. Physica A 2007, 386, 119–134. [Google Scholar] [CrossRef]
  39. Scarfone, A.M.; Suyari, H.; Wada, T. Gauss’ law of error revisited in the framework of Sharma-Taneja-Mittal information measure. Centr. Eur. J. Phys. 2009, 7, 414–420. [Google Scholar] [CrossRef]
  40. Wada, T.; Suyari, H. kappa-generalization of Gauss’ law of error. Phys. Lett. A 2006, 348, 89–93. [Google Scholar] [CrossRef]
  41. Suyari, H.; Tsukada, M. Law of error in Tsallis statistics. IEEE Trans. Inf. Th. 2005, 51, 753–757. [Google Scholar] [CrossRef]
  42. Lavagno, A.; Scarfone, A.M.; Swamy, P.N. Basic-deformed thermostatistics. J. Phys. A: Math. Theor. 2007, 40, 8635–8654. [Google Scholar] [CrossRef]
  43. Olemskoi, A.I.; Borysov, S.S.; Shuda, I.A. Statistical field theories deformed within different calculi. Eur. Phys. J. B 2010, 77, 219–231. [Google Scholar] [CrossRef]
  44. Hanel, R.; Thurner, S. Generalized Boltzmann factors and the maximum entropy principle: Entropies for complex systems. Physica A 2007, 380, 109–114. [Google Scholar] [CrossRef]
  45. Hanel, R.; Thurner, S. When do generalized entropies apply? How phase space volume determines entropy. Eur. Phys. Lett. 2011, 96, 50003. [Google Scholar] [CrossRef]
  46. Tsallis, C. Occupancy of phase space, extensivity of S(q), and q-generalized central limit theorem. Physica A 2006, 365, 7–16. [Google Scholar] [CrossRef]
  47. Tsallis, C. Is the entropy Sq extensive or nonextensive? Astrophys. Space Sci. 2006, 305, 261–271. [Google Scholar] [CrossRef]
  48. Tsallis, C. Some open points in nonextensive statistical mechanics. Int. J. Bif. Chaos 2012, 22, 1230030. [Google Scholar] [CrossRef]
  49. Abe, S. Generalized entropy optimized by a given arbitrary distribution. J. Phys. A: Math. Gen. 2003, 36, 8733–8738. [Google Scholar] [CrossRef]
  50. Abe, S.; Kaniadakis, G.; Scarfone, A.M. Stabilities of generalized entropies. J. Phys. A: Math. Gen. 2004, 37, 10513–10519. [Google Scholar] [CrossRef]
  51. Hanel, R.; Thurner, S. A comprehensive classification of complex statistical systems and an axiomatic derivation of their entropy and distribution functions. Eur. Phys. Lett. 2011, 93, 20006. [Google Scholar] [CrossRef]
  52. Thurner, S.; Hanel, R. Generalized-generalized entropies and limit distributions. Braz. J. Phys. 2009, 39, 413–416. [Google Scholar] [CrossRef]
  53. Gradshteyn, I.S.; Ryzhik, I.M. Table of Integrals, Series and Products; Academic Press: San Diego, USA, 2000. [Google Scholar]
  54. Sharma, B.D.; Taneja, I.J. Entropy of type (α,β) and other generalized additive measures in information theory. Metrika 1975, 22, 205–215. [Google Scholar] [CrossRef]
  55. Sharma, B.D.; Mittal, D.P. New nonadditive measures of inaccuracy. J. Math. Sci. 1975, 10, 122–133. [Google Scholar]
  56. Mittal, D.P. On some functional equations concerning entropy, directed divergence and inaccuracy. Metrika 1975, 22, 35–46. [Google Scholar] [CrossRef]
  57. Wada, T.; Scarfone, A.M. Connections between Tsallis’ formalisms employing the standard linear average energy and ones employing the normalized q-average energy. Phys. Lett. A 2005, 335, 351–362. [Google Scholar] [CrossRef]
  58. Kaniadakis, G.; Lavagno, A.; Quarati, P. Kinetic approach to fractional exclusion statistics. Nucl. Phys. B 1996, 466, 527–537. [Google Scholar] [CrossRef]
  59. Feller, W. An Introduction to Probability Theory and Its Applications; John Wiley & Sons, Inc.: New York, NY, USA, 1966. [Google Scholar]
  60. Kaniadakis, G. Statistical mechanics in the context of special relativity. II. Phys. Rev. E 2005, 72, 036108. [Google Scholar] [CrossRef] [PubMed]
  61. Kalogeropoulos, N. Distributivity and deformation of the reals from Tsallis entropy. Physica A 2012, 391, 1120–1127. [Google Scholar] [CrossRef]
  62. Zhang, J.-Z.; Osland, P. Perturbative aspects of q-deformed dynamics. Eur. Phys. J. C 2001, 20, 393–396. [Google Scholar] [CrossRef]
  63. Kaniadakis, G.; Quarati, P.; Scarfone, A.M. Kinetical foundations of non-conventional statistics. Physica A 2002, 305, 76–83. [Google Scholar] [CrossRef]
  64. Wang, Q.A. Nonextensive statistics and incomplete information. Eur. Phys. J. B 2002, 26, 357–368. [Google Scholar] [CrossRef]
  65. Biró, T.S.; Kaniadakis, G. Two generalizations of the Boltzmann equation. Eur. J. Phys. B 2006, 50, 3–6. [Google Scholar] [CrossRef]
  66. Biró, T.S.; Purcsel, G. Equilibration of two power-law tailed distributions in a parton cascade model. Phys. Lett. A 2008, 372, 1174–1179. [Google Scholar] [CrossRef]
  67. Scarfone, A.M. Intensive variables in the framework of the non-extensive thermostatistics. Phys. Lett. A 2010, 374, 2701–2706. [Google Scholar] [CrossRef]
  68. Lenzi, E.K.; Scarfone, A.M. Extensive-like and intensive-like thermodynamical variables in generalized thermostatistics. Physica A 2012, 391, 2543–2555. [Google Scholar] [CrossRef]
  69. Kaniadakis, G. Maximum entropy principle and power-law tailed distributions. Eur. Phys. J. B 2009, 70, 3–13. [Google Scholar] [CrossRef]
  70. Kaniadakis, G. Relativistic entropy and related Boltzmann kinetics. Eur. Phys. J. A 2009, 40, 275–287. [Google Scholar] [CrossRef]
