
Information Geometric Duality of ϕ-Deformed Exponential Families

1 Section for Science of Complex Systems, CeMSIIS, Medical University of Vienna, Spitalgasse 23, A-1090 Vienna, Austria
2 Complexity Science Hub Vienna, Josefstädter Strasse 39, A-1080 Vienna, Austria
3 Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA
4 IIASA, Schlossplatz 1, A-2361 Laxenburg, Austria
* Author to whom correspondence should be addressed.
Entropy 2019, 21(2), 112; https://doi.org/10.3390/e21020112
Submission received: 24 December 2018 / Revised: 11 January 2019 / Accepted: 16 January 2019 / Published: 24 January 2019
(This article belongs to the Special Issue Information Theory in Complex Systems)

Abstract
In the world of generalized entropies—which, for example, play a role in physical systems with sub- and super-exponential phase space growth per degree of freedom—there are two ways for implementing constraints in the maximum entropy principle: linear and escort constraints. Both appear naturally in different contexts. Linear constraints appear, e.g., in physical systems, when additional information about the system is available through higher moments. Escort distributions appear naturally in the context of multifractals and information geometry. It was shown recently that there exists a fundamental duality that relates both approaches on the basis of the corresponding deformed logarithms (deformed-log duality). Here, we show that there exists another duality that arises in the context of information geometry, relating the Fisher information of ϕ-deformed exponential families that correspond to linear constraints (as studied by J. Naudts) to those that are based on escort constraints (as studied by S.-I. Amari). We explicitly demonstrate this information geometric duality for the case of the (c,d)-entropy, which covers all situations that are compatible with the first three Shannon–Khinchin axioms and that include Shannon, Tsallis, Anteneodo–Plastino entropy, and many more as special cases. Finally, we discuss the relation between the deformed-log duality and the information geometric duality and mention that the escort distributions arising in these two dualities are generally different and only coincide for the case of the Tsallis deformation.

1. Introduction

Entropy is one word for several distinct concepts [1]. It was originally introduced in thermodynamics, then in statistical physics and information theory, and finally in the context of statistical inference. One important application of entropy in statistical physics, and in statistical inference in general, is the maximum entropy principle, which allows us to estimate probability distribution functions from limited information sources, i.e., from data [2,3]. The formal concept of entropy was generalized to also account for the power laws that occur frequently in complex systems [4]. Literally dozens of generalized entropies have been proposed in various contexts, such as relativity [5], multifractals [6], or black holes [7]; see [8] for an overview. All these generalized entropies, whenever they fulfil the first three Shannon–Khinchin axioms (and violate the composition axiom), are asymptotically special cases of the (c,d)-entropy [9]. Generalized entropies play a role in non-multinomial, sub-additive systems (whose phase space volume grows sub-exponentially with the degrees of freedom) [10,11] and in systems whose phase space grows super-exponentially [12]. All generalized entropies, for sub- and super-exponential systems, can be treated within a single, unifying framework [13].
With the advent of generalized entropies, depending on context, two types of constraint are used in the maximum entropy principle: traditional linear constraints (typically moments), $\langle E \rangle = \sum_i p_i E_i$, motivated by physical measurements, and the so-called escort constraints, $\langle E \rangle_u = \sum_i u(p_i) E_i / \sum_i u(p_i)$, where $u$ is some nonlinear function. Originally, the latter were introduced with multifractals in mind [4]. Different types of constraint arise in different applications of relative entropy. While in physics-related contexts (such as thermodynamics) linear constraints are normally used, in other applications, such as non-linear dynamical systems or information geometry, it might be more natural to consider escort constraints. The question about their correct use and the appropriate form of constraints has caused a heated debate in the past decade [14,15,16,17,18,19]. To introduce escort distributions into the maximum entropy principle in a consistent way, two approaches have been discussed. The first [20] appears in the context of deformed entropies that are motivated by superstatistics [21]. It was later observed in [22] that this approach is linked to other deformed entropies with linear constraints through a fundamental duality (deformed-log duality), such that both entropies lead to the same functional form of MaxEnt distributions. The second way to obtain escort distributions was studied by Amari et al. and is motivated by information geometry and the theory of statistical estimation [23,24]. There, escort distributions represent natural coordinates on a statistical manifold [24,25].
In this paper, we show that there exists another duality relation between this information geometric approach with escort distributions and an approach that uses linear constraints. The relation can be given a precise information geometric meaning on the basis of the Fisher information. We show this within the framework of ϕ-deformations [26,27,28]. We establish the duality relation for both cases in the relevant information geometric quantities. As an example, we explicitly show the duality relation for the class of (c,d)-exponentials, introduced in [9,10]. Finally, we discuss the relation between the deformed-log duality and the information geometric duality and show that these have fundamental differences. Each type of duality is suitable for different applications. We hope that this paper helps to avoid confusion in the use of escort distributions in the various contexts.
Let us start by reviewing central concepts of (non-deformed) information geometry, in particular relative entropy and its relation to the exponential family of distributions through the maximum entropy principle. Consider a probability simplex, $S_n$, with $n$ independent probabilities $p_i$ and the probability $p_0$, whose value is not independent but determined by the normalization condition, $p_0 = 1 - \sum_i p_i$. Further, consider a parametric family of distributions $p(\theta)$ with parameter vector $\theta \in M$, where $M$ is a parametric space. In this paper, we focus on probabilities over discrete sample spaces only. For the continuous case, see the mathematical formulation of Pistone and Sempi [29]. For the sake of simplicity, we sometimes do not explicitly write the parameter vector θ and consider $p_i$ as the independent parameters of the distribution. It is easy to show that the choice of $p_i$ determines the choice of the parameters $\theta_i$.
Relative entropy, or Kullback–Leibler divergence, is defined as:
$D_{KL}(p\|q) = \sum_i p_i \log(p_i/q_i)$.  (1)
For the uniform distribution $q = u_n$, i.e., $q_i = 1/n$, we have:
$D_{KL}(p\|u_n) = \log n - S(p)$,  (2)
where $S(p)$ is the Shannon entropy, $S(p) = -\sum_i p_i \log p_i$. Let us consider a set of linear constraints, $\sum_i p_i E_{ij} = \langle E_j \rangle$, and denote the configuration vector of state $i$ by $E_i$. Shannon entropy is maximized under this set of linear constraints, together with the normalization condition, by functions belonging to the exponential family of probability distributions, which can be written as:
$p_i(\theta) \equiv p(E_i;\theta) = \exp\left(-\Psi(\theta) + \theta \cdot E_i\right) = \exp\Big(-\Psi(\theta) + \sum_j \theta_j E_{ij}\Big)$.  (3)
$\Psi(\theta)$ guarantees normalization. The Fisher information defines the metric on the parametric manifold $M$; it is obtained by taking two infinitesimally-separated points, $\theta_0$ and $\theta = \theta_0 + \delta\theta$, and by expanding $D_{KL}(p(\theta_0)\|p(\theta))$:
$g^F_{ij}(\theta) = \left.\dfrac{\partial^2 D_{KL}(p(\theta_0)\,\|\,p(\theta))}{\partial \theta_i \, \partial \theta_j}\right|_{\theta=\theta_0}$.  (4)
For the exponential family of distributions in Equation (3), it is a well-known fact that the Fisher information is given in terms of inverse probabilities:
$g^F_{ij}(\theta) \equiv g^F_{ij}(p(\theta)) = \dfrac{1}{p_i}\,\delta_{ij} + \dfrac{1}{p_0}$.  (5)

2. Deformed Exponential Family

We briefly recall the definition of ϕ-deformed logarithms and exponentials, as introduced by Naudts [26]. The deformed logarithm is defined as:
$\log_\phi(x) = \int_1^x \dfrac{\mathrm{d}y}{\phi(y)}$,  (6)
for some positive, strictly-increasing function $\phi(x)$ defined on $(0,+\infty)$. Then, $\log_\phi$ is an increasing, concave function with $\log_\phi(1) = 0$; $\log_\phi(x)$ is negative on $(0,1)$ and positive on $(1,+\infty)$. Naturally, the derivative of $\log_\phi(x)$ is $1/\phi(x)$. The inverse function of $\log_\phi(x)$ exists; we denote it by $\exp_\phi(x)$.
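The integral definition can be evaluated directly by numerical quadrature. The following sketch (our illustration; the Tsallis choice $\phi(x) = x^q$ is an assumption made only for the sake of the example) compares Equation (6) with the known closed form of the q-logarithm:

```python
import numpy as np
from scipy.integrate import quad

# Eq. (6): log_phi(x) = integral of 1/phi(y) from 1 to x.
# For phi(y) = y**q, the closed form is (x**(1-q) - 1)/(1 - q).

def log_phi(x, phi):
    val, _ = quad(lambda y: 1.0 / phi(y), 1.0, x)
    return val

q = 1.5
phi = lambda y: y**q
for x in [0.2, 0.7, 1.0, 3.0]:
    closed = (x**(1 - q) - 1.0) / (1.0 - q)
    print(f"x={x}: quadrature={log_phi(x, phi): .6f}  closed={closed: .6f}")
```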
Finally, the ϕ-exponential family of probability distributions is defined as a generalization of Equation (3):
$p_i(\theta) = \exp_\phi\Big(-\Psi(\theta) + \sum_j \theta_j E_{ij}\Big)$.  (7)
We can express $\Psi(\theta)$ in the form:
$\Psi(\theta) = -\log_\phi(p_i(\theta)) + \theta \cdot E_i$,  (8)
which allows us to introduce dual coordinates to θ. This is nothing but the Legendre transform of $\Psi(\theta)$, which is defined as:
$\varphi(\eta) = \eta \cdot \theta - \Psi(\theta)$,  (9)
where:
$\eta = \nabla_\theta \Psi(\theta)$.  (10)
Because:
$\dfrac{\partial}{\partial\theta_j}\, p_i(\theta) = \exp'_\phi\left(-\Psi(\theta) + \theta\cdot E_i\right)\left(-\dfrac{\partial \Psi(\theta)}{\partial\theta_j} + E_{ij}\right)$  (11)
holds, and using $\sum_i \frac{\partial}{\partial\theta_j} p_i(\theta) = 0$, we obtain that:
$\eta_j = \dfrac{\sum_i \exp'_\phi(-\Psi(\theta) + \theta\cdot E_i)\, E_{ij}}{\sum_i \exp'_\phi(-\Psi(\theta) + \theta\cdot E_i)} = \sum_i E_{ij}\, P^\phi_i$,  (12)
where $P^\phi$ is the so-called escort distribution. With $\exp'_\phi(\log_\phi(x)) = \phi(x)$, the elements of $P^\phi$ are given by:
$P^\phi_j = \dfrac{\phi(p_j)}{\sum_i \phi(p_i)} = \dfrac{\phi(p_j)}{h_\phi(p)}$,  (13)
where we define $h_\phi(p) \equiv \sum_i \phi(p_i)$. The Legendre transform thus provides a connection between the ϕ-exponential family and the escort family of probability distributions: the dual coordinates are obtained in the form of escort averages. This generalizes the result for the ordinary exponential family of distributions, where the dual coordinates form a mixture family, obtained as a superposition of the original distributions. The importance of dual coordinates in information geometry comes from the existence of a dually-flat geometry for the pair of coordinates. This means that there exist two affine connections with vanishing coefficients (Christoffel symbols). For the exponential family of distributions, the connection determined by the exponential distribution is called the e-connection, and the dual connection, leading to a mixture family, is called the m-connection [25]. For more details, see, e.g., [24]. We next look at generalizations of the Kullback–Leibler divergence and the Fisher information for the case of ϕ-deformations.
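Before that, a concrete illustration of Equations (12) and (13) (our sketch; the deformation $\phi(x) = \sqrt{x}$ and the observable values are arbitrary choices) shows how the escort distribution and the resulting dual coordinate are computed:

```python
import numpy as np

# Escort distribution of Eq. (13): P_j = phi(p_j) / h_phi(p),
# and the dual coordinate eta of Eq. (12) as an escort average.

def escort(p, phi):
    w = phi(p)
    return w / w.sum()

p = np.array([0.1, 0.3, 0.6])
E = np.array([1.0, 2.0, 5.0])           # toy configuration values E_i
P = escort(p, np.sqrt)                  # phi(x) = x**(1/2)
print("escort P^phi:", P)
print("eta (escort mean):", P @ E, " vs linear mean:", p @ E)
```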

3. Deformed Divergences, Entropies, and Metrics

For the ϕ-deformed exponential family of distributions, we have to define the proper generalizations of the relevant quantities, such as the entropy, divergence, and metric. A natural approach is to start with the deformed Kullback–Leibler divergence, denoted by $D_\phi(p\|q)$. The ϕ-entropy can then be defined as:
$S_\phi(p) \sim -D_\phi(p\,\|\,u_n)$,  (14)
where ∼ means that the relation holds up to multiplicative and additive constants depending only on $n$. Similarly, the ϕ-deformed Fisher information is:
$g_{\phi,ij}(\theta) = \left.\dfrac{\partial^2 D_\phi(p(\theta_0)\,\|\,p(\theta))}{\partial \theta_i\,\partial \theta_j}\right|_{\theta=\theta_0}$.  (15)
There is now more than one way to generalize the ordinary Kullback–Leibler divergence. The first is Csiszár's divergence [30]:
$I_f(p\|q) = \sum_i q_i\, f(p_i/q_i)$,  (16)
where $f$ is a convex function. For $f(x) = x \ln x$, we obtain the Kullback–Leibler divergence. Note, however, that the related information geometry based on the generalized Fisher information is trivial, because we have:
$g^f_{ij}(p) = f''(1)\, g^F_{ij}(p)$,  (17)
i.e., a rescaled Fisher information metric; see [27]. The second possibility is to use a divergence of the Bregman type, usually defined as:
$D_f(p\|q) = f(p) - f(q) - \nabla f(q) \cdot (p - q)$,  (18)
where the symbol "·" denotes the scalar product. This type of divergence can be understood as the difference between $f(p)$ and the first-order Taylor expansion of $f$ around $q$, evaluated at $p$. Let us next discuss the two possible types of Bregman divergence, which naturally correspond to the ϕ-deformed family of distributions. For both, the ϕ-exponential family of distributions is obtained from the maximum entropy principle of the corresponding ϕ-entropy, however, under different constraints. Note that the maximum entropy principle is just a special version of the more general minimal relative entropy principle, which minimizes the divergence functional $D(p\|q)$ w.r.t. $p$, for some given prior distribution $q$.
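A minimal sketch of the Bregman construction in Equation (18) (our illustration, not taken from the paper): for the convex generator $f(p) = \sum_i p_i \log p_i$, the Bregman divergence between normalized distributions reduces to the Kullback–Leibler divergence.

```python
import numpy as np

# Bregman divergence of Eq. (18): D_f(p||q) = f(p) - f(q) - grad f(q).(p - q).
# For f(p) = sum_i p_i log p_i and normalized p, q, the linear term vanishes
# and D_f equals the Kullback-Leibler divergence.

def bregman(f, grad_f, p, q):
    return f(p) - f(q) - grad_f(q) @ (p - q)

f = lambda p: np.sum(p * np.log(p))
grad_f = lambda p: np.log(p) + 1.0

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.4, 0.4, 0.2])
print(bregman(f, grad_f, p, q))     # Bregman divergence
print(np.sum(p * np.log(p / q)))    # same value: D_KL(p||q)
```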

3.1. Linear Constraints: Divergence à la Naudts

One generalization of the Kullback–Leibler divergence was introduced by Naudts [26] by considering $f(p) = \sum_i \left(\int_1^{p_i} \log_\phi(x)\,\mathrm{d}x + (1 - p_i)\right)$, which leads to:
$D^N_\phi(p\|q) = \sum_j \int_{q_j}^{p_j} \mathrm{d}x \left(\log_\phi(x) - \log_\phi(q_j)\right)$.  (19)
The corresponding entropy can be expressed as:
$S^N_\phi(p) = -\sum_j \int_0^{p_j} \mathrm{d}x \, \log_\phi(x)$.  (20)
$S^N_\phi(p)$ is maximized by the ϕ-exponential family of distributions under linear constraints. The Lagrange functional is:
$\mathcal{L}_\phi(p) = S^N_\phi(p) - \Psi \sum_i p_i + \sum_j \theta_j \sum_i p_i E_{ij}$,  (21)
which leads to:
$-\log_\phi(p_i) - \Psi + \sum_j \theta_j E_{ij} = 0$,  (22)
and we get:
$\Psi(\theta) = -\sum_i p_i \log_\phi(p_i) + \sum_j \theta_j \langle E_j \rangle = -\langle \log_\phi(p) \rangle + \sum_j \theta_j \langle E_j \rangle$,  (23)
which is just Equation (8), averaged over the distribution $p_i$. Note that Equation (23) provides the connection to thermodynamics, because $\Psi(\theta)$ is a so-called Massieu function. For a canonical ensemble, i.e., one constraint on the average energy $\langle E \rangle$, the parameter θ plays the role of an inverse temperature, and Ψ can be related to the free energy, $F(\theta) = \theta^{-1}\Psi(\theta)$. Thus, the term $-\langle \log_\phi(p) \rangle$ can be interpreted as the thermodynamic entropy, which is determined from Equation (23). This is a consequence of the Legendre structure of thermodynamics.
The corresponding MaxEnt distribution can be written as:
$p_i(\theta) = \exp_\phi\Big(\langle \log_\phi(p) \rangle - \sum_j \theta_j \left(\langle E_j \rangle - E_{ij}\right)\Big) = \exp_\phi\left(-\Psi(\theta) + \theta \cdot E_i\right)$.  (24)
Finally, the Fisher information metric can be obtained in the following form:
$g^N_{\phi,ij}(p) = \log'_\phi(p_i)\,\delta_{ij} + \log'_\phi(p_0) = \dfrac{1}{\phi(p_i)}\,\delta_{ij} + \dfrac{1}{\phi(p_0)}$.  (25)

3.2. Escort Constraints: Divergence à la Amari

Amari et al. [23,24] used a different divergence, introduced in [31], which is based on the choice $f(p) = \sum_i P^\phi_i \log_\phi(p_i)$. This choice is motivated by the fact that the corresponding entropy is just the dual function of $\Psi(\theta)$, i.e., $\varphi(\eta)$. This is easy to show, because:
$\varphi(\eta) = \eta \cdot \theta - \Psi(\theta) = \sum_j P^\phi_j \left(\theta \cdot E_j - \Psi(\theta)\right) = \sum_j P^\phi_j \log_\phi(p_j)$.  (26)
Thus, the divergence becomes:
$D^A_\phi(p\|q) = \dfrac{1}{h_\phi(p)} \sum_j \phi(p_j)\left(\log_\phi(p_j) - \log_\phi(q_j)\right)$,  (27)
and the corresponding entropy can be expressed from Equation (26) as:
$S^A_\phi(p) = -\dfrac{1}{h_\phi(p)} \sum_j \phi(p_j)\log_\phi(p_j)$,  (28)
so it is a dual function of $\Psi(\theta)$. For this reason, the entropy is called "canonical", because it is obtained by the Legendre transform from the Massieu function Ψ. Interestingly, the entropy is maximized by the ϕ-exponential family of distributions under escort constraints. The Lagrange function is:
$\mathcal{L}_\phi(p) = S^A_\phi(p) - \Psi \sum_i p_i + \sum_j \theta_j \sum_i P^\phi_i E_{ij}$.  (29)
After a straightforward calculation, we get:
$\Psi = -\Big(\sum_i \phi(p_i(\theta))\Big)^{-1}$,  (30)
and the corresponding MaxEnt distribution can be expressed as:
$p_i(\theta) = \exp_\phi\Big(\langle \log_\phi(p) \rangle_\phi - \sum_j \theta_j \left(\langle E_j \rangle_\phi - E_{ij}\right)\Big) = \exp_\phi\left(-\Psi(\theta) + \theta \cdot E_i\right)$,  (31)
where:
$\Psi(\theta) = -\langle \log_\phi(p) \rangle_\phi + \sum_j \theta_j \langle E_j \rangle_\phi$.  (32)
Here, $\langle \cdot \rangle_\phi$ denotes the average under the escort probability measure, $P^\phi$. Interestingly, in the escort-constraints scenario, the "MaxEnt" entropy is the same as the "thermodynamic" entropy in the case of linear constraints. We call this entropy, $S^A_\phi(p)$, the dual entropy. Finally, one obtains the corresponding metric:
$g^A_{\phi,ij}(p) = \dfrac{1}{h_\phi(p)}\left(-\dfrac{\log''_\phi(p_i)}{\log'_\phi(p_i)}\,\delta_{ij} - \dfrac{\log''_\phi(p_0)}{\log'_\phi(p_0)}\right) = \dfrac{1}{h_\phi(p)}\left(\dfrac{\phi'(p_i)}{\phi(p_i)}\,\delta_{ij} + \dfrac{\phi'(p_0)}{\phi(p_0)}\right)$.  (33)
Note that the metric can be obtained from $\Psi(\theta)$ as $g^A_{\phi,ij}(\theta) = \frac{\partial^2 \Psi(\theta)}{\partial\theta_i\,\partial\theta_j}$, which is a consequence of the Legendre structure of the escort coordinates [24]. For a summary of the ϕ-deformed divergence, entropy, and metric, see Table 1.
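For comparison with the Naudts case, the dual entropy (28) and the conformal metric (33) can also be evaluated directly (again our sketch, with $\phi(x) = x^q$ as an assumed example):

```python
import numpy as np

# Dual entropy, Eq. (28), and Amari metric diagonal, Eq. (33),
# for the illustrative deformation phi(x) = x**q.

q = 1.3
phi  = lambda x: x**q
dphi = lambda x: q * x**(q - 1.0)                     # phi'(x)
log_phi = lambda x: (x**(1 - q) - 1.0) / (1.0 - q)

p = np.array([0.2, 0.3, 0.5])
h = phi(p).sum()                                      # h_phi(p)
S_A = -(phi(p) * log_phi(p)).sum() / h                # Eq. (28)
g_A_diag = dphi(p) / (h * phi(p))                     # Eq. (33), diagonal part
print("S^A_phi(p) =", S_A)
print("Amari metric diagonal:", g_A_diag)
```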

3.3. Cramér–Rao Bound of Naudts Type

One of the important applications of the Fisher metric is the so-called Cramér–Rao bound, which is the lower bound for the variance of an unbiased estimator. The generalization of the Cramér–Rao bound for two families of distributions was given in [26,27]. Assume these two families of distributions to be denoted by $p(\theta)$ and $P(\theta)$, with their corresponding expectation values, $\langle \cdot \rangle_{p(\theta)}$ and $\langle \cdot \rangle_{P(\theta)}$. Let $c_k$ denote the estimator of the family $p(\theta)$ that fulfills $\langle c_k \rangle_{p(\theta)} = \frac{\partial}{\partial\theta_k} f(\theta)$, for some function $f$, and let us consider the mild regularity condition $\left\langle \frac{1}{P(\theta)} \frac{\partial}{\partial\theta_k} p(\theta) \right\rangle_{P(\theta)} = 0$. Then:
$\langle c_k c_l \rangle_{P(\theta)} - \langle c_k \rangle_{P(\theta)} \langle c_l \rangle_{P(\theta)} \;\geq\; \left(\dfrac{\partial^2 f(\theta)}{\partial\theta_k\,\partial\theta_l}\right)^2 \dfrac{1}{I_{kl}(\theta)}$,  (34)
where:
$I_{kl}(\theta) = \sum_i \dfrac{1}{P_i(\theta)}\,\dfrac{\partial p_i(\theta)}{\partial\theta_k}\,\dfrac{\partial p_i(\theta)}{\partial\theta_l}$.  (35)
If $p(\theta) = p_\phi(\theta)$ is the ϕ-exponential family of distributions, equality in Equation (34) holds for the escort distribution $P(\theta) = P^\phi(\theta)$ [28]. It is easy to see that in this case, i.e., for the ϕ-exponential family and the corresponding escort distribution, the following is true:
$I^N_{\phi;kl}(p) = h_\phi(p)\, g^N_{\phi;kl}(p)$.  (36)
This provides a connection between the Cramér–Rao bound and the ϕ-deformed Fisher metric. In the next section, we show that the Cramér–Rao bound can also be estimated for the case of the Fisher metric of the "Amari type".
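In the simplex coordinates used above, Equation (36) can be checked directly. The sketch below (our construction; $\phi(x) = x^q$ and the probabilities are arbitrary) builds the information matrix (35) from the Jacobian $\partial p_i / \partial p_k$, which includes the row for the dependent probability $p_0$:

```python
import numpy as np

# Check Eq. (36): with P = P^phi, the information matrix of Eq. (35),
# written in simplex coordinates (p_1, ..., p_n), equals h_phi(p) * g^N.

q = 1.3
phi = lambda x: x**q
p = np.array([0.2, 0.3, 0.5])           # p[0] plays the role of p_0
P = phi(p) / phi(p).sum()               # escort distribution
h = phi(p).sum()

n = len(p) - 1
J = np.vstack([-np.ones(n), np.eye(n)]) # dp_i/dp_k, i = 0..n, k = 1..n
I = (J.T / P) @ J                       # Eq. (35)
g_N = np.diag(1.0 / phi(p[1:])) + 1.0 / phi(p[0])
print(np.allclose(I, h * g_N))          # True
```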

4. The Information Geometric “Amari–Naudts” Duality

In the previous section, we have seen that there are at least two natural ways to generalize the divergence, such that the ϕ-exponential family of distributions maximizes the associated entropy functional, however, under different types of constraint. These two ways result in two different geometries on the parameter manifold. The relation between the metrics $g^A_{\phi,ij}$ and $g^N_{\phi,ij}$ can be expressed by an operator, $T$:
$g^A_{ij}(p) = T\left(g^N_{ij}(p)\right)$,  (37)
where
$T(g(x)) = -\dfrac{1}{N_g}\,\dfrac{\mathrm{d}}{\mathrm{d}x} \log g(x)$,  (38)
with the normalization factor $N_g = \sum_i 1/g(p_i)$. Note that the operator acts locally on the elements of the metric. In order to establish the connection to the Cramér–Rao bound, let us focus on the transformation of $g^A$.
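A quick consistency check of the operator (38) (a sketch of ours under the Tsallis choice $\phi(x) = x^q$): applying $T$ to the Naudts element $g^N(x) = 1/\phi(x)$ should reproduce the Amari element $\phi'(x)/(h_\phi(p)\,\phi(x))$ of Equation (33).

```python
import numpy as np

# Eq. (38): T(g)(x) = -(log g(x))' / N_g with N_g = sum_i 1/g(p_i).
# For g(x) = 1/phi(x), this gives phi'(x)/(h_phi * phi(x)), i.e., the
# diagonal element of the Amari metric, Eq. (33).

q = 1.4
phi = lambda x: x**q
p = np.array([0.25, 0.35, 0.40])
h = phi(p).sum()                        # N_g = sum_i 1/g(p_i) = h_phi(p)

g_N = lambda x: 1.0 / phi(x)
eps = 1e-6
dlog = (np.log(g_N(p + eps)) - np.log(g_N(p - eps))) / (2 * eps)
print(-dlog / h)                        # T applied to g^N, numerically
print(q / (h * p))                      # analytic Amari diagonal for x**q
```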

4.1. Cramér–Rao Bound of the Amari Type

The metric of the "Amari case" can be seen as a conformal transformation [32] of a metric of the "Naudts case" for a different deformation of the logarithm. Two metric tensors are connected by a conformal transformation if they have the same form, except for a global conformal factor, $\Omega(p)$, which depends only on the point $p$. Our aim is to connect the Amari metric with the Cramér–Rao bound and obtain another type of bound for estimates that are based on escort distributions. To this end, let us consider a general metric of the Naudts type, corresponding to a χ-deformation, and a metric of the Amari type, corresponding to a ξ-deformation. They are connected through the conformal transformation, which acts globally on the whole metric. The relation can be expressed as:
$g^N_{\chi,ij}(p) = \Omega(p)\, g^A_{\xi,ij}(p)$.  (39)
By using the previous results in this relation, we obtain:
$\dfrac{1}{\chi(p_i)} = \dfrac{\Omega(p)}{h_\xi(p)}\,\dfrac{\xi'(p_i)}{\xi(p_i)}$,  (40)
from which we see that $\Omega(p) = h_\xi(p)$ and $\log_\chi(x) = \log(\xi(x))$, i.e.,
$\xi(x) = \exp\left(\log_\chi(x)\right), \qquad \log_\xi(x) = \int_1^x \exp\left(-\log_\chi(y)\right)\mathrm{d}y$.  (41)
Note that $\log_\chi$ might not be concave, because:
$\dfrac{\mathrm{d}^2}{\mathrm{d}x^2}\log_\chi(x) = \dfrac{\xi''(x)\,\xi(x) - \xi'(x)^2}{\xi(x)^2}$.  (42)
Concavity is guaranteed if $\xi''(x) \leq \xi'(x)^2/\xi(x)$. To now make the connection with the Cramér–Rao bound, let us take $\chi(x) = \phi(x)$, so $\xi(x) = \exp\left(\log_\phi(x)\right)$, and:
$I^A_{\phi;kl}(p) = h_{\exp(\log_\phi)}(p)\; g^A_{\exp(\log_\phi);kl}(p)$.  (43)
As a consequence, there exist two types of Cramér–Rao bound for a given escort distribution, which might be used to estimate the lower bound of the variance of an unbiased estimator, obtained from two types of Fisher information.

4.2. Example: Duality of ( c , d ) -Entropy

We demonstrate the "Amari–Naudts" duality on the general class of (c,d)-entropies [9,10], which includes all deformations associated with statistical systems that fulfil the first three Shannon–Khinchin axioms. These include most of the popular deformations, such as the Tsallis q-exponentials [4] and the stretched exponentials studied in connection with the entropies of Anteneodo and Plastino [33]. The generalized (c,d)-logarithm is defined as:
$\log_{(c,d)}(x) = r\left(1 - x^{c-1}\left[1 - \dfrac{1-(1-c)r}{dr}\,\log x\right]^{d}\right)$,  (44)
where $c$ and $d$ are the scaling exponents [8,9] and $r$ is a free scale parameter (that does not influence the asymptotic behavior). The associated ϕ-deformation is:
$\phi_{(c,d)}(x) = \dfrac{x}{r - \log_{(c,d)}(x)}\;\dfrac{dr - ((c-1)r+1)\log(x)}{d + (c-1)((c-1)r+1)\log(x)}$.  (45)
The inverse function of $\log_{(c,d)}$, the deformed (c,d)-exponential, can be expressed in terms of the Lambert W-function, which is the solution of the equation $W(z)e^{W(z)} = z$. The deformed (c,d)-exponential is:
$\exp_{(c,d)}(x) = \exp\left(-\dfrac{d}{1-c}\left[W\left(B\left(1 - \dfrac{x}{r}\right)^{1/d}\right) - W(B)\right]\right)$,  (46)
where $B = \dfrac{(1-c)r}{1-(1-c)r}\,\exp\left(\dfrac{(1-c)r}{1-(1-c)r}\right)$.
The corresponding entropy that is maximized by (c,d)-exponentials (see [8] for their properties) is the (c,d)-entropy:
$S_{(c,d)}(p) = \dfrac{r\,e^{A}}{A^{d}} \sum_i \Gamma\left(1+d,\; A - c\ln p_i\right) - rc$,  (47)
where $A = \dfrac{cdr}{1-(1-c)r}$. This is an entropy of the "Naudts type", since it is maximized by (c,d)-exponentials under linear constraints. We can immediately write the metric as:
$g^N_{(c,d),ij}(p) = \dfrac{r - \log_{(c,d)}(p_i)}{p_i}\,\dfrac{(c-1)((c-1)r+1)\log(p_i) + d}{dr - ((c-1)r+1)\log(p_i)}\,\delta_{ij} \;+\; \dfrac{r - \log_{(c,d)}(p_0)}{p_0}\,\dfrac{(c-1)((c-1)r+1)\log(p_0) + d}{dr - ((c-1)r+1)\log(p_0)}$.  (48)
The corresponding entropy of the "Amari type", i.e., maximized by (c,d)-exponentials under the escort constraints:
$\dfrac{\sum_i \phi_{(c,d)}(p_i)\,E_{ij}}{\sum_k \phi_{(c,d)}(p_k)} = \langle E_j \rangle_{(c,d)}$,  (49)
is:
$S^A_{(c,d)}(p) = -\dfrac{1}{h_{(c,d)}(p)} \sum_i \dfrac{p_i\,\log_{(c,d)}(p_i)}{r - \log_{(c,d)}(p_i)}\,\dfrac{dr - ((c-1)r+1)\log(p_i)}{d + (c-1)((c-1)r+1)\log(p_i)}$,  (50)
and its metric, which follows from Table 1 with $\phi = \phi_{(c,d)}$ of Equation (45), is:
$g^A_{(c,d),ij}(p) = \dfrac{1}{h_{(c,d)}(p)}\left(\dfrac{\phi'_{(c,d)}(p_i)}{\phi_{(c,d)}(p_i)}\,\delta_{ij} + \dfrac{\phi'_{(c,d)}(p_0)}{\phi_{(c,d)}(p_0)}\right)$,  (51)
where the logarithmic derivative $\phi'_{(c,d)}/\phi_{(c,d)}$ can be evaluated explicitly from Equation (45).
The metric of the Amari type for the (c,d)-entropy was already discussed in [34], based on (c,d)-logarithms. However, as demonstrated above, the metric can be found without using the inverse ϕ-deformed logarithms, which in the case of (c,d)-logarithms lead to Lambert W-functions. The Fisher metric of the Naudts and Amari types and the corresponding Cramér–Rao bound are shown in Figure 1. The scale parameter is set (following [9]) to $r = 1/(1-c+cd)$ for $d \geq 0$, and $r = \exp(-d)/(1-c)$ for $d < 0$. The Fisher metric of both types is displayed in Figure 2 as a function of the parameters $c$ and $d$ for a given point, $P = (1/3, 2/3)$. We see that both types of metric have a singularity at $(c,d) = (1,0)$. This point corresponds to distributions with compact support. For one-dimensional distributions, the singularity corresponds to the transition between distributions with support on the real line and distributions with support on a finite interval.
Interestingly, for $(c,d) = (q,0)$, the metric simplifies to:
$g^A_{(q,0),ij}(p) = \dfrac{2-q}{p_i}\,\delta_{ij} + \dfrac{2-q}{p_0}$,  (52)
which corresponds to the Tsallis q-exponential family of distributions. Therefore, $g^A_{(q,0),ij}(p)$ is just a conformal transformation of the Fisher information metric of the exponential family of distributions, as shown in [24]. Note that only for Tsallis q-exponentials can the relation between $S^N_{q'}(p)$ and $S^A_q(p)$ be expressed as (see also Table 2):
$S^A_q(p) = f\left(S^N_{q'}(p)\right)$,  (53)
where $f$ is a simple monotonic function and $q' = 2-q$. This is nothing but the well-known additive duality $q \leftrightarrow 2-q$ of Tsallis entropy [11]. Interestingly, q-escort distributions form a group, with $\phi_q(\phi_{q'}(x)) = \phi_{q \cdot q'}(x)$ and $\phi_q^{-1}(x) = \phi_{1/q}(x)$, where $q \leftrightarrow 1/q$ is the multiplicative duality [35].
This is not the case for more general deformations, because typically the inverse deformation does not belong to the class of escort distributions. Popular deformations belonging to the (c,d)-family, such as the Tsallis q-exponential family and the stretched exponential family, are summarized in Table 2.

5. Connection to the Deformed-Log Duality

A different duality of entropies and their associated logarithms under linear and escort averages has been discussed in [22]. There, two approaches were discussed. The first uses the generalized entropy of trace form under linear constraints. It was denoted by:
$S_{HT}(p) = \sum_i s_{HT}(p_i) = -\sum_i \int_0^{p_i} \log_{HT}(x)\,\mathrm{d}x$.  (54)
It corresponds to the Naudts case, with $\log_{HT}(x) = \log_{\phi_{HT}}(x)$. The second approach, originally introduced by Tsallis and Souza [20], uses the trace-form entropy:
$S_{TS}(p) = \sum_i s_{TS}(p_i) = -\sum_i \int_0^{p_i} \log_{TS}(x)\,\mathrm{d}x$.  (55)
Note that in [20], the notion of the deformed logarithm is not used (as in Equation (55)). However, it is again an entropy of the Naudts type, with the deformed logarithm $\log_{TS}(x) = \log_{\phi_{TS}}(x)$. Equation (55) is maximized under the escort constraints:
$\langle E \rangle_{TS} = \dfrac{\sum_j u(p_j) E_j}{\sum_j u(p_j)}$,  (56)
where $u(p_j) = p_j + \nu\, s_{TS}(p_j)$. The linear constraints are recovered for $\nu = 0$. This form is dictated by the Shannon–Khinchin axioms, as discussed in the next section. Let us assume that the maximization of both approaches—Equation (54) under linear, and Equation (55) under escort constraints—leads to the same MaxEnt distribution. One can then show that there exists the following duality (deformed-log duality) between $\log_{HT}(x)$ and $\log_{TS}(x)$:
$\log_{TS}(x) = \dfrac{1}{\dfrac{1}{\log_{HT}(x)} + \nu}$.  (57)
Let us focus on specific ϕ-deformations, so that $\log_{HT}(x) = \log_{\phi_{HT}}(x)$. Then, $\log_{TS}(x)$ is also a ϕ-deformation, with:
$\phi_{TS}(x) = \phi_{HT}(x)\left(1 + \nu \log_{\phi_{HT}}(x)\right)^2 = \phi_{HT}(x)\left(1 + \nu \int_1^x \dfrac{\mathrm{d}y}{\phi_{HT}(y)}\right)^2$.  (58)
It is straightforward to calculate the metric corresponding to the entropy $S_{TS}(p)$:
$g^{TS}_{\phi,ij}(p) = \dfrac{1}{\phi_{TS}(p_i)}\,\delta_{ij} + \dfrac{1}{\phi_{TS}(p_0)} = T_\nu\left(g^{HT}_{\phi,ij}(p)\right)$,  (59)
where:
$T_\nu(g(x)) = \dfrac{g(x)}{\left(1 + \nu \int_1^x g(y)\,\mathrm{d}y\right)^2}$.  (60)
Thus, the Tsallis–Souza approach results in yet another information matrix. We may also start from the other direction and look at the situation where the escort distribution of the information geometric approach is the same as the escort distribution of the Tsallis–Souza approach. In this case, we get:
$\phi(x) = x + \nu\, s_{TS}(x) = x - \nu \int_0^x \log_{TS}(y)\,\mathrm{d}y$.  (61)
We find that the entropy must be expressed as:
$S_\phi(p) = \sum_i \dfrac{\phi(p_i) - p_i}{\nu}$.  (62)
Note that for $\phi(x) = x^q$ and $\nu = 1-q$, we obtain the Tsallis entropy:
$S_\phi(p) = \sum_i \dfrac{p_i^q - p_i}{1-q}$,  (63)
which corresponds to $S_{TS}(p)$ for $q' = 2-q$, which is nothing but the aforementioned Tsallis additive duality. It turns out that Tsallis entropy is the only case where the deformed-log duality and the information geometric duality result in the same class of functionals. In general, the two dualities have different escort distributions.

6. Discussion

We discussed the information geometric duality of entropies that are maximized by ϕ-exponential distributions under two types of constraint: linear constraints, which are known from contexts such as thermodynamics, and escort constraints, which appear naturally in the theory of statistical estimation and information geometry. This duality implies two different entropy functionals: $S^N(p) = -\sum_i \int_0^{p_i}\mathrm{d}x \,\log_\phi(x)$ and $S^A(p) = -\langle \log_\phi(p) \rangle_\phi$. For $\phi(x) = x$, both boil down to Shannon entropy. The connection between the entropy of the Naudts type and the one of the Amari type can be established through the corresponding Fisher information and the Cramér–Rao bound. Contrary to the deformed-log duality introduced in [22], the information geometric duality introduced here cannot be established within the framework of trace-form entropies, since $S^A(p)$ is not a trace-form entropy. We demonstrated the duality between the Naudts approach with linear constraints and the Amari approach with escort constraints on the example of (c,d)-entropies, which include a wide class of popular deformations, including Tsallis and Anteneodo–Plastino entropy as special cases. Finally, we compared the information geometric duality to the deformed-log duality and showed that they are fundamentally different and result in different types of Fisher information.
Let us now discuss the role of information geometric duality and possible applications in information theory and thermodynamics. Recall that the Shannon entropic functional is determined by the four Shannon–Khinchin (SK) axioms. In many contexts, at least three of the axioms should hold:
  • (SK1) Entropy is a continuous function of the probabilities p i only and should not explicitly depend on any other parameters.
  • (SK2) Entropy is maximal for the equi-distribution p i = 1 / W .
  • (SK3) Adding a state W + 1 to a system with p W + 1 = 0 does not change the entropy of the system.
The original Shannon–Khinchin system contains four axioms; the fourth describes the "composition rule" for the entropy of a joint system, $S(A+B) = S(A) + S(B|A)$. The only entropy satisfying all four SK axioms is Shannon entropy. However, Shannon entropy is not sufficient to describe the statistics of complex systems [10] and can lead to paradoxes in thermodynamic applications [12]. Therefore, instead of imposing the fourth axiom in situations where it does not apply, it is convenient to consider a weaker requirement, such as generic scaling relations of entropy in the thermodynamic limit [9,13]. It is possible to show that the only type of duality satisfying the first three Shannon–Khinchin axioms is the deformed-log duality of [22]. Moreover, entropies that are neither trace-form nor sum-form (sum-form entropies are of the form $S(p) = f\left(\sum_i g(p_i)\right)$) might be problematic from the point of view of information theory and coding. For example, it is then not possible to introduce a conditional entropy consistently [36], because the corresponding conditional entropy cannot be properly defined. This is related to the fact that the Kolmogorov definition of conditional probability is not generally valid for escort distributions [37]. Additional issues arise in the theory of statistical estimation, since only entropies of the sum form $S(p) = f\left(\sum_i g(p_i)\right)$ can fulfil the consistency axioms [38]. From this point of view, the deformed-log duality using the class of Tsallis–Souza escort distributions can play a role in thermodynamical applications [39], because the corresponding entropy fulfills the SK axioms. On the other hand, the importance of the escort distributions considered by Amari and others lies in the realm of information geometry (e.g., dually-flat geometry or the generalized Cramér–Rao bound), and their applications in thermodynamics might be limited. Finally, for the case of the Tsallis q-deformation, both dualities, the information geometric and the deformed-log duality, reduce to the well-known additive duality $q \leftrightarrow 2-q$.

Author Contributions

All authors contributed to the conceptualization of the work, the discussion of the results, and their interpretation. J.K. took the lead in technical computations. J.K. and S.T. wrote the manuscript.

Funding

This work was supported by the Austrian Science Fund (FWF) under project I 3073.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Thurner, S.; Corominas-Murtra, B.; Hanel, R. Three faces of entropy for complex systems: Information, thermodynamics, and the maximum entropy principle. Phys. Rev. E 2017, 96, 032124.
2. Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev. 1957, 106, 620.
3. Harremoës, P.; Topsøe, F. Maximum entropy fundamentals. Entropy 2001, 3, 191–226.
4. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487.
5. Kaniadakis, G. Statistical mechanics in the context of special relativity. Phys. Rev. E 2002, 66, 056125.
6. Jizba, P.; Arimitsu, T. The world according to Rényi: Thermodynamics of multifractal systems. Ann. Phys. 2004, 312, 17–59.
7. Tsallis, C.; Cirto, L.J. Black hole thermodynamical entropy. Eur. Phys. J. C 2013, 73, 2487.
8. Thurner, S.; Hanel, R.; Klimek, P. Introduction to the Theory of Complex Systems; Oxford University Press: Oxford, UK, 2018.
9. Hanel, R.; Thurner, S. A comprehensive classification of complex statistical systems and an axiomatic derivation of their entropy and distribution functions. Europhys. Lett. 2011, 93, 20006.
10. Hanel, R.; Thurner, S. When do generalized entropies apply? How phase space volume determines entropy. Europhys. Lett. 2011, 96, 50003.
11. Tsallis, C.; Gell-Mann, M.; Sato, Y. Asymptotically scale-invariant occupancy of phase space makes the entropy Sq extensive. Proc. Natl. Acad. Sci. USA 2005, 102, 15377–15382.
12. Jensen, H.J.; Pazuki, R.H.; Pruessner, G.; Tempesta, P. Statistical mechanics of exploding phase spaces: Ontic open systems. J. Phys. A 2018, 51, 375002.
13. Korbel, J.; Hanel, R.; Thurner, S. Classification of complex systems by their sample-space scaling exponents. New J. Phys. 2018, 20, 093007.
14. Tsallis, C.; Mendes, R.S.; Plastino, A.R. The role of constraints within generalized nonextensive statistics. Physica A 1998, 261, 534–554.
15. Abe, S. Geometry of escort distributions. Phys. Rev. E 2003, 68, 031101.
16. Ohara, A.; Matsuzoe, H.; Amari, S.I. A dually flat structure on the space of escort distributions. J. Phys. Conf. Ser. 2010, 201, 012012.
17. Bercher, J.-F. A simple probabilistic construction yielding generalized entropies and divergences, escort distributions and q-Gaussians. Physica A 2012, 391, 4460–4469.
18. Hanel, R.; Thurner, S.; Tsallis, C. On the robustness of q-expectation values and Renyi entropy. Europhys. Lett. 2009, 85, 20005.
19. Hanel, R.; Thurner, S.; Tsallis, C. Limit distributions of scale-invariant probabilistic models of correlated random variables with the q-Gaussian as an explicit example. Eur. Phys. J. B 2009, 72, 263–268.
20. Tsallis, C.; Souza, A.M.C. Constructing a statistical mechanics for Beck-Cohen superstatistics. Phys. Rev. E 2003, 67, 026106.
21. Beck, C.; Cohen, E.G.D. Superstatistics. Physica A 2003, 322, 267–275.
22. Hanel, R.; Thurner, S.; Gell-Mann, M. Generalized entropies and logarithms and their duality relations. Proc. Natl. Acad. Sci. USA 2012, 109, 19151–19154.
23. Amari, S.I.; Cichocki, A. Information geometry of divergence functions. Bull. Pol. Acad. Sci. Tech. Sci. 2010, 58, 183–195.
24. Amari, S.I.; Ohara, A.; Matsuzoe, H. Geometry of deformed exponential families: Invariant, dually-flat and conformal geometries. Physica A 2012, 391, 4308–4319.
25. Ay, N.; Jost, J.; Le, H.V.; Schwachhöfer, L. Information Geometry; Springer: Berlin, Germany, 2017.
26. Naudts, J. Deformed exponentials and logarithms in generalized thermostatistics. Physica A 2002, 316, 323–334.
27. Naudts, J. Continuity of a class of entropies and relative entropies. Rev. Math. Phys. 2004, 16, 809–822.
28. Naudts, J. Generalised Thermostatistics; Springer Science & Business Media: Berlin, Germany, 2011.
29. Pistone, G.; Sempi, C. An infinite-dimensional geometric structure on the space of all the probability measures equivalent to a given one. Ann. Stat. 1995, 23, 1543–1561.
30. Csiszár, I. Why least squares and maximum entropy? An axiomatic approach to inference for linear inverse problems. Ann. Stat. 1991, 19, 2032–2066.
31. Vigelis, R.F.; Cavalcante, C.C. On ϕ-families of probability distributions. J. Theor. Probab. 2013, 26, 870–884.
32. Ohara, A. Conformal flattening for deformed information geometries on the probability simplex. Entropy 2018, 20, 186.
33. Anteneodo, C.; Plastino, A.R. Maximum entropy approach to stretched exponential probability distributions. J. Phys. A 1999, 32, 1089.
34. Ghikas, D.P.K.; Oikonomou, F.D. Towards an information geometric characterization/classification of complex systems. I. Use of generalized entropies. Physica A 2018, 496, 384–398.
35. Tsallis, C. Generalization of the possible algebraic basis of q-triplets. Eur. Phys. J. Spec. Top. 2017, 226, 455–466.
36. Ilić, V.M.; Stanković, M.S. Generalized Shannon–Khinchin axioms and uniqueness theorem for pseudo-additive entropies. Physica A 2014, 411, 138–145.
37. Jizba, P.; Korbel, J. On the uniqueness theorem for pseudo-additive entropies. Entropy 2017, 19, 605.
38. Uffink, J. Can the maximum entropy principle be explained as a consistency requirement? Stud. Hist. Philos. Sci. B 1995, 26, 223–261.
39. Hanel, R.; Thurner, S.; Gell-Mann, M. Generalized entropies and the transformation group of superstatistics. Proc. Natl. Acad. Sci. USA 2011, 108, 6390–6394.
Figure 1. Fisher metric for $p = (p, 1-p)$ corresponding to various (c,d)-deformations ($(c,d) = (1,1), (1,1/2), (1/2,0)$) for (a) the Naudts type, (b) the Amari type, and (c) the Cramér–Rao bound corresponding to the metric.
Figure 2. Fisher metric for (c,d)-deformations as a function of $c$ and $d$ of the Naudts type (a) and the Amari type (b). The metric is evaluated at the point $p = (1/3, 2/3)$.
Table 1. ϕ-deformation of the divergence, entropy, and Fisher information corresponding to the ϕ-exponential family under linear and escort constraints. For the ordinary logarithm, $\phi(x) = x$, the two entropies become the Shannon entropy, and the divergence is the Kullback–Leibler divergence.

ϕ-deformation | Linear constraints | Escort constraints
divergence $D_\phi(p\|q)$ | $\sum_j \int_{q_j}^{p_j} \mathrm{d}x\,(\log_\phi(x) - \log_\phi(q_j))$ | $\sum_j \phi(p_j)(\log_\phi(p_j) - \log_\phi(q_j)) \,/\, \sum_k \phi(p_k)$
entropy $S_\phi(p)$ | $-\sum_i \int_0^{p_i} \log_\phi(x)\,\mathrm{d}x$ | $-\sum_i \phi(p_i)\log_\phi(p_i) \,/\, \sum_k \phi(p_k)$
metric $g^\phi_{ij}(p)$ | $\frac{1}{\phi(p_i)}\delta_{ij} + \frac{1}{\phi(p_0)}$ | $\frac{1}{\sum_k \phi(p_k)}\left(\frac{\phi'(p_i)}{\phi(p_i)}\delta_{ij} + \frac{\phi'(p_0)}{\phi(p_0)}\right)$
Table 2. Two important special cases of (c,d)-deformations and related quantities: power laws (Tsallis) and stretched exponentials.

 | Tsallis q-exponential [4] | Stretched η-exponential [33]
$\phi(x)$ | $x^q$ | $\eta x \left(\log x\right)^{1-1/\eta}$
$\log_\phi(x)$ | $\frac{x^{1-q}-1}{1-q}$ | $\left(\log x\right)^{1/\eta}$
$\exp_\phi(x)$ | $\left(1+(1-q)x\right)^{1/(1-q)}$ | $\exp\left(x^{\eta}\right)$
$\chi_\phi(x)$ | $x/q$ | $\frac{\eta x \log(x)}{(\eta-1) + \eta\log(x)}$
$S^N_\phi(p)$ | $\frac{1}{q-1}\left(\sum_i \frac{p_i^{2-q}}{2-q} - 1\right)$ | $\sum_i \Gamma\left(\frac{\eta+1}{\eta},\, -\log p_i\right)$
$S^A_\phi(p)$ | $\frac{1}{q-1}\left(\frac{1}{\sum_i p_i^q} - 1\right)$ | $-\frac{\sum_i p_i \log p_i}{\sum_i p_i \left(\log p_i\right)^{1-1/\eta}}$

