Article

On a Generalized Entropy Measure Leading to the Pathway Model with a Preliminary Application to Solar Neutrino Data

by Arak M. Mathai 1,2 and Hans J. Haubold 3,4,*

1 Centre for Mathematical Sciences, Arunapuram P.O., Palai, Kerala 686574, India
2 Department of Mathematics and Statistics, McGill University, 805 Sherbrooke Street West, Montreal, Quebec H3A 2K6, Canada
3 Centre for Mathematical Sciences, Arunapuram P.O., Palai, Kerala 686574, India
4 Office for Outer Space Affairs, United Nations, Vienna International Centre, P.O. Box 500, Vienna A-1400, Austria
* Author to whom correspondence should be addressed.
Entropy 2013, 15(10), 4011-4025; https://doi.org/10.3390/e15104011
Submission received: 4 September 2013 / Accepted: 22 September 2013 / Published: 25 September 2013
(This article belongs to the Special Issue Dynamical Systems)

Abstract

An entropy for the scalar variable case, parallel to the Havrda-Charvat entropy, was introduced by the first author, and its properties and its connections to Tsallis non-extensive statistical mechanics and to the Mathai pathway model were examined by the authors in previous papers. In the current paper, we extend the entropy measure to cover the scalar case, the multivariable case, and the matrix variate case. This measure is then optimized under different types of restrictions, and a number of models in the multivariable and matrix variable cases are obtained. Connections of these models to problems in the statistical and physical sciences are pointed out. An application of the simplest case of the pathway model to the interpretation of solar neutrino data, by applying standard deviation analysis and diffusion entropy analysis, is provided.

1. Introduction

Classical Shannon entropy has been generalized in many directions [1,2]. An α-generalized entropy, parallel to the Havrda-Charvat entropy and introduced by the first author, is found to be quite useful in deriving pathway models [3], including Tsallis statistics [4] and superstatistics [5,6]. It is also connected to Kerridge's measure of inaccuracy [7]. For the continuous case, let f(X) be a density function associated with a random variable X, where X could be a real or complex scalar, vector, or matrix variable. In the present paper we consider only the real cases, for convenience. Let
$$M_\alpha(f) = \frac{\int_X [f(X)]^{2-\alpha}\,\mathrm{d}X - 1}{\alpha - 1}, \qquad \alpha \neq 1 \tag{1.1}$$
Note that when $\alpha \to 1$, $M_\alpha(f) \to S(f) = -\int_X f(X)\ln f(X)\,\mathrm{d}X$, where $S(f)$ is Shannon's entropy [7]; in this sense, (1.1) is an α-generalized entropy measure. The corresponding discrete case is available as
$$\frac{\sum_{i=1}^{k} p_i^{2-\alpha} - 1}{\alpha - 1}, \qquad p_i > 0,\; i = 1, \ldots, k,\; p_1 + \cdots + p_k = 1,\; \alpha \neq 1$$
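As a quick numerical illustration (a sketch added for this presentation, not part of the original derivation; the probability vector below is an arbitrary example), the discrete measure can be evaluated and compared with the Shannon entropy as α → 1:

```python
import numpy as np

def m_alpha(p, alpha):
    """Discrete generalized entropy: (sum_i p_i^(2 - alpha) - 1)/(alpha - 1), alpha != 1."""
    p = np.asarray(p, dtype=float)
    return (np.sum(p ** (2.0 - alpha)) - 1.0) / (alpha - 1.0)

def shannon(p):
    """Shannon entropy: -sum_i p_i ln p_i."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p))

p = [0.5, 0.3, 0.2]                       # arbitrary example distribution
for alpha in [0.5, 0.9, 0.99, 0.999]:
    print(alpha, m_alpha(p, alpha))       # approaches the Shannon value below
print("Shannon:", shannon(p))
```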
Characterization properties and applications of (1.1) may be seen from [7]. Note that
$$\int_X [f(X)]^{2-\alpha}\,\mathrm{d}X = \int_X [f(X)]^{1-\alpha} f(X)\,\mathrm{d}X = E\left\{[f(X)]^{1-\alpha}\right\}$$
Thus there is a parallelism with Kerridge’s measure of inaccuracy. The α-generalized Kerridge’s measure of inaccuracy [9] is given by
$$\frac{\sum_x P(x)[Q(x)]^{1-\alpha} - 1}{\alpha - 1} = \frac{E\left\{[Q(x)]^{1-\alpha}\right\} - 1}{\alpha - 1}, \qquad \alpha \neq 1 \tag{1.2}$$
When $\alpha \to 1$, Equation (1.2) goes to Kerridge's measure of inaccuracy, given by
$$K(P, Q) = -\int_x P(x)\,\ln Q(x)\,\mathrm{d}x \tag{1.3}$$
where x is a scalar variable, P(x) is the true density and Q(x) is a hypothesized or assigned density for P(x). A measure of the inaccuracy in taking Q(x) for the true density P(x) is then given by Equation (1.3), and its α-generalized form is given by Equation (1.2).
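The following sketch (an illustration for the discrete form; both distributions are invented examples) checks that the α-generalized inaccuracy of Equation (1.2) approaches K(P, Q) as α → 1:

```python
import numpy as np

P = np.array([0.5, 0.3, 0.2])      # assumed true distribution (example values)
Q = np.array([0.4, 0.4, 0.2])      # assumed hypothesized distribution

def kerridge(P, Q):
    """Kerridge's inaccuracy, discrete form: K(P, Q) = -sum_x P(x) ln Q(x)."""
    return -np.sum(P * np.log(Q))

def kerridge_alpha(P, Q, alpha):
    """alpha-generalized inaccuracy of Eq. (1.2): (sum_x P Q^(1-alpha) - 1)/(alpha - 1)."""
    return (np.sum(P * Q ** (1.0 - alpha)) - 1.0) / (alpha - 1.0)

for alpha in [0.9, 0.99, 0.999]:
    print(alpha, kerridge_alpha(P, Q, alpha))   # approaches K(P, Q) below
print("K(P,Q):", kerridge(P, Q))
```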
Earlier works on Shannon's measure of entropy, measures of directed divergence, measures of inaccuracy and related items, with applications in the natural sciences, may be seen in [7] and the references therein. A measure of entropy parallel to the Havrda-Charvat entropy was introduced by Tsallis in 1988 [4,8,9], given by
$$T_\alpha(f) = \frac{\int_x [f(x)]^{\alpha}\,\mathrm{d}x - 1}{1 - \alpha}, \qquad \alpha \neq 1 \tag{1.4}$$
Tsallis statistics, or non-extensive statistical mechanics, is derived by optimizing (1.4) under restrictions placed on an escort density associated with the $f(x)$ of Equation (1.4). Let $g(x) = [f(x)]^{\alpha}/m$, $m = \int_x [f(x)]^{\alpha}\,\mathrm{d}x < \infty$. If $T_\alpha(f)$ is optimized over all non-negative functions f, subject to the conditions that $f(x)$ is a density and that the expected value in the escort density is a given quantity, that is, $\int_x x\, g(x)\,\mathrm{d}x =$ a given quantity, then the Euler equation to be considered, if we optimize by using the calculus of variations, is
$$\frac{\partial}{\partial f}\left[\{f(x)\}^{\alpha} - \lambda_1 f(x) + \lambda_2\, x \{f(x)\}^{\alpha}\right] = 0$$
where $\lambda_1$ and $\lambda_2$ are Lagrangian multipliers. That is,
$$\alpha [f(x)]^{\alpha-1} - \lambda_1 + \lambda_2\, x\, \alpha [f(x)]^{\alpha-1} = 0$$
Then
$$f(x) = c\,[1 + \lambda_2 x]^{-\frac{1}{\alpha-1}}, \qquad c = \left(\frac{\lambda_1}{\alpha}\right)^{\frac{1}{\alpha-1}}$$
Taking $\lambda_2 = a(\alpha - 1)$ for $\alpha > 1$, $a > 0$, we have Tsallis statistics as
$$f(x) = c\,[1 + a(\alpha-1)x]^{-\frac{1}{\alpha-1}}, \qquad \alpha > 1,\; a > 0 \tag{1.5}$$
For $\alpha < 1$, writing $\alpha - 1 = -(1-\alpha)$, the density in Equation (1.5) changes to
$$f_1(x) = c_1\,[1 - a(1-\alpha)x]^{\frac{1}{1-\alpha}}, \qquad \alpha < 1,\; a > 0 \tag{1.6}$$
where $1 - a(1-\alpha)x > 0$ and $c_1$ can act as a normalizing constant if $f_1(x)$ is to be taken as a statistical density. Tsallis statistics in Equation (1.5) led to the development of non-extensive statistical mechanics. We will show below that Equation (1.5) comes directly from the entropy of Equation (1.1), without going through any escort density. Let us optimize Equation (1.1) subject to the conditions that $f(x)$ is a density, $\int_x f(x)\,\mathrm{d}x = 1$, and that the expected value of x in $f(x)$ is a given quantity, that is, $\int_x x f(x)\,\mathrm{d}x =$ a given quantity. Then, if we use the calculus of variations, the Euler equation is of the form
$$\frac{\partial}{\partial f}\left[\{f(x)\}^{2-\alpha} - \lambda_1 f(x) + \lambda_2\, x f(x)\right] = 0$$
where $\lambda_1$ and $\lambda_2$ are Lagrangian multipliers. Then we have
$$f_1(x) = c_1\,[1 - a(1-\alpha)x]^{\frac{1}{1-\alpha}}, \qquad \alpha < 1,\; a > 0$$
by taking $\frac{\lambda_2}{\lambda_1} = a(1-\alpha)$, $a > 0$, $\alpha < 1$, where $c_1$ is the corresponding normalizing constant making $f_1(x)$ a statistical density. Now, for $\alpha > 1$, write $1 - \alpha = -(\alpha - 1)$; then, directly from Equation (1.6), without going through any escort density, we have
$$f_2(x) = c_2\,[1 + a(\alpha-1)x]^{-\frac{1}{\alpha-1}}, \qquad \alpha > 1,\; a > 0$$
which is Tsallis statistics for α > 1 . Thus, both the cases α < 1 and α > 1 follow directly from Equation (1.1).
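Both branches are easy to check numerically. The sketch below (an illustration assuming a = 1; the α values are arbitrary) normalizes each branch by direct quadrature and confirms that, as α → 1, both densities approach the exponential density a e^(−ax); for these simple kernels, elementary integration gives the normalizing constant a(2 − α) in both branches (for 1 < α < 2), which the quadrature reproduces.

```python
import numpy as np
from scipy.integrate import quad

a = 1.0  # assumed rate parameter (illustrative)

def kernel(x, alpha):
    """Unnormalized Tsallis-type kernel for alpha < 1 (finite support) or 1 < alpha < 2."""
    if alpha < 1.0:                       # support: 0 < x < 1/(a(1 - alpha))
        base = 1.0 - a * (1.0 - alpha) * x
        return base ** (1.0 / (1.0 - alpha)) if base > 0 else 0.0
    return (1.0 + a * (alpha - 1.0) * x) ** (-1.0 / (alpha - 1.0))

for alpha in [0.7, 0.99, 1.01, 1.3]:
    upper = 1.0 / (a * (1.0 - alpha)) if alpha < 1 else np.inf
    norm, _ = quad(kernel, 0.0, upper, args=(alpha,))
    # normalizing constant (should equal a*(2 - alpha)) and density at x = 1,
    # which tends to a*exp(-a) ~ 0.3679 as alpha -> 1:
    print(alpha, 1.0 / norm, kernel(1.0, alpha) / norm)
```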
Now, let us look into optimizing (1.1) over all non-negative integrable functions, $f(x) \geq 0$ for all x, $\int_x f(x)\,\mathrm{d}x < \infty$, such that two moment-type relations are imposed on f, of the form
$$\int_x x^{\gamma(1-\alpha)} f(x)\,\mathrm{d}x = \text{given}, \qquad \text{and} \qquad \int_x x^{\gamma(1-\alpha)+\delta} f(x)\,\mathrm{d}x = \text{given}$$
Then the Euler equation becomes
$$\frac{\partial}{\partial f}\left[\{f(x)\}^{2-\alpha} - \lambda_1\, x^{\gamma(1-\alpha)} f(x) + \lambda_2\, x^{\gamma(1-\alpha)+\delta} f(x)\right] = 0$$
which leads to
$$f_1^{*}(x) = c_1^{*}\, x^{\gamma}\left[1 - a(1-\alpha)x^{\delta}\right]^{\frac{1}{1-\alpha}}, \qquad a > 0,\; \alpha < 1,\; \delta > 0,\; \gamma > 0 \tag{1.9}$$
for $1 - a(1-\alpha)x^{\delta} > 0$, by taking $\frac{\lambda_2}{\lambda_1} = a(1-\alpha)$, $a > 0$, $\alpha < 1$, where $c_1^{*}$ can act as the normalizing constant. Equation (1.9) is a special case of the pathway model of [3] for the real scalar positive random variable $x > 0$. For $\gamma = 0$, $\delta = 1$ in Equation (1.9), we obtain the Tsallis statistics of Equation (1.6) for the case $\alpha < 1$. When $\alpha > 1$, write $1 - \alpha = -(\alpha - 1)$; then Equation (1.9) becomes
$$f_2^{*}(x) = c_2^{*}\, x^{\gamma}\left[1 + a(\alpha-1)x^{\delta}\right]^{-\frac{1}{\alpha-1}}, \qquad \alpha > 1,\; a > 0,\; x > 0,\; \delta > 0 \tag{1.10}$$
When $\alpha \to 1$, both $f_1^{*}(x)$ of Equation (1.9) and $f_2^{*}(x)$ of Equation (1.10) go to
$$f_3^{*}(x) = c_3^{*}\, x^{\gamma}\, e^{-a x^{\delta}}, \qquad a > 0,\; \delta > 0,\; x > 0 \tag{1.11}$$
Equation (1.10), for $\alpha > 1$, $x > 0$, is superstatistics [5,6].
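The pathway behavior of Equations (1.9)–(1.11) can be visualized numerically. The sketch below (illustrative only; the values a = 1, γ = 2, δ = 1.5 and the evaluation point x = 0.8 are arbitrary choices satisfying the stated conditions, with α kept below 1 + δ/(γ + 1) so that the α > 1 branch remains normalizable) normalizes each member by quadrature and shows the three families merging as α → 1:

```python
import numpy as np
from scipy.integrate import quad

a, gam, delta = 1.0, 2.0, 1.5    # assumed illustrative parameter values

def pathway_kernel(x, alpha):
    """Unnormalized scalar pathway kernel covering Eqs. (1.9), (1.10) and (1.11)."""
    if abs(alpha - 1.0) < 1e-12:                      # limiting generalized gamma form
        return x ** gam * np.exp(-a * x ** delta)
    if alpha < 1.0:                                   # generalized type-1 beta form
        base = 1.0 - a * (1.0 - alpha) * x ** delta
        return x ** gam * base ** (1.0 / (1.0 - alpha)) if base > 0 else 0.0
    return x ** gam * (1.0 + a * (alpha - 1.0) * x ** delta) ** (-1.0 / (alpha - 1.0))

x0 = 0.8
for alpha in [0.5, 0.99, 1.0, 1.01, 1.2]:
    upper = (1.0 / (a * (1.0 - alpha))) ** (1.0 / delta) if alpha < 1 else np.inf
    norm, _ = quad(pathway_kernel, 0.0, upper, args=(alpha,))
    print(alpha, pathway_kernel(x0, alpha) / norm)    # densities converge as alpha -> 1
```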

2. A Generalized Measure of Entropy

Let X be a scalar, a p × 1 vector of scalar random variables, or a p × n, n ≤ p, matrix of rank n of scalar random variables, and let f(X) be a real-valued scalar function such that $f(X) \geq 0$ for all X and $\int_X f(X)\,\mathrm{d}X = 1$, where $\mathrm{d}X$ stands for the wedge product of the differentials in X. For example, if X is m × n, $X = (x_{ij})$, then
$$\mathrm{d}X = \bigwedge_{i=1}^{m} \bigwedge_{j=1}^{n} \mathrm{d}x_{ij}$$
where ∧ stands for the wedge product of differentials, $\mathrm{d}x \wedge \mathrm{d}y = -\mathrm{d}y \wedge \mathrm{d}x$, so that $\mathrm{d}x \wedge \mathrm{d}x = 0$. Then f(X) is a density of X. When X is p × n, n ≤ p, we have a rectangular matrix variate density. For convenience, we have taken X to be of full rank n, n ≤ p. When n = 1, we have a multivariate density, and when n = 1, p = 1, we have a univariate density. Consider the generalized entropy of Equation (1.1) for this matrix variate density, denoted by f(X); then
$$M_\alpha(f) = \frac{\int_X [f(X)]^{2-\alpha}\,\mathrm{d}X - 1}{\alpha - 1}, \qquad \alpha \neq 1 \tag{2.1}$$
Let n = 1, and let us consider the situation where the ellipsoid of concentration is a preassigned quantity. Let X be a p × 1 vector random variable and let $V = E[(X - E(X))(X - E(X))'] > O$ (positive definite), where E denotes the expected value. For convenience, let us denote $E(X) = \mu$. Then $\rho = E[(X-\mu)' V^{-1} (X-\mu)]$ describes the ellipsoid of concentration. Let us optimize (2.1) subject to the constraints that $f(X) \geq 0$ is a density and that the ellipsoid of concentration is fixed over all functions f; that is, $\int_X f(X)\,\mathrm{d}X = 1$ and $\int_X [(X-\mu)' V^{-1} (X-\mu)]^{\delta} f(X)\,\mathrm{d}X =$ given, where $\delta > 0$ is a fixed parameter. If we use the calculus of variations, then the Euler equation is given by
$$\frac{\partial}{\partial f}\left[\{f(X)\}^{2-\alpha} - \lambda_1 f(X) + \lambda_2 [(X-\mu)' V^{-1}(X-\mu)]^{\delta} f(X)\right] = 0$$
where $\lambda_1$ and $\lambda_2$ are Lagrangian multipliers. Solving the above equation, we have
$$f_1(X) = C_1\left[1 - a(1-\alpha)\{(X-\mu)' V^{-1}(X-\mu)\}^{\delta}\right]^{\frac{1}{1-\alpha}} \tag{2.2}$$
for $\alpha < 1$, $a > 0$, where we have taken $\frac{\lambda_2}{\lambda_1} = a(1-\alpha)$, $a > 0$, $\alpha < 1$, and $\left(\frac{\lambda_1}{2-\alpha}\right)^{\frac{1}{1-\alpha}} = C_1$. This $C_1$ can act as the normalizing constant making $f_1(X)$ in Equation (2.2) a statistical density. Note that for $\alpha > 1$ we have, from Equation (2.2),
$$f_2(X) = C_2\left[1 + a(\alpha-1)\{(X-\mu)' V^{-1}(X-\mu)\}^{\delta}\right]^{-\frac{1}{\alpha-1}}, \qquad \alpha > 1,\; a > 0 \tag{2.3}$$
and when $\alpha \to 1$, $f_1$ and $f_2$ go to
$$f_3(X) = C_3\, e^{-a[(X-\mu)' V^{-1}(X-\mu)]^{\delta}} \tag{2.4}$$
Equation (2.4) for $\delta = 1$ is the multivariate Gaussian density. If $Y = V^{-\frac{1}{2}}(X - \mu)$, where $V^{-\frac{1}{2}}$ is the positive definite square root of the positive definite matrix $V^{-1}$, then $\mathrm{d}Y = |V|^{-\frac{1}{2}}\,\mathrm{d}X$ and the density of Y, denoted by g(Y), is given by
$$g(Y) = C_4\, e^{-a(y_1^2 + \cdots + y_p^2)^{\delta}}, \qquad -\infty < y_j < \infty,\; j = 1, \ldots, p,\; Y' = (y_1, \ldots, y_p)$$
where $C_4 = |V|^{\frac{1}{2}} C_3$ is the normalizing constant. This normalizing constant can be evaluated in two different ways. One method is to use the polar coordinate transformation; see Theorem 1.25 of [10]. Let
$$\begin{aligned} y_1 &= r \sin\theta_1 \sin\theta_2 \cdots \sin\theta_{p-1}\\ y_2 &= r \sin\theta_1 \cdots \sin\theta_{p-2} \cos\theta_{p-1}\\ &\;\;\vdots\\ y_{p-1} &= r \sin\theta_1 \cos\theta_2\\ y_p &= r \cos\theta_1 \end{aligned}$$
where $r > 0$, $0 < \theta_j \leq \pi$, $j = 1, \ldots, p-2$, $0 < \theta_{p-1} \leq 2\pi$, and the Jacobian is given by
$$\mathrm{d}y_1 \wedge \cdots \wedge \mathrm{d}y_p = r^{p-1}\left\{\prod_{j=1}^{p-1} |\sin\theta_j|^{p-j-1}\right\}\mathrm{d}r \wedge \mathrm{d}\theta_1 \wedge \cdots \wedge \mathrm{d}\theta_{p-1}$$
Under this transformation, the exponent becomes $(y_1^2 + \cdots + y_p^2)^{\delta} = (r^2)^{\delta}$, and we can integrate out the sine functions. The integral over $\theta_{p-1}$ goes from 0 to $2\pi$ and gives the value $2\pi$; the others go from 0 to π. These, in general, can be evaluated by using type-1 beta integrals, putting $\sin\theta = u$ and then $u^2 = v$. That is,
$$\begin{aligned} \int_0^{\pi} \sin\theta\,\mathrm{d}\theta &= 2\int_0^{\pi/2} \sin\theta\,\mathrm{d}\theta = 2\int_0^1 u(1-u^2)^{-\frac{1}{2}}\,\mathrm{d}u = \int_0^1 v^{1-1}(1-v)^{\frac{1}{2}-1}\,\mathrm{d}v = \frac{\Gamma(1)\Gamma(1/2)}{\Gamma(3/2)}\\ \int_0^{\pi} (\sin\theta)^2\,\mathrm{d}\theta &= \frac{\Gamma(3/2)\Gamma(1/2)}{\Gamma(4/2)}\\ &\;\;\vdots\\ \int_0^{\pi} (\sin\theta)^{p-2}\,\mathrm{d}\theta &= \frac{\Gamma\left(\frac{p-1}{2}\right)\Gamma(1/2)}{\Gamma\left(\frac{p}{2}\right)} \end{aligned}$$
Taking the product we have
$$2\pi\,\frac{(\sqrt{\pi})^{p-2}}{\Gamma\left(\frac{p}{2}\right)} = \frac{2\pi^{p/2}}{\Gamma(p/2)}$$
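This product — the total surface content of the unit sphere in R^p — is easy to confirm numerically (a sketch for this presentation; p = 6 is an arbitrary choice):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as G

p = 6                                             # arbitrary dimension for the check
total = 2 * np.pi                                 # theta_{p-1} ranges over (0, 2 pi]
for j in range(1, p - 1):                         # j = 1, ..., p - 2
    val, _ = quad(lambda th: np.sin(th) ** (p - j - 1), 0, np.pi)
    total *= val
print(total, 2 * np.pi ** (p / 2) / G(p / 2))     # both give the same value
```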
Hence the total integral is equal to
$$1 = C_4\, \frac{2\pi^{p/2}}{\Gamma(p/2)} \int_0^{\infty} r^{p-1}\, e^{-a r^{2\delta}}\,\mathrm{d}r, \qquad \delta > 0$$
Put $x = a r^{2\delta}$ and integrate out by using a gamma integral to get
$$C_4 = \frac{\delta\, \Gamma\left(\frac{p}{2}\right) a^{\frac{p}{2\delta}}}{\pi^{p/2}\, \Gamma\left(\frac{p}{2\delta}\right)}$$
That is, the density is given by
$$f_3(X) = \frac{\delta\, a^{\frac{p}{2\delta}}\, \Gamma(p/2)}{|V|^{1/2}\, \pi^{p/2}\, \Gamma\left(\frac{p}{2\delta}\right)}\, e^{-a[(X-\mu)' V^{-1}(X-\mu)]^{\delta}}, \qquad \delta > 0,\; a > 0,\; V > O \tag{2.7}$$
From the above steps, the following items are available. The density of $Y = V^{-\frac{1}{2}}(X - \mu)$ is available as
$$g(Y) = \frac{\delta\, a^{\frac{p}{2\delta}}\, \Gamma\left(\frac{p}{2}\right)}{\pi^{p/2}\, \Gamma\left(\frac{p}{2\delta}\right)}\, e^{-a (Y'Y)^{\delta}} \tag{2.8}$$
The density of $u = Y'Y = y_1^2 + \cdots + y_p^2$, denoted by $g_1(u)$, is given by
$$g_1(u) = \frac{\delta\, a^{\frac{p}{2\delta}}}{\Gamma\left(\frac{p}{2\delta}\right)}\, u^{\frac{p}{2}-1}\, e^{-a u^{\delta}}, \qquad \delta > 0,\; u > 0 \tag{2.9}$$
and the density of $r > 0$, where $r^2 = u = Y'Y$, denoted by $g_2(r)$, is given by
$$g_2(r) = \frac{2\delta\, a^{\frac{p}{2\delta}}}{\Gamma\left(\frac{p}{2\delta}\right)}\, r^{p-1}\, e^{-a r^{2\delta}}, \qquad r > 0,\; \delta > 0 \tag{2.10}$$
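As a numerical sanity check of Equations (2.9) and (2.10) (a sketch; the values p = 3, a = 2, δ = 1.2 are arbitrary), the closed-form constant can be compared with direct numerical normalization:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as G

p, a, delta = 3, 2.0, 1.2    # arbitrary sample values

# Closed-form constant of g_1(u) in Eq. (2.9):
c_closed = delta * a ** (p / (2 * delta)) / G(p / (2 * delta))

# Direct numerical normalization of the kernel u^(p/2 - 1) exp(-a u^delta):
integral, _ = quad(lambda u: u ** (p / 2 - 1) * np.exp(-a * u ** delta), 0, np.inf)
print(c_closed, 1.0 / integral)        # the two values should agree

# The radial density g_2(r) of Eq. (2.10) should integrate to 1:
total, _ = quad(lambda r: 2 * delta * a ** (p / (2 * delta)) / G(p / (2 * delta))
                * r ** (p - 1) * np.exp(-a * r ** (2 * delta)), 0, np.inf)
print(total)                           # approximately 1
```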

2.1. Another Method

Another direct way of deriving the densities of $X$, $Y = V^{-\frac{1}{2}}(X-\mu)$, $u = Y'Y$, and $r = \sqrt{u}$ is the following. From [3], see the transformation in the Stiefel manifold, where a matrix of the form n × p, n ≥ p, of rank p is transformed into $S = X'X$, which is a p × p matrix; the differential elements, after integrating out over the Stiefel manifold, are connected by the relation (see also Theorem 2.16 and Remark 2.13 of [10])
$$\mathrm{d}X = \frac{\pi^{\frac{np}{2}}}{\Gamma_p\left(\frac{n}{2}\right)}\, |S|^{\frac{n}{2} - \frac{p+1}{2}}\,\mathrm{d}S \tag{2.11}$$
where $|S|$ denotes the determinant of S and $\Gamma_p(\alpha)$ is the real matrix-variate gamma, given by
$$\Gamma_p(\alpha) = \pi^{\frac{p(p-1)}{4}}\, \Gamma(\alpha)\, \Gamma\left(\alpha - \tfrac{1}{2}\right) \cdots \Gamma\left(\alpha - \tfrac{p-1}{2}\right), \qquad \Re(\alpha) > \frac{p-1}{2}$$
Applications of the above result in various disciplines may be seen in [11,12,13,14]. In our problem, we can connect $\mathrm{d}Y$ of Equation (2.8) to $\mathrm{d}u$ of Equation (2.9) with the help of Equation (2.11), by replacing n by p and p by 1 in the n × p matrix. That is, from Equation (2.11),
$$\mathrm{d}Y = \frac{\pi^{p/2}}{\Gamma(p/2)}\, u^{\frac{p}{2}-1}\,\mathrm{d}u \tag{2.13}$$
The total integral of $f_3(X)$ of Equation (2.4) is given by
$$1 = \int_X f_3(X)\,\mathrm{d}X = C_3\, |V|^{\frac{1}{2}}\, \frac{\pi^{p/2}}{\Gamma(p/2)} \int_{u=0}^{\infty} u^{\frac{p}{2}-1}\, e^{-a u^{\delta}}\,\mathrm{d}u, \qquad a > 0,\; \delta > 0$$
Put $v = a u^{\delta}$ and integrate out by using a gamma integral to get
$$C_3 = \frac{\delta\, a^{\frac{p}{2\delta}}\, \Gamma(p/2)}{|V|^{1/2}\, \pi^{p/2}\, \Gamma\left(\frac{p}{2\delta}\right)}$$
which is the same result as in (2.7); thereby the same expressions follow for g(Y) in Equation (2.8), $g_1(u)$ in Equation (2.9), and $g_2(r)$ in Equation (2.10).
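Equation (2.13) can also be checked by simulation (a sketch for this presentation): if Y is a standard Gaussian vector in R^p, the relation predicts that u = Y'Y has density (π^(p/2)/Γ(p/2)) u^(p/2−1) (2π)^(−p/2) e^(−u/2), i.e., the chi-square density with p degrees of freedom.

```python
import numpy as np
from scipy.special import gamma as G

p = 5                                         # arbitrary dimension for the check
rng = np.random.default_rng(0)
u = np.sum(rng.standard_normal((200_000, p)) ** 2, axis=1)   # samples of u = Y'Y

edges = np.linspace(0.5, 15.0, 30)
hist, _ = np.histogram(u, bins=edges, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
predicted = (np.pi ** (p / 2) / G(p / 2)) * mid ** (p / 2 - 1) \
            * (2 * np.pi) ** (-p / 2) * np.exp(-mid / 2)
print(np.max(np.abs(hist - predicted)))       # small: simulation matches Eq. (2.13)
```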

3. A Generalized Model

If we optimize (2.1) over all integrable functions $f(X) \geq 0$ for all X, subject to the two moment-like restrictions $E\{[(X-\mu)' V^{-1}(X-\mu)]^{\gamma(1-\alpha)}\} =$ fixed and $E\{[(X-\mu)' V^{-1}(X-\mu)]^{\delta + \gamma(1-\alpha)}\} =$ fixed, then the corresponding Euler equation becomes
$$\frac{\partial}{\partial f}\left[\{f(X)\}^{2-\alpha} - \lambda_1 [(X-\mu)' V^{-1}(X-\mu)]^{\gamma(1-\alpha)} f(X) + \lambda_2 [(X-\mu)' V^{-1}(X-\mu)]^{\delta + \gamma(1-\alpha)} f(X)\right] = 0$$
and the solution is available as
$$f(X) = C^{*}\, [(X-\mu)' V^{-1}(X-\mu)]^{\gamma}\left[1 - a(1-\alpha)\{(X-\mu)' V^{-1}(X-\mu)\}^{\delta}\right]^{\frac{1}{1-\alpha}} \tag{3.1}$$
for $\alpha < 1$, $a > 0$, $V > O$, $\delta > 0$, $\gamma > 0$, where for convenience we have taken $\frac{\lambda_2}{\lambda_1} = a(1-\alpha)$, $a > 0$, $\alpha < 1$, and where $C^{*}$ can act as the normalizing constant if f(X) is to be treated as a statistical density. Otherwise, f(X) can serve as a very versatile model in model-building situations. If $C^{*}$ is the normalizing constant, then it can be evaluated by the following procedure. Put $Y = V^{-\frac{1}{2}}(X-\mu) \Rightarrow \mathrm{d}Y = |V|^{-\frac{1}{2}}\,\mathrm{d}X$. The total integral is 1; that is,
$$1 = \int_X f(X)\,\mathrm{d}X = C^{*}\, |V|^{\frac{1}{2}} \int_Y (Y'Y)^{\gamma}\left[1 - a(1-\alpha)(Y'Y)^{\delta}\right]^{\frac{1}{1-\alpha}}\,\mathrm{d}Y$$
Let $u = Y'Y$; then $\mathrm{d}Y = \frac{\pi^{p/2}}{\Gamma(p/2)} u^{\frac{p}{2}-1}\,\mathrm{d}u$ from Equation (2.13). Then, for $a > 0$, $\alpha < 1$, $\delta > 0$, we can integrate out by using a type-1 beta integral, putting $z = a(1-\alpha)u^{\delta}$ for $\alpha < 1$. The normalizing constant, denoted by $C_1^{*}$, is then available as
$$C_1^{*} = \frac{\delta\, [a(1-\alpha)]^{\frac{\gamma}{\delta} + \frac{p}{2\delta}}\, \Gamma(p/2)\, \Gamma\left(\frac{1}{1-\alpha} + 1 + \frac{\gamma}{\delta} + \frac{p}{2\delta}\right)}{|V|^{1/2}\, \pi^{p/2}\, \Gamma\left(\frac{\gamma}{\delta} + \frac{p}{2\delta}\right)\, \Gamma\left(\frac{1}{1-\alpha} + 1\right)} \tag{3.2}$$
for $\delta > 0$, $\gamma + \frac{p}{2} > 0$. Hence the density of the p × 1 vector X is given by
$$f_1(X) = C_1^{*}\, [(X-\mu)' V^{-1}(X-\mu)]^{\gamma}\left[1 - a(1-\alpha)\{(X-\mu)' V^{-1}(X-\mu)\}^{\delta}\right]^{\frac{1}{1-\alpha}} \tag{3.3}$$
for $V > O$, $a > 0$, $\delta > 0$, $\gamma + \frac{p}{2} > 0$, $X' = (x_1, \ldots, x_p)$, $\mu' = (\mu_1, \ldots, \mu_p)$, $-\infty < x_j < \infty$, $-\infty < \mu_j < \infty$, $j = 1, \ldots, p$. For $\alpha < 1$, we may say that $f_1(X)$ in Equation (3.3) is a generalized type-1 beta form. The density of Y, denoted by g(Y), is then given by
$$g(Y) = |V|^{\frac{1}{2}}\, C_1^{*}\, (Y'Y)^{\gamma}\left[1 - a(1-\alpha)(Y'Y)^{\delta}\right]^{\frac{1}{1-\alpha}}$$
for $a > 0$, $\alpha < 1$, where $C_1^{*}$ is defined in Equation (3.2). Note that the density of $u = Y'Y$, denoted by $g_1(u)$, is available as
$$g_1(u) = \tilde{C}_1\, u^{\gamma + \frac{p}{2} - 1}\left[1 - a(1-\alpha)u^{\delta}\right]^{\frac{1}{1-\alpha}}$$
where
$$\tilde{C}_1 = \frac{\delta\, [a(1-\alpha)]^{\frac{\gamma}{\delta} + \frac{p}{2\delta}}\, \Gamma\left(\frac{1}{1-\alpha} + 1 + \frac{\gamma}{\delta} + \frac{p}{2\delta}\right)}{\Gamma\left(\frac{\gamma}{\delta} + \frac{p}{2\delta}\right)\, \Gamma\left(\frac{1}{1-\alpha} + 1\right)}$$
for $\delta > 0$, $\gamma + \frac{p}{2} > 0$. Note that for $\alpha > 1$, the model in Equation (3.1) switches into a generalized type-2 beta form. Write $1 - \alpha = -(\alpha - 1)$ for $\alpha > 1$; then the model in Equation (3.1) switches into the following form:
$$f_2(X) = C_2^{*}\, [(X-\mu)' V^{-1}(X-\mu)]^{\gamma}\left[1 + a(\alpha-1)\{(X-\mu)' V^{-1}(X-\mu)\}^{\delta}\right]^{-\frac{1}{\alpha-1}} \tag{3.5}$$
for $\delta > 0$, $a > 0$, $V > O$, $\alpha > 1$. The normalizing constant $C_2^{*}$ can be computed by the following procedure: put $z = a(\alpha-1)u^{\delta}$, $\delta > 0$, $\alpha > 1$, and then integrate out by using a type-2 beta integral to get
$$C_2^{*} = \frac{\delta\, [a(\alpha-1)]^{\frac{\gamma}{\delta} + \frac{p}{2\delta}}\, \Gamma(p/2)\, \Gamma\left(\frac{1}{\alpha-1}\right)}{|V|^{1/2}\, \pi^{p/2}\, \Gamma\left(\frac{\gamma}{\delta} + \frac{p}{2\delta}\right)\, \Gamma\left(\frac{1}{\alpha-1} - \frac{\gamma}{\delta} - \frac{p}{2\delta}\right)}$$
for $\gamma + \frac{p}{2} > 0$, $\frac{1}{\alpha-1} - \frac{\gamma}{\delta} - \frac{p}{2\delta} > 0$. When $\alpha \to 1$, both $f_1(X)$ of Equation (3.3) and $f_2(X)$ of Equation (3.5) go to the generalized gamma model, given by
$$f_3(X) = C_3^{*}\, [(X-\mu)' V^{-1}(X-\mu)]^{\gamma}\, e^{-a[(X-\mu)' V^{-1}(X-\mu)]^{\delta}}$$
where
$$C_3^{*} = \frac{\delta\, \Gamma(p/2)\, a^{\frac{\gamma}{\delta} + \frac{p}{2\delta}}}{|V|^{1/2}\, \pi^{p/2}\, \Gamma\left(\frac{\gamma}{\delta} + \frac{p}{2\delta}\right)}, \qquad \delta > 0,\; \gamma + \frac{p}{2} > 0$$
It is not difficult to show that when $\alpha \to 1$, both $C_1^{*} \to C_3^{*}$ and $C_2^{*} \to C_3^{*}$. This can be seen by using Stirling's formula,
$$\Gamma(z + \eta) \approx \sqrt{2\pi}\, z^{z + \eta - \frac{1}{2}}\, e^{-z}$$
for $|z| \to \infty$ and η a bounded quantity. Observe that
$$\lim_{\alpha \to 1^{-}} \frac{1}{1-\alpha} = \infty \qquad \text{and} \qquad \lim_{\alpha \to 1^{+}} \frac{1}{\alpha - 1} = \infty$$
so we can apply Stirling's formula by taking $z = \frac{1}{1-\alpha}$ in one case and $z = \frac{1}{\alpha-1}$ in the other. Thus, within the same model, we can move from $f_1(X)$ to $f_2(X)$ to $f_3(X)$; that is, we can reach three different families of functions through the parameter α. Hence α is called the pathway parameter, and the model above belongs to the pathway model of [3].
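The limits C1* → C3* and C2* → C3* can be illustrated numerically (a sketch with arbitrary sample values p = 2, γ = 0.5, δ = 1, a = 1, |V| = 1; log-gamma functions are used to avoid overflow near α = 1):

```python
import numpy as np
from scipy.special import gammaln

p, gam, delta, a, detV = 2, 0.5, 1.0, 1.0, 1.0   # arbitrary sample values
rho = gam / delta + p / (2 * delta)
# common factor delta*Gamma(p/2) / (|V|^(1/2) pi^(p/2) Gamma(rho)), in log form:
logbase = (np.log(delta) + gammaln(p / 2) - 0.5 * np.log(detV)
           - (p / 2) * np.log(np.pi) - gammaln(rho))

C3 = np.exp(logbase + rho * np.log(a))                 # limiting constant C3*

def C1(alpha):   # Eq. (3.2), alpha < 1
    eta = 1.0 / (1.0 - alpha)
    return np.exp(logbase + rho * np.log(a * (1 - alpha))
                  + gammaln(eta + 1 + rho) - gammaln(eta + 1))

def C2(alpha):   # constant of f2(X) above, alpha > 1; needs 1/(alpha-1) - rho > 0
    eta = 1.0 / (alpha - 1.0)
    return np.exp(logbase + rho * np.log(a * (alpha - 1))
                  + gammaln(eta) - gammaln(eta - rho))

for eps in [0.1, 0.01, 0.001]:
    print(C1(1 - eps), C2(1 + eps), C3)    # both columns tend to C3* as alpha -> 1
```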

4. Generalization to the Matrix Case

Let X be a p × n, n ≥ p, rectangular matrix of full rank p. Let $A > O$ be a p × p and $B > O$ an n × n positive definite constant matrix. Let $A^{1/2}$ and $B^{1/2}$ denote the positive definite square roots of A and B, respectively. Consider the matrix
$$I - a(1-\alpha)\, A^{1/2} X B X' A^{1/2} > O$$
where $a > 0$, $\alpha < 1$. Let f(X) be a real-valued function of X such that $f(X) \geq 0$ for all X and f(X) is integrable, $\int_X f(X)\,\mathrm{d}X < \infty$. If we assume that the expected value of the determinant of the above matrix is fixed over all functions f, that is,
$$E\left|I - a(1-\alpha)\, A^{1/2} X B X' A^{1/2}\right| = \text{fixed} \tag{4.1}$$
then, optimizing the entropy (2.1) under the restriction (4.1), the Euler equation is
$$\frac{\partial}{\partial f}\left[\{f(X)\}^{2-\alpha} - \lambda\left|I - a(1-\alpha)\, A^{1/2} X B X' A^{1/2}\right| f(X)\right] = 0$$
A restriction such as the one in Equation (4.1) can be connected to the volume of a certain parallelotope or to random geometrical objects. Solving the Euler equation, we have
$$f(X) = \hat{C}\left|I - a(1-\alpha)\, A^{1/2} X B X' A^{1/2}\right|^{\frac{1}{1-\alpha}} \tag{4.2}$$
where $\hat{C}$ is a constant. A more general form is obtained by imposing the restriction that the expected value of $|A^{1/2} X B X' A^{1/2}|^{\gamma(1-\alpha)}\, |I - a(1-\alpha) A^{1/2} X B X' A^{1/2}|$ is a fixed quantity over all functions f. Then
$$f(X) = \hat{C}_1\left|A^{1/2} X B X' A^{1/2}\right|^{\gamma}\left|I - a(1-\alpha)\, A^{1/2} X B X' A^{1/2}\right|^{\frac{1}{1-\alpha}} \tag{4.3}$$
for $\alpha < 1$, $a > 0$, $A > O$, $B > O$, where X is p × n, n ≥ p, of full rank p, and a prime denotes the transpose. The model in Equation (4.3) can switch among three functional forms: one family for $\alpha < 1$, a second family for $\alpha > 1$, and a third family for $\alpha \to 1$. In fact, Equation (4.3) contains all matrix-variate statistical densities in current use in the physical and engineering sciences. For evaluating the normalizing constants in all three cases, the first step is to make the transformation
$$Y = A^{1/2} X B^{1/2} \;\Rightarrow\; \mathrm{d}Y = |A|^{n/2}\, |B|^{p/2}\,\mathrm{d}X$$
see [10] for the Jacobian of this transformation. After this stage, all the steps in the previous sections are applicable, and we use matrix-variate type-1 beta, type-2 beta, and gamma integrals to carry out the final evaluation of the normalizing constants. Since the steps are parallel, the details are omitted here.
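The Jacobian above can be verified numerically (an illustrative sketch; A and B below are randomly generated positive definite matrices): in vectorized form, the map X ↦ A^(1/2) X B^(1/2) has matrix B^(1/2) ⊗ A^(1/2), whose determinant is |A|^(n/2) |B|^(p/2).

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 3, 4                                      # arbitrary dimensions with n >= p

def spd_sqrt(m):
    """Positive definite square root via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return v @ np.diag(np.sqrt(w)) @ v.T

# Random positive definite A (p x p) and B (n x n):
Ra, Rb = rng.standard_normal((p, p)), rng.standard_normal((n, n))
A, B = Ra @ Ra.T + p * np.eye(p), Rb @ Rb.T + n * np.eye(n)

# vec(A^{1/2} X B^{1/2}) = (B^{1/2} kron A^{1/2}) vec(X) (column-major vec,
# using the symmetry of B^{1/2}); the determinant of this matrix is the Jacobian.
K = np.kron(spd_sqrt(B), spd_sqrt(A))
jacobian = abs(np.linalg.det(K))
predicted = np.linalg.det(A) ** (n / 2) * np.linalg.det(B) ** (p / 2)
print(jacobian, predicted)                       # agree: dY = |A|^{n/2} |B|^{p/2} dX
```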

5. Standard Deviation Analysis and Diffusion Entropy Analysis

Scale invariance has been found to hold for complex systems, and the correct evaluation of the scaling exponents is of fundamental importance in assessing whether universality classes exist. Diffusion is typically quantified in terms of a relationship between the fluctuation of a variable x and time t. A widely used method for the analysis of complexity rests on the assessment of the scaling exponent of the diffusion process generated by a time series. According to the prescription of Peng et al. [15], the numbers of a time series are interpreted as generating diffusion fluctuations, and one shifts attention from the time series to the probability density function (pdf) p(x, t), where x denotes the variable collecting the fluctuations and t is the diffusion time. In this case, if the time series is stationary, the scaling property of the pdf of the diffusion process takes the form
$$p(x, t) = \frac{1}{t^{\delta}}\, F\left(\frac{x}{t^{\delta}}\right) \tag{5.1}$$
where δ is a scaling exponent. Diffusion may scale linearly with time, leading to ordinary diffusion, or it may scale nonlinearly with time, leading to anomalous diffusion. Anomalous diffusion processes can be classified as Gaussian or Lévy, depending on whether the central limit theorem (CLT) holds. The CLT entails ordinary statistical mechanics; that is, it entails a Gaussian form for F in Equation (5.1), corresponding to a random walk without temporal correlations (i.e., δ = 0.5). Under the CLT, the probability function p(x, t) describing the probabilities of x(t) has a finite second moment ⟨x²⟩; when the second moment diverges, x(t) no longer falls under the CLT, and instead the generalized central limit theorem applies. Failure of the CLT means that, instead of ordinary statistical mechanics, non-extensive statistical mechanics may be utilized [8,9].
Scafetta and Grigolini [16] established that Diffusion Entropy Analysis (DEA), a method of statistical analysis based on the Shannon entropy (the α → 1 limit of Equation (1.1)) of the diffusion process, determines the correct scaling exponent δ even when the statistical properties, as well as the dynamic properties, are anomalous. The other methods usually adopted to detect scaling, for example Standard Deviation Analysis (SDA), are based on the numerical evaluation of the variance. Consequently, these methods detect a power index, denoted H by Mandelbrot [17] in honor of Hurst, which might depart from the scaling δ of Equation (5.1). These variance methods (cf. Fourier analysis and wavelet analysis; see [18,19]) produce correct results in the Gaussian case, where H = δ, but fail to detect the correct scaling of the pdf, for example, in the case of Lévy flight, where the variance diverges, or in the case of Lévy walk, where δ and H do not coincide, being related by δ = 1/(3 − 2H). The case H = δ = 0.5 is that of a completely uncorrelated random process. The case δ = 1 is that of a completely regular process undergoing ballistic motion. Figure 1, Figure 2, Figure 3 and Figure 4 clearly show that the development of the diffusion entropy over time for solar neutrinos meets neither the former nor the latter case. The Shannon entropy of the diffusion process at time t is defined by
$$S(t) = -\int_{-\infty}^{\infty} p(x, t)\, \ln[p(x, t)]\,\mathrm{d}x \tag{5.2}$$
If the scaling condition of Equation (5.1) holds true, it is easy to prove that
$$S(t) = A + \delta \ln(t) \tag{5.3}$$
where
$$A \equiv -\int_{-\infty}^{\infty} F(y)\, \ln[F(y)]\,\mathrm{d}y$$
and $y = x/t^{\delta}$. Numerically, the scaling coefficient δ can be evaluated by fitting curves of the form of Equation (5.3), which on a linear-log scale is a straight line. Even though time series extracted from complex environments may not show a pure scaling behavior as in Equation (5.3) but, instead, patterns with oscillations due to periodicities, one can still observe how the diffusion entropy grows linearly in ln(t) and estimate the diffusion exponent with reasonable accuracy.
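A minimal sketch of both procedures on a synthetic series follows (illustration only: this is not the SuperKamiokande data, and the window and bin choices are arbitrary). The diffusion variable is the sum over windows of length t; SDA fits the log of the standard deviation against ln t to estimate H, and DEA fits S(t) of Equation (5.2) against ln t to estimate δ (Equation (5.3)). For an uncorrelated Gaussian series, both estimates should come out close to 0.5.

```python
import numpy as np

rng = np.random.default_rng(42)
xi = rng.standard_normal(20_000)      # synthetic uncorrelated series (placeholder data)

def diffusion(series, t):
    """Overlapping window sums of length t: the diffusion variable x(t)."""
    cs = np.concatenate(([0.0], np.cumsum(series)))
    return cs[t:] - cs[:-t]

def entropy(x, bins=50):
    """Histogram estimate of the Shannon entropy, Eq. (5.2), of the pdf of x."""
    hist, edges = np.histogram(x, bins=bins, density=True)
    keep = hist > 0
    return -np.sum(hist[keep] * np.log(hist[keep]) * np.diff(edges)[keep])

ts = np.unique(np.logspace(0.5, 3, 12).astype(int))
S = [entropy(diffusion(xi, t)) for t in ts]       # DEA: S(t) = A + delta ln t
sd = [np.std(diffusion(xi, t)) for t in ts]       # SDA: sd(t) ~ t^H

delta = np.polyfit(np.log(ts), S, 1)[0]
H = np.polyfit(np.log(ts), np.log(sd), 1)[0]
print("DEA delta:", delta, "SDA H:", H)           # both near 0.5 here
```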
Figure 1. Standard Deviation Analysis (SDA) of the boron solar neutrino data from SuperKamiokande I and II. The green line is a straight line with slope δ = 0.5. The red line reflects the approximate straight-line slope of the real data, with δ = 0.65. The exact result of the SDA is shown by the blue line and indicates a change in the diffusion entropy over time from δ > 0.5 to δ = 0.5.
Figure 2. Diffusion Entropy Analysis (DEA) of the boron solar neutrino data from SuperKamiokande I and II. The green line is a straight line with slope δ = 0.5. The red line reflects the approximate straight-line slope of the real data, with δ = 0.88. In comparison with Figure 1, the green and red lines are remarkably different from each other and indicate strong anomalous diffusion. The exact result of the DEA is shown by the blue line and indicates a development over time from periodic modulation to asymptotic saturation.
Figure 3. Standard Deviation Analysis (SDA) of the hep solar neutrino data from SuperKamiokande I and II. The green line is a straight line with slope δ = 0.5. The red line reflects the approximate straight-line slope of the real data, with δ = 0.35. Note the remarkable difference between the boron analysis results, with δ > 0.5, and the hep analysis results shown in this figure, with δ < 0.5. This is an indication of superdiffusion in the first case and subdiffusion in the second case. The exact result of the SDA is shown by the blue line and indicates a change in the diffusion entropy over time from δ > 0.5 to δ < 0.5.
Figure 1, Figure 2, Figure 3 and Figure 4 show the diffusion entropy as a function of time for two different time series; they present the numerical results of Standard Deviation Analysis and Diffusion Entropy Analysis for solar neutrino data taken by the SuperKamiokande experiments I (SK-I, 1996–2001, 1496 days, 5.0–20.0 MeV) and II (SK-II, 2002–2005, 791 days, 8.0–20.0 MeV). SuperKamiokande [20] is a 50 kiloton water Cherenkov detector located at the Kamioka Observatory of the Institute for Cosmic Ray Research, University of Tokyo. It was designed to study solar neutrino oscillations and to carry out searches for the decay of the nucleon. The SuperKamiokande experiment began in 1996 and, in the ensuing decade of running, produced extremely important results in the fields of atmospheric and solar neutrino oscillations, along with setting stringent limits on the decay of the nucleon and on the existence of dark matter and astrophysical sources of neutrinos. Perhaps most crucially, SuperKamiokande for the first time definitively showed that neutrinos have mass and undergo flavor oscillations.
An additional feature of the behavior of S(t) over time in Figure 2 and Figure 4 is the distinct oscillations characteristic of processes with periodic modulation and asymptotic saturation. They appear for large δ. At the current stage of research, the origin of these oscillations is an open problem [21].
Figure 4. Diffusion Entropy Analysis (DEA) of the hep solar neutrino data from SuperKamiokande I and II. The green line is a straight line with slope δ = 0.5. The red line reflects the approximate straight-line slope of the real data, with δ = 0.8. In comparison with Figure 3, the green and red lines are remarkably different from each other, similar to the boron data analysis, and indicate strong anomalous diffusion. The exact result of the DEA is shown by the blue line and indicates a development over time from periodic modulation to asymptotic saturation, similar to the boron analysis results.

6. Conclusions

An α-generalized entropy measure, parallel to the Havrda-Charvat entropy and related to the Tsallis entropy, was introduced for the scalar, multivariable, and matrix cases, respectively. This entropy measure was optimized under different types of restrictions, leading to a generalized type-1 beta family of densities, a generalized type-2 beta family of densities, and a generalized gamma family of densities. The pathway model, through its parameter α, establishes links between many entropic, distributional, and differential models utilized in the literature. The pathway model provides the ways and means to switch from the Gaussian form of densities to heavy-tailed densities and, through appropriate normalizing constants, to statistical densities. The simplest case of the pathway model, Shannon entropy, is used for the numerical treatment of diffusion entropy analysis and is compared to standard deviation analysis for solar neutrino data from SuperKamiokande. Such a procedure will be extended to other entropy measures of the pathway model in the future. The results of evaluating the simplest case, Shannon entropy, already show that the solar neutrino data exhibit a non-Gaussian signature and contain a signal of modulation with subsequent saturation. This is a clear indication of the superiority of diffusion entropy analysis, which focuses on the time development of the probability density function, over standard deviation analysis, which focuses on the time development of the variance. Consequences of these results for so-called solar modeling will be discussed elsewhere.

Acknowledgements

The authors would like to thank the Department of Science and Technology, Government of India, for financial assistance for this work under project No. SR/S4/MS:287/05. The authors are also grateful to A. Haubold, Columbia University, New York, for the numerical analysis of the solar neutrino data with Standard Deviation Analysis and Diffusion Entropy Analysis.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Greven, A.; Keller, G.; Warnecke, G. Entropy; Princeton University Press: Princeton, NJ, USA, 2003.
  2. Penrose, R. Cycles of Time: An Extraordinary New View of the Universe; The Bodley Head: London, UK, 2010.
  3. Mathai, A.M. A pathway to matrix variate gamma and normal densities. Linear Algebra Appl. 2005, 396, 317–328.
  4. Tsallis, C. Possible generalizations of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487.
  5. Beck, C. Stretched exponentials from superstatistics. Physica A 2006, 365, 96–101.
  6. Beck, C.; Cohen, E.G.D. Superstatistics. Physica A 2003, 322, 267–275.
  7. Mathai, A.M.; Rathie, P.N. Basic Concepts in Information Theory and Statistics: Axiomatic Foundations and Applications; Wiley Eastern: New Delhi, India; Wiley Halsted: New York, NY, USA, 1975.
  8. Gell-Mann, M.; Tsallis, C. Nonextensive Entropy: Interdisciplinary Applications; Oxford University Press: New York, NY, USA, 2004.
  9. Tsallis, C. Introduction to Nonextensive Statistical Mechanics: Approaching a Complex World; Springer: New York, NY, USA, 2009.
  10. Mathai, A.M. Jacobians of Matrix Transformations and Functions of Matrix Argument; World Scientific Publishing: New York, NY, USA, 1997.
  11. Mathai, A.M. Some properties of Mittag-Leffler functions and matrix-variate analogues: A statistical perspective. Fract. Calc. Appl. Anal. 2010, 13, 113–132.
  12. Mathai, A.M.; Haubold, H.J. Pathway model, superstatistics, Tsallis statistics and a generalized measure of entropy. Physica A 2007, 375, 110–122.
  13. Mathai, A.M.; Haubold, H.J. Special Functions for Applied Scientists; Springer: New York, NY, USA, 2008.
  14. Mathai, A.M.; Provost, S.B.; Hayakawa, T. Bilinear Forms and Zonal Polynomials; Springer: New York, NY, USA, 1995.
  15. Peng, C.K.; Buldyrev, S.V.; Havlin, S.; Simons, M.; Stanley, H.E.; Goldberger, A.L. Mosaic organization of DNA nucleotides. Phys. Rev. E 1994, 49, 1685–1689.
  16. Scafetta, N.; Grigolini, P. Scaling detection in time series: Diffusion entropy analysis. Phys. Rev. E 2002, 66, 036130.
  17. Mandelbrot, B.B. The Fractal Geometry of Nature; W.H. Freeman and Company: New York, NY, USA, 1983.
  18. Haubold, H.J.; Mathai, A.M. A heuristic remark on the periodic variation in the number of solar neutrinos detected on Earth. Astrophys. Space Sci. 1995, 228, 113–134.
  19. Sakurai, K.; Haubold, H.J.; Shirai, T. The variation of the solar neutrino fluxes over time in the Homestake, GALLEX(GNO), and the Super-Kamiokande experiments. Space Radiat. 2008, 5, 207–216.
  20. SuperKamiokande. Available online: http://www-sk.icrr.u-tokyo.ac.jp/sk/index-e.html (accessed on 20 September 2013).
  21. Sebastian, N.; Joseph, D.P.; Nair, S.S. Overview of the pathway idea in statistical and physical sciences. arXiv:1307.793 [math-ph].
