Article

Jeffreys Divergence and Generalized Fisher Information Measures on Fokker–Planck Space–Time Random Field

School of Mathematics, Tianjin University, Tianjin 300350, China
Entropy 2023, 25(10), 1445; https://doi.org/10.3390/e25101445
Submission received: 24 August 2023 / Revised: 1 October 2023 / Accepted: 11 October 2023 / Published: 13 October 2023
(This article belongs to the Special Issue Coding and Entropy)

Abstract:
In this paper, we derive the Jeffreys divergence, the generalized Fisher divergence, and the corresponding De Bruijn identities for space–time random fields. First, we establish the connection between the Jeffreys divergence and the generalized Fisher information of a single space–time random field with respect to the time and space variables. Furthermore, we obtain the Jeffreys divergence between two space–time random fields obtained with different parameters under the same type of Fokker–Planck equations. Then, the identities between the partial derivatives of the Jeffreys divergence with respect to the space–time variables and the generalized Fisher divergence are found; these are the De Bruijn identities. Finally, we present three examples of Fokker–Planck equations on space–time random fields, identify their density functions, and derive the Jeffreys divergence, generalized Fisher information, generalized Fisher divergence, and their corresponding De Bruijn identities.

1. Introduction

Information entropy and Fisher information are quantities that measure random information, and entropy divergence is derived from information entropy to measure the difference between two probability distributions. Formally, we can construct straightforward definitions of entropy divergence and Fisher information for a space–time random field founded on the classical definitions. The density function in these definitions can be obtained in many different ways. In this paper, the density function of a space–time random field is obtained from Fokker–Planck equations. The traditional Fokker–Planck equation is a partial differential equation that describes the probability density function of a random process [1]; it describes how the density function evolves in time. However, the Fokker–Planck equations for random fields, especially for space–time random fields, do not yet possess a definitive form. The classical equation needs to be generalized because the variable changes from time to space–time.
In this paper, we mainly obtain the relation between the Jeffreys divergence and the generalized Fisher information measure for space–time random fields generated by Fokker–Planck equations. The Jeffreys divergence is a symmetric entropy divergence generalized from the Kullback–Leibler divergence (KL divergence). It is a measure in information theory and statistics that evaluates the difference between two probability distributions. However, if there is no overlap between the two distributions, the outcome is infinite, which is a limitation of this approach. To prevent infinite results, we examine how the Jeffreys divergence relates to the generalized Fisher information for a space–time random field with small variations in the space–time parameters.
Moreover, the classical De Bruijn identity describes the relationship between the differential entropy and the Fisher information of the Gaussian channel [2], and it can be generalized to other cases [3,4,5,6,7]. Following these ideas, we obtain De Bruijn identities relating the Jeffreys divergence and the generalized Fisher information of space–time random fields whose density functions satisfy Fokker–Planck equations.

1.1. Space–Time Random Field

The random field was first studied by Kolmogorov [8,9,10], and it was gradually developed by Yaglom [11,12,13] in the middle of the last century. A random field with $n \in \mathbb{N}_+$ variables can be expressed as

$$X(t_1, t_2, \ldots, t_n) \qquad (1)$$

where $(t_1, t_2, \ldots, t_n) \in \mathbb{R}^n$. We call (1) a generalized random field or a multiparameter stochastic process. In many practical applications, we use the concept of a space–time random field. A space–time random field on a d-dimensional space is expressed as

$$X(t, x) \qquad (2)$$

where $(t, x) \in \mathbb{R}_+ \times \mathbb{R}^d$ are the space–time variables. It has many applications in statistics, finance, signal processing, stochastic partial differential equations, and other fields [14,15,16,17,18,19,20,21,22,23,24,25,26,27].

1.2. Kramers–Moyal Expansion and Fokker–Planck Equation

In the literature on stochastic processes, the Kramers–Moyal expansion refers to a Taylor-series expansion of the master equation, named after Kramers and Moyal [28,29]. The Kramers–Moyal expansion is a partial differential equation of infinite order,

$$\frac{\partial}{\partial t} p(u,t) = \sum_{n=1}^{\infty} \frac{(-1)^n}{n!} \frac{\partial^n}{\partial u^n} \big[ K_n(u,t)\, p(u,t) \big] \qquad (3)$$

where $p(u,t)$ is the density function and

$$K_n(u,t) = \int_{\mathbb{R}} (u' - u)^n\, W(u' \mid u, t)\, du' \qquad (4)$$

is the n-th-order conditional moment. Here, $W(u' \mid u, t)$ is the transition probability rate. The Fokker–Planck equation is obtained by keeping only the first two terms of the Kramers–Moyal expansion. In statistical mechanics, the Fokker–Planck equation is usually used to describe the time evolution of the probability density function of the velocity of a particle under the influence of drag forces and random forces, as in the famous Brownian motion, and this equation is commonly employed for determining the density function of an Itô stochastic differential equation [1].
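Since conditional-moment coefficients of this kind are used throughout the paper, a small simulation may help fix ideas. The following sketch estimates the first two Kramers–Moyal coefficients of an Ornstein–Uhlenbeck process from its short-time conditional moments; the process, its parameters, and the step size are illustrative choices, not taken from the paper.

```python
import numpy as np

# Hypothetical illustration: estimate the Kramers-Moyal coefficients
# K_1 (drift) and K_2 (diffusion) of an Ornstein-Uhlenbeck process
#   dX = -theta * X dt + sigma dW
# from short-time conditional moments; all values are arbitrary choices.
rng = np.random.default_rng(0)
theta, sigma = 1.0, 0.5
u, dt, n = 1.0, 1e-3, 400_000

# One Euler-Maruyama step started from X(t) = u, repeated n times
increments = -theta * u * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)

K1 = increments.mean() / dt          # should approach the drift -theta * u
K2 = (increments**2).mean() / dt     # should approach sigma**2

print(K1, K2)
```

Keeping only these two coefficients in the expansion is exactly the truncation that produces the Fokker–Planck equation.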

1.3. Differential Entropy and De Bruijn Identity

The entropy of a continuous distribution was proposed by Shannon in 1948 and is known as differential entropy [30]:

$$h(X) = -\int_{\mathbb{R}^d} p(x) \log p(x)\, dx \qquad (5)$$

where $h(\cdot)$ denotes the differential entropy and $p(\cdot)$ is the probability density function of X. However, differential entropy is not easy to calculate and does not always exist. There are related studies on the entropy of stochastic processes and continuous systems [31,32,33,34]. If we consider the classical one-dimensional Gaussian channel model

$$Y_t = X + \sqrt{t}\, G \qquad (6)$$

where X is the input signal, G is standard Gaussian noise, $t \geq 0$ is the noise strength, and $Y_t$ is the output, the density of $Y_t$ satisfies the following Fokker–Planck equation:

$$\frac{\partial}{\partial t} p(y,t) = \frac{1}{2} \frac{\partial^2}{\partial y^2} p(y,t) \qquad (7)$$

Furthermore, the differential entropy of $Y_t$ can be calculated, and its derivative with respect to t is

$$\frac{d\, h_{Y_t}(t)}{dt} = \frac{1}{2} FI_{Y_t}(t) \qquad (8)$$

where

$$FI_{Y_t}(t) = \int_{\mathbb{R}} \left( \frac{\partial}{\partial y} \log p(y,t) \right)^2 p(y,t)\, dy \qquad (9)$$

is the Fisher information of $Y_t$. Equation (8) is the De Bruijn identity. It connects the differential entropy $h(\cdot)$ and the Fisher information $FI(\cdot)$, showing that they are different aspects of the concept of "information".

1.4. Entropy Divergence

In information theory and statistics, an entropy divergence is a statistical distance generated from information entropy to measure the difference between two probability distributions. There are various divergences generated by information entropy, such as the Kullback–Leibler divergence [35], Jeffreys divergence [36], Jensen–Shannon divergence [37], and Rényi divergence [38]. These measures are applied in a variety of fields such as finance, economics, biology, signal processing, pattern recognition, and machine learning [39,40,41,42,43,44,45,46,47,48,49]. In this paper, we mainly focus on the Jeffreys divergence of two distributions, defined as

$$JD(P, Q) = \int_{\mathbb{R}} \big( p(u) - q(u) \big) \log \frac{p(u)}{q(u)}\, d\mu(u) \qquad (10)$$

where μ is a measure on the range of u.
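For two Gaussian distributions the Jeffreys divergence has a simple closed form, which makes a quadrature check of the definition straightforward. The closed-form expression used below is the standard Gaussian identity (the log-variance terms of the two KL divergences cancel); the means and variances are illustrative.

```python
import numpy as np

# Quadrature check of the Jeffreys divergence between P = N(m1, v1) and
# Q = N(m2, v2) against the closed form
#   (v1 + d^2)/(2 v2) + (v2 + d^2)/(2 v1) - 1,  d = m1 - m2.
# All parameter values are illustrative.
m1, v1, m2, v2 = 0.0, 1.0, 0.7, 1.8
u = np.linspace(-30, 30, 400_001)
dx = u[1] - u[0]

p = np.exp(-(u - m1)**2 / (2*v1)) / np.sqrt(2*np.pi*v1)
q = np.exp(-(u - m2)**2 / (2*v2)) / np.sqrt(2*np.pi*v2)

jd_num = ((p - q) * np.log(p / q)).sum() * dx   # numerical integral
d = m1 - m2
jd_exact = (v1 + d**2) / (2*v2) + (v2 + d**2) / (2*v1) - 1

print(jd_num, jd_exact)
```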

2. Notations, Definitions, and Propositions

2.1. Notations and Assumptions

In this paper, we use the following notations and assumptions.
Given a probability space $(\Omega, \mathcal{F}, P)$, two real-valued space–time random fields are denoted as $X(\omega; t, x)$, $Y(\omega; s, y)$ or $X(t,x)$, $Y(s,y)$, where $\omega \in \Omega$ and $(t,x), (s,y) \in \mathbb{R}_+ \times \mathbb{R}^d$, $d \in \mathbb{N}_+$, are the space–time variables.
The probability density functions of P and Q are denoted as p and q. For $u \in \mathbb{R}$, $p(u; t, x)$ is the density value at $(t,x)$ of X and $q(u; s, y)$ is the density value at $(s,y)$ of Y.
Unless there are specific restrictions on the ranges of the variables, we suppose that the density functions $p(u;t,x)$ and $q(u;s,y)$ belong to $C^{2,1,1}(\mathbb{R} \times \mathbb{R}_+ \times \mathbb{R}^d, \mathbb{R})$. This means that $p(u;t,x)$ and $q(u;s,y)$ are twice partially differentiable with respect to u and once with respect to $(t,x)$ or $(s,y)$, respectively.
Vectors that differ from $x = (x_1, x_2, \ldots, x_k, \ldots, x_d)$ only in the k-th coordinate are denoted $\tilde{x}(k) = (x_1, x_2, \ldots, x_k', \ldots, x_d)$, where the k-th coordinates are $x_k$ and $x_k'$, $k = 1, 2, \ldots, d$.

2.2. Definitions

To obtain the generalized De Bruijn identities between Jeffreys divergence and Fisher divergence, we need to introduce some new definitions and propositions.
The primary and most important measure of information is the Kullback–Leibler divergence for random fields. Definition 1 is easily obtained as follows.
Definition 1.
The Kullback–Leibler divergence between two space–time random fields $X(t,x)$ and $Y(s,y)$, $(t,x), (s,y) \in \mathbb{R}_+ \times \mathbb{R}^d$, with density functions $p(u;t,x)$ and $q(u;s,y)$, is defined as

$$KL\big(P(t,x) \,\|\, Q(s,y)\big) = \int_{\mathbb{R}} p(u;t,x) \log \frac{p(u;t,x)}{q(u;s,y)}\, du \qquad (11)$$

Similar to the classical Kullback–Leibler divergence, the Kullback–Leibler divergence on random fields is not symmetric, i.e.,

$$KL\big(P(t,x) \,\|\, Q(s,y)\big) \neq KL\big(Q(s,y) \,\|\, P(t,x)\big) \qquad (12)$$
Following the classical definition of Jeffreys divergence on two random variables, we mainly consider Jeffreys divergence for random fields in this paper.
Definition 2.
The Jeffreys divergence between space–time random fields $X(t,x)$ and $Y(s,y)$, $(t,x), (s,y) \in \mathbb{R}_+ \times \mathbb{R}^d$, with density functions $p(u;t,x)$ and $q(u;s,y)$, is defined as

$$JD\big(P(t,x), Q(s,y)\big) = KL\big(P(t,x) \,\|\, Q(s,y)\big) + KL\big(Q(s,y) \,\|\, P(t,x)\big) \qquad (13)$$

Here, we replace the symbol ∥ with a comma in the divergence notation to emphasize the symmetry.
Another significant measure of information is Fisher information. In this paper, we consider the generalized Fisher information of the space–time random field.
Definition 3.
The generalized Fisher information of the space–time random field $X(t,x)$, $(t,x) \in \mathbb{R}_+ \times \mathbb{R}^d$, with density function $p(u;t,x)$, defined by a nonnegative function $f(\cdot)$, is given by

$$FI_f\big(P(t,x)\big) = \int_{\mathbb{R}} f(u) \left( \frac{\partial}{\partial u} \log p(u;t,x) \right)^2 p(u;t,x)\, du \qquad (14)$$

In the case where f is identically equal to 1, $FI_1(P(t,x))$ is the usual Fisher information. In addition to Equation (14), there are similar forms of generalized Fisher information,

$$FI_f^{(t)}\big(P(t,x)\big) = \int_{\mathbb{R}} f(u) \left( \frac{\partial}{\partial t} \log p(u;t,x) \right)^2 p(u;t,x)\, du \qquad (15)$$

and

$$FI_f^{(x_k)}\big(P(t,x)\big) = \int_{\mathbb{R}} f(u) \left( \frac{\partial}{\partial x_k} \log p(u;t,x) \right)^2 p(u;t,x)\, du \qquad (16)$$

for $k = 1, 2, \ldots, d$.
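A quick quadrature check of the definition with a non-constant weight may be useful: for $f(u) = u^2$ and a centered Gaussian with variance v, the score is $-u/v$, so the integral reduces to $E[u^4]/v^2 = 3$ for every v. The weight and variance below are illustrative choices.

```python
import numpy as np

# Generalized Fisher information with weight f(u) = u^2 for X ~ N(0, v):
# the score is -u/v, so FI_f = E[u^4]/v^2 = 3 independently of v
# (since E[u^4] = 3 v^2 for a centered Gaussian). v is arbitrary.
v = 1.7
u = np.linspace(-40, 40, 800_001)
dx = u[1] - u[0]

p = np.exp(-u**2 / (2*v)) / np.sqrt(2*np.pi*v)
score = -u / v                              # d/du log p(u)
fi_f = (u**2 * score**2 * p).sum() * dx     # quadrature of the definition

print(fi_f)  # ≈ 3.0
```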
Obviously, (15) and (16) are generalized Fisher information measures with respect to the space–time variables. Regarding the generalized Fisher information (14), we arrive at the following simple proposition.
Proposition 1.
For an arbitrary positive continuous function $f(\cdot)$, let X and Y be independent continuous random variables and suppose that the generalized Fisher information

$$FI_f(X) := \int_{\mathbb{R}} f(u) \left( \frac{d}{du} \log p_X(u) \right)^2 p_X(u)\, du \qquad (17)$$

is well defined, and similarly for Y and X + Y, where $p_X(u)$ denotes the probability density of X. Then, we have the generalized Fisher information inequality

$$\frac{1}{FI_f(X+Y)} \geq \frac{1}{FI_f(X)} + \frac{1}{FI_f(Y)} \qquad (18)$$

When $f \equiv 1$, $FI_1(X)$ is the Fisher information in the standard case.
Proof. 
Denote $Z = X + Y$ and let $p_X$, $p_Y$, $p_Z$ be the corresponding densities; by independence,

$$p_Z(z) = \int_{\mathbb{R}} p_X(x)\, p_Y(z - x)\, dx \qquad (19)$$

with derivative

$$p_Z'(z) = \int_{\mathbb{R}} p_X'(x)\, p_Y(z - x)\, dx \qquad (20)$$

If $p_X$, $p_Y$, and $p_Z$ never vanish,

$$\frac{p_Z'(z)}{p_Z(z)} = \int_{\mathbb{R}} \frac{p_X'(x)\, p_Y(z-x)}{p_Z(z)}\, dx = \int_{\mathbb{R}} \frac{p_X(x)\, p_Y(z-x)}{p_Z(z)} \cdot \frac{p_X'(x)}{p_X(x)}\, dx = E\left[ \frac{p_X'(x)}{p_X(x)} \,\Big|\, Z = z \right] \qquad (21)$$

is the conditional expectation of $p_X'(x)/p_X(x)$ given $Z = z$. Similarly, we can obtain

$$\frac{p_Z'(z)}{p_Z(z)} = E\left[ \frac{p_Y'(y)}{p_Y(y)} \,\Big|\, Z = z \right] \qquad (22)$$

and, for $\mu, \lambda \in \mathbb{R}$, we also find that

$$E\left[ \mu \frac{p_X'(x)}{p_X(x)} + \lambda \frac{p_Y'(y)}{p_Y(y)} \,\Big|\, Z = z \right] = (\mu + \lambda) \frac{p_Z'(z)}{p_Z(z)} \qquad (23)$$

Then, by the conditional Cauchy–Schwarz inequality, we have

$$(\mu + \lambda)^2 \left( \frac{p_Z'(z)}{p_Z(z)} \right)^2 = \left( E\left[ \mu \frac{p_X'(x)}{p_X(x)} + \lambda \frac{p_Y'(y)}{p_Y(y)} \,\Big|\, Z = z \right] \right)^2 \leq E\left[ \left( \mu \frac{p_X'(x)}{p_X(x)} + \lambda \frac{p_Y'(y)}{p_Y(y)} \right)^2 \,\Big|\, Z = z \right] \qquad (24)$$

with equality only if

$$\mu \frac{p_X'(x)}{p_X(x)} + \lambda \frac{p_Y'(y)}{p_Y(y)} = (\mu + \lambda) \frac{p_Z'(z)}{p_Z(z)} \qquad (25)$$

with probability 1 whenever $z = x + y$, and we have

$$f(z)\, (\mu + \lambda)^2 \left( \frac{p_Z'(z)}{p_Z(z)} \right)^2 \leq f(z)\, E\left[ \left( \mu \frac{p_X'(x)}{p_X(x)} + \lambda \frac{p_Y'(y)}{p_Y(y)} \right)^2 \,\Big|\, Z = z \right] \qquad (26)$$

Averaging both sides over the distribution of z,

$$E\left[ f(z)\, (\mu + \lambda)^2 \left( \frac{p_Z'(z)}{p_Z(z)} \right)^2 \right] \leq \mu^2\, E\left[ f(z)\, E\left[ \left( \frac{p_X'(x)}{p_X(x)} \right)^2 \Big|\, Z = z \right] \right] + \lambda^2\, E\left[ f(z)\, E\left[ \left( \frac{p_Y'(y)}{p_Y(y)} \right)^2 \Big|\, Z = z \right] \right] \qquad (27)$$

i.e.,

$$(\mu + \lambda)^2\, FI_f(X + Y) \leq \mu^2\, FI_f(X) + \lambda^2\, FI_f(Y) \qquad (28)$$

Letting $\mu = 1/FI_f(X)$ and $\lambda = 1/FI_f(Y)$, we obtain

$$\frac{1}{FI_f(X+Y)} \geq \frac{1}{FI_f(X)} + \frac{1}{FI_f(Y)} \qquad (29)$$

□
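With $f \equiv 1$ the inequality just proved is the classical Stam inequality, and it can be checked numerically. The sketch below uses a Gaussian and a Laplace variable (both with unit Fisher information), computes the density of the sum by grid convolution, and evaluates the Fisher informations by quadrature; the grid and the distributions are ad hoc choices for this illustration.

```python
import numpy as np

# Numeric illustration of the inequality for f = 1 with X ~ N(0, 1)
# and Y ~ Laplace(1); both have Fisher information 1, so the right-hand
# side is 2 and the left-hand side should be at least 2.
u = np.linspace(-40, 40, 8001)
dx = u[1] - u[0]

pX = np.exp(-u**2 / 2) / np.sqrt(2*np.pi)   # FI(X) = 1
pY = 0.5 * np.exp(-np.abs(u))               # FI(Y) = 1

pZ = np.convolve(pX, pY, mode="same") * dx  # density of Z = X + Y

def fisher(p):
    # quadrature of integral (p'(u))^2 / p(u) du, avoiding tiny tails
    dp = np.gradient(p, dx)
    mask = p > 1e-12
    return ((dp[mask]**2) / p[mask]).sum() * dx

lhs = 1.0 / fisher(pZ)
rhs = 1.0 / fisher(pX) + 1.0 / fisher(pY)
print(lhs, rhs)   # lhs should be >= rhs
```

Equality in (29) holds only when both summands are Gaussian, so a strict gap is expected here.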
According to Definition 3, we can obtain relevant definitions on the generalized Fisher information measure.
Definition 4.
The generalized cross-Fisher information for space–time random fields $X(t,x)$ and $Y(s,y)$, $(t,x), (s,y) \in \mathbb{R}_+ \times \mathbb{R}^d$, with density functions $p(u;t,x)$ and $q(u;s,y)$, defined by the nonnegative function $f(\cdot)$, is defined as

$$CFI_f\big(P(t,x), Q(s,y)\big) = \int_{\mathbb{R}} f(u) \left( \frac{\partial}{\partial u} \log q(u;s,y) \right)^2 p(u;t,x)\, du \qquad (30)$$

Similar to the concept of cross-entropy, it is easy to verify that (30) is not symmetric in P and Q.
Definition 5.
The generalized Fisher divergence for space–time random fields $X(t,x)$ and $Y(s,y)$, for $(t,x), (s,y) \in \mathbb{R}_+ \times \mathbb{R}^d$, with density functions $p(u;t,x)$ and $q(u;s,y)$, defined by a nonnegative function $f(\cdot)$, is defined as

$$FD_f\big(P(t,x) \,\|\, Q(s,y)\big) = \int_{\mathbb{R}} f(u) \left( \frac{\partial}{\partial u} \log p(u;t,x) - \frac{\partial}{\partial u} \log q(u;s,y) \right)^2 p(u;t,x)\, du \qquad (31)$$

In particular, when $f \equiv 1$, $FD_1(P(t,x)\,\|\,Q(s,y))$ is the usual Fisher divergence.
Obviously, the generalized Fisher divergence between two random fields is not a symmetric measure of information. To achieve symmetry, we extend (31) with a new formula.
Definition 6.
The generalized Fisher divergence for space–time random fields $X(t,x)$ and $Y(s,y)$, $(t,x), (s,y) \in \mathbb{R}_+ \times \mathbb{R}^d$, with density functions $p(u;t,x)$ and $q(u;s,y)$, defined by nonnegative functions $f(\cdot)$ and $g(\cdot)$, is defined as

$$FD_{(f,g)}\big(P(t,x) \,\|\, Q(s,y)\big) = \int_{\mathbb{R}} \left( f(u;t,x) \frac{\partial}{\partial u} \log p(u;t,x) - g(u;s,y) \frac{\partial}{\partial u} \log q(u;s,y) \right) \left( \frac{\partial}{\partial u} \log p(u;t,x) - \frac{\partial}{\partial u} \log q(u;s,y) \right) \big( p(u;t,x) + q(u;s,y) \big)\, du \qquad (32)$$

In particular, if f equals g, the generalized Fisher divergence for random fields with a single function is denoted as $FD_{(f,f)}\big(P(t,x) \,\|\, Q(s,y)\big)$.
In general, $FD_{(f,g)}\big(P(t,x) \,\|\, Q(s,y)\big)$ is asymmetric with respect to P and Q, i.e.,

$$FD_{(f,g)}\big(P(t,x) \,\|\, Q(s,y)\big) \neq FD_{(f,g)}\big(Q(s,y) \,\|\, P(t,x)\big) \qquad (33)$$
If we suppose that f and g are functions determined by P and Q, i.e.,

$$f(u;t,x) = T[p(t,x)](u), \qquad g(u;s,y) = T[q(s,y)](u) \qquad (34)$$

where T is an operator, the generalized Fisher divergence $FD_{(f,g)}\big(P(t,x) \,\|\, Q(s,y)\big)$ can be rewritten as

$$FD_{(f,g)}\big(P(t,x) \,\|\, Q(s,y)\big) = \int_{\mathbb{R}} \left( T[p(t,x)](u) \frac{\partial}{\partial u} \log p(u;t,x) - T[q(s,y)](u) \frac{\partial}{\partial u} \log q(u;s,y) \right) \left( \frac{\partial}{\partial u} \log p(u;t,x) - \frac{\partial}{\partial u} \log q(u;s,y) \right) \big( p(u;t,x) + q(u;s,y) \big)\, du \qquad (35)$$

and we can easily obtain

$$FD_{(f,g)}\big(P(t,x) \,\|\, Q(s,y)\big) = FD_{(g,f)}\big(Q(s,y) \,\|\, P(t,x)\big) \qquad (36)$$
In this case, we call (35) the symmetric Fisher divergence for random fields generated by the operator T and denote it as

$$sFD_T\big(P(t,x), Q(s,y)\big) = \int_{\mathbb{R}} \left( T[p(t,x)](u) \frac{\partial}{\partial u} \log p(u;t,x) - T[q(s,y)](u) \frac{\partial}{\partial u} \log q(u;s,y) \right) \left( \frac{\partial}{\partial u} \log p(u;t,x) - \frac{\partial}{\partial u} \log q(u;s,y) \right) \big( p(u;t,x) + q(u;s,y) \big)\, du \qquad (37)$$

Notice that

$$Aa - Bb = \frac{1}{2} \big[ (A + B)(a - b) + (A - B)(a + b) \big] \qquad (38)$$

for $A, B, a, b \in \mathbb{R}$. Writing $A = T[p(t,x)](u)$, $B = T[q(s,y)](u)$, $a = \frac{\partial}{\partial u} \log p(u;t,x)$, and $b = \frac{\partial}{\partial u} \log q(u;s,y)$, we can rewrite (37) as

$$\begin{aligned} sFD_T\big(P(t,x), Q(s,y)\big) &= \frac{1}{2} \int_{\mathbb{R}} (A + B)(a - b)^2 \big( p(u;t,x) + q(u;s,y) \big)\, du + \frac{1}{2} \int_{\mathbb{R}} (A - B)\big( a^2 - b^2 \big) \big( p(u;t,x) + q(u;s,y) \big)\, du \\ &= \frac{1}{2} \Big[ FD_{T[p(t,x)] + T[q(s,y)]}\big(P(t,x) \,\|\, Q(s,y)\big) + FD_{T[p(t,x)] + T[q(s,y)]}\big(Q(s,y) \,\|\, P(t,x)\big) \Big] \\ &\quad + \frac{1}{2} \Big[ FI_{T[p(t,x)] - T[q(s,y)]}\big(P(t,x)\big) - FI_{T[p(t,x)] - T[q(s,y)]}\big(Q(s,y)\big) \Big] \\ &\quad + \frac{1}{2} \Big[ CFI_{T[p(t,x)] - T[q(s,y)]}\big(Q(s,y), P(t,x)\big) - CFI_{T[p(t,x)] - T[q(s,y)]}\big(P(t,x), Q(s,y)\big) \Big] \end{aligned} \qquad (39)$$
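The decomposition above rests on a pointwise algebraic identity, so it can be verified numerically for any concrete choice of operator and densities. The sketch below takes T to be the identity on densities (so $T[p] = p$ and $T[q] = q$) and two Gaussians; all choices are illustrative.

```python
import numpy as np

# Numeric check of the decomposition of the symmetric Fisher divergence,
# with the operator T taken as the identity on densities (T[p] = p) and
# two Gaussian densities as an arbitrary example.
u = np.linspace(-30, 30, 200_001)
dx = u[1] - u[0]

def gauss(m, v):
    return np.exp(-(u - m)**2 / (2*v)) / np.sqrt(2*np.pi*v)

p, q = gauss(0.0, 1.0), gauss(0.5, 1.5)
a = np.gradient(np.log(p), dx)      # d/du log p
b = np.gradient(np.log(q), dx)      # d/du log q
I = lambda f: f.sum() * dx          # simple quadrature

# left-hand side: sFD_T(P, Q) with A = p, B = q
sfd = I((p*a - q*b) * (a - b) * (p + q))

# right-hand side: Fisher-divergence, Fisher-information, and
# cross-Fisher-information pieces with weights p + q and p - q
fd  = 0.5 * (I((p + q) * (a - b)**2 * p) + I((p + q) * (a - b)**2 * q))
fi  = 0.5 * (I((p - q) * a**2 * p) - I((p - q) * b**2 * q))
cfi = 0.5 * (I((p - q) * a**2 * q) - I((p - q) * b**2 * p))

print(sfd, fd + fi + cfi)  # the two values should agree
```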
Lemma 1
(Kramers–Moyal expansion [28,29]). Suppose that the random process X(t) has moments of all orders; then, the probability density function $p(u,t)$ satisfies the Kramers–Moyal expansion

$$\frac{\partial}{\partial t} p(u,t) = \sum_{n=1}^{\infty} \frac{(-1)^n}{n!} \frac{\partial^n}{\partial u^n} \big[ K_n(u,t)\, p(u,t) \big] \qquad (40)$$

where

$$K_n(u,t) = \int_{\mathbb{R}} (u' - u)^n\, W(u' \mid u, t)\, du' \qquad (41)$$

is the n-th-order conditional moment and $W(u' \mid u, t)$ is the transition probability rate.
Lemma 2
(Pawula theorem [50,51]). If the limit of the conditional moments of the random process X(t),

$$\lim_{\Delta t \to 0} \frac{1}{\Delta t} E\Big[ \big( X(t + \Delta t) - X(t) \big)^n \,\Big|\, X(t) = x \Big] \qquad (42)$$

exists for all $n \in \mathbb{N}_+$, and the limit equals 0 for some even n, then the limits are 0 for all $n \geq 3$.
The Pawula theorem states that there are only three possible cases in the Kramers–Moyal expansion:
(1)
The Kramers–Moyal expansion is truncated at n = 1 , meaning that the process is deterministic;
(2)
The Kramers–Moyal expansion stops at n = 2 , with the resulting equation being the Fokker–Planck equation, and describes diffusion processes;
(3)
The Kramers–Moyal expansion contains infinitely many nonzero terms.
In this paper, we only focus on the case of the Fokker–Planck equation.

3. Main Results and Proofs

In this section, we establish the Fokker–Planck equations for continuous space–time random fields. Additionally, we present the theorem relating Jeffreys divergence and Fisher information, as well as the De Bruijn identities connecting Jeffreys divergence and Fisher divergence.
Theorem 1.
The probability density function $p(u;t,x)$ of the continuous space–time random field X(t,x), $u \in \mathbb{R}$, $(t,x) \in \mathbb{R}_+ \times \mathbb{R}^d$, satisfies the following Fokker–Planck equations:

$$\begin{aligned} \frac{\partial}{\partial t} p(u;t,x) &= \frac{1}{2} \frac{\partial^2}{\partial u^2} \big[ b_0(u;t,x)\, p(u;t,x) \big] - \frac{\partial}{\partial u} \big[ a_0(u;t,x)\, p(u;t,x) \big] \\ \frac{\partial}{\partial x_k} p(u;t,x) &= \frac{1}{2} \frac{\partial^2}{\partial u^2} \big[ b_k(u;t,x)\, p(u;t,x) \big] - \frac{\partial}{\partial u} \big[ a_k(u;t,x)\, p(u;t,x) \big], \quad k = 1, 2, \ldots, d \end{aligned} \qquad (43)$$

where

$$\begin{aligned} a_0(u;t,x) &= \lim_{\Delta t \to 0} \frac{1}{\Delta t} M_1(u; t, \Delta t, x), & b_0(u;t,x) &= \lim_{\Delta t \to 0} \frac{1}{\Delta t} M_2(u; t, \Delta t, x) \\ a_k(u;t,x) &= \lim_{\Delta x_k \to 0} \frac{1}{\Delta x_k} \tilde{M}_1(u; t, x, \Delta x_k), & b_k(u;t,x) &= \lim_{\Delta x_k \to 0} \frac{1}{\Delta x_k} \tilde{M}_2(u; t, x, \Delta x_k), \quad k = 1, 2, \ldots, d \end{aligned} \qquad (44)$$

Here,

$$\begin{aligned} M_n(u; t, \Delta t, x) &= E\Big[ \big( X(t + \Delta t, x) - X(t, x) \big)^n \,\Big|\, X(t,x) = u \Big] \\ \tilde{M}_n(u; t, x, \Delta x_k) &= E\Big[ \big( X(t, x + \Delta x_k e_k) - X(t, x) \big)^n \,\Big|\, X(t,x) = u \Big] \end{aligned} \qquad (45)$$

are the n-th-order conditional moments, and $e_k = (0, \ldots, 0, 1, 0, \ldots, 0) \in \mathbb{R}^d$, $k = 1, 2, \ldots, d$, are the standard orthonormal basis vectors.
Proof. 
For $\Delta t \to 0$, the difference of the density function in the time variable is

$$p(u; t + \Delta t, x) - p(u; t, x) = \sum_{n=1}^{\infty} \frac{(-1)^n}{n!} \frac{\partial^n}{\partial u^n} \big[ M_n(u; t, \Delta t, x)\, p(u; t, x) \big] \qquad (46)$$

where

$$M_n(u; t, \Delta t, x) = E\Big[ \big( X(t + \Delta t, x) - X(t, x) \big)^n \,\Big|\, X(t,x) = u \Big] \qquad (47)$$

is the n-th-order conditional moment. Then, the partial derivative of the density function with respect to t is

$$\frac{\partial}{\partial t} p(u;t,x) = \lim_{\Delta t \to 0} \frac{1}{\Delta t} \sum_{n=1}^{\infty} \frac{(-1)^n}{n!} \frac{\partial^n}{\partial u^n} \big[ M_n(u; t, \Delta t, x)\, p(u; t, x) \big] \qquad (48)$$

The Pawula theorem implies that if the Kramers–Moyal expansion stops after the second term, we obtain the Fokker–Planck equation for the time variable t,

$$\frac{\partial}{\partial t} p(u;t,x) = \frac{1}{2} \frac{\partial^2}{\partial u^2} \big[ b_0(u;t,x)\, p(u;t,x) \big] - \frac{\partial}{\partial u} \big[ a_0(u;t,x)\, p(u;t,x) \big] \qquad (49)$$

where

$$a_0(u;t,x) = \lim_{\Delta t \to 0} \frac{1}{\Delta t} M_1(u; t, \Delta t, x), \qquad b_0(u;t,x) = \lim_{\Delta t \to 0} \frac{1}{\Delta t} M_2(u; t, \Delta t, x) \qquad (50)$$

Similarly, considering an increment $\Delta x_k$ of the spatial variable $x_k$, we obtain the Fokker–Planck equation for $x_k$,

$$\frac{\partial}{\partial x_k} p(u;t,x) = \frac{1}{2} \frac{\partial^2}{\partial u^2} \big[ b_k(u;t,x)\, p(u;t,x) \big] - \frac{\partial}{\partial u} \big[ a_k(u;t,x)\, p(u;t,x) \big] \qquad (51)$$

where

$$a_k(u;t,x) = \lim_{\Delta x_k \to 0} \frac{1}{\Delta x_k} \tilde{M}_1(u; t, x, \Delta x_k), \qquad b_k(u;t,x) = \lim_{\Delta x_k \to 0} \frac{1}{\Delta x_k} \tilde{M}_2(u; t, x, \Delta x_k) \qquad (52)$$

Here,

$$\tilde{M}_n(u; t, x, \Delta x_k) = E\Big[ \big( X(t, x + \Delta x_k e_k) - X(t, x) \big)^n \,\Big|\, X(t,x) = u \Big] \qquad (53)$$

and $e_k = (0, \ldots, 0, 1, 0, \ldots, 0) \in \mathbb{R}^d$, $k = 1, 2, \ldots, d$, are the standard orthonormal basis vectors. □
The Fokker–Planck equations are partial differential equations that describe the probability density function of the space–time random field, similar to the classical Fokker–Planck equation. Solving a system of partial differential equations for general Fokker–Planck equations proves to be challenging. Fortunately, in Section 4 we present three distinct categories of space–time random fields in detail, along with their corresponding Fokker–Planck equations, and deduce their probability density functions.
Next, we examine the relationship between Jeffreys divergence and Fisher information in a single space–time random field when there are different time or spatial variables.
Theorem 2.
Suppose that $p(u;t,x) > 0$ is a continuously differentiable density function of the space–time random field X(t,x), that the partial derivatives $\frac{\partial}{\partial u} p(u;t,x)$, $\frac{\partial}{\partial t} p(u;t,x)$, $\frac{\partial}{\partial x_k} p(u;t,x)$ are continuous bounded functions, and that the integrals in the proof are well defined, $k = 1, 2, \ldots, d$, $u \in \mathbb{R}$, $(t,x) \in \mathbb{R}_+ \times \mathbb{R}^d$. Then, we have

$$\begin{aligned} \lim_{t - s \to 0} \frac{JD\big(P(t,x), P(s,x)\big)}{|t - s|^2} &= FI_1^{(t)}\big(X(t,x)\big) \\ \lim_{x_k - x_k' \to 0} \frac{JD\big(P(t,x), P(t,\tilde{x}(k))\big)}{|x_k - x_k'|^2} &= FI_1^{(x_k)}\big(X(t,x)\big), \quad k = 1, 2, \ldots, d \end{aligned} \qquad (54)$$
Proof. 
For fixed $x \in \mathbb{R}^d$ and $s, t > 0$,

$$JD\big(P(t,x), P(s,x)\big) = KL\big(P(t,x) \,\|\, P(s,x)\big) + KL\big(P(s,x) \,\|\, P(t,x)\big) = \int_{\mathbb{R}} \big( \log p(u;t,x) - \log p(u;s,x) \big) \big( p(u;t,x) - p(u;s,x) \big)\, du \qquad (55)$$

then we can obtain

$$\lim_{t-s \to 0} \frac{JD\big(P(t,x), P(s,x)\big)}{|t-s|^2} = \lim_{t-s \to 0} \int_{\mathbb{R}} \frac{\log p(u;t,x) - \log p(u;s,x)}{t - s} \cdot \frac{p(u;t,x) - p(u;s,x)}{t - s}\, du \qquad (56)$$

Notice that the limits

$$\lim_{t-s \to 0} \frac{\log p(u;t,x) - \log p(u;s,x)}{t-s} = \frac{\partial}{\partial t} \log p(u;t,x), \qquad \lim_{t-s \to 0} \frac{p(u;t,x) - p(u;s,x)}{t-s} = \frac{\partial}{\partial t} p(u;t,x) \qquad (57)$$

exist, and we obtain

$$\lim_{t-s \to 0} \frac{JD\big(P(t,x), P(s,x)\big)}{|t-s|^2} = \int_{\mathbb{R}} \frac{\partial}{\partial t} \log p(u;t,x) \cdot \frac{\partial}{\partial t} p(u;t,x)\, du = \int_{\mathbb{R}} \left( \frac{\partial}{\partial t} \log p(u;t,x) \right)^2 p(u;t,x)\, du = FI_1^{(t)}\big(X(t,x)\big) \qquad (58)$$

Similarly, for fixed t and $x_k \to x_k'$, we obtain the identity relating the Jeffreys divergence and the Fisher information for the space coordinates,

$$\lim_{x_k - x_k' \to 0} \frac{JD\big(P(t,x), P(t,\tilde{x}(k))\big)}{|x_k - x_k'|^2} = FI_1^{(x_k)}\big(X(t,x)\big) \qquad (59)$$

for $k = 1, 2, \ldots, d$. □
Theorem 2 states that, as the difference of the space–time variables approaches zero, the Fisher information of the space–time random field is the limit of the ratio of the Jeffreys divergence at different locations to the square of the space–time variable difference. It is noteworthy that Theorem 2 addresses the Jeffreys divergence only for a single space–time random field at distinct space–time positions, with the difference between the space–time variables approaching 0. This ensures that the Jeffreys divergence will not be infinite.
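Theorem 2 can be illustrated with the Brownian-sheet density treated in Section 4, which for d = 1 is $p(u; t, x) = N(0, t x)$: the ratio of the Jeffreys divergence to the squared time increment should approach $1/(2t^2)$. The point (t, x) and the increments below are arbitrary choices.

```python
import numpy as np  # imported for consistency with the other sketches

# Ratio JD(P(t,x), P(s,x)) / |t - s|^2 for the Brownian-sheet density
# N(0, t*x) in dimension d = 1; it should tend to 1/(2 t^2) as s -> t.
t, x = 2.0, 3.0

def jd(v1, v2):
    # Jeffreys divergence between N(0, v1) and N(0, v2)
    return v2 / (2*v1) + v1 / (2*v2) - 1

for ds in (1e-2, 1e-3, 1e-4):
    ratio = jd(t*x, (t - ds)*x) / ds**2
    print(ds, ratio)           # tends to 1/(2*t^2) = 0.125
```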
Theorem 3.
Suppose that $p(u;t,x)$ and $q(u;t,x)$ are continuously differentiable density functions of the space–time random fields X(t,x) and Y(t,x) such that

$$\begin{aligned} \lim_{u \to \pm\infty} \left[ \frac{1}{2} \frac{\partial}{\partial u} \Big( b_k^{(1)}(u;t,x)\, p(u;t,x) \Big) - a_k^{(1)}(u;t,x)\, p(u;t,x) \right] \left[ \log \frac{p(u;t,x)}{q(u;t,x)} - \frac{q(u;t,x)}{p(u;t,x)} \right] &= 0 \\ \lim_{u \to \pm\infty} \left[ \frac{1}{2} \frac{\partial}{\partial u} \Big( b_k^{(2)}(u;t,x)\, q(u;t,x) \Big) - a_k^{(2)}(u;t,x)\, q(u;t,x) \right] \left[ \log \frac{q(u;t,x)}{p(u;t,x)} - \frac{p(u;t,x)}{q(u;t,x)} \right] &= 0 \end{aligned} \qquad (60)$$

where $a_k^{(i)}$, $b_k^{(i)}$ ($i = 1, 2$) are the coefficients of the forms in (44) and (45) for X(t,x) and Y(t,x), respectively, $(t,x) \in \mathbb{R}_+ \times \mathbb{R}^d$, $k = 0, 1, 2, \ldots, d$. Then, the Jeffreys divergence $JD\big(P(t,x), Q(t,x)\big)$ satisfies the generalized De Bruijn identities

$$\begin{aligned} \frac{\partial}{\partial t} JD\big(P(t,x), Q(t,x)\big) &= -\frac{1}{2} FD_{(b_0^{(1)}, b_0^{(2)})}\big(P(t,x) \,\|\, Q(t,x)\big) - R_0\big(P(t,x) \,\|\, Q(t,x)\big) \\ \frac{\partial}{\partial x_k} JD\big(P(t,x), Q(t,x)\big) &= -\frac{1}{2} FD_{(b_k^{(1)}, b_k^{(2)})}\big(P(t,x) \,\|\, Q(t,x)\big) - R_k\big(P(t,x) \,\|\, Q(t,x)\big), \quad k = 1, 2, \ldots, d \end{aligned} \qquad (61)$$

where

$$\begin{aligned} R_0\big(P(t,x) \,\|\, Q(t,x)\big) &= \int_{\mathbb{R}} \left[ \frac{1}{2} \frac{\partial}{\partial u} \big( b_0^{(1)} - b_0^{(2)} \big) - \big( a_0^{(1)} - a_0^{(2)} \big) \right] \left( \frac{\partial}{\partial u} \log p - \frac{\partial}{\partial u} \log q \right) (p + q)\, du \\ R_k\big(P(t,x) \,\|\, Q(t,x)\big) &= \int_{\mathbb{R}} \left[ \frac{1}{2} \frac{\partial}{\partial u} \big( b_k^{(1)} - b_k^{(2)} \big) - \big( a_k^{(1)} - a_k^{(2)} \big) \right] \left( \frac{\partial}{\partial u} \log p - \frac{\partial}{\partial u} \log q \right) (p + q)\, du, \quad k = 1, 2, \ldots, d \end{aligned} \qquad (62)$$

Here, we omit the arguments $(u;t,x)$ in the integrals for convenience.
Proof. 
By Definition 2, we have

$$JD\big(P(t,x), Q(t,x)\big) = KL\big(P(t,x) \,\|\, Q(t,x)\big) + KL\big(Q(t,x) \,\|\, P(t,x)\big) = \int_{\mathbb{R}} \left( p \log \frac{p}{q} + q \log \frac{q}{p} \right) du \qquad (63)$$

where $p := p(u;t,x)$ and $q := q(u;t,x)$ are the density functions of X(t,x) and Y(t,x); here and below, we omit the arguments $(u;t,x)$ and write $\partial_u = \frac{\partial}{\partial u}$, $\partial_t = \frac{\partial}{\partial t}$.
Notice that

$$\partial_u \left( \frac{p}{q} \right) = \frac{1}{q} \left( \partial_u p - \frac{p}{q}\, \partial_u q \right), \qquad \partial_u \left( \frac{q}{p} \right) = \frac{1}{p} \left( \partial_u q - \frac{q}{p}\, \partial_u p \right) \qquad (64)$$

i.e.,

$$\partial_u \left( \frac{p}{q} \right) = \frac{p}{q} \big( \partial_u \log p - \partial_u \log q \big), \qquad \partial_u \left( \frac{q}{p} \right) = \frac{q}{p} \big( \partial_u \log q - \partial_u \log p \big) \qquad (65)$$

and

$$\big( \partial_u \log p - \partial_u \log q \big) (p + q) = q\, \partial_u \left( \frac{p}{q} \right) - p\, \partial_u \left( \frac{q}{p} \right) \qquad (66)$$

so that $\partial_u \big[ \log(p/q) - q/p \big] = \big( \partial_u \log p - \partial_u \log q \big)(p+q)/p$ and $\partial_u \big[ \log(q/p) - p/q \big] = \big( \partial_u \log q - \partial_u \log p \big)(p+q)/q$. Then, using $\int_{\mathbb{R}} \partial_t p\, du = \int_{\mathbb{R}} \partial_t q\, du = 0$, the Fokker–Planck equations of Theorem 1, the boundary conditions (60), and integration by parts,

$$\begin{aligned} \partial_t\, JD\big(P(t,x), Q(t,x)\big) &= \int_{\mathbb{R}} \left[ \partial_t p \log \frac{p}{q} + q\, \partial_t \left( \frac{p}{q} \right) + \partial_t q \log \frac{q}{p} + p\, \partial_t \left( \frac{q}{p} \right) \right] du \\ &= \int_{\mathbb{R}} \left[ \left( \log \frac{p}{q} - \frac{q}{p} \right) \partial_t p + \left( \log \frac{q}{p} - \frac{p}{q} \right) \partial_t q \right] du \\ &= \int_{\mathbb{R}} \left( \log \frac{p}{q} - \frac{q}{p} \right) \left[ \frac{1}{2} \partial_u^2 \big( b_0^{(1)} p \big) - \partial_u \big( a_0^{(1)} p \big) \right] du + \int_{\mathbb{R}} \left( \log \frac{q}{p} - \frac{p}{q} \right) \left[ \frac{1}{2} \partial_u^2 \big( b_0^{(2)} q \big) - \partial_u \big( a_0^{(2)} q \big) \right] du \\ &= -\int_{\mathbb{R}} \left[ \frac{1}{2} \partial_u \big( b_0^{(1)} p \big) - a_0^{(1)} p \right] \frac{1}{p} \big( \partial_u \log p - \partial_u \log q \big) (p + q)\, du + \int_{\mathbb{R}} \left[ \frac{1}{2} \partial_u \big( b_0^{(2)} q \big) - a_0^{(2)} q \right] \frac{1}{q} \big( \partial_u \log p - \partial_u \log q \big) (p + q)\, du \\ &= -\int_{\mathbb{R}} \left[ \frac{1}{2} b_0^{(1)} \partial_u \log p - \frac{1}{2} b_0^{(2)} \partial_u \log q + \frac{1}{2} \partial_u \big( b_0^{(1)} - b_0^{(2)} \big) - \big( a_0^{(1)} - a_0^{(2)} \big) \right] \big( \partial_u \log p - \partial_u \log q \big) (p + q)\, du \\ &= -\frac{1}{2} FD_{(b_0^{(1)}, b_0^{(2)})}\big(P(t,x) \,\|\, Q(t,x)\big) - R_0\big(P(t,x) \,\|\, Q(t,x)\big) \end{aligned} \qquad (67)$$

where

$$FD_{(b_0^{(1)}, b_0^{(2)})}\big(P(t,x) \,\|\, Q(t,x)\big) = \int_{\mathbb{R}} \big( b_0^{(1)} \partial_u \log p - b_0^{(2)} \partial_u \log q \big) \big( \partial_u \log p - \partial_u \log q \big) (p + q)\, du \qquad (68)$$

and

$$R_0\big(P(t,x) \,\|\, Q(t,x)\big) = \int_{\mathbb{R}} \left[ \frac{1}{2} \partial_u \big( b_0^{(1)} - b_0^{(2)} \big) - \big( a_0^{(1)} - a_0^{(2)} \big) \right] \big( \partial_u \log p - \partial_u \log q \big) (p + q)\, du \qquad (69)$$

Similarly, for $k = 1, 2, \ldots, d$, we can obtain the generalized De Bruijn identities for the spatial variables $x_k$,

$$\frac{\partial}{\partial x_k} JD\big(P(t,x), Q(t,x)\big) = -\frac{1}{2} FD_{(b_k^{(1)}, b_k^{(2)})}\big(P(t,x) \,\|\, Q(t,x)\big) - R_k\big(P(t,x) \,\|\, Q(t,x)\big) \qquad (70)$$

where

$$R_k\big(P(t,x) \,\|\, Q(t,x)\big) = \int_{\mathbb{R}} \left[ \frac{1}{2} \partial_u \big( b_k^{(1)} - b_k^{(2)} \big) - \big( a_k^{(1)} - a_k^{(2)} \big) \right] \big( \partial_u \log p - \partial_u \log q \big) (p + q)\, du \qquad (71)$$

which completes the proof. □
Unlike Theorem 2, Theorem 3 focuses on the Jeffreys divergence between two separate space–time random fields X(t,x) and Y(t,x), both at the same position (t,x), and establishes the identities connecting the Jeffreys divergence and the Fisher divergence of X(t,x) and Y(t,x); these are the De Bruijn identities. To prevent the Jeffreys divergence from becoming infinite, the difference between the probability density functions of X(t,x) and Y(t,x) must be small. In Section 4, we obtain the Jeffreys divergence and the Fisher divergence using the same type of Fokker–Planck equations but with different parameters, which allows appropriate parameters to be selected.
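The first identity of Theorem 3 can be checked numerically in the simplest constant-coefficient case, where the Fokker–Planck equations have explicit Gaussian solutions. The drift and diffusion constants below are illustrative, and the remainder term simplifies because the diffusion coefficients do not depend on u.

```python
import numpy as np

# Check of d/dt JD = -1/2 FD - R_0 for p and q solving
# d/dt p = (b/2) p_uu - a p_u with constants (a1, b1) and (a2, b2),
# so p(.;t) = N(a1*t, b1*t) and q(.;t) = N(a2*t, b2*t).
# All parameter values are illustrative.
a1, b1, a2, b2, t = 1.0, 1.0, 0.0, 2.0, 1.0
u = np.linspace(-25, 25, 200_001)
dx = u[1] - u[0]
I = lambda f: f.sum() * dx

def dens(a, b, tt):
    v = b * tt
    return np.exp(-(u - a*tt)**2 / (2*v)) / np.sqrt(2*np.pi*v)

def jd(tt):
    p, q = dens(a1, b1, tt), dens(a2, b2, tt)
    return I((p - q) * np.log(p / q))

eps = 1e-4
lhs = (jd(t + eps) - jd(t - eps)) / (2 * eps)   # d/dt JD by central diff

p, q = dens(a1, b1, t), dens(a2, b2, t)
sp = -(u - a1*t) / (b1*t)          # d/du log p
sq = -(u - a2*t) / (b2*t)          # d/du log q
fd = I((b1*sp - b2*sq) * (sp - sq) * (p + q))
r0 = I(-(a1 - a2) * (sp - sq) * (p + q))  # d/du(b1 - b2) = 0 here
rhs = -0.5 * fd - r0
print(lhs, rhs)  # the two values should agree
```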

4. Three Fokker–Planck Random Fields and Their Corresponding Information Measures

In this section, we present three types of Fokker–Planck equations and derive their corresponding density functions and information measures, namely the Jeffreys divergence, generalized Fisher information, and Fisher divergence. With these quantities, the results corresponding to applications of Theorems 2 and 3 are obtained. On the one hand, we calculate the ratio of the Jeffreys divergence to the square of the space–time variation for the same Fokker–Planck space–time random field at various space–time points and compare it with the generalized Fisher information. On the other hand, we derive the De Bruijn identities for the Jeffreys divergence and the generalized Fisher divergence of two fields governed by Fokker–Planck equations of the same type but with different parameters, at the same space–time location.
First, we present a theorem regarding simple type Fokker–Planck equations of the random field.
Theorem 4.
Suppose the functions in the Fokker–Planck Equations (43) for the continuous random field X(t,x) are of the form

$$a_0(u;t,x) = a_0(t,x), \quad b_0(u;t,x) = b_0(t,x) > 0, \quad a_k(u;t,x) = a_k(t,x), \quad b_k(u;t,x) = b_k(t,x) > 0, \quad k = 1, 2, \ldots, d \qquad (72)$$

where $a_0$, $a_k$, $b_0$, and $b_k$ are continuously differentiable functions independent of u, two continuously differentiable functions $\alpha(t,x)$ and $\beta(t,x)$ exist such that

$$d\alpha(t,x) = a_0\, dt + a_1\, dx_1 + \cdots + a_d\, dx_d, \qquad d\beta(t,x) = b_0\, dt + b_1\, dx_1 + \cdots + b_d\, dx_d \qquad (73)$$

and the initial density function is $p(u;t,x) = \delta\big( u - u_0(x) \big)$ when $\mathrm{prod}(t,x) = 0$, where $\mathrm{prod}(t,x) = t x_1 x_2 \cdots x_d$. Then, the density function of X(t,x) is

$$p(u;t,x) = \frac{1}{\sqrt{2\pi \beta(t,x)}} \exp\left( -\frac{\big( u - u_0(x) - \alpha(t,x) \big)^2}{2 \beta(t,x)} \right) \qquad (74)$$
Proof. 
It can be easily inferred that the Fokker–Planck equations are simple parabolic equations, and their solutions can be obtained through the Fourier transform:

$$\begin{aligned} p(u;t,x) &= \frac{1}{\sqrt{2\pi \int_0^t b_0(s,x)\, ds}} \exp\left( -\frac{\Big( u - u_0(x) - \int_0^t a_0(s,x)\, ds \Big)^2}{2 \int_0^t b_0(s,x)\, ds} \right) \\ p(u;t,x) &= \frac{1}{\sqrt{2\pi \int_0^{x_k} b_k(t,x)\, dx_k'}} \exp\left( -\frac{\Big( u - u_0(x) - \int_0^{x_k} a_k(t,x)\, dx_k' \Big)^2}{2 \int_0^{x_k} b_k(t,x)\, dx_k'} \right) \end{aligned} \qquad (75)$$

Recalling that there are two functions $\alpha(t,x)$ and $\beta(t,x)$ such that

$$d\alpha(t,x) = a_0(t,x)\, dt + a_1(t,x)\, dx_1 + \cdots + a_d(t,x)\, dx_d, \qquad d\beta(t,x) = b_0(t,x)\, dt + b_1(t,x)\, dx_1 + \cdots + b_d(t,x)\, dx_d \qquad (76)$$

we obtain the probability density function

$$p(u;t,x) = \frac{1}{\sqrt{2\pi \beta(t,x)}} \exp\left( -\frac{\big( u - u_0(x) - \alpha(t,x) \big)^2}{2 \beta(t,x)} \right) \qquad (77)$$

□
Actually, numerous examples exist in which the Fokker–Planck equations comply with Theorem 4. Let B(t,x) be the (1+d, 1) Brownian sheet [52,53], that is, a centered continuous Gaussian random field indexed by (1+d) real, positive parameters and taking values in $\mathbb{R}$, whose covariance structure is given by

$$E\big[ B(t,x) B(s,y) \big] = (t \wedge s) \prod_{k=1}^{d} (x_k \wedge y_k) \qquad (78)$$

for $(t, x_1, x_2, \ldots, x_d), (s, y_1, y_2, \ldots, y_d) \in \mathbb{R}_+ \times \mathbb{R}_+^d$, where $\cdot \wedge \cdot$ denotes the minimum of two numbers. We can easily obtain

$$E\big[ B^2(t,x) \big] = \mathrm{prod}(t,x) \qquad (79)$$

where $\mathrm{prod}(t,x) = t x_1 x_2 \cdots x_d$ is the coordinate product of $(t,x)$, and the density function is

$$p^{(1)}(u;t,x) = \frac{1}{\sqrt{2\pi\, \mathrm{prod}(t,x)}} \exp\left( -\frac{u^2}{2\, \mathrm{prod}(t,x)} \right) \qquad (80)$$

Moreover, the Fokker–Planck equations of the Brownian sheet are

$$\begin{aligned} \frac{\partial}{\partial t} p^{(1)}(u;t,x) &= \frac{\mathrm{prod}(x)}{2} \frac{\partial^2}{\partial u^2} p^{(1)}(u;t,x) \\ \frac{\partial}{\partial x_k} p^{(1)}(u;t,x) &= \frac{\mathrm{prod}(t,x)}{2 x_k} \frac{\partial^2}{\partial u^2} p^{(1)}(u;t,x), \quad k = 1, 2, \ldots, d \end{aligned} \qquad (81)$$

with the initial condition $p(u;t,x) = \delta(u)$ when $\mathrm{prod}(t,x) = 0$, where $\mathrm{prod}(x) = x_1 x_2 \cdots x_d$.
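Because the Brownian-sheet density is an explicit Gaussian, its time Fokker–Planck equation $\partial_t p = \frac{\mathrm{prod}(x)}{2} \partial_u^2 p$ can be verified pointwise by finite differences. The dimension, the point (t, x), and the evaluation point u below are arbitrary choices.

```python
import numpy as np

# Finite-difference check that the Brownian-sheet density N(0, t*prod(x))
# satisfies d/dt p = (prod(x)/2) d^2/du^2 p; d = 2 and the evaluation
# point are illustrative.
t, x = 1.5, np.array([2.0, 0.8])
prod_x = np.prod(x)

def p(u, t):
    v = t * prod_x
    return np.exp(-u**2 / (2*v)) / np.sqrt(2*np.pi*v)

u0, eps = 0.7, 1e-4
dp_dt = (p(u0, t + eps) - p(u0, t - eps)) / (2*eps)
d2p_du2 = (p(u0 + eps, t) - 2*p(u0, t) + p(u0 - eps, t)) / eps**2

print(dp_dt, 0.5 * prod_x * d2p_du2)  # the two values should agree
```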
Following the concept of constructing a Brownian bridge on Brownian motion [53], we refer to
$$
\widetilde{B}(t,x)=B(t,x)-\mathrm{prod}(t,x)\,B(1,1,\ldots,1)
$$
as a Brownian sheet bridge on the cube $(t,x)\in[0,1]\times[0,1]^d$, where $B(t,x)$ represents the Brownian sheet. Obviously, $\widetilde{B}(t,x)$ is Gaussian with $\mathbb{E}\big[\widetilde{B}(t,x)\big]=0$, and the covariance structure is
$$
\mathbb{E}\big[\widetilde{B}(t,x)\widetilde{B}(s,y)\big]=\mathbb{E}\big[B(t,x)B(s,y)\big]-\mathrm{prod}(t,x)\,\mathrm{prod}(s,y)
$$
we can obtain
$$
\mathbb{E}\big[\widetilde{B}^2(t,x)\big]=\mathrm{prod}(t,x)\big(1-\mathrm{prod}(t,x)\big)
$$
and the density function of $\widetilde{B}(t,x)$ is
$$
p^{(2)}(u;t,x)=\frac{1}{\sqrt{2\pi\,\mathrm{prod}(t,x)\big(1-\mathrm{prod}(t,x)\big)}}\exp\left(-\frac{u^2}{2\,\mathrm{prod}(t,x)\big(1-\mathrm{prod}(t,x)\big)}\right)
$$
In addition to this, the Fokker–Planck equations of Brownian sheet bridge are
$$
\frac{\partial}{\partial t}p^{(2)}(u;t,x)=\frac{\mathrm{prod}(x)\big(1-2\,\mathrm{prod}(t,x)\big)}{2}\frac{\partial^2}{\partial u^2}p^{(2)}(u;t,x)
$$
$$
\frac{\partial}{\partial x_k}p^{(2)}(u;t,x)=\frac{\mathrm{prod}(t,x)}{2x_k}\big(1-2\,\mathrm{prod}(t,x)\big)\frac{\partial^2}{\partial u^2}p^{(2)}(u;t,x),\qquad k=1,2,\ldots,d
$$
with the initial condition p ( u ; t , x ) = δ ( u ) as prod ( t , x ) = 0 , and we obtain the solution (85).
Combining the two probability density functions (80) and (85) yields their respective Jeffreys divergences and generalized De Bruijn identities. The Jeffreys divergence of (74) can be obtained at various space–time points as
$$
JD\big(P(t,x),P(s,y)\big)=\frac{\big(\alpha(t,x)-\alpha(s,y)\big)^2+\beta(s,y)}{2\beta(t,x)}+\frac{\big(\alpha(t,x)-\alpha(s,y)\big)^2+\beta(t,x)}{2\beta(s,y)}-1
$$
and the Fisher divergence between $P^{(1)}$ and $P^{(2)}$ at the identical space–time point is
$$
\begin{aligned}
FD_{b_k^{(1)},b_k^{(2)}}\big(P^{(1)}(t,x)\,\big\|\,P^{(2)}(t,x)\big)=\frac{1}{\beta_1^2(t,x)\beta_2^2(t,x)}\Big\{&\big(\alpha_1(t,x)-\alpha_2(t,x)\big)^2\Big(b_k^{(2)}\beta_1^2(t,x)+b_k^{(1)}\beta_2^2(t,x)\Big)\\
&+\big(\beta_1(t,x)-\beta_2(t,x)\big)\big(\beta_1(t,x)+\beta_2(t,x)\big)\Big(b_k^{(2)}\beta_1(t,x)-b_k^{(1)}\beta_2(t,x)\Big)\Big\}
\end{aligned}
$$
where k = 0 , 1 , , d .
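The closed form (87) for Gaussian densities can be verified directly against the definition $JD(p,q)=\int(p-q)\log(p/q)\,\mathrm{d}u$ by numerical quadrature. The parameter values below are arbitrary illustrations, not taken from the paper.

```python
import math

# Jeffreys divergence between N(alpha1, beta1) and N(alpha2, beta2).
a1, b1 = 0.2, 0.5
a2, b2 = -0.3, 0.9

def gauss(u, m, v):
    return math.exp(-(u - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

# closed form: JD = ((Δα)² + β2)/(2β1) + ((Δα)² + β1)/(2β2) − 1
d2 = (a1 - a2) ** 2
jd_closed = (d2 + b2) / (2 * b1) + (d2 + b1) / (2 * b2) - 1

# direct definition: ∫ (p − q) log(p/q) du by the trapezoidal rule
lo, hi, n = -12.0, 12.0, 240_000
h = (hi - lo) / n
jd_num = 0.0
for i in range(n + 1):
    u = lo + i * h
    p_, q_ = gauss(u, a1, b1), gauss(u, a2, b2)
    f = (p_ - q_) * (math.log(p_) - math.log(q_))
    jd_num += f * (h if 0 < i < n else h / 2)
print(jd_closed, jd_num)   # both ≈ 0.5667
```

The two computations agree to high precision, confirming the closed form for the Gaussian case.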
Substituting the density function of the Brownian sheet into Equation (87), we can easily obtain the Jeffreys divergence of the Brownian sheet at various space–time points as
$$
JD\big(P^{(1)}(t,x),P^{(1)}(s,y)\big)=\frac{\mathrm{prod}(s,y)}{2\,\mathrm{prod}(t,x)}+\frac{\mathrm{prod}(t,x)}{2\,\mathrm{prod}(s,y)}-1
$$
and the generalized Fisher information on space–time variables is as follows:
$$
FI_1^{(t)}\big(P^{(1)}(t,x)\big)=\frac{1}{2t^2},\qquad FI_1^{(x_k)}\big(P^{(1)}(t,x)\big)=\frac{1}{2x_k^2}
$$
k = 1 , 2 , , d .
Then, we can obtain the quotients of the Jeffreys divergence over the squared space–time coordinate differences
$$
\frac{JD\big(P^{(1)}(t,x),P^{(1)}(s,x)\big)}{|t-s|^2}=\frac{1}{2st},\qquad \frac{JD\big(P^{(1)}(t,x),P^{(1)}(t,\tilde{x}^{(k)})\big)}{|x_k-x_k'|^2}=\frac{1}{2x_kx_k'}
$$
where $\tilde{x}^{(k)}$ denotes $x$ with its $k$-th coordinate $x_k$ replaced by $x_k'$,
and then we can obtain the relation between quotients and generalized Fisher information
$$
\frac{JD\big(P^{(1)}(t,x),P^{(1)}(s,x)\big)}{|t-s|^2\,FI_1^{(t)}\big(P^{(1)}(t,x)\big)}=\frac{t}{s},\qquad \frac{JD\big(P^{(1)}(t,x),P^{(1)}(t,\tilde{x}^{(k)})\big)}{|x_k-x_k'|^2\,FI_1^{(x_k)}\big(P^{(1)}(t,x)\big)}=\frac{x_k}{x_k'}
$$
for $k=1,2,\ldots,d$. If we let the space–time point $(s,y)$ approach $(t,x)$, the final result (92) satisfies the conclusion of Theorem 2.
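The identities (91) and (92) for the Brownian sheet are exact algebraic relations, so they can be verified deterministically; the point values below are arbitrary.

```python
# Check: JD/(t−s)² = 1/(2st), and dividing further by FI = 1/(2t²) gives t/s.
def prod(t, x):
    out = t
    for xk in x:
        out *= xk
    return out

def jd_sheet(t, x, s, y):
    pt, ps = prod(t, x), prod(s, y)
    return ps / (2 * pt) + pt / (2 * ps) - 1

t, s, x = 0.9, 0.5, (0.7, 1.2)
quot = jd_sheet(t, x, s, x) / (t - s) ** 2
fi_t = 1 / (2 * t ** 2)
print(quot, 1 / (2 * s * t))   # equal
print(quot / fi_t, t / s)      # equal
```

Both equalities hold up to floating-point rounding, without any limit being taken, since the sheet's variance is linear in $t$.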
Similarly, we can obtain the Jeffreys divergence of Brownian sheet bridge at different space–time points
$$
JD\big(P^{(2)}(t,x),P^{(2)}(s,y)\big)=\frac{\mathrm{prod}(s,y)\big(1-\mathrm{prod}(s,y)\big)}{2\,\mathrm{prod}(t,x)\big(1-\mathrm{prod}(t,x)\big)}+\frac{\mathrm{prod}(t,x)\big(1-\mathrm{prod}(t,x)\big)}{2\,\mathrm{prod}(s,y)\big(1-\mathrm{prod}(s,y)\big)}-1
$$
and the generalized Fisher information on space–time variables
$$
FI_1^{(t)}\big(P^{(2)}(t,x)\big)=\frac{\big(1-2\,\mathrm{prod}(t,x)\big)^2}{2t^2\big(1-\mathrm{prod}(t,x)\big)^2},\qquad FI_1^{(x_k)}\big(P^{(2)}(t,x)\big)=\frac{\big(1-2\,\mathrm{prod}(t,x)\big)^2}{2x_k^2\big(1-\mathrm{prod}(t,x)\big)^2}
$$
$k=1,2,\ldots,d$. Further, we can easily obtain the quotients of the Jeffreys divergence over the squared space–time coordinate differences
$$
\frac{JD\big(P^{(2)}(t,x),P^{(2)}(s,x)\big)}{|t-s|^2}=\frac{\big(1-\mathrm{prod}(x)(s+t)\big)^2}{2st\big(1-\mathrm{prod}(s,x)\big)\big(1-\mathrm{prod}(t,x)\big)}
$$
$$
\frac{JD\big(P^{(2)}(t,x),P^{(2)}(t,\tilde{x}^{(k)})\big)}{|x_k-x_k'|^2}=\frac{\Big(1-\frac{\mathrm{prod}(t,x)}{x_k}(x_k+x_k')\Big)^2}{2x_kx_k'\big(1-\mathrm{prod}(t,x)\big)\big(1-\mathrm{prod}(t,\tilde{x}^{(k)})\big)}
$$
and then we can obtain the relation between quotients and generalized Fisher information
$$
\frac{JD\big(P^{(2)}(t,x),P^{(2)}(s,x)\big)}{|t-s|^2\,FI_1^{(t)}\big(P^{(2)}(t,x)\big)}=\frac{t\big(1-\mathrm{prod}(t,x)\big)\big(1-\mathrm{prod}(x)(s+t)\big)^2}{s\big(1-\mathrm{prod}(s,x)\big)\big(1-2\,\mathrm{prod}(t,x)\big)^2}
$$
$$
\frac{JD\big(P^{(2)}(t,x),P^{(2)}(t,\tilde{x}^{(k)})\big)}{|x_k-x_k'|^2\,FI_1^{(x_k)}\big(P^{(2)}(t,x)\big)}=\frac{x_k\big(1-\mathrm{prod}(t,x)\big)\Big(1-\frac{\mathrm{prod}(t,x)}{x_k}(x_k+x_k')\Big)^2}{x_k'\big(1-\mathrm{prod}(t,\tilde{x}^{(k)})\big)\big(1-2\,\mathrm{prod}(t,x)\big)^2}
$$
for k = 1 , 2 , , d . Without loss of generality, the result (96) also satisfies Theorem 2.
Next, we evaluate the Jeffreys divergence between the density functions (80) and (85) at the same space–time points. It should be noted that the Brownian sheet bridge density function is defined on a bounded domain; therefore, we limit our analysis to the space–time region $(t,x)\in[0,1]\times[0,1]^d$.
The Jeffreys divergence between P ( 1 ) and P ( 2 ) can be easily obtained as
$$
JD\big(P^{(1)}(t,x),P^{(2)}(t,x)\big)=\frac{1-\mathrm{prod}(t,x)}{2}+\frac{1}{2\big(1-\mathrm{prod}(t,x)\big)}-1
$$
and the Fisher divergence as shown in (88) is given by
$$
FD_{b_0^{(1)},b_0^{(2)}}\big(P^{(1)}(t,x)\,\big\|\,P^{(2)}(t,x)\big)=\mathrm{prod}(x)-\frac{\mathrm{prod}(x)}{\big(1-\mathrm{prod}(t,x)\big)^2}
$$
$$
FD_{b_k^{(1)},b_k^{(2)}}\big(P^{(1)}(t,x)\,\big\|\,P^{(2)}(t,x)\big)=\frac{\mathrm{prod}(t,x)}{x_k}-\frac{\mathrm{prod}(t,x)}{x_k\big(1-\mathrm{prod}(t,x)\big)^2}
$$
with the remainder terms
$$
R_0\big(P^{(1)}(t,x)\,\big\|\,P^{(2)}(t,x)\big)=R_k\big(P^{(1)}(t,x)\,\big\|\,P^{(2)}(t,x)\big)=0
$$
for k = 1 , 2 , , d . Furthermore, we can obtain the generalized De Bruijn identities
$$
\frac{\partial}{\partial t}JD\big(P^{(1)}(t,x),P^{(2)}(t,x)\big)=-\frac{1}{2}FD_{b_0^{(1)},b_0^{(2)}}\big(P^{(1)}(t,x)\,\big\|\,P^{(2)}(t,x)\big)
$$
$$
\frac{\partial}{\partial x_k}JD\big(P^{(1)}(t,x),P^{(2)}(t,x)\big)=-\frac{1}{2}FD_{b_k^{(1)},b_k^{(2)}}\big(P^{(1)}(t,x)\,\big\|\,P^{(2)}(t,x)\big),\qquad k=1,2,\ldots,d
$$
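As an illustrative check of the De Bruijn identity between the sheet and the bridge, the time derivative of the closed-form Jeffreys divergence can be compared with the Fisher divergence expression as reconstructed above (for $d=1$; the test point $x=0.5$, $t=0.4$ is arbitrary).

```python
# Finite-difference check: d/dt JD(P1, P2) = −(1/2) FD for the b0 coefficients.
x1 = 0.5

def jd_mix(t):
    P = t * x1                      # prod(t, x)
    return (1 - P) / 2 + 1 / (2 * (1 - P)) - 1

def fd_time(t):
    P = t * x1
    return x1 - x1 / (1 - P) ** 2   # reconstructed FD for the b0 coefficients

t0, h = 0.4, 1e-5
djd = (jd_mix(t0 + h) - jd_mix(t0 - h)) / (2 * h)
print(djd, -0.5 * fd_time(t0))      # both ≈ 0.140625
```

Since the remainder terms vanish here, the identity holds without correction terms.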
Next, we present two categories of significant Fokker–Planck equations and provide pertinent illustrations for computing the Jeffreys divergence, Fisher information, and Fisher divergence.
Theorem 5.
Suppose the functions in the Fokker–Planck Equations (43) for the continuous random field X ( t , x ) are formulated as follows:
$$
a_k(u;t,x)\equiv 0,\qquad b_k(u;t,x)=b_k(t,x)u^2>0,\qquad k=0,1,2,\ldots,d
$$
where b k are continuously differentiable functions independent of u and a continuously differentiable function β ( t , x ) exists, such that
$$
\mathrm{d}\beta(t,x)=b_0(t,x)\,\mathrm{d}t+b_1(t,x)\,\mathrm{d}x_1+\cdots+b_d(t,x)\,\mathrm{d}x_d
$$
the initial value is $X(t,x)=1$ when $\mathrm{prod}(t,x)=0$, and the initial density function is $p(u;t,x)=\delta(u-1)$ when $\mathrm{prod}(t,x)=0$. Then, the density function is
$$
p(u;t,x)=\frac{e^{\beta(t,x)}}{\sqrt{2\pi\beta(t,x)}}\exp\left(-\frac{\big(\log u+\frac{3}{2}\beta(t,x)\big)^2}{2\beta(t,x)}\right)
$$
Proof. 
Depending on the conditions, it is easy to obtain the Fokker–Planck equations as
$$
\frac{\partial}{\partial t}p(u;t,x)=\frac{b_0(t,x)u^2}{2}\frac{\partial^2 p(u;t,x)}{\partial u^2}+2b_0(t,x)u\frac{\partial p(u;t,x)}{\partial u}+b_0(t,x)p(u;t,x)
$$
$$
\frac{\partial}{\partial x_k}p(u;t,x)=\frac{b_k(t,x)u^2}{2}\frac{\partial^2 p(u;t,x)}{\partial u^2}+2b_k(t,x)u\frac{\partial p(u;t,x)}{\partial u}+b_k(t,x)p(u;t,x),\qquad k=1,2,\ldots,d
$$
Taking the transformation $v=\log u$ (i.e., $u=e^v$) and writing $\tilde p(v;t,x)=p(u(v);t,x)$, we can obtain
$$
\frac{\partial}{\partial t}\tilde p(v;t,x)=\frac{b_0(t,x)}{2}\frac{\partial^2}{\partial v^2}\tilde p(v;t,x)+\frac{3b_0(t,x)}{2}\frac{\partial}{\partial v}\tilde p(v;t,x)+b_0(t,x)\tilde p(v;t,x)
$$
$$
\frac{\partial}{\partial x_k}\tilde p(v;t,x)=\frac{b_k(t,x)}{2}\frac{\partial^2}{\partial v^2}\tilde p(v;t,x)+\frac{3b_k(t,x)}{2}\frac{\partial}{\partial v}\tilde p(v;t,x)+b_k(t,x)\tilde p(v;t,x),\qquad k=1,2,\ldots,d
$$
with the solution
$$
\tilde p(v;t,x)=\frac{e^{\int_0^t b_0(s,x)\,\mathrm{d}s}}{\sqrt{2\pi\int_0^t b_0(s,x)\,\mathrm{d}s}}\exp\left(-\frac{\big(v+\frac{3}{2}\int_0^t b_0(s,x)\,\mathrm{d}s\big)^2}{2\int_0^t b_0(s,x)\,\mathrm{d}s}\right)
$$
$$
\tilde p(v;t,x)=\frac{e^{\int_0^{x_k} b_k(t,x)\,\mathrm{d}x_k}}{\sqrt{2\pi\int_0^{x_k} b_k(t,x)\,\mathrm{d}x_k}}\exp\left(-\frac{\big(v+\frac{3}{2}\int_0^{x_k} b_k(t,x)\,\mathrm{d}x_k\big)^2}{2\int_0^{x_k} b_k(t,x)\,\mathrm{d}x_k}\right),\qquad k=1,2,\ldots,d
$$
then,
$$
p(u;t,x)=\frac{e^{\int_0^t b_0(s,x)\,\mathrm{d}s}}{\sqrt{2\pi\int_0^t b_0(s,x)\,\mathrm{d}s}}\exp\left(-\frac{\big(\log u+\frac{3}{2}\int_0^t b_0(s,x)\,\mathrm{d}s\big)^2}{2\int_0^t b_0(s,x)\,\mathrm{d}s}\right)
$$
$$
p(u;t,x)=\frac{e^{\int_0^{x_k} b_k(t,x)\,\mathrm{d}x_k}}{\sqrt{2\pi\int_0^{x_k} b_k(t,x)\,\mathrm{d}x_k}}\exp\left(-\frac{\big(\log u+\frac{3}{2}\int_0^{x_k} b_k(t,x)\,\mathrm{d}x_k\big)^2}{2\int_0^{x_k} b_k(t,x)\,\mathrm{d}x_k}\right),\qquad k=1,2,\ldots,d
$$
Recall that a continuously differentiable function $\beta(t,x)$ exists such that
$$
\mathrm{d}\beta(t,x)=b_0(t,x)\,\mathrm{d}t+b_1(t,x)\,\mathrm{d}x_1+\cdots+b_d(t,x)\,\mathrm{d}x_d
$$
this enables the derivation of the probability density
$$
p(u;t,x)=\frac{e^{\beta(t,x)}}{\sqrt{2\pi\beta(t,x)}}\exp\left(-\frac{\big(\log u+\frac{3}{2}\beta(t,x)\big)^2}{2\beta(t,x)}\right)
$$
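As an illustrative numerical check of the density (103) (with the arbitrary value $\beta=0.6$, not taken from the paper), quadrature confirms that it is properly normalized and has unit mean, which is consistent with the initial value $X=1$: a short calculation shows $\log U\sim\mathcal N(-\beta/2,\beta)$, so $\mathbb{E}[U]=e^{-\beta/2+\beta/2}=1$.

```python
import math

beta = 0.6  # arbitrary illustrative value

def p(u):
    return (math.exp(beta) / math.sqrt(2 * math.pi * beta)
            * math.exp(-(math.log(u) + 1.5 * beta) ** 2 / (2 * beta)))

lo, hi, n = 1e-9, 60.0, 300_000
h = (hi - lo) / n
mass = mean = 0.0
for i in range(n + 1):
    u = lo + i * h
    w = h if 0 < i < n else h / 2   # trapezoidal weights
    mass += p(u) * w
    mean += u * p(u) * w
print(mass, mean)   # both ≈ 1
```

The prefactor $e^{\beta}$ is exactly what is needed to compensate for the Jacobian of the logarithmic substitution, so no extra $1/u$ factor appears.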
Remark 1.
In the stochastic process theory, a correlation exists between the Fokker–Planck equation and the Itô process. Specifically, if the Itô process is
$$
\mathrm{d}X_t=\mu(X_t,t)\,\mathrm{d}t+\sigma(X_t,t)\,\mathrm{d}B_t
$$
then the corresponding Fokker–Planck equation can be obtained as
$$
\frac{\partial}{\partial t}p(u,t)=\frac{1}{2}\frac{\partial^2}{\partial u^2}\big[\sigma^2(u,t)p(u,t)\big]-\frac{\partial}{\partial u}\big[\mu(u,t)p(u,t)\big]
$$
where $\mu$ and $\sigma$ represent the drift and diffusion coefficients and $B_t$ is standard Brownian motion; equivalently,
$$
\frac{\mathrm{d}X_t}{\mathrm{d}t}=\mu(X_t,t)+\sigma(X_t,t)\dot W_t
$$
where $\dot W_t=\mathrm{d}B_t/\mathrm{d}t$ represents white noise. Actually, if we consider the Itô processes corresponding to the Fokker–Planck equations from Theorem 5, we can obtain
$$
\frac{\partial}{\partial t}X(t,x)=\sqrt{b_0(t,x)}\,X(t,x)\dot W_t,\qquad \frac{\partial}{\partial x_k}X(t,x)=\sqrt{b_k(t,x)}\,X(t,x)\dot W_k
$$
where $\dot W_k$ represents the space white noise with respect to $x_k$, $k=1,2,\ldots,d$. Further, we can also write Equation (113) in vector form
$$
\nabla X(t,x)=\sqrt{\gamma(t,x)}\odot X(t,x)\,W(t,x)
$$
where
$$
\gamma(t,x)=\big(b_0(t,x),b_1(t,x),\ldots,b_d(t,x)\big),\qquad W(t,x)=\big(\dot W_t,\dot W_1,\ldots,\dot W_d\big)
$$
where $\nabla$ represents the gradient operator and $\odot$ represents element-by-element multiplication. Notice that each equation in Equation (113) is similar in form to geometric Brownian motion in the theory of stochastic processes. Accordingly, we call the space–time random field that satisfies Equation (113) a geometric Brownian field.
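The geometric-Brownian interpretation along the time axis can be checked by simulation. The sketch below (with the arbitrary values $b_0=0.5$, $t=1$, not from the paper) uses the standard closed-form solution of geometric Brownian motion, $X_t=\exp(\sqrt{b_0}\,W_t-b_0t/2)$, so that $\log X_t\sim\mathcal N(-\beta/2,\beta)$ with $\beta=b_0t$, and $\mathbb{E}[X_t]=1$, matching the density of Theorem 5.

```python
import math
import random

random.seed(1)
b0, t = 0.5, 1.0   # arbitrary illustrative values
beta = b0 * t

n = 100_000
# exact GBM solution: log X_t = sqrt(b0*t) * Z - b0*t/2 with Z ~ N(0, 1)
logs = [random.gauss(0.0, 1.0) * math.sqrt(b0 * t) - 0.5 * b0 * t for _ in range(n)]
mean_x = sum(math.exp(v) for v in logs) / n
mean_log = sum(logs) / n
var_log = sum((v - mean_log) ** 2 for v in logs) / n
print(mean_x, mean_log, var_log)   # ≈ 1, −0.25, 0.5
```

The sample moments of $\log X_t$ agree with $\mathcal N(-\beta/2,\beta)$ up to Monte Carlo error.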
If we consider different β 3 ( t , x ) and β 4 ( t , x ) in density function (103), we can obtain density functions p ( 3 ) ( u ; t , x ) and p ( 4 ) ( u ; t , x ) ; then, we can obtain the Jeffreys divergence
$$
JD\big(P^{(3)}(t,x),P^{(3)}(s,y)\big)=\frac{\beta_3(t,x)+\beta_3(s,y)+4}{8\beta_3(t,x)\beta_3(s,y)}\big(\beta_3(t,x)-\beta_3(s,y)\big)^2
$$
and generalized Fisher information
$$
FI_1^{(t)}\big(P^{(3)}(t,x)\big)=\frac{\beta_3(t,x)+2}{4\beta_3^2(t,x)}\big(b_0^{(3)}(t,x)\big)^2,\qquad FI_1^{(x_k)}\big(P^{(3)}(t,x)\big)=\frac{\beta_3(t,x)+2}{4\beta_3^2(t,x)}\big(b_k^{(3)}(t,x)\big)^2
$$
and then the quotients
$$
\frac{JD\big(P^{(3)}(t,x),P^{(3)}(s,x)\big)}{|t-s|^2}=\frac{\beta_3(t,x)+\beta_3(s,x)+4}{8\beta_3(t,x)\beta_3(s,x)}\left(\frac{\beta_3(t,x)-\beta_3(s,x)}{t-s}\right)^2
$$
$$
\frac{JD\big(P^{(3)}(t,x),P^{(3)}(t,\tilde{x}^{(k)})\big)}{|x_k-x_k'|^2}=\frac{\beta_3(t,x)+\beta_3(t,\tilde{x}^{(k)})+4}{8\beta_3(t,x)\beta_3(t,\tilde{x}^{(k)})}\left(\frac{\beta_3(t,x)-\beta_3(t,\tilde{x}^{(k)})}{x_k-x_k'}\right)^2
$$
and we can easily obtain the relation
$$
\frac{JD\big(P^{(3)}(t,x),P^{(3)}(s,x)\big)}{|t-s|^2\,FI_1^{(t)}\big(P^{(3)}(t,x)\big)}=\frac{\beta_3(t,x)\big(\beta_3(t,x)+\beta_3(s,x)+4\big)}{2\big(\beta_3(t,x)+2\big)\beta_3(s,x)}\left(\frac{\beta_3(t,x)-\beta_3(s,x)}{b_0^{(3)}(t,x)(t-s)}\right)^2
$$
$$
\frac{JD\big(P^{(3)}(t,x),P^{(3)}(t,\tilde{x}^{(k)})\big)}{|x_k-x_k'|^2\,FI_1^{(x_k)}\big(P^{(3)}(t,x)\big)}=\frac{\beta_3(t,x)\big(\beta_3(t,x)+\beta_3(t,\tilde{x}^{(k)})+4\big)}{2\big(\beta_3(t,x)+2\big)\beta_3(t,\tilde{x}^{(k)})}\left(\frac{\beta_3(t,x)-\beta_3(t,\tilde{x}^{(k)})}{b_k^{(3)}(t,x)(x_k-x_k')}\right)^2
$$
for k = 1 , 2 , , d . Without a loss of generality, the result (119) also corroborates Theorem 2.
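The convergence of the quotient (118) to the generalized Fisher information (117) can be illustrated numerically. Below, $\beta_3(t,x)=tx$ in one spatial dimension (so $b_0^{(3)}=x$) is an arbitrary choice satisfying the conditions of Theorem 5, not a case from the paper.

```python
# Check that JD/(t−s)² → FI as s → t for the Theorem 5 family.
x1 = 0.6

def beta3(t):
    return t * x1          # so b0 = x1

def jd3(t, s):
    bt, bs = beta3(t), beta3(s)
    return (bt + bs + 4) / (8 * bt * bs) * (bt - bs) ** 2

t0 = 0.8
fi = (beta3(t0) + 2) / (4 * beta3(t0) ** 2) * x1 ** 2
s0 = t0 - 1e-6
quot = jd3(t0, s0) / (t0 - s0) ** 2
print(quot, fi)   # both ≈ 0.96875
```

The quotient approaches the Fisher information linearly in $|t-s|$, in line with Theorem 2.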
Furthermore, if we consider different β 3 ( t , x ) and β 4 ( t , x ) in density function (103), we can obtain density functions p ( 3 ) ( u ; t , x ) and p ( 4 ) ( u ; t , x ) ; then, the generalized Fisher divergence at the same space–time points is
$$
\begin{aligned}
FD_{b_k^{(3)},b_k^{(4)}}\big(P^{(3)}(t,x)\,\big\|\,P^{(4)}(t,x)\big)={}&\frac{\beta_3(t,x)-\beta_4(t,x)}{4\beta_3^2(t,x)\beta_4^2(t,x)}\Big[b_k^{(4)}(t,x)\beta_3(t,x)\big(\beta_3^2(t,x)-2\beta_4^2(t,x)\big)\\
&\qquad-b_k^{(3)}(t,x)\beta_4(t,x)\big(\beta_4^2(t,x)-2\beta_3^2(t,x)\big)\Big]\\
&+\frac{4\beta_3(t,x)+4\beta_4(t,x)-3\beta_3(t,x)\beta_4(t,x)}{4\beta_3^2(t,x)\beta_4^2(t,x)}\Big[b_k^{(4)}(t,x)\beta_3(t,x)-b_k^{(3)}(t,x)\beta_4(t,x)\Big]
\end{aligned}
$$
with the remainder terms
$$
R_k\big(P^{(3)}(t,x)\,\big\|\,P^{(4)}(t,x)\big)=2\big(b_k^{(3)}(t,x)-b_k^{(4)}(t,x)\big)
$$
k = 0 , 1 , 2 , , d . Then, the generalized De Bruijn identities are as follows:
$$
\frac{\partial}{\partial t}JD\big(P^{(3)}(t,x),P^{(4)}(t,x)\big)=-\frac{1}{2}FD_{b_0^{(3)},b_0^{(4)}}\big(P^{(3)}(t,x)\,\big\|\,P^{(4)}(t,x)\big)+2\big(b_0^{(3)}(t,x)-b_0^{(4)}(t,x)\big)
$$
$$
\frac{\partial}{\partial x_k}JD\big(P^{(3)}(t,x),P^{(4)}(t,x)\big)=-\frac{1}{2}FD_{b_k^{(3)},b_k^{(4)}}\big(P^{(3)}(t,x)\,\big\|\,P^{(4)}(t,x)\big)+2\big(b_k^{(3)}(t,x)-b_k^{(4)}(t,x)\big),\qquad k=1,2,\ldots,d
$$
Additionally, we offer an alternative non-trivial form below that utilizes the implicit functions method to express our results. This form differs from the one presented in Theorem 5.
Theorem 6.
Suppose the functions in the Fokker–Planck Equations (43) for the continuous bounded random field $X(t,x)\in[-1,1]$ are formulated as follows:
$$
a_k(u;t,x)=-\frac{3}{2}b_k(t,x)u,\qquad b_k(u;t,x)=b_k(t,x)\big(1-u^2\big),\qquad k=0,1,2,\ldots,d
$$
where b k are continuously differentiable functions independent of u and a continuously differentiable function β ( t , x ) exists such that
$$
\mathrm{d}\beta(t,x)=b_0(t,x)\,\mathrm{d}t+b_1(t,x)\,\mathrm{d}x_1+\cdots+b_d(t,x)\,\mathrm{d}x_d
$$
the initial value is $X(t,x)=0$ when $\mathrm{prod}(t,x)=0$, and the initial density function is $p(u;t,x)=\delta(u)$ when $\mathrm{prod}(t,x)=0$. Then, the density function is as follows:
$$
p(u;t,x)=\frac{e^{\frac{1}{2}\beta(t,x)}}{\sqrt{2\pi\beta(t,x)}}\,e^{-\frac{v^2}{2\beta(t,x)}},\qquad u=\sin v
$$
Proof. 
Depending on the conditions, it is easy to obtain the Fokker–Planck equations as
$$
\frac{\partial}{\partial t}p(u;t,x)=\frac{b_0(t,x)}{2}\frac{\partial^2}{\partial u^2}\Big[\big(1-u^2\big)p(u;t,x)\Big]+\frac{3b_0(t,x)}{2}\frac{\partial}{\partial u}\big[u\,p(u;t,x)\big]
$$
$$
\frac{\partial}{\partial x_k}p(u;t,x)=\frac{b_k(t,x)}{2}\frac{\partial^2}{\partial u^2}\Big[\big(1-u^2\big)p(u;t,x)\Big]+\frac{3b_k(t,x)}{2}\frac{\partial}{\partial u}\big[u\,p(u;t,x)\big],\qquad k=1,2,\ldots,d
$$
By applying the transformation $u=\sin v$ and defining $\tilde p(v;t,x)=p(\sin v;t,x)$, the equations can be restructured as
$$
\frac{\partial}{\partial t}\tilde p(v;t,x)=\frac{b_0(t,x)}{2}\frac{\partial^2}{\partial v^2}\tilde p(v;t,x)+\frac{b_0(t,x)}{2}\tilde p(v;t,x)
$$
$$
\frac{\partial}{\partial x_k}\tilde p(v;t,x)=\frac{b_k(t,x)}{2}\frac{\partial^2}{\partial v^2}\tilde p(v;t,x)+\frac{b_k(t,x)}{2}\tilde p(v;t,x),\qquad k=1,2,\ldots,d
$$
with the solution
$$
\tilde p(v;t,x)=\frac{e^{\frac{1}{2}\int_0^t b_0(s,x)\,\mathrm{d}s}}{\sqrt{2\pi\int_0^t b_0(s,x)\,\mathrm{d}s}}\exp\left(-\frac{v^2}{2\int_0^t b_0(s,x)\,\mathrm{d}s}\right)
$$
$$
\tilde p(v;t,x)=\frac{e^{\frac{1}{2}\int_0^{x_k} b_k(t,x)\,\mathrm{d}x_k}}{\sqrt{2\pi\int_0^{x_k} b_k(t,x)\,\mathrm{d}x_k}}\exp\left(-\frac{v^2}{2\int_0^{x_k} b_k(t,x)\,\mathrm{d}x_k}\right),\qquad k=1,2,\ldots,d
$$
Recall that a continuously differentiable function $\beta(t,x)$ exists such that
$$
\mathrm{d}\beta(t,x)=b_0(t,x)\,\mathrm{d}t+b_1(t,x)\,\mathrm{d}x_1+\cdots+b_d(t,x)\,\mathrm{d}x_d
$$
we can derive the probability density function
$$
\tilde p(v;t,x)=\frac{e^{\frac{1}{2}\beta(t,x)}}{\sqrt{2\pi\beta(t,x)}}\,e^{-\frac{v^2}{2\beta(t,x)}}
$$
then,
$$
p(u;t,x)=\frac{e^{\frac{1}{2}\beta(t,x)}}{\sqrt{2\pi\beta(t,x)}}\,e^{-\frac{v^2}{2\beta(t,x)}},\qquad u=\sin v
$$
Remark 2.
Similar to the discussion in Remark 1, we can obtain the Itô processes corresponding to the Fokker–Planck equations in Theorem 6
$$
\frac{\partial}{\partial t}X(t,x)=-\frac{3}{2}b_0(t,x)X(t,x)+\sqrt{b_0(t,x)\big(1-X^2(t,x)\big)}\,\dot W_t
$$
$$
\frac{\partial}{\partial x_k}X(t,x)=-\frac{3}{2}b_k(t,x)X(t,x)+\sqrt{b_k(t,x)\big(1-X^2(t,x)\big)}\,\dot W_k
$$
$k=1,2,\ldots,d$. In fact, this random field can be solved with a sinusoidal transformation, and the corresponding probability density function can be obtained. Although random field (132) has not yet found an application scenario, it gives us ideas for constructing different forms of space–time random fields in the future.
From density function (125), if we consider different β 5 ( t , x ) and β 6 ( t , x ) , we can obtain density functions p ( 5 ) ( u ; t , x ) and p ( 6 ) ( u ; t , x ) ; then, we can obtain the Jeffreys divergence and generalized Fisher information
$$
JD\big(P^{(5)}(t,x),P^{(5)}(s,y)\big)=\frac{1-\beta_5(t,x)-\beta_5(s,y)}{2\beta_5(t,x)\beta_5(s,y)}\big(\beta_5(t,x)-\beta_5(s,y)\big)^2
$$
and
$$
FI_1^{(t)}\big(P^{(5)}(t,x)\big)=\frac{1-2\beta_5(t,x)}{2\beta_5^2(t,x)}\big(b_0^{(5)}(t,x)\big)^2,\qquad FI_1^{(x_k)}\big(P^{(5)}(t,x)\big)=\frac{1-2\beta_5(t,x)}{2\beta_5^2(t,x)}\big(b_k^{(5)}(t,x)\big)^2
$$
and then the quotients
$$
\frac{JD\big(P^{(5)}(t,x),P^{(5)}(s,x)\big)}{|t-s|^2}=\frac{1-\beta_5(t,x)-\beta_5(s,x)}{2\beta_5(t,x)\beta_5(s,x)}\left(\frac{\beta_5(t,x)-\beta_5(s,x)}{t-s}\right)^2
$$
$$
\frac{JD\big(P^{(5)}(t,x),P^{(5)}(t,\tilde{x}^{(k)})\big)}{|x_k-x_k'|^2}=\frac{1-\beta_5(t,x)-\beta_5(t,\tilde{x}^{(k)})}{2\beta_5(t,x)\beta_5(t,\tilde{x}^{(k)})}\left(\frac{\beta_5(t,x)-\beta_5(t,\tilde{x}^{(k)})}{x_k-x_k'}\right)^2
$$
Obviously, we can easily obtain
$$
\frac{JD\big(P^{(5)}(t,x),P^{(5)}(s,x)\big)}{|t-s|^2\,FI_1^{(t)}\big(P^{(5)}(t,x)\big)}=\frac{\big(1-\beta_5(t,x)-\beta_5(s,x)\big)\beta_5(t,x)}{\big(1-2\beta_5(t,x)\big)\beta_5(s,x)}\left(\frac{\beta_5(t,x)-\beta_5(s,x)}{b_0^{(5)}(t,x)(t-s)}\right)^2
$$
$$
\frac{JD\big(P^{(5)}(t,x),P^{(5)}(t,\tilde{x}^{(k)})\big)}{|x_k-x_k'|^2\,FI_1^{(x_k)}\big(P^{(5)}(t,x)\big)}=\frac{\big(1-\beta_5(t,x)-\beta_5(t,\tilde{x}^{(k)})\big)\beta_5(t,x)}{\big(1-2\beta_5(t,x)\big)\beta_5(t,\tilde{x}^{(k)})}\left(\frac{\beta_5(t,x)-\beta_5(t,\tilde{x}^{(k)})}{b_k^{(5)}(t,x)(x_k-x_k')}\right)^2
$$
for k = 1 , 2 , , d . Without a loss of generality, the result (136) corroborates Theorem 2.
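As a consistency check of the expressions for the Theorem 6 family as reconstructed here, the ratio of the Jeffreys-divergence quotient to the generalized Fisher information should approach 1 as $s\to t$, in line with Theorem 2. The choice $\beta_5(t,x)=tx$ (one spatial dimension, $b_0^{(5)}=x$, small values keeping $1-2\beta_5>0$) is an arbitrary illustration, not a case from the paper.

```python
x1 = 0.5

def beta5(t):
    return t * x1          # so b0 = x1

def jd5(t, s):
    bt, bs = beta5(t), beta5(s)
    return (1 - bt - bs) / (2 * bt * bs) * (bt - bs) ** 2

t0 = 0.3
fi = (1 - 2 * beta5(t0)) / (2 * beta5(t0) ** 2) * x1 ** 2
s0 = t0 - 1e-6
ratio = jd5(t0, s0) / (t0 - s0) ** 2 / fi
print(ratio)   # ≈ 1
```

The limit equals 1, confirming the internal consistency of the reconstructed quotient and Fisher-information formulas.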
Furthermore, if we consider different β 5 ( t , x ) and β 6 ( t , x ) in the density function (125), denoted as p ( 5 ) ( u ; t , x ) and p ( 6 ) ( u ; t , x ) , we can obtain the generalized Fisher divergence at the same space–time points
$$
FD_{b_k^{(5)},b_k^{(6)}}\big(P^{(5)}(t,x)\,\big\|\,P^{(6)}(t,x)\big)=\frac{\big(\beta_5(t,x)-\beta_6(t,x)\big)\Big(b_k^{(6)}\beta_5(t,x)-b_k^{(5)}\beta_6(t,x)\Big)}{2\beta_5^2(t,x)\beta_6^2(t,x)}\Big(\beta_5(t,x)-\beta_5^2(t,x)+\beta_6(t,x)-\beta_6^2(t,x)\Big)
$$
with the remainder terms
$$
R_k\big(P^{(5)}(t,x)\,\big\|\,P^{(6)}(t,x)\big)=b_k^{(5)}(t,x)-b_k^{(6)}(t,x)
$$
for k = 0 , 1 , 2 , , d . Then, the generalized De Bruijn identities are as follows:
$$
\frac{\partial}{\partial t}JD\big(P^{(5)}(t,x),P^{(6)}(t,x)\big)=-\frac{1}{2}FD_{b_0^{(5)},b_0^{(6)}}\big(P^{(5)}(t,x)\,\big\|\,P^{(6)}(t,x)\big)+b_0^{(5)}(t,x)-b_0^{(6)}(t,x)
$$
$$
\frac{\partial}{\partial x_k}JD\big(P^{(5)}(t,x),P^{(6)}(t,x)\big)=-\frac{1}{2}FD_{b_k^{(5)},b_k^{(6)}}\big(P^{(5)}(t,x)\,\big\|\,P^{(6)}(t,x)\big)+b_k^{(5)}(t,x)-b_k^{(6)}(t,x),\qquad k=1,2,\ldots,d
$$

5. Conclusions

In this paper, we generalize the classical definitions of entropy, divergence, and Fisher information and derive these measures on a space–time random field. In addition, the Fokker–Planck Equations (43) for the space–time random field and the corresponding density functions are obtained. Moreover, we obtain the Jeffreys divergence of a space–time random field at different space–time positions and show that the ratio of the Jeffreys divergence to the squared space–time coordinate difference approximates the generalized Fisher information (54). Additionally, we apply the Jeffreys divergence to two space–time random fields governed by Fokker–Planck equations of the same type but with different parameters to obtain the generalized De Bruijn identities (61). Finally, we give three examples of Fokker–Planck equations, with their solutions, calculate the corresponding Jeffreys divergence, generalized Fisher information, and Fisher divergence, and obtain the De Bruijn identities. These results encourage further research into the entropy and divergence of space–time random fields, which advances the pertinent fields of information entropy, Fisher information, and De Bruijn identities.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The author would like to thank Pingyi Fan, Zhanjie Song, Ying Li, and Yumeng Song for providing relevant references and helpful discussions on topics related to this work.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
KL: Kullback–Leibler divergence
FI: Fisher information
CFI: Cross-Fisher information
FD: Fisher divergence
sFD: Symmetric Fisher divergence
JD: Jeffreys divergence

References

1. Risken, H. The Fokker–Planck Equation: Methods of Solution and Applications; Springer: Berlin/Heidelberg, Germany, 1984.
2. Stam, A.J. Some inequalities satisfied by the quantities of information of Fisher and Shannon. Inf. Control 1959, 2, 101–112.
3. Barron, A.R. Entropy and the central limit theorem. Ann. Probab. 1986, 14, 336–342.
4. Johnson, O. Information Theory and the Central Limit Theorem; Imperial College Press: London, UK, 2004.
5. Guo, D. Relative entropy and score function: New information estimation relationships through arbitrary additive perturbation. In Proceedings of the 2009 IEEE International Symposium on Information Theory, Seoul, Republic of Korea, 28 June–3 July 2009; pp. 814–818.
6. Toranzo, I.V.; Zozor, S.; Brossier, J.-M. Generalization of the De Bruijn Identity to General ϕ-Entropies and ϕ-Fisher Informations. IEEE Trans. Inform. Theory 2018, 64, 6743–6758.
7. Kharazmi, O.; Balakrishnan, N. Cumulative residual and relative cumulative residual Fisher information and their properties. IEEE Trans. Inform. Theory 2021, 67, 6306–6312.
8. Kolmogorov, A.N. The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers. Dokl. Akad. Nauk SSSR 1941, 30, 299–303.
9. Kolmogorov, A.N. On the degeneration of isotropic turbulence in an incompressible viscous fluid. Dokl. Akad. Nauk SSSR 1941, 31, 538–542.
10. Kolmogorov, A.N. Dissipation of energy in isotropic turbulence. Dokl. Akad. Nauk SSSR 1941, 32, 19–21.
11. Yaglom, A.M. Some classes of random fields in n-dimensional space, related to stationary random processes. Theory Probab. Its Appl. 1957, 2, 273–320.
12. Yaglom, A.M. Correlation Theory of Stationary and Related Random Functions. Volume I: Basic Results; Springer: New York, NY, USA, 1987.
13. Yaglom, A.M. Correlation Theory of Stationary and Related Random Functions. Volume II: Supplementary Notes and References; Springer: Berlin, Germany, 1987.
14. Bowditch, A.; Sun, R. The two-dimensional continuum random field Ising model. Ann. Probab. 2022, 50, 419–454.
15. Bailleul, I.; Catellier, R.; Delarue, F. Propagation of chaos for mean field rough differential equations. Ann. Probab. 2021, 49, 944–996.
16. Wu, L.; Samorodnitsky, G. Regularly varying random fields. Stoch. Process Their Appl. 2020, 130, 4470–4492.
17. Koch, E.; Dombry, C.; Robert, C.Y. A central limit theorem for functions of stationary max-stable random fields on Rd. Stoch. Process Their Appl. 2020, 129, 3406–3430.
18. Ye, Z. On Entropy and ε-Entropy of Random Fields. Ph.D. Dissertation, Cornell University, Ithaca, NY, USA, 1989.
19. Ye, Z.; Berger, T. A new method to estimate the critical distortion of random fields. IEEE Trans. Inform. Theory 1992, 38, 152–157.
20. Ye, Z.; Berger, T. Information Measures for Discrete Random Fields; Science Press: Beijing, China; New York, NY, USA, 1998.
21. Ye, Z.; Yang, W. Random Field: Network Information Theory and Game Theory; Science Press: Beijing, China, 2023. (In Chinese)
22. Ma, C. Stationary random fields in space and time with rational spectral densities. IEEE Trans. Inform. Theory 2007, 53, 1019–1029.
23. Hairer, M. A theory of regularity structures. Invent. Math. 2014, 198, 269–504.
24. Hairer, M. Solving the KPZ equation. Ann. Math. 2013, 178, 559–664.
25. Kremp, H.; Perkowski, N. Multidimensional SDE with distributional drift and Lévy noise. Bernoulli 2022, 28, 1757–1783.
26. Beeson, R.; Namachchivaya, N.S.; Perkowski, N. Approximation of the filter equation for multiple timescale, correlated, nonlinear systems. SIAM J. Math. Anal. 2022, 54, 3054–3090.
27. Song, Z.; Zhang, J. A note for estimation about average differential entropy of continuous bounded space–time random field. Chin. J. Electron. 2022, 31, 793–803.
28. Kramers, H.A. Brownian motion in a field of force and the diffusion model of chemical reactions. Physica 1940, 7, 284–304.
29. Moyal, J.E. Stochastic processes and statistical physics. J. R. Stat. Soc. Ser. B Stat. Methodol. 1949, 11, 150–210.
30. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656.
31. Neeser, F.D.; Massey, J.L. Proper complex random processes with applications to information theory. IEEE Trans. Inform. Theory 1991, 39, 1293–1302.
32. Ihara, S. Information Theory for Continuous Systems; World Scientific: Singapore, 1993.
33. Gray, R.M. Entropy and Information Theory; Springer: Boston, MA, USA, 2011.
34. Bach, F. Information Theory With Kernel Methods. IEEE Trans. Inform. Theory 2023, 69, 752–775.
35. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86.
36. Jeffreys, H. An invariant form for the prior probability in estimation problems. Proc. R. Soc. Lond. A 1946, 186, 453–461.
37. Fuglede, B.; Topsøe, F. Jensen-Shannon divergence and Hilbert space embedding. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Chicago, IL, USA, 27 June–2 July 2004; Volume 31.
38. Rényi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics; University of California Press: Berkeley, CA, USA, 1961; Volume 1, pp. 547–561.
39. She, R.; Fan, P.; Liu, X.-Y.; Wang, X. Interpretable Generative Adversarial Networks With Exponential Function. IEEE Trans. Signal Process. 2021, 69, 3854–3867.
40. Liu, S.; She, R.; Zhu, Z.; Fan, P. Storage Space Allocation Strategy for Digital Data with Message Importance. Entropy 2020, 22, 591.
41. She, R.; Liu, S.; Fan, P. Attention to the Variation of Probabilistic Events: Information Processing with Message Importance Measure. Entropy 2019, 21, 439.
42. Wan, S.; Lu, J.; Fan, P.; Letaief, K.B. Information Theory in Formation Control: An Error Analysis to Multi-Robot Formation. Entropy 2018, 20, 618.
43. She, R.; Liu, S.; Fan, P. Recognizing Information Feature Variation: Message Importance Transfer Measure and Its Applications in Big Data. Entropy 2018, 20, 401.
44. Nielsen, F. An Elementary Introduction to Information Geometry. Entropy 2020, 22, 1100.
45. Nielsen, F. On the Jensen–Shannon Symmetrization of Distances Relying on Abstract Means. Entropy 2019, 21, 485.
46. Nielsen, F.; Nock, R. Generalizing skew Jensen divergences and Bregman divergences with comparative convexity. IEEE Signal Process. Lett. 2017, 24, 1123–1127.
47. Furuichi, S.; Minculete, N. Refined Young Inequality and Its Application to Divergences. Entropy 2021, 23, 514.
48. Pinele, J.; Strapasson, J.E.; Costa, S.I. The Fisher-Rao Distance between Multivariate Normal Distributions: Special Cases, Bounds and Applications. Entropy 2020, 22, 404.
49. Reverter, F.; Oller, J.M. Computing the Rao distance for Gamma distributions. J. Comput. Appl. Math. 2003, 157, 155–167.
50. Pawula, R.F. Generalizations and extensions of the Fokker–Planck-Kolmogorov equations. IEEE Trans. Inform. Theory 1967, 13, 33–41.
51. Pawula, R.F. Approximation of the linear Boltzmann equation by the Fokker–Planck equation. Phys. Rev. 1967, 162, 186–188.
52. Khoshnevisan, D.; Shi, Z. Brownian Sheet and Capacity. Ann. Probab. 1999, 27, 1135–1159.
53. Revuz, D.; Yor, M. Continuous Martingales and Brownian Motion, 2nd ed.; Springer: New York, NY, USA, 1999.
Zhang, J. Jeffreys Divergence and Generalized Fisher Information Measures on Fokker–Planck Space–Time Random Field. Entropy 2023, 25, 1445. https://doi.org/10.3390/e25101445