Article

On Complex Matrix-Variate Dirichlet Averages and Its Applications in Various Sub-Domains

1 Department of Statistics, Cochin University of Science and Technology, Cochin 682 022, India
2 Department of Statistics, St. Thomas College Thrissur, Calicut University, Thenhipalam 680 001, India
3 Office for Outer Space Affairs, United Nations, Vienna International Center, A-1400 Vienna, Austria
* Author to whom correspondence should be addressed.
Entropy 2023, 25(11), 1534; https://doi.org/10.3390/e25111534
Submission received: 27 September 2023 / Revised: 1 November 2023 / Accepted: 7 November 2023 / Published: 10 November 2023

Abstract:
This paper is about Dirichlet averages in the matrix-variate case, that is, averages of functions over the Dirichlet measure in the complex domain. The classical power mean contains the harmonic, arithmetic and geometric means (Hardy, Littlewood and Pólya); it was generalized to the y-mean by de Finetti and to the hypergeometric mean by Carlson; see the references herein. Carlson's hypergeometric mean averages a scalar function over a real scalar variable type-1 Dirichlet measure, and this is known in the current literature as the Dirichlet average of that function. The idea is examined here when there is a type-1 or type-2 Dirichlet density in the complex domain. Averages of several functions are computed with respect to such Dirichlet densities in the complex domain. The Dirichlet measures are defined over Hermitian positive definite matrices. Some applications are also discussed.
MSC:
15B52; 15B48; 26B10; 33C60; 33C65; 60E05; 62E15; 62H10; 62H05

1. Introduction

Dirichlet averages are a type of weighted average used in mathematics and statistics. Given a function f ( x ) and a probability distribution p ( x ) defined over a domain D, the Dirichlet average of f ( x ) over D with respect to p ( x ) is defined as:
$$ \langle f\rangle_{p} = \frac{1}{|D|}\int_{D} f(x)\,p(x)\,dx, $$
where $|D|$ is the measure of the domain $D$. Intuitively, the Dirichlet average is the average value of $f(x)$ weighted by the probability distribution $p(x)$ over the domain $D$. The name "Dirichlet average" comes from the fact that the formula involves an integral similar to the Dirichlet integral, which is important in the theory of functions of a complex variable. Dirichlet averages have connections to many other important mathematical concepts, such as harmonic analysis, Fourier series, and the theory of functions of a complex variable. They play an important role in various problems in number theory, including the study of prime numbers and the distribution of arithmetic functions; see [1,2]. Ref. [3] used Dirichlet averages in the study of random matrices. Dirichlet averages are used in the study of option pricing and risk management in finance; see [4]. Ref. [5] used them in the study of Bayesian inference and probabilistic modeling in machine learning, and they are used in the study of natural language processing and text analysis; see [6]. Overall, Dirichlet averages are an important mathematical tool with applications in many disciplines: they provide a way to compute the average value of a function over a probability distribution, and they connect to many other important mathematical concepts.
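As a concrete scalar illustration of the definition above, the following sketch (assuming SciPy is available; the choices $f(x) = x^2$, $p$ the Beta(2, 3) density, and $D = [0, 1]$, so $|D| = 1$, are arbitrary) evaluates the weighted average by quadrature and cross-checks it by Monte Carlo:

```python
import numpy as np
from scipy import integrate, stats

# Scalar case of the Dirichlet-type weighted average:
# <f>_p = (1/|D|) * ∫_D f(x) p(x) dx, with D = [0, 1] so |D| = 1.
f = lambda x: x**2
p = stats.beta(2, 3).pdf          # a probability density on [0, 1]

avg, _ = integrate.quad(lambda x: f(x) * p(x), 0.0, 1.0)

# Cross-check by Monte Carlo: avg is just E[f(X)] for X ~ Beta(2, 3).
rng = np.random.default_rng(0)
mc = f(rng.beta(2, 3, size=200_000)).mean()

print(round(avg, 4))              # E[X^2] = Var + mean^2 = 0.04 + 0.16 = 0.2
assert abs(avg - 0.2) < 1e-8
assert abs(mc - avg) < 5e-3
```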
In [7], there is a discussion of the classical power mean, which contains the harmonic, arithmetic and geometric means. The classical weighted average is of the following form:
$$ f(b) = \big[w_{1}z_{1}^{b}+\cdots+w_{n}z_{n}^{b}\big]^{\frac{1}{b}}, $$
where all the quantities are real scalars, $w' = (w_{1},\ldots,w_{n})$, $z' = (z_{1},\ldots,z_{n})$, $z_{j}>0$, $w_{j}>0$, $j=1,\ldots,n$, and $\sum_{j=1}^{n}w_{j}=1$, with a prime denoting the transpose. For $b=1$, $f(1)$ gives $\sum_{j=1}^{n}w_{j}z_{j}$, the arithmetic mean; for $b=-1$, $f(-1)$ gives $[\sum_{j}(w_{j}/z_{j})]^{-1}$, the harmonic mean; and when $b\to 0+$, $f(0+)$ yields $\prod_{j=1}^{n}z_{j}^{w_{j}}$, the geometric mean. This weighted mean $f(b)$ was generalized to the y-mean by de Finetti [8] and to the hypergeometric mean by Carlson [9]. In Carlson's generalization, a real scalar variable type-1 Dirichlet measure is placed on the weights $(w_{1},\ldots,w_{n-1})$, and the average of a given function is taken over this Dirichlet measure. In the current literature this is known as the Dirichlet average of that function; the function need not reduce to the classical arithmetic, harmonic, or geometric means. Carlson, who developed the notion of the Dirichlet average, also offered a comprehensive and in-depth examination of its many types; see [9,10,11,12,13,14]. The integral mean of a function with respect to the Dirichlet measure is known as its "Dirichlet average".
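The three limiting cases of $f(b)$ can be verified numerically; in this sketch the weights and values are arbitrary illustrative choices:

```python
import numpy as np

def power_mean(w, z, b):
    """Weighted power mean f(b) = (sum_j w_j z_j**b)**(1/b); w sums to 1."""
    w, z = np.asarray(w, float), np.asarray(z, float)
    if abs(b) < 1e-12:                      # b -> 0 limit: geometric mean
        return float(np.prod(z ** w))
    return float(np.sum(w * z ** b) ** (1.0 / b))

w = [0.2, 0.3, 0.5]
z = [1.0, 4.0, 9.0]

arithmetic = power_mean(w, z, 1.0)          # sum_j w_j z_j
harmonic   = power_mean(w, z, -1.0)         # (sum_j w_j / z_j)**-1
geometric  = power_mean(w, z, 0.0)          # prod_j z_j**w_j

assert np.isclose(arithmetic, 0.2*1 + 0.3*4 + 0.5*9)
assert np.isclose(harmonic, 1.0 / (0.2/1 + 0.3/4 + 0.5/9))
# f(b) approaches the geometric mean as b -> 0+
assert np.isclose(power_mean(w, z, 1e-8), geometric, atol=1e-6)
print(arithmetic, harmonic, geometric)
```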
The paper is organized as follows: Section 1 gives the basic concepts for developing the theory of the matrix-variate Dirichlet measure in complex domain. Dirichlet averages for a function of matrix argument in the complex domain are developed in Section 2. In Section 3, we discuss the complex matrix-variate type-2 Dirichlet measure and averages over some useful matrix-variate functions. The rectangular matrix-variate Dirichlet measure is presented in Section 4. In Section 5, we establish the connection between Dirichlet averages and Tsallis entropy. Section 6 provides an elaborate account of the diverse sub-domains in which the technology finds valuable applications.

Complex Domain

In the present paper, we consider Dirichlet averages of various functions over Dirichlet measures in the complex domain in the matrix-variate case. All matrices appearing in this paper are Hermitian positive definite and $p\times p$ unless stated otherwise. To distinguish the two cases, matrices in the complex domain will be denoted with a tilde, as $\tilde{X}$, and real matrices will be written without the tilde, as $X$. We consider real-valued scalar functions of a complex matrix argument, and such functions will be averaged over a complex matrix-variate Dirichlet measure. The following standard notations will be used: $\det(\tilde{X})$ denotes the determinant of the complex matrix variable $\tilde{X}$, and $|\det(\cdot)|$ denotes the absolute value of the determinant; that is, if $\det(\tilde{X})=a+ib$, $i=\sqrt{-1}$, then $|\det(\tilde{X})|=[(a+ib)(a-ib)]^{1/2}=(a^{2}+b^{2})^{1/2}$. $\mathrm{tr}(\cdot)$ denotes the trace of $(\cdot)$, and $\int_{\tilde{X}}$ denotes the integral over all $\tilde{X}$, where $\tilde{X}$ may be rectangular, square or positive definite. $\tilde{X}>O$ means that the $p\times p$ matrix $\tilde{X}$ is Hermitian positive definite. Constant matrices, whether real or in the complex domain, will be written without the tilde unless the fact is to be stressed, in which case we use a tilde. $O<A<\tilde{X}<B$ means $A>O$, $\tilde{X}-A>O$, $B-\tilde{X}>O$, where $A$ and $B$ are $p\times p$ constant positive definite matrices. Then, for a real-valued scalar function $f(\tilde{X})$ of matrix argument $\tilde{X}$,
$$ \int_{O<A<\tilde{X}<B} f(\tilde{X})\,d\tilde{X} = \int_{A}^{B} f(\tilde{X})\,d\tilde{X} $$
denotes the integral over the Hermitian positive definite matrices $\tilde{X}$ with $O<A<\tilde{X}<B$, where $d\tilde{X}$ stands for the wedge product of differentials. For $\tilde{Z}=(\tilde{z}_{jk})=X+iY$, an $m\times n$ matrix of distinct variables $\tilde{z}_{jk}$, where $X$ and $Y$ are real matrices and $i=\sqrt{-1}$, the differential element is $d\tilde{Z}=dX\wedge dY$, with $dX$ and $dY$ being the wedge products of differentials in $X$ and $Y$, respectively. For example, $dX=\wedge_{j=1}^{m}\wedge_{k=1}^{n}dx_{jk}$ if $X=(x_{jk})$ is $m\times n$. When $\tilde{Z}$ is Hermitian, then $X=X'$ (symmetric) and $Y=-Y'$ (skew symmetric), where a prime denotes the transpose. In this case, $dX=\wedge_{j\le k}dx_{jk}$ and $dY=\wedge_{j<k}dy_{jk}$. $\int_{\tilde{X}>O}f(\tilde{X})\,d\tilde{X}$ means the integral over the Hermitian positive definite matrix $\tilde{X}>O$; it is a multiple integral over all the $\tilde{x}_{jk}$'s, where $\tilde{X}=(\tilde{x}_{jk})$ and the $\tilde{x}_{jk}$'s are in the complex domain. The complex matrix-variate gamma function will be denoted by $\tilde{\Gamma}_{p}(\alpha)$, which has the following expression and integral representation:
$$ \tilde{\Gamma}_{p}(\alpha) = \pi^{\frac{p(p-1)}{2}}\,\Gamma(\alpha)\Gamma(\alpha-1)\cdots\Gamma(\alpha-(p-1)),\quad \Re(\alpha)>p-1 $$
and
$$ \tilde{\Gamma}_{p}(\alpha) = \int_{\tilde{X}>O} |\det(\tilde{X})|^{\alpha-p}\, e^{-\mathrm{tr}(\tilde{X})}\, d\tilde{X},\quad \Re(\alpha)>p-1 $$
where $\Re(\cdot)$ denotes the real part of $(\cdot)$ and the integration is over all Hermitian positive definite matrices $\tilde{X}$. For the computations to follow, we will need some Jacobians of transformations in the complex domain. These are listed here without proofs; for the proofs and for other such Jacobians, see [15].
Lemma 1. 
Let $\tilde{X}$ and $\tilde{Y}$ be $m\times n$ matrices of $mn$ distinct complex variables each. Let $A$ be an $m\times m$ and $B$ an $n\times n$ nonsingular constant matrix. Then
$$ \tilde{Y} = A\tilde{X}B,\ \det(A)\neq 0,\ \det(B)\neq 0 \ \Rightarrow\ d\tilde{Y} = [\det(A^{*}A)]^{n}\,[\det(B^{*}B)]^{m}\, d\tilde{X} $$
where $A^{*}$ and $B^{*}$ denote the conjugate transposes of $A$ and $B$, respectively; if $X$, $Y$, $A$, $B$ are real, then
$$ Y = AXB \ \Rightarrow\ dY = [\det(A)]^{n}\,[\det(B)]^{m}\, dX $$
and if $a$ is a scalar quantity, then
$$ \tilde{Y} = a\tilde{X} \ \Rightarrow\ d\tilde{Y} = |a|^{2mn}\, d\tilde{X}. $$
Lemma 2. 
Let $\tilde{X}$ be a $p\times p$ Hermitian matrix whose elements are distinct complex variables, apart from the Hermitian symmetry. Let $A$ be a nonsingular constant matrix. Then
$$ \tilde{Y} = A\tilde{X}A^{*} \ \Rightarrow\ d\tilde{Y} = |\det(A)|^{2p}\, d\tilde{X}. $$
If $A$, $X$, $Y$ are real with $X = X'$, then
$$ Y = AXA' \ \Rightarrow\ dY = [\det(A)]^{p+1}\, dX. $$
If $X$ and $Y$ are real, $X = X'$, and $a$ is a real scalar, then
$$ Y = aX \ \Rightarrow\ dY = a^{\frac{p(p+1)}{2}}\, dX. $$
Lemma 3. 
Let X ˜ be p × p and nonsingular with the regular inverse X ˜ 1 . Then
$$ \tilde{Y} = \tilde{X}^{-1} \ \Rightarrow\ d\tilde{Y} = \begin{cases} |\det(\tilde{X}^{*}\tilde{X})|^{-2p}\, d\tilde{X} & \text{for a general } \tilde{X}\\[2pt] |\det(\tilde{X}^{*}\tilde{X})|^{-p}\, d\tilde{X} & \text{for } \tilde{X}=\tilde{X}^{*} \text{ or } \tilde{X}=-\tilde{X}^{*} \end{cases} $$
Lemma 4. 
Let $\tilde{X}$ be a $p\times p$ Hermitian positive definite matrix of distinct elements, apart from the Hermitian positive definiteness. Let $\tilde{T}=(\tilde{t}_{jk})$ be a lower triangular matrix with $\tilde{t}_{jk}=0$ for $j<k$, the $\tilde{t}_{jk}$, $j>k$, distinct, and $\tilde{t}_{kk}=t_{kk}>0$, $k=1,\ldots,p$; that is, the diagonal elements are real and positive. Then
$$ \tilde{X} = \tilde{T}\tilde{T}^{*} \ \Rightarrow\ d\tilde{X} = 2^{p}\Big\{\prod_{k=1}^{p} t_{kk}^{2(p-k)+1}\Big\}\, d\tilde{T}. $$
With the help of Lemma 4, we can evaluate the complex matrix-variate gamma integral in (2) and show that it is equal to the expression in (1). When Lemma 4 is applied to the integral in (2), the integral splits into p integrals of the form
$$ \prod_{k=1}^{p} 2\int_{0}^{\infty} (t_{kk}^{2})^{(\alpha-p)+\frac{1}{2}(2(p-k)+1)}\, e^{-t_{kk}^{2}}\, dt_{kk} = \prod_{k=1}^{p} \Gamma(\alpha-(k-1)),\quad \Re(\alpha)>k-1,\ k=1,\ldots,p $$
which results in the final condition as ( α ) > p 1 , and p ( p 1 ) / 2 integrals of the form
$$ \prod_{j>k}\int e^{-|\tilde{t}_{jk}|^{2}}\, d\tilde{t}_{jk} = \prod_{j>k}\int\!\!\int e^{-(t_{jk1}^{2}+t_{jk2}^{2})}\, dt_{jk1}\wedge dt_{jk2} = \prod_{j>k}\pi = \pi^{\frac{p(p-1)}{2}},\qquad |\tilde{t}_{jk}|^{2} = t_{jk1}^{2}+t_{jk2}^{2}. $$
Thus, the integral in (2) reduces to the expression in (1).
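The diagonal integrals in this splitting can be checked numerically for a small case ($\alpha = 5$, $p = 3$ is an arbitrary choice; SciPy's quadrature is assumed available):

```python
from math import gamma, exp, inf
from scipy import integrate

# Check of the diagonal integrals above, one t_kk at a time:
# 2 ∫_0^∞ (t^2)^{(α-p) + (2(p-k)+1)/2} e^{-t^2} dt = Γ(α - (k-1)).
alpha, p = 5.0, 3
vals = []
for k in range(1, p + 1):
    expo = (alpha - p) + (2 * (p - k) + 1) / 2.0        # power of t^2
    val, _ = integrate.quad(
        lambda t: 2.0 * (t * t) ** expo * exp(-t * t), 0.0, inf)
    vals.append(val)
    assert abs(val - gamma(alpha - (k - 1))) < 1e-8
print([round(v, 6) for v in vals])   # Γ(5), Γ(4), Γ(3) = 24, 6, 2
```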
Lemma 5. 
Let $\tilde{X}$ be an $n\times p$, $n\ge p$, matrix of full rank $p$. Let $\tilde{S}=\tilde{X}^{*}\tilde{X}$, a $p\times p$ Hermitian positive definite matrix. Let $d\tilde{X}$ and $d\tilde{S}$ denote the wedge products of the differentials in $\tilde{X}$ and $\tilde{S}$, respectively. Then
$$ d\tilde{X} = \frac{\pi^{np}}{\tilde{\Gamma}_{p}(n)}\, |\det(\tilde{S})|^{n-p}\, d\tilde{S}. $$
This is a very important result because $\tilde{X}$ is a rectangular matrix with $np$ distinct elements, whereas $\tilde{S}$ is Hermitian positive definite and $p\times p$. With the help of the above lemmas, we will average a few functions over Dirichlet measures in the complex domain.

2. Dirichlet Averages for Functions of Matrix Argument in the Complex Domain

The real type-1 and type-2 Dirichlet distributions are generalizations of the standard type-1 and type-2 beta distributions. The literature contains these distributions, their characteristics, and a few generalizations in the form of Liouville distributions. Matrix-variate analogues of the type-1 and type-2 Dirichlet distributions can be found in the literature; [15] is one example. Generalizations of matrix variables to the Liouville family can be observed in [16]. Matrix-variate distributions other than the generalized Dirichlet may be seen in [17,18], and [19] provides examples of the use of scalar variable Dirichlet models in random division and other geometrical problems.
All the matrices appearing in this section are p × p Hermitian positive definite unless stated otherwise. Consider the following complex matrix-variate type-1 Dirichlet measure:
$$ f_{1}(\tilde{X}_{1},\ldots,\tilde{X}_{k}) = \tilde{D}_{k}\, |\det(\tilde{X}_{1})|^{\alpha_{1}-p}\cdots|\det(\tilde{X}_{k})|^{\alpha_{k}-p}\, |\det(I-\tilde{X}_{1}-\cdots-\tilde{X}_{k})|^{\alpha_{k+1}-p} $$
where $\tilde{X}_{1},\ldots,\tilde{X}_{k}$ are $p\times p$ Hermitian positive definite, that is, $\tilde{X}_{j}>O$, $j=1,\ldots,k$, such that $I-\tilde{X}_{j}>O$, $j=1,\ldots,k$, and $I-(\tilde{X}_{1}+\cdots+\tilde{X}_{k})>O$. The normalizing constant $\tilde{D}_{k}$ can be evaluated by integrating out the matrices one at a time, the individual integrals being evaluated by using a complex matrix-variate type-1 beta integral of the form
$$ \int_{O}^{I} |\det(\tilde{X})|^{\alpha-p}\,|\det(I-\tilde{X})|^{\beta-p}\, d\tilde{X} = \frac{\tilde{\Gamma}_{p}(\alpha)\,\tilde{\Gamma}_{p}(\beta)}{\tilde{\Gamma}_{p}(\alpha+\beta)},\quad \Re(\alpha)>p-1,\ \Re(\beta)>p-1 $$
where $\tilde{\Gamma}_{p}(\alpha)$ is given in (1). It can be shown that the normalizing constant is the following:
$$ \tilde{D}_{k} = \frac{\tilde{\Gamma}_{p}(\alpha_{1}+\cdots+\alpha_{k+1})}{\tilde{\Gamma}_{p}(\alpha_{1})\cdots\tilde{\Gamma}_{p}(\alpha_{k+1})} $$
for $\Re(\alpha_{j})>p-1$, $j=1,\ldots,k+1$. Since (10) is a probability measure, $f_{1}(\tilde{X}_{1},\ldots,\tilde{X}_{k})$ is non-negative for all $\tilde{X}_{j}$, $j=1,\ldots,k$, and the total integral is one. It is a Dirichlet measure associated with a Dirichlet density, and it is also a statistical density; hence, we can denote the averages of given functions as the expected values of those functions, denoted by $E(\cdot)$. Let us consider a few functions and take their averages over the complex matrix-variate Dirichlet measure in (8). Let
$$ \phi_{1}(\tilde{X}_{1},\ldots,\tilde{X}_{k}) = |\det(\tilde{X}_{1})|^{\gamma_{1}}\cdots|\det(\tilde{X}_{k})|^{\gamma_{k}}. $$
Then, the average of (11) over the measure in (8) is given by
$$ E[\phi_{1}] = \tilde{D}_{k}\int_{\tilde{X}_{1},\ldots,\tilde{X}_{k}} |\det(\tilde{X}_{1})|^{\alpha_{1}+\gamma_{1}-p}\cdots|\det(\tilde{X}_{k})|^{\alpha_{k}+\gamma_{k}-p}\, |\det(I-\tilde{X}_{1}-\cdots-\tilde{X}_{k})|^{\alpha_{k+1}-p}\, d\tilde{X}_{1}\wedge\cdots\wedge d\tilde{X}_{k}. $$
Note that the only change is that α j is changed to α j + γ j for j = 1 , , k ; hence, the result is available from the normalizing constant. That is,
$$ E[\phi_{1}] = \Big\{\prod_{j=1}^{k}\frac{\tilde{\Gamma}_{p}(\alpha_{j}+\gamma_{j})}{\tilde{\Gamma}_{p}(\alpha_{j})}\Big\}\, \frac{\tilde{\Gamma}_{p}(\alpha_{1}+\cdots+\alpha_{k+1})}{\tilde{\Gamma}_{p}(\alpha_{1}+\gamma_{1}+\cdots+\alpha_{k}+\gamma_{k}+\alpha_{k+1})}, $$
for $\Re(\alpha_{j}+\gamma_{j})>p-1$, $j=1,\ldots,k$, and $\Re(\alpha_{k+1})>p-1$. Let
$$ \phi_{2}(\tilde{X}_{1},\ldots,\tilde{X}_{k}) = |\det(I-\tilde{X}_{1}-\cdots-\tilde{X}_{k})|^{\delta}. $$
Then, in the integral for E [ ϕ 2 ] the only change is that the parameter α k + 1 is changed to α k + 1 + δ . Hence, the result is available from the normalizing constant D ˜ k . That is,
$$ E[\phi_{2}] = \frac{\tilde{\Gamma}_{p}(\alpha_{k+1}+\delta)}{\tilde{\Gamma}_{p}(\alpha_{k+1})}\, \frac{\tilde{\Gamma}_{p}(\alpha_{1}+\cdots+\alpha_{k+1})}{\tilde{\Gamma}_{p}(\alpha_{1}+\cdots+\alpha_{k+1}+\delta)} $$
for ( α k + 1 + δ ) > p 1 , ( α j ) > p 1 , j = 1 , , k . The structure in (14) is also the structure of the δ -th moment of the determinant of the matrix with a complex matrix-variate type-1 beta distribution. Hence, this ϕ 2 has an equivalent representation in terms of the determinant of a matrix with a complex matrix-variate type-1 beta distribution. Let
$$ \phi_{3}(\tilde{X}_{1},\ldots,\tilde{X}_{k}) = e^{\mathrm{tr}(\tilde{X}_{1})}. $$
Let us evaluate the Dirichlet average for k = 2 . Then
$$ E[\phi_{3}] = \tilde{D}_{2}\int_{\tilde{X}_{1},\tilde{X}_{2}} e^{\mathrm{tr}(\tilde{X}_{1})}\, |\det(\tilde{X}_{1})|^{\alpha_{1}-p}\,|\det(\tilde{X}_{2})|^{\alpha_{2}-p}\, |\det(I-\tilde{X}_{1}-\tilde{X}_{2})|^{\alpha_{3}-p}\, d\tilde{X}_{1}\wedge d\tilde{X}_{2}. $$
Take out I X ˜ 1 from | det ( I X ˜ 1 X ˜ 2 ) | and make the transformation
$$ \tilde{U}_{2} = (I-\tilde{X}_{1})^{-\frac{1}{2}}\, \tilde{X}_{2}\, (I-\tilde{X}_{1})^{-\frac{1}{2}}. $$
Then, from Lemma 2, $d\tilde{U}_{2} = |\det(I-\tilde{X}_{1})|^{-p}\, d\tilde{X}_{2}$. Now, $\tilde{U}_{2}$ can be integrated out by using the complex matrix-variate type-1 beta integral given in (9). That is,
$$ \int_{O<\tilde{U}_{2}<I} |\det(\tilde{U}_{2})|^{\alpha_{2}-p}\,|\det(I-\tilde{U}_{2})|^{\alpha_{3}-p}\, d\tilde{U}_{2} = \frac{\tilde{\Gamma}_{p}(\alpha_{2})\,\tilde{\Gamma}_{p}(\alpha_{3})}{\tilde{\Gamma}_{p}(\alpha_{2}+\alpha_{3})} $$
for ( α 2 ) > p 1 , ( α 3 ) > p 1 . The X ˜ 1 integral to be evaluated is the following:
$$ \int_{\tilde{X}_{1}} e^{\mathrm{tr}(\tilde{X}_{1})}\, |\det(\tilde{X}_{1})|^{\alpha_{1}-p}\,|\det(I-\tilde{X}_{1})|^{\alpha_{2}+\alpha_{3}-p}\, d\tilde{X}_{1}. $$
In order to evaluate the integral in (17), we can expand the exponential part by using zonal polynomials for complex argument; see [15,20]. We need a few notations and results from zonal polynomial expansions of determinants. The generalized Pochhammer symbol is the following:
$$ [a]_{M} = \prod_{j=1}^{p} (a-j+1)_{m_{j}} = \frac{\tilde{\Gamma}_{p}(a,M)}{\tilde{\Gamma}_{p}(a)},\qquad \tilde{\Gamma}_{p}(a,M) = \tilde{\Gamma}_{p}(a)\,[a]_{M} $$
where the usual Pochhammer symbol is
$$ (a)_{m} = a(a+1)\cdots(a+m-1),\ a\neq 0,\ (a)_{0} = 1, $$
and $M$ represents the partition $M=(m_{1},\ldots,m_{p})$, $m_{1}\ge m_{2}\ge\cdots\ge m_{p}$, $m_{1}+\cdots+m_{p}=m$. The zonal polynomial expansion for the exponential function is the following:
$$ e^{\mathrm{tr}(\tilde{X})} = \sum_{m=0}^{\infty}\sum_{M} \frac{\tilde{C}_{M}(\tilde{X})}{m!} $$
where $\tilde{C}_{M}(\tilde{X})$ is the zonal polynomial of order $m$ in the complex matrix argument $\tilde{X}$; see (6.1.18) of [15]. One result on zonal polynomials that we require is stated here as a lemma.
Lemma 6. 
$$ \int_{O<\tilde{Z}<I} |\det(\tilde{Z})|^{\alpha-p}\,|\det(I-\tilde{Z})|^{\beta-p}\, \tilde{C}_{M}(\tilde{Z}\tilde{A})\, d\tilde{Z} = \frac{\tilde{\Gamma}_{p}(\alpha,M)\,\tilde{\Gamma}_{p}(\beta)}{\tilde{\Gamma}_{p}(\alpha+\beta,M)}\, \tilde{C}_{M}(\tilde{A}) = \frac{\tilde{\Gamma}_{p}(\alpha)\,\tilde{\Gamma}_{p}(\beta)}{\tilde{\Gamma}_{p}(\alpha+\beta)}\, \frac{[\alpha]_{M}}{[\alpha+\beta]_{M}}\, \tilde{C}_{M}(\tilde{A}), $$
see also (6.1.21) of [15], for ( α ) > p 1 , ( β ) > p 1 , A ˜ > O . By using (21), we can evaluate the X ˜ 1 -integral in E [ ϕ 3 ] . That is,
$$ \int_{O<\tilde{X}_{1}<I} e^{\mathrm{tr}(A\tilde{X}_{1})}\,|\det(\tilde{X}_{1})|^{\alpha_{1}-p}\,|\det(I-\tilde{X}_{1})|^{\alpha_{2}+\alpha_{3}-p}\, d\tilde{X}_{1} = \sum_{m=0}^{\infty}\sum_{M} \frac{1}{m!}\int_{O<\tilde{X}_{1}<I} \tilde{C}_{M}(\tilde{A}\tilde{X}_{1})\,|\det(\tilde{X}_{1})|^{\alpha_{1}-p}\,|\det(I-\tilde{X}_{1})|^{\alpha_{2}+\alpha_{3}-p}\, d\tilde{X}_{1} = \sum_{m=0}^{\infty}\sum_{M} \frac{\tilde{C}_{M}(\tilde{A})}{m!}\, \frac{\tilde{\Gamma}_{p}(\alpha_{1},M)\,\tilde{\Gamma}_{p}(\alpha_{2}+\alpha_{3})}{\tilde{\Gamma}_{p}(\alpha_{1}+\alpha_{2}+\alpha_{3},M)}. $$
Now, combining the result of the $\tilde{X}_{2}$-integral, $\tilde{D}_{2}$, and the above result, all the gamma products cancel, and the final result is the following:
$$ E[\phi_{3}] = \sum_{m=0}^{\infty}\sum_{M} \frac{\tilde{C}_{M}(\tilde{A})}{m!}\, \frac{[\alpha_{1}]_{M}}{[\alpha_{1}+\alpha_{2}+\alpha_{3}]_{M}} = {}_{1}F_{1}(\alpha_{1};\ \alpha_{1}+\alpha_{2}+\alpha_{3};\ \tilde{A}) $$
for ( α j ) > p 1 , j = 1 , 2 , 3 and 1 F 1 is a confluent hypergeometric function of complex matrix argument A ˜ .
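For $p = 1$ and $k = 2$, the result above reduces to a classical scalar identity: the marginal of $x_1$ under the scalar type-1 Dirichlet is Beta$(\alpha_1, \alpha_2+\alpha_3)$, and its exponential average is the scalar ${}_1F_1$ (the identity holds for either sign of the argument; a negative argument is used below). A quick numerical sketch assuming SciPy, with arbitrary parameter values:

```python
import numpy as np
from scipy import integrate, special, stats

# p = 1, k = 2 scalar case: with (x1, x2) type-1 Dirichlet(α1, α2; α3),
# the marginal of x1 is Beta(α1, α2+α3), and
# E[exp(-a*x1)] = 1F1(α1; α1+α2+α3; -a).
a1, a2, a3, a = 1.5, 2.0, 2.5, 0.7
pdf = stats.beta(a1, a2 + a3).pdf

lhs, _ = integrate.quad(lambda x: np.exp(-a * x) * pdf(x), 0.0, 1.0)
rhs = special.hyp1f1(a1, a1 + a2 + a3, -a)

assert abs(lhs - rhs) < 1e-10
print(round(lhs, 6), round(rhs, 6))
```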

3. Dirichlet Averages in Complex Matrix-Variate Type-2 Dirichlet Measure

Consider the type-2 Dirichlet measure
$$ f_{2}(\tilde{X}_{1},\ldots,\tilde{X}_{k}) = \tilde{D}_{k}\, |\det(\tilde{X}_{1})|^{\alpha_{1}-p}\cdots|\det(\tilde{X}_{k})|^{\alpha_{k}-p}\, |\det(I+\tilde{X}_{1}+\cdots+\tilde{X}_{k})|^{-(\alpha_{1}+\cdots+\alpha_{k+1})} $$
for ( α j ) > p 1 , j = 1 , , k + 1 and it can be seen that the normalizing constant is the same as that in the type-1 Dirichlet measure. Let us evaluate some Dirichlet averages in the measure (23). Let
$$ \phi_{4}(\tilde{X}_{1},\ldots,\tilde{X}_{k}) = |\det(\tilde{X}_{1})|^{\gamma_{1}}\cdots|\det(\tilde{X}_{k})|^{\gamma_{k}}. $$
Then, when the average is taken, $\alpha_{j}$ changes to $\alpha_{j}+\gamma_{j}$, $j=1,\ldots,k$; hence, one should be able to find the value from the normalizing constant by adjusting for $\alpha_{k+1}$. Write $(\alpha_{1}+\cdots+\alpha_{k+1}) = (\alpha_{1}+\gamma_{1}+\cdots+\alpha_{k}+\gamma_{k})+(\alpha_{k+1}-\gamma_{1}-\cdots-\gamma_{k})$. That is, replace $\alpha_{j}$ by $\alpha_{j}+\gamma_{j}$, $j=1,\ldots,k$, and replace $\alpha_{k+1}$ by $\alpha_{k+1}-\gamma_{1}-\cdots-\gamma_{k}$ to obtain the result from the normalizing constant. Therefore,
$$ E[\phi_{4}] = \Big\{\prod_{j=1}^{k}\frac{\tilde{\Gamma}_{p}(\alpha_{j}+\gamma_{j})}{\tilde{\Gamma}_{p}(\alpha_{j})}\Big\}\, \frac{\tilde{\Gamma}_{p}(\alpha_{k+1}-\gamma_{1}-\cdots-\gamma_{k})}{\tilde{\Gamma}_{p}(\alpha_{k+1})} $$
for $\Re(\alpha_{j}+\gamma_{j})>p-1$, $j=1,\ldots,k$, $\Re(\alpha_{k+1}-\gamma_{1}-\cdots-\gamma_{k})>p-1$, and $\Re(\alpha_{k+1})>p-1$. Thus, only a few moments will exist when interpreting $E[\phi_{4}]$ as the product moment of the determinants of $\tilde{X}_{1},\ldots,\tilde{X}_{k}$. Let
$$ \phi_{5}(\tilde{X}_{1},\ldots,\tilde{X}_{k}) = |\det(I+\tilde{X}_{1}+\cdots+\tilde{X}_{k})|^{-\delta}. $$
Then, when the average is taken the only change in the integral is that α k + 1 is changed to α k + 1 + δ ; hence, from the normalizing constant the result is the following:
$$ E[\phi_{5}] = \frac{\tilde{\Gamma}_{p}(\alpha_{k+1}+\delta)}{\tilde{\Gamma}_{p}(\alpha_{k+1})}\, \frac{\tilde{\Gamma}_{p}(\alpha_{1}+\cdots+\alpha_{k+1})}{\tilde{\Gamma}_{p}(\alpha_{1}+\cdots+\alpha_{k+1}+\delta)}, $$
for $\Re(\alpha_{k+1}+\delta)>p-1$; the other conditions on the parameters for $\tilde{D}_{k}$ remain the same. Observe that if $\Re(\delta)>0$, then the structure in (27) is that of the $\delta$-th moment of the determinant of a complex matrix-variate type-1 beta matrix. Thus, this type-2 form gives a type-1 form result. Let
$$ \phi_{6}(\tilde{X}_{1},\tilde{X}_{2}) = e^{-\mathrm{tr}(A\tilde{X}_{1})}\, |\det(I+\tilde{X}_{1})|^{\alpha_{1}+\alpha_{3}}. $$
Then, the Dirichlet average of ϕ 6 in the complex matrix-variate type-2 Dirichlet measure in (23) for k = 2 is the following:
$$ E[\phi_{6}] = \tilde{D}_{2}\int_{\tilde{X}_{1},\tilde{X}_{2}} e^{-\mathrm{tr}(A\tilde{X}_{1})}\,|\det(I+\tilde{X}_{1})|^{\alpha_{1}+\alpha_{3}}\, |\det(\tilde{X}_{1})|^{\alpha_{1}-p}\,|\det(\tilde{X}_{2})|^{\alpha_{2}-p}\, |\det(I+\tilde{X}_{1}+\tilde{X}_{2})|^{-(\alpha_{1}+\alpha_{2}+\alpha_{3})}\, d\tilde{X}_{1}\wedge d\tilde{X}_{2}. $$
Take out ( I + X ˜ 1 ) from I + X ˜ 1 + X ˜ 2 and make the transformation
$$ \tilde{U}_{2} = (I+\tilde{X}_{1})^{-\frac{1}{2}}\,\tilde{X}_{2}\,(I+\tilde{X}_{1})^{-\frac{1}{2}} \ \Rightarrow\ d\tilde{U}_{2} = |\det(I+\tilde{X}_{1})|^{-p}\, d\tilde{X}_{2}. $$
The U 2 ˜ -integral gives
$$ \int_{\tilde{U}_{2}>O} |\det(\tilde{U}_{2})|^{\alpha_{2}-p}\,|\det(I+\tilde{U}_{2})|^{-(\alpha_{1}+\alpha_{2}+\alpha_{3})}\, d\tilde{U}_{2} = \frac{\tilde{\Gamma}_{p}(\alpha_{2})\,\tilde{\Gamma}_{p}(\alpha_{1}+\alpha_{3})}{\tilde{\Gamma}_{p}(\alpha_{1}+\alpha_{2}+\alpha_{3})}. $$
Observe that the exponent of $|\det(I+\tilde{X}_{1})|$ becomes zero, so the factor containing $|\det(I+\tilde{X}_{1})|$ disappears. Then, the $\tilde{X}_{1}$-integral is
$$ \int_{\tilde{X}_{1}>O} |\det(\tilde{X}_{1})|^{\alpha_{1}-p}\, e^{-\mathrm{tr}(A\tilde{X}_{1})}\, d\tilde{X}_{1} = \tilde{\Gamma}_{p}(\alpha_{1})\, |\det(A)|^{-\alpha_{1}}. $$
The results in (29) and (30), together with $\tilde{D}_{2}$, give the final result as follows:
$$ E[\phi_{6}] = \frac{\tilde{\Gamma}_{p}(\alpha_{1}+\alpha_{3})}{\tilde{\Gamma}_{p}(\alpha_{3})}\, |\det(A)|^{-\alpha_{1}} $$
where $A>O$; the original conditions on the parameters remain the same, and no further conditions are needed. Note that if $\phi_{6}$ did not have the factor $|\det(I+\tilde{X}_{1})|^{\alpha_{1}+\alpha_{3}}$, a factor containing $|\det(I+\tilde{X}_{1})|$ would have remained, and the $\tilde{X}_{1}$-integral would then be expressible in terms of a Whittaker function of matrix argument; see [15].
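For $p = 1$, $E[\phi_6]$ can be verified by Monte Carlo, sampling the scalar type-2 Dirichlet via ratios of independent gamma variables (a standard construction; the parameter values below are arbitrary):

```python
import numpy as np
from math import gamma

# p = 1, k = 2 scalar check of E[φ6] = Γ(α1+α3)/Γ(α3) * a**(-α1).
# The scalar type-2 (inverted) Dirichlet is sampled via independent gammas:
# x_j = g_j / g_3, g_j ~ Gamma(α_j), matches the density in (23) for p = 1.
a1, a2, a3, a = 1.2, 1.5, 3.0, 2.0
rng = np.random.default_rng(1)
n = 1_000_000
g1, g2, g3 = (rng.gamma(s, size=n) for s in (a1, a2, a3))
x1 = g1 / g3

mc = np.mean(np.exp(-a * x1) * (1.0 + x1) ** (a1 + a3))
exact = gamma(a1 + a3) / gamma(a3) * a ** (-a1)

assert abs(mc - exact) / exact < 0.02
print(round(mc, 3), round(exact, 3))
```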

4. Dirichlet Averages in Complex Rectangular Matrix-Variate Dirichlet Measure

Let $B_{j}$ be an $n_{j}\times n_{j}$ Hermitian positive definite constant matrix and let $B_{j}^{\frac{1}{2}}$ denote its Hermitian positive definite square root. Let $\tilde{X}_{j}$ be an $n_{j}\times p$, $n_{j}\ge p$, matrix of full rank $p$, so that $\tilde{X}_{j}^{*}\tilde{X}_{j}=\tilde{S}_{j}>O$; that is, $\tilde{S}_{j}$ is Hermitian positive definite. Observe that for $p=1$, $\tilde{X}_{j}^{*}B_{j}\tilde{X}_{j}$ is a positive definite Hermitian form. Hence, the results to follow will also cover results on Hermitian forms. Consider the model
$$ f_{3}(\tilde{X}_{1},\ldots,\tilde{X}_{k}) = \tilde{G}_{k}\, |\det(\tilde{X}_{1}^{*}B_{1}\tilde{X}_{1})|^{\alpha_{1}}\cdots|\det(\tilde{X}_{k}^{*}B_{k}\tilde{X}_{k})|^{\alpha_{k}}\, |\det(I-\tilde{X}_{1}^{*}B_{1}\tilde{X}_{1}-\cdots-\tilde{X}_{k}^{*}B_{k}\tilde{X}_{k})|^{\alpha_{k+1}-p} $$
where $\tilde{G}_{k}$ is the normalizing constant and $O<\tilde{X}_{j}^{*}B_{j}\tilde{X}_{j}<I$, $j=1,\ldots,k$, with $O<\tilde{X}_{1}^{*}B_{1}\tilde{X}_{1}+\cdots+\tilde{X}_{k}^{*}B_{k}\tilde{X}_{k}<I$. The normalizing constant is evaluated by the following procedure. Let $\tilde{Y}_{j}=B_{j}^{\frac{1}{2}}\tilde{X}_{j} \Rightarrow d\tilde{Y}_{j}=|\det(B_{j})|^{p}\,d\tilde{X}_{j}$ from Lemma 1. Let $\tilde{Y}_{j}^{*}\tilde{Y}_{j}=\tilde{S}_{j}$. Then, from Lemma 5, we have
$$ d\tilde{Y}_{j} = \frac{\pi^{n_{j}p}}{\tilde{\Gamma}_{p}(n_{j})}\, |\det(\tilde{S}_{j})|^{n_{j}-p}\, d\tilde{S}_{j}. $$
Then,
$$ d\tilde{X}_{1}\wedge\cdots\wedge d\tilde{X}_{k} = \Big\{\prod_{j=1}^{k} \frac{\pi^{n_{j}p}}{\tilde{\Gamma}_{p}(n_{j})}\, |\det(B_{j})|^{-p}\, |\det(\tilde{S}_{j})|^{n_{j}-p}\Big\}\, d\tilde{S}_{1}\wedge\cdots\wedge d\tilde{S}_{k}. $$
Since the total integral is 1, we have
$$ 1 = \int_{\tilde{X}_{1},\ldots,\tilde{X}_{k}} f_{3}(\tilde{X}_{1},\ldots,\tilde{X}_{k})\, d\tilde{X}_{1}\wedge\cdots\wedge d\tilde{X}_{k} = \tilde{G}_{k}\Big\{\prod_{j=1}^{k}\frac{\pi^{n_{j}p}}{\tilde{\Gamma}_{p}(n_{j})\,|\det(B_{j})|^{p}}\Big\} \int_{\tilde{S}_{1},\ldots,\tilde{S}_{k}} |\det(\tilde{S}_{1})|^{\alpha_{1}+n_{1}-p}\cdots|\det(\tilde{S}_{k})|^{\alpha_{k}+n_{k}-p}\, |\det(I-\tilde{S}_{1}-\cdots-\tilde{S}_{k})|^{\alpha_{k+1}-p}\, d\tilde{S}_{1}\wedge\cdots\wedge d\tilde{S}_{k}. $$
Now, evaluating the type-1 Dirichlet integrals over the S ˜ j ’s, one obtains the following result:
$$ \tilde{G}_{k} = \Big\{\prod_{j=1}^{k} |\det(B_{j})|^{p}\, \frac{\tilde{\Gamma}_{p}(n_{j})}{\pi^{n_{j}p}}\, \frac{1}{\tilde{\Gamma}_{p}(\alpha_{j}+n_{j})}\Big\}\, \frac{\tilde{\Gamma}_{p}(\alpha_{1}+\cdots+\alpha_{k+1}+n_{1}+\cdots+n_{k})}{\tilde{\Gamma}_{p}(\alpha_{k+1})} $$
for B j > O , ( α j + n j ) > p 1 , j = 1 , , k , ( α k + 1 ) > p 1 . Thus, (32) with (35) defines a rectangular complex matrix-variate type-1 Dirichlet measure. There is a corresponding type-2 Dirichlet measure, given by the following:
$$ f_{4}(\tilde{X}_{1},\ldots,\tilde{X}_{k}) = \tilde{G}_{k}\, |\det(\tilde{X}_{1}^{*}B_{1}\tilde{X}_{1})|^{\alpha_{1}}\cdots|\det(\tilde{X}_{k}^{*}B_{k}\tilde{X}_{k})|^{\alpha_{k}}\, |\det(I+\tilde{X}_{1}^{*}B_{1}\tilde{X}_{1}+\cdots+\tilde{X}_{k}^{*}B_{k}\tilde{X}_{k})|^{-(\alpha_{1}+\cdots+\alpha_{k+1}+n_{1}+\cdots+n_{k})} $$
for B j > O , ( α j + n j ) > p 1 , j = 1 , , k , ( α k + 1 ) > p 1 and G ˜ k is the same as the one appearing in (35). Let us compute the Dirichlet averages of some functions in the type-2 rectangular complex matrix-variate Dirichlet measure in (36). Let
$$ \phi_{7}(\tilde{X}_{1},\ldots,\tilde{X}_{k}) = |\det(\tilde{X}_{1}^{*}B_{1}\tilde{X}_{1})|^{\gamma_{1}}\cdots|\det(\tilde{X}_{k}^{*}B_{k}\tilde{X}_{k})|^{\gamma_{k}}. $$
Then, when we take the expected value of ϕ 7 in (36) the only change is that α j changes to α j + γ j , j = 1 , , k ; hence, the final result is available from the normalizing constant. Therefore
$$ E[\phi_{7}] = \Big\{\prod_{j=1}^{k}\frac{\tilde{\Gamma}_{p}(\alpha_{j}+n_{j}+\gamma_{j})}{\tilde{\Gamma}_{p}(\alpha_{j}+n_{j})}\Big\}\, \frac{\tilde{\Gamma}_{p}(\alpha_{k+1}-\gamma_{1}-\cdots-\gamma_{k})}{\tilde{\Gamma}_{p}(\alpha_{k+1})} $$
for $\Re(\alpha_{j}+n_{j}+\gamma_{j})>p-1$, $j=1,\ldots,k$, $\Re(\alpha_{k+1}-\gamma_{1}-\cdots-\gamma_{k})>p-1$, and $\Re(\alpha_{k+1})>p-1$. Let
$$ \phi_{8}(\tilde{X}_{1},\ldots,\tilde{X}_{k}) = |\det(I+\tilde{X}_{1}^{*}B_{1}\tilde{X}_{1}+\cdots+\tilde{X}_{k}^{*}B_{k}\tilde{X}_{k})|^{-\delta}. $$
Then, the only change in the integral is that $\alpha_{k+1}$ goes to $\alpha_{k+1}+\delta$; hence, the average is available from the normalizing constant. That is,
$$ E[\phi_{8}] = \frac{\tilde{\Gamma}_{p}(\alpha_{k+1}+\delta)}{\tilde{\Gamma}_{p}(\alpha_{k+1})}\, \frac{\tilde{\Gamma}_{p}(\alpha_{1}+\cdots+\alpha_{k+1}+n_{1}+\cdots+n_{k})}{\tilde{\Gamma}_{p}(\alpha_{1}+\cdots+\alpha_{k+1}+n_{1}+\cdots+n_{k}+\delta)} $$
for ( α j + n j ) > p 1 , j = 1 , , k , ( α k + 1 + δ ) > p 1 , ( α k + 1 ) > p 1 .
The case p = 1 in the complex rectangular matrix-variate type-1 Dirichlet measure is very interesting. We have a set of Hermitian positive definite quadratic forms here having a joint density of the following form:
$$ f_{5}(\tilde{X}_{1},\ldots,\tilde{X}_{k}) = \tilde{G}_{k}\, [\tilde{X}_{1}^{*}B_{1}\tilde{X}_{1}]^{\alpha_{1}}\cdots[\tilde{X}_{k}^{*}B_{k}\tilde{X}_{k}]^{\alpha_{k}}\, [1-\tilde{X}_{1}^{*}B_{1}\tilde{X}_{1}-\cdots-\tilde{X}_{k}^{*}B_{k}\tilde{X}_{k}]^{\alpha_{k+1}-1} $$
where $B_{j}>O$ and $\tilde{X}_{j}^{*}B_{j}\tilde{X}_{j}$ is a scalar quantity, $j=1,\ldots,k$. Consider the same types of transformations as before: $\tilde{Y}_{j}=B_{j}^{\frac{1}{2}}\tilde{X}_{j}$. Then $\tilde{Y}_{j}^{*}\tilde{Y}_{j}=|\tilde{y}_{j1}|^{2}+\cdots+|\tilde{y}_{jn_{j}}|^{2}$, the sum of the squares of the absolute values of the $\tilde{y}_{jr}$'s, where $\tilde{Y}_{j}^{*}=(\tilde{y}_{j1}^{*},\ldots,\tilde{y}_{jn_{j}}^{*})$. This is an isotropic point in the $2n_{j}$-dimensional Euclidean space. From here, one can establish various connections to geometrical probability problems; see [19]. Also, (41) is associated with the theory of generalized Hermitian forms in pathway models; see [21]. Let us evaluate the $h$-th moment of
$$ \phi_{9}(\tilde{X}_{1},\ldots,\tilde{X}_{k}) = [\tilde{X}_{1}^{*}B_{1}\tilde{X}_{1}+\cdots+\tilde{X}_{k}^{*}B_{k}\tilde{X}_{k}]^{h} $$
for $p=1$. For $p>1$, we have seen that this moment is not available directly, but moments of $|\det(I-\tilde{X}_{1}^{*}B_{1}\tilde{X}_{1}-\cdots-\tilde{X}_{k}^{*}B_{k}\tilde{X}_{k})|$ were available. For $p=1$, one can obtain the $h$-th moment of both for an arbitrary $h$. By computing the $h$-th moment of $[1-\tilde{X}_{1}^{*}B_{1}\tilde{X}_{1}-\cdots-\tilde{X}_{k}^{*}B_{k}\tilde{X}_{k}]$ for $p=1$, we note that for arbitrary $h$, this quantity and its complementary part $[\tilde{X}_{1}^{*}B_{1}\tilde{X}_{1}+\cdots+\tilde{X}_{k}^{*}B_{k}\tilde{X}_{k}]$ are both scalar variable type-1 beta distributed, with parameters $(\alpha_{k+1}, \sum_{j=1}^{k}(\alpha_{j}+n_{j}))$ and $(\sum_{j=1}^{k}(\alpha_{j}+n_{j}), \alpha_{k+1})$, respectively. Then,
$$ E[\phi_{9}] = \frac{\tilde{\Gamma}_{p}\big(\sum_{j=1}^{k}(\alpha_{j}+n_{j})+h\big)}{\tilde{\Gamma}_{p}\big(\sum_{j=1}^{k}(\alpha_{j}+n_{j})\big)}\, \frac{\tilde{\Gamma}_{p}\big(\sum_{j=1}^{k}(\alpha_{j}+n_{j})+\alpha_{k+1}\big)}{\tilde{\Gamma}_{p}\big(\sum_{j=1}^{k}(\alpha_{j}+n_{j})+\alpha_{k+1}+h\big)} $$
for ( α j ) > p 1 , j = 1 , , k + 1 , ( j = 1 k ( α j + n j ) + h ) > p 1 . Consider ϕ 9 in the complex matrix-variate type-2 Dirichlet measure for p = 1 . Then, the h-th moment will reduce to the following:
$$ E[\phi_{9}] = \frac{\tilde{\Gamma}_{p}\big(\sum_{j=1}^{k}(\alpha_{j}+n_{j})+h\big)}{\tilde{\Gamma}_{p}\big(\sum_{j=1}^{k}(\alpha_{j}+n_{j})\big)}\, \frac{\tilde{\Gamma}_{p}(\alpha_{k+1}-h)}{\tilde{\Gamma}_{p}(\alpha_{k+1})} $$
for ( α k + 1 h ) > p 1 , ( α j ) > p 1 , j = 1 , , k + 1 , ( j = 1 k ( α j + n j ) + h ) > p 1 .
Many such results can be obtained for the type-1 and type-2 Dirichlet measures in Hermitian positive definite Dirichlet measures or in rectangular matrix-variate Dirichlet measures.
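The $p = 1$ statements above amount to scalar beta-moment identities, which can be checked directly (the parameter values are illustrative, with $s = \sum_{j}(\alpha_j + n_j)$):

```python
from math import gamma
from scipy import stats

# p = 1 check: the sum of the Hermitian forms is type-1 beta distributed
# with parameters (s, α_{k+1}), s = Σ_j (α_j + n_j), so
# E[(·)^h] = [Γ(s+h)/Γ(s)] * [Γ(s+α_{k+1})/Γ(s+α_{k+1}+h)].
s, ak1 = 4.5, 2.0                      # illustrative parameter values
errs = []
for h in (1, 2, 3):
    formula = (gamma(s + h) / gamma(s)) * (gamma(s + ak1) / gamma(s + ak1 + h))
    errs.append(abs(formula - stats.beta(s, ak1).moment(h)))

assert max(errs) < 1e-10
print("beta-moment identity verified for h = 1, 2, 3")
```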

5. A Connection to Tsallis Statistics of Non-Extensive Statistical Mechanics

Ref. [22] introduced an entropy measure and, by optimizing this entropy over an escort density under the constraint that the first moment in the escort density is prefixed (corresponding to a physical law of conservation of energy), obtained the famous Tsallis statistics of non-extensive statistical mechanics. Tsallis entropy is a variant of Havrda–Charvát entropy; see [23]. Havrda–Charvát entropy is an $\alpha$-generalized Shannon entropy; Shannon entropy for a discrete distribution is the following:
$$ S(f) = -C\sum_{j=1}^{k} p_{j}\ln p_{j},\quad p_{j}>0,\ j=1,\ldots,k,\ p_{1}+\cdots+p_{k}=1 $$
and its continuous version is the following:
$$ S(f) = -C\int_{x} f(x)\ln f(x)\, dx,\quad f(x)\ge 0 \ \text{for all}\ x,\ \int_{x} f(x)\, dx = 1 $$
where C is a constant. A generalized entropy, introduced by Mathai, is a variant of Havrda–Charvát entropy and Tsallis entropy in the real scalar variable case, but Mathai’s entropy is set in a very general framework. It is the following:
$$ M_{\alpha}(f) = \frac{\int_{X} [f(X)]^{1+\frac{a-\alpha}{\eta}}\, dX - 1}{\alpha - a},\quad \alpha\neq a,\ \eta>0 $$
where $a$ is a fixed anchoring point, $\alpha$ is the parameter of interest, $\eta>0$ is a fixed scaling factor or unit of measurement, and $f(X)$ is a real-valued scalar function of $X$ such that $f(X)\ge 0$ for all $X$ and $\int_{X}f(X)\,dX=1$; that is, $f(X)$ is a statistical density, where $X$ can be a scalar, a vector, a matrix, or a collection of matrices in the real or complex domain, and $dX$ is the wedge product of the differentials of all distinct real scalar variables in $X$. For example, if $X'=[x_{1},\ldots,x_{p}]$, where the $x_{j}$, $j=1,\ldots,p$, are distinct real scalar variables and a prime denotes the transpose, then $dX=dx_{1}\wedge\cdots\wedge dx_{p}$. For two real scalar variables $x$ and $y$, the wedge product of differentials is defined as $dx\wedge dy=-dy\wedge dx$, so that $dx\wedge dx=0$ and $dy\wedge dy=0$. If $X=(x_{ij})$ is a $p\times q$ matrix of distinct real scalar variables, then $dX=\wedge_{i=1}^{p}\wedge_{j=1}^{q}dx_{ij}$. If $\tilde{X}=X_{1}+iX_{2}$, $i=\sqrt{-1}$, with $X_{1}$, $X_{2}$ real, then $d\tilde{X}=dX_{1}\wedge dX_{2}$. If $X=[X_{1},\ldots,X_{k}]$ is a collection of matrices in the real domain, then $dX=dX_{1}\wedge\cdots\wedge dX_{k}$; if $\tilde{X}=[\tilde{X}_{1},\ldots,\tilde{X}_{k}]$, then $d\tilde{X}=d\tilde{X}_{1}\wedge\cdots\wedge d\tilde{X}_{k}$. Thus, (47) is based on the expected value of $[f(X)]^{\frac{a-\alpha}{\eta}}$, where the deviation of $\alpha$ from the anchoring point $a$ is measured in $\eta$ units.
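For a real scalar density with $\eta = 1$ and $C = 1$, the limit of (47) as $\alpha \to a$ can be checked numerically against the Shannon entropy in (46) (the Beta(2, 3) density is an arbitrary illustrative choice; SciPy is assumed available):

```python
import numpy as np
from scipy import integrate, stats

# Scalar illustration: with η = 1,
# M_α(f) = (∫ f^{1+(a-α)/η} dx - 1)/(α - a)  ->  -∫ f ln f dx  as α -> a.
f = stats.beta(2, 3).pdf
a_anchor, eta = 1.0, 1.0

def mathai(alpha):
    integral, _ = integrate.quad(
        lambda x: f(x) ** (1.0 + (a_anchor - alpha) / eta), 0.0, 1.0)
    return (integral - 1.0) / (alpha - a_anchor)

# Shannon (differential) entropy of the same density, computed directly.
shannon, _ = integrate.quad(
    lambda x: -f(x) * np.log(f(x)) if f(x) > 0 else 0.0, 0.0, 1.0)

assert abs(mathai(a_anchor + 1e-5) - shannon) < 1e-3
print(round(mathai(a_anchor + 1e-5), 5), round(shannon, 5))
```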
When $\alpha\to a$ in the real scalar case, we can see that (47) goes to the Shannon entropy in (46). But (47) is set up in a very general framework. Let us consider (47) when $X$ is a $p\times 1$ vector of distinct real scalar positive variables, $x_{j}>0$, $j=1,\ldots,p$, with $x_{1}+\cdots+x_{p}<1$, so that $(x_{1},\ldots,x_{p})$ lies in the open unit simplex. Let us optimize (47) under two product moment type constraints. Let
$$ A = E\Big[\big(x_{1}^{\alpha_{1}-1}\cdots x_{p}^{\alpha_{p}-1}\big)^{\frac{a-\alpha}{\eta}}\Big] \quad\text{and}\quad B = E\Big[\big(x_{1}^{\alpha_{1}-1}\cdots x_{p}^{\alpha_{p}-1}\big)^{\frac{a-\alpha}{\eta}}\Big(\sum_{j=1}^{p} x_{j}\Big)\Big] $$
for $\Re(\alpha_{j})>0$, $j=1,\ldots,p$, where $\Re(\cdot)$ denotes the real part of $(\cdot)$. Let the constraints be that $A$ and $B$ are prefixed. If we use the calculus of variations to optimize (47) under these constraints, then the Euler equation is the following, where $\lambda_{1}$ and $\lambda_{2}$ are Lagrange multipliers:
$$ \frac{\partial}{\partial f}\Big\{ f^{1+\frac{a-\alpha}{\eta}} - \lambda_{1}\big(x_{1}^{\alpha_{1}-1}\cdots x_{p}^{\alpha_{p}-1}\big)^{\frac{a-\alpha}{\eta}} f - \lambda_{2}\big(x_{1}^{\alpha_{1}-1}\cdots x_{p}^{\alpha_{p}-1}\big)^{\frac{a-\alpha}{\eta}}\Big(\sum_{j=1}^{p} x_{j}\Big) f \Big\} = 0 $$
$$ \Rightarrow\ \Big(1+\frac{a-\alpha}{\eta}\Big) f^{\frac{a-\alpha}{\eta}} = \lambda_{1}\big(x_{1}^{\alpha_{1}-1}\cdots x_{p}^{\alpha_{p}-1}\big)^{\frac{a-\alpha}{\eta}} + \lambda_{2}\big(x_{1}^{\alpha_{1}-1}\cdots x_{p}^{\alpha_{p}-1}\big)^{\frac{a-\alpha}{\eta}}\Big(\sum_{j=1}^{p} x_{j}\Big) \ \Rightarrow\ f = \lambda_{3}\, x_{1}^{\alpha_{1}-1}\cdots x_{p}^{\alpha_{p}-1}\Big[1+\lambda_{4}\sum_{j=1}^{p} x_{j}\Big]^{\frac{\eta}{a-\alpha}} $$
for some $\lambda_{3}$ and $\lambda_{4}$. Let $\alpha<a$. Then, take $\lambda_{4}=-b(a-\alpha)$, $b>0$, so that the right side of the above equation for $f$ can form a density, with $\lambda_{3}$ being the normalizing constant. If $\lambda_{4}=b(a-\alpha)$ with $b>0$, $\alpha<a$, then the right side of $f$ would be a positive exponential function and would not produce a density. Then,
$$ f = \lambda_{3}\, x_{1}^{\alpha_{1}-1}\cdots x_{p}^{\alpha_{p}-1}\big[1-b(a-\alpha)(x_{1}+\cdots+x_{p})\big]^{\frac{\eta}{a-\alpha}},\quad \eta>0,\ b>0,\ \alpha<a $$
is Mathai's pathway form of a real scalar type-1 Dirichlet density. When $\alpha>a$, (48) switches into a real scalar type-2 Dirichlet density with the corresponding normalizing constant.
Note that for q = 1 , a q × q Hermitian positive definite matrix is a real scalar positive variable. Hence, (48) holds in the real and complex cases for q = 1 of q × q real positive definite or Hermitian positive definite matrices X 1 , , X p or X ˜ 1 , , X ˜ p . The above is an example of the connection of type-1 and type-2 Dirichlet models to Tsallis entropy.
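The pathway behaviour can be seen numerically in the scalar factor of (48): as $\alpha \to a$ from either side, $[1 - b(a-\alpha)x]^{\eta/(a-\alpha)}$ approaches $e^{-b\eta x}$ ($b$, $\eta$, and the grid of $x$ values below are illustrative choices):

```python
import numpy as np

# Scalar pathway factor from (48): [1 - b(a-α)x]^{η/(a-α)} -> exp(-bηx)
# as α -> a; α < a gives the type-1 side, α > a the type-2 side.
b, eta, a = 1.0, 2.0, 1.0
x = np.linspace(0.0, 0.9, 10)

def pathway(alpha):
    d = a - alpha
    return (1.0 - b * d * x) ** (eta / d)

limit = np.exp(-b * eta * x)
assert np.allclose(pathway(a - 1e-6), limit, atol=1e-4)   # type-1 side
assert np.allclose(pathway(a + 1e-6), limit, atol=1e-4)   # type-2 side
print("pathway factor converges to exp(-b*eta*x) as alpha -> a")
```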

6. Applications

Dirichlet averages and their diverse approaches are used in the theory of special functions, fractional calculus, mechanics, biology, probability, and stochastic processes. In this section, the main areas in which Dirichlet averages find valuable applications are presented:

6.1. Special Functions

Dirichlet averages were introduced by Carlson in his 1977 work. Carlson [10,11,12,13] observed that this straightforward idea of averaging generalizes and unifies a wide range of special functions, including various orthogonal polynomials and generalized hypergeometric functions. The relationship between Dirichlet splines and an important class of hypergeometric functions of several variables is given in [14,24]. Numerous investigations of B-splines, including those of [14,25,26], used Dirichlet averages.

6.2. Fractional Calculus

The Dirichlet averages of elementary functions, such as the power and exponential functions, have been given by many notable mathematicians. Many results in the literature convert an elementary function into summation form after taking its Dirichlet average, apply a fractional integral, and thereby obtain new results; see [27,28,29,30,31,32]. These results should prove useful to mathematicians and scientists in a variety of fields.

6.3. Statistical Mechanics

Statistical mechanics is a branch of physics that studies the behavior of large systems of particles, such as gases, liquids, and solids. In statistical mechanics, entropy is a measure of the degree of disorder or randomness in a system; for more details, see [33,34]. The greater the entropy, the more disordered the system. Dirichlet averages and statistical mechanics are connected through the concept of entropy. Dirichlet averages are a type of mathematical average that weights a set of values according to a given probability distribution. For example, given a set of values $x_1, x_2, \ldots, x_n$ and a probability distribution $p_1, p_2, \ldots, p_n$, the Dirichlet average is defined as:
$$D(p, x) = \sum_{i=1}^{n} p_i x_i.$$
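This weighted average is straightforward to compute; a small Python illustration with hypothetical weights and values of our own choosing:

```python
def dirichlet_average(p, x):
    """Weighted average D(p, x) = sum_i p_i * x_i for a probability vector p."""
    assert abs(sum(p) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(pi * xi for pi, xi in zip(p, x))

# Hypothetical example: three values weighted by a probability distribution.
p = [0.5, 0.3, 0.2]
x = [1.0, 2.0, 4.0]
print(dirichlet_average(p, x))  # 0.5*1 + 0.3*2 + 0.2*4 = 1.9
```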
The connection between Dirichlet averages and statistical mechanics comes from the fact that the Dirichlet average can be seen as a type of average energy of a system weighted by a probability distribution. In statistical mechanics, the average energy of a system is likewise weighted by a probability distribution, and the entropy of the system is related to the probability distribution of the energy states. In particular, the Boltzmann entropy of a system is given by:
$$S = -k \sum_{i=1}^{n} p_i \ln p_i,$$
where $k$ is the Boltzmann constant. This formula shows that the entropy of a system is proportional to the expected value of the negative logarithm of the probabilities of its energy states. Thus, Dirichlet averages and statistical mechanics are connected through the concept of entropy, which relates the average energy of a system to the probability distribution of its energy states. Dirichlet forms and their applications to quantum mechanics and statistical mechanics were established in [35]. Connections between Dirichlet distributions and a scale-invariant probabilistic model based on Leibniz-like pyramids were introduced in [36]. Ref. [37] showed that marginalizing the joint distribution of individual energies yields a symmetric Dirichlet distribution.
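To make the entropy formula concrete, here is a short Python sketch (with $k$ normalized to 1 and hypothetical probability vectors of our own choosing) comparing a uniform distribution over energy states with a sharply peaked one:

```python
import math

def boltzmann_entropy(p, k=1.0):
    """S = -k * sum_i p_i * ln(p_i); zero-probability states contribute nothing."""
    return -k * sum(pi * math.log(pi) for pi in p if pi > 0.0)

uniform = [0.25, 0.25, 0.25, 0.25]   # maximally disordered: S = ln 4
peaked  = [0.97, 0.01, 0.01, 0.01]   # nearly ordered: much smaller S
print(boltzmann_entropy(uniform))
print(boltzmann_entropy(peaked))
```

As expected, the uniform distribution attains the maximum entropy $\ln 4$, while the peaked distribution, being nearly deterministic, has far smaller entropy.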

6.4. Gene Expression Modeling

Clustering is a key data processing technique for interpreting microarray data and determining genetic networks. Hierarchical Dirichlet process (HDP) clustering can capture the hierarchical elements common in biological data, such as gene expression data, by incorporating a hierarchical structure into the statistical model. Ref. [38] presented a hierarchical Dirichlet process model for gene expression clustering.

6.5. Geometrical Probability

Thomas and Mathai [39] proposed an application of a generalized Dirichlet model to geometrical probability problems. When the linearly independent random points in Euclidean $n$-space have a very general real rectangular matrix-variate beta density, the volumes of random parallelotopes are explored. In order to evaluate statistical hypotheses, a structural decomposition is provided, and random volumes are linked to generalized Dirichlet models and likelihood ratio criteria. This makes it possible to calculate percentage points of random volumes using the p-values of the generalized Dirichlet marginals.

6.6. Bayesian Analysis

Carlson's original definition of Dirichlet averages expresses them as probability-generating functions of mixed multinomial distributions. They also contribute significantly to the evaluation of elliptic integrals and have several connections to statistical applications. Ref. [40] constructed several nested families of distributions, generalizing the Dirichlet distributions, for Bayesian inference in multinomial sampling and contingency tables. These distributions can be used to model populations of personal probabilities evolving under the process of inference from statistical data.

7. Conclusions

In this study, the fundamental ideas for developing the theory of the matrix-variate Dirichlet measure in the complex domain are presented. The complex matrix-variate type-2 Dirichlet measure and averages over some useful matrix-variate functions are discussed. We establish the Dirichlet measure in the rectangular matrix-variate case and the relationship between Tsallis entropy and Dirichlet averages, and we identify a few applications in various sub-domains.

Author Contributions

Conceptualization, H.J.H.; Writing—original draft, P.T. and N.S.; Writing—review & editing, P.T., N.S. and H.J.H.; Supervision, H.J.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors would like to thank the referees for their valuable comments, which enabled the authors to improve the presentation of the material in the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Hardy, G.H. On Dirichlet's Divisor Problem. Proc. Lond. Math. Soc. 1917, 2, 1–25.
2. Hardy, G.H.; Littlewood, J.E. Some Problems of "Partitio Numerorum". III. On the Expression of a Number as a Sum of Primes. Acta Math. 1923, 44, 1–70.
3. Erdős, L.; Yau, H. A Dynamical Approach to Random Matrix Theory; New York University: New York, NY, USA, 2017.
4. Mai, J.-F.; Schenk, S.; Scherer, M. Analyzing model robustness via a distortion of the stochastic root: A Dirichlet prior approach. Stat. Risk Model. 2015, 32, 177–195.
5. Blei, D.M.; Ng, A.Y.; Jordan, M.I. Latent Dirichlet Allocation. J. Mach. Learn. Res. 2003, 3, 993–1022.
6. Griffiths, T.L.; Steyvers, M. Finding Scientific Topics. Proc. Natl. Acad. Sci. USA 2004, 101, 5228–5235.
7. Hardy, G.H.; Littlewood, J.E.; Polya, G. Inequalities; Cambridge University Press: Cambridge, UK, 1952.
8. de Finetti, B. Theory of Probability; Wiley: New York, NY, USA, 1974; Volume I.
9. Carlson, B.C. Special Functions of Applied Mathematics; Academic Press: New York, NY, USA, 1977.
10. Carlson, B.C. Lauricella's hypergeometric function F_D. J. Math. Anal. Appl. 1963, 7, 452–470.
11. Carlson, B.C. A connection between elementary and higher transcendental functions. SIAM J. Appl. Math. 1969, 17, 116–148.
12. Carlson, B.C. Invariance of an integral average of a logarithm. Amer. Math. Mon. 1975, 82, 379–382.
13. Carlson, B.C. Dirichlet averages of x^t log x. SIAM J. Math. Anal. 1987, 18, 550–565.
14. Carlson, B.C. B-splines, hypergeometric functions and Dirichlet averages. J. Approx. Theory 1991, 67, 311–325.
15. Mathai, A.M. Jacobians of Matrix Transformations and Functions of Matrix Argument; World Scientific Publishing: New York, NY, USA, 1997.
16. Gupta, R.D.; Richards, D.S.P. Multivariate Liouville distributions. J. Multivariate Anal. 1987, 23, 233–256.
17. Hayakawa, T. On the distribution of the latent roots of a complex Wishart matrix (non-central case). Ann. Inst. Statist. Math. 1972, 24, 1–17.
18. Fujikoshi, Y. Asymptotic expansions of the non-null distributions of two criteria for the linear hypothesis concerning complex multivariate normal populations. Ann. Inst. Statist. Math. 1971, 23, 477–490.
19. Mathai, A.M. An Introduction to Geometrical Probability: Distributional Aspects with Applications; Gordon and Breach: Amsterdam, The Netherlands, 1999.
20. Mathai, A.M.; Provost, S.B.; Hayakawa, T. Bilinear Forms and Zonal Polynomials; Lecture Notes Series; Springer: New York, NY, USA, 1995.
21. Mathai, A.M. Random volumes under a general matrix-variate model. Linear Algebra Its Appl. 2007, 425, 162–170.
22. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487.
23. Mathai, A.M.; Rathie, P.N. Basic Concepts in Information Theory and Statistics: Axiomatic Foundations and Applications; Wiley Eastern: Mumbai, India, 1975.
24. Neuman, E.; Fleet, P.J.V. Moments of Dirichlet splines and their applications to hypergeometric functions. J. Comput. Appl. Math. 1994, 53, 225–241.
25. Massopust, P.; Forster, B. Multivariate complex B-splines and Dirichlet averages. J. Approx. Theory 2010, 162, 252–269.
26. Simić, S.; Bin-Mohsin, B. Stolarsky means in many variables. Mathematics 2020, 8, 1320.
27. Kilbas, A.A.; Kattuveettill, A. Representations of Dirichlet averages of generalized Mittag–Leffler function via fractional integrals and special functions. Frac. Calc. Appl. Anal. 2008, 11, 471–492.
28. Saxena, R.K.; Pogány, T.K.; Ram, J.; Daiya, J. Dirichlet averages of generalized multi-index Mittag–Leffler functions. Armen. J. Math. 2010, 3, 174–187.
29. Uthayakumar, R.; Gowrisankar, A. Generalized Fractal Dimensions in Image Thresholding Technique. Inf. Sci. Lett. 2014, 3, 125–134.
30. Noor, M.A.; Noor, K.I.; Iftikhar, S.; Awan, M.U. Fractal Integral Inequalities for Harmonic Convex Functions. Appl. Math. Inf. Sci. 2018, 12, 831–839.
31. Dinesh, V.; Murugesan, G. A CPW-Fed Hexagonal Antenna with Fractal Elements for UWB Applications. Appl. Math. Inf. Sci. 2019, 13, 73–79.
32. Kumar, D.; Ram, J.; Choi, J. Dirichlet Averages of Generalized Mittag–Leffler Type Function. Fractal Fract. 2022, 6, 297.
33. Liu, Y. Extended Bayesian Framework for Multicategory Support Vector Machine. J. Stat. Appl. Prob. 2020, 9, 1–11.
34. Kumar, M.; Awasthi, A.A.; Kumar, A.; Patel, K.K. Sequential Testing Procedure for the Parameter of Left Truncated Exponential Distribution. J. Stat. Appl. Prob. 2020, 9, 119–125.
35. Albeverio, S.; Høegh-Krohn, R. Some remarks on Dirichlet forms and their applications to quantum mechanics and statistical mechanics. Funct. Anal. Markov Process. 1982, 923, 120–132.
36. Rodriguez, A.; Tsallis, C. Connection between Dirichlet distributions and a scale-invariant probabilistic model based on Leibniz-like pyramids. J. Stat. Mech. Theory Exp. 2014, 12, P12027.
37. Scalas, E.; Gabriel, A.T.; Martin, E.; Germano, G. Velocity and energy distributions in microcanonical ensembles of hard spheres. Phys. Rev. E 2015, 92, 022140.
38. Wang, L.; Wang, X. Hierarchical Dirichlet process model for gene expression clustering. EURASIP J. Bioinform. Syst. Biol. 2013, 5, 1–14.
39. Thomas, S.; Mathai, A.M. p-Content of a p-Parallelotope and Its Connection to Likelihood Ratio Statistic. Sankhyā Indian J. Stat. Ser. 2009, 71, 49–63.
40. Dickey, J.M. Multiple hypergeometric functions: Probabilistic interpretations and statistical uses. J. Amer. Statist. Assoc. 1983, 78, 628–637.
