Article

An Extended Zeta Function with Applications in Model Building and Bayesian Analysis

Emeritus Professor, Department of Mathematics and Statistics, McGill University, Montreal, QC H3A2K6, Canada
Mathematics 2023, 11(19), 4076; https://doi.org/10.3390/math11194076
Submission received: 22 August 2023 / Revised: 12 September 2023 / Accepted: 20 September 2023 / Published: 26 September 2023

Abstract: In certain problems in model building and Bayesian analysis, the results end up in forms connected with generalized zeta functions. This necessitates the introduction of an extended form of the generalized zeta function; such an extended zeta function is introduced in this paper. In model building situations and in various types of applications in the physical, biological and social sciences and engineering, a basic model is the Gaussian model in the univariate, multivariate and matrix-variate situations. A real scalar variable logistic model behaves like a Gaussian model but with a thicker tail. Hence, for many industrial applications, a logistic model is preferred to a Gaussian model. When we study the properties of a logistic model in the multivariate and matrix-variate cases, in the real and complex domains, the problem invariably ends up in the extended zeta function defined in this paper. Several such extended logistic models are considered. It is also found that certain Bayesian considerations end up in the extended zeta function introduced in this paper. Several such Bayesian models in the multivariate and matrix-variate cases in the real and complex domains are discussed. It is stated in a recent paper that “Quantum Mechanics is just the Bayesian theory generalized to the complex Hilbert space”. Hence, the models developed in this paper are expected to have applications in quantum mechanics, communication theory, physics, statistics and related areas.

1. Introduction

Since we will be dealing with scalar, vector and matrix variables in the real and complex domains; both mathematical and random variables; and real scalar, vector and matrix parameters in the real and complex domains, we will need a multiplicity of symbols. In order to simplify matters, we will use the following simplified notation: real scalar mathematical as well as random variables will be denoted by the same lower-case letters, such as $x_1, x_2$. Real mathematical as well as random vector/matrix variables, whether square or rectangular matrices, will be denoted by capital letters such as $X_1, X_2$. $a, b$, etc., and $A, B$, etc., will be used to denote scalar and vector/matrix constants, respectively. Variables in the complex domain, whether scalar, vector or matrix, will be denoted with a tilde placed over them; no tilde will be used on constants. When other symbols or Greek letters are used, the notation will be explained then and there. Let $X = (x_{ij})$ be a $p \times q$ matrix, where the elements $x_{ij}$ are functionally independent (distinct) real scalar variables. Then the wedge product of differentials will be denoted by $dX = \wedge_{i,j}\, dx_{ij} = dx_{11} \wedge \cdots \wedge dx_{pq}$. A $p \times q$ matrix in the complex domain can be written as $\tilde{X} = X_1 + iX_2$, where $i = \sqrt{-1}$ and $X_1, X_2$ are real $p \times q$ matrices; then $d\tilde{X} = dX_1 \wedge dX_2$. $X'$ will denote the transpose of $X$ and $\tilde{X}^{*}$ will denote the conjugate transpose of $\tilde{X}$. When a $p \times p$ matrix satisfies $\tilde{X} = \tilde{X}^{*}$, then $\tilde{X}$ is Hermitian. When $\tilde{X}$ is Hermitian and $\tilde{X} = X_1 + iX_2$, $i=\sqrt{-1}$, with $X_1, X_2$ real, then $X_1' = X_1$ (symmetric) and $X_2' = -X_2$ (skew symmetric). The determinant of a $p \times p$ matrix $X$ will be denoted as $|X|$ or $\det(X)$, and the absolute value of the determinant of a square matrix in the complex domain will be denoted as $|\det(\tilde{X})| = +[\det(\tilde{X}\tilde{X}^{*})]^{\frac{1}{2}}$; that is, if $\det(\tilde{X}) = a + ib$, then $\det(\tilde{X}\tilde{X}^{*}) = a^2 + b^2$ and $|\det(\tilde{X})| = +\sqrt{a^2 + b^2}$.
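The determinant convention for complex matrices can be checked numerically; the following is a small NumPy sketch (the matrix size and random seed are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 4
# A random p x p complex matrix X = X1 + i X2 with X1, X2 real.
X = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))

# det(X X*) is real and non-negative, and |det(X)| = sqrt(det(X X*)).
d = np.linalg.det(X)                  # a + ib
dd = np.linalg.det(X @ X.conj().T)    # det(X X*), real up to round-off
assert abs(dd.imag) < 1e-9
assert np.isclose(abs(d), np.sqrt(dd.real))
assert np.isclose(abs(d), np.hypot(d.real, d.imag))
```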
The need for introducing an extended zeta function will be illustrated by considering a simple model building situation [1]. Let $x$ be a real scalar variable. A popular model in the real scalar variable case in different fields, such as the mathematical, physical, biological and social sciences and related areas, is the Gaussian model. A real scalar variable logistic model, however, behaves like a Gaussian model but with a thicker tail; hence, a logistic model is preferred in many industrial applications. A generalized logistic model was introduced in [2] as an exponentiation of a basic type-2 beta model. A basic type-2 beta model is the following:
$$f(x)\,dx = c\,x^{\alpha-1}(1+x)^{-(\alpha+\beta)}\,dx,\quad 0\le x<\infty \tag{1}$$

and $f(x)=0$ elsewhere. Let $x=e^{-y}$, so that $0\le x<\infty \Rightarrow -\infty<y<\infty$. Then,

$$f(x)\,dx = c\,x^{\alpha-1}(1+x)^{-(\alpha+\beta)}\,dx = c\,\frac{e^{-\alpha y}}{(1+e^{-y})^{\alpha+\beta}}\,dy \tag{2}$$

where

$$c = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)}$$

and then the function corresponding to $y$, denoted by $g(y)$, is the following:

$$g(y)\,dy = c\,\frac{e^{-\alpha y}}{(1+e^{-y})^{\alpha+\beta}}\,dy,\quad \Re(\alpha)>0,\ \Re(\beta)>0,\ -\infty<y<\infty, \tag{3}$$

where $\Re(\cdot)$ denotes the real part of $(\cdot)$. It may be observed that

$$\frac{e^{-\alpha y}}{(1+e^{-y})^{\alpha+\beta}} = \frac{e^{\beta y}}{(1+e^{y})^{\alpha+\beta}},\quad -\infty<y<\infty \tag{4}$$

and when $\alpha=\beta=1$, then (4) reduces to

$$\frac{e^{-y}}{(1+e^{-y})^{2}} = \frac{e^{y}}{(1+e^{y})^{2}} = g_1(y),\quad -\infty<y<\infty,$$
and g 1 ( y ) is the basic logistic model, which is the most important model in industrial applications as a replacement for the standard Gaussian model.
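The normalization of $g(y)$ and of the basic logistic case $\alpha=\beta=1$ can be verified numerically. Below is a sketch (parameter values are arbitrary test choices; the densities are evaluated in log scale to avoid overflow at large $|y|$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# g(y) = c e^{-alpha y} (1 + e^{-y})^{-(alpha+beta)} with
# c = Gamma(alpha+beta)/(Gamma(alpha) Gamma(beta)); log(1+e^{-y}) = logaddexp(0, -y).
alpha, beta = 2.5, 1.5     # arbitrary values with positive real parts
c = gamma(alpha + beta) / (gamma(alpha) * gamma(beta))
g = lambda y: c * np.exp(-alpha * y - (alpha + beta) * np.logaddexp(0.0, -y))
total, _ = quad(g, -np.inf, np.inf)
assert abs(total - 1.0) < 1e-8

# alpha = beta = 1 gives the basic logistic density g1(y) = e^{-y}/(1+e^{-y})^2.
g1 = lambda y: np.exp(-y - 2.0 * np.logaddexp(0.0, -y))
total1, _ = quad(g1, -np.inf, np.inf)
assert abs(total1 - 1.0) < 1e-8
```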
Let us look into some extensions of the logistic model in (3) to vector/matrix cases in the real and complex domains. Consider a $p\times1$ real vector $X$, $X'=[x_1,\ldots,x_p]$, where the $x_j$, $j=1,\ldots,p$, are distinct real scalar variables. Then $X'X=x_1^2+\cdots+x_p^2$. If $\tilde{X}$ is in the complex domain, then $\tilde{X}^{*}\tilde{X}=|\tilde{x}_1|^2+\cdots+|\tilde{x}_p|^2=(x_{11}^2+x_{12}^2)+\cdots+(x_{p1}^2+x_{p2}^2)$, $\tilde{x}_j=x_{j1}+ix_{j2}$, $i=\sqrt{-1}$, with $x_{j1},x_{j2}$, $j=1,\ldots,p$, real. Consider the evaluation of the following integral in the real domain:

$$\int_X \frac{e^{-\alpha X'X}}{(1+a\,e^{-X'X})^{\alpha+\beta}}\,dX = \sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k(-a)^k}{k!}\int_X e^{-(\alpha+k)X'X}\,dX = \pi^{\frac{p}{2}}\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k(-a)^k}{k!}\,\frac{1}{(\alpha+k)^{\frac{p}{2}}},\quad 0<a<1, \tag{5}$$

for $\Re(\alpha)>0$ and $\Re(\beta)>0$, where, for example, $(a)_k=a(a+1)\cdots(a+k-1)$, $a\neq0$, $(a)_0=1$, is the Pochhammer symbol. The binomial expansion is valid here since $0<a<1$ and $0<a\,e^{-X'X}<1$ for $X'X>0$. Note that the basic zeta function $\zeta(\rho)$ and the generalized zeta function $\zeta(\rho,\alpha)$ are the following [3]:

$$\zeta(\rho)=\sum_{k=1}^{\infty}\frac{1}{k^{\rho}},\ \Re(\rho)>1;\quad \zeta(\rho,\alpha)=\sum_{k=0}^{\infty}\frac{1}{(\alpha+k)^{\rho}},\ \Re(\rho)>1,\ \alpha\neq0,-1,-2,\ldots;\quad \zeta(2)=\zeta(2,1)=\frac{\pi^{2}}{6},\ \zeta(4)=\frac{\pi^{4}}{90}. \tag{6}$$
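These classical special values, and the generalized (Hurwitz) zeta function, can be checked with the mpmath library (a quick sketch; `mpmath.zeta` takes the shift parameter of the generalized zeta as its second argument):

```python
import mpmath as mp

mp.mp.dps = 30  # working precision in decimal digits

# Riemann zeta special values quoted above.
assert mp.almosteq(mp.zeta(2), mp.pi**2 / 6)
assert mp.almosteq(mp.zeta(4), mp.pi**4 / 90)

# Hurwitz (generalized) zeta: zeta(rho, alpha) = sum_{k>=0} (alpha+k)^(-rho),
# and zeta(rho, 1) recovers the Riemann zeta function.
assert mp.almosteq(mp.zeta(3, 1), mp.zeta(3))
direct = mp.nsum(lambda k: (mp.mpf('0.7') + k)**-3, [0, mp.inf])
assert mp.almosteq(mp.zeta(3, mp.mpf('0.7')), direct)
```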
Note that (5) is connected to a generalized zeta series, but it is not itself a zeta series. Hence, we need an extension of the generalized zeta series. In the above integral, if $X'X$ is replaced by $[X'X]^{\delta}$, $\delta>0$, and if there is another multiplicative factor $[X'X]^{\gamma}$, then the integral becomes complicated, but it can be evaluated by using some results on Jacobians of matrix transformations. For the discussions in Section 2 and Section 3, we will need some Jacobians of matrix transformations.
The organization of this paper is as follows. In Section 2, an extended zeta function is introduced and then several extended logistic models are considered. All these models end up in extended zeta functions. In Section 3, some Bayesian procedures are considered, some of which will also end up in extended zeta functions.

2. An Extension of the Logistic Model

We will introduce several models here; each one will eventually end up in the extended zeta function defined below.
Definition 1 
(Extended zeta function). Let $\alpha_j \neq 0,-1,-2,\ldots$, $j=1,\ldots,r$, and $b_j \neq 0,-1,-2,\ldots$, $j=1,\ldots,q$. An extended zeta function, denoted by $\zeta_{p,q}^{r}(x)$, will be defined by the following series:

$$\zeta_{p,q}^{r}(x) = \zeta[\{(m_1,\alpha_1),\ldots,(m_r,\alpha_r)\} : a_1,\ldots,a_p;\, b_1,\ldots,b_q;\, x] = \sum_{k=0}^{\infty}\Bigg[\frac{1}{(\alpha_1+k)^{m_1}\cdots(\alpha_r+k)^{m_r}}\,\frac{(a_1)_k\cdots(a_p)_k}{(b_1)_k\cdots(b_q)_k}\,\frac{x^{k}}{k!}\Bigg] \tag{7}$$

where $(a)_k$ is the Pochhammer symbol defined in the previous section, and it is assumed that $\sum_{j=1}^{r} m_j > 1$, with $q \ge p$, or $p = q+1$ and $|x| < 1$.
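Definition 1 can be sketched numerically by truncating the series. The helper below (function name, parameter values and truncation length are illustrative choices, not from the paper) also checks the identity used repeatedly in the sequel: for $r=1$, $p=1$, $q=0$,
$\Gamma(s)\,\zeta[\{(s,\alpha)\}:\alpha+\beta;\,;-a] = \int_0^\infty u^{s-1}e^{-\alpha u}(1+a\,e^{-u})^{-(\alpha+\beta)}\,du$ for $0<a<1$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def ext_zeta(pairs, a_params, b_params, x, terms=300):
    """Truncated series for the extended zeta function (7).
    `pairs` is a list of (m_j, alpha_j); `a_params`, `b_params` are the
    upper and lower Pochhammer parameters.  Converges geometrically for |x| < 1."""
    total, coeff = 0.0, 1.0   # coeff = (a1)_k...(ap)_k / [(b1)_k...(bq)_k] * x^k / k!
    for k in range(terms):
        factor = 1.0
        for m_j, alpha_j in pairs:
            factor *= (alpha_j + k) ** (-m_j)
        total += coeff * factor
        num = 1.0
        for a_i in a_params:
            num *= a_i + k
        den = k + 1.0
        for b_i in b_params:
            den *= b_i + k
        coeff *= x * num / den
    return total

# r = 1, p = 1, q = 0 case against the corresponding integral.
alpha, beta, s, a = 1.2, 2.0, 1.7, 0.4
series = ext_zeta([(s, alpha)], [alpha + beta], [], -a)
integral, _ = quad(lambda u: u**(s - 1) * np.exp(-alpha * u)
                   * (1 + a * np.exp(-u))**(-(alpha + beta)), 0, np.inf)
assert abs(gamma(s) * series - integral) < 1e-7
```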
Now we examine the evaluation of a number of integrals. Let $u=(X-\mu)'A(X-\mu)$, where $X$ is a $p\times1$ real vector; $X'=[x_1,\ldots,x_p]$; $-\infty<x_j<\infty$, $j=1,\ldots,p$; the expected value, or mean value, is $E[X]=\mu$, where $E[\cdot]$ denotes the expected value; $A>O$ is a constant $p\times p$ real positive definite matrix; and $\mu$ is a $p\times1$ parameter vector. Consider the evaluation of the following integral:

$$f_1(X)\,dX = c_1\,[u]^{\gamma}\,\frac{e^{-\alpha u}}{(1+a\,e^{-u})^{\alpha+\beta}}\,dX,\quad \Re(\alpha)>0,\ \Re(\beta)>0,\ 0<a<1,\ \Re(\gamma)\ge0 \tag{8}$$

where $c_1$ is a constant. Note that $u=(X-\mu)'A(X-\mu)>0$ since $A>O$ is positive definite. In order to bring this to a sum of squares, one can make the transformation $Y=A^{\frac{1}{2}}(X-\mu)$; then $u=Y'Y=y_1^2+\cdots+y_p^2$, where $A^{\frac{1}{2}}$ is the positive definite square root of the positive definite matrix $A>O$. Then we can convert $dX$ into $du$. This will enable us to evaluate the integral in (8). Two results on Jacobians of matrix transformations are needed here; these will be listed as lemmas without proofs. For the proofs and for other Jacobians, see [4].
Lemma 1. 
Let the $p\times q$ matrix $X=(x_{ij})$ be in the real domain, where the $pq$ elements $x_{ij}$ are functionally independent real scalar variables, and let $A$ be a $p\times p$ and $B$ a $q\times q$ nonsingular constant matrix. Then,

$$Y=AXB,\ |A|\neq0,\ |B|\neq0 \Rightarrow dY=|A|^{q}\,|B|^{p}\,dX. \tag{9}$$

Let the $p\times q$ matrix $\tilde{Y}$ be in the complex domain, and let $A$ and $B$ be $p\times p$ and $q\times q$ nonsingular constant matrices, respectively, in the real or complex domain. Then,

$$\tilde{Z}=A\tilde{Y}B,\ |A|\neq0,\ |B|\neq0 \Rightarrow d\tilde{Z}=|\det(A)|^{2q}\,|\det(B)|^{2p}\,d\tilde{Y}=|\det(AA^{*})|^{q}\,|\det(B^{*}B)|^{p}\,d\tilde{Y} \tag{10}$$
where | det ( · ) | denotes the absolute value of the determinant of ( · ) .
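Lemma 1 can be verified numerically: $Y=AXB$ is linear in the entries of $X$, with matrix $B'\otimes A$ acting on the stacked columns of $X$, so the Jacobian determinant is directly computable (a NumPy sketch; sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 3, 2
A = rng.standard_normal((p, p))
B = rng.standard_normal((q, q))

# vec(AXB) = (B' kron A) vec(X), so the Jacobian of X -> Y = AXB has
# absolute determinant |A|^q |B|^p, as in (9).
J = np.kron(B.T, A)
assert np.isclose(abs(np.linalg.det(J)),
                  abs(np.linalg.det(A))**q * abs(np.linalg.det(B))**p)

# Complex case: the real Jacobian determinant of a complex linear map M is
# |det(M)|^2, which gives |det(A)|^{2q} |det(B)|^{2p} as in (10).
Ac = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
Bc = rng.standard_normal((q, q)) + 1j * rng.standard_normal((q, q))
Jc = np.kron(Bc.T, Ac)
assert np.isclose(abs(np.linalg.det(Jc))**2,
                  abs(np.linalg.det(Ac))**(2 * q) * abs(np.linalg.det(Bc))**(2 * p))
```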
Lemma 2. 
Let the $p\times q$, $p\le q$, matrix $X$ of rank $p$ be in the real domain with $pq$ distinct elements $x_{ij}$. Let the $p\times p$ matrix $S=XX'$, which is positive definite. Then, going through a transformation involving a lower triangular matrix with positive diagonal elements and a semi-orthonormal matrix, and after integrating out the differential element corresponding to the semi-orthonormal matrix, we have the following connection between $dX$ and $dS$; see the details in [4]:

$$dX = \frac{\pi^{\frac{pq}{2}}}{\Gamma_p(\frac{q}{2})}\,|S|^{\frac{q}{2}-\frac{p+1}{2}}\,dS \tag{11}$$

where, for example, $\Gamma_p(\alpha)$ is the real matrix-variate gamma function given by

$$\Gamma_p(\alpha) = \pi^{\frac{p(p-1)}{4}}\,\Gamma(\alpha)\,\Gamma\Big(\alpha-\frac{1}{2}\Big)\cdots\Gamma\Big(\alpha-\frac{p-1}{2}\Big) = \int_{Y>O}|Y|^{\alpha-\frac{p+1}{2}}e^{-\mathrm{tr}(Y)}\,dY,\quad \Re(\alpha)>\frac{p-1}{2} \tag{12}$$

where $\mathrm{tr}(\cdot)$ means the trace of the square matrix $(\cdot)$, and $\Re(\cdot)$ is the real part of $(\cdot)$. Since $\Gamma_p(\alpha)$ is associated with the real matrix-variate gamma integral of (12), we call $\Gamma_p(\alpha)$ a real matrix-variate gamma function. This $\Gamma_p(\alpha)$ is also known by different names in the literature. When the $p\times q$, $p\le q$, matrix $\tilde{Z}$ of rank $p$, with distinct elements, is in the complex domain, and letting $\tilde{S}=\tilde{Z}\tilde{Z}^{*}$, which is $p\times p$ and Hermitian positive definite, then, going through a transformation involving a lower triangular matrix with real and positive diagonal elements and a semi-unitary matrix, we can establish the following connection between $d\tilde{Z}$ and $d\tilde{S}$:

$$d\tilde{Z} = \frac{\pi^{pq}}{\tilde{\Gamma}_p(q)}\,|\det(\tilde{S})|^{q-p}\,d\tilde{S} \tag{13}$$

where, for example, $\tilde{\Gamma}_p(\alpha)$ is the complex matrix-variate gamma function given by

$$\tilde{\Gamma}_p(\alpha) = \pi^{\frac{p(p-1)}{2}}\,\Gamma(\alpha)\,\Gamma(\alpha-1)\cdots\Gamma(\alpha-p+1) = \int_{\tilde{T}>O}|\det(\tilde{T})|^{\alpha-p}e^{-\mathrm{tr}(\tilde{T})}\,d\tilde{T},\quad \Re(\alpha)>p-1. \tag{14}$$
We call Γ ˜ p ( α ) the complex matrix-variate gamma because it is associated with a matrix-variate gamma integral in the complex domain.
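As a numerical check of (12) (a sketch; the cutoffs, tolerances and parameter value are my choices), the product form of $\Gamma_p(\alpha)$ can be compared with its integral form for $p=2$, where the integral runs over $2\times2$ symmetric positive definite matrices:

```python
import numpy as np
from scipy.integrate import tplquad
from scipy.special import gamma

def real_mv_gamma(p, alpha):
    # Product form of Gamma_p(alpha) from (12).
    return np.pi**(p * (p - 1) / 4) * np.prod([gamma(alpha - j / 2) for j in range(p)])

# Integral form for p = 2: |X|^{alpha-3/2} e^{-tr(X)} over X = [[x11, x12], [x12, x22]]
# with x11 > 0, x22 > 0, x12^2 < x11 x22 (the e^{-tr} tail beyond 40 is negligible).
alpha = 2.0
val, _ = tplquad(
    lambda x12, x22, x11: (x11 * x22 - x12**2)**(alpha - 1.5) * np.exp(-(x11 + x22)),
    0, 40,
    lambda x11: 0, lambda x11: 40,
    lambda x11, x22: -np.sqrt(x11 * x22),
    lambda x11, x22: np.sqrt(x11 * x22),
    epsabs=1e-6, epsrel=1e-6)
assert abs(val - real_mv_gamma(2, alpha)) < 1e-3   # Gamma_2(2) = pi/2
```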
Now we can evaluate the integral in (8). Make the transformation $Y=A^{\frac{1}{2}}(X-\mu)\Rightarrow dY=|A|^{\frac{1}{2}}\,dX$ from Lemma 1. Let $u=Y'Y\Rightarrow dY=\frac{\pi^{\frac{p}{2}}}{\Gamma(\frac{p}{2})}u^{\frac{p}{2}-1}\,du$ from Lemma 2, observing that $Y$ here is a $1\times p$ matrix. Note that since $0<a\,e^{-u}<1$ for $u>0$, one can expand $(1+a\,e^{-u})^{-(\alpha+\beta)}$ by using the binomial expansion as $\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k(-a)^k}{k!}e^{-ku}$. Then,

$$\int_X f_1(X)\,dX = c_1\,|A|^{-\frac{1}{2}}\,\frac{\pi^{\frac{p}{2}}}{\Gamma(\frac{p}{2})}\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k(-a)^k}{k!}\int_{u=0}^{\infty}u^{\gamma+\frac{p}{2}-1}e^{-(\alpha+k)u}\,du$$
$$= c_1\,|A|^{-\frac{1}{2}}\,\frac{\pi^{\frac{p}{2}}}{\Gamma(\frac{p}{2})}\,\Gamma\Big(\gamma+\frac{p}{2}\Big)\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k(-a)^k}{k!}(\alpha+k)^{-(\gamma+\frac{p}{2})} = c_1\,|A|^{-\frac{1}{2}}\,\frac{\pi^{\frac{p}{2}}}{\Gamma(\frac{p}{2})}\,\Gamma\Big(\gamma+\frac{p}{2}\Big)\,\zeta\Big[\Big\{\Big(\gamma+\frac{p}{2},\alpha\Big)\Big\}:\alpha+\beta;\,;-a\Big] \tag{15}$$

where $\zeta[\{(\gamma+\frac{p}{2},\alpha)\}:\alpha+\beta;\,;-a]$ is the case $p=1$, $q=0$, $r=1$ of (7), with $x=-a$, $|a|<1$.
Hence, when f 1 ( X ) is a statistical density of X, then c 1 will be the normalizing constant there and
$$c_1^{-1} = |A|^{-\frac{1}{2}}\,\frac{\pi^{\frac{p}{2}}}{\Gamma(\frac{p}{2})}\,\Gamma\Big(\gamma+\frac{p}{2}\Big)\,\zeta\Big[\Big\{\Big(\gamma+\frac{p}{2},\alpha\Big)\Big\}:\alpha+\beta;\,;-a\Big] \tag{16}$$

for $\Re(\gamma)+\frac{p}{2}>1$, $\Re(\alpha)>0$, $\Re(\beta)>0$, $0<a<1$, where $\zeta(\cdot)$ is defined in (7). We will be constructing our models as non-negative integrable functions with normalizing constants so that they can also be used as statistical densities. Then we have the first extended logistic model as the following, which is also a statistical density.

2.1. Extended Logistic Model 1 in the Real Domain

$$f_1(X)\,dX=\frac{|A|^{\frac{1}{2}}\,\Gamma(\frac{p}{2})}{\pi^{\frac{p}{2}}\,\Gamma(\gamma+\frac{p}{2})}\Big[\zeta\Big[\Big\{\Big(\gamma+\frac{p}{2},\alpha\Big)\Big\}:\alpha+\beta;\,;-a\Big]\Big]^{-1}\times\big[(X-\mu)'A(X-\mu)\big]^{\gamma}\,\frac{e^{-\alpha(X-\mu)'A(X-\mu)}}{(1+a\,e^{-(X-\mu)'A(X-\mu)})^{\alpha+\beta}}\,dX \tag{17}$$

where $X$ is a $p\times1$ vector in the real domain, $\Re(\alpha)>0$, $\Re(\beta)>0$, $0<a<1$, $p\ge1$, $\Re(\gamma)+\frac{p}{2}>1$ and $A=A'>O$. The results in the complex domain, corresponding to the results in the real domain, will be denoted with the letter $a$ in the subscript as well as in the section number of the equation numbers. For example, the result in the complex domain corresponding to $f_1(X)$ will be denoted $f_{1a}(\tilde{X})$, and its equation number will be (18), so as to avoid the occurrence of too many equation numbers. Let $\tilde{X}$ be a $p\times1$ vector in the complex domain with distinct elements. Then the $j$-th element of $\tilde{X}$ is $\tilde{x}_j=x_{j1}+ix_{j2}$, where $i=\sqrt{-1}$, $x_{j1},x_{j2}$ are real and $j=1,\ldots,p$. Let $\tilde{X}^{*}$ denote the conjugate transpose of $\tilde{X}$. Then the model in the complex domain corresponding to $f_1(X)$ is available by replacing $(X-\mu)'A(X-\mu)$ with $(\tilde{X}-\tilde{\mu})^{*}A(\tilde{X}-\tilde{\mu})$, $\tilde{\mu}=E[\tilde{X}]$, and using $\tilde{Y}=A^{\frac{1}{2}}(\tilde{X}-\tilde{\mu})\Rightarrow d\tilde{Y}=|\det(A)|\,d\tilde{X}$, $\tilde{Y}^{*}\tilde{Y}=|\tilde{y}_1|^2+\cdots+|\tilde{y}_p|^2=(y_{11}^2+y_{12}^2)+\cdots+(y_{p1}^2+y_{p2}^2)$, $\tilde{y}_j=y_{j1}+iy_{j2}$, $i=\sqrt{-1}$, $j=1,\ldots,p$. Let $\tilde{u}=\tilde{Y}^{*}\tilde{Y}$; note that $\tilde{u}$ here is real. Hence, we can apply Lemma 2 in the real case for $2p$ real variables or for $p$ complex variables; both lead to the same result. For $2p$ real variables, we have $d\tilde{Y}=\frac{\pi^{\frac{2p}{2}}}{\Gamma(\frac{2p}{2})}\tilde{u}^{\frac{2p}{2}-1}\,d\tilde{u}=\frac{\pi^{p}}{\Gamma(p)}\tilde{u}^{p-1}\,d\tilde{u}$, which is the same result as when applying the complex version of the lemma. Then we have the equation for the density in the complex domain, shown in Section 2.2.

2.2. Extended Logistic Model 1 in the Complex Domain

$$f_{1a}(\tilde{X})\,d\tilde{X}=\frac{|\det(A)|\,\Gamma(p)}{\pi^{p}\,\Gamma(\gamma+p)\,\zeta[\{(\gamma+p,\alpha)\}:\alpha+\beta;\,;-a]}\times\big[(\tilde{X}-\tilde{\mu})^{*}A(\tilde{X}-\tilde{\mu})\big]^{\gamma}\,\frac{e^{-\alpha(\tilde{X}-\tilde{\mu})^{*}A(\tilde{X}-\tilde{\mu})}}{(1+a\,e^{-(\tilde{X}-\tilde{\mu})^{*}A(\tilde{X}-\tilde{\mu})})^{\alpha+\beta}}\,d\tilde{X} \tag{18}$$

for $0<a<1$, $\Re(\alpha)>0$, $\Re(\beta)>0$, $A=A^{*}>O$, $\Re(\gamma)+p>1$ and $p\ge1$, where $|\det(A)|$ is the absolute value of the determinant of $A$ and $\zeta(\cdot)$ is the extended zeta function defined in (7).
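As a sanity check on the normalizing constant, in the simplest case $p=1$, $A=1$, $\mu=0$ the extended logistic model 1 density integrates to 1 (a numerical sketch; parameter values are arbitrary, and the extended zeta factor is computed as a truncated series):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, poch

# p = 1, A = 1, mu = 0, so u = x^2; parameters satisfy Re(gamma) + p/2 > 1.
alpha, beta, gam, a = 1.5, 2.5, 1.0, 0.3
s = gam + 0.5                                    # gamma + p/2
zeta_val = sum(poch(alpha + beta, k) * (-a)**k / gamma(k + 1) * (alpha + k)**(-s)
               for k in range(150))              # truncated extended zeta series
c1_inv = (np.pi**0.5 / gamma(0.5)) * gamma(s) * zeta_val
f1 = lambda x: (x**2)**gam * np.exp(-alpha * x**2) \
               / (1 + a * np.exp(-x**2))**(alpha + beta) / c1_inv
total, _ = quad(f1, -np.inf, np.inf)
assert abs(total - 1.0) < 1e-7
```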
Note 1. 
In logistic model 1, we have taken $u=(X-\mu)'A(X-\mu)$ in the real case, where $X$ is $p\times1$ with distinct real scalar variables as elements, $\mu=E[X]$, and $A>O$ is a real positive definite constant matrix. Particular cases herein are (1) $\mu=O$; (2) $A=I$, where $I$ is the identity matrix; and (3) $\mu=O$, $A=I$, both in the real and complex domains. In the complex domain, $\mu$ is denoted $\tilde{\mu}$. One can consider a generalization of model 1 by taking $u^{\delta}$, $\delta>0$, in the exponent in the real case and $\tilde{u}^{\delta}$, $\delta>0$, in the exponent in the complex case. Then, in the real and complex cases, the exponential parts will be the following, respectively:

$$\frac{e^{-[(X-\mu)'A(X-\mu)]^{\delta}}}{(1+a\,e^{-[(X-\mu)'A(X-\mu)]^{\delta}})^{\alpha+\beta}},\qquad \frac{e^{-[(\tilde{X}-\tilde{\mu})^{*}A(\tilde{X}-\tilde{\mu})]^{\delta}}}{(1+a\,e^{-[(\tilde{X}-\tilde{\mu})^{*}A(\tilde{X}-\tilde{\mu})]^{\delta}})^{\alpha+\beta}}$$

and the first factor will be the same, that is, $[(X-\mu)'A(X-\mu)]^{\gamma}$ and $[(\tilde{X}-\tilde{\mu})^{*}A(\tilde{X}-\tilde{\mu})]^{\gamma}$, respectively, in the real and complex domains. In this case, the normalizing constants will be multiplied by $\delta$, in both the real and complex cases. In the real case, $\gamma+\frac{p}{2}$ is to be replaced by $\frac{1}{\delta}(\gamma+\frac{p}{2})$; in the complex case, $\gamma+p$ is to be replaced by $\frac{1}{\delta}(\gamma+p)$. These will be the only changes and, hence, this generalization of model 1 will not be listed here.
Note 2. 
In the PhD thesis [5], details of the applications of various types of statistical models, especially in the complex domain, are given. Most of the models are Gaussian- and Wishart-based. Ref. [5] is mainly concerned with the analysis of PolSAR (Polarimetric Synthetic Aperture Radar) data on single-look and multi-look radar return signals. It is found that non-Gaussian models give better representations in certain regions such as forests, sea surfaces, urban areas, etc.; see, for example, Refs. [6,7,8,9]. In the real scalar case, it is found that in many industrial applications a logistic model is better than a Gaussian model. A logistic model behaves like a standard Gaussian model but with a thicker tail; this is the main reason why the logistic model became very popular in industrial applications. Hence, in modeling multi-look return data from radar and sonar and in other communication problems, logistic-based multivariate and matrix-variate models in the complex domain are likely to give better representations where, currently, Gaussian- and Wishart-based models are used. Hence, it is hoped that the results obtained in this paper and the various models presented here will be highly useful in physics, engineering, communication theory, statistics and related areas.
Note that the $h$-th moments of $(X-\mu)'A(X-\mu)$ and $(\tilde{X}-\tilde{\mu})^{*}A(\tilde{X}-\tilde{\mu})$, for arbitrary $h$, are available from the normalizing constants in extended logistic model 1 by replacing $\gamma$ with $\gamma+h$ and then taking the ratios of the respective normalizing constants.
The next model will involve a positive definite matrix variable. Let $X=(x_{ij})=X'>O$ be a $p\times p$ symmetric positive definite matrix in the real domain. Due to symmetry, there are only $p(p+1)/2$ distinct real scalar variables $x_{ij}$ in $X>O$. When symmetric matrices are involved, the transformations will not be based on Lemmas 1 and 2; we need one more result on Jacobians involving a symmetric matrix. This will be listed here as Lemma 3. For the proof and other details, see [4].
Lemma 3. 
Let the $p\times p$ matrix $X$ be real symmetric, $X=X'$, with $p(p+1)/2$ distinct real scalar variables $x_{ij}$, and let $A$ be a $p\times p$ nonsingular constant matrix. Then,

$$Y=AXA',\ |A|\neq0 \Rightarrow dY=|A|^{p+1}\,dX.$$

Let the $p\times p$ matrix $\tilde{X}=\tilde{X}^{*}$ in the complex domain be Hermitian, and let $A$ be a $p\times p$ nonsingular constant matrix in the real or complex domain. Then,

$$\tilde{Y}=A\tilde{X}A^{*},\ |A|\neq0 \Rightarrow d\tilde{Y}=|\det(A)|^{2p}\,d\tilde{X}=|\det(A^{*}A)|^{p}\,d\tilde{X}.$$

For $0<a<1$, $\Re(\alpha)>0$, $\Re(\beta)>0$, $\Re(\gamma)>\frac{p-1}{2}$, consider the following integral:

$$\int_{X>O}f_2(X)\,dX = c_2\int_{X>O}|X|^{\gamma-\frac{p+1}{2}}\,\frac{e^{-\alpha\,\mathrm{tr}(X)}}{(1+a\,e^{-\mathrm{tr}(X)})^{\alpha+\beta}}\,dX = c_2\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k(-a)^k}{k!}\int_{X>O}|X|^{\gamma-\frac{p+1}{2}}e^{-(\alpha+k)\mathrm{tr}(X)}\,dX$$
$$= c_2\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k(-a)^k}{k!}\,(\alpha+k)^{-p\gamma}\,\Gamma_p(\gamma) = c_2\,\Gamma_p(\gamma)\,\zeta[\{(p\gamma,\alpha)\}:\alpha+\beta;\,;-a]$$

for $p\,\Re(\gamma)>1$, $p\ge1$, where $\zeta(\cdot)$ is defined in (7). Hence, we have the model that begins Section 2.3.

2.3. Extended Logistic Model 2 in the Real Domain

For $X>O$ a $p\times p$ real positive definite matrix, $\Re(\gamma)>\frac{p-1}{2}$, $\Re(\alpha)>0$, $\Re(\beta)>0$, $0<a<1$,

$$f_2(X)\,dX = \big[\Gamma_p(\gamma)\,\zeta[\{(p\gamma,\alpha)\}:\alpha+\beta;\,;-a]\big]^{-1}\,|X|^{\gamma-\frac{p+1}{2}}\,\frac{e^{-\alpha\,\mathrm{tr}(X)}}{(1+a\,e^{-\mathrm{tr}(X)})^{\alpha+\beta}}\,dX.$$
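Extended logistic model 2 can be checked numerically for $p=2$ by integrating over $2\times2$ symmetric positive definite matrices and comparing with the constant $\Gamma_p(\gamma)\,\zeta[\cdot]$ (a sketch; cutoffs, truncation length and parameter values are my choices):

```python
import numpy as np
from scipy.integrate import tplquad
from scipy.special import gamma, poch

# p = 2: integrate |X|^{gam-3/2} e^{-alpha tr(X)} (1 + a e^{-tr(X)})^{-(alpha+beta)}
# over X = [[x11, x12], [x12, x22]] > O and compare with Gamma_2(gam) * zeta series.
alpha, beta, gam, a, p = 1.0, 2.0, 2.0, 0.4, 2
val, _ = tplquad(
    lambda x12, x22, x11: (x11 * x22 - x12**2)**(gam - 1.5)
        * np.exp(-alpha * (x11 + x22)) / (1 + a * np.exp(-(x11 + x22)))**(alpha + beta),
    0, 40, lambda x11: 0, lambda x11: 40,
    lambda x11, x22: -np.sqrt(x11 * x22), lambda x11, x22: np.sqrt(x11 * x22),
    epsabs=1e-6, epsrel=1e-6)
Gamma_2 = np.pi**0.5 * gamma(gam) * gamma(gam - 0.5)   # product form of Gamma_p, p = 2
zeta_val = sum(poch(alpha + beta, k) * (-a)**k / gamma(k + 1) * (alpha + k)**(-p * gam)
               for k in range(120))
assert abs(val - Gamma_2 * zeta_val) < 1e-3
```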
If the $p\times p$ matrix $\tilde{X}>O$ is Hermitian positive definite, then $|X|^{\gamma-\frac{p+1}{2}}$ is to be replaced with $|\det(\tilde{X})|^{\gamma-p}$; the rest of the procedure in the real case goes through, and the model will be as follows in Section 2.4.

2.4. Extended Logistic Model 2 in the Complex Domain

For the $p\times p$ Hermitian positive definite matrix $\tilde{X}=\tilde{X}^{*}>O$, $\Re(\alpha)>0$, $\Re(\beta)>0$, $\Re(\gamma)>p-1$, $0<a<1$, $2p\,\Re(\gamma)>1$, and $\tilde{\Gamma}_p(\gamma)$ as defined in Lemma 2, the model in the complex domain is the following:

$$f_{2a}(\tilde{X})\,d\tilde{X} = \big[\tilde{\Gamma}_p(\gamma)\,\zeta[\{(2p\gamma,\alpha)\}:\alpha+\beta;\,;-a]\big]^{-1}\,|\det(\tilde{X})|^{\gamma-p}\,\frac{e^{-\alpha\,\mathrm{tr}(\tilde{X})}}{(1+a\,e^{-\mathrm{tr}(\tilde{X})})^{\alpha+\beta}}\,d\tilde{X}.$$
Note that one can generalize model 2 by replacing $\mathrm{tr}(X)$ in the exponent with $[\mathrm{tr}(X)]^{\delta}$, $\delta>0$, or with $[\mathrm{tr}(X)]^{\delta_1}$, $\delta_1>0$, in the numerator and $[\mathrm{tr}(X)]^{\delta_2}$, $\delta_2>0$, in the denominator, and $X$ with $X-\mu$, $\mu=E[X]$. Such generalized integrals can be evaluated, but the process will be very lengthy; hence, such generalizations involving a determinant and a power of the trace in the exponent will not be considered here. Now we will look into a model involving a rectangular matrix in the real and complex domains.
Consider a real $p\times q$, $p\le q$, matrix $X$ of rank $p$. Then we know that $S=XX'>O$ (real symmetric positive definite). From Lemma 2, $dX=\frac{\pi^{\frac{pq}{2}}}{\Gamma_p(\frac{q}{2})}|S|^{\frac{q}{2}-\frac{p+1}{2}}\,dS$. Now we examine the evaluation of the following integral:

$$\int_X f_3(X)\,dX = c_3\int_X|XX'|^{\gamma-\frac{p+1}{2}}\,\frac{e^{-\alpha\,\mathrm{tr}(XX')}}{(1+a\,e^{-\mathrm{tr}(XX')})^{\alpha+\beta}}\,dX = c_3\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k(-a)^k}{k!}\,\frac{\pi^{\frac{pq}{2}}}{\Gamma_p(\frac{q}{2})}\int_{S>O}|S|^{\gamma+\frac{q}{2}-\frac{p+1}{2}}e^{-(\alpha+k)\mathrm{tr}(S)}\,dS$$
$$= c_3\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k(-a)^k}{k!}\,\frac{\pi^{\frac{pq}{2}}}{\Gamma_p(\frac{q}{2})}\,\Gamma_p\Big(\gamma+\frac{q}{2}\Big)(\alpha+k)^{-p(\gamma+\frac{q}{2})} = c_3\,\frac{\pi^{\frac{pq}{2}}\,\Gamma_p(\gamma+\frac{q}{2})}{\Gamma_p(\frac{q}{2})}\,\zeta\Big[\Big\{\Big(p\Big(\gamma+\frac{q}{2}\Big),\alpha\Big)\Big\}:\alpha+\beta;\,;-a\Big]$$

for $p(\Re(\gamma)+\frac{q}{2})>1$, $\Re(\alpha)>0$, $\Re(\beta)>0$, $0<a<1$, where $\zeta(\cdot)$ is defined in (7). The binomial expansion is valid since $\mathrm{tr}(XX')>0$ and $0<a\,e^{-\mathrm{tr}(XX')}<1$. The integral is evaluated by making use of the real matrix-variate gamma integral of Lemma 2. Therefore, we have the model shown in Section 2.5.

2.5. Extended Logistic Model 3 in the Real Domain

For $\Re(\gamma+\frac{q}{2})>\frac{p-1}{2}$, $\Re(\alpha)>0$, $\Re(\beta)>0$, $0<a<1$, $p(\Re(\gamma)+\frac{q}{2})>1$, $X$ a $p\times q$ real matrix of rank $p$, $p\le q$, and the $p\times p$ matrix $XX'=S>O$,

$$f_3(X)\,dX=\frac{\Gamma_p(\frac{q}{2})}{\pi^{\frac{pq}{2}}\,\Gamma_p(\gamma+\frac{q}{2})}\Big[\zeta\Big[\Big\{\Big(p\Big(\gamma+\frac{q}{2}\Big),\alpha\Big)\Big\}:\alpha+\beta;\,;-a\Big]\Big]^{-1}\times|XX'|^{\gamma-\frac{p+1}{2}}\,\frac{e^{-\alpha\,\mathrm{tr}(XX')}}{(1+a\,e^{-\mathrm{tr}(XX')})^{\alpha+\beta}}\,dX$$

for $p(\Re(\gamma)+\frac{q}{2})>1$, $p\ge1$, $q\ge1$, $\Re(\alpha)>0$, $\Re(\beta)>0$, $0<a<1$ and $XX'=S>O$; $\zeta(\cdot)$ is defined in (7).
Now we look at the complex domain. Let the $p\times q$ matrix $\tilde{X}$ in the complex domain be of rank $p$, $p\le q$. Then the following are the replacements for obtaining model 3 in the complex domain corresponding to model 3 in the real domain: replace $|XX'|^{\gamma-\frac{p+1}{2}}$ with $|\det(\tilde{X}\tilde{X}^{*})|^{\gamma-p}$. In the transformation $\tilde{S}=\tilde{X}\tilde{X}^{*}$, we have $d\tilde{X}=\frac{\pi^{pq}}{\tilde{\Gamma}_p(q)}|\det(\tilde{S})|^{q-p}\,d\tilde{S}$, where $\tilde{\Gamma}_p(q)$ is defined in Lemma 2. The rest of the procedure is parallel to that in the real case. Hence, model 3 in the complex domain is shown in Section 2.6.

2.6. Extended Logistic Model 3 in the Complex Domain

$$f_{3a}(\tilde{X})\,d\tilde{X}=\frac{\tilde{\Gamma}_p(q)}{\pi^{pq}\,\tilde{\Gamma}_p(\gamma+q)}\Big[\zeta[\{(2p(\gamma+q),\alpha)\}:\alpha+\beta;\,;-a]\Big]^{-1}\times|\det(\tilde{X}\tilde{X}^{*})|^{\gamma-p}\,\frac{e^{-\alpha\,\mathrm{tr}(\tilde{X}\tilde{X}^{*})}}{(1+a\,e^{-\mathrm{tr}(\tilde{X}\tilde{X}^{*})})^{\alpha+\beta}}\,d\tilde{X}$$

for $2p(\Re(\gamma)+q)>1$, $p\ge1$, $q\ge1$, $\Re(\alpha)>0$, $\Re(\beta)>0$, $0<a<1$, $\Re(\gamma)>-q+p-1$ and $\tilde{X}\tilde{X}^{*}=\tilde{S}>O$.
Note 3. 
We can generalize model 3 in the real case by replacing $XX'$ with $A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}}$, where $M=E[X]$ is a $p\times q$ constant matrix; $A>O$ and $B>O$ are $p\times p$ and $q\times q$ constant positive definite matrices, respectively; and $A^{\frac{1}{2}}$ and $B^{\frac{1}{2}}$ are, respectively, the positive definite square roots of $A>O$ and $B>O$. Then the normalizing constant $c_3$ is to be multiplied by $|A|^{\frac{q}{2}}|B|^{\frac{p}{2}}$ in the real case; in the complex case, the normalizing constant is multiplied by $|\det(A)|^{q}|\det(B)|^{p}$, that is, the absolute values of the determinants raised to $q$ and $p$, respectively. $(X-M)'$ in the real case is replaced by $(\tilde{X}-\tilde{M})^{*}$, with $A=A^{*}>O$ and $B=B^{*}>O$ in the complex domain. Therefore, there will be no separate listing for this generalization. Another generalization of model 3 in the real domain is to replace $\mathrm{tr}(A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}})$ with $[\mathrm{tr}(A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}})]^{\delta}$, $\delta>0$; that is, an arbitrary power is taken for the trace in the exponent, either the same $\delta>0$ for both the numerator and denominator exponents or different $\delta_1$ and $\delta_2$, with the corresponding replacements in the complex domain. Even with the same $\delta>0$, the evaluation of the integral becomes very lengthy, and when the $\delta$'s are different, it is more difficult still. The same procedures also work for $\delta<0$, $\delta_1<0$, $\delta_2<0$. These types of generalizations will not be considered here.
Now we consider a case where the multiplicative determinant factor is replaced by a trace, and the exponential trace is given an arbitrary power. Let the $p\times q$ real matrix $X$ be of rank $p$, $p\le q$. Consider $[\mathrm{tr}(XX')]^{\delta}$, $\delta>0$. Since $\mathrm{tr}(XX')$ is the sum of squares of $pq$ real scalar variables, one can apply Lemma 2 by taking a row vector of the $pq$ real scalar variables. Let $u=\mathrm{tr}(XX')\Rightarrow dX=\frac{\pi^{\frac{pq}{2}}}{\Gamma(\frac{pq}{2})}u^{\frac{pq}{2}-1}\,du$, $u>0$. Then let us evaluate the following integral:

$$\int_X f_4(X)\,dX=c_4\int_X[\mathrm{tr}(XX')]^{\gamma}\,\frac{e^{-\alpha[\mathrm{tr}(XX')]^{\delta}}}{(1+a\,e^{-[\mathrm{tr}(XX')]^{\delta}})^{\alpha+\beta}}\,dX=c_4\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k(-a)^k}{k!}\,\frac{\pi^{\frac{pq}{2}}}{\Gamma(\frac{pq}{2})}\int_{u=0}^{\infty}u^{\gamma+\frac{pq}{2}-1}e^{-(\alpha+k)u^{\delta}}\,du$$
$$=c_4\,\frac{\pi^{\frac{pq}{2}}}{\Gamma(\frac{pq}{2})}\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k(-a)^k}{k!}\,\frac{1}{\delta}\,\Gamma\Big(\frac{1}{\delta}\Big(\gamma+\frac{pq}{2}\Big)\Big)(\alpha+k)^{-\frac{1}{\delta}(\gamma+\frac{pq}{2})}=c_4\,\frac{\pi^{\frac{pq}{2}}\,\Gamma\big(\frac{1}{\delta}(\gamma+\frac{pq}{2})\big)}{\Gamma(\frac{pq}{2})\,\delta}\,\zeta\Big[\Big\{\Big(\frac{1}{\delta}\Big(\gamma+\frac{pq}{2}\Big),\alpha\Big)\Big\}:\alpha+\beta;\,;-a\Big]$$

for $\Re(\gamma)+\frac{pq}{2}>\delta$, $\delta>0$, $\Re(\alpha)>0$, $\Re(\beta)>0$ and $0<a<1$; $\zeta(\cdot)$ is defined in (7). From this integral, we can construct the model in Section 2.7.
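The key one-dimensional integral in this evaluation, $\int_0^\infty u^{s-1}e^{-c\,u^{\delta}}\,du = \frac{1}{\delta}\,\Gamma\big(\frac{s}{\delta}\big)\,c^{-\frac{s}{\delta}}$ (obtained by substituting $v=u^{\delta}$), can be checked numerically (arbitrary test values):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

s, delta, c = 2.3, 1.7, 1.4   # arbitrary test values, all positive
val, _ = quad(lambda u: u**(s - 1) * np.exp(-c * u**delta), 0, np.inf)
assert abs(val - gamma(s / delta) * c**(-s / delta) / delta) < 1e-8
```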

2.7. Extended Logistic Model 4 in the Real Domain

$$f_4(X)\,dX=\frac{\delta\,\Gamma(\frac{pq}{2})}{\pi^{\frac{pq}{2}}\,\Gamma\big(\frac{1}{\delta}(\gamma+\frac{pq}{2})\big)}\Big[\zeta\Big[\Big\{\Big(\frac{1}{\delta}\Big(\gamma+\frac{pq}{2}\Big),\alpha\Big)\Big\}:\alpha+\beta;\,;-a\Big]\Big]^{-1}\times[\mathrm{tr}(XX')]^{\gamma}\,\frac{e^{-\alpha[\mathrm{tr}(XX')]^{\delta}}}{(1+a\,e^{-[\mathrm{tr}(XX')]^{\delta}})^{\alpha+\beta}}\,dX$$

for $\Re(\alpha)>0$, $\Re(\beta)>0$, $\Re(\gamma)+\frac{pq}{2}>\delta$, $\delta>0$ and $0<a<1$. The following are the replacements for obtaining model 4 in the complex domain: replace $XX'$ with $\tilde{X}\tilde{X}^{*}$. Then $\mathrm{tr}(\tilde{X}\tilde{X}^{*})$ is the sum of squares of the absolute values of $pq$ complex variables, or the sum of squares of $2pq$ real scalar variables. Therefore, $\frac{pq}{2}$ in the real case will go to $pq$. Then, using steps parallel to those in the real case, one has model 4 in the complex domain, shown in Section 2.8.

2.8. Extended Logistic Model 4 in the Complex Domain

For $\Re(\alpha)>0$, $\Re(\beta)>0$, $\Re(\gamma)+pq>\delta$, $\delta>0$, $p\ge1$, $q\ge1$, $\tilde{X}\tilde{X}^{*}=\tilde{S}>O$ and $0<a<1$, with $\zeta(\cdot)$ given in (7),

$$f_{4a}(\tilde{X})\,d\tilde{X}=\frac{\delta\,\Gamma(pq)}{\pi^{pq}\,\Gamma\big(\frac{1}{\delta}(\gamma+pq)\big)}\Big[\zeta\Big[\Big\{\Big(\frac{1}{\delta}(\gamma+pq),\alpha\Big)\Big\}:\alpha+\beta;\,;-a\Big]\Big]^{-1}\times[\mathrm{tr}(\tilde{X}\tilde{X}^{*})]^{\gamma}\,\frac{e^{-\alpha[\mathrm{tr}(\tilde{X}\tilde{X}^{*})]^{\delta}}}{(1+a\,e^{-[\mathrm{tr}(\tilde{X}\tilde{X}^{*})]^{\delta}})^{\alpha+\beta}}\,d\tilde{X}.$$
Note 4. 
One can generalize model 4 as follows: take the $\delta$ in the numerator exponent as $\delta_1$ and that in the denominator exponent as $\delta_2\neq\delta_1$. In this case, the evaluation of the integral becomes very complicated. One can introduce a factor containing a determinant, such as $|XX'|^{\rho}$, $\Re(\rho)>0$, as an additional factor into model 4 in the real case and $|\det(\tilde{X}\tilde{X}^{*})|^{\rho}$, $\Re(\rho)>0$, in the complex case. Then the evaluation of the integral will become very lengthy; see [10] and the references therein. Hence, these cases will not be discussed here. Another possible generalization is to replace $XX'$ with $A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}}$, where the $p\times p$ constant matrix $A>O$, the $q\times q$ constant matrix $B>O$, $M=E[X]$ and $A^{\frac{1}{2}}$ is the positive definite square root of $A>O$ in the real case, with the corresponding replacements in the complex domain, $A=A^{*}>O$, $B=B^{*}>O$. This generalization brings only minor changes: in the real case, the normalizing constant is multiplied by $|A|^{\frac{q}{2}}|B|^{\frac{p}{2}}$, and in the complex case, it is multiplied by $|\det(A)|^{q}|\det(B)|^{p}$. The particular cases here are (1) $M=O$; (2) $M=O$, $A=I_p$ or $B=I_q$; (3) $M=O$, $A=I_p$ and $B=I_q$. This generalized model will not be listed as a separate model here.

3. Some Bayesian Models

Our aim here will be to look into a few Bayesian procedures which will end up in the extended zeta function. Bayesian models are very important in many applied areas. According to [1] “Quantum Mechanics is just the Bayesian theory generalised to the complex Hilbert space”. The models that we construct in this section will hold in multivariate and matrix-variate complex domains, and will hold in complex Hilbert spaces as well; hence, the models are expected to be highly useful in many areas, especially in Quantum Physics.
Consider the standard Gaussian $Z\sim N_m(O,I)$ in the real case and $\tilde{Z}\sim\tilde{N}_m(O,I)$ in the complex case, where $Z$ is $m\times1$, $Z'=[z_1,\ldots,z_m]$, and $\tilde{Z}$ is $m\times1$, $\tilde{Z}'=[\tilde{z}_1,\ldots,\tilde{z}_m]$, and the functions, denoted by $f(Z)$ in the real case and $f(\tilde{Z})$ in the complex case, are the following:

$$f(Z)=\frac{1}{(2\pi)^{\frac{m}{2}}}\,e^{-\frac{1}{2}Z'Z};\qquad f(\tilde{Z})=\frac{1}{\pi^{m}}\,e^{-\tilde{Z}^{*}\tilde{Z}}.$$

We will include the normalizing constants in all our models so that they will also be statistical densities. Consider a constant $p\times m$, $p\le m$, matrix $H$ of rank $p$. Then, in the real case, $X=HZ\sim N_p(O,HH')$, and $X$ is called a scale mixture. Here $HH'>O$ (real positive definite). Let $A=(HH')^{-1}$. Then the function or density of $X=HZ$, at a given $A$, denoted by $f(X|A)$, is the following:

$$f(X|A)\,dX=\frac{|A|^{\frac{1}{2}}}{(2\pi)^{\frac{p}{2}}}\,e^{-\frac{1}{2}\mathrm{tr}(AXX')}\,dX,\quad A>O,\ \mathrm{tr}(AXX')=X'AX, \tag{27}$$

for $X'=[x_1,\ldots,x_p]$, $-\infty<x_j<\infty$, $j=1,\ldots,p$. In the complex case, $\tilde{X}=H\tilde{Z}$, and the function or density of $\tilde{X}$ in the conditional space, denoted by $f_a(\tilde{X}|A)$, is the following:

$$f_a(\tilde{X}|A)\,d\tilde{X}=\frac{|\det(A)|}{\pi^{p}}\,e^{-\tilde{X}^{*}A\tilde{X}}\,d\tilde{X},\quad A=A^{*}>O,\ \mathrm{tr}(A\tilde{X}\tilde{X}^{*})=\tilde{X}^{*}A\tilde{X}, \tag{28}$$

for $\tilde{X}'=[\tilde{x}_1,\ldots,\tilde{x}_p]$, $|\tilde{x}_j|^2=x_{j1}^2+x_{j2}^2$, $-\infty<x_{jk}<\infty$, $k=1,2$, $j=1,\ldots,p$. We will consider several cases of prior functions or prior distributions for the parameter matrix $A$.

3.1. Bayesian Models

In the real case, let $g(A)$ be the prior function or prior density of $A$, where

$$g(A)\,dA=\frac{|C|^{\gamma}}{\Gamma_p(\gamma)}\,|A|^{\gamma-\frac{p+1}{2}}e^{-\mathrm{tr}(CA)}\,dA,\quad C>O,\ A>O,\ \Re(\gamma)>\frac{p-1}{2} \tag{29}$$

where $\Gamma_p(\gamma)$ is defined in Lemma 2. The corresponding function in the complex domain is denoted by $g_a(\tilde{A})$, where $A$ and $C$ in the complex case are Hermitian positive definite:

$$g_a(\tilde{A})\,d\tilde{A}=\frac{|\det(C)|^{\gamma}}{\tilde{\Gamma}_p(\gamma)}\,|\det(\tilde{A})|^{\gamma-p}e^{-\mathrm{tr}(C\tilde{A})}\,d\tilde{A},\quad \tilde{A}=\tilde{A}^{*}>O,\ C=C^{*}>O,\ \Re(\gamma)>p-1 \tag{30}$$

where $\tilde{\Gamma}_p(\gamma)$ is defined in Lemma 2. Note that $C$ in (30) may be complex or real. It is assumed that the $p\times p$ constant positive definite matrices $C$ in the real and complex cases are known. If $C$ has its own prior function or prior density, then we have one more step in the Bayesian procedure. The unconditional function or unconditional density of $X$ and $\tilde{X}$ will be called the Bayesian model in this paper; these will be denoted by $f_1(X)$ and $f_{1a}(\tilde{X})$, respectively. The Bayesian procedure involves one more step of computing the posterior function or posterior density, denoted by $g_1(A|X)=f(X|A)g(A)/f_1(X)$ in the real case and $g_{1a}(\tilde{A}|\tilde{X})=f_a(\tilde{X}|\tilde{A})\,g_a(\tilde{A})/f_{1a}(\tilde{X})$ in the complex case, and then studying the properties of $A$ in the real case and $\tilde{A}$ in the complex case in the conditional spaces of $A|X$ and $\tilde{A}|\tilde{X}$, respectively. In the above real case, $f_1(X)$ is as follows in Section 3.2.

3.2. Bayesian Model 1 in the Real Case

$$ \begin{aligned} f_1(X)&=\int_{A>O} f(X\mid A)\,g(A)\,dA=\frac{|C|^{\gamma}}{(2\pi)^{\frac{p}{2}}\,\Gamma_p(\gamma)}\int_{A>O}|A|^{\gamma+\frac{1}{2}-\frac{p+1}{2}}\,e^{-\frac{1}{2}\operatorname{tr}(AXX')-\operatorname{tr}(CA)}\,dA\\ &=\frac{|C|^{\gamma}\,\Gamma_p(\gamma+\frac{1}{2})}{(2\pi)^{\frac{p}{2}}\,\Gamma_p(\gamma)}\,\Big|C+\frac{1}{2}XX'\Big|^{-(\gamma+\frac{1}{2})},\quad \Re\Big(\gamma+\frac{1}{2}\Big)>\frac{p-1}{2}\\ &=\frac{\Gamma_p(\gamma+\frac{1}{2})\,|C|^{-\frac{1}{2}}}{\Gamma_p(\gamma)\,(2\pi)^{\frac{p}{2}}}\Big\{\prod_{j=1}^{p}(1+\lambda_j)\Big\}^{-(\gamma+\frac{1}{2})}, \end{aligned} $$
where λ_1, …, λ_p are the eigenvalues of C^{−1/2}(½ XX')C^{−1/2}, C = C' > O, and ℜ(γ) > (p − 1)/2.
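The passage from the determinant form to the eigenvalue product above can be verified numerically. The following sketch (numpy, with a randomly generated C and X, purely illustrative) checks that |C|^γ |C + ½XX'|^{−(γ+½)} = |C|^{−1/2} ∏_j (1 + λ_j)^{−(γ+½)}; note that since XX' has rank 1, only one λ_j is nonzero:

```python
import numpy as np

rng = np.random.default_rng(0)
p, gamma_ = 4, 2.3
M = rng.standard_normal((p, p))
C = M @ M.T + p * np.eye(p)            # a random symmetric positive definite C
X = rng.standard_normal((p, 1))

# eigenvalues of C^{-1/2} (X X'/2) C^{-1/2}; same as those of C^{-1} (X X'/2)
lam = np.linalg.eigvals(np.linalg.solve(C, 0.5 * X @ X.T)).real

lhs = np.linalg.det(C) ** gamma_ * np.linalg.det(C + 0.5 * X @ X.T) ** -(gamma_ + 0.5)
rhs = np.linalg.det(C) ** -0.5 * np.prod(1.0 + lam) ** -(gamma_ + 0.5)
assert np.isclose(lhs, rhs)
```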

3.3. Bayesian Model 1 in the Complex Domain

Let the unconditional function or unconditional density of X ˜ in the complex domain be denoted by f 1 a ( X ˜ ) . Then, going through parallel steps to those in the real case, we have f 1 a ( X ˜ ) as the following:
$$ f_{1a}(\tilde{X})\,d\tilde{X}=\frac{\tilde{\Gamma}_p(\gamma+1)}{\pi^{p}\,\tilde{\Gamma}_p(\gamma)\,|\det(C)|}\Big\{\prod_{j=1}^{p}(1+\lambda_j)\Big\}^{-2(\gamma+1)}\,d\tilde{X} $$
for ℜ(γ) > p − 1 and C = C* > O, where λ_j > 0, j = 1, …, p, are the eigenvalues of C^{−1/2} X̃ X̃* C^{*−1/2}. We have used the same notation for the eigenvalues in the complex case as in the real case; in the complex case the eigenvalues are again real, because the matrix here is Hermitian positive definite, but they need not be the same as those in the real case.
The next step in the Bayesian procedure will be illustrated by computing the expected value of |A| in the conditional space of A, given X, in the real case. The conditional density of A, given X, is g_1(A | X) = f(X | A) g(A)/f_1(X), and the Bayes estimate of |A|, given X, is E[|A|] taken with respect to g_1(A | X); it is the following, where E(·) denotes the expected value of (·):
$$ \begin{aligned} \int_{A>O}|A|\,g_1(A\mid X)\,dA&=\frac{\big|C+\frac{1}{2}XX'\big|^{\gamma+\frac{1}{2}}}{\Gamma_p(\gamma+\frac{1}{2})}\int_{A>O}|A|^{\gamma+\frac{3}{2}-\frac{p+1}{2}}\,e^{-\operatorname{tr}(A(C+\frac{1}{2}XX'))}\,dA\\ &=\frac{\Gamma_p(\gamma+\frac{3}{2})}{\Gamma_p(\gamma+\frac{1}{2})}\,\Big|C+\frac{1}{2}XX'\Big|^{-1}=\frac{\Gamma(\gamma+\frac{3}{2})\,\Gamma(\gamma+1)}{\Gamma(\gamma+\frac{3}{2}-\frac{p}{2})\,\Gamma(\gamma+1-\frac{p}{2})}\,\Big|C+\frac{1}{2}XX'\Big|^{-1}. \end{aligned} $$
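For p = 1 the matrices reduce to scalars, the posterior of A given x is an ordinary gamma distribution with shape γ + ½ and rate c + x²/2, and the Bayes estimate above reduces to (γ + ½)/(c + x²/2). A Monte Carlo sketch (numpy, illustrative parameter values only) confirms this scalar case:

```python
import numpy as np

rng = np.random.default_rng(1)
gamma_, c, x = 3.0, 2.0, 1.5                       # illustrative scalar (p = 1) values
shape, rate = gamma_ + 0.5, c + 0.5 * x ** 2       # posterior of A | x is Gamma(shape, rate)
samples = rng.gamma(shape, 1.0 / rate, size=10 ** 6)
closed_form = (gamma_ + 0.5) / (c + 0.5 * x ** 2)  # Bayes estimate from the formula, p = 1
assert abs(samples.mean() - closed_form) < 1e-2
```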
For the next model, we again start with a Gaussian conditional function, as in (27) and (28). For the prior function or prior density, we consider a matrix-variate logistic-type function. Let the prior function or prior density for A in the real case be the following, again denoted by g(A):
$$ g(A)\,dA=\frac{c}{\Gamma_p(\gamma)}\,|A|^{\gamma-\frac{p+1}{2}}\,\frac{e^{-\alpha\operatorname{tr}(A)}}{(1+a\,e^{-\operatorname{tr}(A)})^{\alpha+\beta}}\,dA,\quad 0<a<1,\ A>O, $$
for ℜ(α) > 0, ℜ(β) > 0 and ℜ(γ) > (p − 1)/2, where the normalizing constant c is already evaluated in Section 2. The corresponding prior function or prior density in the complex domain, denoted by g_a(Ã), will be the following:
$$ g_a(\tilde{A})\,d\tilde{A}=\frac{\tilde{c}}{\tilde{\Gamma}_p(\gamma)}\,|\det(\tilde{A})|^{\gamma-p}\,\frac{e^{-\alpha\operatorname{tr}(\tilde{A})}}{(1+a\,e^{-\operatorname{tr}(\tilde{A})})^{\alpha+\beta}}\,d\tilde{A},\quad \tilde{A}=\tilde{A}^{*}>O,\ 0<a<1, $$
for ℜ(α) > 0, ℜ(β) > 0 and ℜ(γ) > p − 1.
Then, the unconditional function or density of X in the real case, again denoted by f 1 ( X ) , is as follows in Section 3.4.

3.4. Bayesian Model 2 in the Real Domain

$$ \begin{aligned} f_1(X)&=\frac{c}{(2\pi)^{\frac{p}{2}}\,\Gamma_p(\gamma)}\int_{A>O}|A|^{\gamma+\frac{1}{2}-\frac{p+1}{2}}\,e^{-\frac{1}{2}\operatorname{tr}(AXX')}\,\frac{e^{-\alpha\operatorname{tr}(A)}}{(1+a\,e^{-\operatorname{tr}(A)})^{\alpha+\beta}}\,dA\\ &=\frac{c}{(2\pi)^{\frac{p}{2}}\,\Gamma_p(\gamma)}\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k\,(-a)^k}{k!}\int_{A>O}|A|^{\gamma+\frac{1}{2}-\frac{p+1}{2}}\,e^{-(\alpha+k)\operatorname{tr}(A)-\frac{1}{2}\operatorname{tr}(AXX')}\,dA\\ &=\frac{c\,\Gamma_p(\gamma+\frac{1}{2})}{(2\pi)^{\frac{p}{2}}\,\Gamma_p(\gamma)}\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k\,(-a)^k}{k!}\,\Big|(\alpha+k)I+\frac{1}{2}XX'\Big|^{-(\gamma+\frac{1}{2})}. \end{aligned}\tag{33} $$
The corresponding f 1 a ( X ˜ ) in the complex domain is the following:
$$ f_{1a}(\tilde{X})=\frac{\tilde{c}\,\tilde{\Gamma}_p(\gamma+1)}{\pi^{p}\,\tilde{\Gamma}_p(\gamma)}\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k\,(-a)^k}{k!}\,\big|\det\big((\alpha+k)I+\tilde{X}\tilde{X}^{*}\big)\big|^{-(\gamma+1)}. $$
By using the properties of the determinant of a partitioned matrix, we have the following equivalent relationships in the real case:
$$ \Big|(\alpha+k)I+\frac{1}{2}XX'\Big|=\begin{vmatrix}1&-X'\\ \frac{1}{2}X&(\alpha+k)I\end{vmatrix}=(\alpha+k)^{p}\Big[1+\frac{X'X}{2(\alpha+k)}\Big]=(\alpha+k)^{p-1}\Big[\alpha+\frac{1}{2}X'X+k\Big]. $$
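The partitioned-determinant identity above is straightforward to verify numerically. The following sketch (numpy, arbitrary illustrative values) checks |(α+k)I + ½XX'| = (α+k)^{p−1}[α + k + ½X'X]:

```python
import numpy as np

rng = np.random.default_rng(2)
p, alpha, k = 5, 1.7, 3                      # arbitrary illustrative values
X = rng.standard_normal((p, 1))
lhs = np.linalg.det((alpha + k) * np.eye(p) + 0.5 * X @ X.T)
rhs = (alpha + k) ** (p - 1) * (alpha + k + 0.5 * float(X.T @ X))
assert np.isclose(lhs, rhs)
```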
Then, f 1 ( X ) of (33), that is, in the real case, is the following:
$$ \begin{aligned} f_1(X)&=\frac{c\,\Gamma_p(\gamma+\frac{1}{2})}{(2\pi)^{\frac{p}{2}}\,\Gamma_p(\gamma)}\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k\,(-a)^k}{k!}\,(\alpha+k)^{-(\gamma+\frac{1}{2})(p-1)}\Big[\Big(\alpha+\frac{1}{2}X'X\Big)+k\Big]^{-(\gamma+\frac{1}{2})}\\ &=\frac{c\,\Gamma_p(\gamma+\frac{1}{2})}{(2\pi)^{\frac{p}{2}}\,\Gamma_p(\gamma)}\,\zeta\Big[\Big\{\Big(\Big(\gamma+\frac{1}{2}\Big)(p-1),\alpha\Big),\Big(\gamma+\frac{1}{2},\alpha+\frac{1}{2}X'X\Big)\Big\}:\alpha+\beta;\,;a\Big]\\ &=\frac{\Gamma_p(\gamma+\frac{1}{2})}{(2\pi)^{\frac{p}{2}}\,\Gamma_p(\gamma)}\,\frac{\zeta[\{((\gamma+\frac{1}{2})(p-1),\alpha),(\gamma+\frac{1}{2},\alpha+\frac{1}{2}X'X)\}:\alpha+\beta;\,;a]}{\zeta[\{(p\gamma,\alpha)\}:\alpha+\beta;\,;a]} \end{aligned} $$
for ℜ(γ) > (p − 1)/2, p ℜ(γ) > 1, ℜ(γ) + ½ > 1, ℜ(α) > 0, ℜ(β) > 0, 0 < a < 1, and X p × 1 real. By using steps parallel to those in the real case, we can obtain the corresponding unconditional function or density in the complex domain as follows in Section 3.5.

3.5. Bayesian Model 2 in the Complex Domain

$$ f_{1a}(\tilde{X})=\frac{\tilde{\Gamma}_p(\gamma+1)}{\pi^{p}\,\tilde{\Gamma}_p(\gamma)}\,\frac{\zeta[\{(2(\gamma+1)(p-1),\alpha),(2(\gamma+1),\alpha+\tilde{X}^{*}\tilde{X})\}:\alpha+\beta;\,;a]}{\zeta[\{(2p\gamma,\alpha)\}:\alpha+\beta;\,;a]} $$
for ℜ(γ) > p − 1, 2(ℜ(γ) + 1) > 1, ℜ(α) > 0, ℜ(β) > 0, 0 < a < 1, and X̃ p × 1.
For the next model, we start with a real multivariate Gaussian function or density whose parameter matrix is the covariance matrix, a p × p real positive definite matrix B > O. The Gaussian function in the real domain is the following, again denoted by f(X | B):
$$ f(X\mid B)\,dX=\frac{1}{(2\pi)^{\frac{p}{2}}\,|B|^{\frac{1}{2}}}\,e^{-\frac{1}{2}\operatorname{tr}(B^{-1}XX')}\,dX,\quad B>O,\ X'=(x_1,\dots,x_p), $$
for −∞ < x_j < ∞ and j = 1, …, p. In the complex domain, the corresponding function or density is the following:
$$ f_a(\tilde{X}\mid B)\,d\tilde{X}=\frac{1}{\pi^{p}\,|\det(B)|}\,e^{-\operatorname{tr}(B^{-1}\tilde{X}\tilde{X}^{*})}\,d\tilde{X},\quad B=B^{*}>O,\ \tilde{X}'=(\tilde{x}_1,\dots,\tilde{x}_p), $$
for x̃_j = x_{j1} + i x_{j2}, −∞ < x_{jk} < ∞, k = 1, 2, j = 1, …, p, and i = √(−1). The two models considered so far had A = B^{−1} with a prior function or density. When B itself has a prior function or density, the problem becomes more complicated: the integral to be evaluated is then of the matrix-variate Bessel or Krätzel type. Let the prior function or density for B be an extended logistic function or density. We denote the prior function in the real and complex domains by g(B) and g_a(B), respectively, where in the real case B = B' > O and in the complex case B = B* > O. In the real case, let
$$ g(B)\,dB=\frac{c}{\Gamma_p(\gamma)}\,|B|^{\gamma-\frac{p+1}{2}}\,\frac{e^{-\alpha\operatorname{tr}(B)}}{(1+a\,e^{-\operatorname{tr}(B)})^{\alpha+\beta}}\,dB $$
for ℜ(α) > 0, ℜ(β) > 0, 0 < a < 1, ℜ(γ) > (p − 1)/2 and c^{−1} = ζ[{(pγ, α)} : α+β; ; a]. The corresponding prior function or density in the complex domain can be obtained by replacing |B| with |det(B)|, (p+1)/2 with p, Γ_p(γ) with Γ̃_p(γ), and the condition ℜ(γ) > (p − 1)/2 with ℜ(γ) > p − 1; hence it is not listed separately, in order to save space. Then, the unconditional density of X in the real case, again denoted by f_1(X), is as follows in Section 3.6.

3.6. Bayesian Model 3 in the Real Domain

$$ \begin{aligned} f_1(X)&=\int_{B>O} f(X\mid B)\,g(B)\,dB=\frac{c}{\Gamma_p(\gamma)\,(2\pi)^{\frac{p}{2}}}\int_{B>O}|B|^{\gamma-\frac{1}{2}-\frac{p+1}{2}}\,e^{-\frac{1}{2}\operatorname{tr}(B^{-1}XX')}\,\frac{e^{-\alpha\operatorname{tr}(B)}}{(1+a\,e^{-\operatorname{tr}(B)})^{\alpha+\beta}}\,dB\\ &=\frac{c}{\Gamma_p(\gamma)\,(2\pi)^{\frac{p}{2}}}\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k\,(-a)^k}{k!}\int_{B>O}|B|^{\gamma-\frac{1}{2}-\frac{p+1}{2}}\,e^{-\frac{1}{2}\operatorname{tr}(B^{-1}XX')-(\alpha+k)\operatorname{tr}(B)}\,dB \end{aligned}\tag{38} $$
for 0 < a < 1, ℜ(α) > 0, ℜ(β) > 0, B = B' > O, ℜ(γ) > (p − 1)/2, and X p × 1 real. Direct evaluation of the integral in (38) is difficult because it is a real matrix-variate Bessel-type or Krätzel-type integral, involving both B and B^{−1} in the exponent. Ref. [11] evaluated the general form of this integral in the real scalar variable case, in connection with the generalized reaction-rate probability integral in nuclear reaction-rate theory. We can instead consider the Fourier transform of f_1(X) in (38) by taking the expected value of a conditional expectation. Let T be a p × 1 parameter vector. The Fourier transform of f_1(X), denoted by F_{f_1}(T), is E[e^{iT'X}], i = √(−1). This can be evaluated by taking the conditional expectation at a given B and then taking the expectation over B. That is,
$$ F_{f_1}(T)=E[e^{iT'X}]=E\big[E[e^{iT'X}\mid B]\big]. $$
But from the real Gaussian density, the conditional expectation of e^{iT'X}, given B, is e^{−(1/2)T'BT}, and in the complex case it is e^{−(1/4)T*BT}, where T* is the conjugate transpose of T; see, for example, [12]. Observe that in the real case (2π)^{−p/2} |B|^{−1/2} e^{−(1/2)X'B^{−1}X} is now replaced by e^{−(1/2)T'BT}, and in the complex case [π^p |det(B)|]^{−1} e^{−X̃*B^{−1}X̃} is replaced by e^{−T*BT}. Now, take the integral over B to complete the integration in the real case:
$$ \int_{B>O}|B|^{\gamma-\frac{p+1}{2}}\,e^{-(\alpha+k)\operatorname{tr}(B)-\frac{1}{2}\operatorname{tr}(BTT')}\,dB=\Gamma_p(\gamma)\,\Big|(\alpha+k)I+\frac{1}{2}TT'\Big|^{-\gamma} $$
for ℜ(γ) > (p − 1)/2. By using the properties of the determinant of a partitioned matrix, we have the following:
$$ \Big|(\alpha+k)I+\frac{1}{2}TT'\Big|=\begin{vmatrix}1&-T'\\ \frac{1}{2}T&(\alpha+k)I\end{vmatrix}=|(\alpha+k)I|\Big[1+\frac{\frac{1}{2}T'T}{\alpha+k}\Big]=(\alpha+k)^{p-1}\Big[\alpha+k+\frac{1}{2}T'T\Big] $$
$$ \Rightarrow\quad \Big|(\alpha+k)I+\frac{1}{2}TT'\Big|^{-\gamma}=(\alpha+k)^{-\gamma(p-1)}\Big[\alpha+k+\frac{1}{2}T'T\Big]^{-\gamma}. $$
The corresponding quantity in the complex domain is (α+k)^{−2γ(p−1)} [α + k + T*T]^{−2γ}. The unconditional function or density of X in the real case is now available from the inverse Fourier transform of [α + k + ½T'T]^{−γ}. In order to evaluate it, we will use the technique used in [12]. We can replace [α + k + ½T'T]^{−γ} with an equivalent integral, namely,
$$ \Big[\alpha+k+\frac{1}{2}T'T\Big]^{-\gamma}=\frac{1}{\Gamma(\gamma)}\int_0^{\infty}y^{\gamma-1}\,e^{-y[\alpha+k+\frac{1}{2}T'T]}\,dy=\frac{1}{\Gamma(\gamma)}\int_0^{\infty}y^{\gamma-1}\,e^{-y(\alpha+k)}\,e^{-\frac{y}{2}T'T}\,dy,\quad \Re(\gamma)>0, $$
where y is a real positive scalar. The Fourier inverse of e^{−(y/2)T'T} is \(\frac{1}{y^{\frac{1}{2}}(2\pi)^{\frac{p}{2}}}e^{-\frac{1}{2y}X'X}\). Hence, the Fourier inverse coming from (39) is the following:
$$ \begin{aligned} f_1(X)&=\frac{c}{\Gamma_p(\gamma)}\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k\,(-a)^k}{k!}\,\frac{\Gamma_p(\gamma)}{\Gamma(\gamma)}\,\frac{1}{(2\pi)^{\frac{p}{2}}}\,(\alpha+k)^{-(p-1)\gamma}\int_0^{\infty}y^{\gamma-\frac{1}{2}-1}\,e^{-y(\alpha+k)-\frac{1}{2y}X'X}\,dy\\ &=\frac{c}{(2\pi)^{\frac{p}{2}}\,\Gamma(\gamma)}\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k\,(-a)^k}{k!}\,(\alpha+k)^{-(p-1)\gamma}\int_0^{\infty}y^{\gamma-\frac{1}{2}-1}\,e^{-y(\alpha+k)-\frac{1}{2y}X'X}\,dy,\quad c^{-1}=\zeta[\{(p\gamma,\alpha)\}:\alpha+\beta;\,;a]. \end{aligned}\tag{41} $$
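The scalar gamma-integral representation [α + k + ½T'T]^{−γ} = Γ(γ)^{−1} ∫_0^∞ y^{γ−1} e^{−y[α+k+½T'T]} dy used in reaching (41) can be checked by direct numerical quadrature. The sketch below (numpy trapezoid rule, illustrative scalar values) verifies it:

```python
import numpy as np
from math import gamma

b, g = 2.7, 1.9                     # b stands in for alpha + k + (1/2)T'T; g for gamma
y = np.linspace(1e-12, 40.0, 400_001)
f = y ** (g - 1.0) * np.exp(-b * y)
integral = float(np.sum((f[1:] + f[:-1]) * 0.5 * np.diff(y)))   # trapezoid rule
closed_form = gamma(g) * b ** (-g)
assert abs(integral - closed_form) / closed_form < 1e-5
```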
For evaluating the integral in (41), we will use the following lemma:
Lemma 4. 
For δ ≠ ±ν, ν = 0, 1, 2, …, ℜ(ρ) > 0, ℜ(γ) > 0,
$$ P(\gamma)=\int_0^{\infty}t^{\delta-1}\,e^{-\rho t-\frac{\gamma}{t}}\,dt=\Gamma(\delta)\,\rho^{-\delta}\,{}_0F_1(\,;1-\delta;\gamma\rho)+\gamma^{\delta}\,\Gamma(-\delta)\,{}_0F_1(\,;1+\delta;\gamma\rho), $$
where ₀F₁(·) is the Bessel hypergeometric series.
Proof. 
Let us take the Mellin transform of P ( γ ) with Mellin parameter s. Then,
$$ M_P(s)=\int_0^{\infty}\gamma^{s-1}P(\gamma)\,d\gamma=\int_0^{\infty}t^{\delta-1}e^{-\rho t}\Big[\int_0^{\infty}\gamma^{s-1}e^{-\frac{\gamma}{t}}\,d\gamma\Big]dt=\Gamma(s)\,\Gamma(\delta+s)\,\rho^{-(\delta+s)},\quad \Re(s)>0,\ \Re(\delta+s)>0. $$
From the inverse Mellin transform,
$$ P(\gamma)=\frac{\rho^{-\delta}}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Gamma(s)\,\Gamma(\delta+s)\,(\gamma\rho)^{-s}\,ds,\quad i=\sqrt{-1}. $$
For δ ≠ ±ν, ν = 0, 1, …, the poles of the integrand are simple, and the integral can then be evaluated as the sum of the residues at the poles of Γ(s) and Γ(δ+s). The poles coming from Γ(s) give the sum of residues, denoted by S_1, as the following:
$$ S_1=\rho^{-\delta}\sum_{\nu=0}^{\infty}\frac{(-1)^{\nu}}{\nu!}\,\Gamma(\delta-\nu)\,(\gamma\rho)^{\nu}=\Gamma(\delta)\,\rho^{-\delta}\sum_{\nu=0}^{\infty}\frac{1}{(1-\delta)_{\nu}}\,\frac{(\gamma\rho)^{\nu}}{\nu!}=\Gamma(\delta)\,\rho^{-\delta}\,{}_0F_1(\,;1-\delta;\gamma\rho), $$
where ₀F₁(·) is a Bessel hypergeometric series; the properties used are lim_{s→−ν}(s+ν)Γ(s) = (−1)^ν/ν! and Γ(γ−ν) = (−1)^ν Γ(γ)/(1−γ)_ν whenever 1−γ ≠ 0, −1, … and Γ(γ) is defined. The sum of the residues coming from Γ(δ+s), denoted by S_2, is the following:
$$ S_2=\gamma^{\delta}\,\Gamma(-\delta)\,{}_0F_1(\,;1+\delta;\gamma\rho). $$
Note that S 1 + S 2 proves the lemma.
Then, the integral part in (41) is the following:
$$ \int_0^{\infty}y^{\gamma-\frac{1}{2}-1}\,e^{-y(\alpha+k)-\frac{X'X}{2y}}\,dy=\Gamma\Big(\gamma-\frac{1}{2}\Big)(\alpha+k)^{-(\gamma-\frac{1}{2})}\,{}_0F_1\Big(\,;\frac{3}{2}-\gamma;\frac{1}{2}(\alpha+k)X'X\Big)+\Gamma\Big(\frac{1}{2}-\gamma\Big)\Big(\frac{1}{2}X'X\Big)^{\gamma-\frac{1}{2}}\,{}_0F_1\Big(\,;\gamma+\frac{1}{2};\frac{1}{2}(\alpha+k)X'X\Big) \tag{42} $$
for γ − ½ ≠ ±ν, ν = 0, 1, 2, …, ℜ(α) > 0 and ℜ(γ) > ½. In the complex case, the corresponding integral will be the following:
$$ \int_0^{\infty}y^{\gamma-1-1}\,e^{-y(\alpha+k)-\frac{\tilde{X}^{*}\tilde{X}}{y}}\,dy=\Gamma(\gamma-1)\,(\alpha+k)^{-(\gamma-1)}\,{}_0F_1(\,;2-\gamma;(\alpha+k)\tilde{X}^{*}\tilde{X})+\Gamma(1-\gamma)\,(\tilde{X}^{*}\tilde{X})^{\gamma-1}\,{}_0F_1(\,;\gamma;(\alpha+k)\tilde{X}^{*}\tilde{X}) $$
for γ − 1 ≠ ±ν, ν = 0, 1, 2, …, ℜ(α) > 0 and ℜ(γ) > 1.    □
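Lemma 4 can be cross-checked numerically. The sketch below (numpy quadrature plus a hand-rolled ₀F₁ series; illustrative non-integer δ) compares direct numerical integration of P(γ) with the two-term ₀F₁ expression:

```python
import numpy as np
from math import gamma

def hyp0f1(b, z, terms=80):
    """Bessel hypergeometric series 0F1(; b; z) = sum_k z^k / ((b)_k k!)."""
    s, term = 1.0, 1.0
    for k in range(1, terms):
        term *= z / ((b + k - 1.0) * k)
        s += term
    return s

delta, rho, gam = 0.7, 1.3, 0.9     # illustrative values, delta not an integer
t = np.linspace(1e-6, 60.0, 600_001)
f = t ** (delta - 1.0) * np.exp(-rho * t - gam / t)
numeric = float(np.sum((f[1:] + f[:-1]) * 0.5 * np.diff(t)))    # trapezoid rule
lemma = gamma(delta) * rho ** -delta * hyp0f1(1.0 - delta, gam * rho) \
    + gam ** delta * gamma(-delta) * hyp0f1(1.0 + delta, gam * rho)
assert abs(numeric - lemma) / abs(lemma) < 1e-4
```

The two ₀F₁ terms largely cancel here, which is why the Bessel-function form of the same integral (Note 5) is often preferable numerically.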
Note 5. 
The integral in Lemma 4 can also be evaluated in terms of a modified Bessel function of the second kind:
$$ \int_0^{\infty}t^{\nu-1}\,e^{-\alpha t-\frac{\gamma}{t}}\,dt=2\Big(\frac{\gamma}{\alpha}\Big)^{\frac{\nu}{2}}K_{\nu}(2\sqrt{\alpha\gamma}) $$
for ℜ(α) > 0 and ℜ(γ) > 0, where K_ν is the modified Bessel function of the second kind of order ν; see, for example, page 368 of [13]. Hence, from (41) and (42), we have the unconditional density of X in the real case as the following:
$$ f_1(X)=\frac{c}{(2\pi)^{\frac{p}{2}}\,\Gamma(\gamma)}\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k\,(-a)^k}{k!}\,(\alpha+k)^{-(p-1)\gamma}\Big\{\Gamma\Big(\gamma-\frac{1}{2}\Big)(\alpha+k)^{-(\gamma-\frac{1}{2})}\,{}_0F_1\Big(\,;\frac{3}{2}-\gamma;\frac{1}{2}(\alpha+k)X'X\Big)+\Gamma\Big(\frac{1}{2}-\gamma\Big)\Big(\frac{1}{2}X'X\Big)^{\gamma-\frac{1}{2}}\,{}_0F_1\Big(\,;\gamma+\frac{1}{2};\frac{1}{2}(\alpha+k)X'X\Big)\Big\} \tag{44} $$
for ℜ(γ) > ½, γ − ½ ≠ ±ν, ν = 0, 1, …, ℜ(α) > 0 and c^{−1} = ζ[{(pγ, α)} : α+β; ; a]. In the complex domain, the unconditional density f_{1a}(X̃) is available from (44) by replacing γ − ½ with γ − 1 and ½X'X with X̃*X̃. That is,
$$ f_{1a}(\tilde{X})=\frac{\tilde{c}}{\pi^{p}\,\Gamma(\gamma)}\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k\,(-a)^k}{k!}\,(\alpha+k)^{-2(p-1)\gamma}\Big\{\Gamma(\gamma-1)\,(\alpha+k)^{-(\gamma-1)}\,{}_0F_1(\,;2-\gamma;(\alpha+k)\tilde{X}^{*}\tilde{X})+\Gamma(1-\gamma)\,(\tilde{X}^{*}\tilde{X})^{\gamma-1}\,{}_0F_1(\,;\gamma;(\alpha+k)\tilde{X}^{*}\tilde{X})\Big\} $$
for ℜ(γ) > 1, γ − 1 ≠ ±ν, ν = 0, 1, …, ℜ(α) > 0 and c̃^{−1} = ζ[{(2pγ, α)} : α+β; ; a].

3.7. Bayesian Model 4 in the Real Case

Now, we consider a rectangular matrix. Let the p × q, q ≥ p, real matrix X of rank p have the following density:
$$ f(X)\,dX=c\,\frac{|AXBX'|^{\gamma}\,e^{-\alpha\operatorname{tr}(AXBX')}}{(1+a\,e^{-\operatorname{tr}(AXBX')})^{\alpha+\beta}}\,dX=c\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k\,(-a)^k}{k!}\,|AXBX'|^{\gamma}\,e^{-(\alpha+k)\operatorname{tr}(AXBX')}\,dX, $$
for 0 < a < 1, A > O and B > O, where A is a p × p and B is a q × q constant positive definite matrix. The normalizing constant c can be evaluated by making the transformations Y = A^{1/2} X B^{1/2} ⇒ dY = |A|^{q/2} |B|^{p/2} dX and U = YY' ⇒ dY = \(\frac{\pi^{pq/2}}{\Gamma_p(q/2)}|U|^{\frac{q}{2}-\frac{p+1}{2}}\) dU, and then integrating out by using a real matrix-variate gamma integral to obtain
$$ c^{-1}=\frac{\pi^{\frac{pq}{2}}\,\Gamma_p(\gamma+\frac{q}{2})}{\Gamma_p(\frac{q}{2})\,|A|^{\frac{q}{2}}\,|B|^{\frac{p}{2}}}\,\zeta\Big[\Big\{\Big(p\Big(\gamma+\frac{q}{2}\Big),\alpha\Big)\Big\}:\alpha+\beta;\,;a\Big],\quad \Re\Big(\gamma+\frac{q}{2}\Big)>\frac{p-1}{2}. $$
The density in the complex case, denoted by f(X̃), is available from the real-case structure by replacing |AXBX'| with |det(AX̃BX̃*)| and AXBX' with AX̃BX̃*; the normalizing constant in the complex case, denoted by c̃, is the following:
$$ \tilde{c}^{-1}=\frac{\pi^{pq}\,\tilde{\Gamma}_p(\gamma+q)}{\tilde{\Gamma}_p(q)\,|\det(A)|^{q}\,|\det(B)|^{p}}\,\zeta[\{(2p(\gamma+q),\alpha)\}:\alpha+\beta;\,;a],\quad \Re(\gamma+q)>p-1. $$
If the matrix A has a prior function or density, then f(X) in the real case becomes the conditional density f(X | A). Let the prior density of A in the real case again be denoted by g(A), where
$$ g(A)\,dA=\frac{|C|^{\delta}}{\Gamma_p(\delta)}\,|A|^{\delta-\frac{p+1}{2}}\,e^{-\operatorname{tr}(CA)}\,dA,\quad \Re(\delta)>\frac{p-1}{2}, $$
where the constant positive definite matrix C > O is assumed to be known. Then, the unconditional density of X in the real case, denoted by f 1 ( X ) , is the following:
$$ \begin{aligned} f_1(X)&=\int_{A>O}f(X\mid A)\,g(A)\,dA\\ &=\frac{|B|^{\frac{p}{2}}\,|C|^{\delta}\,\Gamma_p(\frac{q}{2})\,|XBX'|^{\gamma}}{\pi^{\frac{pq}{2}}\,\Gamma_p(\delta)\,\Gamma_p(\gamma+\frac{q}{2})\,\zeta[\{(p(\gamma+\frac{q}{2}),\alpha)\}:\alpha+\beta;\,;a]}\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k\,(-a)^k}{k!}\int_{A>O}|A|^{\gamma+\frac{q}{2}+\delta-\frac{p+1}{2}}\,e^{-(\alpha+k)\operatorname{tr}(AXBX')-\operatorname{tr}(CA)}\,dA\\ &=\frac{|B|^{\frac{p}{2}}\,|C|^{\delta}\,\Gamma_p(\frac{q}{2})\,|XBX'|^{\gamma}\,\Gamma_p(\gamma+\delta+\frac{q}{2})}{\pi^{\frac{pq}{2}}\,\Gamma_p(\delta)\,\Gamma_p(\gamma+\frac{q}{2})\,\zeta[\{(p(\gamma+\frac{q}{2}),\alpha)\}:\alpha+\beta;\,;a]}\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k\,(-a)^k}{k!}\,\big|(\alpha+k)XBX'+C\big|^{-(\gamma+\delta+\frac{q}{2})}. \end{aligned}\tag{50} $$
One can obtain the following simplification by using the properties of the partitioned determinant:
$$ \big|(\alpha+k)XBX'+C\big|=|XBX'|\,\Big|(\alpha+k)I+(XBX')^{-\frac{1}{2}}C(XBX')^{-\frac{1}{2}}\Big|=|XBX'|\prod_{j=1}^{p}(k+\alpha+\lambda_j), \tag{51} $$
where λ_j > 0, j = 1, …, p, are the eigenvalues of the positive definite matrix (XBX')^{−1/2} C (XBX')^{−1/2}.
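The determinant–eigenvalue identity above can be verified numerically. The sketch below (numpy, random positive definite B and C and a random full-rank X, purely illustrative) checks |(α+k)XBX' + C| = |XBX'| ∏_j (k + α + λ_j):

```python
import numpy as np

rng = np.random.default_rng(3)
p, q, alpha, k = 3, 5, 1.2, 4                  # illustrative dimensions and constants
X = rng.standard_normal((p, q))                # rank p with probability 1
Mb = rng.standard_normal((q, q))
B = Mb @ Mb.T + q * np.eye(q)                  # random q x q positive definite B
Mc = rng.standard_normal((p, p))
C = Mc @ Mc.T + p * np.eye(p)                  # random p x p positive definite C

S = X @ B @ X.T                                # X B X' > O
lam = np.linalg.eigvals(np.linalg.solve(S, C)).real   # eigenvalues of S^{-1/2} C S^{-1/2}
lhs = np.linalg.det((alpha + k) * S + C)
rhs = np.linalg.det(S) * np.prod(alpha + k + lam)
assert np.isclose(lhs, rhs)
```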
From (50) and (51), the unconditional density can be written in terms of an extended zeta function. Observe that |XBX'|^γ cancels against the corresponding factor coming from (51), and the remaining exponent on |XBX'| is −(δ + q/2). Then f_1(X), the unconditional density in the real domain, is the following:
$$ f_1(X)=\frac{|B|^{\frac{p}{2}}\,|C|^{\delta}\,\Gamma_p(\frac{q}{2})\,|XBX'|^{-(\delta+\frac{q}{2})}\,\Gamma_p(\gamma+\delta+\frac{q}{2})}{\pi^{\frac{pq}{2}}\,\Gamma_p(\delta)\,\Gamma_p(\gamma+\frac{q}{2})\,\zeta[\{(p(\gamma+\frac{q}{2}),\alpha)\}:\alpha+\beta;\,;a]}\;\zeta\Big[\Big\{\Big(\gamma+\delta+\frac{q}{2},\alpha+\lambda_1\Big),\dots,\Big(\gamma+\delta+\frac{q}{2},\alpha+\lambda_p\Big)\Big\}:\alpha+\beta;\,;a\Big] \tag{52} $$
for B > O, C > O, ℜ(δ) > (p − 1)/2, ℜ(γ) + q/2 > (p − 1)/2, p(ℜ(γ) + q/2) > 1, ℜ(α) > 0, ℜ(β) > 0 and 0 < a < 1, where the λ_j's are defined in (51).
In the complex case, the prior density of A ˜ = A ˜ * > O , corresponding to the real case g ( A ) , denoted by g a ( A ˜ ) , is the following:
$$ g_a(\tilde{A})\,d\tilde{A}=\frac{|\det(C)|^{\delta}}{\tilde{\Gamma}_p(\delta)}\,|\det(\tilde{A})|^{\delta-p}\,e^{-\operatorname{tr}(C\tilde{A})}\,d\tilde{A},\quad \tilde{A}=\tilde{A}^{*}>O,\ C=C^{*}>O,\ \Re(\delta)>p-1. $$
The unconditional density of X̃, denoted by f_{1a}(X̃), is the following:
$$ f_{1a}(\tilde{X})=\frac{|\det(B)|^{p}\,|\det(C)|^{\delta}\,\tilde{\Gamma}_p(q)\,|\det(\tilde{X}B\tilde{X}^{*})|^{-(\delta+q)}\,\tilde{\Gamma}_p(\gamma+\delta+q)}{\pi^{pq}\,\tilde{\Gamma}_p(\delta)\,\tilde{\Gamma}_p(\gamma+q)\,\zeta[\{(2p(\gamma+q),\alpha)\}:\alpha+\beta;\,;a]}\;\zeta[\{(2(\gamma+\delta+q),\alpha+\lambda_1),\dots,(2(\gamma+\delta+q),\alpha+\lambda_p)\}:\alpha+\beta;\,;a] \tag{54} $$
for B = B* > O, C = C* > O, ℜ(δ) > p − 1, ℜ(γ) + q > p − 1, 2p(ℜ(γ) + q) > 1, ℜ(α) > 0, ℜ(β) > 0 and 0 < a < 1, where the λ_j's appearing here are the real positive eigenvalues of the Hermitian positive definite matrix (X̃BX̃*)^{−1/2} C (X̃BX̃*)^{−1/2}. We have used the same notation λ_j as in the real case, but the eigenvalues in the complex case need not equal those in the real case.

4. Concluding Remarks

Instead of A in (52) and (54) having a prior distribution, if B has a prior distribution, or if both A and B have compatible prior distributions, one can still evaluate the unconditional density, but the procedure will be somewhat lengthier. Note that we have mainly considered two cases of the conditional function or density for X: one where the p × 1 vector X has a Gaussian density with null mean value, and one where the p × q matrix X has a rectangular matrix-variate gamma-type distribution. We then considered different prior distributions for a parameter matrix, mainly from the class of logistic-type distributions. We did not consider the situations where a determinant appears as a multiplicative factor when the trace in the exponent has an arbitrary power, or where both a determinant and a trace appear as multiplicative factors with the trace in the exponent raised to an arbitrary power, because these cases involve lengthy steps and would have taken too much space to discuss. We could also have considered Mathai's pathway-type models for the conditional and prior functions or densities, which can be regarded as extensions of multivariate, matrix-variate Gaussian, matrix-variate gamma or Wishart-type models. Also, in order to save space, we did not study various properties of the models introduced here, and we stop the discussion at this point.
For further studies, apart from the items mentioned above, one can explore the pathway idea by combining matrix-variate type-1 beta, type-2 beta and gamma densities, observing that particular cases of the matrix-variate gamma produce matrix-variate Gaussian as well as Wishart densities. One can also study various properties of the models introduced here; such studies are likely to bring in more parameters from the sets of a_j's and b_j's. One can explore applications of the present models, especially those in the complex domain, in quantum physics, communication theory and related areas, in light of the evidence in [1,5].
Acknowledgments

The author acknowledges with thanks the comments and suggestions made by the five or six reviewers of the original version of this paper, which enabled him to improve the presentation of this paper.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Benavoli, A.; Facchini, A.; Zaffalon, M. Quantum mechanics: The Bayesian theory generalised to the space of Hermitian matrices. arXiv 2016, arXiv:1605.08177.
2. Mathai, A.M.; Provost, S.B. On q-logistic and related distributions. IEEE Trans. Reliab. 2006, 55, 237–244.
3. Mathai, A.M. A Handbook of Generalized Special Functions for Statistical and Physical Sciences; Oxford University Press: Oxford, UK, 1993.
4. Mathai, A.M. Jacobians of Matrix Transformations and Functions of Matrix Argument; World Scientific Publishing: New York, NY, USA, 1997.
5. Deng, X. Texture Analysis and Physical Interpretation of Polarimetric SAR Data. Ph.D. Thesis, Universitat Politecnica de Catalunya, Barcelona, Spain, 2016.
6. Bombrun, L.; Beaulieu, J.-M. Fisher distribution for texture modeling of Polarimetric SAR data. IEEE Geosci. Remote Sens. Lett. 2008, 5, 512–516.
7. Frery, A.C.; Muller, H.J.; Yanasse, C.C.F.; Sant'Anna, S.J.S. A model for extremely heterogeneous clutter. IEEE Trans. Geosci. Remote Sens. 1997, 35, 648–659.
8. Jakeman, E.; Pusey, P. Significance of K distributions in scattering experiments. Phys. Rev. Lett. 1978, 40, 546–550.
9. Yueh, S.H.; Kong, J.A.; Jao, J.K.; Shin, R.T.; Novak, L.M. K-distribution and polarimetric terrain radar clutter. J. Electromagn. Waves Appl. 1989, 3, 747–768.
10. Díaz-García, J.A.; Gutiérrez-Jáimez, R. Compound and scale mixture of matricvariate and matrix variate Kotz-type distributions. J. Korean Stat. Soc. 2010, 39, 75–82.
11. Mathai, A.M.; Haubold, H.J. Modern Problems in Nuclear and Neutrino Astrophysics; Akademie: Berlin, Germany, 1988.
12. Mathai, A.M.; Provost, S.B.; Haubold, H.J. Multivariate Statistical Analysis in the Real and Complex Domains; Springer Nature: Cham, Switzerland, 2022.
13. Jeffrey, A.; Zwillinger, D. Tables of Integrals, Series and Products, 7th ed.; Academic Press: Boston, MA, USA, 2007.