2.1. Notations and Formats
This paper deals with scalar, vector, and matrix variables; scalar and matrix constants; square and rectangular matrices; and singular, nonsingular, and positive definite matrices, all defined in both the real and complex domains. Hence, a multiplicity of symbols and equations is needed to represent all of these items distinctly. In order to simplify matters and to keep the function and equation numbers manageable, the following notations will be used in this paper.
Real scalar variables, whether mathematical or random, will be denoted by lowercase letters such as x and y. Real vector/matrix variables, whether mathematical or random, and whether square or rectangular matrices, will be denoted by capital letters such as X and Y. Scalar constants will be denoted by lowercase letters such as a and b, and vector/matrix constants will be denoted by capital letters such as A, B, etc. A tilde will be used to designate variables in the complex domain, such as $\tilde{X}$ and $\tilde{Y}$. No tilde will be used on constants. When Greek letters and other symbols appear, the notations will be explained then and there. Let $X = (x_{ij})$ be a $p \times q$ matrix where the elements $x_{ij}$ are functionally independent or distinct real scalar variables. Then, the wedge product of the differentials will be defined as $\mathrm{d}X = \bigwedge_{i=1}^{p}\bigwedge_{j=1}^{q} \mathrm{d}x_{ij}$. When x and y are real scalars, the wedge product of their differentials is defined as $\mathrm{d}x \wedge \mathrm{d}y = -\,\mathrm{d}y \wedge \mathrm{d}x$, so that $\mathrm{d}x \wedge \mathrm{d}x = 0$. For a square matrix A, the determinant will be denoted as $|A|$ or as $\det(A)$. When A is in the complex domain, the absolute value, or modulus, of the determinant will be denoted as $|\det(A)|$. If $a$ and $b$ are real scalars, then $|a + ib| = +\sqrt{a^{2} + b^{2}}$, $i = \sqrt{-1}$. If $\tilde{X}$ is in the complex domain, then one can write $\tilde{X} = X_{1} + iX_{2}$, with $X_{1}$ and $X_{2}$ real, and the wedge product of the differentials in $\tilde{X}$ will be defined as $\mathrm{d}\tilde{X} = \mathrm{d}X_{1} \wedge \mathrm{d}X_{2}$. We will consider only real-valued scalar functions in this paper. $\int_{X} f(X)\,\mathrm{d}X$ will denote the integral over X of the real-valued scalar function $f(X)$. When $f(X)$ is a real-valued scalar function of X, whether X is a scalar variable, vector, or matrix in the real or complex domain, and if $f(X) \ge 0$ for all X and $\int_{X} f(X)\,\mathrm{d}X = 1$, then $f(X)$ will be defined as a density or statistical density. When a square matrix X is positive definite, it will be denoted as $X > O$, where $X = X'$, and a prime denotes the transpose. The conjugate transpose of any matrix in the complex domain will be written as $\tilde{X}^{*}$. When a square matrix $\tilde{X}$ is in the complex domain, and if $\tilde{X} = \tilde{X}^{*}$, then $\tilde{X}$ is Hermitian. If $\tilde{X} = \tilde{X}^{*} > O$, then $\tilde{X}$ is called Hermitian positive definite.
When $A > O$ and $B > O$, $\int_{A}^{B} f(X)\,\mathrm{d}X$ means the integral of the real-valued scalar function $f(X)$ over the real positive definite matrix X such that $X - A > O$ and $B - X > O$ (all positive definite), where A and B are constant matrices; the notation has a similar interpretation in the complex domain as well. For example, $X > O,\ B - X > O$ means that B is positive definite, X is positive definite, and $B - X$ is positive definite, and the right side of the inequalities is not zero but the capital letter O. In order to avoid a multiplicity of equation numbers, the following procedure will be used. For an equation in the complex domain corresponding to a numbered equation in the real domain, a letter c will be affixed to the equation number; for example, (1c) will correspond to (1) in the real domain. This notation will enable a reader to recognize an equation in the complex domain instantly from the subscript c. Other notations will be explained whenever they occur for the first time.
2.2. Some Matrix-Variate Integrals
Let us start with an example of the evaluation of an integral in the real domain, which will show the different types of hurdles to overcome to achieve the final result. Let $X = (x_{ij})$ be a $p \times q$ matrix of rank p, where the elements $x_{ij}$ are functionally independent (distinct) real scalar variables. Suppose that we wish to evaluate the following integral, where $f(X)$ is a real-valued scalar function of the $p \times q$ matrix X, observing that the integral over X and the wedge product of the differentials, $\mathrm{d}X$, are already explained in Section 2.1:
where $A > O$ is a $p \times p$ positive definite constant matrix, and $B > O$ is a $q \times q$ positive definite constant matrix; $A^{\frac{1}{2}}$ is the positive definite square root of the positive definite matrix A; $M = E[X]$ denotes the expected value of X; and $\Re(\cdot)$ means the real part of $(\cdot)$. The first step here is to simplify the matrix into a convenient form by making the transformation $Y = A^{\frac{1}{2}}(X - M)B^{\frac{1}{2}}$ from Lemma 1 given below, observing that $\mathrm{d}(X - M) = \mathrm{d}X$, since M is a constant matrix. The corresponding integral in the complex domain is the following:
where $A = A^{*} > O$ and $B = B^{*} > O$ (both Hermitian positive definite), A is $p \times p$, B is $q \times q$, and $\tilde{M} = E[\tilde{X}]$. The transformation in the complex case is $\tilde{Y} = A^{\frac{1}{2}}(\tilde{X} - \tilde{M})B^{\frac{1}{2}}$.
Proofs of the following lemmas are given in detail in [11]. For the sake of illustration, the proof of the real part of Lemma 1 is detailed in Appendix A at the end of this paper.
Lemma 1. Let the $p \times q$ matrix $X = (x_{ij})$ be in the real domain, where the elements $x_{ij}$ are functionally independent (distinct) real scalar variables, and let A be a $p \times p$ nonsingular constant matrix and B be a $q \times q$ nonsingular constant matrix. Then, letting $Y = AXB$, we have the following: $$\mathrm{d}Y = |A|^{q}\,|B|^{p}\,\mathrm{d}X.$$ Let the $p \times q$ matrix $\tilde{X} = (\tilde{x}_{ij})$ be in the complex domain, where A and B are $p \times p$ and $q \times q$ nonsingular constant matrices, respectively, in the real or complex domain; then, letting $\tilde{Y} = A\tilde{X}B$, we have the following: $$\mathrm{d}\tilde{Y} = |\det(A)|^{2q}\,|\det(B)|^{2p}\,\mathrm{d}\tilde{X},$$ where $|\det(\cdot)|$ denotes the absolute value of the determinant of $(\cdot)$. The proof of Lemma 1 and other lemmas to follow may be seen from [11]. When a $p \times p$ matrix X is symmetric, $X = X'$, then we have a companion result to Lemma 1, which will be stated next.
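For the symmetric companion result stated next, the transformation $Y = AXA'$ with $X = X'$ acts linearly on the $p(p+1)/2$ distinct elements of X, and the absolute Jacobian determinant is $|\det A|^{p+1}$. A minimal numerical sketch, treating the upper-triangular elements as coordinates:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 3
A = rng.standard_normal((p, p))

# Independent coordinates of a symmetric X: x_ij for i <= j.
idx = [(i, j) for i in range(p) for j in range(i, p)]
m = len(idx)                      # p(p+1)/2 coordinates

# Build the Jacobian matrix of the linear map X -> A X A' column by
# column, differentiating along each symmetric basis direction.
J = np.zeros((m, m))
for col, (i, j) in enumerate(idx):
    E = np.zeros((p, p))
    E[i, j] = E[j, i] = 1.0       # derivative of the symmetric X in x_ij
    Y = A @ E @ A.T
    J[:, col] = [Y[k, l] for (k, l) in idx]

lhs = abs(np.linalg.det(J))
rhs = abs(np.linalg.det(A)) ** (p + 1)
assert np.isclose(lhs, rhs)       # dY = |det A|^(p+1) dX
print("ok")
```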
Lemma 2. Let $X = X'$ be a $p \times p$ symmetric matrix, and let A be a $p \times p$ constant nonsingular matrix. Then, letting $Y = AXA'$, we have the following: $$\mathrm{d}Y = |A|^{p+1}\,\mathrm{d}X;$$ and when a $p \times p$ matrix $\tilde{X} = \tilde{X}^{*}$ in the complex domain is Hermitian, and A is a $p \times p$ nonsingular constant matrix in the real or complex domain, then, letting $\tilde{Y} = A\tilde{X}A^{*}$, we have the following: $$\mathrm{d}\tilde{Y} = |\det(A)|^{2p}\,\mathrm{d}\tilde{X}.$$

Now, under Lemma 1, (1) reduces to an integral over Y. Then, we have the following:
The corresponding integral in the complex case is the following:
The function $f(Y)$ in the real domain, for particular values of the determinant exponent and the trace exponent, is often referred to as a Kotz model by most of the authors who use such a model. When the exponent of the determinant is present, the evaluation of the integral over Y is very difficult, as will be seen from the computations to follow. For this case in the real domain, ref. [6] also calls the model a Kotz model; nevertheless, the normalizing constant given there, and claimed to be available in the earlier literature, does not seem to be correct. The correct normalizing constant and its evaluation in the real and complex domains will be given in detail below. Since $f(Y)$ involves a determinant and a trace, where the determinant is a product of the eigenvalues and the trace is a sum, two elementary symmetric functions, a transformation involving elementary symmetric functions would allow one to handle the determinant and the trace together; however, this author does not know of any such transformation. Going through the eigenvalues does not seem to be a good option, because the Jacobian element involves a Vandermonde determinant and is not very convenient to handle. The next possibility is triangularization; in this case, one can make the determinant a product of scalar variables and the trace a sum of squares. Then, one can use a general polar coordinate transformation so that the trace becomes a single variable, namely, the radial variable r, and, in the product of the determinant and the trace, r and the sine and cosine product will also be separated. Hence, this approach will be a convenient one. Continuing with the evaluation of (3) in the real case, we have the following situations: if the exponent of the trace is unity, then one would immediately convert $\mathrm{d}Y$ into $\mathrm{d}S$, with $S = YY'$, and integrate it out by using a real matrix-variate gamma integral; otherwise, one would integrate it out by using the scalar variable gamma integral in the scalar case and the matrix-variate gamma integral in the matrix-variate case. This conversion can be done with the help of Lemma 3, which is given below.
Lemma 3. Let the $m \times n$, $n \ge m$, matrix X of rank m be in the real domain with distinct elements $x_{ij}$. Let the $m \times m$ matrix $S = XX'$, which is positive definite. Then, going through a transformation involving a lower triangular matrix with positive diagonal elements and a semiorthonormal matrix, and after integrating out the differential element corresponding to the semiorthonormal matrix, we will have the following connection between $\mathrm{d}X$ and $\mathrm{d}S$; see the details in [11]: $$\mathrm{d}X = \frac{\pi^{\frac{mn}{2}}}{\Gamma_{m}\!\left(\frac{n}{2}\right)}\,|S|^{\frac{n}{2} - \frac{m+1}{2}}\,\mathrm{d}S,$$ where, for example, $\Gamma_{m}(\alpha)$ is the real matrix-variate gamma function given by $$\Gamma_{m}(\alpha) = \pi^{\frac{m(m-1)}{4}}\,\Gamma(\alpha)\,\Gamma\!\left(\alpha - \tfrac{1}{2}\right)\cdots\Gamma\!\left(\alpha - \tfrac{m-1}{2}\right) = \int_{S > O} |S|^{\alpha - \frac{m+1}{2}}\,e^{-\mathrm{tr}(S)}\,\mathrm{d}S, \quad \Re(\alpha) > \tfrac{m-1}{2},$$ where $\mathrm{tr}(S)$ means the trace of the square matrix S. Since $\Gamma_{m}(\alpha)$ is associated with the above real matrix-variate gamma integral, we call $\Gamma_{m}(\alpha)$ a real matrix-variate gamma function. This is also known by different names in the literature. When the $m \times n$, $n \ge m$, matrix $\tilde{X}$ of rank m, with distinct elements, is in the complex domain, and we let $\tilde{S} = \tilde{X}\tilde{X}^{*}$, which is $m \times m$ and Hermitian positive definite, then, by going through a transformation involving a lower triangular matrix with real and positive diagonal elements and a semiunitary matrix, we can establish the following connection between $\mathrm{d}\tilde{X}$ and $\mathrm{d}\tilde{S}$; please refer to [11]: $$\mathrm{d}\tilde{X} = \frac{\pi^{mn}}{\tilde{\Gamma}_{m}(n)}\,|\det(\tilde{S})|^{\,n - m}\,\mathrm{d}\tilde{S},$$ where, for example, $\tilde{\Gamma}_{m}(\alpha)$ is the complex matrix-variate gamma function given by $$\tilde{\Gamma}_{m}(\alpha) = \pi^{\frac{m(m-1)}{2}}\,\Gamma(\alpha)\,\Gamma(\alpha - 1)\cdots\Gamma(\alpha - m + 1) = \int_{\tilde{S} > O} |\det(\tilde{S})|^{\alpha - m}\,e^{-\mathrm{tr}(\tilde{S})}\,\mathrm{d}\tilde{S}, \quad \Re(\alpha) > m - 1.$$ We call $\tilde{\Gamma}_{m}(\alpha)$ the complex matrix-variate gamma, because it is associated with the above matrix-variate gamma integral in the complex domain.
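The product form of the real matrix-variate gamma function appearing in Lemma 3 can be cross-checked against SciPy's implementation of the multivariate gamma function; this is purely a consistency check of the standard formula:

```python
import numpy as np
from scipy.special import gammaln, multigammaln

# Log of the product form in Lemma 3:
#   Gamma_p(alpha) = pi^{p(p-1)/4} * prod_{j=1}^{p} Gamma(alpha - (j-1)/2),
# defined for Re(alpha) > (p-1)/2.
def log_matrix_gamma(alpha, p):
    return (p * (p - 1) / 4.0) * np.log(np.pi) + sum(
        gammaln(alpha - (j - 1) / 2.0) for j in range(1, p + 1)
    )

# Cross-check against SciPy's multivariate gamma for several orders.
for p in (1, 2, 3, 5):
    alpha = (p - 1) / 2.0 + 1.25   # safely inside the convergence region
    assert np.isclose(log_matrix_gamma(alpha, p), multigammaln(alpha, p))
print("ok")
```

Working on the log scale avoids overflow for larger orders and parameter values.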
However, in our (3), both the determinant and the trace enter as multiplicative factors, and there is an arbitrary exponent on the exponential trace. In order to tackle this situation, we will convert $\mathrm{d}Y$ to $\mathrm{d}T$, where T is a lower triangular matrix, by using Theorem 2.14 of [11], which is restated here as a lemma. The idea is that, in this case, the determinant becomes a product of the squares of the diagonal elements in T only, and the trace is also a sum of squares. This conversion can also be achieved by converting the $\mathrm{d}S$ from Lemma 3 to a $\mathrm{d}T$ by using another result, where T is lower triangular.
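The decomposition underlying Lemma 4 below, $X = TU_{1}$ with T lower triangular with positive diagonal and $U_{1}$ semiorthonormal, is a transposed QR factorization, so it can be computed and verified numerically. A sketch (the variable names T and U1 follow the lemma's notation):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 5
X = rng.standard_normal((m, n))   # full rank m with probability 1

# X = T U1 with T lower triangular (positive diagonal) and U1
# semiorthonormal (U1 U1' = I_m), via the QR decomposition of X':
# X' = Q R  implies  X = R' Q'.
Q, R = np.linalg.qr(X.T)          # Q: n x m,  R: m x m upper triangular
signs = np.sign(np.diag(R))       # flip signs so that diag(T) > 0
T = (R * signs[:, None]).T        # lower triangular, positive diagonal
U1 = (Q * signs).T                # m x n, rows orthonormal

assert np.allclose(T @ U1, X)
assert np.allclose(U1 @ U1.T, np.eye(m))
assert np.all(np.diag(T) > 0)
print("ok")
```

Fixing the signs of the diagonal of T is what makes the pair (T, U1) unique, matching the uniqueness condition in the lemma.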
Lemma 4. Let X be an $m \times n$, $n \ge m$, matrix of rank m with functionally independent real scalar variables as elements. Let T be an $m \times m$ lower triangular matrix, and let $U_{1}$ be an $m \times n$ semiorthonormal matrix, $U_{1}U_{1}' = I_{m}$. Consider the transformation $X = TU_{1}$, where both T and $U_{1}$ are uniquely selected, for example, with the diagonal elements positive in T and with the first column elements positive in $U_{1}$. Then, after integrating out the differential element corresponding to the semiorthonormal matrix $U_{1}$, one has the following connection between $\mathrm{d}X$ and $\mathrm{d}T$; please refer to [11]: $$\mathrm{d}X = \frac{2^{m}\pi^{\frac{mn}{2}}}{\Gamma_{m}\!\left(\frac{n}{2}\right)}\left\{\prod_{j=1}^{m} t_{jj}^{\,n-j}\right\}\mathrm{d}T;$$ and, in the complex case, let $\tilde{X}$ be an $m \times n$, $n \ge m$, matrix of rank m with distinct elements in the complex domain. Let $\tilde{T}$ be a lower triangular matrix in the complex domain with diagonal elements that are real and positive, and let $\tilde{U}_{1}$ be a semiunitary matrix, $\tilde{U}_{1}\tilde{U}_{1}^{*} = I_{m}$, where $\tilde{T}$ and $\tilde{U}_{1}$ are uniquely chosen. Then, after integrating out the differential element corresponding to $\tilde{U}_{1}$, one has the following connection between $\mathrm{d}\tilde{X}$ and $\mathrm{d}\tilde{T}$: $$\mathrm{d}\tilde{X} = \frac{2^{m}\pi^{mn}}{\tilde{\Gamma}_{m}(n)}\left\{\prod_{j=1}^{m} t_{jj}^{\,2(n-j)+1}\right\}\mathrm{d}\tilde{T}.$$

Let us consider the evaluation of (3) in the real case first. By converting the $\mathrm{d}Y$ in (3) to a $\mathrm{d}T$ by using Lemma 4, the integral part of (3) over Y is the following:
The corresponding equation in the complex domain is the following:
Note that in the real case, we have the following:
where, in $\mathrm{tr}(TT') = \sum_{j=1}^{p} t_{jj}^{2} + \sum_{j>k} t_{jk}^{2}$, the first sum has p terms, and the second sum has $p(p-1)/2$ terms; thus, we have a total of $p(p+1)/2$ terms. The corresponding quantity in the complex domain is the following: $\mathrm{tr}(\tilde{T}\tilde{T}^{*}) = \sum_{j=1}^{p} t_{jj}^{2} + \sum_{j>k} (t_{jk1}^{2} + t_{jk2}^{2})$, where $t_{jk1}$ and $t_{jk2}$ are real; in the first sum, there are p square terms, but, in the sum over $j > k$, there are a total of $p(p-1)$ square terms; thus, there are a total of $p^{2}$ square terms in the complex case.
Let us consider a polar coordinate transformation in the real case on all of the $k = p(p+1)/2$ terms by using the transformation on page 44 of [12], which is restated here for convenience: the k variables are written as $x_{1} = r\sin\theta_{1}$, $x_{j} = r\cos\theta_{1}\cdots\cos\theta_{j-1}\sin\theta_{j}$ for $j = 2, \ldots, k-1$, and $x_{k} = r\cos\theta_{1}\cdots\cos\theta_{k-1}$, for $r > 0$, $-\frac{\pi}{2} < \theta_{j} \le \frac{\pi}{2}$, $j = 1, \ldots, k-2$, and $-\pi < \theta_{k-1} \le \pi$. The structure of the polar coordinate transformation in the complex case remains as in the real case; the only change is that, in the real case, $k = p(p+1)/2$, and, in the complex case, $k = p^{2}$. The Jacobian of the transformation in the real case is the following: $$\mathrm{d}x_{1} \wedge \cdots \wedge \mathrm{d}x_{k} = r^{k-1}\left\{\prod_{j=1}^{k-1} |\cos\theta_{j}|^{\,k-1-j}\right\}\mathrm{d}r \wedge \mathrm{d}\theta_{1} \wedge \cdots \wedge \mathrm{d}\theta_{k-1},$$ and, in the complex case, the Jacobian is of the same form, for the same ranges of the $\theta_{j}$s as in the real case, but, in the complex case, $k = p^{2}$.
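For a small case, the polar coordinate transformation and its Jacobian can be verified symbolically. The sketch below uses $k = 3$ and one common convention for the angles; the exact convention of [12] is not reproduced here verbatim:

```python
import sympy as sp

r, t1, t2 = sp.symbols('r theta1 theta2', positive=True)

# One common polar convention for k = 3 variables (illustrative):
x = [r * sp.sin(t1),
     r * sp.cos(t1) * sp.sin(t2),
     r * sp.cos(t1) * sp.cos(t2)]

# The squares always sum to r^2, which is why the trace collapses to a
# single radial variable under this transformation.
assert sp.simplify(sum(xi**2 for xi in x) - r**2) == 0

# Jacobian determinant: r^(k-1) times a product of cosines, here r^2 cos(t1).
J = sp.Matrix(x).jacobian([r, t1, t2]).det()
assert sp.simplify(J - r**2 * sp.cos(t1)) == 0
print("ok")
```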
The normalizing constant c in the real case coming from (5) is quoted in [6] by citing earlier works. However, none of them seems to have given an explicit evaluation of the integral in (5). The normalizing constant c given in [6] does not seem to be correct. Since the integral in (5) appears in many places as a Kotz integral and is used in many disciplines, a detailed evaluation of the integral in (5) is warranted. In addition, no work seems to have given the normalizing constant in the complex case. Hence, the evaluations of the normalizing constants in the real and complex cases, respectively, will be given here in detail.
Evaluation of the Integral in (5) in the Real Case and (6) in the Complex Case
Note that $\mathrm{tr}(TT') = r^{2}$ under the polar coordinate transformation. From the Jacobian part, the factor containing r is $r^{k-1}$. In the product of powers of the diagonal elements, each $t_{jj}$ contains an r. In addition, the Jacobian part contributes the sine and cosine product. Upon collecting all r values, the exponent of r in the real case is the following: Then, integration over r gives the following: for the parameters in their convergence ranges. The corresponding integral over r in the complex domain is the following: again for the parameters in their convergence ranges.
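The radial integral arising here is of a generic gamma type: with the trace equal to $r^{2}$, a trace exponent produces a factor $e^{-b r^{2\delta}}$. The closed form below, with illustrative parameter names a, b, and δ (not the paper's exact exponents), can be verified numerically:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Generic radial integral (a, b, delta are illustrative names):
#   int_0^inf r^(a-1) exp(-b r^(2 delta)) dr
#     = Gamma(a/(2 delta)) / (2 delta * b^(a/(2 delta))),  a, b, delta > 0,
# obtained via the substitution u = b r^(2 delta).
def radial_closed_form(a, b, delta):
    s = a / (2.0 * delta)
    return gamma(s) / (2.0 * delta * b ** s)

a, b, delta = 5.0, 1.3, 0.75
val, _ = quad(lambda r: r ** (a - 1) * np.exp(-b * r ** (2 * delta)), 0, np.inf)
assert np.isclose(val, radial_closed_form(a, b, delta))
print("ok")
```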
2.3. Evaluation of the Sine and Cosine Product in the Real Case
Consider the integration of the factors containing the $\theta_{j}$s in the real case. These $\theta_{j}$s come from the diagonal elements $t_{jj}$ and from the Jacobian part. Consider $\theta_{1}$ first: a sine factor comes from the corresponding diagonal element, cosine factors come from the remaining diagonal elements, and the part coming from the Jacobian is $|\cos\theta_{1}|^{k-2}$. Then, the integral over $\theta_{1}$ gives the following, where, in all of the integrations over the $\theta_{j}$s to follow, we will use the transformations $u = \sin^{2}\theta_{j}$ and $v = \cos^{2}\theta_{j}$:
Now, by collecting all of the factors containing $\theta_{2}$ and proceeding as in the case of $\theta_{1}$, we have the following result for the integral over $\theta_{2}$:
Note that the denominator gamma in (11) cancels with one numerator gamma in (10). The pattern of the denominator gamma in each step canceling with one numerator gamma in the previous step will continue, leaving only one factor in the numerator and no factor in the denominator, except at the very first step involving (10) and (11), where the first denominator gamma is left out. When integrating over $\theta_{3}$, we have the following:
Note that, when considering $\theta_{p}$, there is no cosine factor coming from the diagonal elements, and the cosine factor comes only from the Jacobian part. We can see that, again, the denominator gamma in (13) cancels with one numerator gamma in (12). This pattern will continue through $\theta_{p}$. For the integrals over $\theta_{p+1}, \ldots, \theta_{k-1}$, the only contribution is from the Jacobian part; no sine factor will be there. Consider $\theta_{p+1}$: we see that, again, the cancellation will hold. Now, consider the few last cases. For $\theta_{k-3}$ and for $\theta_{k-2}$, we have the corresponding integrals, and the last angle, $\theta_{k-1}$, goes from $-\pi$ to $\pi$ with no contribution from the Jacobian part; hence, we have the following:
Note that, starting from the first of the above integrations to the last, one gamma factor is left in the numerator at each step; multiplying these factors together, along with the powers of $\pi$, the product simplifies, while, from the very first step, one gamma is left in the denominator. Taking the product of the integrals over all of the $\theta_{j}$s in the real case yields the following: where $\Gamma_{p}(\cdot)$ is the real matrix-variate gamma function defined in Lemma 3.
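Each step in the above integrations reduces, after the substitution $u = \sin^{2}\theta$, to a beta integral. The generic form of one factor, with illustrative exponents a and b, can be verified numerically:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Generic factor arising at each step (exponents a, b are illustrative):
#   int_{-pi/2}^{pi/2} |sin t|^(2a-1) (cos t)^(2b-1) dt
#     = Gamma(a) Gamma(b) / Gamma(a + b),   a, b > 0,
# which follows from the substitution u = sin^2(t).
def trig_closed_form(a, b):
    return gamma(a) * gamma(b) / gamma(a + b)

for a, b in [(1.5, 2.0), (0.75, 3.25), (2.5, 0.5)]:
    # The integrand is even in t, so integrate over (0, pi/2) and double.
    val, _ = quad(
        lambda t: np.sin(t) ** (2 * a - 1) * np.cos(t) ** (2 * b - 1),
        0.0, np.pi / 2,
    )
    assert np.isclose(2 * val, trig_closed_form(a, b))
print("ok")
```

The telescoping cancellation described above is exactly the cancellation of the $\Gamma(a + b)$ denominator of one factor against a numerator gamma of the next.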
Evaluation of the Integral over the $\theta_{j}$s in the Complex Case
The sine and cosine functions come from the transformations corresponding to the diagonal elements, from the Jacobian when going from $\mathrm{d}\tilde{Y}$ to $\mathrm{d}\tilde{T}$, and from the Jacobian in the polar coordinate transformation. The Jacobian part in the polar coordinate transformation is the following: By collecting the factors containing $\theta_{1}$, we observe that the sine factor comes from the first diagonal element, and the cosine factors come from the remaining diagonal elements and the Jacobian part. The exponents of $\sin\theta_{1}$ and of $\cos\theta_{1}$ are then collected. In all of the integrals to follow, the integral over the full range can be taken as twice the integral over the positive half due to the evenness of the integrand. Then, we will use the transformations $u = \sin^{2}\theta_{j}$ and $v = \cos^{2}\theta_{j}$; the steps are parallel to those in the real case. Therefore, we have the following:
for the parameters in their convergence ranges. By collecting the factors containing $\theta_{2}$, and noting the exponents of $\sin\theta_{2}$ and of $\cos\theta_{2}$, we have the following:
Note that the gamma in the denominator of (18) cancels with the same gamma in the numerator of (17), to leave one gamma in the numerator of (17) and one gamma in the denominator of (17). The pattern of the gamma in the denominator of a step canceling with a gamma in the numerator of the previous step will continue, as seen in the real case. Let us check the next two cases to see whether the pattern is continuing; in the second of these, there is no contribution from the sine function, and the only cosine function comes from the Jacobian part. For the first of the two cases, we have the following:
For the second case, we have the following: The pattern of cancellation continues. However, from a certain step onward, a gamma factor is left out in the numerator at each stage; the last factor arises from the final angle, whose range is $(-\pi, \pi]$, and one gamma is left out in the denominator of the first of these integrations. Hence, the integration over all of the sine and cosine functions in the complex case is the following:
where $\tilde{\Gamma}_{p}(\cdot)$ is the complex matrix-variate gamma function defined in Lemma 3. Then, the final result of the integration over r and the integration over all of the $\theta_{j}$s in the real case is the following: where A, B, and M are constant matrices, A is $p \times p$, B is $q \times q$, and X is a $p \times q$ real matrix of rank p; the corresponding integral in the complex case is given by the following: for the parameters in their convergence ranges. Therefore, the normalizing constant c and its complex counterpart are the following:
for the parameters in their convergence ranges in the real case; likewise, we have the following in the complex case, again for the parameters in their convergence ranges.
From the general results in (22) and (23), we can obtain a number of interesting special cases. Since the integral over the sine and cosine product is very important in many types of applications, we will give these as theorems here.
Theorem 1. Let $k = p(p+1)/2$, and let the angles $\theta_{1}, \ldots, \theta_{k-1}$ be as defined above in the real case; the integral over the sine and cosine product is the following: The corresponding result in the complex case is the following, where the product has the same format as in the real case, but here $k = p^{2}$.
Theorem 2. Let $k = p^{2}$, and let the angles be as in the real case. The integral over the sine and cosine product in the complex case is the following:

From (22) in the real case, we have the following theorems:
Theorem 3. Let Y be a $p \times q$ matrix of rank p with the elements being functionally independent real scalar variables. For the parameters in their convergence ranges, we have the following:

Remark 1. In the widely used normalizing constant in [6], which was quoted from earlier references, the part corresponding to the normalizing constant in Theorem 3 above does not seem to be correct. The correct one is given in Theorem 3. There are several normalizing constants reported in [6] for various particular cases of Theorem 3. Unfortunately, all of the normalizing constants quoted there, except one, seem to be incorrect. The normalizing constants in [6] as they appear, the corresponding translation in terms of the parameters of the present paper, and the corresponding correct ones are listed in Appendix B at the end of this paper.

Theorem 4. Let $\tilde{Y}$ be a $p \times q$ matrix in the complex domain with rank p, where the elements are functionally independent complex scalar variables. For the parameters in their convergence ranges, we have the following: The details of the derivations are already given in Theorem 2 and in earlier parts. First, by using the complex part of Lemma 4 and then using the general polar coordinate transformation, the variables concerned are transformed into r and the $\theta_{j}$s. Then, the r for the complex case is integrated out, and the $\theta_{j}$s are then integrated out by using Theorem 2.
Corollary 1. This is the corollary to Theorem 3 for a particular choice of the parameters. For the remaining parameters and α, as defined in Theorem 3, we have the following, for the parameters in their convergence ranges.

The results quoted from some of the earlier works of others and reported in [6], corresponding to our Theorem 3 and Corollary 1, do not agree with our results; see Appendix B at the end of this paper.
The result corresponding to Corollary 1 in the complex case is the following:
Corollary 2. This is the corollary to Theorem 4 for the corresponding particular choice of the parameters. For the remaining parameters and α, as defined in Theorem 4, we have the following, for the parameters in their convergence ranges.

Corollary 3. This is the corollary to Theorem 3 for another particular choice of the parameters. For the remaining parameters and α, as defined in Theorem 3, we have the following: The corresponding result in the complex domain is the following.
Corollary 4. This is the corollary to Theorem 4 for the corresponding particular choice of the parameters. For the remaining parameters and α, as defined in Theorem 4, we have the following, for the parameters in their convergence ranges.

Theorem 5. Let $S = S' > O$ be a $p \times p$ real positive definite matrix with functionally independent real scalar variables as elements. Then, the following integral over S is equivalent to the integral over Y, where Y is a $p \times q$ matrix of rank p with distinct real scalar elements; for the parameters in their convergence ranges, we have the following: This result enables us to go back and forth between a real full-rank rectangular matrix and a real positive definite matrix. The proof is straightforward. Let $S = YY'$. Then, $|YY'| = |S|$ and $\mathrm{tr}(YY') = \mathrm{tr}(S)$. Then, from Lemma 3, $\mathrm{d}Y$ and $\mathrm{d}S$ are connected, which establishes the result. The corresponding result in the complex domain is the following.
Theorem 6. Let the $p \times p$ matrix $\tilde{S} = \tilde{S}^{*} > O$ in the complex domain be Hermitian positive definite, where the elements are distinct scalar complex variables. Then, the following integral over $\tilde{S}$ is equivalent to the integral over $\tilde{Y}$, where $\tilde{Y}$ is a $p \times q$ matrix in the complex domain of rank p with distinct complex variables as elements; for the parameters in their convergence ranges, we have the following: The proof is parallel to that in the real case. Here, we use Lemma 3 in the complex case; that is the only difference.