1. Introduction
Let $\mathbb{A}$ be the set of all algebraic numbers, $\mathrm{M}_n(K)$ be the set of all $n\times n$ matrices with elements from a ring $K$, $GL_n(K)$ be the set of all invertible matrices in $\mathrm{M}_n(K)$, … be the ring …, $\delta_{i,j}$ be the Kronecker delta, and let … be the smallest differential field containing the field … and the functions … (see [1], Chapter 1). For …, put …. For vectors …, … we will write … if there is a permutation … of the numbers … such that …, ….
The main task of the theory of transcendental numbers is to establish the transcendence and algebraic independence of various sets of numbers. One of the main methods of this theory, the Siegel-Shidlovskii method (see [2,3]), allows us to prove the algebraic independence of the values of entire functions of a certain class (the so-called E-functions) if these functions are algebraically independent over $\mathbb{C}(z)$. The Siegel-Shidlovskii method is applied, in particular, to generalized hypergeometric functions …, where …. If …, then …. The function … satisfies the (generalized) hypergeometric differential equation …, where … (see [4] or [5], Lemma 1).
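For orientation, recall the classical normalization of the generalized hypergeometric series and of its differential equation; the symbols used below ($\theta$, $\lambda_i$, $\nu_j$, $l$, $q$) are chosen only for illustration and need not coincide with the notation of [4,5]. With the Pochhammer symbol $(\alpha)_n=\alpha(\alpha+1)\cdots(\alpha+n-1)$, the series
$$ {}_{l}F_{q}(\lambda_1,\dots,\lambda_l;\,\nu_1,\dots,\nu_q;\,z)=\sum_{n=0}^{\infty}\frac{(\lambda_1)_n\cdots(\lambda_l)_n}{(\nu_1)_n\cdots(\nu_q)_n}\,\frac{z^{n}}{n!} $$
satisfies, with $\theta=z\,\frac{d}{dz}$, the equation
$$ \bigl[\,\theta\,(\theta+\nu_1-1)\cdots(\theta+\nu_q-1)-z\,(\theta+\lambda_1)\cdots(\theta+\lambda_l)\,\bigr]\,y=0. $$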
If …, then the functions … form the fundamental system of solutions of the equation …, obtained from … by the substitution …, where … (see [4], Section 5.7.1; ref. [5], Corollary 1.2; ref. [6], Corollary to Lemma 1).
Intensive study of the properties of generalized hypergeometric functions and of their values has continued for a long time and remains active (see, for example, the recent articles [7,8] with extensive bibliographies).
F. Beukers, W. D. Brownawell and G. Heckman [9], and in fact even earlier E. Kolchin [10], introduced the concepts of cogredience and contragredience of differential equations and systems of equations; these concepts are important for establishing the algebraic independence of functions.
Definition 1. If …, … are arbitrary fundamental matrices of two systems of linear homogeneous differential equations … and one of the equalities … holds, where …, … is a function with the condition …, then the original systems are called cogredient (respectively, contragredient). Similarly, the concepts of cogredience and contragredience are defined for linear homogeneous differential equations of an arbitrary order.
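In the spirit of [9,10], the two relations can be written explicitly, for instance, as
$$ Y_2=g\,C\,Y_1 \quad\text{(cogredience)},\qquad Y_2=g\,C\,\bigl(Y_1^{\mathrm T}\bigr)^{-1} \quad\text{(contragredience)}, $$
where $Y_1$, $Y_2$ are fundamental matrices of the two systems, $C$ is an invertible matrix over the field of rational functions, and $g$ is a nonzero function with a rational logarithmic derivative $g'/g$; this is only an illustrative form, and the exact shape of the equalities (2) and of the conditions on $C$ and $g$ may differ.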
For …, cogredience and contragredience are equivalent concepts, since ….
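For instance, in the second-order case (assuming this is the case meant above) the equivalence can be seen from the identity
$$ \bigl(Y^{\mathrm T}\bigr)^{-1}=\frac{1}{\det Y}\,J\,Y\,J^{-1},\qquad J=\begin{pmatrix}0&1\\-1&0\end{pmatrix}, $$
valid for any invertible $2\times2$ matrix $Y$: by Liouville's formula the Wronskian $\det Y$ of a fundamental matrix has a rational logarithmic derivative, so any contragredience relation becomes a cogredience relation with $g$ replaced by $g/\det Y$, and vice versa.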
In [9], hypergeometric functions of the form … are considered, where …, …, …, …. Thus, … is an entire function of order 1 obtained from a generalized hypergeometric function of type … by the substitution ….
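A standard illustration of this construction (in classical notation rather than that of [9]): the function $\Gamma(\nu+1)\,(z/2)^{-\nu}J_\nu(z)={}_0F_1\bigl(;\nu+1;-(z/2)^2\bigr)$ is an entire function of order 1, obtained from the hypergeometric series ${}_0F_1(;\nu+1;z)$ by the substitution $z\mapsto-(z/2)^2$.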
In article [9], the parameter set S is called admissible if it satisfies at least one of the following conditions:
- (A) … and all the sums … are distinct ….
- (B) … is odd or …; the set … is not, modulo …, a union of sequences of a fixed length d, where ….
In [9], two parameter sets … and … are called similar if … and, for some …, …. According to the basic statement of article [9], if … are admissible sets of rational parameters, the numbers …, …, and … are linearly independent over …, and … in the case of similarity of … and …, then the numbers …, together with the numbers …, are algebraically independent.
However, it is easy to give an example that refutes this statement of article [9].
Example 1. Consider the functions … that differ from the Bessel functions with index λ only by a multiplier and satisfy the equations (4), and Kummer's functions … satisfying the equations (5). There is an identity (see [11], Section 7.1, Formula (4a)) …, where …. In the notation of [9], identity (6) can be written as …, where …, …. Taking … in the main statement of article [9], we obtain a contradiction with identity (7).
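The identity used here is presumably a form of Kummer's classical identity (Kummer's second theorem); in standard notation, which need not coincide with that of (6),
$$ e^{-z}\,{}_1F_1\!\bigl(\lambda+\tfrac12;\,2\lambda+1;\,2z\bigr)={}_0F_1\!\bigl(;\lambda+1;\tfrac{z^2}{4}\bigr), $$
and the right-hand side differs from the modified Bessel function $I_\lambda(z)$ only by the multiplier $\Gamma(\lambda+1)\,(z/2)^{-\lambda}$.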
This error in article [9] (also repeated in [5,12]) comes from the fact that, when its basic statement is proved, the possibility of cogredience or contragredience of the equations for hypergeometric functions with equal values of q but different l is not taken into account.
In Example 1, specifically, if …, …, and … are the fundamental matrices of Equations (4) and (5), in which …, corresponding respectively to the sets of functions …, then the identities (8) hold (see [13], Theorem 1; ref. [6], Lemma 6; and (3)).
Moreover, in [6], Theorem 2, the identities (8) are generalized to the equations …, where …, …, …, …, …, …, with condition (9).
2. Main Results
Definition 2. The equation … is called reducible (respectively, linearly homogeneously reducible) if it has a solution … such that …, … are algebraically dependent (respectively, linearly dependent) over …. Similarly, these concepts are defined for a system of differential equations.
The condition … or the existence of a divisor … of the numbers l and q such that … is necessary and sufficient for the linear homogeneous reducibility of the equations … in the case … (see [14], Theorem 8, or [3], Chapter 10). Necessary and sufficient conditions for the irreducibility of the equations … were also obtained by V.Kh. Salikhov [15].
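A minimal illustration of linear homogeneous reducibility (in classical notation, as an assumed example rather than one taken from [14,15]): if the upper and lower parameters of a Kummer function coincide, then ${}_1F_1(a;a;z)=e^{z}$ solves the corresponding second-order Kummer equation $zy''+(a-z)y'-ay=0$, and this solution satisfies $y'-y=0$; thus the solution and its derivative are linearly dependent over the coefficient field, so the equation is linearly homogeneously reducible in the sense of Definition 2.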
Note that the irreducibility of a system of differential equations is equivalent to the fact that its Galois group contains … or … (see [9], p. 280 and Theorem 2.2). The concrete form of the Galois group of the hypergeometric equation was found by N. Katz [16]. As it turned out, in the majority of cases it contains …. The exceptions form a relatively small number of cases, for which, among other conditions, … is even and …, … for some … (see [17], pp. 59–60). Since the dimension of the group … is equal to …, the transcendence degree of the set of elements of any fundamental matrix of the corresponding equation over …, where W is the Wronskian, is also equal to … (see [1], Lemma 6.2). This obviously ensures the irreducibility of the equation and makes the conditions of the following two theorems natural.
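For reference, and in notation that may differ from the one intended above: $\dim SL_t(\mathbb{C})=t^{2}-1$, while $\dim Sp_t(\mathbb{C})=t(t+1)/2$ for even $t$; moreover, the transcendence degree of a Picard-Vessiot extension over its base field equals the dimension of its differential Galois group. In particular, if the Galois group of an equation of order $t$ contains $SL_t$, then the $t^{2}$ entries of a fundamental matrix have transcendence degree at least $t^{2}-1$ over the coefficient field.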
Theorem 1. Let …, …, …, … be the fundamental matrix of the operator …, …. Let the numbers …, as well as …, belong to … and be linearly independent over …, …. Then, for the algebraic independence of the functions … over …, it is necessary and sufficient that condition (9) does not hold for any pair of indices … and that, if …, …, where …, …, then ….
Remark. The statement of Theorem 1 remains valid if the functions …, … are replaced with …, where … are polynomials from … that are linearly independent over ….
Condition (11), considered in the case … by K. Siegel [2] and, in the general case, by E. Kolchin ([10], pp. 1157–1158), is equivalent to the fact that the Galois group of the operator … contains …. It follows from what was said before Theorem 1 that condition (11) holds for "almost all" hypergeometric equations, except those whose parameter sets can be represented by points of certain subvarieties of small dimension.
According to [9] (see also Lemma 9), it is sufficient to check condition (11) for the operator ….
Theorem 2. Under the assumptions of Theorem 1, let …, …, …, …. Then, for the algebraic independence of the numbers …, the following four conditions are necessary and sufficient:
- 1°. If …, where …, then ….
- 2°. If …, where …, …, then ….
- 3°. If for some … and all … we have …, where … for …, then … at least for one j.
- 4°. Condition (9), from which … is excluded, does not hold for any pair of indices ….
3. Proof of the Theorems
Lemma 1. (see [6], Lemma 6 and formula (6)). Let …, …, …, …, …, …. Then the equation … has the form …, and the Wronskian of this equation equals …, where …, …, ….
Lemma 2. (Theorem 1 in [18]). Let … be the fundamental matrix of the system …; moreover, let …. Then, for the cogredience and contragredience of the systems …, it is necessary and sufficient that equalities (2) hold, where ….
In fact, the formulation of Lemma 2 differs slightly from the formulation of Theorem 1 in [18], but the proof is exactly the same.
Lemma 3. (Corollary 1 in [18]). Let … be the fundamental matrix of the system …, …, …, and let the functions … be algebraically independent over …. Then any functions that are algebraically independent over … and whose logarithmic derivatives belong to … will be algebraically independent over the field generated over … by the functions (14).
Lemma 4. (see [5], Lemma 6 and [10]). Let … be a differential field with the field of constants …. Let … be the fundamental matrix of the differential equation …, …. Suppose that the field of constants of the differential field … is … and that …. Then either … for some k or, for some indices …, the equality … holds, as well as at least one of the following equalities: …, where …, …, …, ….
Lemma 5. (Lemma 7 in [5]). If, under the assumptions of Lemma 4, …, …, …, then in equalities (15) we have …, …, …, ….
Lemma 6. Let … be the fundamental matrices corresponding to the collections of functions …, …, …, ⋯, … and …, ⋯, …, respectively, where …, …. Then …, where …; besides, the matrices …, … and … are lower triangular with 1's on the main diagonal and do not depend on ….
The proof is similar to that of [13], Lemma 1.
Lemma 7. (Lemma 2 in [13]). Let … and … be the fundamental matrices corresponding to the collections of functions …, … and …, respectively, where …, …. Then …, where … is a lower-triangular matrix with determinant …, independent of ….
Lemma 8. Let … be the fundamental system of solutions of some homogeneous linear differential equation with coefficients from …. Then the functions … and …, …, where …, …, also constitute fundamental systems of solutions of homogeneous linear differential equations of the same order with coefficients from …; moreover, the first equation has no new singular points in ….
The proof is carried out by simple calculations using the statements of Lemmas 6 and 7, respectively.
Lemma 9. Let … be the fundamental system of solutions of some homogeneous linear differential equation with coefficients from …, …, …, …, …, and let … be the fundamental matrices of the corresponding differential equations, …, …. Then ….
Proof. According to Lemma 7, the elements of the matrix … are linear combinations of the elements of the matrix … with coefficients from …, and vice versa, and …. This implies the statement of Lemma 9. □
Note that the analogue of Lemma 9 for the fundamental matrices of the system …, where …, is proved even more easily.
Lemma 10. (see [19], Theorem 1). Let …, …, …, …, let the equation … be linearly homogeneously irreducible, let … be an arbitrary fundamental matrix of this equation, and let … for …, …. Then there exist matrices … and … such that ….
Lemma 11. (Lemma 8 in [6]). Let …, …, …, …, and let … and … be arbitrary fundamental matrices of the differential operators … and …. Then there exists a matrix … such that …, ….
Lemma 12. For the linearly homogeneously irreducible equations … and …, where …, …, …, …, to be cogredient, it is necessary and sufficient that …; and, to be contragredient, it is necessary and sufficient that ….
Lemma 12 was proved by the author (see [5], Lemma 14) in the case when the function g of the form (13) is used in the definition of cogredience and contragredience. But in view of Lemmas 1 and 2, it follows that Lemma 12 is true in the general case.
The next lemma follows from [9], Theorems 2.2 and 2.4.
Lemma 13. Let … and … be irreducible systems of linear homogeneous differential equations of dimensions … and …, with coefficients from …, …, and let … and … be the systems of differential equations obtained from the systems … and … by the change …. Then, if the systems … and … are cogredient or contragredient, the same holds for the systems … and ….
Let us raise the question of whether cogredience or contragredience can occur if, in the substitution, … takes different values for the systems … and …. The answer is given by identities (8) and their generalizations from [6].
The next lemma is a generalization of Lemma 12.
Lemma 14. For the cogredience or contragredience of the linearly homogeneously irreducible equations …, where …, …, it is necessary and sufficient that condition (9) hold, or that … and the conditions formulated in Lemma 12 hold.
Proof. If …, then the statement of Lemma 14 follows from Lemmas 7, 12 and 13. Let further …. In this case, in view of what was said after Example 1, it is enough to check only the necessity of its conditions.
Assume that the equations considered in the lemma are cogredient or contragredient. Repeating the proof of Lemma 12 (i.e., of Lemma 14 in [5]), we get that …, … in the case of cogredience and … in the case of contragredience, …, where ….
As in the proof of Lemma 12, we use the asymptotic expansions of the fundamental matrices of the operators … at infinity: …, where …, … is the primitive root of unity of order …, and … is the matrix whose entries in the first … columns are formal series in negative powers of z, while in the other columns they are series in negative powers of …. Obviously, the numbers …, … are also pairwise distinct.
In the case …, according to (12) and Lemma 2, in equalities (13) we have …, and equalities (2) take the form …, …. Under the cogredience condition, substituting the expansions … into the equality …, we obtain equality (16), where …, …. Further, just as in Lemma 12, it is proved that …, and that the matrix S in equality (16) is block-diagonal.
In the case …, equality (16) turns into …. If …, then all the elements of the matrix S in the first … columns are equal to zero, which contradicts the condition …. From this we again get that …, and the matrix S is block-diagonal. Then equality (16) turns into …. The elements of the matrix on the left-hand side of this equality standing in the first … columns and … rows have the form …, which is impossible for ….
Under the condition of contragredience (assuming inequalities (18) and …), we also arrive at a contradiction.
Let now …, and suppose that for some … condition (18) fails (obviously, there is at most one such value of k). In this case, since …, all the elements of the matrix S in the first … columns and in all rows, except perhaps one, are zero. Since …, it follows from this and from (18) that …, …, …, …, …, …, and the left-hand side of (17) has the form …. Therefore, the numbers … and … are integers, whence ….
Under the condition of contragredience we arrive at a similar expression, obtained by replacing … with …, and condition (9) also holds. Lemma 14 is proved. □
Let us pass to the proof of Theorem 1. The Wronskians … of the operators … have the form (12), where …. By Lemmas 8 and 6, the functions … constitute the fundamental system of solutions of an equation of the form (10), where …, with the Wronskian …. Conditions (11), in view of Lemma 9, also hold for the functions …, and equality (15), by virtue of Lemmas 2, 5 and 6, passes into (2), where g has the form (13). Then, under the conditions of Theorem 1, according to Lemmas 4 and 14, the functions … are algebraically independent over …. If the set … contains functions that, together with the functions … from the statement of Theorem 1, are algebraically independent over …, then we attach them to the set (20). The functions of the resulting set, together with the functions (19), are algebraically independent over … according to Lemma 3. From this the statement of Theorem 1 follows.
Let us pass to the proof of Theorem 2. The number … is the value of the function … at the point …. Recall that by the adjoint to the differential equation (21) we mean the equation … (see, for example, ref. [20], Chapter 2, §5). Similarly, the adjoint differential operators L and … are defined. In particular, the operators … from Lemma 11 are adjoint ([5], Lemma 4).
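In classical terms (the notation below is illustrative and may differ from that of (21)): for $L(y)=\sum_{k=0}^{m}p_k(z)\,y^{(k)}$, the adjoint operator is $\bar L(y)=\sum_{k=0}^{m}(-1)^{k}\bigl(p_k(z)\,y\bigr)^{(k)}$, and the Lagrange identity reads
$$ v\,L(u)-u\,\bar L(v)=\frac{d}{dz}\,P[u,v],\qquad P[u,v]=\sum_{k=1}^{m}\sum_{j=0}^{k-1}(-1)^{j}\,u^{(k-1-j)}\,\bigl(p_k v\bigr)^{(j)}, $$
for arbitrary (sufficiently smooth) functions $u$, $v$.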
For a differential operator L of the form (21), the Lagrange identity … holds, where … are arbitrary analytic functions (see, for example, ref. [20], Chapter 2, §5). This implies:
Lemma 15. If … is a differential operator of order m and …, … are analytic functions such that …, …, then ….
Lemmas 11, 14 and 15 show that condition 2° of Theorem 2, which excludes the contragredience of the equations under consideration, is essential. In general, cogredience of the equations is admissible because, if …, …, …, but …, then the functions … (taking into account the possibility of changing the parameters by integers) belong to the fundamental system of solutions of one equation (see (1)) and are algebraically independent. But it is necessary that the number of such functions does not exceed …; otherwise, they would form a complete fundamental system of solutions and, together with their derivatives, would be bound by the Liouville relation.
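Recall the Liouville (Abel) formula referred to here, written for a monic equation in illustrative notation: if $W$ is the Wronskian of a fundamental system of solutions of $y^{(m)}+p_{m-1}(z)\,y^{(m-1)}+\dots+p_0(z)\,y=0$, then
$$ W'=-p_{m-1}(z)\,W,\qquad\text{that is,}\qquad W(z)=W(z_0)\exp\Bigl(-\int_{z_0}^{z}p_{m-1}(t)\,dt\Bigr); $$
thus the determinant built from a complete fundamental system and its derivatives equals this explicitly given function, which is the relation meant above.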
Summarizing the above, we get that Theorem 2 follows from Lemmas 14 and 15, Theorem 1 and the main theorem of A.B. Shidlovskii ([3], Chapter 3, §13).
Example 2. Consider the functions … satisfying the equations …. Condition (11) for these functions holds according to [16,17]. Let …, …, …, …, …. Then, as follows from Theorem 2, for the algebraic independence of the numbers … the following three conditions are necessary and sufficient:
- 1°. If …, where …, then ….
- 2°. If …, where …, …, then ….
- 3°. If for some … and all … we have …, where …, then … at least for one j.