1. Introduction
The class of totally positive (TP) matrices possesses rich mathematical properties and numerous applications. It has been extensively studied (cf. [1,2,3,4,5,6]) and attracts significant interest across various fields of mathematics, including approximation theory, combinatorics, computer-aided geometric design, and economics. Let us recall that a matrix is TP if all its minors are nonnegative, and strictly totally positive (STP) if all its minors are positive.
Different characterizations of the total positivity of a matrix exist in terms of the sign of certain collections of its minors. They reduce the number of determinants that need to be analyzed to determine whether a matrix is TP or STP. For example, a matrix is STP if and only if all its minors with consecutive rows and columns are positive (see [7,8]). This characterization can be refined further, as pointed out in [4], where it is shown that only minors with consecutive rows and columns that include either the first row or the first column need to be checked. These minors are usually referred to as initial minors. The above observations significantly reduce the complexity of the tests for STP matrices. For later use in this paper, we quote this fundamental result in the next theorem.
Theorem 1 (Theorem 4.1 of [4]). Let A be a real matrix. Then, A is STP if and only if every initial minor of A is positive.

By the Cauchy–Binet formula for determinants, the product of TP matrices is again a TP matrix (see Theorem 3.1 of [1]). This property opened the possibility of factorizing TP matrices into products of simpler TP matrices, an endeavor that has been the focus of an extensive literature [9,10,11,12,13,14]. In this regard, the most definitive result is that a nonsingular TP matrix always admits a bidiagonal decomposition that is fully determined by its initial minors [6].
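The initial-minor test of Theorem 1 can be sketched numerically as follows (a minimal illustration; the helper names `initial_minors` and `is_stp` are ours, not from the paper):

```python
import numpy as np

def initial_minors(A):
    """Yield every initial minor of A: minors with consecutive rows and
    consecutive columns that include the first row or the first column."""
    n, m = A.shape
    for k in range(1, min(n, m) + 1):          # order of the minor
        for i in range(n - k + 1):             # minors touching column 1
            yield np.linalg.det(A[i:i + k, 0:k])
        for j in range(1, m - k + 1):          # minors touching row 1
            yield np.linalg.det(A[0:k, j:j + k])

def is_stp(A, tol=1e-12):
    """A is STP iff all its initial minors are positive (Theorem 1)."""
    return all(d > tol for d in initial_minors(A))

# A Vandermonde matrix at increasing positive nodes is STP ...
V = np.vander([1.0, 2.0, 3.0], increasing=True)
# ... while reversing its rows destroys total positivity.
assert is_stp(V) and not is_stp(V[::-1])
```

Only O(n^2) determinants are tested, instead of the exponentially many minors in the raw definition.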
In [5,6], it is shown that a nonsingular TP matrix $A$ can be written as

$$A = F_{n-1} F_{n-2} \cdots F_1 \, D \, G_1 G_2 \cdots G_{n-1}, \qquad (1)$$

where $D$ is a diagonal matrix whose diagonal entries are called pivots, and $F_i$ (respectively, $G_i$), $i = 1, \ldots, n-1$, are TP, lower (respectively, upper) triangular bidiagonal matrices with unit diagonal. The off-diagonal entries of the factors $F_i$ are usually called multipliers; they are given by quotients of minors of $A$ with consecutive rows and initial columns (see (2)), where $A[\alpha \mid \beta]$ denotes the submatrix of $A$ formed with rows $\alpha$ and columns $\beta$. Similarly, the multipliers of the factors $G_i$ can be obtained by means of the quotient of minors in (2) for the transpose $A^{T}$ of the matrix $A$.
The bidiagonal decomposition (BD) (1) of nonsingular TP matrices is important both for theoretical reasons and for practical numerical applications, since it ensures great accuracy in linear algebra computations involving matrices whose BD can be computed to high relative accuracy (HRA). There is a rich literature providing algorithms that solve algebraic problems related to TP matrices to HRA through their BD (cf. [9,10,11,12,13,14,15]).
Given a basis $F = (f_1, \ldots, f_n)$ of functions defined on an interval $I$ and a set of parameters $X = (x_1, \ldots, x_n)$ with $x_1 < x_2 < \cdots < x_n$ within $I$, the collocation matrix of the basis $F$ at $X$ is defined as

$$M(F; X) := \big(f_j(x_i)\big)_{1 \le i, j \le n}. \qquad (3)$$

The basis $F$ is TP (STP) if, for any $X$ with $x_1 < \cdots < x_n$, the matrix $M(F; X)$ is TP (STP).
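The construction of a collocation matrix can be sketched in code (an illustration; the name `collocation_matrix` is ours, not from the paper):

```python
import numpy as np

def collocation_matrix(F, X):
    """Collocation matrix M(F; X) with entries f_j(x_i): row i holds the
    values of every basis function at the node x_i."""
    return np.array([[f(x) for f in F] for x in X])

# The monomial basis at increasing nodes yields a Vandermonde matrix.
F = [lambda x: 1.0, lambda x: x, lambda x: x**2]
M = collocation_matrix(F, [1.0, 2.0, 3.0])
assert np.allclose(M, np.vander([1.0, 2.0, 3.0], increasing=True))
```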
In this paper, we will show that collocation matrices connect the realm of total positivity with the field of symmetric functions. This relation arises because the initial minors of a collocation matrix are antisymmetric functions of the nodes and, in turn, can be expressed as the product of a Vandermonde determinant and a symmetric function of the nodes. Consequently, each initial minor of a collocation matrix can be computed by evaluating a symmetric function. An interesting problem is to find the symmetric functions that encode the BD of the collocation matrices (3).
As a precedent, it was found in [16] that, surprisingly, the elements of the BD of the Cauchy–polynomial–Vandermonde matrices can be expressed in terms of Schur functions. Later, a systematic line of research was initiated in [15], where explicit formulas in terms of Schur polynomials were provided for the initial minors of collocation matrices of arbitrary polynomial bases. As a result, the BD of any collocation matrix of a polynomial basis can be computed straightforwardly by evaluating Schur polynomials at the nodes. The formulas provided in [15] were also used to determine the maximal interval on which the polynomial bases are STP. Moreover, the techniques developed in [15] can be extended to any non-polynomial basis of functions, provided that the corresponding symmetric functions associated with the initial minors of the collocation matrices can be identified.
Intriguingly, the connection between TP bases and symmetric functions may be extended to various mathematical objects, provided that they are related to collocation matrices through certain limits. These include Wronskian matrices, which are the main focus of this work. In this paper, the initial minors of Wronskian matrices will be expressed as limits of finite differences of certain collocation matrices. This observation allows us to apply the findings of [15] and derive a concise formula for the initial minors of Wronskian matrices.
The paper is organized as follows. Section 2 introduces the necessary concepts to make the article as self-contained as possible. Section 3 shows that any minor with consecutive rows of a Wronskian matrix can be expressed in terms of limits of determinants of collocation matrices at equally spaced nodes. As a consequence, conditions guaranteeing the total positivity of Wronskian matrices are derived, and their minors with consecutive rows and columns are expressed in terms of symmetric functions. In the polynomial case, these minors are written in terms of Schur polynomials in Section 4. The applicability of the obtained formula is illustrated in Section 5 for Bernstein bases and for recursive polynomial bases, such as the Jacobi, Laguerre, Hermite, and Bessel bases. We conclude with an appendix containing the pseudocode of an algorithm for the computation of the minors (4) in the case of polynomial bases.
2. Preliminary Results
Let us consider the monomial polynomial basis $M = (m_0, \ldots, m_{n-1})$ with $m_j(x) := x^j$. The corresponding collocation matrix at $X = (x_1, \ldots, x_n)$ is the well-known Vandermonde matrix $V(x_1, \ldots, x_n) := (x_i^{j-1})_{1 \le i, j \le n}$, and its determinant satisfies

$$\det V(x_1, \ldots, x_n) = \prod_{1 \le i < j \le n} (x_j - x_i). \qquad (5)$$

Given $x$, $h > 0$, and the equally spaced sequence $x_i := x + (i-1)h$, $i = 1, \ldots, n$, the Vandermonde determinant satisfies

$$\det V(x_1, \ldots, x_n) = h^{n(n-1)/2} \prod_{k=1}^{n-1} k!.$$

This result can be extended to the initial minors of Vandermonde matrices. Vandermonde matrices are STP if $0 < x_1 < x_2 < \cdots < x_n$. Thus, we can say that the monomial polynomial basis is STP on the interval $(0, +\infty)$.
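Both the closed form of the Vandermonde determinant and its collapse at equally spaced nodes can be verified numerically (a sketch; helper names are ours):

```python
import numpy as np
from math import factorial, prod

def vandermonde_det(nodes):
    """prod_{i<j} (x_j - x_i): the closed form of the Vandermonde determinant."""
    n = len(nodes)
    return prod(nodes[j] - nodes[i]
                for i in range(n) for j in range(i + 1, n))

# At equally spaced nodes x, x+h, ..., x+(n-1)h, every gap x_j - x_i equals
# (j - i)h, so the determinant collapses to h^(n(n-1)/2) * 1! * 2! * ... * (n-1)!
# independently of x.
x, h, n = 0.7, 0.1, 5
nodes = [x + i * h for i in range(n)]
closed = h ** (n * (n - 1) // 2) * prod(factorial(k) for k in range(1, n))
assert np.isclose(vandermonde_det(nodes), closed)
assert np.isclose(np.linalg.det(np.vander(nodes, increasing=True)), closed)
```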
Now, let us recall that $f(x_1, \ldots, x_n)$ is a symmetric function if $f(x_{\sigma(1)}, \ldots, x_{\sigma(n)}) = f(x_1, \ldots, x_n)$ for any permutation $\sigma$ of the indices $\{1, \ldots, n\}$. On the other hand, $f$ is antisymmetric if $f(x_{\sigma(1)}, \ldots, x_{\sigma(n)}) = \operatorname{sgn}(\sigma)\, f(x_1, \ldots, x_n)$, where $\operatorname{sgn}(\sigma)$ is the signature of $\sigma$, taking the value $+1$ if $\sigma$ is even and $-1$ if $\sigma$ is odd.
Note that any minor of the collocation matrix can be considered as an antisymmetric function of the nodes involved. Moreover, since the Vandermonde determinant is nonzero for different values of the nodes, it is always possible to express any minor of the collocation matrix as the product of a Vandermonde determinant and a symmetric function.
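This factorization can be observed numerically: dividing a collocation determinant by the Vandermonde determinant of its nodes yields a value invariant under any reordering of the nodes (a sketch with an ad hoc basis of our choosing):

```python
import numpy as np
from itertools import permutations
from math import exp, prod

def minor_over_vandermonde(F, nodes):
    """Collocation determinant det(f_j(x_i)) divided by the Vandermonde
    determinant of the nodes; both factors are antisymmetric, so the
    quotient is a symmetric function of the nodes."""
    M = np.array([[f(x) for f in F] for x in nodes])
    V = prod(nodes[j] - nodes[i]
             for i in range(len(nodes)) for j in range(i + 1, len(nodes)))
    return np.linalg.det(M) / V

F = [lambda x: 1.0, lambda x: exp(x), lambda x: x * exp(x)]
nodes = (0.3, 1.1, 2.0)
vals = {round(minor_over_vandermonde(F, p), 9) for p in permutations(nodes)}
assert len(vals) == 1   # the same value for every ordering of the nodes
```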
Given a partition $\lambda = (\lambda_1, \lambda_2, \ldots)$ of size $|\lambda| := \sum_i \lambda_i$ and length $\ell(\lambda)$ (its number of nonzero parts), such that $\ell(\lambda) \le n$, the Jacobi definition of the corresponding Schur polynomial in $n$ variables is, via Weyl's formula,

$$s_\lambda(x_1, \ldots, x_n) := \frac{\det\big(x_i^{\lambda_j + n - j}\big)_{1 \le i, j \le n}}{\det\big(x_i^{\,n - j}\big)_{1 \le i, j \le n}},$$

and, by convention, $s_{\varnothing} := 1$ for the empty partition $\varnothing$.
Schur polynomials are symmetric functions of their arguments. In addition, we now list other well-known properties that will be used in the following sections:
- (i) $s_\lambda(x_1, \ldots, x_n) > 0$ for positive values of $x_1, \ldots, x_n$.
- (ii) $s_\lambda(x_1, \ldots, x_n) = 0$ if $\ell(\lambda) > n$.
- (iii) $s_\lambda$ is a homogeneous function of degree $|\lambda|$, i.e., $s_\lambda(t x_1, \ldots, t x_n) = t^{|\lambda|} s_\lambda(x_1, \ldots, x_n)$.
- (iv) As $\lambda$ runs over all the partitions of size $d$, the corresponding Schur polynomials provide a basis for the space of symmetric homogeneous polynomials of degree $d$. When considering all partitions, Schur polynomials furnish a basis of the space of symmetric functions.
For more details, interested readers are referred to [17].
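The bialternant definition above can be turned directly into a (numerically naive) evaluator; this sketch is ours and is intended only to illustrate the listed properties:

```python
import numpy as np

def schur(partition, xs):
    """Schur polynomial s_lambda(x_1,...,x_n) via the Jacobi bialternant:
    det(x_i^(lambda_j + n - j)) / det(x_i^(n - j)), i.e., the quotient of
    an antisymmetric determinant by the Vandermonde determinant."""
    n = len(xs)
    lam = list(partition) + [0] * (n - len(partition))  # pad with zeros
    num = np.linalg.det([[x ** (lam[j] + n - 1 - j) for j in range(n)]
                         for x in xs])
    den = np.linalg.det([[x ** (n - 1 - j) for j in range(n)] for x in xs])
    return num / den

# s_(1)(x, y) = x + y   and   s_(2,1)(x, y) = x*y*(x + y)
assert np.isclose(schur((1,), [2.0, 3.0]), 5.0)
assert np.isclose(schur((2, 1), [2.0, 3.0]), 30.0)
```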
We finish this section by defining some symmetric functions that will be used in the following:
3. Initial Minors and Total Positivity of Wronskian Matrices
Let $F = (f_1, \ldots, f_n)$ be a system of $n$ functions on $I$. The Wronskian matrix of $F$ at $x \in I$ is defined as

$$W(F)(x) := \big(f_j^{(i-1)}(x)\big)_{1 \le i, j \le n},$$

where $f'$ and $f^{(k)}$, $k \ge 2$, denote the first and the $k$-th derivative of $f$ at $x$.
For a given $k$, we shall denote by $F^{(k)}$ the system formed by the $k$-th derivatives of the functions of $F$ (see (9)). Note that this system need not be a basis, since it is not guaranteed that its functions are linearly independent.
Let us recall that the forward finite-difference approximation of the derivative of a function $f$ at $x$ is given by

$$f'(x) \approx \frac{f(x+h) - f(x)}{h},$$

where $h > 0$ is a small step size. Then, we define the forward finite difference of $f$ as

$$\Delta_h f(x) := \frac{f(x+h) - f(x)}{h},$$

and recursively, for $k \ge 2$, the higher-order difference of $f$:

$$\Delta_h^k f(x) := \Delta_h\big(\Delta_h^{k-1} f\big)(x).$$

The relationship of the higher-order differences with the corresponding derivatives is straightforward and can be expressed as

$$\lim_{h \to 0} \Delta_h^k f(x) = f^{(k)}(x). \qquad (10)$$
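The convergence of the higher-order differences to the derivatives can be illustrated as follows (a sketch; helper names are ours):

```python
from math import sin

def forward_diff(f, h):
    """First-order forward finite difference Delta_h f."""
    return lambda x: (f(x + h) - f(x)) / h

def higher_diff(f, k, h):
    """k-th order forward difference, defined recursively as
    Delta_h^k f = Delta_h(Delta_h^(k-1) f)."""
    for _ in range(k):
        f = forward_diff(f, h)
    return f

# Delta_h^2 sin(x) -> sin''(x) = -sin(x) as h -> 0 (first-order accuracy).
x = 0.8
err = lambda h: abs(higher_diff(sin, 2, h)(x) - (-sin(x)))
assert err(1e-4) < 1e-3 and err(1e-5) < err(1e-4)
```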
The next result demonstrates that the determinant of a Wronskian matrix can be expressed as a limit of collocation matrices.
Proposition 1. Let $F = (f_1, \ldots, f_n)$ be a basis of functions on $I$, and let $W(F)(x)$ be the Wronskian matrix of $F$ at $x \in I$. Then, we have

$$\det W(F)(x) = \lim_{h \to 0} \frac{\det M(F; x_1, \ldots, x_n)}{h^{n(n-1)/2}} \qquad (11)$$

for $x_i := x + (i-1)h$, $i = 1, \ldots, n$.

Proof. By applying elementary properties of determinants, the determinant of the collocation matrix at the equally spaced nodes can be expressed in terms of the higher-order differences $\Delta_h^k f_j(x)$ (see (12)). Formula (11) follows as we take the limit $h \to 0$ in (12) and consider (10). □
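Proposition 1 can be checked numerically. Under our reading of (11), dividing the collocation determinant at equally spaced nodes by h^(n(n-1)/2) recovers the Wronskian determinant as h shrinks (an illustrative sketch with functions of our choosing):

```python
import numpy as np
from math import exp, sin, cos

def colloc_det(F, x, h):
    """Determinant of the collocation matrix of F at the equally spaced
    nodes x, x+h, ..., x+(n-1)h."""
    n = len(F)
    return np.linalg.det([[f(x + i * h) for f in F] for i in range(n)])

# F = (e^x, sin x):  det W(F)(x) = e^x*cos(x) - e^x*sin(x).
F = [exp, sin]
x, n = 0.4, 2
exact = exp(x) * cos(x) - exp(x) * sin(x)
approx = lambda h: colloc_det(F, x, h) / h ** (n * (n - 1) // 2)
assert abs(approx(1e-5) - exact) < 1e-3
```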
With similar reasoning, the previous result can be extended to any minor of the Wronskian matrix with consecutive rows.
Proposition 2. Let $F = (f_1, \ldots, f_n)$ be a basis of functions on $I$, and let $W(F)(x)$ be the Wronskian matrix of $F$ at $x \in I$. Then, any minor of $W(F)(x)$ with consecutive rows can be expressed as a limit, analogous to (11), of the corresponding minors of collocation matrices at the equally spaced nodes $x_i := x + (i-1)h$ (see (13)).

Using the previous result, conditions guaranteeing the total positivity of a Wronskian matrix can be derived.
Theorem 2. Let $F = (f_1, \ldots, f_n)$ be a basis of functions on $I$, and let the derivative systems $F^{(k)}$ be defined as in (9). If the Wronskian matrix of $F$ at $x \in I$ is nonsingular, and the systems $F^{(k)}$ are TP in a neighborhood of $x$, then $W(F)(x)$ is TP.

Proof. According to Theorem 2.3 of [18], we only have to check that all minors of $W(F)(x)$ with consecutive rows are nonnegative. The condition that the systems $F^{(k)}$ are TP near $x$ implies that, for any admissible choice of row and column indices, the determinants in (13) at the nodes $x_i = x + (i-1)h$ are nonnegative, and they remain so when $h \to 0^{+}$. □
Now, we shall express the initial minors of the Wronskian matrix in terms of symmetric functions.
Theorem 3. Let $F = (f_1, \ldots, f_n)$ be a basis of functions on $I$, and let $W(F)(x)$ be the Wronskian matrix of $F$ at $x \in I$. Then, the initial minors of $W(F)(x)$ can be expressed in terms of the symmetric functions in (8) (see (14)).

Proof. By applying (8) in Formula (13) for the sequence $x_i = x + (i-1)h$, and taking the limit $h \to 0$, we deduce the expression for the initial minors with consecutive columns, where in the last equality we have used (5). The expression for the remaining initial minors follows similarly. □
Starting from this point, the discussion will be narrowed to polynomial bases.
4. Initial Minors and Total Positivity of Wronskian Matrices for Polynomial Bases
Let $\mathbf{P}^n$ be the space of polynomials of degree not greater than $n$, defined on $I$, and let $P = (p_0, \ldots, p_n)$ be a basis of $\mathbf{P}^n$. Let $A$ be the change-of-basis matrix from the monomial basis $M = (m_0, \ldots, m_n)$, with $m_j(x) := x^j$, $j = 0, \ldots, n$. The first derivatives of the polynomials in $P$ form a system $P'$ of polynomials of degree not greater than $n - 1$, whose change-of-basis matrix is obtained from $A$. Higher derivative systems $P^{(k)}$ can be defined in a similar fashion from the matrix $A$; the corresponding change-of-basis matrices are those appearing in (16).
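Assuming, purely for illustration, that each column of A stores the monomial coefficients of one basis polynomial (the paper fixes its own convention), the derivative system can be obtained by applying a differentiation matrix:

```python
import numpy as np

def diff_matrix(n):
    """D maps monomial coefficients (c_0,...,c_n) of p to those of p':
    the coefficient of x^(k-1) in p' is k*c_k."""
    D = np.zeros((n + 1, n + 1))
    for k in range(1, n + 1):
        D[k - 1, k] = k
    return D

def derivative_basis(A):
    """Monomial coefficients of the derivatives of the basis: since
    differentiation acts column-by-column, P' corresponds to D @ A.
    (Column-as-coefficients is our assumed convention.)"""
    return diff_matrix(A.shape[0] - 1) @ A

# Degree-2 Bernstein polynomials: columns are coefficients in 1, x, x^2.
A = np.array([[1.0, 0.0, 0.0],
              [-2.0, 2.0, 0.0],
              [1.0, -2.0, 1.0]])   # (1-x)^2, 2x(1-x), x^2
P1 = derivative_basis(A)
# d/dx (1-x)^2 = -2 + 2x  -> first column (-2, 2, 0)
assert np.allclose(P1[:, 0], [-2.0, 2.0, 0.0])
```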
In [15], it was shown how to express the BD of the collocation matrix of a polynomial basis in terms of Schur polynomials and some minors of $A$. Specifically,
Schur polynomials are naturally labeled by partitions, and their product rule and other properties are easily stated in terms of them. Thus, for convenience, it is useful to write the linear combination appearing in (17) in terms of partitions. We shall consider the partitions $\lambda$ built from the summation indices of (17); note that, by the ordering of these indices, each $\lambda$ is a well-defined partition, and we shall index the minors of $A$ by such partitions. Be aware that, since the summation indices are bounded, the corresponding partitions have at most $j$ parts of bounded length each; in other words, the sum in (17) runs over all Young diagrams that fit in a rectangular box. Taking this notation into account, (17) can be rewritten in terms of partitions and, for polynomial functions, the symmetric functions in (8) admit the explicit expressions (19).
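The Young diagrams fitting in a rectangular box can be enumerated as follows (a generic helper of ours; the box dimensions relevant to (17) depend on the minor under consideration):

```python
def partitions_in_box(rows, cols):
    """All partitions (Young diagrams) with at most `rows` parts, each part
    at most `cols`: weakly decreasing tuples drawn from {0, ..., cols}."""
    def gen(k, bound):
        if k == 0:
            yield ()
            return
        for first in range(bound, -1, -1):
            for rest in gen(k - 1, first):
                yield (first,) + rest
    # strip trailing zeros so each partition appears in reduced form
    return [tuple(p for p in lam if p > 0) for lam in gen(rows, cols)]

# A 2 x 2 box contains 6 diagrams: (), (1), (2), (1,1), (2,1), (2,2).
box = partitions_in_box(2, 2)
assert len(box) == 6 and (2, 1) in box
```

The count is the binomial coefficient C(rows + cols, rows), which grows quickly; this is the combinatorial size of the sums in (17).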
Using (19), it is possible to find compact formulae for the initial minors of the Wronskian matrix of a polynomial basis. They are shown in the next result.

Theorem 4. The initial minors of the Wronskian matrix admit the expressions (20) and (21), where the coefficients involved are the values defined in (7).
Proof. First, we have to evaluate (19) at a single node. The value of a Schur polynomial at a single node can be found by means of

$$s_\lambda(x) = \begin{cases} x^{|\lambda|}, & \ell(\lambda) \le 1, \\ 0, & \ell(\lambda) > 1, \end{cases}$$

which follows from property (ii) of Section 2. With this, the symmetric functions in (19), evaluated at a single node, take explicit values. Finally, by inserting these into (14), we obtain the expressions for the minors of the Wronskian matrices. □
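The single-node evaluation of Schur polynomials used in the proof can be sketched as follows (the helper name is ours):

```python
def schur_single_node(partition, x):
    """s_lambda evaluated at one variable: x^|lambda| when lambda has at
    most one nonzero part, and 0 otherwise (property (ii) of Section 2)."""
    parts = [p for p in partition if p > 0]
    if len(parts) > 1:
        return 0.0
    return x ** sum(parts)

assert schur_single_node((3,), 2.0) == 8.0
assert schur_single_node((2, 1), 2.0) == 0.0
assert schur_single_node((), 5.0) == 1.0   # empty partition: s_0 = 1
```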
Using (20) and (21), we can find sufficient conditions for the total positivity of the Wronskian matrix of a polynomial basis.
Theorem 5. Let A be the matrix of change of basis from the monomial basis to a polynomial basis P.
- (i)
If A is TP then is TP for .
- (ii)
If is TP then is TP for .
Proof. (i) If $A$ is TP, the matrices in (16) are also TP. Now, since all the minors of these matrices and the quantities in (7) are nonnegative, it follows from (20) and (21) that the initial minors of the Wronskian matrix are nonnegative for $x \ge 0$. (ii) Let us note that, for any matrix $M$ and partition $\lambda$, the minor of $M$ indexed by $\lambda$ coincides with a minor of the matrix considered in (ii). If that matrix is TP, then the minors appearing in (20) are nonnegative and, similarly, so are those appearing in (21). So, the Wronskian matrix is TP at $x$. □
In Algorithm A1 (see Appendix A), an implementation of Formula (20) is displayed. Note that an analogous algorithm implementing Formula (21) can be obtained similarly.