Article

On Fourier Series in the Context of Jacobi Matrices

by José M. A. Matos 1,*, Paulo B. Vasconcelos 2 and José A. O. Matos 2

1 Instituto Superior de Engenharia do Instituto Politécnico do Porto, Centro de Matemática da Universidade do Porto, Rua Dr. António Bernardino de Almeida, 431, 4249-015 Porto, Portugal
2 Faculdade de Economia da Universidade do Porto, Centro de Matemática da Universidade do Porto, Rua Dr. Roberto Frias s/n, 4200-464 Porto, Portugal
* Author to whom correspondence should be addressed.
Axioms 2024, 13(9), 581; https://doi.org/10.3390/axioms13090581
Submission received: 31 July 2024 / Revised: 23 August 2024 / Accepted: 24 August 2024 / Published: 27 August 2024
(This article belongs to the Special Issue Advanced Approximation Techniques and Their Applications, 2nd Edition)

Abstract: We investigate the properties of matrices that emerge from the application of Fourier series to Jacobi matrices. Specifically, we focus on functions defined by the coefficients of a Fourier series expressed in orthogonal polynomials. In the operational formulation of integro-differential problems, these infinite matrices play a fundamental role. We derive precise calculation formulas for their elements, enabling exact computation of these operational matrices. Numerical results illustrate the effectiveness of our approach.
MSC:
33C45; 42A16; 15A16; 65M70; 65N35

1. Introduction

This work presents a comprehensive investigation of matrices generated through the application of Fourier series to Jacobi matrices, with a particular focus on functions defined by Fourier series coefficients expressed in orthogonal polynomials. Precise formulas for computing the elements of these matrices are derived, which are crucial for the operational formulation of integro-differential problems. The fundamental properties of these matrices are highlighted, emphasizing their algebraic representation within functional operators. In numerical analysis, this representation gives rise to operational methods, among which spectral methods are a widely used class of numerical tools for approximating solutions of integro-differential problems.
Let $\mathcal{D}\colon E \to F$ be a linear operator, with $E$ and $F$ being adequate function spaces, and let
$$\mathcal{D}y = f \qquad\qquad (1)$$
be the problem to be solved, where $f \in F$ is given and $y \in E$ is the sought solution. Assuming that $\mathbb{P}$, the space of all real polynomials, is dense in $E$, then $y$ can be represented, at least approximately, by a Fourier series:
$$y = \mathcal{P}\mathbf{a} \equiv \sum_{i=0}^{\infty} a_i P_i,$$
where $\mathcal{P} = \{P_i\}_{i=0}^{\infty} \subset \mathbb{P}$ is an orthogonal polynomial sequence (OPS) and $\mathbf{a} = [a_0, a_1, \ldots]^T$ is the coefficient vector. The orthogonality of $\mathcal{P}$ ensures the existence of an inner product such that $\langle P_i, P_j \rangle = \mu_i \delta_{i,j}$, where $\delta_{i,j}$ is the Kronecker delta, and $a_i = \frac{1}{\mu_i}\langle P_i, y \rangle$ are the Fourier coefficients of $y$ [1]. In addition, if $\mathbb{P}$ is also dense in $F$, writing $f = \mathcal{P}\mathbf{f}$, then the problem of finding $y$ in (1), given $\mathcal{D}$ and $f$, can be reduced to building $\mathbb{D} = [d_{i,j}]_{i,j=0}^{\infty}$, the matrix operator such that
$$d_{i,j} = \frac{1}{\mu_i}\langle P_i, \mathcal{D}P_j \rangle \qquad\qquad (2)$$
and solving the algebraic problem $\mathbb{D}\mathbf{a} = \mathbf{f}$. Here, $\mathbf{f}$ corresponds, naturally, to the vector of Fourier coefficients of the function $f$.
Building $\mathbb{D}$ is often a challenging task, involving numerically unstable and computationally heavy operations. Many mathematical problems can be modeled using derivatives and primitives, together with algebraic operations. For those cases, the matrix $\mathbb{D}$ can be built from three building blocks: the matrix $\mathsf{M}$, representing multiplication by $x$ in $\mathbb{P}$; the matrix $\mathsf{N}$, representing differentiation; and the matrix $\mathsf{O}$, representing primitivation. Formally,
$$x\mathcal{P} = \mathcal{P}\mathsf{M}, \qquad \frac{d}{dx}\mathcal{P} = \mathcal{P}\mathsf{N} \qquad \text{and} \qquad \int\mathcal{P}\,dx = \mathcal{P}\mathsf{O}. \qquad\qquad (3)$$
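To make the operational setting concrete, here is a minimal numpy sketch (ours, not code from the paper) of the truncated multiplication matrix $\mathsf{M}$ in (3) for the first-kind Chebyshev basis, where $xT_0 = T_1$ and $xT_j = \frac{1}{2}(T_{j+1} + T_{j-1})$ for $j \ge 1$:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_M(n):
    """Truncated multiplication-by-x matrix for the Chebyshev T basis:
    column j holds the T-coefficients of x*T_j."""
    M = np.zeros((n, n))
    M[1, 0] = 1.0                 # x*T_0 = T_1
    for j in range(1, n):
        M[j - 1, j] = 0.5         # x*T_j = (T_{j+1} + T_{j-1})/2
        if j + 1 < n:
            M[j + 1, j] = 0.5
    return M

n = 8
M = cheb_M(n)
a = np.zeros(n)
a[3] = 1.0                        # y = T_3
xa = M @ a                        # coefficients of x*y
ref = C.chebmul([0, 1], [0, 0, 0, 1])   # x*T_3 = (T_2 + T_4)/2
assert np.allclose(xa[:len(ref)], ref) and np.allclose(xa[len(ref):], 0)
```

If $\mathbf{a}$ holds the Chebyshev coefficients of $y$, then $\mathsf{M}\mathbf{a}$ holds those of $xy$, which is exactly the role of $\mathsf{M}$ in (3).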
The linear integro-differential operator $\mathcal{D}$ can be cast in the form
$$\mathcal{D} = \sum_{k=-\eta}^{\nu} f_k\,\frac{d^k}{dx^k},$$
with $\nu, \eta \ge 0$, where the negative indices represent primitives and the $f_k$ are coefficient functions. Then, in matrix form,
$$\mathbb{D} = \sum_{k=-\eta}^{\nu} f_k(\mathsf{M})\,\mathsf{N}^k,$$
where $\mathsf{N}^k$ corresponds to $\mathsf{O}^{-k}$ for negative powers. If $f_k = \sum_{j=0}^{\infty} f_{k,j}P_j$ are functions given by their Fourier series, then
$$f_k(\mathsf{M}) = \sum_{j=0}^{\infty} f_{k,j}\,P_j(\mathsf{M}).$$
The paper is structured as follows: in Section 2 we derive properties of the matrix polynomials $f_k(\mathsf{M})$; in Section 3 we explore functions of the multiplication matrix $\mathsf{M}$; in Section 4 numerical results are provided to demonstrate the effectiveness of the proposed approach, showcasing the practical applicability of the derived formulas; and we conclude in Section 5.

2. Polynomials of Jacobi Matrices

Using the well-known properties of orthogonal polynomials, we derive properties of matrix polynomials. In this context, we consider matrix polynomials as matrices transformed by polynomials with real scalar coefficients.

2.1. Orthogonal Polynomials

One characteristic property of orthogonal polynomials is the three-term recurrence relation. Let $\mathcal{P} = \{P_i\}_{i=0}^{\infty}$ be an OPS satisfying
$$P_{-1} = 0, \quad P_0 = 1, \quad xP_j = \alpha_j P_{j+1} + \beta_j P_j + \gamma_j P_{j-1}, \quad j = 0, 1, \ldots, \qquad\qquad (4)$$
and let $\mathsf{A} = [a_{i,k}]_{n\times n}$ be a square $n \times n$ real or complex matrix. We call $\mathsf{P}_j = P_j(\mathsf{A})$, $j = 0, 1, \ldots$, the set of matrices defined by (4):
$$\mathsf{P}_{j+1} = \frac{1}{\alpha_j}\left(\mathsf{A} - \beta_j\mathsf{I}\right)\mathsf{P}_j - \frac{\gamma_j}{\alpha_j}\,\mathsf{P}_{j-1}, \quad j = 0, 1, \ldots, \qquad\qquad (5)$$
with $\mathsf{P}_{-1} = 0$ and $\mathsf{P}_0 = \mathsf{I}$ the null and the identity $n \times n$ matrices, respectively. Of course, since $\mathsf{A}$ and $\mathsf{P}_j$ commute ($\mathsf{P}_j$ is a linear combination of powers of $\mathsf{A}$), we can equivalently consider
$$\mathsf{P}_{j+1} = \frac{1}{\alpha_j}\,\mathsf{P}_j\left(\mathsf{A} - \beta_j\mathsf{I}\right) - \frac{\gamma_j}{\alpha_j}\,\mathsf{P}_{j-1}, \quad j = 0, 1, \ldots. \qquad\qquad (6)$$
From (5), the rows of $\mathsf{P}_{j+1}$ can be evaluated by
$$e_i^T\mathsf{P}_{j+1} = \frac{1}{\alpha_j}\sum_{k=0}^{n-1} a_{i,k}\,e_k^T\mathsf{P}_j - \frac{\beta_j}{\alpha_j}\,e_i^T\mathsf{P}_j - \frac{\gamma_j}{\alpha_j}\,e_i^T\mathsf{P}_{j-1}, \quad i = 0, \ldots, n-1,$$
and from (6), the columns of $\mathsf{P}_{j+1}$ are given by
$$\mathsf{P}_{j+1}e_k = \frac{1}{\alpha_j}\sum_{i=0}^{n-1} a_{i,k}\,\mathsf{P}_je_i - \frac{\beta_j}{\alpha_j}\,\mathsf{P}_je_k - \frac{\gamma_j}{\alpha_j}\,\mathsf{P}_{j-1}e_k, \quad k = 0, \ldots, n-1.$$
For our purpose, we are interested in the case $\mathsf{A} = \mathsf{M}$, defined by (2) and (3), that is, $\mathsf{M} = [m_{i,j}]_{i,j=0}^{\infty}$ with $m_{i,j} = \frac{1}{\mu_i}\langle P_i, xP_j\rangle$, and so, from (4),
$$\mathsf{M} = \begin{bmatrix} \beta_0 & \gamma_1 & & \\ \alpha_0 & \beta_1 & \gamma_2 & \\ & \alpha_1 & \beta_2 & \ddots \\ & & \alpha_2 & \ddots \end{bmatrix} \qquad\qquad (7)$$
is the Jacobi matrix associated with $\mathcal{P}$.
When considering the powers of such matrices, it is important to remember that they are infinite matrices. However, since each row and column of $\mathsf{M}$ contains only a finite number of non-zero elements, their products are well defined. Consequently, each power $\mathsf{M}^k$ with $k \in \mathbb{N}_0$ is well defined and has only a finite number of non-zero elements in each row and column.
In computational applications, to deal with $\mathsf{M}$ we have to truncate it to a square matrix $\mathsf{M}_{n\times n}$ for some $n \in \mathbb{N}$. One consequence of this truncation is that elements outside the main $(n-k)\times(n-k)$ block of $\mathsf{M}^k$ can differ from the corresponding ones in $(\mathsf{M}_{n\times n})^k$. In practice, if we need to evaluate the main $n \times n$ block of $\mathsf{M}^k$, we must evaluate $(\mathsf{M}_{(n+k)\times(n+k)})^k$ and truncate the resulting square block.
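The following small numpy experiment (ours, using the Chebyshev-T Jacobi matrix as an arbitrary test case) illustrates the caveat: powering a truncated block and truncating a larger powered block agree only on the main $(n-k)\times(n-k)$ block:

```python
import numpy as np

def jacobi_block(m, alpha, beta, gamma):
    """m x m block of the Jacobi matrix (7) from the recurrence coefficients."""
    M = np.zeros((m, m))
    for j in range(m):
        M[j, j] = beta(j)
        if j + 1 < m:
            M[j + 1, j] = alpha(j)       # sub-diagonal
            M[j, j + 1] = gamma(j + 1)   # super-diagonal
    return M

# Chebyshev-T recurrence coefficients
alpha = lambda j: 1.0 if j == 0 else 0.5
beta = lambda j: 0.0
gamma = lambda j: 0.5

n, k = 6, 3
A = np.linalg.matrix_power(jacobi_block(n, alpha, beta, gamma), k)
B = np.linalg.matrix_power(jacobi_block(n + k, alpha, beta, gamma), k)[:n, :n]
print(np.allclose(A[:n - k, :n - k], B[:n - k, :n - k]))  # True
print(np.allclose(A, B))                                  # False: border differs
```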
In this work, we aim to evaluate $P_j(\mathsf{M})$, where the $P_j$ are the polynomials defined by (4) and $\mathsf{M}$ is the infinite Jacobi matrix defined by (7). As just explained, rather than performing matrix multiplications, we operate with the finite number of non-zero elements in each row and column of these matrices.

2.2. Linearization Coefficients

From now on, $\mathsf{P}_j = P_j(\mathsf{M})$, $j = 0, 1, \ldots$, are the matrices defined by (5), with $\mathsf{A} = \mathsf{M}$ defined by (7). In that case, the $\mathsf{P}_j$ are matrices of linearization coefficients, in the sense of:
Proposition 1. 
If $\mathcal{P} = \{P_j\}_{j=0}^{\infty}$ is an OPS, $\mathsf{M}$ is its associated Jacobi matrix and $\mathsf{P}_j = P_j(\mathsf{M})$, $j = 0, 1, \ldots$, then
$$\mathsf{P}_j = [p_{jik}]_{i,k=0}^{\infty}, \qquad p_{jik} = \frac{1}{\mu_i}\langle P_i, P_jP_k\rangle.$$
Proof. 
Since, formally,
$$x\mathcal{P} = \mathcal{P}\mathsf{M}, \qquad m_{ik} = \frac{1}{\mu_i}\langle P_i, xP_k\rangle,$$
then $x^j\mathcal{P} = \mathcal{P}\mathsf{M}^j$, $j = 0, 1, \ldots$, and by linearity
$$P_j(x)\,\mathcal{P} = \mathcal{P}\,P_j(\mathsf{M}), \qquad p_{jik} = \frac{1}{\mu_i}\langle P_i, P_jP_k\rangle. \qquad \square$$
In the context of orthogonal polynomial theory, those values $p_{jik}$ are called linearization coefficients [2,3,4] because they solve the linearization problem
$$P_jP_k = \sum_{i=|j-k|}^{j+k} p_{jik}\,P_i. \qquad\qquad (8)$$
It has been proven [2,3] that when $\mathcal{P}$ is a classical OPS the coefficients satisfy a second-order recurrence relation in the index $k$. Furthermore, conditions have been established to ensure that $\mathsf{P}_j$ has only non-negative entries [4].
For specific cases, some authors have derived explicit formulas for these coefficients. Notable examples include the Legendre polynomials [5,6], the Hermite polynomials [7], and the Jacobi polynomials [8]. However, the formulas in some of these cases involve hypergeometric functions, which are not practical for numerical purposes. The linearization coefficients of the Chebyshev polynomials are particularly noteworthy due to their simplicity. Using the standard notation for the first-, second-, third-, and fourth-kind Chebyshev polynomials, we have the following results [9,10] for $j = 1, 2, \ldots$:
$$T_jT_k = \frac{1}{2}\left(T_{j+k} + T_{|j-k|}\right), \qquad U_jU_k = \frac{1}{2}\sum_{i=|j-k|}^{j+k}\left(1 + (-1)^{j+k-i}\right)U_i,$$
$$V_jV_k = \sum_{i=|j-k|}^{j+k}(-1)^{j+k+i}\,V_i, \qquad W_jW_k = \sum_{i=|j-k|}^{j+k} W_i,$$
and, in all four cases, $P_0P_k = P_k$. In Figure 1, we can see the sparse structure of the $\mathsf{P}_{15}$ matrices for those cases.
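As an illustration (our sketch, for the first-kind Chebyshev case), the matrices $\mathsf{P}_j$ can be generated directly from recurrence (5), and their columns reproduce the linearization coefficients above:

```python
import numpy as np

def cheb_T_of_M(j, N):
    """T_j(M) on an N x N block via recurrence (5); for Chebyshev T:
    alpha_0 = 1, alpha_r = gamma_r = 1/2 (r >= 1), beta_r = 0."""
    M = np.zeros((N, N))
    M[1, 0] = 1.0
    for r in range(1, N):
        M[r - 1, r] = 0.5
        if r + 1 < N:
            M[r + 1, r] = 0.5
    Pm1, P = np.zeros((N, N)), np.eye(N)   # P_{-1} = 0, P_0 = I
    for r in range(j):
        if r == 0:
            Pm1, P = P, M @ P              # P_1 = M (alpha_0 = 1)
        else:
            Pm1, P = P, 2 * M @ P - Pm1    # P_{r+1} = 2 M P_r - P_{r-1}
    return P

# Column k of T_j(M) holds the coefficients of T_j * T_k (Proposition 1):
j, k, N = 5, 3, 16
col = cheb_T_of_M(j, N)[:, k]
ref = np.zeros(N)
ref[j + k] += 0.5                          # T_j T_k = (T_{j+k} + T_{|j-k|})/2
ref[abs(j - k)] += 0.5
assert np.allclose(col, ref)
```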
This band structure, characterized by zeros outside the $\pm j$ diagonals and within the triangular block $i = 0, \ldots, |j-k|-1$ of the matrices $\mathsf{P}_j$, is not exclusive to Chebyshev polynomials. In fact, it is a consequence of the general Formula (8), inherited from the three-term recurrence relation (4).
Proposition 2. 
If $\mathcal{P} = \{P_j\}_{j=0}^{\infty}$ is an OPS satisfying (8), $\mathsf{M}$ is its associated Jacobi matrix and $\mathsf{P}_j = P_j(\mathsf{M})$, $j = 0, 1, \ldots$, then $\mathsf{P}_j = [p_{jik}]_{i,k=0}^{\infty}$, with
$$p_{jik} = \begin{cases} 1, & i = j,\ k = 0, \\ 0, & i > j + k, \\ 0, & i < |j - k|. \end{cases}$$
Proof. 
Considering $k = 0$ in Equation (8) results in $P_jP_0 = p_{jj0}P_j$, and since $P_0 = 1$ we obtain $p_{jj0} = 1$. The proof ends considering that, for any $k > 0$, Equation (8) is equivalent to
$$P_jP_k = \sum_{i=0}^{\infty} p_{jik}\,P_i,$$
with $p_{jik} = 0$ for $i > j+k$ and for $i < |j-k|$. □
One immediate consequence of those triangular null blocks is that, for large enough $j$, they include entire square main blocks.
Corollary 1. 
Let $\mathsf{P}_j = P_j(\mathsf{M})$ be defined as in Proposition 2, and let $(\mathsf{P}_j)_{n\times n} = [p_{jik}]_{i,k=0}^{n}$ be a square main block. If $j > 2n$, then $(\mathsf{P}_j)_{n\times n} = 0$ is a null matrix.
Proof. 
Proposition 2 implies that $p_{jik} = 0$ for any pair $i, k$ such that $i + k < j$, and this is valid if $i, k \le n$ and $j > 2n$. □
Three additional consequences of Proposition 1 are as follows:
$$p_{kij} = p_{jik}, \qquad p_{ijk} = \frac{\mu_i}{\mu_j}\,p_{jik}, \qquad p_{jki} = \frac{\mu_i}{\mu_k}\,p_{jik}, \qquad i, j, k = 0, 1, \ldots.$$
The first one, meaning that each column $j$ of a matrix $\mathsf{P}_k$ equals the column $k$ of $\mathsf{P}_j$,
$$\mathsf{P}_k e_j = \mathsf{P}_j e_k, \qquad k, j = 0, 1, \ldots,$$
is a general property. The other two, relating each row $j$ of $\mathsf{P}_i$ with row $i$ of $\mathsf{P}_j$,
$$e_j^T\mathsf{P}_i = \frac{\mu_i}{\mu_j}\,e_i^T\mathsf{P}_j, \qquad i, j = 0, 1, \ldots,$$
and $\mathsf{P}_j$ with its transpose,
$$e_k^T\mathsf{P}_je_i = \frac{\mu_i}{\mu_k}\,e_i^T\mathsf{P}_je_k, \qquad i, j, k = 0, 1, \ldots,$$
are both verified if the associated inner product satisfies the quite general property $\langle P_i, P_jP_k\rangle = \langle P_j, P_iP_k\rangle$.
Next, we study more general properties of the matrices $\mathsf{P}_j$, associated with generic OPSs, not limited to the classical ones.

2.3. General Properties of P j Matrices

First of all, we can verify that any re-normalization of the polynomial sequence leads to similar matrices.
Proposition 3. 
Let $\hat{\mathcal{P}} = \{\hat{P}_j\}_{j=0}^{\infty}$ be such that $\hat{P}_j = \lambda_jP_j$, with $\lambda_0 = 1$ and $\lambda_j \neq 0$, $j = 1, 2, \ldots$; and let $\mathsf{M}$ and $\hat{\mathsf{M}}$ be the Jacobi matrices associated with $\mathcal{P}$ and $\hat{\mathcal{P}}$, respectively. Then, $\hat{\mathsf{P}}_j = \hat{P}_j(\hat{\mathsf{M}})$ and $\mathsf{P}_j = P_j(\mathsf{M})$ are related by
$$\hat{\mathsf{P}}_j = \lambda_j\,\mathsf{D}^{-1}\mathsf{P}_j\mathsf{D},$$
with $\mathsf{D} = \operatorname{diag}\left(\{\lambda_j\}_{j=0}^{\infty}\right)$.
Proof. 
From (4) we obtain that
$$\hat{P}_{-1} = 0, \quad \hat{P}_0 = 1, \quad x\hat{P}_j = \hat{\alpha}_j\hat{P}_{j+1} + \hat{\beta}_j\hat{P}_j + \hat{\gamma}_j\hat{P}_{j-1}, \quad j = 0, 1, \ldots, \qquad\qquad (10)$$
with
$$\hat{\alpha}_j = \frac{\lambda_j}{\lambda_{j+1}}\,\alpha_j, \qquad \hat{\beta}_j = \beta_j, \qquad \hat{\gamma}_{j+1} = \frac{\lambda_{j+1}}{\lambda_j}\,\gamma_{j+1}, \qquad j = 0, 1, \ldots.$$
So, $\hat{\mathsf{M}} = \mathsf{D}^{-1}\mathsf{M}\mathsf{D}$. Furthermore, $\hat{\mathsf{P}}_{-1} = \mathsf{P}_{-1} = 0$ and $\hat{\mathsf{P}}_0 = \mathsf{P}_0 = \mathsf{I}$, both satisfying (10), and
$$\hat{\mathsf{P}}_{j+1} = \frac{1}{\hat{\alpha}_j}\left(\hat{\mathsf{M}} - \hat{\beta}_j\mathsf{I}\right)\hat{\mathsf{P}}_j - \frac{\hat{\gamma}_j}{\hat{\alpha}_j}\,\hat{\mathsf{P}}_{j-1} = \frac{\lambda_{j+1}}{\lambda_j\alpha_j}\,\mathsf{D}^{-1}\left(\mathsf{M} - \beta_j\mathsf{I}\right)\mathsf{D}\,\hat{\mathsf{P}}_j - \frac{\lambda_{j+1}\gamma_j}{\lambda_{j-1}\alpha_j}\,\hat{\mathsf{P}}_{j-1}.$$
Then, by the induction hypothesis,
$$\hat{\mathsf{P}}_{j+1} = \lambda_{j+1}\,\mathsf{D}^{-1}\left(\frac{1}{\alpha_j}\left(\mathsf{M} - \beta_j\mathsf{I}\right)\mathsf{P}_j - \frac{\gamma_j}{\alpha_j}\,\mathsf{P}_{j-1}\right)\mathsf{D} = \lambda_{j+1}\,\mathsf{D}^{-1}\mathsf{P}_{j+1}\mathsf{D}. \qquad \square$$
Proposition 3 can be utilized to establish a connection between the matrices $\mathsf{P}_j$ associated with any generic OPS and the matrices $\hat{\mathsf{P}}_j$ linked to the respective monic orthogonal polynomial sequence, or to the corresponding orthonormal polynomial sequence.
Corollary 2. 
Let $\mathcal{P} = \{P_j\}_{j=0}^{\infty}$ be an OPS with leading coefficients $c_j$ (the coefficient of $x^j$ in $P_j$), and let $\hat{\mathcal{P}} = \{\hat{P}_j\}_{j=0}^{\infty}$ be the monic OPS $\hat{P}_j = \frac{1}{c_j}P_j$; then
$$\hat{\mathsf{P}}_j = \frac{1}{c_j}\,\mathsf{D}^{-1}\mathsf{P}_j\mathsf{D},$$
where $\mathsf{D}$ is the diagonal matrix $\mathsf{D} = \left[c_j^{-1}\,\delta_{i,j}\right]_{i,j=0}^{\infty}$.
Corollary 3. 
Let $\mathcal{P} = \{P_j\}_{j=0}^{\infty}$ be an OPS with polynomial norms $\|P_j\|^2 = \mu_j = \langle P_j, P_j\rangle$, and let $\hat{\mathcal{P}} = \{\hat{P}_j\}_{j=0}^{\infty}$ be the orthonormal polynomial sequence $\hat{P}_j = \frac{1}{\sqrt{\mu_j}}P_j$; then
$$\hat{\mathsf{P}}_j = \frac{1}{\sqrt{\mu_j}}\,\mathsf{D}^{-1}\mathsf{P}_j\mathsf{D},$$
with $\mathsf{D} = \left[\mu_j^{-1/2}\,\delta_{i,j}\right]_{i,j=0}^{\infty}$.
As pointed out in [11], the Jacobi matrix $\mathsf{M}$ associated with any OPS is similar to the symmetric matrix $\mathsf{J}$ associated with the corresponding orthonormal sequence. Indeed, $\mathsf{J} = \mathsf{D}^{-1}\mathsf{M}\mathsf{D}$ is the symmetric matrix
$$\mathsf{J} = \begin{bmatrix} \beta_0 & \rho_0 & & \\ \rho_0 & \beta_1 & \rho_1 & \\ & \rho_1 & \beta_2 & \rho_2 \\ & & \ddots & \ddots \end{bmatrix}, \qquad \rho_i = \sqrt{\frac{\mu_{i+1}}{\mu_i}}\,\alpha_i, \qquad\qquad (11)$$
associated with the orthonormal polynomials
$$\hat{P}_{-1} = 0, \quad \hat{P}_0 = 1, \quad x\hat{P}_j = \rho_j\hat{P}_{j+1} + \beta_j\hat{P}_j + \rho_{j-1}\hat{P}_{j-1}, \quad j = 0, 1, \ldots. \qquad\qquad (12)$$
From (11) and (12), we obtain symmetric matrices.
Proposition 4. 
Let $\hat{\mathcal{P}} = \{\hat{P}_j\}_{j=0}^{\infty}$ be an orthonormal OPS defined by (12) and let $\mathsf{J}$ be its associated Jacobi matrix; then $\hat{\mathsf{P}}_j = \hat{P}_j(\mathsf{J})$, $j = 0, 1, \ldots$, are symmetric matrices.
Proof. 
Since $\hat{\mathsf{P}}_{-1} = 0$ and $\hat{\mathsf{P}}_0 = \mathsf{I}$ are both symmetric matrices, the proof results from
$$\hat{\mathsf{P}}_{j+1} = \frac{1}{\rho_j}\left(\mathsf{J} - \beta_j\mathsf{I}\right)\hat{\mathsf{P}}_j - \frac{\rho_{j-1}}{\rho_j}\,\hat{\mathsf{P}}_{j-1},$$
by the induction hypothesis in $j$ and by the symmetry of $\mathsf{J}$. □
Another general property of the $\mathsf{P}_j$ matrices is invariance with respect to the orthogonality support of $\mathcal{P}$, in the sense of the following proposition:
Proposition 5. 
Let $\mathcal{P} = \{P_i\}_{i=0}^{\infty}$ be the OPS defined by (4) and $\mathcal{P}^* = \{P_i^*\}_{i=0}^{\infty}$ be the OPS defined by
$$P_{-1}^* = 0, \quad P_0^* = 1, \quad xP_j^* = \alpha_j^*P_{j+1}^* + \beta_j^*P_j^* + \gamma_j^*P_{j-1}^*, \quad j = 0, 1, \ldots,$$
with $\alpha_j^* = \frac{\alpha_j}{\lambda}$, $\beta_j^* = \frac{\beta_j - \delta}{\lambda}$, and $\gamma_j^* = \frac{\gamma_j}{\lambda}$ for some $\lambda, \delta \in \mathbb{R}$, $\lambda \neq 0$. If $\mathsf{M}$ and $\mathsf{M}^*$ are the Jacobi matrices associated with $\mathcal{P}$ and $\mathcal{P}^*$, respectively, then
$$P_j^*(\mathsf{M}^*) = P_j(\mathsf{M}), \qquad j = 0, 1, \ldots.$$
Proof. 
From $\mathsf{M}^* = \frac{1}{\lambda}\left(\mathsf{M} - \delta\mathsf{I}\right)$, we have
$$\mathsf{P}_{j+1}^* = \frac{1}{\alpha_j^*}\left(\mathsf{M}^* - \beta_j^*\mathsf{I}\right)\mathsf{P}_j^* - \frac{\gamma_j^*}{\alpha_j^*}\,\mathsf{P}_{j-1}^* = \frac{\lambda}{\alpha_j}\left(\frac{1}{\lambda}\left(\mathsf{M} - \delta\mathsf{I}\right) - \frac{\beta_j - \delta}{\lambda}\,\mathsf{I}\right)\mathsf{P}_j^* - \frac{\gamma_j}{\alpha_j}\,\mathsf{P}_{j-1}^* = \frac{1}{\alpha_j}\left(\mathsf{M} - \beta_j\mathsf{I}\right)\mathsf{P}_j^* - \frac{\gamma_j}{\alpha_j}\,\mathsf{P}_{j-1}^*, \quad j = 0, 1, \ldots,$$
and $\mathsf{P}_{j+1}^* = \mathsf{P}_{j+1}$, since $\mathsf{P}_{-1}^* = \mathsf{P}_{-1}$ and $\mathsf{P}_0^* = \mathsf{P}_0$. □
If $[a, b]$, $b > a$, is the orthogonality domain of $\mathcal{P}$ and $[c, d]$, $d > c$, is the orthogonality domain of $\mathcal{P}^*$, then the conditions of Proposition 5 hold, with $\lambda = \frac{b-a}{d-c}$ and $\delta = \frac{ad-bc}{d-c}$, and we obtain $\mathsf{P}_j^* = \mathsf{P}_j$ for all $j$.

2.4. Recurrence Relations

Returning to the recursive Formulas (5) and (6), taking $\mathsf{A} = \mathsf{M}$, we obtain
$$e_i^T\mathsf{P}_{j+1} = \frac{\alpha_{i-1}}{\alpha_j}\,e_{i-1}^T\mathsf{P}_j + \frac{1}{\alpha_j}\left(\beta_i - \beta_j\right)e_i^T\mathsf{P}_j + \frac{\gamma_{i+1}}{\alpha_j}\,e_{i+1}^T\mathsf{P}_j - \frac{\gamma_j}{\alpha_j}\,e_i^T\mathsf{P}_{j-1},$$
$$\mathsf{P}_{j+1}e_k = \frac{\gamma_k}{\alpha_j}\,\mathsf{P}_je_{k-1} + \frac{1}{\alpha_j}\left(\beta_k - \beta_j\right)\mathsf{P}_je_k + \frac{\alpha_k}{\alpha_j}\,\mathsf{P}_je_{k+1} - \frac{\gamma_j}{\alpha_j}\,\mathsf{P}_{j-1}e_k, \qquad\qquad (13)$$
recursive formulas relating the rows and columns of $\mathsf{P}_{j+1}$ with those of $\mathsf{P}_j$ and $\mathsf{P}_{j-1}$. From their element-wise versions, we obtain a recursion on the elements of $\mathsf{P}_j$:
$$p_{j,i,k+1} = \frac{1}{\alpha_k}\left(\alpha_{i-1}\,p_{j,i-1,k} + (\beta_i - \beta_k)\,p_{j,i,k} + \gamma_{i+1}\,p_{j,i+1,k} - \gamma_k\,p_{j,i,k-1}\right), \qquad\qquad (14)$$
for $i, j, k = 0, 1, \ldots$, where $p_{j,i,k} = 0$ whenever at least one index is negative.
From (13) we can compute the whole matrix sequence $\mathsf{P}_j$, $j = 1, 2, \ldots$. Recurrence relation (14) is useful in case only a single matrix $\mathsf{P}_j$ is needed; for the latter case, we need the first two columns as initial values.
Proposition 6. 
Let $\mathcal{P} = \{P_i\}_{i=0}^{\infty}$ be an OPS and $\mathsf{M}$ its associated Jacobi matrix. If $\mathsf{P}_j = P_j(\mathsf{M})$ and $p_{j,i,k} = e_i^T\mathsf{P}_je_k$, then for $i, j = 0, 1, \ldots$:
(a) 
$p_{j,i,0} = \delta_{i,j}$;
(b) 
$p_{j,i,1} = \begin{cases} \dfrac{\gamma_j}{\alpha_0}, & i = j-1, \\[1ex] \dfrac{1}{\alpha_0}\left(\beta_j - \beta_0\right), & i = j, \\[1ex] \dfrac{\alpha_j}{\alpha_0}, & i = j+1, \\[1ex] 0, & |j-i| > 1. \end{cases}$
Proof. 
The proof of (a) is immediate from Proposition 2; (b) results from (a) and (14). □
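A direct implementation of this strategy might look as follows (our numpy sketch; the coefficient callables and the border caveat are ours): starting from the two columns of Proposition 6, recurrence (14) fills $\mathsf{P}_j$ column by column.

```python
import numpy as np

def Pj_by_columns(j, N, alpha, beta, gamma):
    """N x N main block of P_j = P_j(M), built column by column with (14).
    alpha(r), beta(r), gamma(r) are the coefficients of (4); entries that
    would need rows >= N are not corrected, so columns k with N <= j + k
    are only partially exact (cf. the truncation remark in Section 2.1)."""
    P = np.zeros((N + 1, N))              # one spare row for p_{j,i+1,k}
    if j <= N:
        P[j, 0] = 1.0                     # Prop 6(a): p_{j,i,0} = delta_{i,j}
    for i in (j - 1, j, j + 1):           # Prop 6(b): column k = 1
        if 0 <= i <= N:
            if i == j - 1:
                P[i, 1] = gamma(j) / alpha(0)
            elif i == j:
                P[i, 1] = (beta(j) - beta(0)) / alpha(0)
            else:
                P[i, 1] = alpha(j) / alpha(0)
    for k in range(1, N - 1):             # (14): column k+1 from k and k-1
        for i in range(N):
            s = (beta(i) - beta(k)) * P[i, k] + gamma(i + 1) * P[i + 1, k]
            if i > 0:
                s += alpha(i - 1) * P[i - 1, k]
            s -= gamma(k) * P[i, k - 1]
            P[i, k + 1] = s / alpha(k)
    return P[:N, :]

# Chebyshev T: alpha_0 = 1, alpha_r = gamma_r = 1/2, beta_r = 0
a = lambda r: 1.0 if r == 0 else 0.5
b = lambda r: 0.0
g = lambda r: 0.5
P5 = Pj_by_columns(5, 16, a, b, g)
assert abs(P5[8, 3] - 0.5) < 1e-14        # T_5 T_3 = (T_8 + T_2)/2
assert abs(P5[2, 3] - 0.5) < 1e-14
```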
Keeping in mind that $\mathsf{P}_j$ is a band matrix, from (14) we obtain explicit formulas for its sub-diagonals $j$ and $j-1$.
Proposition 7. 
Let $\mathcal{P} = \{P_i\}_{i=0}^{\infty}$ be an OPS, with $c_j$ and $c_j'$ its two leading coefficients, i.e., $P_j(x) = c_jx^j + c_j'x^{j-1} + \cdots$, and let $\mathsf{P}_j = P_j(\mathsf{M})$; then, for $j, k = 0, 1, \ldots$:
(a) 
$p_{j,j+k,k} = \dfrac{c_j\,c_k}{c_{j+k}}$;
(b) 
$p_{j,j+k-1,k} = \dfrac{c_j\,c_k}{c_{j+k-1}}\left(\dfrac{c_j'}{c_j} + \dfrac{c_k'}{c_k} - \dfrac{c_{j+k}'}{c_{j+k}}\right)$.
Proof. 
For the proof, we need the well-known relations [9]
$$\alpha_j = \frac{c_j}{c_{j+1}}, \qquad \beta_j = \frac{c_j'}{c_j} - \frac{c_{j+1}'}{c_{j+1}},$$
and, since $c_0 = 1$ and $c_0' = 0$, we have
$$\prod_{r=0}^{n}\alpha_r = \frac{1}{c_{n+1}}, \qquad \sum_{r=0}^{n}\beta_r = -\frac{c_{n+1}'}{c_{n+1}}.$$
From (14) and Proposition 2,
$$p_{j,j+k,k} = \frac{\alpha_{j+k-1}}{\alpha_{k-1}}\,p_{j,j+k-1,k-1}.$$
This is a recurrence relation that, with the initial value $p_{j,j,0} = 1$ given in Proposition 6, results in
$$p_{j,j+k,k} = \frac{\prod_{r=j}^{j+k-1}\alpha_r}{\prod_{r=0}^{k-1}\alpha_r} = \frac{\prod_{r=0}^{j+k-1}\alpha_r}{\prod_{r=0}^{j-1}\alpha_r\ \prod_{r=0}^{k-1}\alpha_r}.$$
So (a) is proved.
Again, from (14) and Proposition 2,
$$p_{j,j+k-1,k} = \frac{\alpha_{j+k-2}}{\alpha_{k-1}}\,p_{j,j+k-2,k-1} + \frac{1}{\alpha_{k-1}}\left(\beta_{j+k-1} - \beta_{k-1}\right)p_{j,j+k-1,k-1},$$
or
$$p_{j,j+k-1,k} = \frac{c_k\,c_{j+k-2}}{c_{k-1}\,c_{j+k-1}}\,p_{j,j+k-2,k-1} + \frac{c_j\,c_k}{c_{j+k-1}}\left(\beta_{j+k-1} - \beta_{k-1}\right),$$
which, with the initial value $p_{j,j,1} = \frac{1}{\alpha_0}\left(\beta_j - \beta_0\right) = c_1\left(\beta_j - \beta_0\right)$, results in
$$p_{j,j+k-1,k} = \frac{c_j\,c_k}{c_{j+k-1}}\left(\sum_{r=j}^{j+k-1}\beta_r - \sum_{r=0}^{k-1}\beta_r\right) = \frac{c_j\,c_k}{c_{j+k-1}}\left(\sum_{r=0}^{j+k-1}\beta_r - \sum_{r=0}^{j-1}\beta_r - \sum_{r=0}^{k-1}\beta_r\right),$$
and in (b). □
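As a quick sanity check of (a), consider our running Chebyshev example: for first-kind Chebyshev polynomials, $c_0 = 1$ and $c_j = 2^{j-1}$ for $j \ge 1$, so, for $j, k \ge 1$,

```latex
\[
  p_{j,j+k,k} = \frac{c_j\,c_k}{c_{j+k}}
              = \frac{2^{\,j-1}\,2^{\,k-1}}{2^{\,j+k-1}}
              = \frac{1}{2},
\]
```

which is precisely the coefficient of $T_{j+k}$ in $T_jT_k = \frac{1}{2}\left(T_{j+k} + T_{|j-k|}\right)$.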
Similar results can be obtained for the first two non-null anti-diagonals of $\mathsf{P}_j$.
Proposition 8. 
In the conditions of Proposition 7, for $j = 0, 1, \ldots$:
(a) 
$p_{j,j-k,k} = \dfrac{\mu_j\,c_k\,c_{j-k}}{\mu_{j-k}\,c_j}$, for $k = 0, \ldots, j$;
(b) 
$p_{j,j-k+1,k} = \dfrac{\mu_j\,c_k\,c_{j-k+1}}{\mu_{j-k+1}\,c_j}\left(\dfrac{c_{j-k+1}'}{c_{j-k+1}} + \dfrac{c_k'}{c_k} - \dfrac{c_{j+1}'}{c_{j+1}}\right)$, for $k = 1, \ldots, j$.
Proof. 
Since $p_{j,j-k-1,k-1}$, $p_{j,j-k,k-1}$, and $p_{j,j-k,k-2}$ belong to the null triangular block of $\mathsf{P}_j$, from (14) we have, for $k = 0, \ldots, j$,
$$p_{j,j-k,k} = \frac{\gamma_{j-k+1}}{\alpha_{k-1}}\,p_{j,j-k+1,k-1} = \frac{\prod_{r=j+1-k}^{j}\gamma_r}{\prod_{r=0}^{k-1}\alpha_r}\,p_{j,j,0}.$$
The proof of (a) results from [9]
$$\gamma_j = \frac{\mu_j}{\mu_{j-1}}\,\alpha_{j-1},$$
from which we have
$$\prod_{r=1}^{n}\gamma_r = \frac{\mu_n}{\mu_0\,c_n}.$$
We arrive at (b) from
$$p_{j,j-k+1,k} = \frac{1}{\alpha_{k-1}}\left(\beta_{j-k+1} - \beta_{k-1}\right)p_{j,j-k+1,k-1} + \frac{\gamma_{j-k+2}}{\alpha_{k-1}}\,p_{j,j-k+2,k-1}.$$
Using (a) and iterating in k, we obtain
$$p_{j,j-k+1,k} = \frac{\mu_j\,c_{j-k+1}\,c_k}{\mu_{j-k+1}\,c_j}\left(\sum_{r=j-k+1}^{j}\beta_r - \sum_{r=0}^{k-1}\beta_r\right),$$
and (b). □
Using property (b) of Proposition 7, we confirm that $p_{j,j+k-1,k} = 0$ precisely for those values of $j$ and $k$ satisfying $\frac{c_{j+k}'}{c_{j+k}} = \frac{c_j'}{c_j} + \frac{c_k'}{c_k}$. The next proposition presents a sufficient condition for the matrix $\mathsf{P}_j$ to have null intercalated diagonals.
Proposition 9. 
Let $\mathcal{P} = \{P_i\}_{i=0}^{\infty}$ be an OPS satisfying (4) with constant $\beta_i = \beta$, $i = 0, 1, \ldots$; then
$$p_{j,j+k-1-2r,k} = 0, \qquad r = 0, 1, \ldots, \lfloor (j+k-1)/2 \rfloor, \qquad k = 0, 1, \ldots.$$
Proof. 
Using (14), with $\beta_i = \beta_k$,
$$p_{j,i,k} = \frac{1}{\alpha_{k-1}}\left(\alpha_{i-1}\,p_{j,i-1,k-1} + \gamma_{i+1}\,p_{j,i+1,k-1} - \gamma_{k-1}\,p_{j,i,k-2}\right).$$
The hypothesis implies that $c_n'/c_n = -n\beta$, $n = 0, 1, \ldots$, and so, using Proposition 7, we obtain $p_{j,j+k-1,k} = 0$, $k = 0, 1, \ldots$. Admitting, by the induction hypothesis, that for some $r \le \lfloor (j+k-1)/2 \rfloor$ we have $p_{j,j+k-1-2n,k} = 0$ for $n = 0, \ldots, r$, then
$$p_{j,j+k-3-2r,k} = \frac{\alpha_{j+k-4-2r}}{\alpha_{k-1}}\,p_{j,j+k-4-2r,k-1} = \frac{\prod_{n=j-1-r}^{j+k-4-2r}\alpha_n}{\prod_{n=r+2}^{k-1}\alpha_n}\,p_{j,j-1-r,r+2} = \frac{\mu_j\,c_k\,c_{j-1-r}^{\,2}}{\mu_{j-1-r}\,c_j\,c_{j+k-3-2r}}\left(\frac{c_{j-1-r}'}{c_{j-1-r}} + \frac{c_{r+2}'}{c_{r+2}} - \frac{c_{j+1}'}{c_{j+1}}\right) = 0. \qquad \square$$
Proposition 9 covers, in particular, symmetric OPSs [12], for which $P_j(-x) = (-1)^jP_j(x)$ and $\beta_i = 0$, $i = 0, 1, \ldots$.
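A quick numerical illustration of Proposition 9 (our numpy sketch, for Legendre polynomials, a symmetric OPS with $\beta_i = 0$): all entries of $\mathsf{P}_j$ with $i + k - j$ odd vanish.

```python
import numpy as np

def legendre_Pj(j, N):
    """P_j(M) for Legendre via (5): alpha_r = (r+1)/(2r+1), beta_r = 0,
    gamma_r = r/(2r+1)."""
    M = np.zeros((N, N))
    for r in range(N):
        if r + 1 < N:
            M[r + 1, r] = (r + 1) / (2*r + 1)
        if r >= 1:
            M[r - 1, r] = r / (2*r + 1)
    Pm1, P = np.zeros((N, N)), np.eye(N)
    for r in range(j):
        a, g = (r + 1) / (2*r + 1), r / (2*r + 1)
        Pm1, P = P, (M @ P - g * Pm1) / a
    return P

j, N = 5, 24
P5 = legendre_Pj(j, N)
i, k = np.indices(P5.shape)
assert np.allclose(P5[(i + k - j) % 2 == 1], 0.0)   # intercalated diagonals
```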

3. Functions of the Matrix M

So far, we have found recursive formulas to evaluate $\mathsf{P}_j$ from $\mathsf{P}_{j-1}$ and $\mathsf{P}_{j-2}$, from its first two columns, or from its first two non-null diagonals. With $\mathsf{P}_j = P_j(\mathsf{M})$, $j = 0, 1, \ldots$, already evaluated, we build the matrices
$$\mathsf{F} = \sum_{j=0}^{\infty} f_j\,\mathsf{P}_j. \qquad\qquad (15)$$
In the case where $f = \sum_{j=0}^{\infty} f_jP_j$ is a Fourier series, $\mathsf{F} = f(\mathsf{M})$ is the operational matrix representing multiplication by $f$ in $\mathbb{P}$. Since each element of the matrices $\mathsf{P}_j$ is known exactly, defined as a finite sum, the entries of the matrix $\mathsf{F}$ can also be evaluated explicitly as finite sums. This is explained next: the particular sparse structure of the matrices $\mathsf{P}_j$ found in Section 2.2, with well-located non-null entries, is the core result used to prove that each entry of $\mathsf{F}$ is evaluated as a finite sum.
Proposition 10. 
Let $\mathsf{F}$ be defined by (15), $f_{ik} = e_i^T\mathsf{F}e_k$, and $p_{jik} = e_i^T\mathsf{P}_je_k$; then, for $i, k = 0, 1, \ldots$,
$$f_{ik} = \sum_{j=|i-k|}^{i+k} f_j\,p_{jik}. \qquad\qquad (16)$$
Proof. 
From Proposition 2 we obtain that $p_{jik} = 0$ if $j < i - k$, if $j > i + k$, or if $j < k - i$, and the result follows from
$$f_{ik} = \sum_{j=0}^{\infty} f_j\,p_{jik}. \qquad \square$$
The values for the first two columns of F , as particular cases of (16), arise from Propositions 6 and 10.
Corollary 4. 
In the conditions of Proposition 10:
(a) 
$f_{i,0} = f_i$;
(b) 
$f_{i,1} = \dfrac{1}{\alpha_0}\left(\alpha_{i-1}f_{i-1} + (\beta_i - \beta_0)f_i + \gamma_{i+1}f_{i+1}\right)$.
Another corollary of Proposition 10 is that each block of F depends only on a finite number of f coefficients.
Corollary 5. 
Let $\mathsf{F}$ be defined by (15), $\mathsf{F}^{(k)} = \sum_{j=0}^{k} f_j\,\mathsf{P}_j$ a truncated series, and $\mathsf{F}_{n\times n} = [f_{ik}]_{i,k=0}^{n}$ a square main block; then
(a) 
$\mathsf{F}_{n\times n}$ depends only on $f_0, \ldots, f_{2n}$;
(b) 
$\mathsf{F}_{n\times n} = \left(\mathsf{F}^{(k)}\right)_{n\times n}$, for $k > 2n$.
Two Fourier series sharing their first coefficients have identical blocks in their respective matrices, as stated in the next corollary.
Corollary 6. 
Let $f = \sum_{j=0}^{\infty} f_jP_j$ and $\tilde{f} = f + \sum_{j=2n+1}^{\infty} \epsilon_jP_j$ be two Fourier series, and let $\mathsf{F} = f(\mathsf{M})$ and $\tilde{\mathsf{F}} = \tilde{f}(\mathsf{M})$ be the respective matrix functions; then
$$\tilde{\mathsf{F}}_{n\times n} = \mathsf{F}_{n\times n}.$$
These results are relevant because they allow working with approximate series when only the first coefficients are known.
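In code, Formula (16) translates directly into finite loops. The sketch below (ours; the Chebyshev-T linearization coefficients are hard-coded from Section 2.2) assembles a main block of $\mathsf{F}$ entry by entry:

```python
import numpy as np

def p_cheb(j, i, k):
    """Linearization coefficients p_{jik} for Chebyshev T:
    T_0 T_k = T_k and, for j,k >= 1, T_j T_k = (T_{j+k} + T_{|j-k|})/2."""
    if j == 0:
        return 1.0 if i == k else 0.0
    if k == 0:
        return 1.0 if i == j else 0.0
    return 0.5 * ((i == j + k) + (i == abs(j - k)))

def F_block(f, n, p):
    """Main n x n block of F = sum_j f_j P_j via (16); by Corollary 5 it
    only needs the coefficients f_0 .. f_{2(n-1)}."""
    F = np.zeros((n, n))
    for i in range(n):
        for k in range(n):
            F[i, k] = sum(f[j] * p(j, i, k)
                          for j in range(abs(i - k), i + k + 1) if j < len(f))
    return F

# With f(x) = x = T_1, F must coincide with the multiplication matrix M:
n = 6
f = np.zeros(2 * n)
f[1] = 1.0
M = np.zeros((n, n))
M[1, 0] = 1.0
for r in range(1, n):
    M[r - 1, r] = 0.5
    if r + 1 < n:
        M[r + 1, r] = 0.5
assert np.allclose(F_block(f, n, p_cheb), M)
```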
Benzi and Golub [13] showed that the entries of a matrix function $f(\mathsf{A})$, when $f$ is analytic and $\mathsf{A}$ is a symmetric band matrix, are bounded in an exponentially decaying manner away from the main diagonal. This result also applies to our work since, whenever $\mathsf{M}$ is not symmetric, it can be reduced by similarity to a symmetric matrix $\mathsf{J}$. As proved in Corollary 3, instead of $\mathsf{P}_j$ we can work with the matrices $\hat{\mathsf{P}}_j$ associated with the orthonormal polynomials.
Proposition 11. 
Let $f = \sum_{j=0}^{\infty} f_jP_j$ and $\mathsf{F} = f(\mathsf{M})$. If
$$\hat{\mathsf{F}} = f(\mathsf{J}) = \sum_{j=0}^{\infty} \hat{f}_j\,\hat{P}_j(\mathsf{J}),$$
then
$$\hat{\mathsf{F}} = \mathsf{D}\,\mathsf{F}\,\mathsf{D}^{-1}, \qquad \text{with} \quad \mathsf{D} = \left[\sqrt{\mu_j}\;\delta_{i,j}\right]_{i,j=0}^{\infty}.$$
Proof. 
The proof follows from $\hat{f}_j = \frac{1}{\sqrt{\mu_j}}\langle f, P_j\rangle = \sqrt{\mu_j}\,f_j$ and Corollary 3. □
And so, $f(\mathsf{M}) = \mathsf{D}^{-1}f(\mathsf{J})\,\mathsf{D}$ and, when $f$ is analytic, the entries of $f(\mathsf{J})$ are bounded in an exponentially decaying manner away from the main diagonal.
In the next section, we illustrate the effectiveness of the previous formulas in the evaluation of the entries of matrix F .

4. Applications

We conclude with applications to functions whose Fourier coefficients are given by closed-form expressions. We present two illustrative examples: the first uses Legendre polynomials, a particular case of Jacobi polynomials, for which the linearization coefficients are explicitly known; the second, based on Laguerre polynomials, for which explicit linearization coefficients are not available, makes use of the recurrence relations presented in Section 2.4. Valuable references on linearization coefficients for Jacobi and Laguerre polynomials can be found in [14,15], along with additional relevant sources cited therein.

4.1. The Sign Function with Legendre Polynomials

Considering the Legendre polynomials $P_j(x)$, for which the Legendre–Fourier coefficients of a function $f\colon [-1,1] \to \mathbb{R}$ are
$$f_j = \frac{2j+1}{2}\int_{-1}^{1} f(x)\,P_j(x)\,dx, \quad j = 0, 1, \ldots,$$
if $f$ is the sign function
$$f(x) = \operatorname{sign}(x) = \begin{cases} -1, & -1 \le x < 0, \\ 1, & 0 < x \le 1, \end{cases}$$
then
$$f_j = \frac{2j+1}{2}\left(-\int_{-1}^{0} P_j(x)\,dx + \int_{0}^{1} P_j(x)\,dx\right).$$
Using the parity property $P_j(-x) = (-1)^jP_j(x)$ [9] and the primitives formula $\int P_j = \frac{1}{2j+1}\left(P_{j+1} - P_{j-1}\right)$ [16], then
$$f_j = \frac{1 - (-1)^j}{2}\left(P_{j+1}(1) - P_{j-1}(1) + P_{j-1}(0) - P_{j+1}(0)\right),$$
and so $f_{2j} = 0$. Finally, since $P_j(1) = 1$ and $P_{2j}(0) = \frac{(-1)^j}{j!}\left(\frac{1}{2}\right)_j$, it is a straightforward exercise to prove that
$$f_{2j+1} = \frac{(-1)^j\,(4j+3)}{2\,(j+1)!}\left(\frac{1}{2}\right)_j.$$
With $(x)_j$, $x \in \mathbb{R}$, $j \in \mathbb{N}_0$, we represent the rising factorial, $(x)_0 = 1$ and $(x)_j = x(x+1)\cdots(x+j-1)$, $j > 0$, also known as the Pochhammer symbol.
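These closed-form coefficients are cheap to generate and can be cross-checked against the defining integral; below is a small numpy verification (ours, using a Gauss–Legendre rule mapped to $[0,1]$, where the integrand is polynomial and the quadrature is exact):

```python
import math
import numpy as np
from numpy.polynomial import legendre as L

def f_odd(j):
    """Closed-form coefficient f_{2j+1} of sign(x) in the Legendre basis."""
    poch = math.prod(0.5 + r for r in range(j))   # rising factorial (1/2)_j
    return (-1)**j * (4*j + 3) / (2 * math.factorial(j + 1)) * poch

# By parity, f_k = (2k+1) * int_0^1 P_k(x) dx for odd k (and 0 for even k):
x, w = L.leggauss(64)
x01, w01 = (x + 1) / 2, w / 2                     # Gauss rule mapped to [0, 1]
for j in range(8):
    k = 2*j + 1
    quad = (2*k + 1) * np.dot(w01, L.Legendre.basis(k)(x01))
    assert abs(quad - f_odd(j)) < 1e-12
```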
We proceed to evaluate
$$\mathsf{F} = \sum_{j=0}^{\infty} f_{2j+1}\,P_{2j+1}(\mathsf{M}),$$
where $\mathsf{M}$ is the Jacobi matrix associated with the Legendre polynomials. To the reader interested in matrix sign functions, we recommend [17,18]. Making use of Corollary 5, we consider the first Legendre–Fourier coefficients $f_0, \ldots, f_{2n}$ to evaluate the main block $\mathsf{F}_{n\times n}$ of $\mathsf{F}$.
The linearization coefficients of products of Legendre polynomials are explicitly known [5]:
$$P_jP_k = \sum_{r=0}^{\min\{j,k\}} p_{j,j+k-2r,k}\,P_{j+k-2r},$$
with
$$p_{j,j+k-2r,k} = \frac{2j+2k+1-4r}{2j+2k+1-2r}\cdot\frac{\phi_r\,\phi_{j-r}\,\phi_{k-r}}{\phi_{j+k-r}},$$
where $\phi_0 = 1$ and $\phi_k = \frac{(1/2)_k}{k!} = \frac{2k-1}{2k}\,\phi_{k-1}$, $k \ge 1$.
Those coefficients and (16) result in
$$f_{i,k} = \begin{cases} (2i+1)\displaystyle\sum_{r=0}^{i}\frac{f_{k+i-2r}}{2(k+i-r)+1}\cdot\frac{\phi_{k-r}\,\phi_{i-r}\,\phi_r}{\phi_{k+i-r}}, & i \le k, \\[2ex] \dfrac{2i+1}{2k+1}\,f_{k,i}, & i > k. \end{cases}$$
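The following numpy sketch (ours) implements this entrywise formula, generating the $\phi_k$ by their ratio recurrence and filling the $i > k$ part through the symmetry relation:

```python
import numpy as np

def legendre_F_block(f, n):
    """Main n x n block of F = f(M) in the Legendre basis, from the
    closed-form linearization coefficients; needs f_0 .. f_{2n-2}."""
    phi = np.ones(2 * n)                       # phi_k = (1/2)_k / k!
    for k in range(1, 2 * n):
        phi[k] = (2*k - 1) / (2*k) * phi[k - 1]
    F = np.zeros((n, n))
    for i in range(n):
        for k in range(i, n):                  # fill i <= k, then mirror
            s = sum(f[k + i - 2*r] / (2*(k + i - r) + 1)
                    * phi[k - r] * phi[i - r] * phi[r] / phi[k + i - r]
                    for r in range(i + 1))
            F[i, k] = (2*i + 1) * s
            F[k, i] = (2*k + 1) / (2*i + 1) * F[i, k]
    return F

# With f(x) = x = P_1, F must equal the Legendre Jacobi matrix:
n = 5
f = np.zeros(2 * n)
f[1] = 1.0
M = np.zeros((n, n))
for j in range(n):
    if j + 1 < n:
        M[j + 1, j] = (j + 1) / (2*j + 1)      # alpha_j
    if j >= 1:
        M[j - 1, j] = j / (2*j + 1)            # gamma_j
assert np.allclose(legendre_F_block(f, n), M)
```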
To compare these formulas with the ability of the matrix $\tilde{\mathsf{F}}_{n\times n} = \operatorname{sign}(\mathsf{M}_{n\times n})$ to mimic the effect of $\mathsf{F}_{n\times n} = \left(\operatorname{sign}(\mathsf{M})\right)_{n\times n}$, we use the function $g = |x|$, $x \in [-1,1]$. Since $g = x\operatorname{sign}(x)$, the Legendre–Fourier coefficients of $g$ can be obtained from those of $f$:
$$g = \sum_{j=0}^{\infty} g_{2j}\,P_{2j}, \qquad \text{with} \quad g_{2j} = \frac{(-1)^{j+1}}{(j+1)!}\,\frac{4j+1}{4j-2}\left(\frac{1}{2}\right)_j, \qquad\qquad (17)$$
and, since $x = P_1$ for Legendre polynomials, those $g$ coefficients must lie in the column $\mathsf{F}e_1$. In Figure 2, we present the error $|x| - g(x)$ with coefficients $g_j$ obtained from our matrix $\mathsf{F}_{n\times n}$ and from the matrix $\tilde{\mathsf{F}}_{n\times n} = \operatorname{sign}(\mathsf{M}_{n\times n})$, for selected values of the matrix dimension $n$. The matrix $\tilde{\mathsf{F}}_{n\times n}$ is obtained with the Newton–Schulz iteration [18].
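For reference, the Newton–Schulz iteration admits a very short implementation (our sketch, without the safeguards of [18]); it is applicable here because the eigenvalues of the truncated Legendre Jacobi matrix are the Gauss–Legendre nodes, which lie in $(-1,1)$ and, for even $n$, are bounded away from zero:

```python
import numpy as np

def newton_schulz_sign(A, maxit=100, tol=1e-13):
    """Matrix sign function via X <- X (3 I - X^2) / 2; assumes the
    eigenvalues of A lie in (-1, 0) or (0, 1)."""
    X = A.copy()
    I = np.eye(A.shape[0])
    for _ in range(maxit):
        Xn = 0.5 * X @ (3.0 * I - X @ X)
        if np.linalg.norm(Xn - X, 1) <= tol * np.linalg.norm(Xn, 1):
            return Xn
        X = Xn
    return X
```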
In Figure 3, we present in log scale the error $|g_j - F_{j,1}|$, with the coefficients $g_j$ obtained with (17) and $F_{j,1}$ the corresponding values of $\mathsf{F}_{n\times n}e_1$ and of $\operatorname{sign}(\mathsf{M}_{n\times n})e_1$, for selected values of the matrix dimension $n$.
A similar test can be made using the fact that $P_0 = 1$, and so $\mathsf{F}e_0 = \mathbf{f}$; we can verify whether $\mathsf{F}^{-1}\mathbf{f} = e_0$. In Figure 4, we plot the norms $\|\mathsf{F}^{-1}\mathbf{f} - e_0\|$, illustrating the fact that while $\mathsf{F}_{n\times n}^{-1}$ behaves like $\mathsf{F}^{-1}$, $\operatorname{sign}(\mathsf{M}_{n\times n})^{-1}$ does not. We remark that $\mathsf{F}_{n\times n}$ is regular for even $n$ only.

4.2. Trigonometric Functions with Laguerre Polynomials

From the generating function of the Laguerre polynomials [9],
$$e^{-ax} = \frac{1}{1+a}\sum_{j=0}^{\infty}\left(\frac{a}{1+a}\right)^{j} L_j(x),$$
we obtain
$$\cos(x) = \sum_{j=0}^{\infty}\frac{(-1)^j}{2^{2j+1}}\left(L_{4j}(x) + L_{4j+1}(x) + \frac{1}{2}\,L_{4j+2}(x)\right).$$
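This expansion is easy to check numerically; a short numpy verification (ours):

```python
import numpy as np
from numpy.polynomial import laguerre

def cos_laguerre_coefs(N):
    """First N Laguerre-Fourier coefficients of cos(x) from the series
    cos x = sum_m (-1)^m 2^{-(2m+1)} (L_{4m} + L_{4m+1} + L_{4m+2}/2)."""
    c = np.zeros(N)
    for m in range(N // 4 + 1):
        for off, fac in ((0, 1.0), (1, 1.0), (2, 0.5)):
            j = 4*m + off
            if j < N:
                c[j] = (-1)**m / 2**(2*m + 1) * fac
    return c

x = np.linspace(0.0, 10.0, 21)
assert np.allclose(laguerre.lagval(x, cos_laguerre_coefs(120)), np.cos(x))
```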
The recurrence relation of the Laguerre polynomials,
$$L_{-1} = 0, \quad L_0 = 1, \quad xL_j = -(j+1)\,L_{j+1} + (2j+1)\,L_j - j\,L_{j-1}, \quad j = 0, 1, \ldots,$$
results in the following particular case of recurrence relation (14):
$$p_{j,i,0} = \delta_{i,j}, \qquad p_{j,i,1} = \begin{cases} j, & i = j-1, \\ -2j, & i = j, \\ j+1, & i = j+1, \\ 0, & |j-i| > 1, \end{cases}$$
$$p_{j,i,k+1} = \begin{cases} \dfrac{1}{k+1}\left[\,i\,p_{j,i-1,k} + 2(k-i)\,p_{j,i,k} + (i+1)\,p_{j,i+1,k} - k\,p_{j,i,k-1}\,\right], & |j-i| \le k+1, \\[1ex] 0, & |j-i| > k+1. \end{cases}$$
Illustrating an application of these formulas, we build the square matrices
$$\cos(\mathsf{M}) = \sum_{j=0}^{\infty}\frac{(-1)^j}{2^{2j+1}}\left(L_{4j}(\mathsf{M}) + L_{4j+1}(\mathsf{M}) + \frac{1}{2}\,L_{4j+2}(\mathsf{M})\right),$$
truncated to the respective $n\times n$ main blocks, where $\mathsf{M}$ is the Jacobi matrix of the Laguerre polynomials, for several values of $n$. Given $\mathbf{a}$, the coefficient vector of a Fourier series $y = \mathcal{L}\mathbf{a}$ in Laguerre polynomials, $\cos(\mathsf{M})\,\mathbf{a}$ is the coefficient vector of $\cos(x)\,y$ in the same polynomial basis. In order to illustrate the effectiveness of this procedure, we choose $y = x^m$, for several given $m$, whose Fourier–Laguerre coefficients are given by
$$x^m = m!\sum_{j=0}^{m}(-1)^j\binom{m}{j}L_j(x).$$
In Figure 5, we present in log scale the error $|x^m\cos(x) - y_n(x)|$, for $m = 4$ and $m = 5$, where $y_n$ results from $\cos(\mathsf{M})\,\mathbf{a}$ truncated to dimension $n$.
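To close, here is how the whole pipeline can be assembled (our sketch; it reuses cos_laguerre_coefs from the verification above and checks the computed block against an exact consequence of $L_0 = 1$, namely that $\cos(\mathsf{M})\,e_0$ is the coefficient vector of $\cos(x)$):

```python
import numpy as np

def laguerre_jacobi(N):
    """N x N block of the Laguerre Jacobi matrix:
    x L_j = -(j+1) L_{j+1} + (2j+1) L_j - j L_{j-1}."""
    M = np.zeros((N, N))
    for j in range(N):
        M[j, j] = 2*j + 1
        if j + 1 < N:
            M[j + 1, j] = -(j + 1)
        if j >= 1:
            M[j - 1, j] = -j
    return M

def cos_M_block(n, terms=80):
    """n x n main block of cos(M), summing terms L_j(M) built by (5) on an
    enlarged block that is cut at the end (cf. Section 2.1)."""
    N = n + terms + 1
    M = laguerre_jacobi(N)
    I = np.eye(N)
    Pm1, P = np.zeros((N, N)), I            # L_{-1}(M), L_0(M)
    coef = cos_laguerre_coefs(terms)        # from the earlier sketch
    C = np.zeros((N, N))
    for j in range(terms):
        C += coef[j] * P
        # (5) with alpha_j = -(j+1), beta_j = 2j+1, gamma_j = -j:
        Pm1, P = P, ((M - (2*j + 1) * I) @ P + j * Pm1) / (-(j + 1))
    return C[:n, :n]

n = 40
C = cos_M_block(n)
assert np.allclose(C[:, 0], cos_laguerre_coefs(n), atol=1e-10)
```

Multiplying this block by the coefficient vector of $x^m$ from the identity above then yields the Laguerre coefficients of $x^m\cos(x)$, which is the computation behind Figure 5.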

5. Conclusions

In this work, we reveal fundamental properties of matrices resulting from transforming Jacobi matrices through generalized Fourier series. These matrices are useful in the context of the algebraic representation of functional operators. Among the properties associated with matrix polynomials, we highlight the relationship between linearization coefficients and operational matrices. Additionally, we explore symmetry properties, invariance under linear displacements of the orthogonality domain, and certain recurrence relations. We conclude that the properties exhibited by matrices transformed through polynomials are inherited by matrices transformed using Fourier series in orthogonal polynomials.
In addition to efficiently addressing the challenge of operating with infinite matrices, this work offers effective calculation formulas applicable to any finite block of these matrices. The examples illustrate the successful and efficient performance of the presented calculation formulas.

Author Contributions

Conceptualization, J.M.A.M.; software, J.M.A.M. and J.A.O.M.; investigation, J.M.A.M.; writing—original draft preparation, J.M.A.M., P.B.V. and J.A.O.M.; writing—review and editing, J.M.A.M. and P.B.V. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by CMUP, member of LASI, which is financed by national funds through FCT - Fundação para a Ciência e a Tecnologia, I.P., under the projects with reference UIDB/00144/2020 and UIDP/00144/2020.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author (Matlab codes can be provided upon request).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gautschi, W. Orthogonal polynomials: Applications and computation. Acta Numer. 1996, 5, 45–119.
  2. Lewanowicz, S. Second-order recurrence relation for the linearization coefficients of the classical orthogonal polynomials. J. Comput. Appl. Math. 1996, 69, 159–170.
  3. Ronveaux, A.; Hounkonnou, M.N.; Belmehdi, S. Generalized linearization problems. J. Phys. A Math. Gen. 1995, 28, 4423.
  4. Szwarc, R. Linearization and connection coefficients of orthogonal polynomials. Monatshefte für Math. 1992, 113, 319.
  5. Adams, J.C., III. On the expression of the product of any two Legendre's coefficients by means of a series of Legendre's coefficients. Proc. R. Soc. Lond. 1878, 27, 63–71.
  6. Park, S.B.; Kim, J.H. Integral evaluation of the linearization coefficients of the product of two Legendre polynomials. J. Phys. A Math. Gen. 2006, 20, 623–635.
  7. Chaggara, H.; Koepf, W. On linearization and connection coefficients for generalized Hermite polynomials. J. Comput. Appl. Math. 2011, 236, 65–73.
  8. Chaggara, H.; Koepf, W. On linearization coefficients of Jacobi polynomials. Appl. Math. Lett. 2010, 23, 609–614.
  9. Olver, F.W.J.; Olde Daalhuis, A.B.; Lozier, D.W.; Schneider, B.I.; Boisvert, R.F.; Clark, C.W.; Miller, B.R.; Saunders, B.V.; Cohl, H.S.; McClain, M.A. (Eds.) NIST Digital Library of Mathematical Functions. Release 1.1.10 of 2023-06-15. Available online: https://dlmf.nist.gov/ (accessed on 30 July 2024).
  10. Doha, E.; Abd-Elhameed, W. New linearization formulae for the products of Chebyshev polynomials of third and fourth kinds. Rocky Mt. J. Math. 2016, 46, 443–460.
  11. Golub, G.H.; Welsch, J.H. Calculation of Gauss quadrature rules. Math. Comput. 1969, 23, 221–230.
  12. Chihara, T.S. An Introduction to Orthogonal Polynomials; Dover Publications, Inc.: Mineola, NY, USA, 2011.
  13. Benzi, M.; Golub, G. Bounds for the Entries of Matrix Functions with Applications to Preconditioning. BIT Numer. Math. 1999, 39, 417–438.
  14. Abd-Elhameed, W. New product and linearization formulae of Jacobi polynomials of certain parameters. Integral Transform. Spec. Funct. 2015, 26, 586–599.
  15. Ahmed, H.M. Computing expansions coefficients for Laguerre polynomials. Integral Transform. Spec. Funct. 2021, 32, 271–289.
  16. Matos, J.M.A.; Rodrigues, M.J.; de Matos, J.C. Explicit formulae for derivatives and primitives of orthogonal polynomials. arXiv 2017, arXiv:1703.00743.
  17. Roberts, J.D. Linear model reduction and solution of the algebraic Riccati equation by use of the sign function. Int. J. Control 1980, 32, 677–687.
  18. Wang, X.; Cao, Y. A numerically stable high-order Chebyshev-Halley type multipoint iterative method for calculating matrix sign function. AIMS Math. 2023, 8, 12456–12471.
Figure 1. Sparse structure of $\mathsf{P}_{15}$, truncated to the $31 \times 31$ main block, for $T$, $U$, $V$, and $W$, the first-, second-, third-, and fourth-kind Chebyshev polynomials, respectively.
Figure 2. Error $|x| - g(x)$ with coefficients $g_j$ obtained from $\mathsf{F}_{n\times n}e_1$ and from $\operatorname{sign}(\mathsf{M}_{n\times n})e_1$, with $n = 15$, $n = 50$, and $n = 500$.
Figure 3. Error $|g_j - F_{j,1}|$, in log scale, with coefficients $g_j$ obtained with (17) and $F_{j,1}$ taken from $\mathsf{F}_{n\times n}e_1$ and from $\operatorname{sign}(\mathsf{M}_{n\times n})e_1$, with $n = 15$, $n = 50$, and $n = 500$.
Figure 4. Norm $\|\mathsf{F}^{-1}\mathbf{f} - e_0\|$ for even $n = 2, 4, \ldots, 500$.
Figure 5. Absolute error $|y(x) - y_n(x)|$, for $x \in [0, 5\pi]$, with $y = x^m\cos(x)$ for $m = 4, 5$; $y_n$ is the truncated Laguerre series evaluated by the truncated matrices $\cos(\mathsf{M})\,\mathbf{a}$.

