Article

The Generalized Schur Algorithm and Some Applications

by Teresa Laudadio 1,*, Nicola Mastronardi 1 and Paul Van Dooren 2
1 Istituto per le Applicazioni del Calcolo “M. Picone”, CNR, Sede di Bari, via G. Amendola 122/D, 70126 Bari, Italy
2 Catholic University of Louvain, Department of Mathematical Engineering, Avenue Georges Lemaitre 4, B-1348 Louvain-la-Neuve, Belgium
* Author to whom correspondence should be addressed.
Axioms 2018, 7(4), 81; https://doi.org/10.3390/axioms7040081
Submission received: 2 October 2018 / Revised: 5 November 2018 / Accepted: 7 November 2018 / Published: 9 November 2018
(This article belongs to the Special Issue Advanced Numerical Methods in Applied Sciences)

Abstract:
The generalized Schur algorithm is a powerful tool for computing classical decompositions of matrices, such as the QR and LU factorizations. When applied to matrices with particular structures, it computes these factorizations with a complexity one order of magnitude lower than that of classical algorithms based on Householder or elementary transformations. In this manuscript, we describe the main features of the generalized Schur algorithm. We show that it helps to prove some theoretical properties of the R factor of the QR factorization of some structured matrices, such as symmetric positive definite Toeplitz and Sylvester matrices, that can hardly be proven using classical linear algebra tools. Moreover, we propose a fast implementation of the generalized Schur algorithm for computing the rank of Sylvester matrices, which arise in a number of applications. Finally, we propose a generalized Schur based algorithm for computing the null-space of polynomial matrices.

1. Introduction

The generalized Schur algorithm (GSA) allows computing well-known matrix decompositions, such as the QR and LU factorizations [1]. In particular, if the involved matrix is structured, i.e., Toeplitz, block-Toeplitz or Sylvester, the GSA computes the R factor of the QR factorization with a complexity one order of magnitude lower than that of the classical QR algorithm [2], since it relies only on the knowledge of the so-called generators [2] associated with the given matrix, rather than on the matrix itself. The stability properties of the GSA are described in [3,4,5], where it is proven that the algorithm is weakly stable provided the involved hyperbolic rotations are performed in a stable way.
In this manuscript, we first show that, besides the efficiency properties, the GSA provides new theoretical insights on the bounds of the entries of the R factor of the Q R factorization of some structured matrices. In particular, if the involved matrix is a symmetric positive definite (SPD) Toeplitz or a Sylvester matrix, we prove that all or some of the diagonal entries of R monotonically decrease in absolute value.
We then propose a faster implementation of the algorithm described in [6] for computing the rank of a Sylvester matrix $S \in \mathbb{R}^{(m+n)\times(m+n)}$, whose entries are the coefficients of two polynomials of degree $m$ and $n$, respectively. This new algorithm is based on the GSA for computing the R factor of the QR factorization of $S$. The proposed modification of the GSA-based method has a computational cost of $O(rl)$ floating point operations, where $l = \min\{n,m\}$ and $r$ is the computed numerical rank.
It is well known that the upper triangular R factor of the QR factorization of a matrix $A \in \mathbb{R}^{n\times n}$ is equal to the upper triangular Cholesky factor $R_c \in \mathbb{R}^{n\times n}$ of $A^TA$, up to a diagonal sign matrix $D$, i.e., $R = DR_c$, $D = \mathrm{diag}(\pm 1,\dots,\pm 1) \in \mathbb{R}^{n\times n}$. In this manuscript, we assume, without loss of generality, that the diagonal entries of $R$ and $R_c$ are positive; since the two matrices are then equal, we denote both by $R$.
Finally, we propose a GSA-based approach for computing a null-space basis of a polynomial matrix, which is an important problem in several systems and control applications [7,8]. For instance, the computation of the null-space of a polynomial matrix arises when solving the column reduction problem of a polynomial matrix [9,10].
The manuscript is structured as follows. The main features of the GSA are provided in Section 2. In Section 3, a GSA implementation for computing the Cholesky factor R of a SPD Toeplitz matrix is described, which allows proving that the diagonal entries of R monotonically decrease. In Section 4, a GSA-based algorithm for computing the rank of a Sylvester matrix S is introduced, based on the computation of the Cholesky factor R of S T S . In addition, in this case, it is proven that the first diagonal entries of R monotonically decrease. The GSA-based method to compute the null-space of polynomial matrices is proposed in Section 5. The numerical examples are reported in Section 6 followed by the conclusions in Section 7.

2. The Generalized Schur Algorithm

Many of the classical factorizations of a symmetric matrix, e.g., Q R and L D L T , can be obtained by the GSA. If the matrix is Toeplitz-like, the GSA computes these factorizations in a fast way. For the sake of completeness, the basic concepts of the GSA for computing the R factor of the Q R factorization of structured matrices, such as Toeplitz and block-Toeplitz matrices, are introduced in this Section. A comprehensive treatment of the topic can be found in [1,2].
Let $A \in \mathbb{R}^{n\times n}$ be a symmetric positive definite (SPD) matrix. The semidefinite case is considered in Section 4 and Section 5. The displacement of $A$ with respect to a matrix $Z$ of order $n$ is defined as
$$\nabla_Z A = A - ZAZ^T, \qquad (1)$$
while the displacement rank $k$ of $A$ with respect to $Z$ is defined as the rank of $\nabla_Z A$. If $\mathrm{rank}(\nabla_Z A) = k$, Equation (1) can be written as the sum of $k$ rank-one matrices,
$$\nabla_Z A = \sum_{i=1}^{k_1} g_i^{(p)} g_i^{(p)T} - \sum_{i=1}^{k_2} g_i^{(n)} g_i^{(n)T},$$
where $(k_1,\ n-k_1-k_2,\ k_2)$ is the inertia of $\nabla_Z A$, $k = k_1 + k_2$, and the vectors $g_i^{(p)} \in \mathbb{R}^n$, $i=1,\dots,k_1$, and $g_i^{(n)} \in \mathbb{R}^n$, $i=1,\dots,k_2$, are called the positive and the negative generators of $A$ with respect to $Z$, respectively, or, if there is no ambiguity, simply the positive and negative generators of $A$. The matrix $G \equiv [g_1^{(p)}, g_2^{(p)}, \dots, g_{k_1}^{(p)}, g_1^{(n)}, g_2^{(n)}, \dots, g_{k_2}^{(n)}]^T$ is called the generator matrix.
The matrix $Z$ is a nilpotent matrix. In particular, for Toeplitz and block-Toeplitz matrices, $Z$ can be chosen as the shift and the block shift matrix
$$Z_1 = \begin{bmatrix} 0 & & & \\ 1 & 0 & & \\ & \ddots & \ddots & \\ & & 1 & 0 \end{bmatrix}, \qquad Z_2 = \begin{bmatrix} 0 & & & \\ Z_1 & 0 & & \\ & \ddots & \ddots & \\ & & Z_1 & 0 \end{bmatrix},$$
respectively.
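As a quick illustration of Equation (1), the following Python/NumPy snippet (ours, not from the paper; the matrix entries are arbitrary) checks that the displacement of an SPD Toeplitz matrix with respect to the shift matrix $Z_1$ has rank 2:

```python
import numpy as np

# Hypothetical illustration (not from the paper): the displacement
# A - Z A Z^T of an SPD Toeplitz matrix has rank 2.
n = 6
t = np.array([4.0, 1.0, 0.5, 0.25, 0.1, 0.05])    # first column of A
A = np.array([[t[abs(i - j)] for j in range(n)] for i in range(n)])
Z = np.diag(np.ones(n - 1), -1)                    # lower shift matrix Z_1
D = A - Z @ A @ Z.T                                # displacement of A
print(np.linalg.matrix_rank(D))                    # 2
```

Only the first row and the first column of the displacement survive the shift, which is what makes the two-generator representation possible.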
The implementation of the GSA relies only on the knowledge of the generators of A rather than on the knowledge of the matrix itself [1].
Let
$$J = \mathrm{diag}(\underbrace{1,\dots,1}_{k_1},\ \underbrace{-1,\dots,-1}_{k_2}).$$
Since
$$\begin{aligned}
A - ZAZ^T &= G^TJG,\\
ZAZ^T - Z^2AZ^{2T} &= ZG^TJGZ^T,\\
&\ \ \vdots\\
Z^{n-2}AZ^{(n-2)T} - Z^{n-1}AZ^{(n-1)T} &= Z^{n-2}G^TJGZ^{(n-2)T},\\
Z^{n-1}AZ^{(n-1)T} &= Z^{n-1}G^TJGZ^{(n-1)T},
\end{aligned} \qquad (2)$$
adding all members of the left and right-hand sides of Equation (2) yields
$$A = \sum_{j=0}^{n-1} Z^jG^TJGZ^{jT}, \qquad (3)$$
which expresses the matrix $A$ in terms of its generators.
Exploiting Equation (2), we show how the GSA computes $R$ by describing its first iteration. Observe that the matrix products involved in the right-hand side of Equation (2) have their first row equal to zero, with the exception of the first product, $G^TJG$.
A key role in the GSA is played by J-orthogonal matrices [11,12], i.e., matrices $\Phi$ satisfying $\Phi^TJ\Phi = J$.
Any such matrix $\Phi$ can be constructed in different ways [11,12,13,14]. For instance, it can be formed as a product of Givens and hyperbolic rotations. In particular, a Givens rotation acting on rows $i$ and $j$ of the generator matrix is chosen if $J(i,i)J(j,j) > 0$, $i,j \in \{1,\dots,n\}$, $i \ne j$. Otherwise, a hyperbolic rotation is considered. Indeed, suitable choices of $\Phi$ allow efficient implementations of the GSA, as shown in Section 4.
Let $G_0 \equiv G$ and let $\Phi_1$ be a J-orthogonal matrix such that
$$\tilde G_1 = \Phi_1 G_0, \qquad \tilde G_1 e_1 = [\alpha_1, 0, \dots, 0]^T, \quad \text{with } \alpha_1 > 0, \qquad (4)$$
and $e_i$, $i = 1,\dots,n$, the $i$th column of the identity matrix. Furthermore, let $\tilde g_1^T$ and $\tilde \Gamma_1$ be the first row and the last $k-1$ rows of $\tilde G_1$, respectively, i.e., $\tilde G_1 = \begin{bmatrix} \tilde g_1^T \\ \tilde \Gamma_1 \end{bmatrix}$.
From Equation (4), it turns out that the first column of $\tilde\Gamma_1$ is zero. Let $\tilde J$ be the matrix obtained by deleting the first row and column from $J$. Then, Equation (2) can be written as follows,
$$\begin{aligned}
A &= \sum_{j=0}^{n-1} Z^jG_0^TJG_0Z^{jT} = \sum_{j=0}^{n-1} Z^jG_0^T\Phi_1^TJ\Phi_1G_0Z^{jT}
= \sum_{j=0}^{n-1} Z^j\begin{bmatrix} \tilde g_1 & \tilde\Gamma_1^T \end{bmatrix} J \begin{bmatrix} \tilde g_1^T \\ \tilde\Gamma_1 \end{bmatrix} Z^{jT}\\
&= \tilde g_1\tilde g_1^T + \sum_{j=1}^{n-1} Z^j\tilde g_1\tilde g_1^TZ^{jT} + \sum_{j=0}^{n-2} Z^j\tilde\Gamma_1^T\tilde J\tilde\Gamma_1Z^{jT} + \underbrace{Z^{n-1}\tilde\Gamma_1^T\tilde J\tilde\Gamma_1Z^{(n-1)T}}_{=0}\\
&= \tilde g_1\tilde g_1^T + \sum_{j=0}^{n-2} Z^j\begin{bmatrix} Z\tilde g_1 & \tilde\Gamma_1^T \end{bmatrix} J \begin{bmatrix} \tilde g_1^TZ^T \\ \tilde\Gamma_1 \end{bmatrix} Z^{jT}
= \tilde g_1\tilde g_1^T + \sum_{j=0}^{n-2} Z^jG_1^TJG_1Z^{jT} = \tilde g_1\tilde g_1^T + A_1,
\end{aligned}$$
where $G_1 \equiv [Z\tilde g_1,\ \tilde\Gamma_1^T]^T$, that is, $G_1$ is obtained from $\tilde G_1$ by multiplying $\tilde g_1$ by $Z$, and $A_1 \equiv \sum_{j=0}^{n-2} Z^jG_1^TJG_1Z^{jT}$. If $A$ is a Toeplitz matrix, this multiplication by $Z$ corresponds to displacing the entries of $\tilde g_1$ one position downward, while it corresponds to a block displacement downward in the first generator if $A$ is a block-Toeplitz matrix.
Thus, the first column of $G_1$ is zero and, hence, $\tilde g_1^T$ is the first row of the R factor of the QR factorization of $A$. The above procedure is recursively applied to $A_1$ to compute the other rows of $R$.
The $j$th iteration of the GSA, $j = 1,\dots,n$, involves the products $\Phi_jG_{j-1}$ and $Z\tilde g_j$. The former multiplication can be computed in $O(k(n-j))$ operations [11,12], and the latter comes for free if $Z$ is either a shift or a block-shift matrix. Therefore, if the displacement rank $k$ of $A$ is small compared to $n$, the GSA computes the R factor in $O(kn^2)$ rather than $O(n^3)$ operations, as required by standard algorithms [15].
For the sake of completeness, the described GSA implementation is reported in the following matlab style function. (The function givens is the matlab function having as input two scalars, $x_1$ and $x_2$, and as output an orthogonal $2\times 2$ matrix $\Theta$ such that $\Theta\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} \sqrt{x_1^2 + x_2^2} \\ 0 \end{bmatrix}$. The function Hrotate computes the coefficients of the $2\times 2$ hyperbolic rotation $\Phi$ such that, given two scalars $x_1$ and $x_2$, $|x_1| > |x_2|$, $\Phi\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} \sqrt{x_1^2 - x_2^2} \\ 0 \end{bmatrix}$. The function Happly applies $\Phi$ to two rows of the generator matrix. Both functions are defined in [12].)
function [R] = GSA(G, n)
for i = 1 : n,
  for j = 2 : k1,
    Θ = givens(G(1, i), G(j, i));
    G([1, j], i : n) = Θ * G([1, j], i : n);
  end % for
  for j = k1 + 2 : k1 + k2,
    Θ = givens(G(k1 + 1, i), G(j, i));
    G([k1 + 1, j], i : n) = Θ * G([k1 + 1, j], i : n);
  end % for
  [c1, s1] = Hrotate(G(1, i), G(k1 + 1, i));
  G([1, k1 + 1], i : n) = Happly(c1, s1, G([1, k1 + 1], i : n), n - i + 1);
  R(i, i : n) = G(1, i : n);
  G(1, i + 1 : n) = G(1, i : n - 1); G(1, i) = 0;
end % for
The GSA has been proven to be weakly stable [3,4], provided the hyperbolic transformations involved in the construction of the matrices Φ j are performed in a stable way [3,11,12].

3. GSA for SPD Toeplitz Matrices

In this section, we describe the GSA for computing the R factor of the Cholesky factorization of a SPD Toeplitz matrix $A$, with $R$ upper triangular, i.e., $A = R^TR$. Moreover, we show that the diagonal entries of $R$ decrease monotonically.
Let $A \in \mathbb{R}^{n\times n}$ and $Z \in \mathbb{R}^{n\times n}$ be a SPD Toeplitz matrix and a shift matrix, respectively, i.e.,
$$A = \begin{bmatrix} t_1 & t_2 & \cdots & t_n \\ t_2 & t_1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & t_2 \\ t_n & \cdots & t_2 & t_1 \end{bmatrix}, \qquad Z = \begin{bmatrix} 0 & & & \\ 1 & 0 & & \\ & \ddots & \ddots & \\ & & 1 & 0 \end{bmatrix},$$
and let $t = A(:,1)$. Then,
$$\nabla_Z A = \begin{bmatrix} t_1 & t_2 & \cdots & t_n \\ t_2 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ t_n & 0 & \cdots & 0 \end{bmatrix},$$
i.e., $\nabla_Z A$ is a symmetric rank-2 matrix. Moreover, the generator matrix $G$ is given by
$$G = \begin{bmatrix} g_1^T \\ g_2^T \end{bmatrix}, \qquad \text{with } g_1 = \frac{t}{\sqrt{t_1}}, \quad g_2 = [0,\ g_1(2:n)^T]^T.$$
In this case, the GSA can be implemented in matlab-like style as follows.
function [R] = GSA_chol(G0)
for i = 1 : n,
  [c1, s1] = Hrotate(G_{i-1}(1, i), G_{i-1}(2, i));
  G_{i-1}(:, i : n) = Happly(c1, s1, G_{i-1}(:, i : n), n - i + 1);
  R(i, i : n) = G_{i-1}(1, i : n);
  G_i(1, i + 1 : n) = G_{i-1}(1, i : n - 1); G_i(2, i + 1 : n) = G_{i-1}(2, i + 1 : n);
end % for
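The following Python/NumPy sketch mirrors GSA_chol (our illustration, not the authors' code; the hyperbolic rotation is applied in its plain form rather than one of the stabilized variants of [11,12]):

```python
import numpy as np

def gsa_chol(t):
    # GSA sketch for the Cholesky factor R (A = R^T R) of the SPD Toeplitz
    # matrix A with first column t. Illustration only.
    n = len(t)
    G = np.zeros((2, n))
    G[0] = t / np.sqrt(t[0])      # positive generator g1
    G[1, 1:] = G[0, 1:]           # negative generator g2 = [0, g1(2:n)]
    R = np.zeros((n, n))
    for i in range(n):
        a, b = G[0, i], G[1, i]
        if b != 0.0:
            rho = b / a           # |rho| < 1 when A is SPD
            f = 1.0 / np.sqrt((1.0 - rho) * (1.0 + rho))
            g0 = f * (G[0, i:] - rho * G[1, i:])   # hyperbolic rotation
            g1 = f * (G[1, i:] - rho * G[0, i:])   # annihilates G(2, i)
            G[0, i:], G[1, i:] = g0, g1
        R[i, i:] = G[0, i:]                 # row i of the Cholesky factor
        G[0, i + 1:] = G[0, i:-1].copy()    # displace g1 one position right
        G[0, i] = 0.0
    return R

t = np.array([4.0, 1.0, 0.5, 0.25])
A = np.array([[t[abs(i - j)] for j in range(4)] for i in range(4)])
R = gsa_chol(t)
print(np.allclose(R.T @ R, A))    # True
```

Note that only the two generator rows are touched at each step, which is the source of the $O(kn^2)$ complexity.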
The following lemma holds.
Lemma 1.
Let A be a SPD Toeplitz matrix and let R be its Cholesky factor, with R upper triangular. Then,
$$R(i-1, i-1) \ge R(i, i), \qquad i = 2, \dots, n.$$
Proof. 
At each step $i$ of GSA_chol, $i = 1,\dots,n$, first a hyperbolic rotation is applied to $G_{i-1}$ in order to annihilate the element $G_{i-1}(2,i)$. Hence, the first row of $G_{i-1}$ becomes row $i$ of $R$. Finally, $G_i(1,:)$ is obtained by displacing the entries of the first row of $G_{i-1}$ one position to the right, while $G_i(2,:)$ is equal to $G_{i-1}(2,:)$. Taking into account that $G_{i-1}(2,1) = 0$, the diagonal entries of $R$ are
$$\begin{aligned}
R(1,1) &= G_0(1,1),\\
R(2,2) &= \sqrt{G_1^2(1,2) - G_1^2(2,2)} = \sqrt{R^2(1,1) - G_1^2(2,2)} \le R(1,1);\\
&\ \ \vdots\\
R(i,i) &= \sqrt{G_{i-1}^2(1,i) - G_{i-1}^2(2,i)} = \sqrt{R^2(i-1,i-1) - G_{i-1}^2(2,i)} \le R(i-1,i-1);\\
&\ \ \vdots\\
R(n,n) &= \sqrt{G_{n-1}^2(1,n) - G_{n-1}^2(2,n)} = \sqrt{R^2(n-1,n-1) - G_{n-1}^2(2,n)} \le R(n-1,n-1).
\end{aligned}$$
 □

4. Computing the Rank of Sylvester Matrices

In this section, we focus on the computation of the rank of Sylvester matrices. The numerical rank of a Sylvester matrix provides useful information for determining the degree of the greatest common divisor of the involved polynomials [6,16,17].
A GSA-based algorithm for computing the rank of $S$ has recently been proposed in [6]. It is based on the computation of the Cholesky factor $R$ of $S^TS$, with $R$ upper triangular, i.e., $R^TR = S^TS$.
Here, we propose a more efficient variant of this algorithm, which also allows proving that the first diagonal entries of $R$ monotonically decrease.
Let $w_i \in \mathbb{R}$, $i = 0,1,\dots,n$, and $y_i \in \mathbb{R}$, $i = 0,1,\dots,m$. Denote by $w(x)$ and $y(x)$ the two univariate polynomials
$$\begin{aligned} w(x) &= w_nx^n + w_{n-1}x^{n-1} + \cdots + w_1x + w_0, \qquad w_n \ne 0,\\ y(x) &= y_mx^m + y_{m-1}x^{m-1} + \cdots + y_1x + y_0, \qquad y_m \ne 0. \end{aligned}$$
Let $S \in \mathbb{R}^{(m+n)\times(m+n)}$ be the Sylvester matrix defined as follows,
$$S = \begin{bmatrix} W & Y \end{bmatrix}, \quad W = \begin{bmatrix} w_n & & \\ w_{n-1} & \ddots & \\ \vdots & \ddots & w_n \\ w_0 & \ddots & w_{n-1} \\ & \ddots & \vdots \\ & & w_0 \end{bmatrix}, \quad Y = \begin{bmatrix} y_m & & \\ y_{m-1} & \ddots & \\ \vdots & \ddots & y_m \\ y_0 & \ddots & y_{m-1} \\ & \ddots & \vdots \\ & & y_0 \end{bmatrix}, \qquad (6)$$
with $W \in \mathbb{R}^{(m+n)\times m}$ and $Y \in \mathbb{R}^{(m+n)\times n}$ band Toeplitz matrices.
We now describe how the GSA-based algorithm proposed in [6] for computing the rank of $S$ can be implemented in a faster way. This variant is based on the computation of the Cholesky factor $R \in \mathbb{R}^{(m+n)\times(m+n)}$ of $S^TS$, with $R$ upper triangular, i.e., $R^TR = S^TS$.
Defining
$$Z = \begin{bmatrix} Z_m & \\ & Z_n \end{bmatrix}, \qquad \text{with } Z_k = \begin{bmatrix} 0 & & & \\ 1 & 0 & & \\ & \ddots & \ddots & \\ & & 1 & 0 \end{bmatrix} \in \mathbb{R}^{k\times k},\ k \in \mathbb{N},$$
the generator matrix $G$ of $S^TS$ with respect to $Z$ is then given by [6]
$$G = \begin{bmatrix} g_1 & g_2 & g_3 & g_4 \end{bmatrix}^T, \qquad (7)$$
where
$$\begin{aligned}
g_1 &= x_1/\|S(:,1)\|_2, & g_2([2:n+m]) &= x_2([2:n+m])/\|S(:,m+1)\|_2, \quad g_2(1) = 0,\\
g_3(2:n+m) &= g_1(2:n+m), \quad g_3(1) = 0, & g_4([1:m,\ m+2:n+m]) &= g_2([1:m,\ m+2:n+m]), \quad g_4(m+1) = 0,
\end{aligned} \qquad (8)$$
with $x_1 = S^TSe_1$, $x_2 = S^TSe_{m+1}$, $e_j$ the $j$th vector of the canonical basis of $\mathbb{R}^{m+n}$, and $J = \mathrm{diag}(1, 1, -1, -1)$.
The algorithm proposed in [6] is based on the following GSA implementation for computing the R factor of the Q R factorization of S .
function [R] = GSA_chol2(G)
for i = 1 : n,
  Θ1 = givens(G(1, i), G(2, i));   Θ2 = givens(G(3, i), G(4, i));
  G(1 : 2, i : n) = Θ1 * G(1 : 2, i : n);   G(3 : 4, i : n) = Θ2 * G(3 : 4, i : n);
  [c1, s1] = Hrotate(G(1, i), G(3, i));
  G([1, 3], i : n) = Happly(c1, s1, G([1, 3], i : n), n - i + 1);
  R(i, i : n) = G(1, i : n);
  G(1, :) = G(1, :) * Z';
end % for
At the $i$th iteration of the algorithm, $i = 1,\dots,n$, the Givens rotations $\Theta_1$ and $\Theta_2$ are computed and applied to the first and second generators and to the third and fourth generators, respectively, to annihilate $G(2,i)$ and $G(4,i)$. Then, the hyperbolic rotation $\begin{bmatrix} c_1 & s_1 \\ s_1 & c_1 \end{bmatrix}$ is applied to the first and the third row of $G$ to annihilate $G(3,i)$. Finally, the first row of $G$ becomes the $i$th row of $R$, and the first row of $G$ is multiplied by $Z^T$.
Summarizing, at the first step of the $i$th iteration of the GSA, all entries of the $i$th column of $G$ but the first one are annihilated. If the number of rows of $G$ is greater than 2, this can be accomplished in different ways (see [5,14]).
Analyzing the pattern of the generators in Equation (8), we are able to derive a different implementation of the GSA that costs $O(rl)$ operations, with $l = \min\{n,m\}$. Moreover, this implementation allows proving that the first $l$ diagonal entries of $R$ are monotonically decreasing.
We observe that the matrix $W^TW$ in Equation (6) is the SPD banded Toeplitz matrix
$$W^TW = \begin{bmatrix} t_1 & t_2 & \cdots & t_{n+1} & & \\ t_2 & t_1 & t_2 & \ddots & \ddots & \\ \vdots & \ddots & \ddots & \ddots & \ddots & t_{n+1} \\ t_{n+1} & \ddots & \ddots & \ddots & \ddots & \vdots \\ & \ddots & \ddots & \ddots & t_1 & t_2 \\ & & t_{n+1} & \cdots & t_2 & t_1 \end{bmatrix} \in \mathbb{R}^{m\times m}, \qquad (9)$$
with
$$t_i = \sum_{j=i}^{n+1} w_{j-1}w_{j-i}, \qquad i = 1, 2, \dots, n+1.$$
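A small numerical check of the entries $t_i$ above in Python/NumPy (our illustration; the degrees and coefficients are arbitrary):

```python
import numpy as np

# Hypothetical check (not from the paper): for a polynomial of degree n = 2
# and m = 3 shifted copies, the Gram matrix W^T W is banded Toeplitz with
# entries t_i = sum_{j=i}^{n+1} w_{j-1} w_{j-i}.
w = np.array([3.0, 2.0, 1.0])          # w_0, w_1, w_2  (n = 2)
n, m = len(w) - 1, 3
W = np.zeros((m + n, m))
for k in range(m):                      # k-th shifted copy of the coefficients
    W[k:k + n + 1, k] = w[::-1]         # each column holds w_n, ..., w_0
t = [sum(w[j - 1] * w[j - i] for j in range(i, n + 2)) for i in range(1, n + 2)]
print(np.allclose((W.T @ W)[0, :n + 1], t))   # True
```

The band width $n+1$ of this Gram matrix is what the faster GSA variant exploits.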
Since
$$S^TS = \begin{bmatrix} W^TW & W^TY \\ Y^TW & Y^TY \end{bmatrix},$$
if $n \le m$, from Equation (9), it turns out that $G([1,3],\ n+2:m) = 0$. Moreover, the rows $G(2,:)$ and $G(4,:)$ have their first entry equal to zero and differ only in their entry in column $m+1$. This particular pattern of $G$ is close to the ones described in [13,14,18], allowing one to design an alternative GSA implementation with respect to that considered in [6], thereby reducing the complexity from $O(r(n+m))$ to $O(rl)$, where $r$ is the computed rank of $S$ and $l = \min\{n,m\}$.
Since the description of the above GSA implementation is quite cumbersome and similar to the algorithms reported in [13,14,18], we omit it here. The corresponding matlab pseudo–code can be obtained from the authors upon request.
If the matrix $S$ has rank $r < (n+m)$, at the $k = (n+m-r+1)$st iteration, it turns out that $G^2(1,k) - G^2(3,k) = 0$ in exact arithmetic [6]. Therefore, at each iteration of the algorithm we check whether
$$G^2(1,k) - G^2(3,k) > tol, \qquad (10)$$
where $tol$ is a fixed tolerance. If Equation (10) is not satisfied, we stop the computation, considering $k$ as the computed numerical rank of $S$.
The R factor of the QR factorization of $S$ is unique if the diagonal entries of $R$ are positive. The considered GSA implementation, yielding the rank of $S$ and based on computing the R factor of the QR factorization of $S$, allows us to prove that the first $l$ diagonal entries of $R$ are ordered in decreasing order, with $l = \min\{m,n\}$. In fact, the following theorem holds.
Theorem 1.
Let $R^TR = S^TS$ be the Cholesky factorization of $S^TS$, with $S$ the Sylvester matrix defined in Equation (6), with rank $r \ge l = \min\{m,n\}$. Then,
$$R(i-1,i-1) \ge R(i,i) \ge 0, \qquad i = 2,\dots,l. \qquad (11)$$
Proof. 
Each entry $i$ of the diagonal of $R$ is determined by the $i$th entry of the first row of $G$ at the end of iteration $i$, for $i = 1,\dots,m+n$. Let us define $\hat G \equiv G(:, 1:l)$ and consider the following alternative implementation of the GSA for computing the first $l$ columns of the Cholesky factor of $S^TS$.
for i = 1 : l,
  Θ = givens(Ĝ(1, i), Ĝ(2, i));
  Ĝ(1 : 2, i : l) = Θ * Ĝ(1 : 2, i : l);
  [c1, s1] = Hrotate(Ĝ(1, i), Ĝ(4, i)); Ĝ([1, 4], :) = Happly(c1, s1, Ĝ([1, 4], :), l);
  [c2, s2] = Hrotate(Ĝ(1, i), Ĝ(3, i)); Ĝ([1, 3], :) = Happly(c2, s2, Ĝ([1, 3], :), l);
  R(i, i : l) = Ĝ(1, i : l);
  Ĝ(1, i + 1 : l) = Ĝ(1, i : l - 1);
  Ĝ(1, i) = 0;
end % for
We observe that, for $i = 1$, $\hat G(1,1)$ is the only entry in the first column of $\hat G$ different from 0. Hence, $R(1, 1:l) = \hat G(1, 1:l)$ and the first iteration amounts only to shifting $\hat G(1, 1:l)$ one position rightward, i.e., $\hat G(1, 2:l) = \hat G(1, 1:l-1)$, $\hat G(1,1) = 0$.
At the beginning of iteration $i = 2$, the second and the fourth row of $\hat G$ are equal, by Equation (8). Hence, when applying a Givens rotation to the first and the second row in order to annihilate the entry $\hat G(2,i)$, and when subsequently applying a hyperbolic rotation to the first and fourth row of $\hat G$ in order to annihilate $\hat G(4,i)$, it turns out that $\hat G(2, i:l)$ and $\hat G(4, i:l)$ are modified but still equal to each other, while $\hat G(1, i:l)$ remains unchanged. The equality between $\hat G(2,:)$ and $\hat G(4,:)$ is maintained throughout iterations $1, 2, \dots, l$.
Therefore, the second and the fourth row of $\hat G$ do not play any role in computing $R(1:l, 1:l)$ and can be neglected. Hence, the GSA for computing $R(1:l, 1:l)$ reduces to applying a hyperbolic rotation to the first and the third generators only, as described in the following algorithm.
for i = 1 : l,
  [c2, s2] = Hrotate(Ĝ(1, i), Ĝ(3, i)); Ĝ([1, 3], :) = Happly(c2, s2, Ĝ([1, 3], :), l);
  R(i, i : l) = Ĝ(1, i : l);
  Ĝ(1, i + 1 : l) = Ĝ(1, i : l - 1);
  Ĝ(1, i) = 0;
end % for
Since at the beginning of iteration $i$, $i = 2,\dots,l$, $\hat G(1, i:l) = R(i-1,\ i-1:l-1)$, the involved hyperbolic rotation $\Phi = \begin{bmatrix} c_2 & s_2 \\ s_2 & c_2 \end{bmatrix}$ is such that
$$\Phi\begin{bmatrix} \hat G(1,i) \\ \hat G(3,i) \end{bmatrix} = \Phi\begin{bmatrix} R(i-1,i-1) \\ \hat G(3,i) \end{bmatrix} = \begin{bmatrix} \hat G(1,i) \\ 0 \end{bmatrix} = \begin{bmatrix} R(i,i) \\ 0 \end{bmatrix},$$
where the updated $\hat G(1,i)$ is equal to $\sqrt{\hat G(1,i)^2 - \hat G(3,i)^2} \ge 0$. Therefore, $R(i,i) = \sqrt{R(i-1,i-1)^2 - \hat G(3,i)^2} \ge 0$, and thus $R(i,i) \le R(i-1,i-1)$.  □
Remark 1.
The above GSA implementation allows us to prove the inequality in Equation (11). This property is difficult to obtain if the QR factorization is performed via Householder transformations or if the classical Cholesky factorization of $S^TS$ is used.
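Theorem 1 can also be checked numerically. The sketch below (ours, not from the paper) builds a Sylvester matrix with arbitrary coefficients, computes $R$ via a dense QR, and verifies that the first $l$ diagonal entries decrease in absolute value:

```python
import numpy as np

# Numerical illustration (ours): the first l = min(m, n) diagonal entries
# of the R factor of the QR factorization of a Sylvester matrix decrease.
rng = np.random.default_rng(1)
n, m = 4, 6                        # degrees of the two polynomials
w = rng.standard_normal(n + 1)     # coefficients of w(x)
y = rng.standard_normal(m + 1)     # coefficients of y(x)

S = np.zeros((m + n, m + n))
for k in range(m):                 # W block: m shifted copies of w
    S[k:k + n + 1, k] = w
for k in range(n):                 # Y block: n shifted copies of y
    S[k:k + m + 1, m + k] = y

R = np.linalg.qr(S, mode='r')
d = np.abs(np.diag(R))             # QR sign conventions may flip signs
l = min(m, n)
print(np.all(np.diff(d[:l]) <= 1e-8))
```

The absolute values are taken because a dense QR does not enforce the positive-diagonal normalization assumed in the theorem.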

5. GSA for Computing the Null-Space of Polynomial Matrices

In this section, we consider the problem of computing a polynomial basis $X(s) \in \mathbb{R}^{n\times(n-\rho)}$ of the null-space of an $m\times n$ polynomial matrix of degree $\delta$ and rank $\rho \le \min(m,n)$,
$$M(s) = \sum_{i=0}^{\delta} M_is^i, \qquad M_i \in \mathbb{R}^{m\times n}, \quad i = 0,\dots,\delta. \qquad (12)$$
As described in [8,19,20], the above problem is equivalent to that of computing the null-space of a related block-Toeplitz matrix. Algorithms to solve this problem are proposed in [8,19] but they do not explicitly exploit the structure of the involved matrix. Algorithms to solve related problems have also been described in the literature, e.g., in [8,19,21,22].
In this paper, we propose an algorithm for computing the null-space of polynomial matrices based on a variant of the GSA for computing the null-space of a related band block-Toeplitz matrix [8].

5.1. Null-Space of Polynomial Matrices

A polynomial vector $v(s) = \sum_{i=0}^{\gamma} v_is^i$, $v_i \in \mathbb{R}^n$, $i = 0,\dots,\gamma$, $\gamma \in \mathbb{N}$, is said to belong to the null-space of (12) if
$$M(s)v(s) = 0 \iff \Big(\sum_{j=0}^{\delta} M_js^j\Big)\Big(\sum_{i=0}^{\gamma} v_is^i\Big) = 0.$$
The polynomial vector $v(s)$ belongs to the null-space of $M(s)$ iff $v = [v_0^T, v_1^T, \dots, v_\gamma^T]^T$, $v_i \in \mathbb{R}^n$, $i = 0,\dots,\gamma$, is a vector belonging to the null-space of the band block-Toeplitz matrix
$$T = \begin{bmatrix} M_0 & & & \\ M_1 & M_0 & & \\ \vdots & M_1 & \ddots & \\ M_\delta & \vdots & \ddots & M_0 \\ & M_\delta & & M_1 \\ & & \ddots & \vdots \\ & & & M_\delta \end{bmatrix} \in \mathbb{R}^{\hat m \times \hat n}, \qquad (13)$$
where $\hat m = m(\delta + n_b)$, $\hat n = n\,n_b$, with $n_b = \gamma + 1$ the number of block columns of $T$, which can be determined, e.g., by the algorithm described in [8]. Hence, the problem of computing the null-space of the polynomial matrix in Equation (12) is equivalent to that of computing the null-space of the matrix in Equation (13). To normalize the entries of this matrix, it is appropriate to first perform a QR factorization of each block column of $T$:
$$\begin{bmatrix} M_0 \\ M_1 \\ \vdots \\ M_\delta \end{bmatrix} = \begin{bmatrix} Q_0 \\ Q_1 \\ \vdots \\ Q_\delta \end{bmatrix}U, \qquad \text{where } \sum_i Q_i^TQ_i = I_n,$$
and to absorb the upper triangular factor $U$ into the vector $u(s) := Uv(s)$. The convolution equation $M(s)v(s) = 0$ then becomes an equation of the type $Q(s)u(s) = 0$, where the coefficient matrices $Q_i$ of $Q(s)$ together form an orthonormalized matrix.
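The block-column normalization can be sketched in Python/NumPy as follows (our illustration, with arbitrary sizes): stack the coefficients $M_0,\dots,M_\delta$, take a thin QR, and check that the resulting $Q_i$ satisfy $\sum_i Q_i^TQ_i = I_n$.

```python
import numpy as np

# Sketch (ours): orthonormalize one block column of T via a thin QR.
rng = np.random.default_rng(0)
m, n, delta = 3, 2, 2
M = [rng.standard_normal((m, n)) for _ in range(delta + 1)]  # M_0..M_delta
stacked = np.vstack(M)                 # the (delta+1)m x n block column
Q, U = np.linalg.qr(stacked)           # thin QR; U is upper triangular
Qi = np.split(Q, delta + 1)            # coefficient matrices Q_0..Q_delta
gram = sum(Qi[k].T @ Qi[k] for k in range(delta + 1))
print(np.allclose(gram, np.eye(n)))    # True: sum_i Q_i^T Q_i = I_n
```

This works whenever the stacked block column has full column rank; the rank-deficient case is the subject of the remark below.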
Remark 2.
Above, we have assumed that there are no constant vectors $v$ in the kernel of $M(s)$. If there are, then the block column of the $M_i$ matrices has rank less than $n$, and the above factorization will discover it, in the sense that the matrix $U$ is nonsquare and the matrices $Q_i$ have fewer columns than the $M_i$, $i = 0,1,\dots,\delta$. This trivial null-space can be eliminated, and we therefore assume that the rank was full. For simplicity, from now on we also assume that the coefficient matrices of the polynomial matrix $M(s)$ were already normalized in this way; the block columns of $T$ are thus orthonormalized. This normalization proves to be very useful in the sequel.
Denote by
$$Z_{n_b} = \begin{bmatrix} 0_n & & & \\ I_n & 0_n & & \\ & \ddots & \ddots & \\ & & I_n & 0_n \end{bmatrix} \in \mathbb{R}^{\hat n \times \hat n} \qquad \text{and} \qquad Z = \begin{bmatrix} Z_{n_b} & \\ & Z_{n_b} \end{bmatrix} \in \mathbb{R}^{2\hat n \times 2\hat n},$$
where $0_n$ is the null matrix of order $n \in \mathbb{N}$.
If $v_i \ne 0$, $0 < i < \gamma$, and $v_j = 0$, $j = i+1,\dots,\gamma$, i.e.,
$$v = [v_0^T, v_1^T, \dots, v_i^T, \underbrace{0, \dots, 0}_{\gamma - i}]^T,$$
and $v \in \ker(T)$, then also $Z_{n_b}^kv \in \ker(T)$, $k = 0, 1, \dots, \gamma - i$. In this case, the vector $v$ is said to be a generator vector of a chain of length $\gamma - i + 1$ of the null-space of $T$.
The proposed algorithm for the computation of the null-space of polynomial matrices is based on the GSA for computing the R factor of the QR factorization of the matrix $T$ in Equation (13) and, if $R$ has full column rank, its inverse $R^{-1}$.
Let us first assume that the matrix $T$ is full rank, i.e., $\mathrm{rank}(T) = \rho = \min\{\hat m, \hat n\}$. Without loss of generality, we suppose $\hat m \ge \hat n$. If $\hat m < \hat n$, the algorithm still computes the R factor, in trapezoidal form [23]. Moreover, in this case, we compute the first $\hat m$ rows of the inverse of the matrix obtained by appending the last $\hat n - \hat m$ rows of the identity matrix of order $\hat n$ to $R$.
Let us consider the SPD block-Toeplitz matrix
$$\hat T = T^TT = \begin{bmatrix}
\hat T_0 & \hat T_1 & \cdots & \hat T_\delta & & \\
\hat T_1 & \hat T_0 & \hat T_1 & & \ddots & \\
\vdots & \hat T_1 & \ddots & \ddots & & \hat T_\delta \\
\hat T_\delta & & \ddots & \ddots & & \vdots \\
& \ddots & & & \hat T_0 & \hat T_1 \\
& & \hat T_\delta & \cdots & \hat T_1 & \hat T_0
\end{bmatrix} \in \mathbb{R}^{\hat n \times \hat n},$$
whose blocks are
$$\hat T_{ij} = \begin{cases} \displaystyle\sum_{k=0}^{\delta-|i-j|} M_{k+|i-j|}^TM_k, & i \le j,\\[4pt] \displaystyle\sum_{k=0}^{\delta-|i-j|} M_k^TM_{k+|i-j|}, & i > j. \end{cases}$$
Notice that, because of the normalization introduced before, we have $\hat T_0 = I_n$ and $\|\hat T_i\|_2 \le 1$. This is used below. The matrix
$$W = \begin{bmatrix} \hat T & I_{\hat n} \\ I_{\hat n} & 0_{\hat n} \end{bmatrix}$$
can be factorized in the following way,
$$W = \hat R^T\hat J\hat R = \begin{bmatrix} R^T & \\ R^{-1} & R^{-1} \end{bmatrix}\begin{bmatrix} I_{\hat n} & \\ & -I_{\hat n} \end{bmatrix}\begin{bmatrix} R & R^{-T} \\ & R^{-T} \end{bmatrix},$$
where $R \in \mathbb{R}^{\hat n\times\hat n}$ is the R factor of the QR factorization of $T$, i.e., the Cholesky factor of $\hat T$. Hence, $R$ and its inverse $R^{-1}$ can be retrieved from the first $\hat n$ columns of the matrix $\hat R^T$.
The displacement matrix and the displacement rank of $W$ with respect to $Z$ are given by
$$\nabla_Z(W) = W - ZWZ^T \qquad (18)$$
and $\rho(W,Z) = \mathrm{rank}(\nabla_Z(W))$, respectively, with $\nabla_Z(W) \in \mathbb{R}^{2\hat n\times 2\hat n}$. Only the first block row and the first block column of each $\hat n \times \hat n$ quadrant of $\nabla_Z(W)$ are nonzero: the (1,1) quadrant has first block row $[I_n,\ \hat T_1,\ \dots,\ \hat T_\delta,\ 0_n,\ \dots,\ 0_n]$, the (1,2) and (2,1) quadrants have $I_n$ in their leading block and zeros elsewhere, and the (2,2) quadrant is zero.
Then, taking the order $n$ of the matrices $\hat T_i$, $i = 0,1,\dots,\delta$, into account, it turns out that $\rho(W,Z) \le 2n$.
Hence, Equation (18) can be written as the difference of two matrices of rank at most $n$, i.e.,
$$\nabla_Z(W) = G^{(+)T}G^{(+)} - G^{(-)T}G^{(-)} = G^TJG, \qquad \text{where } G := \begin{bmatrix} G^{(+)} \\ G^{(-)} \end{bmatrix} \text{ and } J = \mathrm{diag}(I_n,\, -I_n).$$
Since $\hat T_0 = I_n$, the construction of $G$ does not require any computation: it is easy to check that $G$ is given by
$$G := \begin{bmatrix} G^{(+)} \\ G^{(-)} \end{bmatrix} = \begin{bmatrix}
I_n & \hat T_1 & \cdots & \hat T_\delta & 0_n & \cdots & 0_n & I_n & 0_n & \cdots & 0_n\\
0_n & \hat T_1 & \cdots & \hat T_\delta & 0_n & \cdots & 0_n & I_n & 0_n & \cdots & 0_n
\end{bmatrix}.$$
Remark 3.
Observe that increasing $n_b$, with $n_b \ge \delta + 1$, does not change the structures of $W$ and $\nabla_Z(W)$, due to the block band structure of the matrix $W$. Consequently, the length of the corresponding generators changes, but their structure remains the same, since only $\hat T_0, \hat T_1, \dots, \hat T_\delta$ and $I_n$ are different from zero in the first block row.
The computation by the GSA of the R factor of $T$ and of its inverse $R^{-1}$ uses only the matrix $G$ rather than the matrix $T$. Its implementation is a straightforward block matrix extension of the GSA described in Section 2.
Remark 4.
By construction, the initial generator matrix G 0 has the first δ + 1 block rows and the block row n b + 1 different from zero. Therefore, the multiplication of G 0 by the J-orthogonal matrix H 1 does not modify the structure of the generator matrix.
Let $G_0 = G$. At each iteration $i$, $i = 1,\dots,n_b$, we start from the generator matrix $G_{i-1}$ having the blocks (of length $n$) $i, i+1, \dots, i+\delta$ and $n_b+1, \dots, n_b+i$ different from zero. We then look for a J-orthogonal matrix $H_i$ such that the product $H_iG_{i-1}$ has a nonsingular upper triangular matrix in position $(1:n,\ (i-1)n+1:in)$ and a zero matrix in position $(n+1:2n,\ (i-1)n+1:in)$.
Then, $G_i$ is obtained from $\begin{bmatrix} \tilde G_i^{(+)} \\ \tilde G_i^{(-)} \end{bmatrix} \equiv H_iG_{i-1}$ by multiplying its first $n$ rows by $Z^T$, i.e.,
$$G_i = \begin{bmatrix} \tilde G_i^{(+)}Z^T \\ \tilde G_i^{(-)} \end{bmatrix}.$$
The J-orthogonal matrix $H_i$ at the $i$th iteration of the GSA can be constructed as a product of $n$ Householder matrices $\hat H_{i,j}$ and $n$ hyperbolic rotations $\hat Y_{i,j}$, $j = 1,\dots,n$.
The multiplication by the Householder matrices $\hat H_{i,j}$ modifies the last $n$ columns of the generator matrix, annihilating all of the last $n$ entries but the $(n+1)$st in row $(i-1)n+j$, $j = 1,\dots,n$, while the multiplication by the hyperbolic rotations $\hat Y_{i,j}$ acts on columns $i$ and $n+1$, annihilating the entry in position $((i-1)n+j,\ n+1)$.
Given $\upsilon_1, \upsilon_2 \in \mathbb{R}$, $|\upsilon_1| > |\upsilon_2|$, a hyperbolic matrix $Y \in \mathbb{R}^{2\times 2}$ can be computed,
$$Y = \begin{bmatrix} c & s \\ s & c \end{bmatrix}, \qquad \text{with } c = \frac{\upsilon_1}{\sqrt{\upsilon_1^2 - \upsilon_2^2}}, \quad s = \frac{-\upsilon_2}{\sqrt{\upsilon_1^2 - \upsilon_2^2}},$$
such that $[\upsilon_1,\ \upsilon_2]\,Y = [\sqrt{\upsilon_1^2 - \upsilon_2^2},\ 0]$.
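This hyperbolic annihilation can be sketched in Python (our illustration; a numerically robust implementation would follow the stabilized schemes of [3,11,12]):

```python
import numpy as np

def hyperbolic_rotation(u1, u2):
    # Build Y = [[c, s], [s, c]] with [u1, u2] @ Y = [sqrt(u1^2 - u2^2), 0].
    # Requires |u1| > |u2|; naive formula, not the stabilized variant.
    d = np.sqrt((u1 - u2) * (u1 + u2))    # sqrt(u1^2 - u2^2)
    c, s = u1 / d, -u2 / d
    return np.array([[c, s], [s, c]])

u1, u2 = 5.0, 3.0
Y = hyperbolic_rotation(u1, u2)
print(np.array([u1, u2]) @ Y)   # [4. 0.]
```

Note that $Y$ preserves the indefinite inner product: $Y^T\mathrm{diag}(1,-1)\,Y = \mathrm{diag}(1,-1)$.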
The modification of the sparsity pattern of the generator matrix after the first and ith iteration of the GSA are displayed in Figure 1 and  Figure 2, respectively.
The reliability of the GSA strongly depends on the way the hyperbolic rotation is computed. In [4,5,24], it is proven that the GSA is weakly stable if the hyperbolic rotations are implemented in an appropriate manner [3,11,12,24].
Let
$$H_{i,j} = \begin{bmatrix} I_n & \\ & \hat H_{i,j} \end{bmatrix}, \qquad Y_{i,j} = \begin{bmatrix} I_{j-1} & & & & \\ & c_j & & s_j & \\ & & I_{n-j} & & \\ & s_j & & c_j & \\ & & & & I_{n-1} \end{bmatrix}.$$
Then,
$$H_i = H_{i,1}Y_{i,1} \cdots H_{i,n-1}Y_{i,n-1}H_{i,n}Y_{i,n}.$$
As previously mentioned, the GSA relies only on the knowledge of the generators of $W$ rather than on the matrix $\hat T$ itself. The computation of the generators involves the product $T^TT$, which can be accomplished in $\delta^2n^3$ flops. The $i$th iteration of the GSA involves the multiplication of $n$ Householder matrices of size $n$ by a matrix of size $(i+\delta+1)n \times n$. Therefore, since the cost of the multiplications by the hyperbolic rotations is negligible with respect to that of the multiplications by the Householder matrices, the computational cost of iteration $i$ is $4n^3(\delta+i)$. Hence, summing over $i = 1,\dots,n_b$, the computational cost of the GSA is $2n^3n_b(2\delta + n_b)$.

5.2. GSA for Computing the Right Null-Space of Semidefinite Block Toeplitz Matrices

As already mentioned in Section 5.1, the number of desired blocks $n_b$ of the matrix $T$ in Equation (13) can be computed as described in [8]. For the sake of simplicity, in the considered examples we choose $n_b$ large enough to compute the null-space of $T$.
The structure and the computation via the GSA of the R factor of the QR factorization of a singular block Toeplitz matrix $T$ with rank $\rho < \hat n \le \hat m$ is considered in [23].
A modification of the GSA for computing the null-space of Toeplitz matrices is described in [25]. In this paper, we extend the latter results to compute the null-space of $T$ by modifying the GSA.
Without loss of generality, let us assume that the first n ^ 1 columns of T are linear independent and suppose that the n ^ th column linearly depends on the previous ones. Therefore, the first n ^ 1  principal minors of T ^ are positive while the n ^ th one is zero. Let T ^ = Q Λ Q T be the spectral decomposition of T ^ , with Q = [ q 1 , , q n ^ ] orthogonal and Λ = diag ( λ 1 , , λ n ^ 1 , λ n ^ ) , with
λ 1 λ 2 λ n ^ 1 > λ n ^ = 0 ,
and let T ^ ε = Q Λ ε Q T , with Λ ε = diag ( λ 1 , , λ n ^ 1 , ε 2 ) , with ε R + . Hence,
$$\hat T_\varepsilon^{-1} = \frac{1}{\varepsilon^2} \left( \sum_{i=1}^{\hat n - 1} \frac{\varepsilon^2}{\lambda_i}\, q_i q_i^T + q_{\hat n} q_{\hat n}^T \right).$$
Let R ε be the Cholesky factor of T ^ ε , with R ε upper triangular, i.e., T ^ ε = R ε T R ε . Then,
$$\varepsilon^2 \hat T_\varepsilon^{-1} e_{\hat n}^{(2\hat n)} = \sum_{i=1}^{\hat n - 1} \frac{\varepsilon^2\, q_i^T e_{\hat n}^{(2\hat n)}}{\lambda_i}\, q_i + \big( q_{\hat n}^T e_{\hat n}^{(2\hat n)} \big)\, q_{\hat n}.$$
On the other hand,
$$\varepsilon^2 \hat T_\varepsilon^{-1} e_{\hat n}^{(2\hat n)} = \varepsilon^2 R_\varepsilon^{-1} R_\varepsilon^{-T} e_{\hat n}^{(2\hat n)} = \varepsilon^2\, r_{\hat n, \hat n}^{-1}\, R_\varepsilon^{-1} e_{\hat n}^{(2\hat n)},$$
where $r_{\hat n, \hat n} = e_{\hat n}^{(2\hat n)T} R_\varepsilon e_{\hat n}^{(2\hat n)}$. Hence, as $\varepsilon \to 0^+$, the last column of $R_\varepsilon^{-1}$ becomes closer and closer to a multiple of $q_{\hat n}$, the eigenvector corresponding to the zero eigenvalue of $\hat T$.
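This limiting behavior is easy to observe numerically. The sketch below is our own toy construction, not taken from the paper: a random orthogonal $Q$, eigenvalues $3, 2, 1, 0$, and $\varepsilon = 10^{-6}$. It checks that the normalized last column of $R_\varepsilon^{-1}$ aligns with the eigenvector $q_{\hat n}$ of the zero eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random orthogonal Q; its last column q_min is the eigenvector of the
# zero eigenvalue of T = Q diag(3,2,1,0) Q^T.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
q_min = Q[:, -1]

eps = 1e-6
# Regularized matrix T_eps = Q diag(3,2,1,eps^2) Q^T (symmetrized for safety).
T_eps = Q @ np.diag([3.0, 2.0, 1.0, eps**2]) @ Q.T
T_eps = (T_eps + T_eps.T) / 2

# Cholesky factor R (upper triangular, T_eps = R^T R), then the last
# column of R^{-1}, normalized.
R = np.linalg.cholesky(T_eps).T
last_col = np.linalg.solve(R, np.eye(4)[:, -1])
v = last_col / np.linalg.norm(last_col)
# |v . q_min| is close to 1: the last column of R^{-1} points along q_min.
```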
Therefore, given
$$W_\varepsilon = \begin{bmatrix} \hat T_\varepsilon & I_{\hat n} \\ I_{\hat n} & 0_{\hat n} \end{bmatrix},$$
we have that
$$Z(W_\varepsilon) = G^{(+)T} G^{(+)} - G^{(-)T} G^{(-)} + \varepsilon^2 e_{\hat n}^{(2\hat n)} e_{\hat n}^{(2\hat n)T}.$$
Let
$$G_{0,\varepsilon} = \begin{bmatrix} G^{(+)} \\ G^{(-)} \\ \varepsilon\, e_{\hat n}^{(2\hat n)T} \end{bmatrix}.$$
Define
$$J_\varepsilon = \operatorname{diag}(\underbrace{1, 1, \dots, 1}_{\hat n}, \underbrace{-1, -1, \dots, -1}_{\hat n}, 1).$$
Hence,
$$Z(W_\varepsilon) = G_{0,\varepsilon}^T J_\varepsilon G_{0,\varepsilon}.$$
We observe that column n ^ + 1 of the generator matrix is not involved in the GSA until the very last iteration, since only its n ^ th entry is different from 0 . At the very last iteration, the hyperbolic rotation
$$Y = \begin{bmatrix} c & -s \\ -s & c \end{bmatrix},$$
with
$$c = \frac{\sqrt{G^{(+)}(n, \hat n)^2 + \varepsilon^2}}{\sqrt{G^{(+)}(n, \hat n)^2 + \varepsilon^2 - G^{(-)}(1, \hat n)^2}}, \qquad s = \frac{G^{(-)}(1, \hat n)}{\sqrt{G^{(+)}(n, \hat n)^2 + \varepsilon^2 - G^{(-)}(1, \hat n)^2}}$$
is applied to the n ^ th and ( n ^ + 1 ) st rows of G , i.e., to the nth row of G ( + ) and the first one of G ( ) . Since  T ^ is singular, it turns out that | G ( + ) ( n , n ^ ) | = | G ( ) ( 1 , n ^ ) | (see [23,25]). Thus,
$$Y = \frac{1}{\sqrt{G^{(+)}(n,\hat n)^2 + \varepsilon^2 - G^{(-)}(1,\hat n)^2}} \begin{bmatrix} \sqrt{G^{(+)}(n,\hat n)^2 + \varepsilon^2} & -G^{(-)}(1,\hat n) \\ -G^{(-)}(1,\hat n) & \sqrt{G^{(+)}(n,\hat n)^2 + \varepsilon^2} \end{bmatrix} = \frac{|G^{(+)}(n,\hat n)|}{\varepsilon} \begin{bmatrix} \sqrt{1 + \big(\tfrac{\varepsilon}{G^{(+)}(n,\hat n)}\big)^2} & -\theta \\ -\theta & \sqrt{1 + \big(\tfrac{\varepsilon}{G^{(+)}(n,\hat n)}\big)^2} \end{bmatrix},$$
where
$$\theta = \frac{G^{(-)}(1,\hat n)}{|G^{(+)}(n,\hat n)|} = \operatorname{sign}\big(G^{(-)}(1,\hat n)\big).$$
We observe that, as $\varepsilon \to 0^+$,
$$\frac{|G^{(+)}(n,\hat n)|}{\varepsilon} \to \infty, \qquad \sqrt{1 + \Big(\frac{\varepsilon}{G^{(+)}(n,\hat n)}\Big)^2} \to 1.$$
Since a vector of the right null-space of $T$ is determined up to multiplication by a constant, neglecting the factor $|G^{(+)}(n,\hat n)|/\varepsilon$, such a vector can be computed at the last iteration as the first row of the product
$$\begin{bmatrix} 1 & -\theta \\ -\theta & 1 \end{bmatrix} \begin{bmatrix} G^{(+)}(n, \hat n + 1 : 2\hat n) \\ G^{(-)}(1, \hat n + 1 : 2\hat n) \end{bmatrix}.$$
When a vector of the null-space is detected as a linear combination of row $n$ of $G^{(+)}$ and row one of $G^{(-)}$, the new generator matrix $G$ for the GSA is obtained by removing the latter two rows from $G$ [23,25].
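The elementary step used throughout — a hyperbolic rotation that annihilates the leading entry of a negative-signature generator row against a positive-signature one while preserving the $J$-inner product — can be sketched as follows. This is a generic illustration with made-up generator rows and our own function names, not the paper's implementation:

```python
import math

def hyperbolic_rotation(a, b):
    """Return (c, s) defining Y = [[c, -s], [-s, c]] with c^2 - s^2 = 1,
    chosen so that Y zeroes b against a; requires |a| > |b|."""
    if abs(a) <= abs(b):
        raise ValueError("need |a| > |b| for a well-defined hyperbolic rotation")
    t = math.sqrt(a * a - b * b)
    return a / t, b / t

def apply_rotation(c, s, g1, g2):
    """Apply Y to the stacked rows [g1; g2]."""
    new1 = [c * x - s * y for x, y in zip(g1, g2)]
    new2 = [-s * x + c * y for x, y in zip(g1, g2)]
    return new1, new2

def jnorm(g1, g2):
    """J-inner product g1^T g1 - g2^T g2, invariant under Y."""
    return sum(x * x for x in g1) - sum(y * y for y in g2)

# g1 plays the role of a positive-signature generator row, g2 a negative one.
g1 = [5.0, 1.0, 2.0]
g2 = [3.0, 0.5, 1.0]
c, s = hyperbolic_rotation(g1[0], g2[0])
h1, h2 = apply_rotation(c, s, g1, g2)
# h2[0] is (numerically) zero, and jnorm(g1, g2) == jnorm(h1, h2).
```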
The implementation of the modified GSA for computing the null-space of band block-Toeplitz matrices in Equation (13) is rather technical and can be obtained from the authors upon request.
The stability properties of the GSA have been studied in [4,5,24]. The proposed algorithm inherits the stability properties of the GSA, which means that it is weakly stable.

6. Numerical Examples

All the numerical experiments were carried out in matlab with machine precision $\varepsilon \approx 2.22 \times 10^{-16}$. Example 1 concerns the computation of the rank of a Sylvester matrix, while Examples 2 and 3 concern the computation of the null-space of polynomial matrices.
Example 1.
Let $x_i$, $i = 1, \dots, 12$, $y_i$, $i = 1, \dots, 15$, and $z_i$, $i = 1, \dots, 3$, be random numbers generated by the matlab function randn. Let $w(x)$ and $y(x)$ be the two polynomials of degree 15 and 18, constructed by the matlab function poly, whose roots are, respectively, $x_i$ and $z_j$, $i = 1, \dots, 12$, $j = 1, \dots, 3$, and $y_i$ and $z_j$, $i = 1, \dots, 15$, $j = 1, \dots, 3$.
The greatest common divisor of $w$ and $y$ has degree 3 and, therefore, the Sylvester matrix $S \in \mathbb{R}^{33 \times 33}$ constructed from $w(x)$ and $y(x)$ has rank 30. The diagonal entries of the $R$ factor computed by the GSA implementation described in Section 4 are displayed in Figure 3. Observe that the rank of the matrix can be retrieved as the number of diagonal entries of $R$ above a certain tolerance. Moreover, it can be noticed that the first $m = 15$ diagonal entries monotonically decrease.
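In general, for polynomials of degrees $m$ and $n$, the $(m+n) \times (m+n)$ Sylvester matrix has rank $m + n - \deg \gcd$. A small-scale analogue of this experiment can be sketched as follows (toy polynomials of our own choosing, with numpy's matrix_rank standing in for the GSA-computed $R$ factor):

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of p (degree m) and q (degree n), coefficients
    given in descending order; the result has shape (m+n) x (m+n)."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                 # n shifted copies of p
        S[i, i:i + m + 1] = p
    for i in range(m):                 # m shifted copies of q
        S[n + i, i:i + n + 1] = q
    return S

# p = (x-1)(x-2)(x-3), q = (x-1)(x+2)(x+3)(x+4): gcd = (x-1), degree 1.
p = [1.0, -6.0, 11.0, -6.0]
q = [1.0, 8.0, 17.0, -2.0, -24.0]
S = sylvester(p, q)                    # 7 x 7 matrix
rank = np.linalg.matrix_rank(S)        # rank = 3 + 4 - 1 = 6
```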
Example 2.
As a second example, we consider the computation of the coprime factorization of a transfer function matrix, described in [9,26]. The results obtained by the proposed GSA-based algorithm were compared with those obtained by computing the null-space of the considered matrix with the function svd of matlab.
Let H ( s ) = N r ( s ) D r 1 ( s ) be the transfer function with
$$D_r(s) = \begin{bmatrix} 1-s & 0 & 0 & 0 \\ 0 & 1-s & 0 & 0 \\ 0 & -s & 1-s & 0 \\ 0 & 0 & 0 & 1-s \end{bmatrix}, \qquad N_r(s) = \begin{bmatrix} s^2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & s & 0 \\ 0 & 0 & 0 & s \end{bmatrix}.$$
Let
$$M(s) = \begin{bmatrix} N_r^T(s) & -D_r^T(s) \end{bmatrix} = \begin{bmatrix} s^2 & 0 & 0 & 0 & 0 & s-1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & s-1 & s & 0 \\ 0 & 0 & 0 & s & 0 & 0 & 0 & s-1 & 0 \\ 0 & 0 & 0 & 0 & s & 0 & 0 & 0 & s-1 \end{bmatrix}.$$
As reported in [9,26], a minimal polynomial basis for the right null-space of M ( s ) is
$$N(s) = \begin{bmatrix} 1-s & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & (1-s)^2 & 0 \\ 0 & 0 & 1-s \\ s^2 & 0 & 0 \\ 0 & s^2 & 0 \\ 0 & s-s^2 & 0 \\ 0 & 0 & s \end{bmatrix}.$$
Let us consider $n_b = 3$. Then, $T \in \mathbb{R}^{20 \times 21}$ is the block-Toeplitz matrix constructed from $M(s)$ as described in Section 5.1. Let $\operatorname{rank}(T) = 17$ and $U \Sigma V^T$ be the rank and the singular value decomposition of $T$ computed by matlab, respectively, and let us define $V_1 = V(:, 1:17)$ and $V_2 = V(:, 18:21)$ as the matrices of the right singular vectors corresponding to the nonzero and zero singular values of $T$, respectively. The modified GSA applied to $T$ yields four vectors $v_1$, $Z_{n_b} v_1$, $v_2$, $v_3 \in \mathbb{R}^{21}$ belonging to the right null-space of $T$, with $Z_{n_b} = \operatorname{diag}(\operatorname{ones}(14,1), 7)$. Let $X = [v_1\ Z_{n_b} v_1\ v_2\ v_3]$. In Table 1, the relative norm of $T V_2$, the relative norm of $T X$, the norm of $V_1^T V_2$ and the norm of $V_1^T X$ are reported in Columns 1–4, respectively.
Such values show that the results provided by svd of matlab and by the algorithm based on a modification of the GSA are comparable in terms of accuracy.
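One can check directly that the displayed minimal basis annihilates $M(s)$. The sketch below encodes the entries of $M(s)$ and $N(s)$ as coefficient lists in ascending powers of $s$ (the encoding and helper functions are ours) and verifies that every entry of the product $M(s) N(s)$ is the zero polynomial:

```python
def pmul(a, b):
    """Multiply polynomials given as ascending coefficient lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def padd(a, b):
    """Add two polynomials given as ascending coefficient lists."""
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0.0) + (b[i] if i < len(b) else 0.0)
            for i in range(n)]

Z = [0.0]                                   # the zero polynomial
one_minus_s, s_minus_1 = [1.0, -1.0], [-1.0, 1.0]
s, s2 = [0.0, 1.0], [0.0, 0.0, 1.0]

# Rows of M(s) = [N_r^T(s)  -D_r^T(s)]  (4 x 9).
M = [[s2, Z, Z, Z, Z, s_minus_1, Z, Z, Z],
     [Z, Z, Z, Z, Z, Z, s_minus_1, s, Z],
     [Z, Z, Z, s, Z, Z, Z, s_minus_1, Z],
     [Z, Z, Z, Z, s, Z, Z, Z, s_minus_1]]

# Columns of the minimal basis N(s)  (9 x 3).
N_cols = [[one_minus_s, Z, Z, Z, Z, s2, Z, Z, Z],
          [Z, Z, Z, [1.0, -2.0, 1.0], Z, Z, s2, [0.0, 1.0, -1.0], Z],
          [Z, Z, Z, Z, one_minus_s, Z, Z, Z, s]]

residual = 0.0
for row in M:
    for col in N_cols:
        acc = [0.0]
        for mij, njk in zip(row, col):
            acc = padd(acc, pmul(mij, njk))
        residual += sum(abs(c) for c in acc)
# residual == 0: every entry of M(s) N(s) is the zero polynomial.
```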
Example 3.
This example can be found in [9,26]. Let $H(s) = D_l^{-1}(s) N_l(s)$ be the transfer function with
$$D_l(s) = (s+2)^2 (s+3) \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad N_l(s) = \begin{bmatrix} 3s+8 & 2s^2+6s+2 \\ s^2+6s+2 & 3s^2+7s+8 \end{bmatrix}.$$
Let $M(s) = [D_l(s), N_l(s)]$. A right coprime pair for $M(s)$ is given by
$$N_r = \begin{bmatrix} 3 & 2 \\ s+2 & 3 \end{bmatrix}, \qquad D_r = \begin{bmatrix} s^2+3s+4 & 2 \\ 2 & s+4 \end{bmatrix}.$$
Let us choose $n_b = 4$. Then, $T \in \mathbb{R}^{14 \times 16}$ is the block-Toeplitz matrix constructed from $M(s)$ as described in Section 5.1. Let $\operatorname{rank}(T) = 11$ and $U \Sigma V^T$ be the rank and the singular value decomposition of $T$ computed by matlab, respectively, and let us define $V_1 = V(:, 1:11)$ and $V_2 = V(:, 12:16)$ as the matrices of the right singular vectors corresponding to the nonzero and zero singular values of $T$, respectively. The modified GSA applied to $T$ yields the vectors $v_1$, $Z_{n_b} v_1$, $Z_{n_b}^2 v_1$, $v_2$, $Z_{n_b} v_2$, $v_3 \in \mathbb{R}^{16}$ of the right null-space, with $Z_{n_b} = \operatorname{diag}(\operatorname{ones}(12,1), 4)$. Let $X = [v_1, Z_{n_b} v_1, Z_{n_b}^2 v_1, v_2, Z_{n_b} v_2, v_3]$. In Table 2, the relative norm of $T V_2$, the relative norm of $T X$, the norm of $V_1^T V_2$ and the norm of $V_1^T X$ are reported in Columns 1–4, respectively.
As in Example 2, the results yielded by the considered algorithms are comparable in accuracy.
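The displayed coprime pair can be sanity-checked through the identity $N_l(s) D_r(s) = D_l(s) N_r(s)$ and, since $D_l(s)$ is the scalar polynomial $(s+2)^2(s+3)$ times $I_2$, through $\det D_r(s) = (s+2)^2(s+3)$. The following check at a few sample points is our own sanity test, not part of the proposed algorithm:

```python
def mats(s):
    """Evaluate d(s) = (s+2)^2 (s+3) and the 2x2 factors at the scalar s."""
    d = (s + 2.0)**2 * (s + 3.0)
    Nl = [[3*s + 8, 2*s*s + 6*s + 2],
          [s*s + 6*s + 2, 3*s*s + 7*s + 8]]
    Nr = [[3.0, 2.0], [s + 2.0, 3.0]]
    Dr = [[s*s + 3*s + 4.0, 2.0], [2.0, s + 4.0]]
    return d, Nl, Nr, Dr

def mmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

ok = True
for s in (-1.5, 0.0, 2.0, 5.0):
    d, Nl, Nr, Dr = mats(s)
    # det D_r(s) must equal (s+2)^2 (s+3) ...
    ok &= abs(Dr[0][0]*Dr[1][1] - Dr[0][1]*Dr[1][0] - d) < 1e-9
    # ... and N_l(s) D_r(s) must equal d(s) N_r(s),
    # i.e. D_l^{-1} N_l = N_r D_r^{-1}.
    L, R = mmul(Nl, Dr), [[d * x for x in row] for row in Nr]
    ok &= all(abs(L[i][j] - R[i][j]) < 1e-6 for i in range(2) for j in range(2))
```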

7. Conclusions

The Generalized Schur Algorithm is a powerful tool that allows the computation of classical matrix decompositions, such as the $QR$ and $LU$ factorizations. If the involved matrices have a particular structure, such as Toeplitz or Sylvester, the GSA computes these factorizations with a complexity one order of magnitude lower than that of classical algorithms based on Householder or elementary transformations.
After having emphasized the main features of the GSA, we have shown in this manuscript that it helps to prove some theoretical properties of the $R$ factor of the $QR$ factorization of some structured matrices. Moreover, a fast implementation of the GSA for computing the rank of Sylvester matrices and the null-space of polynomial matrices has been proposed, which relies on a modification of the GSA for computing the $R$ factor of the $QR$ factorization of band block-Toeplitz matrices with full column rank, together with its inverse. The numerical examples show that the proposed approach yields reliable results, comparable to those provided by the function svd of matlab.

Author Contributions

All authors contributed equally to this work.

Funding

This research was partly funded by INdAM-GNCS and by CNR under the Short Term Mobility Program.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kailath, T.; Sayed, A.H. Fast Reliable Algorithms for Matrices with Structure; SIAM: Philadelphia, PA, USA, 1999.
  2. Kailath, T.; Sayed, A.H. Displacement Structure: Theory and Applications. SIAM Rev. 1995, 37, 297–386.
  3. Chandrasekaran, S.; Sayed, A.H. Stabilizing the Generalized Schur Algorithm. SIAM J. Matrix Anal. Appl. 1996, 17, 950–983.
  4. Stewart, M.; Van Dooren, P. Stability issues in the factorization of structured matrices. SIAM J. Matrix Anal. Appl. 1997, 18, 104–118.
  5. Mastronardi, N.; Van Dooren, P.; Van Huffel, S. On the stability of the generalized Schur algorithm. Lect. Notes Comput. Sci. 2001, 1988, 560–567.
  6. Li, B.; Liu, Z.; Zhi, L. A structured rank-revealing method for Sylvester matrix. J. Comput. Appl. Math. 2008, 213, 212–223.
  7. Forney, G. Minimal bases of rational vector spaces with applications to multivariable linear systems. SIAM J. Control Optim. 1975, 13, 493–520.
  8. Zúñiga Anaya, J.; Henrion, D. An improved Toeplitz algorithm for polynomial matrix null-space computation. Appl. Math. Comput. 2009, 207, 256–272.
  9. Beelen, T.; van den Hurk, G.; Praagman, C. A new method for computing a column reduced polynomial matrix. Syst. Control Lett. 1988, 10, 217–224.
  10. Neven, W.; Praagman, C. Column reduction of polynomial matrices. Linear Algebra Its Appl. 1993, 188, 569–589.
  11. Bojańczyk, A.; Brent, R.; Van Dooren, P.; de Hoog, F. A note on downdating the Cholesky factorization. SIAM J. Sci. Stat. Comput. 1987, 8, 210–221.
  12. Higham, N.J. J-orthogonal matrices: properties and generation. SIAM Rev. 2003, 45, 504–519.
  13. Lemmerling, P.; Mastronardi, N.; Van Huffel, S. Fast algorithm for solving the Hankel/Toeplitz Structured Total Least Squares Problem. Numer. Algorithms 2000, 23, 371–392.
  14. Mastronardi, N.; Lemmerling, P.; Van Huffel, S. Fast structured total least squares algorithm for solving the basic deconvolution problem. SIAM J. Matrix Anal. Appl. 2000, 22, 533–553.
  15. Golub, G.H.; Van Loan, C.F. Matrix Computations, 4th ed.; Johns Hopkins University Press: Baltimore, MD, USA, 2013.
  16. Boito, P. Structured Matrix Based Methods for Approximate Polynomial GCD; Edizioni Della Normale: Pisa, Italy, 2011.
  17. Boito, P.; Bini, D.A. A Fast Algorithm for Approximate Polynomial GCD Based on Structured Matrix Computations. In Numerical Methods for Structured Matrices and Applications; Bini, D., Mehrmann, V., Olshevsky, V., Tyrtyshnikov, E., Van Barel, M., Eds.; Birkhäuser: Basel, Switzerland, 2010; Volume 199, pp. 155–173.
  18. Mastronardi, N.; Lemmerling, P.; Kalsi, A.; O'Leary, D.; Van Huffel, S. Implementation of the regularized structured total least squares algorithms for blind image deblurring. Linear Algebra Its Appl. 2004, 391, 203–221.
  19. Basilio, J.; Moreira, M. A robust solution of the generalized polynomial Bezout identity. Linear Algebra Its Appl. 2004, 385, 287–303.
  20. Kailath, T. Linear Systems; Prentice Hall: Englewood Cliffs, NJ, USA, 1980.
  21. Bueno, M.; De Terán, F.; Dopico, F. Recovery of Eigenvectors and Minimal Bases of Matrix Polynomials from Generalized Fiedler Linearizations. SIAM J. Matrix Anal. Appl. 2011, 32, 463–483.
  22. De Terán, F.; Dopico, F.; Mackey, D. Fiedler companion linearizations and the recovery of minimal indices. SIAM J. Matrix Anal. Appl. 2010, 31, 2181–2204.
  23. Gallivan, K.; Thirumalai, S.; Van Dooren, P.; Vermaut, V. High performance algorithms for Toeplitz and block Toeplitz matrices. Linear Algebra Its Appl. 1996, 241–243, 343–388.
  24. Stewart, M. Cholesky Factorization of Semi-definite Toeplitz Matrices. Linear Algebra Its Appl. 1997, 254, 497–526.
  25. Mastronardi, N.; Van Barel, M.; Vandebril, R. On the computation of the null space of Toeplitz-like matrices. Electron. Trans. Numer. Anal. 2009, 33, 151–162.
  26. Antoniou, E.; Vardulakis, A.; Vologiannidis, S. Numerical computation of minimal polynomial bases: A generalized resultant approach. Linear Algebra Its Appl. 2005, 405, 264–278.
Figure 1. Modification of the sparsity pattern of the generator matrix G after the first iteration.
Figure 2. Modification of the sparsity pattern of the generator matrix G after the ith iteration.
Figure 3. Diagonal entries of R.
Table 1. Relative norm of $T V_2$, relative norm of $T X$, norm of $V_1^T V_2$ and norm of $V_1^T X$, for Example 2.

$\|T V_2\|_2 / \|T\|_2$ | $\|T X\|_2 / \|T\|_2$ | $\|V_1^T V_2\|_2$ | $\|V_1^T X\|_2$
$4.23 \times 10^{-16}$ | $2.89 \times 10^{-16}$ | $9.66 \times 10^{-16}$ | $2.59 \times 10^{-15}$
Table 2. Relative norm of $T V_2$, relative norm of $T X$, norm of $V_1^T V_2$ and norm of $V_1^T X$, for Example 3.

$\|T V_2\|_2 / \|T\|_2$ | $\|T X\|_2 / \|T\|_2$ | $\|V_1^T V_2\|_2$ | $\|V_1^T X\|_2$
$1.33 \times 10^{-16}$ | $2.04 \times 10^{-16}$ | $5.56 \times 10^{-16}$ | $4.91 \times 10^{-15}$
