Article

Block Generalized Locally Toeplitz Sequences: From the Theory to the Applications

by
Carlo Garoni
1,2,*,
Mariarosa Mazza
3 and
Stefano Serra-Capizzano
2,4
1
Institute of Computational Science, University of Italian Switzerland, 6900 Lugano, Switzerland
2
Department of Science and High Technology, University of Insubria, 22100 Como, Italy
3
Division of Numerical Methods in Plasma Physics, Max Planck Institute for Plasma Physics, 85748 Garching bei München, Germany
4
Department of Information Technology, Uppsala University, P.O. Box 337, SE-751 05 Uppsala, Sweden
*
Author to whom correspondence should be addressed.
Axioms 2018, 7(3), 49; https://doi.org/10.3390/axioms7030049
Submission received: 9 May 2018 / Revised: 6 July 2018 / Accepted: 16 July 2018 / Published: 19 July 2018
(This article belongs to the Special Issue Advanced Numerical Methods in Applied Sciences)

Abstract: The theory of generalized locally Toeplitz (GLT) sequences is a powerful apparatus for computing the asymptotic spectral distribution of matrices $A_n$ arising from virtually any kind of numerical discretization of differential equations (DEs). Indeed, when the mesh fineness parameter $n$ tends to infinity, these matrices $A_n$ give rise to a sequence $\{A_n\}_n$, which often turns out to be a GLT sequence or one of its “relatives”, i.e., a block GLT sequence or a reduced GLT sequence. In particular, block GLT sequences are encountered in the discretization of systems of DEs as well as in the higher-order finite element or discontinuous Galerkin approximation of scalar DEs. Despite the applicative interest, a solid theory of block GLT sequences has been developed only recently, in 2018. The purpose of the present paper is to illustrate the potential of this theory by presenting a few noteworthy examples of applications in the context of DE discretizations.

1. Introduction

The theory of generalized locally Toeplitz (GLT) sequences stems from Tilli’s work on locally Toeplitz (LT) sequences [1] and from the spectral theory of Toeplitz matrices [2,3,4,5,6,7,8,9,10,11,12]. It was then carried forward in [13,14,15,16], and was recently extended by Barbarino [17]. This theory is a powerful apparatus for computing the asymptotic spectral distribution of matrices arising from the numerical discretization of continuous problems, such as integral equations (IEs) and, especially, differential equations (DEs). The experience reveals that virtually any kind of numerical methods for the discretization of DEs gives rise to structured matrices A n whose asymptotic spectral distribution, as the mesh fineness parameter n tends to infinity, can be computed through the theory of GLT sequences. We refer the reader to ([13] Section 10.5), ([14] Section 7.3), and [15,16,18] for applications of the theory of GLT sequences in the context of finite difference (FD) discretizations of DEs; to ([13] Section 10.6), ([14] Section 7.4), and [16,18,19] for the finite element (FE) case; to [20] for the finite volume (FV) case; to ([13] Section 10.7), ([14] Sections 7.5–7.7), and [21,22,23,24,25,26] for the case of isogeometric analysis (IgA) discretizations, both in the collocation and Galerkin frameworks; and to [27] for a further recent application to fractional DEs. We also refer the reader to ([13] Section 10.4) and [28,29] for a look at the GLT approach for sequences of matrices arising from IE discretizations.
It is worth emphasizing that the asymptotic spectral distribution of DE discretization matrices, whose computation is the main objective of the theory of GLT sequences, is not only interesting from a theoretical viewpoint, but can also be used for practical purposes. For example, it is known that the convergence properties of mainstream iterative solvers, such as multigrid and preconditioned Krylov methods, strongly depend on the spectral features of the matrices to which they are applied. The spectral distribution can then be exploited to design efficient solvers of this kind and to analyze/predict their performance. In this regard, we recall that noteworthy estimates on the superlinear convergence of the conjugate gradient method obtained by Beckermann and Kuijlaars in [30] are closely related to the asymptotic spectral distribution of the considered matrices. Furthermore, in the context of Galerkin and collocation IgA discretizations of elliptic DEs, the spectral distribution computed through the theory of GLT sequences in a series of recent papers [21,22,23,24,25] was exploited in [31,32,33] to devise and analyze optimal and robust multigrid solvers for IgA linear systems.
In the very recent work [34], starting from the original intuition by the third author ([16] Section 3.3), the theory of block GLT sequences has been developed in a systematic way as an extension of the theory of GLT sequences. Such an extension is of the utmost importance in practical applications. In particular, it provides the necessary tools for computing the spectral distribution of block structured matrices arising from the discretization of systems of DEs ([16] Section 3.3) and from the higher-order finite element or discontinuous Galerkin approximation of scalar/vectorial DEs [35,36,37]. The purpose of this paper is to illustrate the potential of the theory of block GLT sequences [34] and of its multivariate version—which combines the results of [34] with the “multivariate technicalities” from [14]—by presenting a few noteworthy examples of applications. Actually, the present paper can be seen as a necessary completion of the purely theoretical work [34].
The paper is organized as follows. In Section 2, we report a summary of the theory of block GLT sequences. In Section 3, we focus on the FD discretization of a model system of univariate DEs; through the theory of block GLT sequences, we compute the spectral distribution of the related discretization matrices. In Section 4, we focus on the higher-order FE approximation of the univariate diffusion equation; again, we compute the spectral distribution of the associated discretization matrices through the theory of block GLT sequences. In Section 5, we summarize the multivariate version of the theory of block GLT sequences, also known as the theory of multilevel block GLT sequences. In Section 6, we describe the general GLT approach for computing the spectral distribution of matrices arising from the discretization of systems of partial differential equations (PDEs). In Section 7, we focus on the B-spline IgA approximation of a bivariate variational problem for the curl–curl operator, which is of interest in magnetostatics; through the theory of multilevel block GLT sequences, we compute the spectral distribution of the related discretization matrices. Final considerations are collected in Section 8.

2. The Theory of Block GLT Sequences

In this section, we summarize the theory of block GLT sequences, which was originally introduced in ([16] Section 3.3) and has been recently revised and systematically developed in [34].
Sequences of Matrices and Block Matrix-Sequences. Throughout this paper, a sequence of matrices is any sequence of the form { A n } n , where A n is a square matrix of size d n and d n as n . Let s 1 be a fixed positive integer independent of n; an s-block matrix-sequence (or simply a matrix-sequence if s can be inferred from the context or we do not need/want to specify it) is a special sequence of matrices { A n } n in which the size of A n is d n = s n .
Singular Value and Eigenvalue Distribution of a Sequence of Matrices. Let μ k be the Lebesgue measure in R k . Throughout this paper, all the terminology from measure theory (such as “measurable set”, “measurable function”, “a.e.”, etc.) is referred to the Lebesgue measure. A matrix-valued function f : D R k C r × r is said to be measurable (resp., continuous, Riemann-integrable, in L p ( D ) , etc.) if its components f α β : D C , α , β = 1 , , r , are measurable (resp., continuous, Riemann-integrable, in L p ( D ) , etc.). We denote by C c ( R ) (resp., C c ( C ) ) the space of continuous complex-valued functions with bounded support defined on R (resp., C ). If A C m × m , the singular values and the eigenvalues of A are denoted by σ 1 ( A ) , , σ m ( A ) and λ 1 ( A ) , , λ m ( A ) , respectively.
Definition 1.
Let { A n } n be a sequence of matrices, with A n of size d n , and let f : D R k C r × r be a measurable function defined on a set D with 0 < μ k ( D ) < .
  • We say that { A n } n has a (asymptotic) singular value distribution described by f, and we write { A n } n σ f , if
    $$\lim_{n\to\infty}\frac{1}{d_n}\sum_{i=1}^{d_n}F(\sigma_i(A_n))=\frac{1}{\mu_k(D)}\int_D\frac{\sum_{i=1}^{r}F(\sigma_i(f(\mathbf{x})))}{r}\,\mathrm{d}\mathbf{x},\qquad\forall\,F\in C_c(\mathbb{R}).\qquad(1)$$
    In this case, f is referred to as a singular value symbol of { A n } n .
  • We say that { A n } n has a (asymptotic) spectral (or eigenvalue) distribution described by f, and we write { A n } n λ f , if
    $$\lim_{n\to\infty}\frac{1}{d_n}\sum_{i=1}^{d_n}F(\lambda_i(A_n))=\frac{1}{\mu_k(D)}\int_D\frac{\sum_{i=1}^{r}F(\lambda_i(f(\mathbf{x})))}{r}\,\mathrm{d}\mathbf{x},\qquad\forall\,F\in C_c(\mathbb{C}).\qquad(2)$$
    In this case, f is referred to as a spectral (or eigenvalue) symbol of { A n } n .
If { A n } n has both a singular value and an eigenvalue distribution described by f, we write { A n } n σ , λ f .
We note that Definition 1 is well-posed because the functions x i = 1 r F ( σ i ( f ( x ) ) ) and x i = 1 r F ( λ i ( f ( x ) ) ) are measurable ([34] Lemma 2.1). Whenever we write a relation such as { A n } n σ f or { A n } n λ f , it is understood that f is as in Definition 1; that is, f is a measurable function defined on a subset D of some R k with 0 < μ k ( D ) < , and f takes values in C r × r for some r 1 .
Remark 1.
The informal meaning behind the spectral distribution (2) is the following: assuming that f possesses r Riemann-integrable eigenvalue functions λ i ( f ( x ) ) , i = 1 , , r , the eigenvalues of A n , except possibly for o ( d n ) outliers, can be subdivided into r different subsets of approximately the same cardinality; and, for n large enough, the eigenvalues belonging to the ith subset are approximately equal to the samples of the ith eigenvalue function λ i ( f ( x ) ) over a uniform grid in the domain D. For instance, if k = 1 , d n = n r , and D = [ a , b ] , then, assuming we have no outliers, the eigenvalues of A n are approximately equal to
$$\lambda_i\Bigl(f\Bigl(a+j\,\frac{b-a}{n}\Bigr)\Bigr),\qquad j=1,\ldots,n,\quad i=1,\ldots,r,$$
for n large enough; similarly, if k = 2 , d n = n 2 r , and D = [ a 1 , b 1 ] × [ a 2 , b 2 ] , then, assuming we have no outliers, the eigenvalues of A n are approximately equal to
$$\lambda_i\Bigl(f\Bigl(a_1+j_1\,\frac{b_1-a_1}{n},\ a_2+j_2\,\frac{b_2-a_2}{n}\Bigr)\Bigr),\qquad j_1,j_2=1,\ldots,n,\quad i=1,\ldots,r,$$
for n large enough; and so on for k 3 . A completely analogous meaning can also be given for the singular value distribution (1).
Remark 2.
Let D = [ a 1 , b 1 ] × × [ a k , b k ] R k and let f : D C r × r be a measurable function possessing r real-valued Riemann-integrable eigenvalue functions λ i ( f ( x ) ) , i = 1 , , r . Compute for each ρ N the uniform samples
$$\lambda_i\Bigl(f\Bigl(a_1+j_1\,\frac{b_1-a_1}{\rho},\ \ldots,\ a_k+j_k\,\frac{b_k-a_k}{\rho}\Bigr)\Bigr),\qquad j_1,\ldots,j_k=1,\ldots,\rho,\quad i=1,\ldots,r,$$
sort them in non-decreasing order and put them in a vector ( ς 1 , ς 2 , , ς r ρ k ) . Let ϕ ρ : [ 0 , 1 ] R be the piecewise linear non-decreasing function that interpolates the samples ( ς 0 = ς 1 , ς 1 , ς 2 , , ς r ρ k ) over the nodes ( 0 , 1 r ρ k , 2 r ρ k , , 1 ) , i.e.,
$$\phi_\rho\Bigl(\frac{i}{r\rho^k}\Bigr)=\varsigma_i,\quad i=0,\ldots,r\rho^k,\qquad\phi_\rho\ \text{linear on}\ \Bigl[\frac{i}{r\rho^k},\,\frac{i+1}{r\rho^k}\Bigr]\ \text{for}\ i=0,\ldots,r\rho^k-1.$$
Suppose ϕ ρ converges in measure over [ 0 , 1 ] to some function ϕ as ρ (this is always the case in real-world applications). Then,
$$\int_0^1 F(\phi(t))\,\mathrm{d}t=\frac{1}{\mu_k(D)}\int_D\frac{\sum_{i=1}^{r}F(\lambda_i(f(\mathbf{x})))}{r}\,\mathrm{d}\mathbf{x},\qquad\forall\,F\in C_c(\mathbb{C}).\qquad(3)$$
This result can be proved by adapting the argument used in ([13] solution of Exercise 3.1). The function ϕ is referred to as the canonical rearranged version of f. What is interesting about ϕ is that, by (3), if { A n } n λ f then { A n } n λ ϕ , i.e., if f is a spectral symbol of { A n } n then the same is true for ϕ. Moreover, ϕ is a univariate scalar function and hence it is much easier to handle than f. According to Remark 1, assuming that ϕ is Riemann-integrable, if we have { A n } n λ f (and hence also { A n } n λ ϕ ), then, for n large enough, the eigenvalues of A n , with the possible exception of o ( d n ) outliers, are approximately equal to the samples of ϕ over a uniform grid in [ 0 , 1 ] .
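The construction of Remark 2 is straightforward to implement. The following sketch (Python with NumPy, an illustration of ours rather than anything from the paper) samples the eigenvalue functions of a Hermitian matrix-valued symbol over a uniform grid, sorts the samples and returns the data defining $\phi_\rho$; the symbol used in the last lines is a hypothetical $2\times2$ example in the spirit of Section 3.

```python
import numpy as np

def rearranged_symbol(f, domain, rho):
    """Sample the eigenvalue functions of the Hermitian symbol f over a uniform grid,
    sort the samples and return the nodes/values defining the piecewise linear phi_rho."""
    grids = [np.linspace(a, b, rho) for (a, b) in domain]
    samples = []
    for idx in np.ndindex(*([rho] * len(domain))):
        point = [grids[d][idx[d]] for d in range(len(domain))]
        samples.extend(np.linalg.eigvalsh(f(*point)))   # r eigenvalues per grid point
    samples = np.sort(np.asarray(samples))              # (varsigma_1, ..., varsigma_{r*rho^k})
    nodes = np.linspace(0.0, 1.0, samples.size)          # uniform nodes in [0, 1]
    return nodes, samples                                 # phi_rho interpolates these pairs

# Hypothetical 2 x 2 Hermitian symbol on [0, 1] x [-pi, pi].
kappa = lambda x, t: np.array([[(2 + x) * (2 - 2 * np.cos(t)), -1j * np.sin(t)],
                               [1j * np.sin(t), 2 - x]])
nodes, values = rearranged_symbol(kappa, [(0, 1), (-np.pi, np.pi)], rho=100)
```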
The next two theorems are useful tools for computing the spectral distribution of sequences formed by Hermitian or perturbed Hermitian matrices. For the related proofs, we refer the reader to ([38] Theorem 4.3) and ([39] Theorem 1.1). In the following, the conjugate transpose of the matrix A is denoted by A * . If A C m × m and 1 p , we denote by A p the Schatten p-norm of A, i.e., the p-norm of the vector ( σ 1 ( A ) , , σ m ( A ) ) . The Schatten ∞-norm A is the largest singular value of A and coincides with the spectral norm A . The Schatten 1-norm A 1 is the sum of the singular values of A and is often referred to as the trace-norm of A. The Schatten 2-norm A 2 coincides with the Frobenius norm of A. For more on Schatten p-norms, see [40].
Theorem 1.
Let { X n } n be a sequence of matrices, with X n Hermitian of size d n , and let { P n } n be a sequence such that P n C d n × δ n , P n * P n = I δ n , δ n d n and δ n / d n 1 as n . Then, { X n } n σ , λ κ if and only if { P n * X n P n } n σ , λ κ .
Theorem 2.
Let { X n } n and { Y n } n be sequences of matrices, with X n and Y n of size d n . Assume that:
  • the matrices X n are Hermitian and { X n } n λ κ ;
  • $\|Y_n\|_2=o(\sqrt{d_n})$;
then { X n + Y n } n λ κ .
Block Toeplitz Matrices. Given a function f : [ π , π ] C s × s in L 1 ( [ π , π ] ) , its Fourier coefficients are denoted by
$$f_k=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(\theta)\,e^{-\mathrm{i}k\theta}\,\mathrm{d}\theta\in\mathbb{C}^{s\times s},\qquad k\in\mathbb{Z},$$
where the integrals are computed componentwise. The nth block Toeplitz matrix generated by f is defined as
$$T_n(f)=[f_{i-j}]_{i,j=1}^{n}\in\mathbb{C}^{sn\times sn}.$$
It is not difficult to see that all the matrices T n ( f ) are Hermitian when f is Hermitian a.e.
Block Diagonal Sampling Matrices. For n N and a : [ 0 , 1 ] C s × s , we define the block diagonal sampling matrix D n ( a ) as the diagonal matrix
$$D_n(a)=\mathop{\mathrm{diag}}_{i=1,\ldots,n}a\Bigl(\frac{i}{n}\Bigr)=\begin{bmatrix}a(\frac1n)&&&\\&a(\frac2n)&&\\&&\ddots&\\&&&a(1)\end{bmatrix}\in\mathbb{C}^{sn\times sn}.$$
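For concreteness, here is a minimal sketch (Python/NumPy, our own illustration) of the two building blocks just introduced: the block Toeplitz matrix $T_n(f)$ assembled from the Fourier coefficients of $f$, and the block diagonal sampling matrix $D_n(a)$.

```python
import numpy as np

def block_toeplitz(n, fourier_coeff):
    """T_n(f) = [f_{i-j}]_{i,j=1}^n, where fourier_coeff(k) returns the s x s coefficient f_k."""
    s = fourier_coeff(0).shape[0]
    T = np.zeros((n * s, n * s), dtype=complex)
    for i in range(n):
        for j in range(n):
            T[i * s:(i + 1) * s, j * s:(j + 1) * s] = fourier_coeff(i - j)
    return T

def block_diag_sampling(n, a):
    """D_n(a) = diag(a(1/n), a(2/n), ..., a(1)) for an s x s matrix-valued function a."""
    s = a(1.0).shape[0]
    D = np.zeros((n * s, n * s), dtype=complex)
    for i in range(1, n + 1):
        D[(i - 1) * s:i * s, (i - 1) * s:i * s] = a(i / n)
    return D

# Example: f(theta) = 2 - 2cos(theta) has f_0 = 2 and f_{1} = f_{-1} = -1,
# so T_n(f) is the classical tridiagonal matrix tridiag(-1, 2, -1).
coeff = lambda k: np.array([[{0: 2.0, 1: -1.0, -1: -1.0}.get(k, 0.0)]])
T = block_toeplitz(6, coeff)
D = block_diag_sampling(6, lambda x: np.array([[x ** 2]]))
```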
Zero-Distributed Sequences. A sequence of matrices { Z n } n such that { Z n } n σ 0 is referred to as a zero-distributed sequence. Note that, for any r 1 , { Z n } n σ 0 is equivalent to { Z n } n σ O r (throughout this paper, O m and I m denote the m × m zero matrix and the m × m identity matrix, respectively). Proposition 1 provides an important characterization of zero-distributed sequences together with a useful sufficient condition for detecting such sequences. Throughout this paper, we use the natural convention 1 / = 0 .
Proposition 1.
Let { Z n } n be a sequence of matrices, with Z n of size d n .
  • $\{Z_n\}_n$ is zero-distributed if and only if $Z_n=R_n+N_n$ with $\mathrm{rank}(R_n)/d_n\to0$ and $\|N_n\|\to0$.
  • $\{Z_n\}_n$ is zero-distributed if there exists a $p\in[1,\infty]$ such that $\|Z_n\|_p/(d_n)^{1/p}\to0$.
Approximating Classes of Sequences. The notion of approximating classes of sequences (a.c.s.) is the fundamental concept on which the theory of block GLT sequences is based.
Definition 2.
Let { A n } n be a sequence of matrices, with A n of size d n , and let { { B n , m } n } m be a sequence of sequences of matrices, with B n , m of size d n . We say that { { B n , m } n } m is an approximating class of sequences (a.c.s.) for { A n } n if the following condition is met: for every m there exists n m such that, for n n m ,
$$A_n=B_{n,m}+R_{n,m}+N_{n,m},\qquad\mathrm{rank}(R_{n,m})\le c(m)\,d_n,\qquad\|N_{n,m}\|\le\omega(m),$$
where n m , c ( m ) , ω ( m ) depend only on m and lim m c ( m ) = lim m ω ( m ) = 0 .
Roughly speaking, { { B n , m } n } m is an a.c.s. for { A n } n if, for large m, the sequence { B n , m } n approximates { A n } n in the sense that A n is eventually equal to B n , m plus a small-rank matrix (with respect to the matrix size d n ) plus a small-norm matrix. It turns out that, for each fixed sequence of positive integers d n such that d n , the notion of a.c.s. is a notion of convergence in the space
E = { { A n } n : A n C d n × d n } .
More precisely, there exists a pseudometric d a . c . s . in E such that { { B n , m } n } m is an a.c.s. for { A n } n if and only if d a . c . s . ( { B n , m } n , { A n } n ) 0 as m . We therefore use the convergence notation { B n , m } n a . c . s . { A n } n to indicate that { { B n , m } n } m is an a.c.s. for { A n } n . A useful criterion to identify an a.c.s. is provided in the next proposition ([13] Corollary 5.3).
Proposition 2.
Let { A n } n be a sequence of matrices, with A n of size d n , let { { B n , m } n } m be a sequence of sequences of matrices, with B n , m of size d n , and let p [ 1 , ] . Suppose that for every m there exists n m such that, for n n m ,
$$\|A_n-B_{n,m}\|_p\le\varepsilon(m,n)\,(d_n)^{1/p},$$
where lim m lim   sup n   ε ( m , n ) = 0 . Then, { B n , m } n a . c . s . { A n } n .
If X C m 1 × m 2 and Y C 1 × 2 are any two matrices, the tensor (Kronecker) product of X and Y is the m 1 1 × m 2 2 matrix defined as follows:
$$X\otimes Y=[x_{ij}Y]_{\substack{i=1,\ldots,m_1\\ j=1,\ldots,m_2}}=\begin{bmatrix}x_{11}Y&\cdots&x_{1m_2}Y\\ \vdots&&\vdots\\ x_{m_11}Y&\cdots&x_{m_1m_2}Y\end{bmatrix}.$$
We recall that the tensor product operation ⊗ is associative and bilinear. Moreover,
$$\|X\otimes Y\|=\|X\|\,\|Y\|,\qquad(4)$$
$$\mathrm{rank}(X\otimes Y)=\mathrm{rank}(X)\,\mathrm{rank}(Y),\qquad(5)$$
$$(X\otimes Y)^T=X^T\otimes Y^T.\qquad(6)$$
Finally, if $X_1,X_2$ can be multiplied and $Y_1,Y_2$ can be multiplied, then
$$(X_1\otimes Y_1)(X_2\otimes Y_2)=(X_1X_2)\otimes(Y_1Y_2).\qquad(7)$$
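As a quick sanity check of the properties above (in particular the rank identity and the mixed-product rule), one can verify them numerically; the snippet below is only an illustration of ours with random matrices (NumPy assumed).

```python
import numpy as np

rng = np.random.default_rng(0)
X1, X2 = rng.standard_normal((3, 4)), rng.standard_normal((4, 2))
Y1, Y2 = rng.standard_normal((2, 5)), rng.standard_normal((5, 3))
# (X1 tensor Y1)(X2 tensor Y2) = (X1 X2) tensor (Y1 Y2)
assert np.allclose(np.kron(X1, Y1) @ np.kron(X2, Y2), np.kron(X1 @ X2, Y1 @ Y2))
# rank(X tensor Y) = rank(X) rank(Y)
assert np.linalg.matrix_rank(np.kron(X1, Y1)) == np.linalg.matrix_rank(X1) * np.linalg.matrix_rank(Y1)
```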
Lemma 1.
For i , j = 1 , , s , let { A n , i j } n be a sequence of matrices and suppose that { B n , i j ( m ) } n a . c . s . { A n , i j } n . Then,
[ B n , i j ( m ) ] i , j = 1 s a . c . s . [ A n , i j ] i , j = 1 s .
Proof. 
Let E i j be the s × s matrix having 1 in position ( i , j ) and 0 elsewhere. Note that
[ A n , i j ] i , j = 1 s = i , j = 1 s E i j A n , i j , [ B n , i j ( m ) ] i , j = 1 s = i , j = 1 s E i j B n , i j ( m ) .
Since { B n , i j ( m ) } n a . c . s . { A n , i j } n , it is clear from (4), (5) and the definition of a.c.s. that
E i j B n , i j ( m ) a . c . s . E i j A n , i j , i , j = 1 , , s .
Now, if { B n , m [ k ] } n a . c . s . { A n [ k ] } n for k = 1 , , K then { k = 1 K B n , m [ k ] } n a . c . s . { k = 1 K A n [ k ] } n (this is an obvious consequence of the definition of a.c.s.). Thus, the thesis follows from (8) and (9). ☐
Block GLT Sequences. Let s 1 be a fixed positive integer. An s-block GLT sequence (or simply a GLT sequence if s can be inferred from the context or we do not need/want to specify it) is a special s-block matrix-sequence { A n } n equipped with a measurable function κ : [ 0 , 1 ] × [ π , π ] C s × s , the so-called symbol. We use the notation { A n } n GLT κ to indicate that { A n } n is a GLT sequence with symbol κ . The symbol of a GLT sequence is unique in the sense that if { A n } n GLT κ and { A n } n GLT ς then κ = ς a.e. in [ 0 , 1 ] × [ π , π ] . The main properties of s-block GLT sequences proved in [34] are listed below. If A is a matrix, we denote by A the Moore–Penrose pseudoinverse of A (recall that A = A 1 whenever A is invertible). If f m , f : D R k C r × r are measurable matrix-valued functions, we say that f m converges to f in measure (resp., a.e., in L p ( D ) , etc.) if ( f m ) α β converges to f α β in measure (resp., a.e., in L p ( D ) , etc.) for all α , β = 1 , , r .
GLT 1.
If { A n } n GLT κ then { A n } n σ κ . If, moreover, each A n is Hermitian, then { A n } n λ κ .
GLT 2.
We have:
  • { T n ( f ) } n GLT κ ( x , θ ) = f ( θ ) if f : [ π , π ] C s × s is in L 1 ( [ π , π ] ) ;
  • { D n ( a ) } n GLT κ ( x , θ ) = a ( x ) if a : [ 0 , 1 ] C s × s is Riemann-integrable;
  • { Z n } n GLT κ ( x , θ ) = O s if and only if { Z n } n σ 0 .
GLT 3.
If { A n } n GLT κ and { B n } n GLT ς , then:
  • { A n * } n GLT κ * ;
  • { α A n + β B n } n GLT α κ + β ς for all α , β C ;
  • { A n B n } n GLT κ ς ;
  • $\{A_n^{\dagger}\}_n\sim_{\mathrm{GLT}}\kappa^{-1}$ provided that κ is invertible a.e.
GLT 4.
{ A n } n GLT κ if and only if there exist s-block GLT sequences { B n , m } n GLT κ m such that { B n , m } n a . c . s . { A n } n and κ m κ in measure.
Remark 3.
The reader might be astonished by the fact that we have talked so far about block GLT sequences without defining them. Actually, we intentionally avoided giving a definition for two reasons. First, the definition is rather cumbersome as it requires introducing other related (and complicated) concepts such as “block LT operators” and “block LT sequences”. Second, from a practical viewpoint, the definition is completely useless because everything that can be derived from it can also be derived fromGLT 1GLT 4(and in a much easier way). The reader who is interested in the formal definition of block GLT sequences can find it in ([34] Section 5) along with the proofs of properties GLT 1GLT 4.

3. FD Discretization of a System of DEs

Consider the following system of DEs:
$$\begin{cases}-a_{11}(x)\,u_1''(x)+a_{12}(x)\,u_2'(x)=f_1(x), & x\in(0,1),\\ a_{21}(x)\,u_1'(x)+a_{22}(x)\,u_2(x)=f_2(x), & x\in(0,1),\\ u_1(0)=0,\ \ u_1(1)=0,\qquad u_2(0)=0,\ \ u_2(1)=0.\end{cases}\qquad(10)$$
In this section, we consider the classical central FD discretization of (10). Through the theory of block GLT sequences, we show that the corresponding sequence of (normalized) FD discretization matrices enjoys a spectral distribution described by a 2 × 2 matrix-valued function. We remark that the number 2, which identifies the matrix space C 2 × 2 where the spectral symbol takes values, coincides with the number of equations that compose the system (10). In what follows, we use the following notation:
$$\mathop{\mathrm{tridiag}}_{j=1,\ldots,n}\bigl[\,\beta_j\ \big|\ \alpha_j\ \big|\ \gamma_j\,\bigr]=\begin{bmatrix}\alpha_1&\gamma_1&&&\\ \beta_2&\alpha_2&\gamma_2&&\\ &\ddots&\ddots&\ddots&\\ &&\beta_{n-1}&\alpha_{n-1}&\gamma_{n-1}\\ &&&\beta_n&\alpha_n\end{bmatrix}.$$
The parameters α j , β j , γ j may be either scalars or s × s blocks for some s > 1 , in which case the previous matrix is a block tridiagonal matrix.

3.1. FD Discretization

Let $n\ge1$, set $h=\frac{1}{n+1}$ and $x_j=jh$ for $j=0,\ldots,n+1$. Using the classical central FD schemes $(-1,2,-1)$ and $\frac12(-1,0,1)$ for the discretization of, respectively, the (negative) second derivative and the first derivative, for each $j=1,\ldots,n$ we obtain the following approximations:
$$\bigl[-a_{11}(x)\,u_1''(x)+a_{12}(x)\,u_2'(x)\bigr]_{x=x_j}\approx a_{11}(x_j)\,\frac{-u_1(x_{j+1})+2u_1(x_j)-u_1(x_{j-1})}{h^2}+a_{12}(x_j)\,\frac{u_2(x_{j+1})-u_2(x_{j-1})}{2h},$$
$$\bigl[a_{21}(x)\,u_1'(x)+a_{22}(x)\,u_2(x)\bigr]_{x=x_j}\approx a_{21}(x_j)\,\frac{u_1(x_{j+1})-u_1(x_{j-1})}{2h}+a_{22}(x_j)\,u_2(x_j).$$
This means that the nodal values of the solutions $u_1,u_2$ of (10) satisfy approximately the equations
$$a_{11}(x_j)\bigl(-u_1(x_{j+1})+2u_1(x_j)-u_1(x_{j-1})\bigr)+\frac{h}{2}\,a_{12}(x_j)\bigl(u_2(x_{j+1})-u_2(x_{j-1})\bigr)=h^2 f_1(x_j),$$
$$\frac12\,a_{21}(x_j)\bigl(u_1(x_{j+1})-u_1(x_{j-1})\bigr)+h\,a_{22}(x_j)\,u_2(x_j)=h\,f_2(x_j),$$
for $j=1,\ldots,n$. We then approximate the solution $u_1$ (resp., $u_2$) by the piecewise linear function that takes the value $u_{1,j}$ (resp., $u_{2,j}$) at $x_j$ for all $j=0,\ldots,n+1$, where $u_{1,0}=u_{1,n+1}=u_{2,0}=u_{2,n+1}=0$ and the vectors $\mathbf{u}_1=(u_{1,1},\ldots,u_{1,n})^T$ and $\mathbf{u}_2=(u_{2,1},\ldots,u_{2,n})^T$ solve the linear system
$$\begin{cases}a_{11}(x_j)\bigl(-u_{1,j+1}+2u_{1,j}-u_{1,j-1}\bigr)+\dfrac{h}{2}\,a_{12}(x_j)\bigl(u_{2,j+1}-u_{2,j-1}\bigr)=h^2 f_1(x_j), & j=1,\ldots,n,\\[1mm] \dfrac12\,a_{21}(x_j)\bigl(u_{1,j+1}-u_{1,j-1}\bigr)+h\,a_{22}(x_j)\,u_{2,j}=h\,f_2(x_j), & j=1,\ldots,n.\end{cases}\qquad(11)$$
This linear system can be rewritten in matrix form as follows:
$$A_n\begin{bmatrix}\mathbf{u}_1\\ \mathbf{u}_2\end{bmatrix}=\begin{bmatrix}h^2\,\mathbf{f}_1\\ h\,\mathbf{f}_2\end{bmatrix},\qquad(12)$$
where $\mathbf{f}_1=[f_1(x_j)]_{j=1}^{n}$, $\mathbf{f}_2=[f_2(x_j)]_{j=1}^{n}$,
$$A_n=\begin{bmatrix}K_n(a_{11}) & h\,H_n(a_{12})\\ H_n(a_{21}) & h\,M_n(a_{22})\end{bmatrix}=\begin{bmatrix}K_n(a_{11}) & H_n(a_{12})\\ H_n(a_{21}) & M_n(a_{22})\end{bmatrix}\begin{bmatrix}I_n & O_n\\ O_n & h\,I_n\end{bmatrix},\qquad(13)$$
and
$$\begin{aligned}K_n(a_{11})&=\mathop{\mathrm{tridiag}}_{j=1,\ldots,n}\bigl[-a_{11}(x_j)\ \big|\ 2a_{11}(x_j)\ \big|\ -a_{11}(x_j)\bigr]=\mathop{\mathrm{diag}}_{j=1,\ldots,n}\bigl(a_{11}(x_j)\bigr)\,T_n(2-2\cos\theta),\\ H_n(a_{12})&=\mathop{\mathrm{tridiag}}_{j=1,\ldots,n}\bigl[-\tfrac12 a_{12}(x_j)\ \big|\ 0\ \big|\ \tfrac12 a_{12}(x_j)\bigr]=\mathop{\mathrm{diag}}_{j=1,\ldots,n}\bigl(a_{12}(x_j)\bigr)\,T_n(-\mathrm{i}\sin\theta),\\ H_n(a_{21})&=\mathop{\mathrm{tridiag}}_{j=1,\ldots,n}\bigl[-\tfrac12 a_{21}(x_j)\ \big|\ 0\ \big|\ \tfrac12 a_{21}(x_j)\bigr]=\mathop{\mathrm{diag}}_{j=1,\ldots,n}\bigl(a_{21}(x_j)\bigr)\,T_n(-\mathrm{i}\sin\theta),\\ M_n(a_{22})&=\mathop{\mathrm{diag}}_{j=1,\ldots,n}\bigl(a_{22}(x_j)\bigr).\end{aligned}$$
In view of (13), the linear system (12) is equivalent to
$$B_n\begin{bmatrix}\mathbf{v}_1\\ \mathbf{v}_2\end{bmatrix}=\begin{bmatrix}h^2\,\mathbf{f}_1\\ h\,\mathbf{f}_2\end{bmatrix},\qquad(14)$$
where $\mathbf{v}_1=\mathbf{u}_1$, $\mathbf{v}_2=h\,\mathbf{u}_2$, and
$$B_n=\begin{bmatrix}K_n(a_{11}) & H_n(a_{12})\\ H_n(a_{21}) & M_n(a_{22})\end{bmatrix}.\qquad(15)$$
Let v 1 , 1 , , v 1 , n and v 2 , 1 , , v 2 , n be the components of v 1 and v 2 , respectively. When writing the linear system (11) in the form (14), we are implicitly assuming the following.
  • The unknowns are sorted as follows:
    $$\begin{bmatrix}[v_{1,j}]_{j=1,\ldots,n}\\ [v_{2,j}]_{j=1,\ldots,n}\end{bmatrix}=\begin{bmatrix}v_{1,1}\\ v_{1,2}\\ \vdots\\ v_{1,n}\\ v_{2,1}\\ v_{2,2}\\ \vdots\\ v_{2,n}\end{bmatrix}.\qquad(16)$$
  • The equations are sorted as follows, in accordance with the ordering (16) for the unknowns:
    $$\begin{cases}a_{11}(x_j)\bigl(-v_{1,j+1}+2v_{1,j}-v_{1,j-1}\bigr)+\tfrac12\,a_{12}(x_j)\bigl(v_{2,j+1}-v_{2,j-1}\bigr)=h^2 f_1(x_j), & j=1,\ldots,n,\\ \tfrac12\,a_{21}(x_j)\bigl(v_{1,j+1}-v_{1,j-1}\bigr)+a_{22}(x_j)\,v_{2,j}=h\,f_2(x_j), & j=1,\ldots,n.\end{cases}\qquad(17)$$
Suppose we decide to change the ordering for both the unknowns and the equations. More precisely, suppose we opt for the following orderings.
  • The unknowns are sorted as follows:
    $$\begin{bmatrix}v_{1,j}\\ v_{2,j}\end{bmatrix}_{j=1,\ldots,n}=\begin{bmatrix}v_{1,1}\\ v_{2,1}\\ v_{1,2}\\ v_{2,2}\\ \vdots\\ v_{1,n}\\ v_{2,n}\end{bmatrix}.\qquad(18)$$
  • The equations are sorted as follows, in accordance with the ordering (18) for the unknowns:
    $$\left.\begin{aligned}&a_{11}(x_j)\bigl(-v_{1,j+1}+2v_{1,j}-v_{1,j-1}\bigr)+\tfrac12\,a_{12}(x_j)\bigl(v_{2,j+1}-v_{2,j-1}\bigr)=h^2 f_1(x_j)\\ &\tfrac12\,a_{21}(x_j)\bigl(v_{1,j+1}-v_{1,j-1}\bigr)+a_{22}(x_j)\,v_{2,j}=h\,f_2(x_j)\end{aligned}\ \right\}\quad j=1,\ldots,n.\qquad(19)$$
The matrix C n associated with the linear system (11) assuming the new orderings (18) and (19) is the 2 × 2 block tridiagonal matrix given by
$$C_n=\mathop{\mathrm{tridiag}}_{j=1,\ldots,n}\left[\begin{matrix}-a_{11}(x_j) & -\tfrac12 a_{12}(x_j)\\ -\tfrac12 a_{21}(x_j) & 0\end{matrix}\ \middle|\ \begin{matrix}2a_{11}(x_j) & 0\\ 0 & a_{22}(x_j)\end{matrix}\ \middle|\ \begin{matrix}-a_{11}(x_j) & \tfrac12 a_{12}(x_j)\\ \tfrac12 a_{21}(x_j) & 0\end{matrix}\right].\qquad(20)$$
The matrix C n is similar to B n . Indeed, by permuting both rows and columns of B n according to the permutation 1 , n + 1 , 2 , n + 2 , , n , 2 n we obtain C n . More precisely, let e 1 , , e n and e ˜ 1 , , e ˜ 2 n be the vectors of the canonical basis of R n and R 2 n , respectively, and let Π n be the permutation matrix associated with the permutation 1 , n + 1 , 2 , n + 2 , , n , 2 n , that is,
$$\Pi_n=\begin{bmatrix}\tilde{\mathbf{e}}_1^T\\ \tilde{\mathbf{e}}_{n+1}^T\\ \tilde{\mathbf{e}}_2^T\\ \tilde{\mathbf{e}}_{n+2}^T\\ \vdots\\ \tilde{\mathbf{e}}_n^T\\ \tilde{\mathbf{e}}_{2n}^T\end{bmatrix}=\begin{bmatrix}I_2\otimes\mathbf{e}_1^T\\ I_2\otimes\mathbf{e}_2^T\\ \vdots\\ I_2\otimes\mathbf{e}_n^T\end{bmatrix}.\qquad(21)$$
Then, C n = Π n B n Π n T .
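To fix ideas, the following sketch (Python/NumPy; the coefficient functions are hypothetical and only serve as an illustration of ours) assembles the normalized FD matrix $B_n$ of (15), builds the permutation of (21) and forms $C_n=\Pi_nB_n\Pi_n^T$.

```python
import numpy as np

def fd_matrices(n, a11, a12, a21, a22):
    """Assemble B_n as in (15) and C_n = Pi_n B_n Pi_n^T as in (20)-(21)."""
    x = np.arange(1, n + 1) / (n + 1)                   # grid points x_j = j h, h = 1/(n+1)
    K = np.diag(2 * a11(x)) - np.diag(a11(x[1:]), -1) - np.diag(a11(x[:-1]), 1)
    H = lambda b: np.diag(-0.5 * b(x[1:]), -1) + np.diag(0.5 * b(x[:-1]), 1)
    M = np.diag(a22(x))
    B = np.block([[K, H(a12)], [H(a21), M]])
    perm = np.ravel(np.column_stack([np.arange(n), np.arange(n, 2 * n)]))   # 1, n+1, 2, n+2, ...
    C = B[np.ix_(perm, perm)]
    return B, C

# Hypothetical smooth coefficients, just for illustration.
a11 = lambda x: 2 + np.cos(np.pi * x)
a12 = lambda x: np.exp(x) * np.sin(np.pi * x)
a21 = lambda x: -a12(x)
a22 = lambda x: 2 * x + np.sin(np.pi * x)
B, C = fd_matrices(40, a11, a12, a21, a22)
```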

3.2. GLT Analysis of the FD Discretization Matrices

The main result of this section (Theorem 3) shows that $\{C_n\}_n$ is a block GLT sequence whose spectral distribution is described by a $2\times2$ matrix-valued symbol, which is obtained by replacing the matrix-sequences $\{K_n(a_{11})\}_n$, $\{H_n(a_{12})\}_n$, $\{H_n(a_{21})\}_n$, $\{M_n(a_{22})\}_n$ appearing in the expression (15) of $B_n$ with the corresponding symbols $a_{11}(x)(2-2\cos\theta)$, $-\mathrm{i}\,a_{12}(x)\sin\theta$, $-\mathrm{i}\,a_{21}(x)\sin\theta$, $a_{22}(x)$. In this regard, we note that, assuming for instance $a_{11},a_{12},a_{21},a_{22}\in C([0,1])$, we have
$$\{K_n(a_{11})\}_n\sim_{\mathrm{GLT}}a_{11}(x)(2-2\cos\theta),\qquad(22)$$
$$\{H_n(a_{12})\}_n\sim_{\mathrm{GLT}}-\mathrm{i}\,a_{12}(x)\sin\theta,\qquad(23)$$
$$\{H_n(a_{21})\}_n\sim_{\mathrm{GLT}}-\mathrm{i}\,a_{21}(x)\sin\theta,\qquad(24)$$
$$\{M_n(a_{22})\}_n\sim_{\mathrm{GLT}}a_{22}(x).\qquad(25)$$
To prove (22), it suffices to observe that
$$\bigl\|K_n(a_{11})-D_n(a_{11})\,T_n(2-2\cos\theta)\bigr\|\le\Bigl\|\mathop{\mathrm{diag}}_{j=1,\ldots,n}\bigl(a_{11}(x_j)\bigr)-D_n(a_{11})\Bigr\|\,\bigl\|T_n(2-2\cos\theta)\bigr\|=\max_{j=1,\ldots,n}\Bigl|a_{11}(x_j)-a_{11}\Bigl(\frac{j}{n}\Bigr)\Bigr|\,\bigl\|T_n(2-2\cos\theta)\bigr\|\le 4\,\omega_{a_{11}}(h),$$
where $\omega_{a_{11}}(\cdot)$ is the modulus of continuity of $a_{11}$. Since $\omega_{a_{11}}(h)\to0$ as $n\to\infty$, it follows from Proposition 1 that $\{K_n(a_{11})-D_n(a_{11})\,T_n(2-2\cos\theta)\}_n\sim_\sigma0$, and so GLT 2 and GLT 3 immediately yield (22). The relations (23)–(25) are proved in the same way.
Theorem 3.
Suppose that $a_{11},a_{12},a_{21},a_{22}\in C([0,1])$. Then,
$$\{C_n\}_n\sim_{\mathrm{GLT}}\kappa(x,\theta)=\begin{bmatrix}a_{11}(x)(2-2\cos\theta) & -\mathrm{i}\,a_{12}(x)\sin\theta\\ -\mathrm{i}\,a_{21}(x)\sin\theta & a_{22}(x)\end{bmatrix}\qquad(26)$$
and
$$\{C_n\}_n\sim_\sigma\kappa(x,\theta).\qquad(27)$$
If, moreover, $a_{21}=-a_{12}$, then we also have
$$\{C_n\}_n\sim_\lambda\kappa(x,\theta).\qquad(28)$$
Proof. 
From (20), we have
$$\begin{aligned}C_n={}&\mathop{\mathrm{tridiag}}_{j=1,\ldots,n}\left[\begin{matrix}-a_{11}(x_j)&0\\0&0\end{matrix}\ \middle|\ \begin{matrix}2a_{11}(x_j)&0\\0&0\end{matrix}\ \middle|\ \begin{matrix}-a_{11}(x_j)&0\\0&0\end{matrix}\right]+\mathop{\mathrm{tridiag}}_{j=1,\ldots,n}\left[\begin{matrix}0&-\tfrac12 a_{12}(x_j)\\0&0\end{matrix}\ \middle|\ \begin{matrix}0&0\\0&0\end{matrix}\ \middle|\ \begin{matrix}0&\tfrac12 a_{12}(x_j)\\0&0\end{matrix}\right]\\ &+\mathop{\mathrm{tridiag}}_{j=1,\ldots,n}\left[\begin{matrix}0&0\\-\tfrac12 a_{21}(x_j)&0\end{matrix}\ \middle|\ \begin{matrix}0&0\\0&0\end{matrix}\ \middle|\ \begin{matrix}0&0\\\tfrac12 a_{21}(x_j)&0\end{matrix}\right]+\mathop{\mathrm{tridiag}}_{j=1,\ldots,n}\left[\begin{matrix}0&0\\0&0\end{matrix}\ \middle|\ \begin{matrix}0&0\\0&a_{22}(x_j)\end{matrix}\ \middle|\ \begin{matrix}0&0\\0&0\end{matrix}\right]\\ ={}&\Bigl(\mathop{\mathrm{diag}}_{j=1,\ldots,n}a_{11}(x_j)\otimes I_2\Bigr)\,T_n\bigl((2-2\cos\theta)E_{11}\bigr)+\Bigl(\mathop{\mathrm{diag}}_{j=1,\ldots,n}a_{12}(x_j)\otimes I_2\Bigr)\,T_n\bigl((-\mathrm{i}\sin\theta)E_{12}\bigr)\\ &+\Bigl(\mathop{\mathrm{diag}}_{j=1,\ldots,n}a_{21}(x_j)\otimes I_2\Bigr)\,T_n\bigl((-\mathrm{i}\sin\theta)E_{21}\bigr)+\Bigl(\mathop{\mathrm{diag}}_{j=1,\ldots,n}a_{22}(x_j)\otimes I_2\Bigr)\,T_n(E_{22}),\end{aligned}\qquad(29)$$
where E p q is the 2 × 2 matrix having 1 in position ( p , q ) and 0 elsewhere. It is clear that, for every p , q = 1 , 2 ,
$$\Bigl\|\mathop{\mathrm{diag}}_{j=1,\ldots,n}a_{pq}(x_j)\otimes I_2-D_n(a_{pq}I_2)\Bigr\|\le\omega_{a_{pq}}(h)\to0$$
as $n\to\infty$; hence, by Proposition 1, GLT 2 and GLT 3,
$$\Bigl\{\mathop{\mathrm{diag}}_{j=1,\ldots,n}a_{pq}(x_j)\otimes I_2\Bigr\}_n\sim_{\mathrm{GLT}}a_{pq}(x)\,I_2,\qquad p,q=1,2.$$
Consequently, the decomposition (29), GLT 2 and GLT 3 imply (26), which in turn implies (27) by GLT 1. It only remains to prove (28) in the case where $a_{21}=-a_{12}$. In this case, we have
$$C_n=\mathop{\mathrm{tridiag}}_{j=1,\ldots,n}\left[\begin{matrix}-a_{11}(x_j) & -\tfrac12 a_{12}(x_j)\\ \tfrac12 a_{12}(x_j) & 0\end{matrix}\ \middle|\ \begin{matrix}2a_{11}(x_j) & 0\\ 0 & a_{22}(x_j)\end{matrix}\ \middle|\ \begin{matrix}-a_{11}(x_j) & \tfrac12 a_{12}(x_j)\\ -\tfrac12 a_{12}(x_j) & 0\end{matrix}\right].$$
Consider the symmetric approximation of $C_n$ given by
$$\widetilde C_n=\mathop{\mathrm{tridiag}}_{j=1,\ldots,n}\left[\begin{matrix}-a_{11}(x_{j-1}) & -\tfrac12 a_{12}(x_{j-1})\\ \tfrac12 a_{12}(x_{j-1}) & 0\end{matrix}\ \middle|\ \begin{matrix}2a_{11}(x_j) & 0\\ 0 & a_{22}(x_j)\end{matrix}\ \middle|\ \begin{matrix}-a_{11}(x_j) & \tfrac12 a_{12}(x_j)\\ -\tfrac12 a_{12}(x_j) & 0\end{matrix}\right].$$
It is not difficult to see that $\|C_n-\widetilde C_n\|\to0$ as $n\to\infty$ by invoking the inequality
$$\|X\|\le\sqrt{\Bigl(\max_{i=1,\ldots,n}\sum_{j=1}^{n}|x_{ij}|\Bigr)\Bigl(\max_{j=1,\ldots,n}\sum_{i=1}^{n}|x_{ij}|\Bigr)},\qquad X\in\mathbb{C}^{n\times n};\qquad(30)$$
see, e.g., ([13] Section 2.4.1). Therefore:
  • in view of the decomposition C ˜ n = C n + ( C ˜ n C n ) , we have { C ˜ n } n GLT κ ( x , θ ) by (26), Proposition 1, GLT 2 and GLT 3, so in particular { C ˜ n } n λ κ ( x , θ ) by GLT 1 as C ˜ n is symmetric;
  • $\|C_n-\widetilde C_n\|_2\le\sqrt{2n}\,\|C_n-\widetilde C_n\|=o(\sqrt{2n})$ as $n\to\infty$.
Thus, (28) follows from Theorem 2. ☐
Example 1.
Suppose that $a_{11},a_{12},a_{21},a_{22}\in C([0,1])$ and $a_{21}=-a_{12}$, so that $\{C_n\}_n\sim_\lambda\kappa(x,\theta)$ by Theorem 3. The eigenvalue functions of $\kappa(x,\theta)$ are given by
$$\lambda_{1,2}(\kappa(x,\theta))=\frac{a_{11}(x)(2-2\cos\theta)+a_{22}(x)\pm\sqrt{\bigl(a_{11}(x)(2-2\cos\theta)-a_{22}(x)\bigr)^2+4\bigl(a_{12}(x)\sin\theta\bigr)^2}}{2}$$
and are continuous on $[0,1]\times[-\pi,\pi]$. Let $\phi$ be the canonical rearranged version of $\kappa(x,\theta)$ obtained as the limit of the piecewise linear functions $\phi_\rho$, according to the construction in Remark 2. Figure 1 shows the graph of $\phi$ and the eigenvalues $\lambda_1,\ldots,\lambda_{2n}$ of $C_n$ for $a_{11}(x)=2+\cos(\pi x)$, $a_{12}(x)=-a_{21}(x)=e^{x}\sin(\pi x)$, $a_{22}(x)=2x+\sin(\pi x)$ and $n=40$. The graph of $\phi$ has been obtained by plotting the graph of $\phi_\rho$ corresponding to a large value of $\rho$. The eigenvalues of $C_n$, which turn out to be real, although $C_n$ is not symmetric, have been sorted in non-decreasing order and placed at the points $(t_q,\lambda_q)$ with $t_q=\frac{q}{2n}$, $q=1,\ldots,2n$. We clearly see from the figure an excellent agreement between $\phi$ and the eigenvalues of $C_n$, as predicted by Remark 2. In particular, we observe no outliers in this case.
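The closed form for $\lambda_{1,2}(\kappa(x,\theta))$ can be checked pointwise against a direct eigenvalue computation; the short script below (Python/NumPy, an illustration of ours with coefficient choices in the spirit of Example 1) does exactly that.

```python
import numpy as np

a11 = lambda x: 2 + np.cos(np.pi * x)
a12 = lambda x: np.exp(x) * np.sin(np.pi * x)
a22 = lambda x: 2 * x + np.sin(np.pi * x)

def kappa(x, t):
    # Symbol of Theorem 3 with a21 = -a12, which makes kappa(x, theta) Hermitian.
    return np.array([[a11(x) * (2 - 2 * np.cos(t)), -1j * a12(x) * np.sin(t)],
                     [1j * a12(x) * np.sin(t), a22(x)]])

for x in np.linspace(0, 1, 11):
    for t in np.linspace(-np.pi, np.pi, 11):
        d = a11(x) * (2 - 2 * np.cos(t))
        disc = np.sqrt((d - a22(x)) ** 2 + 4 * (a12(x) * np.sin(t)) ** 2)
        lam = np.array([(d + a22(x) - disc) / 2, (d + a22(x) + disc) / 2])
        assert np.allclose(np.linalg.eigvalsh(kappa(x, t)), lam)
```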

4. Higher-Order FE Discretization of the Diffusion Equation

Consider the diffusion equation
$$\begin{cases}-\bigl(a(x)\,u'(x)\bigr)'=f(x), & x\in(0,1),\\ u(0)=u(1)=0.\end{cases}\qquad(31)$$
In this section, we consider the higher-order FE discretization of (31). Through the theory of block GLT sequences, we show that the corresponding sequence of (normalized) FE discretization matrices enjoys a spectral distribution described by a ( p k ) × ( p k ) matrix-valued function, where p and k represent, respectively, the degree and the smoothness of the piecewise polynomial functions involved in the FE approximation. Note that this result represents a remarkable argument in support of ([35] Conjecture 2).

4.1. FE Discretization

The weak form of (31) reads as follows: find $u\in H_0^1([0,1])$ such that
$$\int_0^1 a(x)\,u'(x)\,w'(x)\,\mathrm{d}x=\int_0^1 f(x)\,w(x)\,\mathrm{d}x,\qquad\forall\,w\in H_0^1([0,1]).$$
In the FE method, we fix a set of basis functions $\{\varphi_1,\ldots,\varphi_N\}\subset H_0^1([0,1])$ and we look for an approximation of the exact solution in the space $\mathscr{W}=\mathrm{span}(\varphi_1,\ldots,\varphi_N)$ by solving the following discrete problem: find $u_{\mathscr{W}}\in\mathscr{W}$ such that
$$\int_0^1 a(x)\,u_{\mathscr{W}}'(x)\,w'(x)\,\mathrm{d}x=\int_0^1 f(x)\,w(x)\,\mathrm{d}x,\qquad\forall\,w\in\mathscr{W}.$$
Since $\{\varphi_1,\ldots,\varphi_N\}$ is a basis of $\mathscr{W}$, we can write $u_{\mathscr{W}}=\sum_{j=1}^{N}u_j\varphi_j$ for a unique vector $\mathbf{u}=(u_1,\ldots,u_N)^T$. By linearity, the computation of $u_{\mathscr{W}}$ (i.e., of $\mathbf{u}$) reduces to solving the linear system
$$A\mathbf{u}=\mathbf{f},$$
where $\mathbf{f}=\bigl[\int_0^1 f(x)\varphi_1(x)\,\mathrm{d}x,\ \ldots,\ \int_0^1 f(x)\varphi_N(x)\,\mathrm{d}x\bigr]^T$ and $A$ is the stiffness matrix,
$$A=\biggl[\int_0^1 a(x)\,\varphi_j'(x)\,\varphi_i'(x)\,\mathrm{d}x\biggr]_{i,j=1}^{N}.\qquad(32)$$

4.2. p-Degree C k B-spline Basis Functions

Following the higher-order FE approach, the basis functions $\varphi_1,\ldots,\varphi_N$ will be chosen as piecewise polynomials of degree $p\ge1$. More precisely, for $p,n\ge1$ and $0\le k\le p-1$, let $B_{1,[p,k]},\ldots,B_{n(p-k)+k+1,[p,k]}:\mathbb{R}\to\mathbb{R}$ be the B-splines of degree $p$ and smoothness $C^k$ defined on the knot sequence
$$\{\tau_1,\ldots,\tau_{n(p-k)+p+k+2}\}=\Bigl\{\underbrace{0,\ldots,0}_{p+1},\ \underbrace{\tfrac1n,\ldots,\tfrac1n}_{p-k},\ \underbrace{\tfrac2n,\ldots,\tfrac2n}_{p-k},\ \ldots,\ \underbrace{\tfrac{n-1}{n},\ldots,\tfrac{n-1}{n}}_{p-k},\ \underbrace{1,\ldots,1}_{p+1}\Bigr\}.$$
We collect here a few properties of B 1 , [ p , k ] , , B n ( p k ) + k + 1 , [ p , k ] that we shall use in this paper. For the formal definition of B-splines, as well as for the proof of the properties listed below, see [41,42].
  • The support of the ith B-spline is given by
    supp ( B i , [ p , k ] ) = [ τ i , τ i + p + 1 ] , i = 1 , , n ( p k ) + k + 1 .
  • Except for the first and the last one, all the other B-splines vanish on the boundary of [ 0 , 1 ] , i.e.,
    B i , [ p , k ] ( 0 ) = B i , [ p , k ] ( 1 ) = 0 , i = 2 , , n ( p k ) + k .
  • { B 1 , [ p , k ] , , B n ( p k ) + k + 1 , [ p , k ] } is a basis for the space of piecewise polynomial functions of degree p and smoothness C k , that is,
    $$V_{n,[p,k]}=\bigl\{v\in C^k([0,1]):\ v|_{[\frac{i}{n},\frac{i+1}{n}]}\in\mathbb{P}_p\ \text{for all}\ i=0,\ldots,n-1\bigr\},$$
    where $\mathbb{P}_p$ is the space of polynomials of degree $\le p$. Moreover, $\{B_{2,[p,k]},\ldots,B_{n(p-k)+k,[p,k]}\}$ is a basis for the space
    $$W_{n,[p,k]}=\{w\in V_{n,[p,k]}:\ w(0)=w(1)=0\}.$$
  • The B-splines form a non-negative partition of unity over [ 0 , 1 ] :
    $$B_{i,[p,k]}\ge0\ \text{over}\ \mathbb{R},\qquad i=1,\ldots,n(p-k)+k+1,$$
    $$\sum_{i=1}^{n(p-k)+k+1}B_{i,[p,k]}=1\ \text{over}\ [0,1].$$
  • The derivatives of the B-splines satisfy
    $$\sum_{i=1}^{n(p-k)+k+1}\bigl|B_{i,[p,k]}'\bigr|\le c_p\,n\ \text{over}\ [0,1],\qquad(38)$$
    where c p is a constant depending only on p. Note that the derivatives B i , [ p , k ] may not be defined at some of the grid points 0 , 1 n , 2 n , , n 1 n , 1 in the case of C 0 smoothness ( k = 0 ). In (38), it is assumed that the undefined values are excluded from the summation.
  • All the B-splines, except for the first k + 1 and the last k + 1 , are uniformly shifted-scaled versions of p k fixed reference functions β 1 , [ p , k ] , , β p k , [ p , k ] , namely the first p k B-splines defined on the reference knot sequence
    $$\Bigl\{\underbrace{0,\ldots,0}_{p-k},\ \underbrace{1,\ldots,1}_{p-k},\ \ldots,\ \underbrace{\eta,\ldots,\eta}_{p-k}\Bigr\},\qquad\eta=\frac{p+1}{p-k}.$$
    In formulas, setting
    $$\nu=\frac{k+1}{p-k},\qquad(39)$$
    for the B-splines $B_{k+2,[p,k]},\ldots,B_{k+1+(n-\nu)(p-k),[p,k]}$, we have
    $$B_{k+1+(p-k)(r-1)+q,[p,k]}(x)=\beta_{q,[p,k]}(nx-r+1),\qquad r=1,\ldots,n-\nu,\quad q=1,\ldots,p-k.\qquad(40)$$
    We point out that the supports of the reference B-splines $\beta_{q,[p,k]}$ satisfy
    $$\mathrm{supp}(\beta_{1,[p,k]})\subseteq\mathrm{supp}(\beta_{2,[p,k]})\subseteq\cdots\subseteq\mathrm{supp}(\beta_{p-k,[p,k]})=[0,\eta].$$
    Figure 2 and Figure 3 show the graphs of the B-splines $B_{1,[p,k]},\ldots,B_{n(p-k)+k+1,[p,k]}$ for the degree $p=3$ and the smoothness $k=1$, and the graphs of the associated reference B-splines $\beta_{1,[p,k]},\beta_{2,[p,k]}$.
    Figure 2 and Figure 3 show the graphs of the B-splines B 1 , [ p , k ] , , B n ( p k ) + k + 1 , [ p , k ] for the degree p = 3 and the smoothness k = 1 , and the graphs of the associated reference B-splines β 1 , [ p , k ] , β 2 , [ p , k ] .
The basis functions φ 1 , , φ N are defined as follows:
$$\varphi_i=B_{i+1,[p,k]},\qquad i=1,\ldots,n(p-k)+k-1.\qquad(41)$$
In particular, with the notations of Section 4.1, we have N = n ( p k ) + k 1 and W = W n , [ p , k ] .
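The B-spline basis just described is available through standard tools. The sketch below (Python/SciPy, our own illustration) builds the knot sequence of this subsection and the $n(p-k)+k+1$ B-splines $B_{i,[p,k]}$ via scipy.interpolate.BSpline.basis_element.

```python
import numpy as np
from scipy.interpolate import BSpline

def knot_sequence(n, p, k):
    """Open knot sequence on [0, 1]: end points repeated p+1 times, interior break points
    1/n, ..., (n-1)/n repeated p-k times (total length n(p-k) + p + k + 2)."""
    interior = np.repeat(np.arange(1, n) / n, p - k)
    return np.concatenate([np.zeros(p + 1), interior, np.ones(p + 1)])

def bspline_basis(n, p, k):
    """The n(p-k) + k + 1 B-splines B_{i,[p,k]} of degree p and smoothness C^k."""
    t = knot_sequence(n, p, k)
    num = len(t) - p - 1                                        # = n(p-k) + k + 1
    return [BSpline.basis_element(t[i:i + p + 2], extrapolate=False) for i in range(num)]

basis = bspline_basis(n=10, p=3, k=1)
print(len(basis))                                               # 10*(3-1) + 1 + 1 = 22
```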

4.3. GLT Analysis of the Higher-Order FE Discretization Matrices

The stiffness matrix (32) resulting from the choice of the basis functions as in (41) will be denoted by A n , [ p , k ] ( a ) ,
$$A_{n,[p,k]}(a)=\biggl[\int_0^1 a(x)\,B_{j+1,[p,k]}'(x)\,B_{i+1,[p,k]}'(x)\,\mathrm{d}x\biggr]_{i,j=1}^{n(p-k)+k-1}.$$
The main result of this section (Theorem 4) gives the spectral distribution of the normalized sequence { n 1 A n , [ p , k ] ( a ) } n . The proof of Theorem 4 is entirely based on the theory of block GLT sequences and it is therefore referred to as “GLT analysis”. It also requires the following lemma, which provides an approximate construction of the matrix A n , [ p , k ] ( 1 ) corresponding to the constant-coefficient case where a ( x ) = 1 identically. In view of what follows, define the ( p k ) × ( p k ) blocks
$$K_{[p,k]}^{[\ell]}=\biggl[\int_{\mathbb{R}}\beta_{j,[p,k]}'(t)\,\beta_{i,[p,k]}'(t-\ell)\,\mathrm{d}t\biggr]_{i,j=1}^{p-k},\qquad\ell\in\mathbb{Z},$$
and the $(p-k)\times(p-k)$ matrix-valued function $\kappa_{[p,k]}:[-\pi,\pi]\to\mathbb{C}^{(p-k)\times(p-k)}$,
$$\kappa_{[p,k]}(\theta)=\sum_{\ell\in\mathbb{Z}}K_{[p,k]}^{[\ell]}\,e^{\mathrm{i}\ell\theta}=K_{[p,k]}^{[0]}+\sum_{\ell>0}\Bigl(K_{[p,k]}^{[\ell]}\,e^{\mathrm{i}\ell\theta}+\bigl(K_{[p,k]}^{[\ell]}\bigr)^{T}e^{-\mathrm{i}\ell\theta}\Bigr).\qquad(44)$$
Due to the compact support of the reference functions β 1 , [ p , k ] , , β p k , [ p , k ] , there is only a finite number of nonzero blocks K [ p , k ] [ ] and, consequently, the series in (44) is actually a finite sum.
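In the cardinal case $k=p-1$ (the one used in Section 7), the blocks $K_{[p,k]}^{[\ell]}$ are scalars and can be computed numerically. The sketch below (Python with NumPy/SciPy, an illustration of ours under this restriction) evaluates them by quadrature and assembles $\kappa_{[p]}(\theta)$; for $p=1$ it reproduces the FD symbol $2-2\cos\theta$.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.integrate import quad

def symbol_coefficients(p):
    """K^[l] = int beta'_[p](t) beta'_[p](t - l) dt for l = -p, ..., p (case k = p - 1)."""
    beta = BSpline.basis_element(np.arange(p + 2), extrapolate=False)  # cardinal B-spline beta_[p]
    dbeta = beta.derivative()
    def d(t):                                 # derivative of beta_[p], extended by zero
        return float(dbeta(t)) if 0 < t < p + 1 else 0.0
    K = {}
    for l in range(-p, p + 1):
        lo, hi = max(0, l), min(p + 1, p + 1 + l)                      # overlap of the supports
        K[l] = quad(lambda t: d(t) * d(t - l), lo, hi)[0]
    return K

def kappa(theta, K):
    """kappa_[p](theta) = sum_l K^[l] exp(i l theta)."""
    return sum(K[l] * np.exp(1j * l * theta) for l in K)

K1 = symbol_coefficients(1)
print(np.real(kappa(np.pi / 3, K1)), 2 - 2 * np.cos(np.pi / 3))        # both approximately 1.0
```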
Lemma 2.
Let p , n 1 and 0 k p 1 . Define A ˜ n , [ p , k ] ( 1 ) as the principal submatrix of A n , [ p , k ] ( 1 ) of size ( n ν ) ( p k ) corresponding to the indices k + 1 , , k + ( n ν ) ( p k ) , where ν = ( k + 1 ) / ( p k ) as in (39). Then, A ˜ n , [ p , k ] ( 1 ) = n T n ν ( κ [ p , k ] ) .
Proof. 
By (34) and (40), for all r , R = 1 , , n ν and q , Q = 1 , , p k we have
$$\begin{aligned}\bigl(\widetilde A_{n,[p,k]}(1)\bigr)_{(p-k)(r-1)+q,\,(p-k)(R-1)+Q}&=\int_0^1 B_{k+1+(p-k)(R-1)+Q,[p,k]}'(x)\,B_{k+1+(p-k)(r-1)+q,[p,k]}'(x)\,\mathrm{d}x=\int_{\mathbb{R}}B_{k+1+(p-k)(R-1)+Q,[p,k]}'(x)\,B_{k+1+(p-k)(r-1)+q,[p,k]}'(x)\,\mathrm{d}x\\&=n^2\int_{\mathbb{R}}\beta_{Q,[p,k]}'(nx-R+1)\,\beta_{q,[p,k]}'(nx-r+1)\,\mathrm{d}x=n\int_{\mathbb{R}}\beta_{Q,[p,k]}'(y)\,\beta_{q,[p,k]}'(y-r+R)\,\mathrm{d}y\end{aligned}$$
and
$$\bigl(T_{n-\nu}(\kappa_{[p,k]})\bigr)_{(p-k)(r-1)+q,\,(p-k)(R-1)+Q}=\bigl(K_{[p,k]}^{[r-R]}\bigr)_{q,Q}=\int_{\mathbb{R}}\beta_{Q,[p,k]}'(y)\,\beta_{q,[p,k]}'(y-r+R)\,\mathrm{d}y,$$
which completes the proof. ☐
Theorem 4.
Let a L 1 ( [ 0 , 1 ] ) , p 1 and 0 k p 1 . Then, { n 1 A n , [ p , k ] ( a ) } n σ , λ a ( x ) κ [ p , k ] ( θ ) .
Proof. 
The proof consists of four steps. Throughout this proof, we use the following notation.
  • ν = ( k + 1 ) / ( p k ) as in (39).
  • For every square matrix A of size n ( p k ) + k 1 , we denote by A ˜ the principal submatrix of A corresponding to the row and column indices i , j = k + 1 , , k + ( n ν ) ( p k ) .
  • P n , [ p , k ] is the ( n ( p k ) + k 1 ) × ( n ν ) ( p k ) matrix having I ( n ν ) ( p k ) as the principal submatrix corresponding to the row and column indices i , j = k + 1 , , k + ( n ν ) ( p k ) and zeros elsewhere. Note that P n , [ p , k ] T P n , [ p , k ] = I ( n ν ) ( p k ) and P n , [ p , k ] T A P n , [ p , k ] = A ˜ for every square matrix A of size n ( p k ) + k 1 .
Step 1. Consider the linear operator A n , [ p , k ] ( · ) : L 1 ( [ 0 , 1 ] ) R ( n ( p k ) + k 1 ) × ( n ( p k ) + k 1 ) ,
$$A_{n,[p,k]}(g)=\biggl[\int_0^1 g(x)\,B_{j+1,[p,k]}'(x)\,B_{i+1,[p,k]}'(x)\,\mathrm{d}x\biggr]_{i,j=1}^{n(p-k)+k-1}.$$
The next three steps are devoted to show that
$$\bigl\{P_{n,[p,k]}^{T}\bigl(n^{-1}A_{n,[p,k]}(g)\bigr)P_{n,[p,k]}\bigr\}_n=\bigl\{n^{-1}\widetilde A_{n,[p,k]}(g)\bigr\}_n\sim_{\mathrm{GLT}}g(x)\,\kappa_{[p,k]}(\theta),\qquad\forall\,g\in L^1([0,1]).\qquad(45)$$
Once this is done, the theorem is proven. Indeed, from (45), we immediately obtain the relation { P n , [ p , k ] T ( n 1 A n , [ p , k ] ( a ) ) P n , [ p , k ] } n GLT a ( x ) κ [ p , k ] ( θ ) . We infer that { P n , [ p , k ] T ( n 1 A n , [ p , k ] ( a ) ) P n , [ p , k ] } n σ , λ a ( x ) κ [ p , k ] ( θ ) by GLT 1 and { n 1 A n , [ p , k ] ( a ) } n σ , λ a ( x ) κ [ p , k ] ( θ ) by Theorem 1.
Step 2. We first prove (45) in the constant-coefficient case where g ( x ) = 1 identically. In this case, by Lemma 2, we have n 1 A ˜ n , [ p , k ] ( 1 ) = T n ν ( κ [ p , k ] ) . Hence, the desired relation { n 1 A ˜ n , [ p , k ] ( 1 ) } n GLT κ [ p , k ] ( θ ) follows from GLT 2.
Step 3. Now we prove (45) in the case where g C ( [ 0 , 1 ] ) . Let
$$Z_{n,[p,k]}(g)=n^{-1}\widetilde A_{n,[p,k]}(g)-n^{-1}D_{n-\nu}(gI_{p-k})\,\widetilde A_{n,[p,k]}(1).$$
By (33), (34) and (38), for all r , R = 1 , , n ν and q , Q = 1 , , p k , we have
$$\begin{aligned}\bigl|\bigl(n\,Z_{n,[p,k]}(g)\bigr)_{(p-k)(r-1)+q,\,(p-k)(R-1)+Q}\bigr|&=\biggl|\int_0^1\Bigl(g(x)-g\Bigl(\tfrac{r}{n-\nu}\Bigr)\Bigr)B_{k+1+(p-k)(R-1)+Q,[p,k]}'(x)\,B_{k+1+(p-k)(r-1)+q,[p,k]}'(x)\,\mathrm{d}x\biggr|\\&=\biggl|\int_{\tau_{k+1+(p-k)(r-1)+q}}^{\tau_{k+1+(p-k)(r-1)+q+p+1}}\Bigl(g(x)-g\Bigl(\tfrac{r}{n-\nu}\Bigr)\Bigr)B_{k+1+(p-k)(R-1)+Q,[p,k]}'(x)\,B_{k+1+(p-k)(r-1)+q,[p,k]}'(x)\,\mathrm{d}x\biggr|\\&\le c_p^2\,n^2\int_{(r-1)/n}^{(r+p)/n}\Bigl|g(x)-g\Bigl(\tfrac{r}{n-\nu}\Bigr)\Bigr|\,\mathrm{d}x\ \le\ c_p^2\,(p+1)\,n\,\omega_g\Bigl(\tfrac{\nu+p}{n}\Bigr),\end{aligned}$$
where ω g ( · ) is the modulus of continuity of g and the last inequality is justified by the fact that the distance of the point r / ( n ν ) from the interval [ ( r 1 ) / n , ( r + p ) / n ] is not larger than ( ν + p ) / n . It follows that each entry of Z n , [ p , k ] ( g ) is bounded in modulus by C p ω g ( 1 / n ) , where C p is a constant depending only on p. Moreover, by (34), the matrix Z n , [ p , k ] ( g ) is banded with bandwidth bounded by a constant w p depending only on p. Thus, by (30), Z n , [ p , k ] ( g ) w p C p ω g ( 1 / n ) 0 as n , and so { Z n , [ p , k ] ( g ) } n is zero-distributed by Proposition 1. Since
$$n^{-1}\widetilde A_{n,[p,k]}(g)=n^{-1}D_{n-\nu}(gI_{p-k})\,\widetilde A_{n,[p,k]}(1)+Z_{n,[p,k]}(g),$$
we conclude that { n 1 A ˜ n , [ p , k ] ( g ) } n GLT g ( x ) κ [ p , k ] ( θ ) by GLT 2, GLT 3 and Step 2.
Step 4. Finally, we prove (45) in the general case where g L 1 ( [ 0 , 1 ] ) . By the density of C ( [ 0 , 1 ] ) in L 1 ( [ 0 , 1 ] ) , there exist functions g m C ( [ 0 , 1 ] ) such that g m g in L 1 ( [ 0 , 1 ] ) . By Step 3,
{ n 1 A ˜ n , [ p , k ] ( g m ) } n GLT g m ( x ) κ [ p , k ] ( θ ) .
Moreover,
g m ( x ) κ [ p , k ] ( θ ) g ( x ) κ [ p , k ] ( θ ) in   measure .
We show that
{ n 1 A ˜ n , [ p , k ] ( g m ) } n a . c . s . { n 1 A ˜ n , [ p , k ] ( g ) } n .
Once this is done, the thesis (45) follows immediately from GLT 4. To prove (48), we recall that
$$\|X\|_1\le\sum_{i,j=1}^{N}|x_{ij}|,\qquad X\in\mathbb{C}^{N\times N};$$
see, e.g., ([13] Section 2.4.3). By (38), we obtain
$$\bigl\|\widetilde A_{n,[p,k]}(g)-\widetilde A_{n,[p,k]}(g_m)\bigr\|_1\le\sum_{i,j=1}^{n(p-k)+k-1}\biggl|\int_0^1\bigl(g(x)-g_m(x)\bigr)B_{j+1,[p,k]}'(x)\,B_{i+1,[p,k]}'(x)\,\mathrm{d}x\biggr|\le\int_0^1\bigl|g(x)-g_m(x)\bigr|\sum_{i,j=1}^{n(p-k)+k-1}\bigl|B_{j+1,[p,k]}'(x)\bigr|\,\bigl|B_{i+1,[p,k]}'(x)\bigr|\,\mathrm{d}x\le c_p^2\,n^2\,\|g-g_m\|_{L^1}.$$
Thus, the a.c.s. convergence (48) follows from Proposition 2. ☐
Remark 4.
By following step by step the proof of Theorem 4, we can give an alternative (much simpler) proof of ([36] Theorem A.6) based on the theory of block GLT sequences.

5. The Theory of Multilevel Block GLT Sequences

As illustrated in Section 3 and Section 4, the theory of block GLT sequences allows the computation of the singular value and eigenvalue distribution of block structured matrices arising from the discretization of univariate DEs. In order to cope with multivariate DEs, i.e., PDEs, we need the multivariate version of the theory of block GLT sequences, also known as the theory of multilevel block GLT sequences. The present section is devoted to a careful presentation of this theory, which is obtained by combining the results of [34] with the necessary technicalities for tackling multidimensional problems [14].
Multi-Index Notation. The multi-index notation is an essential tool for dealing with sequences of matrices arising from the discretization of PDEs. A multi-index i Z d , also called a d-index, is simply a (row) vector in Z d ; its components are denoted by i 1 , , i d .
  • 0 , 1 , 2 , are the vectors of all zeros, all ones, all twos, etc. (their size will be clear from the context).
  • For any d-index $\mathbf{m}$, we set $N(\mathbf{m})=\prod_{j=1}^{d}m_j$ and we write $\mathbf{m}\to\infty$ to indicate that $\min(\mathbf{m})\to\infty$.
  • If h , k are d-indices, h k means that h r k r for all r = 1 , , d .
  • If $\mathbf{h},\mathbf{k}$ are d-indices such that $\mathbf{h}\le\mathbf{k}$, the multi-index range $\mathbf{h},\ldots,\mathbf{k}$ is the set $\{\mathbf{j}\in\mathbb{Z}^d:\ \mathbf{h}\le\mathbf{j}\le\mathbf{k}\}$. We assume for this set the standard lexicographic ordering:
    $$\Bigl[\ \ldots\ \bigl[\ [\,(j_1,\ldots,j_d)\,]_{j_d=h_d,\ldots,k_d}\ \bigr]_{j_{d-1}=h_{d-1},\ldots,k_{d-1}}\ \ldots\ \Bigr]_{j_1=h_1,\ldots,k_1}.\qquad(50)$$
    For instance, in the case $d=2$, the ordering is the following: $(h_1,h_2),\ (h_1,h_2+1),\ \ldots,\ (h_1,k_2),\ (h_1+1,h_2),\ (h_1+1,h_2+1),\ \ldots,\ (h_1+1,k_2),\ \ldots,\ (k_1,h_2),\ (k_1,h_2+1),\ \ldots,\ (k_1,k_2)$.
  • When a d-index j varies over a multi-index range h , , k (this is sometimes written as j = h , , k ), it is understood that j varies from h to k following the specific ordering (50). For instance, if m N d and if we write x = [ x i ] i = 1 m , then x is a vector of size N ( m ) whose components x i , i = 1 , , m , are ordered in accordance with (50): the first component is x 1 = x ( 1 , , 1 , 1 ) , the second component is x ( 1 , , 1 , 2 ) , and so on until the last component, which is x m = x ( m 1 , , m d ) . Similarly, if X = [ x i j ] i , j = 1 m , then X is a N ( m ) × N ( m ) matrix whose components are indexed by two d-indices i , j , both varying from 1 to m according to the lexicographic ordering (50).
  • Given h , k Z d with h k , the notation j = h k indicates the summation over all j in h , , k .
  • Operations involving d-indices that have no meaning in the vector space Z d must be interpreted in the componentwise sense. For instance, i j = ( i 1 j 1 , , i d j d ) , i / j = ( i 1 / j 1 , , i d / j d ) , etc.
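The lexicographic ordering (50) is exactly the order in which numpy.ndindex enumerates multi-indices, which is convenient when implementing the objects below; a tiny illustration of ours (Python/NumPy):

```python
import numpy as np

m = (2, 3)                                                   # a 2-index m = (m_1, m_2)
order = [tuple(j + 1 for j in idx) for idx in np.ndindex(*m)]
print(order)      # [(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3)]  -- the ordering (50)
print(int(np.prod(m)))                                       # N(m) = m_1 m_2 = 6
```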
Multilevel Block Matrix-Sequences. Given d , s 1 , a d-level s-block matrix-sequence (or simply a matrix-sequence if d and s can be inferred from the context or we do not need/want to specify them) is a sequence of matrices of the form { A n } n , where:
  • n varies in some infinite subset of N ;
  • n = n ( n ) is a d-index in N d which depends on n and satisfies n as n ;
  • An is a square matrix of size N ( n ) s .
Multilevel Block Toeplitz Matrices. Given a function f : [ π , π ] d C s × s in L 1 ( [ π , π ] d ) , its Fourier coefficients are denoted by
$$f_{\mathbf{k}}=\frac{1}{(2\pi)^d}\int_{[-\pi,\pi]^d}f(\boldsymbol\theta)\,e^{-\mathrm{i}\,\mathbf{k}\cdot\boldsymbol\theta}\,\mathrm{d}\boldsymbol\theta\in\mathbb{C}^{s\times s},\qquad\mathbf{k}\in\mathbb{Z}^d,$$
where k · θ = k 1 θ 1 + + k d θ d and the integrals are computed componentwise. For n N d , the nth multilevel block Toeplitz matrix generated by f is defined as
T n ( f ) = [ f i j ] i , j = 1 n C N ( n ) s × N ( n ) s .
It is not difficult to see that the map f T n ( f ) is linear. Moreover, it can be shown that
T n ( f * ) = ( T n ( f ) ) * ,
where the transpose conjugate function f * is defined by f * ( θ ) = ( f ( θ ) ) * ; in particular, all the matrices T n ( f ) are Hermitian whenever f is Hermitian a.e. We also recall that, if n N d and f 1 , f 2 , , f d : [ π , π ] C belong to L 1 ( [ π , π ] ) , then
$$T_{n_1}(f_1)\otimes T_{n_2}(f_2)\otimes\cdots\otimes T_{n_d}(f_d)=T_{\mathbf{n}}(f),$$
where $f:[-\pi,\pi]^d\to\mathbb{C}$ is defined by $f(\boldsymbol\theta)=f_1(\theta_1)\,f_2(\theta_2)\cdots f_d(\theta_d)$; see, e.g., ([14] Lemma 3.3).
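The tensor-product identity just recalled is easy to verify numerically for a separable symbol; the following snippet (Python/NumPy, an illustration of ours with $d=2$ and $s=1$) checks it for $f(\theta_1,\theta_2)=g(\theta_1)g(\theta_2)$ with $g(\theta)=2-2\cos\theta$.

```python
import numpy as np

def toeplitz_1d(n, coeff):
    """T_n(g) = [g_{i-j}]_{i,j=1}^n from scalar Fourier coefficients coeff(k)."""
    return np.array([[coeff(i - j) for j in range(n)] for i in range(n)])

g = lambda k: {0: 2.0, 1: -1.0, -1: -1.0}.get(k, 0.0)        # coefficients of 2 - 2cos(theta)
n1, n2 = 4, 5
T1, T2 = toeplitz_1d(n1, g), toeplitz_1d(n2, g)
# 2-level Toeplitz matrix of f(theta_1, theta_2) = g(theta_1) g(theta_2): f_k = g_{k_1} g_{k_2},
# with rows/columns ordered lexicographically as in (50).
T = np.array([[g(i1 - j1) * g(i2 - j2) for j1 in range(n1) for j2 in range(n2)]
              for i1 in range(n1) for i2 in range(n2)])
assert np.allclose(T, np.kron(T1, T2))
```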
Multilevel Block Diagonal Sampling Matrices. For n N d and a : [ 0 , 1 ] d C s × s , we define the multilevel block diagonal sampling matrix D n ( a ) as the block diagonal matrix
D n ( a ) = diag i = 1 , , n a i n C N ( n ) s × N ( n ) s .
Multilevel Block GLT Sequences. Let d , s 1 be fixed positive integers. A d-level s-block GLT sequence (or simply a GLT sequence if d and s can be inferred from the context or we do not need/want to specify them) is a special d-level s-block matrix-sequence { A n } n equipped with a measurable function κ : [ 0 , 1 ] d × [ π , π ] d C s × s , the so-called symbol. We use the notation { A n } n GLT κ to indicate that { A n } n is a GLT sequence with symbol κ . The symbol of a GLT sequence is unique in the sense that if { A n } n GLT κ and { A n } n GLT ς then κ = ς a.e. in [ 0 , 1 ] d × [ π , π ] d . The main properties of d-level s-block GLT sequences are listed below.
GLT 1.
If { A n } n GLT κ then { A n } n σ κ . If, moreover, each An is Hermitian then { A n } n λ κ .
GLT 2.
We have:
  • { T n ( f ) } n GLT κ ( x , θ ) = f ( θ ) if f : [ π , π ] d C s × s is in L 1 ( [ π , π ] d ) ;
  • { D n ( a ) } n GLT κ ( x , θ ) = a ( x ) if a : [ 0 , 1 ] d C s × s is Riemann-integrable;
  • { Z n } n GLT κ ( x , θ ) = O s if and only if { Z n } n σ 0 .
GLT 3.
If { A n } n GLT κ and { B n } n GLT ς then:
  • { A n * } n GLT κ * ;
  • { α A n + β B n } n GLT α κ + β ς for all α , β C ;
  • { A n B n } n GLT κ ς ;
  • $\{A_{\mathbf{n}}^{\dagger}\}_n\sim_{\mathrm{GLT}}\kappa^{-1}$ provided that κ is invertible a.e.
GLT 4.
{ A n } n GLT κ if and only if there exist GLT sequences { B n , m } n GLT κ m such that { B n , m } n a . c . s . { A n } n and κ m κ in measure.

6. Discretizations of Systems of PDEs: The General GLT Approach

In this section, we outline the main ideas of a multidimensional block GLT analysis for general discretizations of PDE systems. What we are going to present here is then a generalization of what is shown in Section 3. We begin by proving a series of auxiliary results. In the following, given n N d and s 1 , we denote by Π n , s the permutation matrix given by
$$\Pi_{\mathbf{n},s}=\begin{bmatrix}I_s\otimes\mathbf{e}_{\mathbf{1}}^T\\ I_s\otimes\mathbf{e}_{\mathbf{2}}^T\\ \vdots\\ I_s\otimes\mathbf{e}_{\mathbf{n}}^T\end{bmatrix}=\sum_{\mathbf{k}=\mathbf{1}}^{\mathbf{n}}\mathbf{e}_{\mathbf{k}}\otimes I_s\otimes\mathbf{e}_{\mathbf{k}}^T,\qquad(53)$$
where e i , i = 1 , , n , are the vectors of the canonical basis of R N ( n ) , which, for convenience, are indexed by a d-index i = 1 , , n instead of a linear index i = 1 , , N ( n ) . Note that Π n , 2 coincides with the matrix Π n in (21).
Lemma 3.
Let n N d , let f i j : [ π , π ] d C be in L 1 ( [ π , π ] d ) for i , j = 1 , , s , and set f = [ f i j ] i , j = 1 s . The block matrix T n = [ T n ( f i j ) ] i , j = 1 s is similar via the permutation (53) to the multilevel block Toeplitz matrix T n ( f ) , that is, Π n , s T n Π n , s T = T n ( f ) .
Proof. 
Let E i j be the s × s matrix having 1 in position ( i , j ) and 0 elsewhere. Since T n = i , j = 1 s E i j T n ( f i j ) and T n ( f ) = i , j = 1 s T n ( f i j E i j ) by the linearity of the map T n ( · ) , it is enough to show that
$$\Pi_{\mathbf{n},s}\bigl(E\otimes T_{\mathbf{n}}(g)\bigr)\Pi_{\mathbf{n},s}^T=T_{\mathbf{n}}(g\,E),\qquad\forall\,g\in L^1([-\pi,\pi]^d),\quad\forall\,E\in\mathbb{C}^{s\times s}.$$
By (6) and (7),
$$\begin{aligned}\Pi_{\mathbf{n},s}\bigl(E\otimes T_{\mathbf{n}}(g)\bigr)\Pi_{\mathbf{n},s}^T&=\sum_{\mathbf{k}=\mathbf{1}}^{\mathbf{n}}\bigl(\mathbf{e}_{\mathbf{k}}\otimes I_s\otimes\mathbf{e}_{\mathbf{k}}^T\bigr)\bigl(E\otimes T_{\mathbf{n}}(g)\bigr)\sum_{\boldsymbol\ell=\mathbf{1}}^{\mathbf{n}}\bigl(\mathbf{e}_{\boldsymbol\ell}^T\otimes I_s\otimes\mathbf{e}_{\boldsymbol\ell}\bigr)=\sum_{\mathbf{k},\boldsymbol\ell=\mathbf{1}}^{\mathbf{n}}\bigl(\mathbf{e}_{\mathbf{k}}\otimes I_s\otimes\mathbf{e}_{\mathbf{k}}^T\bigr)\bigl(E\otimes T_{\mathbf{n}}(g)\bigr)\bigl(\mathbf{e}_{\boldsymbol\ell}^T\otimes I_s\otimes\mathbf{e}_{\boldsymbol\ell}\bigr)\\&=\sum_{\mathbf{k},\boldsymbol\ell=\mathbf{1}}^{\mathbf{n}}\mathbf{e}_{\mathbf{k}}\mathbf{e}_{\boldsymbol\ell}^T\otimes E\otimes\mathbf{e}_{\mathbf{k}}^T\,T_{\mathbf{n}}(g)\,\mathbf{e}_{\boldsymbol\ell}=\sum_{\mathbf{k},\boldsymbol\ell=\mathbf{1}}^{\mathbf{n}}\mathbf{e}_{\mathbf{k}}\mathbf{e}_{\boldsymbol\ell}^T\otimes\bigl(T_{\mathbf{n}}(g)\bigr)_{\mathbf{k}\boldsymbol\ell}\,E=T_{\mathbf{n}}(g\,E),\end{aligned}$$
as required. ☐
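Lemma 3 can also be checked numerically. The sketch below (Python/NumPy, an illustration of ours in the one-level case $d=1$ with a hypothetical $2\times2$ symbol) builds $\Pi_{n,s}$, the block matrix $[T_n(f_{ij})]_{i,j=1}^s$ and the block Toeplitz matrix $T_n(f)$, and verifies the similarity.

```python
import numpy as np

def toeplitz(n, coeff):
    return np.array([[coeff(i - j) for j in range(n)] for i in range(n)], dtype=complex)

def perm(n, s):
    """Pi_{n,s}: the rows of I_{ns} reordered as in (53) (here d = 1, so N(n) = n)."""
    rows = [k * n + i for i in range(n) for k in range(s)]
    return np.eye(n * s)[rows, :]

n, s = 5, 2
# Hypothetical scalar Fourier coefficients of the entries f_11, f_12, f_21, f_22.
coeffs = {(1, 1): {0: 2.0, 1: -1.0, -1: -1.0},
          (1, 2): {1: -0.5, -1: 0.5},
          (2, 1): {1: 0.5, -1: -0.5},
          (2, 2): {0: 1.0}}
c = lambda i, j: (lambda k: coeffs[(i, j)].get(k, 0.0))
Tn_blocks = np.block([[toeplitz(n, c(i, j)) for j in (1, 2)] for i in (1, 2)])   # [T_n(f_ij)]_{i,j}
F = lambda k: np.array([[c(i, j)(k) for j in (1, 2)] for i in (1, 2)])            # k-th coefficient of f
TnF = np.block([[F(i - j) for j in range(n)] for i in range(n)])                  # T_n(f)
P = perm(n, s)
assert np.allclose(P @ Tn_blocks @ P.T, TnF)
```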
Lemma 4.
Let n N d , let a i j : [ 0 , 1 ] d C for i , j = 1 , , s , and set a = [ a i j ] i , j = 1 s . The block matrix D n = [ D n ( a i j ) ] i , j = 1 s is similar via the permutation (53) to the multilevel block diagonal sampling matrix D n ( a ) , that is, Π n , s D n Π n , s T = D n ( a ) .
Proof. 
With obvious adaptations, it is the same as the proof of Lemma 3. ☐
We recall that a d-variate trigonometric polynomial is a finite linear combination of the d-variate Fourier frequencies e i k · θ , k Z d .
Theorem 5.
For i , j = 1 , , s , let { A n , i j } n be a d-level 1-block GLT sequence with symbol κ i j : [ 0 , 1 ] d × [ π , π ] d C . Set A n = [ A n , i j ] i , j = 1 s and κ = [ κ i j ] i , j = 1 s . Then, the matrix-sequence { Π n , s A n Π n , s T } n is a d-level s-block GLT sequence with symbol κ.
Proof. 
The proof consists of the following two steps.
Step 1. We first prove the theorem under the additional assumption that A n , i j is of the form
A n , i j = = 1 L i j D n ( a , i j ) T n ( f , i j ) ,
where L i j N , a , i j : [ 0 , 1 ] d C is Riemann-integrable, and f , i j : [ π , π ] d C belongs to L 1 ( [ π , π ] d ) . Note that the symbol of { A n , i j } n is
κ i j ( x , θ ) = = 1 L i j a , i j ( x ) f , i j ( θ ) .
By setting L = max i , j = 1 , , s L i j and by adding zero matrices of the form D n ( 0 ) T n ( 0 ) in the summation (54) whenever L i j < L , we can assume, without loss of generality, that
A n , i j = = 1 L D n ( a , i j ) T n ( f , i j ) , κ i j ( x , θ ) = = 1 L a , i j ( x ) f , i j ( θ ) ,
with L independent of i , j . Let E i j be the s × s matrix having 1 in position ( i , j ) and 0 elsewhere. Then,
Π n , s A n Π n , s T = = 1 L Π n , s D n ( a , i j ) T n ( f , i j ) i , j = 1 s Π n , s T = = 1 L Π n , s i , j = 1 s ( E i j D n ( a , i j ) ) ( E i j T n ( f , i j ) ) Π n , s T = = 1 L i , j = 1 s Π n , s ( E i j D n ( a , i j ) ) Π n , s T Π n , s ( E i j T n ( f , i j ) ) Π n , s T .
By Lemmas 3 and 4,
Π n , s ( E i j D n ( a , i j ) ) Π n , s T = D n ( a , i j E i j ) , Π n , s ( E i j T n ( f , i j ) ) Π n , s T = T n ( f , i j E i j ) .
It follows that
Π n , s A n Π n , s T = = 1 L i , j = 1 s D n ( a , i j E i j ) T n ( f , i j E i j ) .
Thus, by GLT 2 and GLT 3, { Π n , s A n Π n , s T } n is a d-level s-block GLT sequence with symbol
κ ( x , θ ) = = 1 L i , j = 1 s a , i j ( x ) f , i j ( θ ) E i j = [ κ i j ( x , θ ) ] i , j = 1 s .
Step 2. We now prove the theorem in its full generality. Since { A n , i j } n GLT κ i j , by ([14] Theorem 5.6) there exist functions a , i j ( m ) , f , i j ( m ) , = 1 , , L i j ( m ) , such that
  • a , i j ( m ) C ( [ 0 , 1 ] d ) and f , i j ( m ) is a d-variate trigonometric polynomial,
  • κ i j ( m ) ( x , θ ) = = 1 L i j ( m ) a , i j ( m ) ( x ) f , i j ( m ) ( θ ) κ i j ( x , θ ) a.e.;
  • A n , i j ( m ) = = 1 L i j ( m ) D n ( a , i j ( m ) ) T n ( f , i j ( m ) ) n a . c . s . { A n , i j } n .
Set A n ( m ) = [ A n , i j ( m ) ] i , j = 1 s and κ ( m ) ( x , θ ) = [ κ i j ( m ) ( x , θ ) ] i , j = 1 s . We have:
  • { Π n , s A n ( m ) Π n , s T } n GLT κ ( m ) by Step 1;
  • κ ( m ) κ a.e. (and hence also in measure);
  • { Π n , s A n ( m ) Π n , s T } n a . c . s . { Π n , s A n Π n , s T } n because { A n ( m ) } n a . c . s . { A n } n by Lemma 1.
We conclude that { Π n , s A n Π n , s T } n GLT κ by GLT 4. ☐
Now, suppose we have a system of linear PDEs of the form
$$\begin{cases}\displaystyle\sum_{j=1}^{s}L_{1j}\,u_j(\mathbf{x})=f_1(\mathbf{x}),\\ \displaystyle\sum_{j=1}^{s}L_{2j}\,u_j(\mathbf{x})=f_2(\mathbf{x}),\\ \qquad\vdots\\ \displaystyle\sum_{j=1}^{s}L_{sj}\,u_j(\mathbf{x})=f_s(\mathbf{x}),\end{cases}\qquad(55)$$
where x ( 0 , 1 ) d . The matrices An resulting from any standard discretization of (55) are parameterized by a d-index n = ( n 1 , , n d ) , where n i is related to the discretization step h i in the ith direction, and n i if and only if h i 0 (usually, h i 1 / n i ). By choosing each n i as a function of a unique discretization parameter n N , as it normally happens in practice where the most natural choice is n i = n for all i = 1 , , d , we see that n = n ( n ) and, consequently, { A n } n is a (d-level) matrix-sequence. Moreover, it turns out that, after a suitable normalization that we ignore in this discussion—the normalization we are talking about is the analog of the normalization that we have seen in Section 3, which allowed us to pass from the matrix A n in (13) to the matrix B n in (15)—, An has the following block structure:
A n = [ A n , i j ] i , j = 1 s ,
where A n , i j is the (normalized) matrix arising from the discretization of the differential operator L i j . In the simplest case where the operators L i j have constant coefficients and we use equispaced grids in each direction, the matrix A n , i j takes the form
A n , i j = T n ( f i j ) + Z n , i j ,
where f i j is a d-variate trigonometric polynomial, while the perturbation Z n , i j is usually a low-rank correction due to boundary conditions and, in any case, we have { Z n , i j } n σ 0 . Hence,
{ A n , i j } n GLT f i j
by GLT 2 and GLT 3, and it follows from Theorem 5 that
$\{\Pi_{n,s} A_n \Pi_{n,s}^T\}_n \sim_{\mathrm{GLT}} [f_{ij}]_{i,j=1}^{s}.$
In the case where the operators $L_{ij}$ have variable coefficients, the matrix $A_{n,ij}$ usually takes the form
$A_{n,ij} = \sum_{\ell=1}^{L_{ij}} D_n(a_{\ell,ij})\, T_n(f_{\ell,ij}) + Z_{n,ij},$
where $L_{ij} \in \mathbb{N}$, $f_{\ell,ij}$ is a $d$-variate trigonometric polynomial, $\{Z_{n,ij}\}_n \sim_{\sigma} 0$, and the functions $a_{\ell,ij} : [0,1]^d \to \mathbb{R}$, $\ell = 1, \dots, L_{ij}$, are related to the coefficients of $L_{ij}$ (for example, in Section 3, while proving (22), we have seen that $K_n(a_{11})$, which plays there the same role as the matrix $A_{n,11}$ here, is equal to $D_n(a_{11})\, T_n(2 - 2\cos\theta) + Z_n$ for some zero-distributed sequence $\{Z_n\}_n$). Hence,
$\{A_{n,ij}\}_n \sim_{\mathrm{GLT}} \kappa_{ij}(x,\theta) = \sum_{\ell=1}^{L_{ij}} a_{\ell,ij}(x)\, f_{\ell,ij}(\theta)$
by GLT 2 and GLT 3, and it follows from Theorem 5 that
$\{\Pi_{n,s} A_n \Pi_{n,s}^T\}_n \sim_{\mathrm{GLT}} [\kappa_{ij}]_{i,j=1}^{s}.$
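As a quick illustration of the constant-coefficient case, the following minimal Python sketch (ours, not taken from the paper; it assumes NumPy, and the symbols $f_{11} = f_{22} = 2 - 2\cos\theta$, $f_{12} = f_{21} = 1 - \cos\theta$ are an arbitrary choice with $d = 1$ and $s = 2$) compares the eigenvalues of the block matrix $A_n = [T_n(f_{ij})]_{i,j=1}^{2}$ with a uniform sampling of the eigenvalue functions of the matrix-valued symbol $[f_{ij}]_{i,j=1}^{2}$. Since $\Pi_{n,2} A_n \Pi_{n,2}^T$ is similar to $A_n$, the permutation does not affect the eigenvalues and can be omitted from the comparison.

```python
# Illustrative sanity check (not from the paper): eigenvalues of the block
# matrix A_n = [T_n(f_ij)]_{i,j=1,2} with f11 = f22 = 2 - 2cos(theta) and
# f12 = f21 = 1 - cos(theta), versus a uniform sampling of the eigenvalue
# functions of kappa = [f_ij], namely 1 - cos(theta) and 3(1 - cos(theta)).
import numpy as np

def sym_toeplitz_tridiag(n, main, off):
    # T_n(main + 2*off*cos(theta)): symmetric tridiagonal Toeplitz matrix
    return main * np.eye(n) + off * (np.eye(n, k=1) + np.eye(n, k=-1))

n = 200
A11 = sym_toeplitz_tridiag(n, 2.0, -1.0)   # T_n(2 - 2cos(theta))
A12 = sym_toeplitz_tridiag(n, 1.0, -0.5)   # T_n(1 - cos(theta))
A = np.block([[A11, A12], [A12, A11]])

theta = np.pi * np.arange(1, n + 1) / (n + 1)        # uniform grid in (0, pi)
samples = np.sort(np.concatenate([1 - np.cos(theta), 3 * (1 - np.cos(theta))]))
eigs = np.sort(np.linalg.eigvalsh(A))

# Here the agreement is essentially exact, because all blocks are diagonalized
# by the same sine transform; for variable coefficients one only gets the
# asymptotic spectral distribution predicted by the theory.
print(np.max(np.abs(eigs - samples)))
```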

7. B-Spline IgA Discretization of a Variational Problem for the Curl–Curl Operator

For any function $u(x_1,x_2) = [u_1(x_1,x_2),\, u_2(x_1,x_2)]^T$, defined over some open set $\Omega \subseteq \mathbb{R}^2$ and taking values in $\mathbb{R}^2$, the curl operator is formally defined as follows:
$(\nabla \times u)(x_1,x_2) = \frac{\partial u_2}{\partial x_1}(x_1,x_2) - \frac{\partial u_1}{\partial x_2}(x_1,x_2), \qquad (x_1,x_2) \in \Omega.$
Clearly, this definition has meaning when the components $u_1, u_2$ belong to $H^1(\Omega)$, so that their partial derivatives exist in the Sobolev sense. Now, let $\Omega = (0,1)^2$, set
$(L^2(\Omega))^2 = \{ u : \Omega \to \mathbb{R}^2 \ :\ u_1, u_2 \in L^2(\Omega) \}, \qquad H(\operatorname{curl}, \Omega) = \{ u \in (L^2(\Omega))^2 \ :\ \nabla \times u \ \text{exists in the Sobolev sense and}\ \nabla \times u \in L^2(\Omega) \},$
and consider the following variational problem: find $u \in H(\operatorname{curl}, \Omega)$ such that
$(\nabla \times u,\ \nabla \times v) = (f, v), \qquad \forall\, v \in H(\operatorname{curl}, \Omega),$ (56)
where $f(x_1,x_2) = [f_1(x_1,x_2),\, f_2(x_1,x_2)]^T$ is a vector field in $(L^2(\Omega))^2$ and
$(\nabla \times u,\ \nabla \times v) = \int_\Omega (\nabla \times u)(x_1,x_2)\,(\nabla \times v)(x_1,x_2)\, dx_1\, dx_2, \qquad (f, v) = \int_\Omega \bigl[ f_1(x_1,x_2)\, v_1(x_1,x_2) + f_2(x_1,x_2)\, v_2(x_1,x_2) \bigr]\, dx_1\, dx_2.$
Variational problems of the form of (56) arise in important applications, such as time-harmonic Maxwell's equations and magnetostatic problems. In this section, we consider a so-called compatible B-spline IgA discretization of (56); see [43] for details. We show that the corresponding sequence of discretization matrices enjoys a spectral distribution described by a $2 \times 2$ matrix-valued function whose determinant is zero everywhere. The results of this section have already been obtained in [38], but the derivation presented here is entirely based on the theory of multilevel block GLT sequences and turns out to be simpler and more lucid than that in [38]. For simplicity, throughout this section, the B-splines $B_{i,[p,p-1]}$, $i = 1, \dots, n+p$, and the associated reference B-spline $\beta_{1,[p,p-1]}$ are denoted by $B_{i,[p]}$, $i = 1, \dots, n+p$, and $\beta_{[p]}$, respectively. The function $\beta_{[p]}$ is the so-called cardinal B-spline of degree $p$ over the knot sequence $\{0, 1, \dots, p+1\}$. In view of what follows, we recall from [42] and ([23] Lemma 4) that the cardinal B-spline $\beta_{[q]}$ is defined for all degrees $q \ge 0$, belongs to $C^{q-1}(\mathbb{R})$, and satisfies the following properties:
$\operatorname{supp}(\beta_{[q]}) = [0,\, q+1]$ (57)
for $q \ge 1$,
$\beta_{[q]}'(t) = \beta_{[q-1]}(t) - \beta_{[q-1]}(t-1)$ (58)
for $t \in \mathbb{R}$ and $q \ge 1$, and
$\int_{\mathbb{R}} \beta_{[q_1]}^{(r_1)}(\tau)\, \beta_{[q_2]}^{(r_2)}(\tau + t)\, d\tau = (-1)^{r_1}\, \beta_{[q_1+q_2+1]}^{(r_1+r_2)}(q_1 + 1 + t) = (-1)^{r_2}\, \beta_{[q_1+q_2+1]}^{(r_1+r_2)}(q_2 + 1 - t)$ (59)
for $t \in \mathbb{R}$ and $q_1, q_2, r_1, r_2 \ge 0$. Moreover, property (40) in the case $k = p-1$ simplifies to
$B_{i,[p]}(x) = \beta_{[p]}(nx - i + p + 1), \qquad i = p+1, \dots, n.$ (60)
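Properties (58) and (59) are easy to verify numerically. The following Python sketch (ours, assuming SciPy; the degrees, derivative orders and evaluation points are arbitrary choices) builds the cardinal B-splines over the integer knots $0, 1, \dots, q+1$ with scipy.interpolate.BSpline.basis_element and checks both identities.

```python
# Numerical check of the cardinal B-spline properties (58) and (59).
# Illustrative sketch (assumes SciPy); beta_[q] is built over the integer
# knots 0, 1, ..., q+1, and the degrees, derivative orders and evaluation
# points below are arbitrary choices.
import numpy as np
from scipy.interpolate import BSpline
from scipy.integrate import trapezoid

def beta(q, r=0):
    """r-th derivative of the cardinal B-spline of degree q (set to 0 outside [0, q+1])."""
    b = BSpline.basis_element(np.arange(q + 2), extrapolate=False)
    if r:
        b = b.derivative(r)
    return lambda x: np.nan_to_num(b(np.asarray(x, dtype=float)))

# property (58): beta_[q]'(t) = beta_[q-1](t) - beta_[q-1](t - 1)
q = 3
t = np.linspace(0.05, q + 0.95, 400)
print(np.max(np.abs(beta(q, 1)(t) - (beta(q - 1)(t) - beta(q - 1)(t - 1)))))  # ~1e-16

# property (59) with q1 = 2, q2 = 3, r1 = 1, r2 = 0, t = 0.7
q1, q2, r1, r2, t0 = 2, 3, 1, 0, 0.7
tau = np.linspace(0.0, q1 + 1.0, 20001)          # supp(beta_[q1]) = [0, q1+1]
lhs = trapezoid(beta(q1, r1)(tau) * beta(q2, r2)(tau + t0), tau)
rhs1 = (-1) ** r1 * beta(q1 + q2 + 1, r1 + r2)(q1 + 1 + t0)
rhs2 = (-1) ** r2 * beta(q1 + q2 + 1, r1 + r2)(q2 + 1 - t0)
print(lhs, rhs1, rhs2)   # the three values agree up to the quadrature error
```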

7.1. Compatible B-Spline IgA Discretization

Let $n = (n_1, n_2) \in \mathbb{N}^2$, let $p \ge 2$, and define the space
$V_{n,[p]}(\operatorname{curl}, \Omega) = \operatorname{span}\biggl\{ \begin{bmatrix} B_{i_1,[p-1]}(x_1)\, B_{i_2,[p]}(x_2) \\ 0 \end{bmatrix},\ \begin{bmatrix} 0 \\ B_{j_1,[p]}(x_1)\, B_{j_2,[p-1]}(x_2) \end{bmatrix} \ :\ i_1 = 1, \dots, n_1+p-1,\ i_2 = 1, \dots, n_2+p,\ j_1 = 1, \dots, n_1+p,\ j_2 = 1, \dots, n_2+p-1 \biggr\}.$ (61)
Following a compatible B-spline approach [43], we look for an approximation of the solution in the space $V_{n,[p]}(\operatorname{curl}, \Omega)$ by solving the following discrete problem: find $u_V \in V_{n,[p]}(\operatorname{curl}, \Omega)$ such that
$(\nabla \times u_V,\ \nabla \times v) = (f, v), \qquad \forall\, v \in V_{n,[p]}(\operatorname{curl}, \Omega).$
After choosing a suitable ordering on the basis functions of $V_{n,[p]}(\operatorname{curl}, \Omega)$ displayed in (61), by linearity the computation of $u_V$ reduces to solving a linear system whose coefficient matrix is given by
$A_{n,[p]} = \begin{bmatrix} A_{n,[p],11} & A_{n,[p],12} \\ A_{n,[p],21} & A_{n,[p],22} \end{bmatrix} = \begin{bmatrix} M_{n_1,[p-1]} \otimes K_{n_2,[p]} & -H_{n_1,[p]} \otimes (H_{n_2,[p]})^T \\ -(H_{n_1,[p]})^T \otimes H_{n_2,[p]} & K_{n_1,[p]} \otimes M_{n_2,[p-1]} \end{bmatrix},$
where
$(M_{n,[p-1]})_{ij} = \int_0^1 B_{j,[p-1]}(x)\, B_{i,[p-1]}(x)\, dx, \qquad i,j = 1, \dots, n+p-1,$
$(K_{n,[p]})_{ij} = \int_0^1 B_{j,[p]}'(x)\, B_{i,[p]}'(x)\, dx, \qquad i,j = 1, \dots, n+p,$
$(H_{n,[p]})_{ij} = \int_0^1 B_{j,[p]}'(x)\, B_{i,[p-1]}(x)\, dx, \qquad i = 1, \dots, n+p-1, \quad j = 1, \dots, n+p.$
Note that $M_{n,[p-1]}$ is a square matrix of size $n+p-1$, $K_{n,[p]}$ is a square matrix of size $n+p$, while $H_{n,[p]}$ is a rectangular matrix of size $(n+p-1) \times (n+p)$.
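The tensor structure of $A_{n,[p]}$, and in particular the minus sign in its anti-diagonal blocks, can be verified by a direct computation; the following short derivation is our own verification and assumes the lexicographic ordering of the degrees of freedom underlying the Kronecker products above.

```latex
% Our verification of one entry of the anti-diagonal block of A_{n,[p]}.
% Take a field u from the first group and a field v from the second group
% of basis fields in (61):
%   u = [ B_{i_1,[p-1]}(x_1) B_{i_2,[p]}(x_2) ,  0 ]^T ,
%   v = [ 0 ,  B_{j_1,[p]}(x_1) B_{j_2,[p-1]}(x_2) ]^T .
% Then curl u = -\partial u_1/\partial x_2 and curl v = \partial v_2/\partial x_1, so that
\begin{aligned}
(\nabla\times u,\ \nabla\times v)
  &= -\int_0^1\!\!\int_0^1 B_{i_1,[p-1]}(x_1)\,B_{i_2,[p]}'(x_2)\,
       B_{j_1,[p]}'(x_1)\,B_{j_2,[p-1]}(x_2)\,dx_1\,dx_2 \\
  &= -\int_0^1 B_{j_1,[p]}'(x_1)\,B_{i_1,[p-1]}(x_1)\,dx_1
      \int_0^1 B_{i_2,[p]}'(x_2)\,B_{j_2,[p-1]}(x_2)\,dx_2 \\
  &= -\,(H_{n_1,[p]})_{i_1 j_1}\,\bigl((H_{n_2,[p]})^{T}\bigr)_{i_2 j_2},
\end{aligned}
% which is precisely an entry of -H_{n_1,[p]} \otimes (H_{n_2,[p]})^T.
% The analogous computation with u and v both taken from the first group
% produces the entries of M_{n_1,[p-1]} \otimes K_{n_2,[p]}.
```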

7.2. GLT Analysis of the B-Spline IgA Discretization Matrices

In the main result of this section (Theorem 6), assuming that $n = n\nu$ for a fixed vector $\nu$, we show that the spectral distribution of the sequence $\{A_{n,[p]}\}_n$ is described by a $2 \times 2$ matrix-valued function whose determinant is zero everywhere (Remark 5). To prove Theorem 6, some preliminary work is necessary. We first note that, in view of the application of Theorem 5, the matrix $A_{n,[p]}$ has an unpleasant feature: the anti-diagonal blocks $A_{n,[p],12}$ and $A_{n,[p],21}$ are not square, and the square diagonal blocks $A_{n,[p],11}$ and $A_{n,[p],22}$ do not have the same size whenever $n_1 \ne n_2$. Let us then introduce the nicer matrix
$\widetilde{A}_{n,[p]} = \begin{bmatrix} \widetilde{A}_{n,[p],11} & \widetilde{A}_{n,[p],12} \\ \widetilde{A}_{n,[p],21} & \widetilde{A}_{n,[p],22} \end{bmatrix} = \begin{bmatrix} \widetilde{M}_{n_1,[p-1]} \otimes K_{n_2,[p]} & -\widetilde{H}_{n_1,[p]} \otimes (\widetilde{H}_{n_2,[p]})^T \\ -(\widetilde{H}_{n_1,[p]})^T \otimes \widetilde{H}_{n_2,[p]} & K_{n_1,[p]} \otimes \widetilde{M}_{n_2,[p-1]} \end{bmatrix},$
where $\widetilde{M}_{n,[p-1]}$ and $\widetilde{H}_{n,[p]}$ are the square matrices of size $n+p$ obtained from $M_{n,[p-1]}$ and $H_{n,[p]}$ by zero-padding, namely
$\widetilde{M}_{n,[p-1]} = \begin{bmatrix} M_{n,[p-1]} & 0 \\ 0 & 0 \end{bmatrix}, \qquad \widetilde{H}_{n,[p]} = \begin{bmatrix} H_{n,[p]} \\ 0 \end{bmatrix}.$
Each block $\widetilde{A}_{n,[p],ij}$ of the matrix $\widetilde{A}_{n,[p]}$ is now a square block of size $(n_1+p)(n_2+p)$. Moreover,
$M_{n,[p-1]} = P_{n,[p]}\, \widetilde{M}_{n,[p-1]}\, (P_{n,[p]})^T, \qquad H_{n,[p]} = P_{n,[p]}\, \widetilde{H}_{n,[p]},$
where the matrix
$P_{n,[p]} = \begin{bmatrix} I_{n+p-1} & 0 \end{bmatrix} \in \mathbb{R}^{(n+p-1) \times (n+p)}$
satisfies $P_{n,[p]} (P_{n,[p]})^T = I_{n+p-1}$. By (6) and (7),
$A_{n,[p],11} = (P_{n_1,[p]} \otimes I_{n_2+p})\, \widetilde{A}_{n,[p],11}\, (P_{n_1,[p]} \otimes I_{n_2+p})^T, \qquad A_{n,[p],12} = (P_{n_1,[p]} \otimes I_{n_2+p})\, \widetilde{A}_{n,[p],12}\, (I_{n_1+p} \otimes P_{n_2,[p]})^T,$
$A_{n,[p],21} = (I_{n_1+p} \otimes P_{n_2,[p]})\, \widetilde{A}_{n,[p],21}\, (P_{n_1,[p]} \otimes I_{n_2+p})^T, \qquad A_{n,[p],22} = (I_{n_1+p} \otimes P_{n_2,[p]})\, \widetilde{A}_{n,[p],22}\, (I_{n_1+p} \otimes P_{n_2,[p]})^T,$
and so
$A_{n,[p]} = P_{n,[p]}\, \widetilde{A}_{n,[p]}\, (P_{n,[p]})^T, \qquad P_{n,[p]} = \begin{bmatrix} P_{n_1,[p]} \otimes I_{n_2+p} & 0 \\ 0 & I_{n_1+p} \otimes P_{n_2,[p]} \end{bmatrix}.$
In view of the application of Theorem 1, we note that
$P_{n,[p]} \in \mathbb{R}^{[(n_1+p-1)(n_2+p) + (n_1+p)(n_2+p-1)] \times 2(n_1+p)(n_2+p)},$
$P_{n,[p]}\, (P_{n,[p]})^T = I_{(n_1+p-1)(n_2+p) + (n_1+p)(n_2+p-1)},$
$\lim_{n \to \infty} \frac{(n_1+p-1)(n_2+p) + (n_1+p)(n_2+p-1)}{2(n_1+p)(n_2+p)} = \lim_{n \to \infty} \Bigl[ \frac{n_1+p-1}{2(n_1+p)} + \frac{n_2+p-1}{2(n_2+p)} \Bigr] = 1.$
Lemma 5.
Let $p \ge 2$ and $n \ge 1$. Then,
$n^{-1} K_{n,[p]} = T_{n+p}(f_p) + Q_{n,[p]}, \qquad \operatorname{rank}(Q_{n,[p]}) \le 4p,$ (66)
$\widetilde{H}_{n,[p]} = T_{n+p}(g_p) + R_{n,[p]}, \qquad \operatorname{rank}(R_{n,[p]}) \le 4p,$ (67)
$n\, \widetilde{M}_{n,[p-1]} = T_{n+p}(h_p) + S_{n,[p]}, \qquad \operatorname{rank}(S_{n,[p]}) \le 4p,$ (68)
where
$f_p(\theta) = -\sum_{k \in \mathbb{Z}} \beta_{[2p+1]}''(p+1-k)\, e^{\mathrm{i}k\theta},$ (69)
$g_p(\theta) = -\sum_{k \in \mathbb{Z}} \beta_{[2p]}'(p-k)\, e^{\mathrm{i}k\theta},$ (70)
$h_p(\theta) = \sum_{k \in \mathbb{Z}} \beta_{[2p-1]}(p-k)\, e^{\mathrm{i}k\theta},$ (71)
and we note that the three series are actually finite sums because of (57).
Proof. 
For every $i,j = p+1, \dots, n$, since $[-i+p+1,\ n-i+p+1] \supseteq [0,\, p+1] = \operatorname{supp}(\beta_{[p]})$ and $[-i+p,\ n-i+p] \supseteq [0,\, p] = \operatorname{supp}(\beta_{[p-1]})$, by (59) and (60) we obtain
$(K_{n,[p]})_{ij} = \int_0^1 B_{j,[p]}'(x)\, B_{i,[p]}'(x)\, dx = n^2 \int_0^1 \beta_{[p]}'(nx-j+p+1)\, \beta_{[p]}'(nx-i+p+1)\, dx = n \int_{-i+p+1}^{\,n-i+p+1} \beta_{[p]}'(\tau+i-j)\, \beta_{[p]}'(\tau)\, d\tau = n \int_{\mathbb{R}} \beta_{[p]}'(\tau)\, \beta_{[p]}'(\tau+i-j)\, d\tau = -n\, \beta_{[2p+1]}''(p+1+i-j) = -n\, \beta_{[2p+1]}''(p+1-i+j),$
$(\widetilde{H}_{n,[p]})_{ij} = \int_0^1 B_{j,[p]}'(x)\, B_{i,[p-1]}(x)\, dx = n \int_0^1 \beta_{[p]}'(nx-j+p+1)\, \beta_{[p-1]}(nx-i+p)\, dx = \int_{-i+p}^{\,n-i+p} \beta_{[p]}'(\tau+i-j+1)\, \beta_{[p-1]}(\tau)\, d\tau = \int_{\mathbb{R}} \beta_{[p-1]}(\tau)\, \beta_{[p]}'(\tau+i-j+1)\, d\tau = \beta_{[2p]}'(p+i-j+1) = -\beta_{[2p]}'(p-i+j),$
$(\widetilde{M}_{n,[p-1]})_{ij} = \int_0^1 B_{j,[p-1]}(x)\, B_{i,[p-1]}(x)\, dx = \int_0^1 \beta_{[p-1]}(nx-j+p)\, \beta_{[p-1]}(nx-i+p)\, dx = n^{-1} \int_{-i+p}^{\,n-i+p} \beta_{[p-1]}(\tau+i-j)\, \beta_{[p-1]}(\tau)\, d\tau = n^{-1} \int_{\mathbb{R}} \beta_{[p-1]}(\tau)\, \beta_{[p-1]}(\tau+i-j)\, d\tau = n^{-1}\, \beta_{[2p-1]}(p+i-j) = n^{-1}\, \beta_{[2p-1]}(p-i+j).$
Thus,
$\bigl[(n^{-1} K_{n,[p]})_{ij}\bigr]_{i,j=p+1}^{n} = \bigl[-\beta_{[2p+1]}''(p+1-i+j)\bigr]_{i,j=p+1}^{n} = T_{n-p}(f_p),$ (72)
$\bigl[(\widetilde{H}_{n,[p]})_{ij}\bigr]_{i,j=p+1}^{n} = \bigl[-\beta_{[2p]}'(p-i+j)\bigr]_{i,j=p+1}^{n} = T_{n-p}(g_p),$ (73)
$\bigl[(n\, \widetilde{M}_{n,[p-1]})_{ij}\bigr]_{i,j=p+1}^{n} = \bigl[\beta_{[2p-1]}(p-i+j)\bigr]_{i,j=p+1}^{n} = T_{n-p}(h_p).$ (74)
It follows from (72) that the principal submatrix of $n^{-1} K_{n,[p]} - T_{n+p}(f_p)$ corresponding to the row and column indices $i,j = p+1, \dots, n$ is the zero matrix, which implies (66). Similarly, (73) and (74) imply (67) and (68), respectively. ☐
Theorem 6.
Let $p \ge 2$, let $\nu = (\nu_1, \nu_2) \in \mathbb{Q}^2$ be a vector with positive components, and assume that $n = n\nu$ (it is understood that $n$ varies in the infinite subset of $\mathbb{N}$ such that $n = n\nu \in \mathbb{N}^2$). Then,
$\{A_{n,[p]}\}_n \sim_{\sigma,\lambda} \kappa(\theta) = \begin{bmatrix} \dfrac{\nu_2}{\nu_1}\, h_p(\theta_1)\, f_p(\theta_2) & -g_p(\theta_1)\, \overline{g_p(\theta_2)} \\ -\overline{g_p(\theta_1)}\, g_p(\theta_2) & \dfrac{\nu_1}{\nu_2}\, f_p(\theta_1)\, h_p(\theta_2) \end{bmatrix}.$
Proof. 
The thesis follows immediately from Theorem 1 and (62)–(65) as soon as we have proven that
$\{\widetilde{A}_{n,[p]}\}_n \sim_{\sigma,\lambda} \kappa(\theta).$ (75)
We show that
$\{\widetilde{A}_{n,[p],ij}\}_n \sim_{\mathrm{GLT}} \kappa_{ij}(\theta), \qquad i,j = 1,2.$ (76)
Once this is done, the thesis (75) follows immediately from Theorem 5 and GLT 1, as the matrix $\widetilde{A}_{n,[p]}$ is symmetric. Actually, we only prove (76) for $(i,j) = (1,2)$, because the proof for the other pairs of indices $(i,j)$ is conceptually the same. Setting $p = (p,p)$ and keeping in mind the assumption $n = n\nu$, by Lemma 5 and Equations (5), (51) and (52), we have
$\widetilde{A}_{n,[p],12} = -\widetilde{H}_{n_1,[p]} \otimes (\widetilde{H}_{n_2,[p]})^T = -(T_{n_1+p}(g_p) + R_{n_1,[p]}) \otimes (T_{n_2+p}(g_p) + R_{n_2,[p]})^T = -(T_{n_1+p}(g_p) + R_{n_1,[p]}) \otimes (T_{n_2+p}(\overline{g_p}) + (R_{n_2,[p]})^T) = -\bigl( T_{n+p}(g_p(\theta_1)\, \overline{g_p(\theta_2)}) + T_{n_1+p}(g_p) \otimes (R_{n_2,[p]})^T + R_{n_1,[p]} \otimes (\widetilde{H}_{n_2,[p]})^T \bigr) = T_{n+p}(\kappa_{12}) + V_{n,[p]},$
where $\operatorname{rank}(V_{n,[p]}) \le 4p(n_1+p) + 4p(n_2+p)$. Thus, $\{V_{n,[p]}\}_n \sim_{\sigma} 0$ by Proposition 1, and (76) (for $(i,j) = (1,2)$) follows from GLT 2 and GLT 3. ☐
Remark 5.
Using (58), it is not difficult to see that the functions $f_p(\theta)$ and $g_p(\theta)$ in (69) and (70) can be expressed in terms of $h_p(\theta)$ as follows:
$f_p(\theta) = (2 - 2\cos\theta)\, h_p(\theta), \qquad g_p(\theta) = (e^{-\mathrm{i}\theta} - 1)\, h_p(\theta).$
Therefore, the $2 \times 2$ matrix-valued function $\kappa(\theta)$ appearing in Theorem 6 can be simplified as follows:
$\kappa(\theta) = \frac{1}{\nu_1 \nu_2}\, h_p(\theta_1)\, h_p(\theta_2) \begin{bmatrix} \nu_2\, (e^{\mathrm{i}\theta_2} - 1) \\ -\nu_1\, (e^{\mathrm{i}\theta_1} - 1) \end{bmatrix} \begin{bmatrix} \nu_2\, (e^{-\mathrm{i}\theta_2} - 1) & -\nu_1\, (e^{-\mathrm{i}\theta_1} - 1) \end{bmatrix}.$
In particular, $\det(\kappa(\theta)) = 0$ for all $\theta$. According to the informal meaning behind the spectral distribution $\{A_{n,[p]}\}_n \sim_{\lambda} \kappa(\theta)$ reported in Remark 1, this means that, for large $n$, one half of the eigenvalues of $A_{n,[p]}$ are approximately zero, while the other half is approximately given by a uniform sampling over $[-\pi, \pi]^2$ of
$\operatorname{trace}(\kappa(\theta)) = \frac{1}{\nu_1 \nu_2}\, h_p(\theta_1)\, h_p(\theta_2) \bigl[ \nu_1^2\, (2 - 2\cos\theta_1) + \nu_2^2\, (2 - 2\cos\theta_2) \bigr].$
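Both identities, as well as the resulting rank-one structure of $\kappa(\theta)$, are easy to check numerically. The following Python sketch (ours, assuming SciPy; the degree $p = 3$, the vector $\nu = (1,2)$ and the evaluation points are arbitrary choices) builds $f_p$, $g_p$, $h_p$ from the cardinal B-spline samples in (69)–(71) and verifies the two relations above together with $\det(\kappa(\theta)) = 0$.

```python
# Numerical check of Remark 5: f_p = (2 - 2cos(theta)) h_p,
# g_p = (exp(-i*theta) - 1) h_p, and det(kappa(theta)) = 0.
# Illustrative sketch (assumes SciPy); the Fourier coefficients of f_p, g_p,
# h_p are the cardinal B-spline samples appearing in (69)-(71).
import numpy as np
from scipy.interpolate import BSpline

def beta(q, r=0):
    b = BSpline.basis_element(np.arange(q + 2), extrapolate=False)
    if r:
        b = b.derivative(r)
    return lambda x: np.nan_to_num(b(np.asarray(x, dtype=float)))

def symbol(coeffs, theta):
    """Evaluate sum_k c_k exp(i*k*theta); coeffs maps the integer array k to c_k."""
    k = np.arange(-40, 41)                      # large enough for moderate p
    c = coeffs(k)
    return np.array([np.sum(c * np.exp(1j * k * t)) for t in np.atleast_1d(theta)])

p = 3
f_p = lambda th: symbol(lambda k: -beta(2 * p + 1, 2)(p + 1 - k), th)   # (69)
g_p = lambda th: symbol(lambda k: -beta(2 * p, 1)(p - k), th)           # (70)
h_p = lambda th: symbol(lambda k: beta(2 * p - 1)(p - k), th)           # (71)

theta = np.linspace(-np.pi, np.pi, 101)
print(np.max(np.abs(f_p(theta) - (2 - 2 * np.cos(theta)) * h_p(theta))))     # close to machine precision
print(np.max(np.abs(g_p(theta) - (np.exp(-1j * theta) - 1) * h_p(theta))))   # close to machine precision

# det(kappa(theta)) = 0 at a couple of sample points, for nu = (1, 2)
nu1, nu2 = 1.0, 2.0
for t1, t2 in [(0.3, -1.1), (2.0, 0.7)]:
    kappa = np.array([
        [nu2 / nu1 * h_p(t1)[0] * f_p(t2)[0], -g_p(t1)[0] * np.conj(g_p(t2)[0])],
        [-np.conj(g_p(t1)[0]) * g_p(t2)[0],   nu1 / nu2 * f_p(t1)[0] * h_p(t2)[0]],
    ])
    print(abs(np.linalg.det(kappa)))   # ~0 up to rounding
```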

8. Conclusions

We have illustrated through specific examples the applicative interest of the theory of block GLT sequences and of its multivariate version, thus bringing to completion the purely theoretical work [34]. It should be said, however, that the theory of GLT sequences is still incomplete. In particular, besides filling in the details of the theory of multilevel block GLT sequences (the results of Section 5 have been obtained by combining the results in [14,34], but formal proofs are still missing and will be the subject of a future paper), it will be necessary to develop the theory of the so-called reduced GLT sequences, as explained in ([13] Chapter 11).

Author Contributions

C.G. authored Sections 1–4 and co-authored Section 5. S.S.-C. co-authored Sections 5 and 6; he also conceived several important ideas for the proofs of the results in Sections 3, 4 and 7. M.M. co-authored Section 6 and authored Section 7.

Funding

This research was funded by the Italian INdAM (Istituto Nazionale di Alta Matematica) through the grant PCOFUND-GA-2012-600198 and by the INdAM GNCS (Gruppo Nazionale per il Calcolo Scientifico) through a national grant.

Acknowledgments

C.G. is an INdAM Marie-Curie fellow under grant PCOFUND-GA-2012-600198. All authors are members of the INdAM GNCS, which partially supported this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tilli, P. Locally Toeplitz sequences: Spectral properties and applications. Linear Algebra Appl. 1998, 278, 91–120.
  2. Avram, F. On bilinear forms in Gaussian random variables and Toeplitz matrices. Probab. Theory Relat. Fields 1988, 79, 37–45.
  3. Böttcher, A.; Grudsky, S.M. Toeplitz Matrices, Asymptotic Linear Algebra, and Functional Analysis; Birkhäuser Verlag: Basel, Switzerland, 2000.
  4. Böttcher, A.; Grudsky, S.M. Spectral Properties of Banded Toeplitz Matrices; SIAM: Philadelphia, PA, USA, 2005.
  5. Böttcher, A.; Silbermann, B. Introduction to Large Truncated Toeplitz Matrices; Springer: New York, NY, USA, 1999.
  6. Böttcher, A.; Silbermann, B. Analysis of Toeplitz Operators, 2nd ed.; Springer: Berlin, Germany, 2006.
  7. Grenander, U.; Szegő, G. Toeplitz Forms and Their Applications, 2nd ed.; AMS Chelsea Publishing: New York, NY, USA, 1984.
  8. Parter, S.V. On the distribution of the singular values of Toeplitz matrices. Linear Algebra Appl. 1986, 80, 115–130.
  9. Tilli, P. A note on the spectral distribution of Toeplitz matrices. Linear Multilinear Algebra 1998, 45, 147–159.
  10. Tyrtyshnikov, E.E. A unifying approach to some old and new theorems on distribution and clustering. Linear Algebra Appl. 1996, 232, 1–43.
  11. Tyrtyshnikov, E.E.; Zamarashkin, N.L. Spectra of multilevel Toeplitz matrices: Advanced theory via simple matrix relationships. Linear Algebra Appl. 1998, 270, 15–27.
  12. Zamarashkin, N.L.; Tyrtyshnikov, E.E. Distribution of eigenvalues and singular values of Toeplitz matrices under weakened conditions on the generating function. Sb. Math. 1997, 188, 1191–1201.
  13. Garoni, C.; Serra-Capizzano, S. Generalized Locally Toeplitz Sequences: Theory and Applications, Volume I; Springer: Cham, Switzerland, 2017.
  14. Garoni, C.; Serra-Capizzano, S. Generalized Locally Toeplitz Sequences: Theory and Applications, Volume II; Springer; To appear.
  15. Serra-Capizzano, S. Generalized locally Toeplitz sequences: Spectral analysis and applications to discretized partial differential equations. Linear Algebra Appl. 2003, 366, 371–402.
  16. Serra-Capizzano, S. The GLT class as a generalized Fourier analysis and applications. Linear Algebra Appl. 2006, 419, 180–233.
  17. Barbarino, G. Equivalence between GLT sequences and measurable functions. Linear Algebra Appl. 2017, 529, 397–412.
  18. Böttcher, A.; Garoni, C.; Serra-Capizzano, S. Exploration of Toeplitz-like matrices with unbounded symbols is not a purely academic journey. Sb. Math. 2017, 208, 1602–1627.
  19. Beckermann, B.; Serra-Capizzano, S. On the asymptotic spectrum of finite element matrix sequences. SIAM J. Numer. Anal. 2007, 45, 746–769.
  20. Bertaccini, D.; Donatelli, M.; Durastante, F.; Serra-Capizzano, S. Optimizing a multigrid Runge–Kutta smoother for variable-coefficient convection-diffusion equations. Linear Algebra Appl. 2017, 533, 507–535.
  21. Donatelli, M.; Garoni, C.; Manni, C.; Serra-Capizzano, S.; Speleers, H. Spectral analysis and spectral symbol of matrices in isogeometric collocation methods. Math. Comput. 2016, 85, 1639–1680.
  22. Garoni, C. Spectral distribution of PDE discretization matrices from isogeometric analysis: The case of L1 coefficients and non-regular geometry. J. Spectr. Theory 2018, 8, 297–313.
  23. Garoni, C.; Manni, C.; Pelosi, F.; Serra-Capizzano, S.; Speleers, H. On the spectrum of stiffness matrices arising from isogeometric analysis. Numer. Math. 2014, 127, 751–799.
  24. Garoni, C.; Manni, C.; Serra-Capizzano, S.; Sesana, D.; Speleers, H. Spectral analysis and spectral symbol of matrices in isogeometric Galerkin methods. Math. Comput. 2017, 86, 1343–1373.
  25. Garoni, C.; Manni, C.; Serra-Capizzano, S.; Sesana, D.; Speleers, H. Lusin theorem, GLT sequences and matrix computations: An application to the spectral analysis of PDE discretization matrices. J. Math. Anal. Appl. 2017, 446, 365–382.
  26. Roman, F.; Manni, C.; Speleers, H. Spectral analysis of matrices in Galerkin methods based on generalized B-splines with high smoothness. Numer. Math. 2017, 135, 169–216.
  27. Donatelli, M.; Mazza, M.; Serra-Capizzano, S. Spectral analysis and structure preserving preconditioners for fractional diffusion equations. J. Comput. Phys. 2016, 307, 262–279.
  28. Salinelli, E.; Serra-Capizzano, S.; Sesana, D. Eigenvalue–eigenvector structure of Schoenmakers–Coffey matrices via Toeplitz technology and applications. Linear Algebra Appl. 2016, 491, 138–160.
  29. Al-Fhaid, A.S.; Serra-Capizzano, S.; Sesana, D.; Ullah, M.Z. Singular-value (and eigenvalue) distribution and Krylov preconditioning of sequences of sampling matrices approximating integral operators. Numer. Linear Algebra Appl. 2014, 21, 722–743.
  30. Beckermann, B.; Kuijlaars, A.B.J. Superlinear convergence of conjugate gradients. SIAM J. Numer. Anal. 2001, 39, 300–329.
  31. Donatelli, M.; Garoni, C.; Manni, C.; Serra-Capizzano, S.; Speleers, H. Robust and optimal multi-iterative techniques for IgA Galerkin linear systems. Comput. Methods Appl. Mech. Eng. 2015, 284, 230–264.
  32. Donatelli, M.; Garoni, C.; Manni, C.; Serra-Capizzano, S.; Speleers, H. Robust and optimal multi-iterative techniques for IgA collocation linear systems. Comput. Methods Appl. Mech. Eng. 2015, 284, 1120–1146.
  33. Donatelli, M.; Garoni, C.; Manni, C.; Serra-Capizzano, S.; Speleers, H. Symbol-based multigrid methods for Galerkin B-spline isogeometric analysis. SIAM J. Numer. Anal. 2017, 55, 31–62.
  34. Garoni, C.; Serra-Capizzano, S.; Sesana, D. Block generalized locally Toeplitz sequences: Topological construction, spectral distribution results, and star-algebra structure. In Structured Matrices in Numerical Linear Algebra: Analysis, Algorithms, and Applications; Springer INdAM Series; Springer; To appear.
  35. Garoni, C.; Serra-Capizzano, S.; Sesana, D. Spectral analysis and spectral symbol of d-variate Q_p Lagrangian FEM stiffness matrices. SIAM J. Matrix Anal. Appl. 2015, 36, 1100–1128.
  36. Benedusi, P.; Garoni, C.; Krause, R.; Li, X.; Serra-Capizzano, S. Space–time FE–DG discretization of the anisotropic diffusion equation in any dimension: The spectral symbol. SIAM J. Matrix Anal. Appl. To appear.
  37. Dumbser, M.; Fambri, F.; Furci, I.; Mazza, M.; Serra-Capizzano, S.; Tavelli, M. Staggered discontinuous Galerkin methods for the incompressible Navier–Stokes equations: Spectral analysis and computational results. Numer. Linear Algebra Appl. 2018.
  38. Mazza, M.; Ratnani, A.; Serra-Capizzano, S. Spectral analysis and spectral symbol for the 2D curl–curl (stabilized) operator with applications to the related iterative solutions. Math. Comput. 2018.
  39. Barbarino, G.; Serra-Capizzano, S. Non-Hermitian Perturbations of Hermitian Matrix-Sequences and Applications to the Spectral Analysis of Approximated PDEs; Technical Report 2018-004; Department of Information Technology, Uppsala University: Uppsala, Sweden, 2018.
  40. Bhatia, R. Matrix Analysis; Springer: New York, NY, USA, 1997.
  41. Schumaker, L.L. Spline Functions: Basic Theory, 3rd ed.; Cambridge University Press: Cambridge, UK, 2007.
  42. De Boor, C. A Practical Guide to Splines, Revised ed.; Springer: New York, NY, USA, 2001.
  43. Buffa, A.; Sangalli, G.; Vázquez, R. Isogeometric analysis in electromagnetics: B-splines approximation. Comput. Methods Appl. Mech. Eng. 2010, 199, 1143–1152.
Figure 1. Comparison between the spectrum of $C_n$ and the rearranged version $\phi$ of the symbol $\kappa(x,\theta)$ for $a_{11}(x) = 2 + \cos(\pi x)$, $a_{12}(x) = a_{21}(x) = e^{x} \sin(\pi x)$, $a_{22}(x) = 2x + \sin(\pi x)$, and $n = 40$.
Figure 2. B-splines $B_{1,[p,k]}, \dots, B_{n(p-k)+k+1,[p,k]}$ for $p = 3$ and $k = 1$, with $n = 10$.
Figure 3. Reference B-splines $\beta_{1,[p,k]}, \beta_{2,[p,k]}$ for $p = 3$ and $k = 1$.
