2.1. Discrete Sampling
To motivate the need for a band-limited inverse, we consider how finite measuring systems are inherently band-limited. We start by defining an ideal finite measurement as a sample of a continuous linear system,

$\hat{a}_i = \int m_i(r)\, z(r)\, dr$,   (1)

where $\hat{a}_i$ is the $i$th discrete measurement, $m_i(r)$ is the measurement's response function (MRF), $z(r)$ is the truth signal, and $r$ is a positional index. Given only a finite number of measurements, inverting the measurement system in Equation (1) is an ill-posed problem, since the existence, uniqueness, and stability of a solution depend on the configuration, spacing, and number of the $m_i(r)$, as well as assumptions about $z(r)$.
To illustrate the difficulty in estimating $z(r)$ from a set of measurements $\hat{a}_i$, imagine that each $m_i(r)$ is a delta function centered at position $r_i$ (i.e., $m_i(r) = \delta(r - r_i)$), so that $\hat{a}_i = z(r_i)$. Although each $\hat{a}_i$ may accurately represent the value of the truth signal at the point $r_i$, it is impossible to uniquely infer the entire continuous signal $z(r)$ without further information, since infinitely many signals pass through the same discrete values. However, if $z(r)$ is assumed to be band-limited, sufficient sampling configurations can uniquely determine $z(r)$. Under band-limited sampling theory, only a periodic band-limited signal $z(r)$ can be perfectly reconstructed from a finite set of samples [3].
We define a band-limited signal to be a signal which only possesses frequency content below a maximum cutoff frequency, or band limit $B$ (i.e., the Fourier transform of the signal has a finite region of support). Let $Z(f)$ be the Fourier transform of $z(r)$; then $z(r)$ is band-limited if $Z(f) = 0$ for $|f| > B$. According to Nyquist, a uniformly sampled band-limited signal can be perfectly reconstructed from a set of ideal discrete samples $a_n = z(x_n)$ as

$z(r) = \sum_{n=1}^{N} a_n\, d(r - x_n)$,   (2)

where $d(r)$ is a Dirichlet kernel (i.e., a band-limited identity function) corresponding to the band limit of $z(r)$, and $n$ is the sample index corresponding to position $x_n$, provided the sampling rate is at least twice the highest frequency. Since $d(r)$ is a periodic band-limited identity, $z(r)$ is periodic. Note that Equation (2) is equivalent to band-limited interpolation. Even though multiple sampling configurations (i.e., the particular spacing and number of sample positions $x_n$) can satisfy the reconstruction in Equation (2), all such configurations are equivalent under the band-limited assumption [3]. For convenience, we can assume that the $x_n$ are spaced on a uniform grid, where the grid spacing is $\Delta = 1/(2B)$ with $B$ as the highest frequency within the band limit of $z(r)$ (i.e., $\Delta$ is the Nyquist sampling distance).
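To make Equation (2) concrete, the following is a minimal numerical sketch (not taken from the paper) of periodic band-limited interpolation with a Dirichlet kernel, assuming an odd number of uniformly spaced Nyquist samples; the signal, grid, and helper names are illustrative.

```python
import numpy as np

def dirichlet(r, N, L):
    """Periodic sinc (Dirichlet) kernel for N uniform samples over period L (N odd)."""
    x = np.pi * r / L
    out = np.ones_like(r, dtype=float)
    nz = np.abs(np.sin(x)) > 1e-12                 # avoid 0/0 exactly on the sample grid
    out[nz] = np.sin(N * x[nz]) / (N * np.sin(x[nz]))
    return out

L, N = 1.0, 31                                     # period and (odd) number of samples
x_n = np.arange(N) * L / N                         # uniform Nyquist grid positions x_n
z = lambda r: 1.0 + np.sin(2 * np.pi * 3 * r / L) + 0.5 * np.cos(2 * np.pi * 7 * r / L)

a_n = z(x_n)                                       # ideal discrete samples a_n = z(x_n)
r = np.linspace(0, L, 1000, endpoint=False)        # dense evaluation grid

# Equation (2): z(r) = sum_n a_n d(r - x_n)
z_rec = sum(a * dirichlet(r - xc, N, L) for a, xc in zip(a_n, x_n))
print(np.max(np.abs(z_rec - z(r))))                # near machine precision: exact reconstruction
```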
The relationship between $\hat{a}_i$ and $a_n$ can be found by combining Equations (1) and (2),

$\hat{a}_i = \sum_{n=1}^{N} a_n \int m_i(r)\, d(r - x_n)\, dr = \sum_{n=1}^{N} M_{i,n}\, a_n$,   (3)

where $M_{i,n} = \int m_i(r)\, d(r - x_n)\, dr$ is a discrete band-limited sample of $m_i(r)$, and $d(r - x_n)$ is the Dirichlet kernel function centered at $x_n$. Note that $M_{i,n} \neq m_i(x_n)$ in general, unless $m_i(r)$ is already band-limited within $d(r)$'s corresponding band limit.
Equation (3) can be expressed in vector and matrix form as

$\hat{\mathbf{a}} = \mathbf{M}\,\mathbf{a}$,   (4)

$M_{i,n} = \int m_i(r)\, d(r - x_n)\, dr$,   (5)

$\mathbf{a} = \mathbf{M}^{-1}\hat{\mathbf{a}}$,   (6)

where $\hat{\mathbf{a}}$ is the length-$M$ vector of measurements $\hat{a}_i$, $\mathbf{a}$ is the length-$N$ vector of band-limited samples $a_n$, and $\mathbf{M}$ is the $M \times N$ matrix whose rows are the discrete band-limited samples $M_{i,n}$ of each MRF, where $M$ is the total number of measurements and $N$ is the total number of band-limited samples. From Equation (6), it can be seen that the band-limited reconstruction process is equivalent to finding the inverse of $\mathbf{M}$; however, the invertibility of $\mathbf{M}$ depends on the chosen band-limited discretization and the spacing of each MRF.
In the case of uniformly spaced samples and constant MRF apertures, i.e., $M = N$ with each $m_i(r)$ a shifted copy of a common aperture, $\mathbf{M}$ is square and invertible (assuming each $m_i(r)$ is defined over the assumed band limit). When measurements are dependent, i.e., they do not increase the rank of $\mathbf{M}$, they do not improve the resolution of the estimate, but they do provide redundancy for noise reduction. Irregular sampling and variable apertures tend to produce more dependent measurements. A unique band-limited solution only exists if a band-limited inverse of $\mathbf{M}$ exists.
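As a rough, self-contained illustration of how $\mathbf{M}$ can be assembled and checked, the sketch below evaluates Equation (5) by quadrature using hypothetical Gaussian MRFs at irregular positions; the aperture width, positions, and all names are illustrative assumptions, and `dirichlet` is the same helper as in the previous sketch.

```python
import numpy as np

def dirichlet(r, N, L):                            # same periodic sinc helper as the previous sketch
    x = np.pi * r / L
    out = np.ones_like(r, dtype=float)
    nz = np.abs(np.sin(x)) > 1e-12
    out[nz] = np.sin(N * x[nz]) / (N * np.sin(x[nz]))
    return out

L, N = 1.0, 31                                     # period and number of band-limited grid samples
x_n = np.arange(N) * L / N                         # band-limited grid positions x_n
r = np.linspace(0, L, 4000, endpoint=False)        # dense quadrature grid for the integral in Eq. (5)
dr = r[1] - r[0]

# Hypothetical MRFs: Gaussian apertures centered at irregular measurement positions r_i.
rng = np.random.default_rng(0)
r_i = np.sort(rng.uniform(0, L, size=40))          # M = 40 measurement positions
sigma = 0.02
m_rows = np.exp(-0.5 * ((r[None, :] - r_i[:, None]) / sigma) ** 2)   # m_i(r) sampled densely

# M_{i,n} = integral of m_i(r) d(r - x_n) dr, approximated by quadrature.
d_cols = np.stack([dirichlet(r - xc, N, L) for xc in x_n], axis=1)
M = m_rows @ d_cols * dr                           # (40 measurements) x (31 band-limited samples)
print(M.shape, np.linalg.matrix_rank(M))           # typically full column rank: a band-limited inverse exists
```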
In order to define the band-limited inverse of $\mathbf{M}$, it is helpful to study the discretization of ideal and band-limited measurements. Under band-limited assumptions, the discrete equivalent of an ideal measurement is found by substituting $m_i(r) = \delta(r - r_i)$ into Equation (5):

$M_{i,n} = d(r_i - x_n)$,   (7)

where $d(r_i - x_n)$ is a Dirichlet kernel centered at $x_n$ that is discretely sampled over the grid defined by the measurement positions $r_i$.
To express Equation (2) in matrix form, we define a band-limited identity matrix $\mathbf{D}$ constructed with rows $d(x_m - x_n)$ sampled over the grid defined by the $x_n$. The multiplication of $\mathbf{a}$ by $\mathbf{D}$ is an identity operation, as $\mathbf{a}$ is unchanged by the band limit of $\mathbf{D}$. This band-limited identity operation, $\mathbf{D}\mathbf{a} = \mathbf{a}$, is the discrete band-limited reconstruction equivalent of Equation (2). Using this result, we can express a band-limited inverse of $\mathbf{M}$:

$\mathbf{W}\mathbf{M} = \mathbf{D}$,   (8)
where $\mathbf{W}$ is a band-limited inverse of $\mathbf{M}$. The existence of $\mathbf{W}$ depends on $\mathbf{M}$ being full rank over a band-limited subspace. When this criterion is met, $\mathbf{W}$ is the SVD pseudo-inverse of $\mathbf{M}$. When $\mathbf{M}$ is full row rank, $\mathbf{W}$ is the Moore–Penrose pseudo-inverse, and $\mathbf{D}$ becomes the identity matrix. The existence of $\mathbf{W}$ depends on the sampling positions $r_i$, the choice of discrete positions $x_n$, and the band limit. For a more rigorous study of how sampling conditions affect the existence of this band-limited inverse, see [1,4,9,10].
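The following small, self-contained sketch (with illustrative sizes and names, not the paper's data) shows this behavior: the SVD pseudo-inverse of a matrix that is full rank only over a subspace yields a band-limited identity $\mathbf{W}\mathbf{M} = \mathbf{D}$ rather than $\mathbf{W}\mathbf{M} = \mathbf{I}$.

```python
import numpy as np

# A toy sampling matrix whose rows only "see" a low-dimensional (band-limited) subspace.
rng = np.random.default_rng(1)
n_meas, n_grid, k = 40, 64, 24                   # measurements, grid samples, effective band limit
basis, _ = np.linalg.qr(rng.standard_normal((n_grid, k)))   # orthonormal subspace basis (illustrative)
M = rng.standard_normal((n_meas, k)) @ basis.T              # full rank over that subspace only

W = np.linalg.pinv(M, rcond=1e-10)               # SVD pseudo-inverse (truncates null directions)
D = basis @ basis.T                              # projector onto the subspace: the band-limited identity
print(np.allclose(W @ M, D, atol=1e-8))          # True: W M = D, not the full identity
```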
In summary, an ideal band-limited reconstruction can be accomplished from a finite number of measurements if a band-limited inverse of M exists, where M is the discrete band-limited sampling matrix of all measurement response functions.
2.3. Frequency QR
Here, we define a method for computing a band-limited inverse using single-rank updates. When computed in full, this method is equivalent to performing QR decomposition on a frequency basis (FQR). When computed sequentially, it can progressively construct a band-limited inverse from low to high frequencies, stopping when the magnitude of each frequency component falls below the assumed SNR noise floor in the frequency domain. This stops the reconstruction at the point where measurement noise theoretically overtakes the magnitude of reconstructable content. In the noise-free case, these criteria stop the reconstruction at the inherent sampling band limit without going over. An additional example comparing the result of FQR to that of a QR decomposition is provided in the following section.
2.3.1. Partitioned Band-Limited Matrix
Let $\mathbf{f}_n$ represent an orthonormal basis vector of the frequency domain such that $\mathbf{F}^{H}\mathbf{F} = \mathbf{I}$, where $\mathbf{F}$ is a matrix whose columns are the basis vectors $\mathbf{f}_n$ and $\mathbf{I}$ is the identity matrix. Typically, each $\mathbf{f}_n$ corresponds to a DFT kernel within the calculation of the DFT. However, other frequency bases may be used when advantageous. More examples of frequency bases and their advantages are discussed in a later section.
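As one concrete (and assumed) choice, the sketch below builds a unitary DFT basis whose columns are reordered from low to high frequency and verifies the orthonormality condition; the ordering and variable names are illustrative.

```python
import numpy as np

N = 32
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)    # unitary DFT kernels as basis vectors f_n

# Reorder columns from low to high absolute frequency: 0, 1, N-1, 2, N-2, ...
order = np.argsort(np.minimum(n, N - n), kind="stable")
F = F[:, order]

print(np.allclose(F.conj().T @ F, np.eye(N)))                # F^H F = I (orthonormal frequency basis)
```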
A band-limiting frequency filter can be applied in the frequency domain by applying the filter weights $h_n$ to each corresponding basis vector $\mathbf{f}_n$. Taking advantage of the orthogonal basis, the resulting filter matrix can be partitioned into a set of singular matrices:

$\mathbf{F}\operatorname{diag}(\mathbf{h})\mathbf{F}^{H} = \sum_{n}\mathbf{H}_n$,

where $\operatorname{diag}(\mathbf{x})$ denotes the formation of a diagonal matrix with $\mathbf{x}$ along its diagonal, $\mathbf{h}$ is a frequency filter, and each $\mathbf{H}_n$ is a singular matrix defined by the outer product

$\mathbf{H}_n = \mathbf{g}_n\mathbf{f}_n^{H}$,

where $h_n$ are the values of $\mathbf{h}$ corresponding to each basis vector $\mathbf{f}_n$, and $\mathbf{g}_n$ is a column vector of $\mathbf{F}$ weighted by $h_n$:

$\mathbf{g}_n = h_n\,\mathbf{f}_n$.
Note that the matrix product $\mathbf{H}_j\mathbf{H}_k = \mathbf{0}$ for $j \neq k$, because the row spaces of the $\mathbf{H}_n$ are mutually orthogonal. However, the column spaces of the products $\mathbf{M}\mathbf{H}_n$ are not orthogonal, meaning that $(\mathbf{M}\mathbf{H}_j)^{H}(\mathbf{M}\mathbf{H}_k) \neq \mathbf{0}$ in general.
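A brief numerical check of this partition, reusing an (assumed) DFT basis and an illustrative low-pass filter:

```python
import numpy as np

N = 32
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)    # basis vectors f_n as columns
h = (np.minimum(n, N - n) <= 5).astype(float)                # an example low-pass frequency filter

H_full = F @ np.diag(h) @ F.conj().T                         # F diag(h) F^H
H_parts = [np.outer(h[k] * F[:, k], F[:, k].conj()) for k in range(N)]   # H_n = g_n f_n^H, g_n = h_n f_n
print(np.allclose(sum(H_parts), H_full))                     # the singular matrices sum to the filter

print(np.allclose(H_parts[1] @ H_parts[2], 0))               # H_j H_k = 0 for j != k (orthogonal row spaces)
```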
Using the partitioned representation of the filter, we can express the progression of band-limited sampling matrices $\mathbf{M}_n$ as

$\mathbf{M}_n = \sum_{k=1}^{n}\mathbf{M}\mathbf{H}_k$,   (26)

where each $\mathbf{M}_n$ progressively increases in bandwidth as $n$ increases. The direct inversion of each $\mathbf{M}_n$ leads us to the existence of a progressive band-limited inverse:

$\mathbf{W}_n = \mathbf{M}_n^{+}$.   (27)

Regrettably, while the $\mathbf{M}\mathbf{H}_k$ may have orthogonal row spaces, the progressive band-limited inverse cannot be formed by the sum of the individual pseudo-inverses, i.e.,

$\mathbf{W}_n \neq \sum_{k=1}^{n}(\mathbf{M}\mathbf{H}_k)^{+}$,   (28)

since the $\mathbf{M}\mathbf{H}_k$ do not have orthogonal column spaces. However, given that sequential $\mathbf{M}_n$ only differ by singular matrix updates, it stands to reason that $\mathbf{W}_n$ can also be partitioned into singular band-limited matrices.
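A quick illustrative check of the inequality in Equation (28), using a generic random sampling matrix and a DFT basis (all values assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
M = rng.standard_normal((20, N))                             # a generic, illustrative sampling matrix

updates = [np.outer(M @ F[:, k], F[:, k].conj()) for k in range(4)]   # M H_k with h_k = 1
M_n = sum(updates)                                           # progressive band-limited sampling matrix
W_n = np.linalg.pinv(M_n)                                    # direct progressive band-limited inverse
W_sum = sum(np.linalg.pinv(U) for U in updates)              # sum of individual pseudo-inverses
print(np.allclose(W_n, W_sum))                               # False: the column spaces are not orthogonal
```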
2.3.2. Derivation
Our derivation of the progressive band-limited inverse $\mathbf{W}_n$ begins with a theorem that relates the generalized inverse of the sum of two matrices to the generalized inverse of each individual matrix [12] (Equation (29)), where $\mathbf{X}$ and $\mathbf{Y}$ are two arbitrary matrices, $\mathbf{P}_{\mathbf{X}}$ denotes the projection onto the column space of $\mathbf{X}$, $\mathbf{S}$ represents the disjoint portion of the row space of $\mathbf{Y}$ from the row space of $\mathbf{X}$, and $\mathbf{T}$ represents the disjoint portion of the column space of $\mathbf{Y}$ from the column space of $\mathbf{X}$.
Recursively applying the theorem in Equation (29), we can find each progressive band-limited inverse. Applying the theorem to $\mathbf{M}_1$ and the zero matrix, we trivially find the initial band-limited inverse, $\mathbf{W}_1 = \mathbf{M}_1^{+}$ (Equation (30)). The next band-limited inverse can be found by performing a rank update to $\mathbf{W}_1$ using the singular update $\mathbf{M}\mathbf{H}_2$ (Equation (31)), where $\mathbf{T}$ is the disjoint portion of the column space of $\mathbf{M}\mathbf{H}_2$ from that of $\mathbf{M}_1$, and $\mathbf{S}$ is the row space projection of $\mathbf{M}\mathbf{H}_2$. As the row space of each $\mathbf{M}\mathbf{H}_n$ is already disjoint from those of the previous updates, the rank update simplifies, and we find $\mathbf{W}_2$ (Equation (32)), where the update term can be found using $\mathbf{W}_1$, $\mathbf{M}\mathbf{H}_2$, and $\mathbf{T}$. Applying the theorem to the $n$th rank update, we can form a recursive relationship,

$\mathbf{W}_n = \mathbf{W}_{n-1} + \widetilde{\mathbf{W}}_n$,   (33)

which iteratively constructs a band-limited inverse, where each singular update $\widetilde{\mathbf{W}}_n$ can be found by applying a rank update to $\mathbf{W}_{n-1}$ with the disjoint column space projection of each $\mathbf{M}\mathbf{H}_n$.
To find $\widetilde{\mathbf{W}}_n$ recursively, we use the fact that each $\mathbf{H}_n$'s column space is disjoint and that the orthogonalized vectors $\mathbf{t}_k$ span the column subspace of $\mathbf{M}_n$ to re-express $\mathbf{M}_n$ as the sum of the disjoint column spaces of the updates $\mathbf{M}\mathbf{H}_k$ (Equation (34)). By substitution, we find a recursive relationship to calculate each singular matrix update (Equation (35)), where $\mathbf{t}_n^{+}$ is the vector inverse of $\mathbf{t}_n$ and $\mathbf{t}_n$ is a column space basis vector of $\mathbf{M}\mathbf{H}_n$ that is orthogonal to the previous $\mathbf{t}_k$. Note that this recursive relationship is equivalent to the Gram–Schmidt process used in QR factorization [13,14,15].
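To connect this recursion to standard QR, the following illustrative check (assumed names and sizes) orthogonalizes the frequency-projected columns $\mathbf{M}\mathbf{f}_n$ with classical Gram–Schmidt and compares the result with the Q factor of a QR decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 16, 6
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
M = rng.standard_normal((20, N))

C = M @ F[:, :K]                                   # frequency-projected columns c_n = M f_n
T = []
for k in range(K):                                 # Gram-Schmidt recursion for the t_n
    t = C[:, k] - sum((tj.conj() @ C[:, k]) * tj for tj in T)
    T.append(t / np.linalg.norm(t))
T = np.stack(T, axis=1)

Q, R = np.linalg.qr(C)                             # the t_n match QR's Q columns up to unit phases
print(np.allclose(np.abs(T.conj().T @ Q), np.eye(K), atol=1e-8))
```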
Substituting $\mathbf{t}_n$ into the recursive relationship for $\mathbf{W}_n$ in Equation (33), we obtain Equation (37), which reveals that the row space of each $\widetilde{\mathbf{W}}_n$ is spanned by the orthogonal $\mathbf{t}_n$. As each band-limited update is singular, we can solve for the column space vector $\mathbf{u}_n$, which updates each $\widetilde{\mathbf{W}}_n$. By multiplying both sides of Equation (37) by the corresponding singular update and noting that the $\mathbf{t}_n$ are orthonormal, we find an expression for $\mathbf{u}_n$ (Equation (38)). Note that Equation (38) is equivalent to the process of forward substitution used to solve a matrix inverse using a QR decomposition. Each $\widetilde{\mathbf{W}}_n = \mathbf{u}_n\mathbf{t}_n^{H}$ forms a band-limited singular matrix partition of $\mathbf{W}$, which, when summed together, forms the progressive band-limited inverses:

$\mathbf{W}_n = \sum_{k=1}^{n}\widetilde{\mathbf{W}}_k$.
It is important to note that although the row space of each $\widetilde{\mathbf{W}}_n$ is orthogonal, the column space (spanned by the $\mathbf{u}_n$) is not orthogonal, which is similar to the inequality in Equation (28). Specifically, each $\mathbf{u}_n$ may contain content from previous updates, which is removed through forward substitution at each band-limited update. This means that each band-limited inverse $\mathbf{W}_n$ is not necessarily equivalent to the pseudo-inverse of $\mathbf{M}$ band-limited by the corresponding $\mathbf{H}_n$, since $\mathbf{M}^{\dagger}$ may contain content associated with higher-frequency features than those contained in $\mathbf{W}_n$. Therefore, the two agree only when the applied band limit is higher than the maximum band limit of $\mathbf{M}$.
As pointed out in the derivation, the vectors $\mathbf{t}_n$ and $\mathbf{u}_n$ can be solved for using Gram–Schmidt orthogonalization and forward substitution methods, respectively. These are the basic steps of QR decomposition and QR-based inverse solutions.
Appendix A shows how FQR is equivalent to applying a frequency rotation to a matrix and then solving for its partial inverse using QR iterations. A summary of the algorithm steps matching the notation given in this paper is shown in Algorithm 1. Note that many algorithmic improvements can be made to make the orthogonalization and forward substitution methods more numerically stable and efficient [16,17].
Algorithm 1: Frequency-constrained QR.
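As a concrete illustration of the steps summarized in Algorithm 1, the following is a minimal Python sketch of the FQR loop; the function name, the tolerance-based handling of dependent updates, and the basis ordering are illustrative assumptions rather than the paper's exact pseudocode.

```python
import numpy as np

def fqr_inverse(M, F, rtol=1e-10):
    """
    Sketch of a frequency-ordered QR (FQR) progressive band-limited inverse.
    M : (m, N) sampling matrix; F : (N, K) orthonormal frequency basis with
    columns ordered from low to high frequency.  Returns W such that W @ M
    acts as the identity on the span of the retained frequency vectors.
    """
    m, N = M.shape
    W = np.zeros((N, m), dtype=complex)
    T, U = [], []                            # orthonormal row-space vectors t_k and column vectors u_k
    for k in range(F.shape[1]):
        f_k = F[:, k]
        c_k = M @ f_k                        # column-space update of M for this frequency
        # Gram-Schmidt: remove the parts already spanned by previous t_j.
        r = np.array([t.conj() @ c_k for t in T])
        t_k = c_k - sum(rj * tj for rj, tj in zip(r, T))
        norm = np.linalg.norm(t_k)
        if norm <= rtol * np.linalg.norm(c_k):
            continue                         # dependent update: adds no new band-limited content
        t_k = t_k / norm
        r_kk = t_k.conj() @ c_k
        # Forward-substitution-style update: u_k = (f_k - sum_j u_j r_jk) / r_kk.
        u_k = (f_k - sum(rj * uj for rj, uj in zip(r, U))) / r_kk
        W += np.outer(u_k, t_k.conj())       # rank-one band-limited partition of W
        T.append(t_k); U.append(u_k)
    return W
```

For example, with the unitary DFT basis from the earlier sketches ordered from low to high frequency, `fqr_inverse(M, F[:, :K])` returns a band-limited inverse whose product with `M` acts as the identity on the first `K` frequency vectors.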
2.3.3. Observations
When computed over all $N$ nonzero singular updates, this method is identical to taking a full inverse. In theory, this progressive band-limited inverse can be used as an inversion method for any arbitrary matrix $\mathbf{X}$. However, the computational complexity of this method is no better than that of QR decomposition and forward substitution methods.
The main difference between this inversion method and other inversion methods is the order in which mutually independent information is inverted. For comparison, we describe how the SVD and QR decompositions break up a matrix inverse. The computation of the inverse using the SVD breaks a matrix down starting with its largest singular vectors, cutting off the inversion when the singular values become too small. The computation of the inverse using QR decomposition breaks a matrix down starting with its first column, cutting off the inversion when a new column's contribution becomes too small. The computation of the inverse using FQR breaks a matrix down starting with its lowest-frequency projection, cutting off the inversion when the higher frequencies' contributions become too small. All three methods (SVD, QR, and FQR) are essentially rotations of each other. In the noise-free case, all three of these methods are equivalent when run to completion, but in the presence of computation noise or model errors, the methods possess different sensitivities.
Because a sampling matrix is inherently band-limited, the FQR inverse is advantageous for limiting the inversion of noise content outside the sampling band limit.
2.3.4. SNR Filtering
When the inherent band limit of $\mathbf{M}$ is unknown, but the expected noise floor or signal-to-noise ratio (SNR) is known, FQR can be modified to stop the inversion when the magnitude of each new singular update drops below the expected noise level. This is very similar to the process of Wiener filtering [18], except that no regularization is added to the inversion process. As the expected SNR is decreased, the recoverable bandwidth shrinks, and the FQR iterations reach the raised expected noise level earlier. Further SNR filtering can be implemented by adding a weighting or taper to the magnitude of each filter weight $h_n$. Such a taper can be used to mitigate the Gibbs phenomenon present at the edge of band-limited signals.
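The following is a minimal sketch of these two modifications under assumed names: an SNR-based stopping rule on the update magnitudes and a raised-cosine taper on the band-edge filter weights. Both are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def snr_stop(update_magnitudes, noise_floor):
    """Return the number of FQR updates to keep: stop once a new update's
    magnitude drops below the expected noise floor (illustrative criterion)."""
    for k, mag in enumerate(update_magnitudes):
        if mag < noise_floor:
            return k
    return len(update_magnitudes)

def cosine_taper(h, n_edge):
    """Apply a raised-cosine taper to the last n_edge nonzero filter weights
    to soften the band edge and reduce Gibbs ringing (one possible taper)."""
    h = np.asarray(h, dtype=float).copy()
    idx = np.flatnonzero(h)[-n_edge:]                      # weights nearest the band edge
    h[idx] *= 0.5 * (1 + np.cos(np.linspace(0, np.pi, n_edge, endpoint=False)))
    return h

# Example: keep updates until their magnitude reaches an assumed noise floor of 0.05.
mags = np.array([3.1, 1.8, 0.9, 0.4, 0.08, 0.03, 0.01])
print(snr_stop(mags, noise_floor=0.05))                    # -> 5 updates retained
print(cosine_taper(np.ones(10), n_edge=4))                 # tapered band-edge weights
```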